AI Fights Back Part 3: Welcome to the Matrix. Safety of the Enterprise vs. Freedom of the Individual
In part 1 and part 2 of the AI Fights Back series, we examined the vulnerabilities that allow nefarious hackers to penetrate healthcare institutions, the protocols that define the battleground, and a new weapon, AI, to be wielded in the fight against hackers. The final installment in this series covers the cybersecurity adoption journey from the vantage point of IT professionals who are expected to deploy AI while also preserving the integrity of the Enterprise.
AI integration should not be undertaken without proper due diligence, yet there is little guidance or prior experience currently available to shepherd IT professionals through the process. Those tasked with this delicate mission must exercise common sense, prudence, and an acute awareness of the new set of challenges posed by AI in healthcare.

The Matrix Code, from The Matrix, written and directed by the Wachowskis. Image credit: Warner Bros.
End-User Adoption
Continuous authentication should not be considered a replacement for more traditional methods. Instead, the two approaches should complement each other, and they are stronger together. In a behavior-modeling approach, a rich data lake can be leveraged to mathematically derive a dynamic threat score, recalculated for every action by comparing the current usage pattern to the known norm. It is far easier to catch a threat actor by detecting a score deviation in real time than it is to catch them after the fact through data analysis. AI microservices can act faster than any data analyst could, and can shut down an illegitimate attempt with far less latency than traditional threat response methodologies.
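To make the idea concrete, here is a minimal sketch of how a dynamic threat score might be derived and acted on for each action. It assumes a simple per-user baseline of feature means and standard deviations distilled from the data lake; the feature names, thresholds, and the z-score math are illustrative stand-ins for whatever statistical or ML model an organization actually trains.

```python
from dataclasses import dataclass


@dataclass
class BaselineProfile:
    """Per-user behavioral baseline (the 'known norm') learned from historical data."""
    mean: dict[str, float]  # e.g. {"records_accessed_per_min": 3.2, "session_length_min": 24.0}
    std: dict[str, float]


def threat_score(event: dict[str, float], baseline: BaselineProfile) -> float:
    """Score how far an observed behavior deviates from the user's norm.

    Averages absolute z-scores across features; a production system would use a
    richer statistical or ML model trained on the organization's data lake.
    """
    deviations = []
    for feature, value in event.items():
        mu = baseline.mean.get(feature, 0.0)
        sigma = max(baseline.std.get(feature, 1.0), 1e-6)  # guard against division by zero
        deviations.append(abs(value - mu) / sigma)
    return sum(deviations) / max(len(deviations), 1)


BLOCK_THRESHOLD = 4.0  # illustrative cut-off; tuned per organization in practice


def evaluate(event: dict[str, float], baseline: BaselineProfile) -> str:
    """Decide, per action, whether to allow, step up authentication, or block."""
    score = threat_score(event, baseline)
    if score >= BLOCK_THRESHOLD:
        return "block"          # shut down the illegitimate attempt immediately
    if score >= BLOCK_THRESHOLD / 2:
        return "step_up_auth"   # fall back to a traditional authentication challenge
    return "allow"


# Example: a user who normally views ~3 records per minute suddenly pulls 250 per minute.
baseline = BaselineProfile(
    mean={"records_accessed_per_min": 3.2, "session_length_min": 24.0},
    std={"records_accessed_per_min": 1.1, "session_length_min": 10.0},
)
print(evaluate({"records_accessed_per_min": 250.0, "session_length_min": 26.0}, baseline))  # -> "block"
```

In practice the threshold would be tuned against historical incidents, and the "block" path would feed the traditional response workflow rather than replace it, which is what keeps continuous authentication complementary to existing methods.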

Cybersecurity threat risk assessment. Source: Corporate Compliance Insights
Virtually every key sector of our economy and government is a potential victim of hackers. A September 13, 2021 CNN Business article by Sean Lyngaas conveys the extent of the risk in an ominous warning about the vulnerability of our infrastructure:
“Colonial Pipeline, one of the largest fuel pipelines in the United States, was forced offline for days this spring, leading to widespread shortages at gas stations along the east coast. The company paid millions to a hacking group to resolve the incident, though some of that money was later recovered by authorities.
Victims of ransomware attacks paid some $350 million in ransoms in 2020, according to Chainalysis, a firm that tracks cryptocurrency. But that’s only a partial view of total ransoms paid, and those who don’t pay can spend millions of dollars rebuilding their computer infrastructure.
Hacks can also be difficult to detect, and U.S. officials have worried that a lack of transparency about how attacks spread can mean that a single breach has the ability to ripple across many industries.
Last year, for example, alleged Russian spies exploited software made by federal contractor SolarWinds to infiltrate at least nine US agencies and about 100 companies. Hundreds of electric utilities in North America also downloaded the malicious software update used by the Russian hackers, offering a potential foothold into those organizations, (…)”
A Fast-Evolving and Increasingly Vulnerable Ecosystem
We are now revisiting this topic from two fundamentally new perspectives: (a) the emergence of AI and ML in healthcare, and (b) the increasingly difficult infosec circumstances caused by relentless ransomware attacks.
The sudden ubiquity of AI and ML in healthcare has substantially expanded the digital footprint of any healthcare organization that has adopted and deployed cloud-based algorithms. With an expanding footprint comes expanded exposure. Does each microservice constitute a mini backdoor into a hospital? How was the algorithm trained, and with what kind of data? Is it an inscrutable black box too risky for a hospital to deploy in its live clinical workflows? How much due diligence can a hospital infosec team afford to perform on each new incremental algorithm? More importantly: how much due diligence can they afford NOT to perform?
Which brings us to the cybersecurity aspect of AI and ML. IT professionals not only need to make sure algorithms are implemented in a way that will not upend existing clinical workflows; they also need to ensure that each algorithm doesn't represent a potential privacy or security threat, whether accidental or deliberate. Because hackers are very good at learning from their mistakes, any and all vulnerabilities are guaranteed to be exploited successfully at some point.
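One small, automatable slice of that due diligence is verifying that the algorithm artifacts a vendor delivers are exactly the ones that were reviewed and approved. The sketch below assumes the vendor supplies a JSON manifest of SHA-256 checksums; the file names and paths are hypothetical and stand in for whatever packaging format the vendor actually uses.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a model artifact through SHA-256 so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(manifest_path: Path, artifact_dir: Path) -> list[str]:
    """Compare every delivered artifact against the vendor-supplied checksum manifest.

    Returns human-readable findings; an empty list means the artifacts match what
    was reviewed, and the algorithm can move on to clinical and privacy review.
    """
    manifest = json.loads(manifest_path.read_text())  # e.g. {"model.onnx": "<sha256 hex>", ...}
    findings = []
    for name, expected in manifest.items():
        artifact = artifact_dir / name
        if not artifact.exists():
            findings.append(f"missing artifact: {name}")
        elif sha256_of(artifact) != expected.lower():
            findings.append(f"checksum mismatch: {name}")
    return findings


if __name__ == "__main__":
    # Hypothetical file locations for a newly delivered algorithm package.
    issues = verify_artifacts(Path("vendor_manifest.json"), Path("incoming_algorithm"))
    if issues:
        raise SystemExit("Do not deploy:\n" + "\n".join(issues))
    print("Integrity check passed; continue with clinical, privacy, and workflow review.")
```

An integrity check like this obviously doesn't answer the harder questions about training data or black-box behavior, but it is a cheap first gate that scales with the number of incremental algorithms an infosec team has to vet.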
Lines Are Blurring
A Crucial Tradeoff: Safety of the Enterprise vs. Freedom of the Individual
In the spring of 1999, a revolutionary and prescient sci-fi production starring Keanu Reeves was released: The Matrix. The movie was revolutionary not only for its stunning special effects, but also for its message. Its premise is that, in a dystopian future, humans have essentially become little more than bio-batteries, mere sources of power for the Matrix, a supercomputer that has enslaved humanity, unbeknownst to them, in order to perpetuate itself and the illusion that humans are free. The Matrix provides every creature comfort a human could want, immersing people in a fabricated digital world they don't know is fictitious.

The Matrix, written and directed by the Wachowskis. Image credit: Warner Bros.
One of the most important conclusions we can reach from this analysis is that individuals have already relinquished much of their freedom by indiscriminately sharing personal information across multiple social media platforms. While social media has generally made it easier for individuals and businesses to make exponentially more connections, our expanding digital personae also present many more potential angles of attack for determined cybercriminals.
Internet giants (collectively, The Matrix) find themselves in a paradoxical position, burdened with multiple conflicts of interest: the necessity to deliver Earnings Per Share (EPS) to shareholders by exploiting and manipulating end-user data to satisfy revenue goals, all while maintaining the illusion that personal data are safe from prying eyes; and the necessity to turn increasingly to continuous authentication and monitoring to keep end-users, and themselves, safe from cybercriminals.
Some may argue that social media platforms give us privacy options and preferences, and in fact they do; but these measures provide only a misguided illusion of privacy and safety. Why an illusion? Because these privacy options and preferences are designed and coded by the very platforms that financially reap the rewards of our addiction to social media; it's the fox writing the rules in the hen house. Navigating privacy preferences is an exercise in futility because end-users don't actually own their own data. The data, the bits and bytes that constitute our digital personae, are literally owned by the owners of the physical infrastructure hosting them. The vast majority of end-users willingly contribute their personal data, not knowing they also relinquish ownership, and therefore control, of that data.
In other words, we’re already living in the Matrix, like it or not. While the United States Constitution remains a beacon of individual freedoms, our addiction to social media has gradually eroded our freedoms, and our ever-expanding digital footprint has had an adverse impact on our safety and expectation of privacy. By indiscriminately sharing so much of our personal lives and opinions publicly, and for the permanent record, we have inevitably caused government and corporations to take steps that address ever-multiplying cybersecurity threats.
So where does that leave other vital sectors: government, defense, healthcare, finance, manufacturing, transportation, and education? They must tame the beast by monitoring it permanently. Next-generation continuous authentication, for better or for worse, is the only logical option open to organizations that want to fend off cybercriminals.
We already blindly trust bots to decide for us what content we should be looking at, so why not entrust our cyber safety to AI algorithms that are purposefully designed with our safety in mind? Welcome to the Matrix.
AI Conductor
Unifier with AI Conductor for PACS and EHR drives and conducts AI workflows to get the right information to the right location at the right time and in the right format.