AI Fights Back Part 3: Welcome to the Matrix. Safety of the Enterprise vs. Freedom of the Individual

By Florent Saint-Clair

In part 1 and part 2 of the AI Fights Back series, we examined the vulnerabilities that allow nefarious hackers to penetrate healthcare institutions, the protocols on this battleground, and a new weapon (AI) to be wielded in the fight against hackers. This final installment covers the cybersecurity adoption journey from the vantage point of the IT professionals who are expected to deploy AI while also preserving the integrity of the Enterprise.

AI integration should not be undertaken without proper due diligence, yet there is little guidance or prior experience currently available to shepherd IT professionals through the process. Those tasked with this delicate mission must exercise common sense, prudence, and an acute awareness of the new set of challenges AI poses in healthcare.

The Matrix Code written and directed by the Wachowskis. Image Credit Warner Bros.

End-User Adoption

Behavior modeling coupled with data science can be a scary prospect for individuals who are unwilling to compromise their freedoms. For this reason, it is far more likely that organizations and governments will readily accept a fundamental shift in their approach to cybersecurity. Ultimately, because individuals increasingly rely upon Cloud-based infrastructures that do not belong to them, they inevitably relinquish some of their freedoms as well. The responsibility and the choice to adopt continuous authentication will rest with the likes of Google, Facebook, and Amazon, whose infrastructures are being leveraged for the vast majority of our digital activities.

Continuous authentication should not be considered a replacement for more traditional methods. Instead, these approaches should be complementary to each other, and stronger together. In a behavior modeling approach, a rich data lake can be leveraged to mathematically derive a dynamic threat score, which can then be applied to every behavior by comparing the usage pattern to the known norm. It is far easier to catch a threat actor by detecting a score deviation than it is to catch them after the fact via data analysis. AI microservices can act faster than any data analyst could, and can shut down an illegitimate attempt with far less latency than traditional threat response methodologies.
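To make the scoring idea above concrete, here is a minimal sketch, not any vendor's actual implementation, of a deviation-based threat score: an observed behavior metric is compared to a user's historical baseline, and behavior that strays too many standard deviations from the norm is flagged. The metric names and threshold are illustrative assumptions.

```python
import statistics

def threat_score(baseline: list[float], observation: float) -> float:
    """Score how far an observed behavior metric (e.g. records opened
    per shift, login cadence) deviates from a user's historical baseline.
    Higher scores mean more anomalous behavior."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return 0.0 if observation == mean else float("inf")
    return abs(observation - mean) / stdev  # z-score against the known norm

def is_suspicious(baseline: list[float], observation: float,
                  threshold: float = 3.0) -> bool:
    """Flag behavior more than `threshold` standard deviations from normal."""
    return threat_score(baseline, observation) > threshold

# A clinician who normally opens 10-14 records per shift suddenly opens 80:
history = [12.0, 10.0, 14.0, 11.0, 13.0, 12.0]
print(is_suspicious(history, 80.0))  # the spike is flagged as anomalous
```

A production system would of course use far richer features and learned models rather than a single z-score, but the principle is the same: the deviation is detected as it happens, not discovered after the fact.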

Cybersecurity threat risk assessment. Source: Corporate Compliance Insights

Healthcare providers would particularly benefit from continuous authentication where AI-enabled monitoring could be leveraged for cross-enterprise sharing of real-time security threat information, effectively turning the intel into a digital vaccine against hackers with nefarious intent. The digital interaction among physicians, nursing staff, and patients constitutes an ever-shifting digital pattern that can be derived from every node within the healthcare system’s IT ecosystem (EHR/EMR, PACS/MIMPS, CVIS, RIS, LIS, etc). Not only does continuous authentication help reduce physician and nurse friction and fatigue by limiting the need to enter passwords, it can also be leveraged by patients to more easily grant access to their medical records, making their medical history more portable and shareable with legitimate protagonists along the patient care continuum, and without compromising privacy or security.

Virtually every key sector of our economy and government is a potential victim of hackers. A September 13, 2021 CNN Business article by Sean Lyngaas lays out the extent of the risk, an ominous warning about the vulnerability of our infrastructure.

“Colonial Pipeline, one of the largest fuel pipelines in the United States, was forced offline for days this spring, leading to widespread shortages at gas stations along the east coast. The company paid millions to a hacking group to resolve the incident, though some of that money was later recovered by authorities.

Victims of ransomware attacks paid some $350 million in ransoms in 2020, according to Chainalysis, a firm that tracks cryptocurrency. But that’s only a partial view of total ransoms paid, and those who don’t pay can spend millions of dollars rebuilding their computer infrastructure.

Hacks can also be difficult to detect, and U.S. officials have worried that a lack of transparency about how attacks spread can mean that a single breach has the ability to ripple across many industries.

Last year, for example, alleged Russian spies exploited software made by federal contractor SolarWinds to infiltrate at least nine US agencies and about 100 companies. Hundreds of electric utilities in North America also downloaded the malicious software update used by the Russian hackers, offering a potential foothold into those organizations, (…)”

A Fast Evolving and Increasingly Vulnerable Ecosystem

In a 2016 blog post, we paid homage to the Unsung Heroes of Health IT. Five years ago, our musings contemplated the fate of IT professionals expected to build and maintain complex ecosystems of IT solutions while fulfilling their mission-critical duties with marginally efficient and fragmented tools.

We are now revisiting this topic from two fundamentally new perspectives: (a) the emergence of AI and ML in healthcare, and (b) the increasingly difficult infosec circumstances caused by relentless ransomware attacks.

The sudden ubiquity of AI and ML in healthcare has substantially expanded the digital footprint of any healthcare organization that has adopted and deployed Cloud-based algorithms. With an expanding footprint comes expanded exposure. Does each microservice constitute a mini backdoor into a hospital? How was the algorithm trained, and with what kind of data? Is it an inscrutable black box too risky for a hospital to deploy in its live clinical workflows? How much due diligence can a hospital infosec team afford to perform on each new incremental algorithm? More importantly: how much due diligence can they afford NOT to perform?

Which brings us to the cybersecurity aspect of AI and ML. IT professionals not only need to make sure algorithms are implemented in a way that will not upend existing clinical workflows; they also need to ensure that each algorithm doesn't represent a potential privacy or security threat, whether accidental or deliberate. Because hackers are very good at learning from their mistakes, any and all vulnerabilities are guaranteed to be exploited successfully at some point.

Lines Are Blurring

When it comes to authentication, the line distinguishing humans and “things” is quickly disappearing. What is the difference between a human interacting with data, and a device acting on behalf of a human? AI or ML interrogating a patient’s record, for all intents and purposes, may as well be a physician querying the EHR. A remote patient monitoring system sending data to a central telemedicine module still needs to have proper authentication in order to legitimately access patient records, or to contribute patient data to the patient’s record. Yet, every device connected to the telemedicine module represents a possible vulnerability for the enterprise if it were to be exploited for nefarious purposes.
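One common way to give a device a verifiable identity alongside human users, sketched here as a hypothetical illustration rather than any specific telemedicine product's design, is to have the device sign each submission with a per-device secret that the central module can verify. The device ID, secret, and payload below are all invented for the example.

```python
import hashlib
import hmac

# Hypothetical per-device secret, provisioned when the monitor is enrolled.
DEVICE_SECRETS = {"rpm-monitor-017": b"provisioned-shared-secret"}

def sign_payload(device_id: str, payload: bytes) -> str:
    """Device side: sign a vitals payload with the device's secret."""
    key = DEVICE_SECRETS[device_id]
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_payload(device_id: str, payload: bytes, signature: str) -> bool:
    """Server side: accept data only if the signature checks out, so an
    unenrolled or spoofed device cannot write to the patient record."""
    key = DEVICE_SECRETS.get(device_id)
    if key is None:
        return False  # device was never enrolled
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

reading = b'{"patient": "123", "hr": 72}'
sig = sign_payload("rpm-monitor-017", reading)
print(verify_payload("rpm-monitor-017", reading, sig))         # accepted
print(verify_payload("rpm-monitor-017", reading + b"x", sig))  # tampered, rejected
```

The point of the sketch is that, from the authentication layer's perspective, the monitor is simply another principal to be verified continuously, exactly as a human user would be.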

A Crucial Tradeoff: Safety of the Enterprise vs. Freedom of the Individual

Clearly, end-user authentication is a trade-off between our personal freedoms as individuals and the necessity to maintain the integrity of enterprises. There is inherent comfort in knowing that a system is continuously monitoring everyday behavior in order to keep us safe. There is also inherent discomfort in feeling like "Big Brother" is watching. Ultimately, continuous authentication serves organizations (especially governments) better than it serves individuals, because governments and corporations tend to care less about personal freedom and privacy than an individual would.

In the spring of 1999, a revolutionary and prescient sci-fi production starring Keanu Reeves was released: The Matrix. The movie was revolutionary not only for its stunning special effects but also for its message. Its premise is that in a dystopian future, humans have become little more than bio-batteries, mere power sources for the Matrix, a supercomputer that enslaves them, unbeknownst to them, in order to perpetuate itself and the illusion that humans are free. The Matrix provides every creature comfort a human could want, immersing its captives in a fabricated digital world they do not know is fictitious.

The Matrix written and directed by the Wachowskis. Image Credit Warner Bros.

The interesting parallel between The Matrix’s message 22 years ago and today’s Internet reality is that our society has become completely immersed and invested in digital personae, faithfully and blindly contributing energy (personal data, private content) for the world (the Matrix) to see and consume. This content we so eagerly contribute to social media companies continues to feed the beast, making the distinction between our physical person and digital persona increasingly elusive. By extension, it’s getting more and more difficult for IT professionals to protect the sanctity of privacy and security for end-users and for organizations, especially in healthcare.

One of the most important conclusions we can reach from this analysis is that individuals have already relinquished much of their freedom by indiscriminately sharing personal information across multiple social media platforms. While social media has generally made it easier for individuals and businesses to multiply their connections, our expanding digital personae present many more potential angles of attack for determined cybercriminals.

Internet giants (collectively, The Matrix) find themselves in a paradoxical position, burdened with conflicting interests: the necessity to deliver Earnings Per Share (EPS) to shareholders by exploiting and manipulating end-user data while maintaining the illusion that personal data are safe from prying eyes, and the necessity to turn increasingly to continuous authentication and monitoring to keep end-users, and themselves, safe from cybercriminals.

Some may argue that social media platforms give us privacy options and preferences, and in fact they do; yet these measures provide only a misguided illusion of privacy and safety. Why an illusion? Because these privacy options and preferences are designed and coded by the very platforms that financially reap the reward of our addiction to social media; it's the fox writing the rules in the hen house. Navigating privacy preferences is an exercise in futility because end-users don't actually own their own data. The bits and bytes that constitute our digital personae belong, quite literally, to whoever owns the physical infrastructure hosting them. The vast majority of end-users willingly contribute their personal data, not knowing they also relinquish ownership, and therefore control, of those data.

In other words, we’re already living in the Matrix, like it or not. While the United States Constitution remains a beacon of individual freedoms, our addiction to social media has gradually eroded our freedoms, and our ever-expanding digital footprint has had an adverse impact on our safety and expectation of privacy. By indiscriminately sharing so much of our personal lives and opinions publicly, and for the permanent record, we have inevitably caused government and corporations to take steps that address ever-multiplying cybersecurity threats.

So where does that leave the other vital sectors: government, defense, healthcare, finance, manufacturing, transportation, and education? They must tame the beast by permanently monitoring it. Next-generation continuous authentication, for better or for worse, is the only logical option open to organizations to fend off cybercriminals.

We already blindly trust bots to decide for us what content we should be looking at, so why not entrust our cyber safety to AI algorithms that are purposefully designed with our safety in mind? Welcome to the Matrix.