Taming The AI Gold Rush

By Florent Saint-Clair

In the dystopian HBO series Westworld, Hosts (androids) are built to look and act like humans; some Hosts have built-in flaws that make them unpredictable, and therefore a danger to the humans who created them. In the show, however, these “flaws” are deliberately planted rather than accidental, making them features rather than bugs in their code.

In a less dystopian world, failing to recognize and mitigate built-in flaws in AI, especially in healthcare, could have some very real consequences.

Physicians are here to stay, and we need more of them, not fewer. Once we eliminate the elements of emotion and fear of the unknown stemming from the sudden emergence of AI, we are free to shift our focus back to what matters most: the patient.

Dolores Abernathy was the first “host” in the HBO series Westworld. Image credit: HBO.com

Physician Burnout Is On The Rise, and AI Could Help Mitigate It

A 2019 Advisory Board report, published before the pandemic, indicated unusually high rates of physician burnout and depression, affecting between 40% and 50% of participants. These figures can only have been exacerbated by the Covid-19 crisis, which has placed immeasurable stress on healthcare professionals for the past year. These statistics are further indicators that patients need more physicians and nurses, and that those same healthcare professionals need all the help AI can give them.

Physicians today are working longer hours while being expected to deliver coordinated care to an aging population with complex conditions and comorbidities, even as healthcare costs rise and regulations add to the administrative burden. Machines are far better and faster than humans at many tasks, in particular quickly scanning through vast amounts of structured and unstructured data, recognizing patterns, and presenting information in an actionable format for the benefit of humans.

What we are sorely missing today in healthcare is more education about AI, and a concerted willingness to acknowledge and bridge the knowledge gap that exists between AI and healthcare. Although medicine has seen major changes over the last decades, medical education is still largely based on traditional curricula. Education of healthcare professionals on the subject of AI has been severely lacking, and as a result they often do not fully understand the virtues and possibilities of AI.

The AI Gold Rush

Prospector pans for gold. Image credit: Findgoldprospecting.com

The problem with gold rushes (yep, AI is a gold rush) is that the gold seekers are adventurous and entrepreneurial risk-takers who tend to cut corners to stake their claims to the biggest veins, not knowing whether or not they will in fact strike gold. AI has followed a very similar pattern: developers rushed to build AI solutions, in many cases not knowing whether they would solve real clinical challenges. Adding fuel to the fire is a seemingly endless supply of venture capital being made available to AI entrepreneurs, mimicking a dangerous pattern that occurred in the early 2000s during the dotcom bubble.

If AI in healthcare had followed a proper product management process, it would have been executed far differently. A methodical product manager is typically a subject matter expert to start with, and will make sure they fully understand the nature of the problem they are trying to solve before any code is written. Instead, the healthcare AI industry rushed to create solutions for problems that were rarely pre-validated by clinicians. 

In a Feb 2018 article about AI validation in healthcare, in collaboration with Dr. Alan Pitt of the Barrow Neurological Institute in Phoenix, AZ, we outlined the five key chronological steps that are often overlooked in AI development, each requiring deep physician involvement:

  1. Identify the problem. What is the specific problem the algorithm is intended to solve? How exactly would it help physicians improve the quality of patient care?
  2. Validate the problem. Is the identified problem real?  Has it been validated by many physicians as a real problem, versus a pet project for a handful of entrepreneurial physicians?
  3. Validate the team’s competence to solve the problem. Is the likelihood high (95% or more) that the algorithm creators will effectively address the stated problem?
  4. Validate marketability. Will someone agree to pay money for the algorithm? Are caregiver organizations willing to pay, and how much are they willing to pay, for the use of the algorithm in clinical settings?  Whose budget will pay for the technology? Radiology? IT? Cardiology? ER?
  5. Validate scalability/replicability. What is the deployment methodology to implement and scale the technology? Will physicians use the algorithm from within their PACS/MIMPS user interface? Is deployment contingent upon other health IT vendors’ willingness to “play ball,” or can the technology be deployed on a standalone basis? Will delivery be on premises, Cloud-based, or hybrid?

The Industry’s Bias Problem

AI developers have tended to suffer from poor or fragmented knowledge of the clinical workflows they seek to transform. Because medical images are owned and jealously guarded by healthcare providers, AI developers have often had to compromise, resorting to limited data sets – a few hundred images in some cases – to train their algorithms.

Compounding the issue is the use of plain, unstructured JPEG (non-DICOM) images to train algorithms, and subsequent attempts to introduce those algorithms into DICOM-driven imaging workflows. Image data scarcity has also created a built-in challenge for AI: unavoidable bias in the training of algorithms – too few images, too little diversity in the data sets, and too few clinicians helping to train the algorithms.

In a Nov 2017 article, we took a closer look at the risks of bias in medical imaging AI. The scarcity of usable (properly de-identified) images, biased image-labeling (too few physicians training algorithms), and the general lack of checks and balances on AI Research & Development, all contribute to the industry’s bias problem.

In a November 2, 2017 TechCrunch article, OpenAI Director Shivon Zilis raises the specter of unchecked research and development by technocrats in a dangerous vacuum.  Researchers and software developers are essentially writing policy into code, thereby fundamentally influencing an important aspect of human cultural evolution without enough input from the public.  

“…artificial intelligence researchers are, in some ways, ‘basically writing policy in code’” because of how influential the particular perspectives or biases inherent in these systems will be, and she suggested that researchers could consciously set new cultural norms via their work. Zilis added that the total number of people setting the tone for incredibly intelligent AI is probably “in the low thousands.”

These multiple levels of challenges all contribute to the less-than-perfect track record of AI in healthcare today. Additionally, the youthful exuberance of AI developers and entrepreneurs stands in sharp contrast with the skeptical attitude of physicians when presented with the idea to implement AI into their clinical workflows. 

Healthcare’s Reluctant Adoption of AI

From the viewpoint of an outsider looking in, it’s almost as if the healthcare community were a skeptical adult listening to the antics of an excited child with bemused indulgence, but with little intention to actually hear what the child has to say. There is clearly work to be done on both sides – more maturity and composure are needed on the part of AI developers, who typically approach a problem from the perspective of innovation and creative engineering.

Conversely, more positivity and openness is needed on the part of the healthcare community, which has been traditionally trained to think about the well-being of patients first (“do no harm”). Healthcare has historically been a laggard when it comes to trusting and embracing new technologies, for the very valid reason that patient safety should always come first. 

Even the most educated and experienced individuals are susceptible to the paralyzing effects of fear of the unknown. Five years ago, physicians’ fear of the unknown was also compounded by hospital IT’s fear and avoidance of the Cloud. Fast forward to 2021, and a rapidly increasing number of hospital systems are embracing the Cloud as a sensible approach to solving enterprise IT challenges, some out of pragmatism, others out of financial and operational necessity.

In the category of Enterprise Imaging in particular, we’ve observed an area that requires substantial improvement: while AI developers show clear competence in the field of machine learning, they often lack basic knowledge of the industry standards and workflows that drive imaging. This knowledge gap makes it challenging for the imaging community to readily embrace newly developed algorithms: unfamiliarity with RESTful APIs such as DICOMweb and FHIR tends to be a crippling blind spot when it comes to realistically integrating AI into live clinical workflows. Knowledge and experience of the workflows themselves also tend to be problematic, as AI developers may not fully understand the context in which clinicians use their AI. We discussed these challenges in depth in our previous blog post, Top 4 Challenges of Artificial Intelligence Adoption in Enterprise Imaging.
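To make the DICOMweb point concrete, here is a minimal sketch of how an AI integration might form a QIDO-RS study search – the DICOMweb service for querying a PACS over REST. The base URL and attribute values are hypothetical placeholders; real endpoints and supported query attributes vary by vendor.

```python
# Minimal sketch: building a DICOMweb QIDO-RS study-search URL.
# QIDO-RS is the DICOMweb query service; a client GETs this URL
# with the header "Accept: application/dicom+json".
from urllib.parse import urlencode

def qido_study_search_url(base_url: str, **filters: str) -> str:
    """Build a QIDO-RS /studies search URL from DICOM attribute
    filters (e.g. PatientID, StudyDate, ModalitiesInStudy)."""
    query = urlencode(filters)
    return f"{base_url.rstrip('/')}/studies?{query}"

# Hypothetical example: find CT studies for one patient on one date.
url = qido_study_search_url(
    "https://pacs.example.org/dicomweb",  # placeholder endpoint
    PatientID="12345",
    StudyDate="20210101",
    ModalitiesInStudy="CT",
)
print(url)
```

An algorithm developer who understands this pattern can pull studies from a standards-compliant archive rather than depending on one-off image exports, which is precisely the workflow fluency the imaging community looks for.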

AI Professionals Strive To Close The Imaging Knowledge Gap

However, we are seeing evidence that AI developers are quickly eliminating this blind spot by adding imaging professionals and subject matter experts to their core development teams, ensuring that the AI they develop will naturally fit within existing clinical workflows.

Academia is quickly adapting and creating new programs that systematically address the knowledge gap between AI and Healthcare. Industry leaders such as Ajit Jaokar, Course Director for Oxford University’s AI programs, are making sure that students graduate from their programs with the practical knowledge necessary to ensure the AI they develop can be realistically implemented in their chosen fields.

The Pandemic Promotes Increased Collaboration and Innovation

This year we have reached an inflection point at which the curve of AI reliability and the curve of Cloud adoption are merging, compounding each other’s acceleration. We project that 2021 will be the year the AI education gap is addressed, resulting in an explosion of AI adoption in healthcare thanks to increased collaboration between healthcare professionals and AI developers.

In the age of Covid-19, we’re seeing the necessity to substantially accelerate the timelines that healthcare professionals and regulators are accustomed to. Processes that used to take years of peer-reviewed research have recently proven to yield results in far shorter timeframes, and it’s interesting to take a closer look at why this is true. Granted, regulators have cut tremendous amounts of red tape to address the Covid-19 crisis and help release vaccines far sooner than would otherwise have been possible. However, without global cooperation among researchers, and the AI that was leveraged to produce vaccines, no amount of red-tape cutting would have amounted to the results we’ve seen in these past twelve months.

Moderna, Pfizer, J&J, among others, have all engaged in a level of collaboration that goes against their respective financial interests. Intellectual property in the pharma world is the ultimate value driver for these companies to reap the rewards of their drug development efforts. For competitors to collaborate, compare and share notes, and to learn from each other is in and of itself a remarkable deviation from business as usual. However, beyond this commendable willingness to share and collaborate, it should be noted that AI has played a key catalytic role in the fast-tracking of COVID vaccine development. 

In December 2020, Susan Caminiti wrote an article covering in more detail the direct role AI has played in the battle against COVID-19.

“…artificial intelligence, machine learning, and natural language processing have been deployed in the battle against Covid-19 since the pandemic’s earliest days. It started with the ability of AI to track the spread of the disease in order to limit the number of positive cases. It’s also been used to parse billions of pieces of data and thousands of scientific papers on the topic needed to develop the vaccines that are about to be distributed. And now, AI and other digital tools are being deployed to manage complex supply chains for the vaccine, and to collect the immense amount of data needed to track any adverse effects individuals might have after receiving it.”

Will this spirit of collaboration endure beyond this pandemic? Perhaps, perhaps not. It all depends on what the economic fallout will be in the months and years after the pandemic has been neutralized, and the necessity for the pharma sector to go back to business as usual, quarterlies, and pre-pandemic Wall Street expectations.

What AI needs now is tailwinds, amplified by trusted voices in healthcare, giving physicians the confidence that AI will enhance and supplement their efforts for the benefit of patients. AI can help physicians provide the kind of healthcare they envisioned giving patients when they entered medical school as idealistic students.