AI Series Part I: The Intuition Distinction—Artificial Intelligence vs. Empirical Machine Learning

By Florent Saint-Clair

For the past couple of years, Dicom Systems has been increasingly immersed in large-scale data de-identification projects. In 2017, the annual aggregate throughput of medical imaging exams passing through our enterprise imaging Workflow Unifier appliances is projected to exceed 70 million exams, or more than 3 billion images, which makes big data analytics and deep learning a natural progression for the Unifier platform.

Our involvement in big data and medical research projects has required us to look at the data entrusted to us in radically different ways, and has driven us to understand HIPAA and Safe Harbor provisions at a much deeper level.

Big Data’s Role in Enterprise Imaging

Big data analytics and medical research are brave new realms for Dicom Systems, encouraging us to better define the marketing messaging associated with these new activities. The most natural place to start such a definition exercise is the competition, so we looked at a short list of the usual suspects in big healthcare data. We've been baffled by the flagrant inconsistencies and overall level of inaccuracy – dare I say hubris – in big data and AI marketing messaging.

Machine Learning in Medical Imaging

I recently came across a LinkedIn update from a large player, which shall remain unnamed, boasting that only its AI platform could keep your data safe. This kind of messaging is outright misleading and highly presumptuous. It also made me wonder whether it was a prank, perhaps meant to parody the president's boastful claim that he alone can fix America.

Among many other terms, we've found references to "machine learning," "deep learning," "algorithms," "artificial intelligence (AI)," "neural networks," "big data," and "analytics," all used interchangeably. This conflation of distinct and complex concepts has produced an increasingly noisy, confusing, and amorphous marketplace, in which inconsistencies and inaccuracies seem to go unchallenged.

Allow me to postulate that if two different words or phrases exist for seemingly identical concepts, there is probably a distinction, and they should not be used interchangeably. In medical research, we are in the realm of science, not science fiction or advertising. I would therefore presume that the audience actually cares about accuracy and facts rather than spin.

My high school Philosophy teacher had a leitmotif that still resonates to this day: be rigorous in your thinking, don’t succumb to “Lazy Thinker Syndrome.” In other words, get to the bottom of what you mean, don’t merely skim the surface, know the meaning of the words you use, and use the right words for the right context. Any evidence of lazy thinking in a paper would result in the ultimate punishment: learning by heart a few paragraphs of Plato’s Republic, or worse, Immanuel Kant – his personal favorite.

I often notice myself cutting corners in my thinking or in dialogue with others, and I often catch my contemporaries engaging in lazy thinking with phrases like "you know what I mean…," "whatever…," or "anyways…" These are the typical precursor phrases that signal half-baked statements, embryonic thoughts, or generally uninformed and empty rhetoric. The culprit uttering these words is hoping the audience won't dig in and will let them get away with it, or that the audience won't risk sounding ignorant by calling out inconsistencies.

The Artificial Intelligence Promise – Is True AI a Medical Imaging Myth?

Rigor of thinking is a lot to ask in the age of Snapchat and Instagram, especially when thoughts and policies are increasingly conveyed in vague, grossly abbreviated or outright inaccurate and misleading communications. We can’t blame the companies taking liberties in their marketing spin; after all, they count on their audience’s complacency, desire to be dazzled, and saturated communication channels.

Which brings us to this abbreviation: "AI," for "Artificial Intelligence." It sounds cool, doesn't it? It's a whiff of the future, except that future is now. At least, that's what savvy tech marketers would like their target markets to believe. Applied to the medical world, "AI" conjures up Hollywood imagery of shiny humanoid robots assisting overworked physicians and nurses in futuristic hospitals.

It's an exciting future they're selling, in which the scarcity of medical professionals is remedied by the deployment of a swarm of Artificially Intelligent robots that never get tired of reading medical images, never have to worry about RVUs, don't care about profit sharing, never violate SLAs, never miss a relevant prior, correlate millions of other similar exams in an instant, and can perform 100% peer review on all exams, old and new.

I completely buy into this grand vision, and I believe neural networks will ultimately help medical professionals deliver better patient care. Replace medical professionals they will not.

What I'm not buying is that Machine Learning is equivalent to Artificial Intelligence. In 2017, as I write these words, my belief is that humans have yet to create a machine capable of true intelligence, which means Artificial Intelligence is probably still science fiction.

What we are in fact getting is the illusion of intelligence, owing to the sophistication of modern machine learning algorithms. Just as the human eye perceives the illusion of movement when shown still images at 24 frames per second, a deep learning algorithm correlates data at a speed that gives the human mind the illusion that the algorithm possesses actual intelligence. From there, it's easy to get lazy and just call it Artificial Intelligence.

Anthropomorphism is hardly a new concept for humans. There is something comforting in the idea that an inert object such as a computer could engage us in intelligent conversation. It makes us feel less lonely. That's why humans lend their voices to computers such as IBM's Watson (at least in IBM's mind-blowing TV commercials), Apple's Siri, and Amazon's Alexa. But that is where the parallel stops, at least for now. Watson, Siri, and Alexa give us a pleasing illusion of intelligence, but they are not actually intelligent – simply well programmed. The human engineers who created them, now they are intelligent.

Today’s science, impressive and promising as it may be, is still glorified machine learning on a grander scale. Although increasingly sophisticated, the machine learning algorithms being placed into production today cannot, by any stretch of the imagination, be sold as AI.

For example, a sophisticated medical diagnostics algorithm can correlate myriad symptoms with possible illnesses, based on having ingested and analyzed massive amounts of historic data, calculate a probability for what those symptoms could mean, and present options. Even a sophisticated list of boolean statements applied to a set of symptoms is capable of narrowing down the possibilities, and may arrive at the correct diagnosis.
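To make that concrete, here is a minimal sketch of the kind of boolean, rule-based correlation described above. The condition names, symptom profiles, and scoring scheme are hypothetical illustrations invented for this example, not a real diagnostic model:

```python
# A minimal, hypothetical sketch of rule-based symptom correlation:
# boolean matching of observed symptoms against known condition profiles.
# The conditions and profiles below are invented for illustration; a
# production system would derive them from massive historic data sets.

CONDITION_PROFILES = {
    "influenza": {"fever", "cough", "fatigue", "body aches"},
    "pneumonia": {"fever", "cough", "chest pain", "shortness of breath"},
    "common cold": {"cough", "sneezing", "sore throat"},
}

def rank_conditions(observed_symptoms):
    """Score each condition by the fraction of its profile that matches,
    then return candidates sorted from most to least probable."""
    observed = set(observed_symptoms)
    scores = {}
    for condition, profile in CONDITION_PROFILES.items():
        overlap = observed & profile
        if overlap:  # boolean gate: at least one matching symptom
            scores[condition] = len(overlap) / len(profile)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for condition, score in rank_conditions({"fever", "cough", "chest pain"}):
        print(f"{condition}: {score:.0%} of profile matched")
```

Every "decision" this sketch makes was fully determined in advance by whoever wrote the profiles: empirical correlation, not intelligence.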

An artificial "radiologist" could analyze millions of MRI images and begin to recognize patterns and relationships between pixel formations and the contents of the corresponding diagnostic reports. A radiologist friend once said, "after you've read 20,000 chest x-rays, you've seen them all…," which theoretically means that through massive empirical repetition, an algorithm could accumulate the requisite knowledge of which pixel patterns are associated with which diagnostic findings in radiology reports.
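As a toy illustration of that empirical pattern-matching, here is a minimal sketch that trains a classifier to associate pixel patterns with labels one might extract from diagnostic reports. The data is synthetic noise with a planted "finding" pattern; a real system would train on millions of actual exams:

```python
# A toy sketch of pattern learning from images: a model that learns to
# associate pixel patterns with report-derived labels. The images here are
# synthetic noise plus a planted bright patch standing in for a "finding".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_exams, size = 2000, 16  # tiny 16x16 "images" for illustration

images = rng.normal(size=(n_exams, size, size))
labels = rng.integers(0, 2, n_exams)  # 1 = report mentions a finding
# Plant a bright patch in positive exams so there is a pattern to learn.
images[labels == 1, 4:8, 4:8] += 1.5

X = images.reshape(n_exams, -1)  # flatten pixels into feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")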

Conceptually, there is little difference between an artificial "radiologist" computer and a computer beating the best human chess players in the world; the difference lies in the specific task each was given. The former ingests medical images, the latter ingests chess moves. The chess-playing computer wins by anticipating every combination of moves its opponent might make, drawing on its knowledge of the games it has ingested. In radiology, for some of the simplest imaging exams with obvious findings – or no findings – it is possible to envision algorithms being tasked with the bulk of the reading responsibilities while a human physician simply performs routine QA or peer review along the way, leaving the more complex and subtle cases for real doctors to read.
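For readers curious what that look-ahead actually amounts to, here is a bare-bones sketch of minimax search on a deliberately tiny game (a Nim variant), since a real chess engine would run to thousands of lines. Chess programs apply the same exhaustive enumeration, plus pruning and evaluation heuristics, over vastly larger move trees:

```python
# Exhaustive look-ahead (minimax) on a tiny game: players alternately take
# 1-3 stones, and whoever takes the last stone wins. The machine "plays
# perfectly" simply by enumerating every possible continuation.

def minimax(stones, maximizing):
    """Return the best achievable outcome for the maximizing player
    (+1 win / -1 loss), by enumerating every possible continuation."""
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    outcomes = (minimax(stones - take, not maximizing)
                for take in (1, 2, 3) if take <= stones)
    return max(outcomes) if maximizing else min(outcomes)

def best_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, maximizing=False))

if __name__ == "__main__":
    print("with 10 stones, take:", best_move(10))  # prints 2, a forced win
```

No move here is chosen; every move is the deterministic output of a search procedure a human wrote.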

The missing elements, still fundamentally separating machine learning from true intelligence, are free will (choice) and human intuition. Even if a robot were made sophisticated enough to compute good comedic timing and crack a joke at the appropriate moment, it shouldn't be qualified as an Artificially Intelligent comedian. The robot's "comedian" algorithm would have been deliberately crafted by humans for the purpose of cracking a joke, and the robot would have no choice but to crack that joke whenever the context called for it, according to its pre-programmed responses. Pre-programmed responses aren't capable of recognizing nuance – unless the robot's creators have given the algorithm enough depth and parameters to produce nuanced and contextually appropriate responses. The common denominator among all machines is that they do not possess free will.

The robot doesn't actually get to choose whether or not to crack the joke, because it lacks the free will to choose in the first place. That "choice" distinction makes all the difference in the world, and separates intelligent purpose from programmed purpose. The comedian algorithm doesn't actually have a sense of humor, a trait that remains distinctly human. Comedy is merely a feature programmed into the robot's algorithm, designed to make the robot more affable and relatable to humans. Ultimately, the joke is the result of a sophisticated, empirical analysis of context and input, followed by an output.

[Image: IBM Watson's "First Thinking Sculpture." Credit: IBM.com]

IBM's Watson recently created a Gaudí-inspired sculpture, dubbed the "First Thinking Sculpture." This is no doubt a superb accomplishment. Watson was fed multiple types of input, including images and song lyrics, to create something entirely new that would capture a "feeling" or "mood." The result is beautiful, astonishing, and radically forward-thinking. But whose thinking? Watson's? Does Watson have the intelligence and free will to choose to create? Did Watson rise up from sleep mode, spontaneously inspired to create a sculpture? The answer is clearly no; Watson was given specific instructions and input that directed its computing resources toward the purpose of creating a sculpture.

By contrast, I'm in tune with the notion that "Deep Learning" is a more accurate moniker for what is routinely and erroneously marketed as AI. The creation of neural networks, whose job is to mimic the way the human brain functions, is in greater alignment with the technologies being touted by various Fortune 100 tech players today. Of course "AI" sounds cooler than "Deep Learning," because it captures the imagination more than "learning" does. Learning sounds like too much work for someone who is already overworked and bombarded with ubiquitous marketing spin.
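For readers who want to see what "deep learning" names at the mechanical level, here is a minimal sketch of a neural network: layers of weighted sums and nonlinearities trained by gradient descent. It learns XOR, a task no single linear layer can solve; the layer sizes and learning rate are arbitrary choices for illustration:

```python
# What "deep learning" actually names: layers of weighted sums and
# nonlinearities, loosely inspired by biological neurons. This tiny NumPy
# network learns XOR to show that the mechanism is mathematics, not mind.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):                 # gradient descent on squared error
    h = np.tanh(X @ W1 + b1)          # forward pass
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)        # backpropagation
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Impressive as the learned behavior is, every step above is arithmetic that a human specified in advance.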

Let's not fall for the glamour of AI; that's the lazy thinker's route. We should question what is actually being marketed to us, and demand that technology vendors define what they are selling with more accuracy. At this year's enterprise imaging trade shows, I fully expect to see AI messaging everywhere we look. Yes, the concepts of neural networks and deep learning are more difficult to grasp and tend to capture the imagination a little less; but have we become so collectively lazy in our thought process that we cannot push back on inaccurate messaging?

AI should not be used interchangeably with Deep Learning or neural networks. The day a deep learning algorithm is able to conclusively deliver a diagnosis based on intuitive imagination, I'll be ready to promote the algorithm to the level of AI. By intuitive imagination, I mean a conclusion reached not from empirically available evidence or obscure data sets in the deep confines of the neural network, but from creative envisioning that allows the algorithm to postulate a possible answer without the benefit of historic data. When the algorithm begins to think like Dr. House and gets its dramatic light-bulb moment, the illusion of intelligence will be indistinguishable from actual intelligence, and I'll capitulate.

Some die-hard fans of AI may argue there is no difference between a creatively perceived diagnosis and a diagnosis derived from empirical data analysis and correlation. If a machine and a physician reach an identical conclusion, why couldn't we qualify that machine as artificially intelligent? Isn't the physician tapping into his or her own empirical memory to articulate a possible diagnosis? From a strictly empirical perspective, that argument is partially accurate, yet incomplete. Some of the most meaningful and influential theories were postulated thanks to creative observation, not repetitive, empirical observation. Historic data are no doubt important, but they also tend to limit possibilities to the realm of the known. Mix historic data with human ingenuity and intuition, and that is when sparks really fly.

Even as a supplement to its empirical knowledge, a machine could never factor in elements that are strictly attributable to humans, such as emotion, empathy, intuition and, most importantly, choice. The machine never has to worry about its own mortality, has never lost a relative to cancer, has never held a child in its arms, and does not feel physical or psychological pain. An interesting corollary question is whether a machine's objectivity is more desirable than a human's subjectivity in medical diagnostics; we don't live in a binary world, so the answer is probably that a bit of both is necessary to articulate a sensible diagnosis. We'll delve into that topic in a separate piece.

Engineers could theoretically feed a machine data points that mimic the various attributes of human experience, but such input is incremental and finite, whereas human experience is infinite. Human experience is to medicine what 88 piano keys are to music: infinite possibilities. The process by which a physician reaches a conclusion taps into a far broader set of resources and dimensions than any neural network could deliver. Harnessing the power of neural networks in conjunction with human experience is materially accelerating scientific progress and discovery, and I have no doubt this human/machine partnership will soon help yield cures for cancer and many other scourges of humanity.

Actual Intelligence remains a fundamentally human and socially relative attribute. Until a singularity event is upon us, I’ll continue to regard Artificial Intelligence with a healthy dose of skepticism, and as intellectually stimulating, thought-provoking science fiction. I dread the day the first artificial radiologist becomes self-aware, and resents its first slightly diverging peer review.

Since the date of publication, additional AI resources have become available.

The FDA has published a list of FDA-approved AI algorithms for medical imaging, with dozens of algorithms shown to be proven, reliable, and accurate. Radiology, along with cardiology, is a leading medical specialty when it comes to AI algorithms. This is in part because computer vision is one of the fastest-growing fields in AI development, and medical imaging offers both the data volume and the visual richness that smart algorithms need to thrive.

Another valuable resource we would like to share is Herman Oosterwijk’s excellent summary of Healthcare AI Regulatory Considerations. Of particular note is the recently introduced procedure for a “de novo” filing, recognizing that the FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies.

For those who may be new to Artificial Intelligence lingo, we have created an AI glossary of top terms as they relate to medical imaging.

AI Conductor

Unifier with AI Conductor for PACS and EHR drives and conducts AI workflows to get the right information to the right location at the right time and in the right format.
