
Previously on the Dicom Systems blog, I expressed deep skepticism (actually it was more like scorn) for marketing spinners who turn AI into the next “gravy train” for medical imaging technology companies.

Functional artificial intelligence, let alone super-intelligence, has yet to be achieved in healthcare. Making a claim on a concept that, at this point, exists in the realm of science fiction is a bit of a reach for a profession that must err on the side of conservatism. We can definitely innovate in healthcare, but our core values continue to be rooted in patient safety above all else.

Top Use Cases For Machine Learning In Healthcare

We’re grateful for publications such as Health IT Analytics, whose editors choose to stay away from hype and objectively label the topic as it should be: machine learning. We’re still learning, and the machines are still learning. The March 2017 Health IT Analytics article presenting four concrete use cases for machine learning in healthcare is particularly relevant.

Nonetheless, in recent months, our team has witnessed a few eye-opening use cases that have softened our stance on AI, and that have led me to cautiously set aside my signature skepticism (for now). There’s greater reward in accelerating this quixotic journey we’re all on, starting with open dialogue.

Embracing Machine Learning Opportunities Amidst The AI Debate

When it comes to making a definitive call on AI, the temperature of the water seems either scalding hot or freezing cold, mixed in with a bit of evangelizing and pontification. Much like the debate surrounding gun control, Artificial Intelligence is a polarizing topic with contrasting points of view. On one end, Elon Musk takes the pessimist’s view that, if unchecked, AI research could lead to an “existential threat” for humans. On the other end of the spectrum, Mark Zuckerberg takes the optimist’s view that we should go full bore into AI, going so far as to build a new dedicated AI lab in Montreal.

These influencers’ diverging views on Artificial Intelligence reveal a deep divide in what we see for humanity’s future. What they do have in common, and we agree, is that they recognize the very real impact and awe-inspiring possibilities of AI, good or bad.

Note: in this blog post we do not speculate on the question of whether or not human intelligence SHOULD be mimicked or replicated, the answer to which belongs in the realm of philosophy (echoes of Dr. Ian Malcolm, Jurassic Park, speaking to John Hammond: “Yes, but John, your scientists were so preoccupied with whether they could, they never stopped to think if they should…”).

Google Cloud Enables Intelligent Healthcare Solutions

Operating at a less extreme point on the spectrum is Google, which has stayed out of the futile, time-consuming public debate and quietly focused on accelerating machine learning. We can identify with the constructive questions Google scientists are asking and the concrete steps they are taking: What constitutes human intelligence? What are the individual building blocks that make up human intelligence? Can they be replicated, or at the very least mimicked?

Last August, our team was in Sunnyvale, CA for a Google Partners Summit, where we learned of some promising applied AI use cases. Google has made substantial strides in identifying the individual building blocks that collectively constitute a rudimentary form of human intelligence: learning as a baby does, adjusting behavior based on the “get high” feedback loop that fuels our reward system. In other words, making new artificial neural connections.

Examples shared by Google included a simple neural network that taught itself to master Atari Breakout.  The research team engaged in the very human activity of breaking for lunch, but forgot to turn off the experiment during their absence.  During that short lunch break, the machine taught itself strategy.  The results became a rare WTF moment among Silicon Valley insiders: the neural network had figured out how to build a tunnel through the wall and was bouncing the ball off the ceiling in order to maximize the score.

Similarly, they let a DeepMind avatar teach itself to virtually walk through an artificial landscape strewn with myriad obstacles; the results are all at once funny and awe-inspiring.

Individually taken, these examples of applied AI research are far from super-intelligent.  However, they each represent a concrete step forward to identify, codify and assemble the building blocks of human intelligence.

Practical Use Case for AI in Medical Imaging

Until super-intelligence is upon us, Dicom Systems is setting its sights on more practical goals: feeding usable raw materials to neural networks.  By “usable,” we mean that the raw materials fed into the algorithms can actually incrementally “teach” the algorithm.  In our field of specialty, Enterprise Imaging, raw material for machine learning is medical images.

We at Dicom Systems have been kicking the tires on machine learning for some time. Specifically, our work has involved the large-scale de-identification of DICOM images intended to support diagnosticians in plowing through their ever-increasing workload.

For this new mission, Dicom Systems is thrilled to partner with Google Cloud Platform (GCP) and their visionary team. Our collaboration with Google ultimately aims to create a deep and broad data lake made up of de-identified images for medical research organizations around the world to query and exploit.

Dicom Systems Unifier: De-Identification Of Patient Images For Machine Learning

In strict compliance with HIPAA and Safe Harbor provisions, Dicom Systems Unifier appliances enable the streaming of de-identified images from a multitude of sources into the data lake. However, that’s just the beginning. De-identified images constitute but one of the key enablers necessary for machine learning to take place. Taken alone, de-identified images are only marginally valuable, like unrefined petroleum.
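To make the idea of Safe Harbor-style de-identification concrete, here is a minimal sketch in Python. It is not the Unifier implementation: the tag list is an illustrative subset of the identifiers a real profile covers, and a plain dictionary stands in for the DICOM header that a production pipeline would handle with a full parser.

```python
# Sketch: strip patient-identifying attributes from DICOM-style metadata.
# PHI_TAGS is a simplified, illustrative subset of the identifiers a real
# Safe Harbor de-identification profile would cover.

PHI_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "OtherPatientIDs", "InstitutionName", "ReferringPhysicianName",
    "AccessionNumber",
}

def deidentify(header: dict) -> dict:
    """Return a copy of the header with PHI blanked out.

    Identifying values are replaced with empty strings rather than
    deleted, so downstream tools still find the expected attributes.
    """
    return {tag: ("" if tag in PHI_TAGS else value)
            for tag, value in header.items()}

study = {
    "PatientName": "DOE^JANE",
    "PatientID": "123456",
    "Modality": "CT",
    "StudyDescription": "CHEST W/O CONTRAST",
}
anon = deidentify(study)
print(anon["Modality"], repr(anon["PatientName"]))
```

The point of the sketch is the asymmetry it preserves: clinical context (modality, study description, the pixel data itself) survives, while identity does not. That is exactly what makes the resulting images usable as machine learning raw material.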

Image Labeling Is Crucial To AI Efforts In Medical Imaging

In effect, our collaboration with Google is a process comparable to drilling for crude oil.  The data in the GCP data lake still must be refined into usable fuel before it can effectively power machine learning.  The crucial element, or enabler, necessary to unlock AI potential is labeling.

Labeling de-identified images represents a unique challenge not only because there is so much work to be done (petabytes of images to label), but primarily because it is a value-added activity that can legally and credibly be performed only by qualified physicians.

Radiologists are well-compensated for their reading services and diagnostic expertise in the normal course of business, and typically don’t have a lot of spare time to moonlight for free as image labelers for machine learning.  Additionally, many physicians remain deeply suspicious of the impact AI could potentially have on their future, so helping to train neural networks does not rank very high on their list of priorities.

Early Challenges In Image Labeling

A few visionary healthcare AI research organizations understood this challenge early on and secured the services of a few luminaries in exchange for equity in their burgeoning startups, a different type of compensation than radiologists typically earn for their expertise. Only so many startups can ultimately get away with this model. Labeling de-identified images is a tedious process that requires a considerable effort by many, not few, contributors.

This equity compensation model ultimately isn’t sustainable if entrusted to a limited group of fully vested physician shareholders.  At first, it may be exciting for a physician to anticipate a financial exit when the startup is either acquired or goes public, especially in a climate of declining reimbursements, and when AI is on the rise.  Many physicians are looking for a way to shore up their future income possibilities.  However, no matter how glamorous or lucrative AI will be for these few, there’s still a lot of work to do and not enough labor available to get the job done.

Additionally, this approach also raises the problem of built-in bias in training of machine learning algorithms.  If a handful of physicians are tasked with labeling images for algorithms, then these algorithms will be inexorably limited by the biased diagnostic interpretations of the same handful of diagnosticians.  The topic of human bias in machine learning will be the central subject of our next AI-focused blog post – stay tuned!

A Call For Crowdsourcing and Collaboration

Our belief is that we need to crowdsource the very thing that we often hold guarded: ideas and data.  While it’s not the most intuitive concept in an industry that is fiercely protective of its intellectual property, virtual teams of physicians around the world should be encouraged to contribute their expertise a few images at a time.  Practicing physicians everywhere, as well as those still in school and residency, can be tapped to advance the state of machine learning.

Resources

Medical Image Labeling and Annotation Tools

Since the original publication date, we have seen rapid development in the image labeling space. Today, in 2020, medical image annotation is largely handled by platforms such as Arterys. These platforms specialize in image analysis and tagging, completing tasks that seemed arduous in 2017 with improved speed and accuracy. Here is a helpful roundup of top medical annotation tools currently on the market, leveraging methods such as pattern recognition, image mining, 3D visualization and more.

Google: https://ai.googleblog.com/2019/12/developing-deep-learning-models-for.html

In an April 2017 New Yorker article titled “A.I. versus M.D.,” Geoffrey Hinton said that radiologists may soon be experiencing their own Wile E. Coyote moment. “You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath. It’s just completely obvious that in five years deep learning is going to do better than radiologists.” Hinton added, “I said this at a hospital. It did not go down too well.” https://www.newyorker.com/magazine/2017/04/03/ai-versus-md