AI Series Part II: Finally Warming Up To Artificial Intelligence
Functional artificial intelligence, let alone super-intelligence, has yet to be achieved in healthcare. Making claims about a concept that, at this point, exists in the realm of science fiction is a bit of a reach for a profession that must err on the side of conservatism. We can certainly innovate in healthcare, but our core values remain rooted in patient safety above all else.
Top Use Cases For Machine Learning In Healthcare
We’re grateful for publications such as Health IT Analytics, whose editors choose to stay away from hype and objectively label the topic as it should be: machine learning. We’re still learning, and the machines are still learning. The March 2017 Health IT Analytics article presenting four concrete use cases for machine learning in healthcare is particularly relevant.
Nonetheless, in recent months, our team has witnessed a few eye-opening use cases that have softened our stance on AI and led us to cautiously set aside our signature skepticism (for now). There’s greater reward in accelerating this quixotic journey we’re all on, starting with open dialogue.
Embracing Machine Learning Opportunities Amidst The AI Debate
When it comes to making a definitive call on AI, the temperature of the water seems either scalding hot or freezing cold, mixed in with a bit of evangelizing and pontification. Much like the debate surrounding gun control, Artificial Intelligence is a polarizing topic with contrasting points of view. On one end, Elon Musk takes the pessimist’s view that, if unchecked, AI research could lead to an “existential threat” for humans. On the other end of the spectrum, Mark Zuckerberg takes the optimist’s view that we should go full bore into AI, going so far as to build a new dedicated AI lab in Montreal.
These influencers’ diverging views on Artificial Intelligence reveal a deep divide in what we see for humanity’s future. What they do have in common, and we agree, is that they recognize the very real impact and awe-inspiring possibilities of AI, good or bad.
Note: in this blog post we do not speculate on the question of whether or not human intelligence SHOULD be mimicked or replicated, the answer to which belongs in the realm of philosophy (echoes of Dr. Ian Malcolm, Jurassic Park, speaking to John Hammond: “Yes, but John, your scientists were so preoccupied with whether they could, they never stopped to think if they should…”).
Google Cloud Enables Intelligent Healthcare Solutions
Operating at a less extreme point on the spectrum is Google, which has stayed out of the futile, time-consuming public debate and quietly focused on accelerating machine learning. We can identify with the constructive questions Google scientists are asking and the concrete steps they are taking: What constitutes human intelligence? What are the individual building blocks that make up human intelligence? Can they be replicated, or at the very least mimicked?
Last August, our team was in Sunnyvale, CA for a Google Partners Summit, where we learned of some promising applied AI use cases. Google has made substantial strides in identifying the individual building blocks that collectively constitute a rudimentary form of human intelligence: learning as a baby does, by adjusting behavior based on the “get high” feedback loop that fuels our reward system. In other words, making new artificial neural connections.
Examples shared by Google included a simple neural network that taught itself to master Atari Breakout. The research team engaged in the very human activity of breaking for lunch but forgot to turn off the experiment during their absence. Over that short lunch break, the machine taught itself strategy. The results prompted a rare WTF moment among Silicon Valley insiders: the neural network had figured out how to build a tunnel through the wall and was bouncing the ball off the ceiling to maximize the score.
Similarly, they let a DeepMind avatar teach itself to virtually walk through an artificial landscape strewn with myriad obstacles; the results are all at once funny and awe-inspiring.
Practical Use Case for AI in Medical Imaging
Dicom Systems Unifier: De-Identification Of Patient Images For Machine Learning
In strict compliance with HIPAA and its Safe Harbor provisions, Dicom Systems Unifier appliances enable the streaming of de-identified images from a multitude of sources into the data lake. However, that’s just the beginning. De-identified images constitute just one of the key enablers necessary for machine learning to take place. Taken alone, de-identified images are only marginally valuable, like unrefined petroleum.
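To make the de-identification step concrete, here is a minimal sketch of Safe Harbor-style header scrubbing. A plain dict stands in for a DICOM header, and the identifier list is a small illustrative sample, not the full set of HIPAA Safe Harbor identifiers; a production pipeline would operate on real DICOM objects (for example via a toolkit such as pydicom) and follow a complete de-identification profile.

```python
# Illustrative subset of direct identifiers; a real Safe Harbor
# profile covers many more attributes than these.
PHI_KEYWORDS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "InstitutionName", "ReferringPhysicianName",
}

def deidentify(header):
    """Blank direct identifiers; leave clinical metadata untouched."""
    return {k: ("" if k in PHI_KEYWORDS else v) for k, v in header.items()}

header = {"PatientName": "Doe^Jane", "PatientID": "12345", "Modality": "CT"}
print(deidentify(header))  # identifiers blanked, Modality preserved
```

The key design point is that clinically useful metadata (modality, body part, acquisition parameters) survives the scrub, since that context is exactly what makes the de-identified image valuable for training.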
Image Labeling Is Crucial To AI Efforts In Medical Imaging
In effect, our collaboration with Google is a process comparable to drilling for crude oil. The data in the GCP data lake still must be refined into usable fuel before it can effectively power machine learning. The crucial element, or enabler, necessary to unlock AI potential is labeling.
Labeling de-identified images represents a unique challenge not only because there is so much work to be done (petabytes of images to label), but primarily because it is a value-added activity that can legally and credibly be performed only by qualified physicians.
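At its simplest, a labeled example pairs a de-identified study with a physician's finding. The following sketch shows one such record serialized as a JSON line for a training manifest; the field names and values are hypothetical illustrations, not part of any DICOM or FHIR standard.

```python
import json

# Hypothetical label record for one de-identified study; the schema
# is illustrative only.
label = {
    "study_uid": "1.2.840.99999.1.1",   # de-identified study identifier
    "modality": "CR",
    "finding": "pneumothorax",
    "present": True,
    "labeled_by": "board-certified radiologist",
}

# One JSON line per labeled study, appended to a training manifest
line = json.dumps(label)
print(line)
```

Each record is small, but the value lies in volume and in provenance: the `labeled_by` field is what makes the label legally and credibly usable, which is exactly why qualified physicians are the bottleneck.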
Radiologists are well-compensated for their reading services and diagnostic expertise in the normal course of business, and typically don’t have a lot of spare time to moonlight for free as image labelers for machine learning. Additionally, many physicians remain deeply suspicious of the impact AI could potentially have on their future, so helping to train neural networks does not rank very high on their list of priorities.
Early Challenges In Image Labeling
A few visionary healthcare AI research organizations understood this challenge early on and secured the services of a few luminaries in exchange for equity in their burgeoning startups, a different type of compensation than radiologists typically earn for their expertise. Only so many startups, however, can ultimately sustain this model. Labeling de-identified images is a tedious process that requires a considerable effort by many, not few, contributors.
A Call For Crowdsourcing and Collaboration
Our belief is that we need to crowdsource the very things we typically guard most closely: ideas and data. While that’s not the most intuitive concept in an industry fiercely protective of its intellectual property, virtual teams of physicians around the world should be encouraged to contribute their expertise a few images at a time. Practicing physicians everywhere, as well as those still in school and residency, can be tapped to advance the state of machine learning.
Medical Image Labeling and Annotation Tools
Since this post’s original publication, we have seen rapid development in the image labeling space. Today, in 2020, medical image annotation is largely handled by platforms such as Arterys. These platforms specialize in image analysis and tagging, completing tasks that seemed arduous in 2017 with improved speed and accuracy. Here is a helpful roundup of top medical annotation tools currently on the market, leveraging methods such as pattern recognition, image mining, 3D visualization and more.
In an April 2017 New Yorker article titled “A.I. versus M.D.,” Geoffrey Hinton said that radiologists may soon be experiencing their own Wile E. Coyote moment. “You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath. It’s just completely obvious that in five years deep learning is going to do better than radiologists.” Hinton added, “I said this at a hospital. It did not go down too well.”