AI Series Part IV: Validating AI in Medical Imaging: Clinical Pull vs. Engineering Push

by Florent Saint-Clair, EVP, Dicom Systems, Inc. and Alan Pitt, M.D., Neuroradiologist at the Barrow Neurological Institute, Phoenix, AZ

As predicted, RSNA 2017 was essentially painted “AI.” The noise level was deafening. Tuning out the AI messaging was like trying to ignore Hurricane Maria passing over Puerto Rico.

We originally set out to publish a three-part blog series dedicated to Artificial Intelligence in Medical Imaging. Now that the three articles have been published, it’s evident that moving on to another topic would be premature. AI is an adventure that has barely begun. Our previous discussions have sparked many intense new debates, the most heated of which concerns the commercialization of AI in medicine: Is it too early to try to make money on artificial intelligence in healthcare?

We’ve recently witnessed some spectacular failures in AI commercialization attempts. At this past RSNA we saw some rather naive attempts at various pricing models, tantamount to throwing mud at the wall to see what sticks: essentially an admission that technology vendors have no idea how they are going to reliably monetize their AI efforts.

A Product Without Demand

The problem with trying to make money from AI is not a complex one to diagnose; the law of supply and demand suffices to explain it. AI vendors have been furiously developing algorithms that are fighting for relevance and applicability in clinical workflows, and for a chance to generate a return for those who have invested heavily in AI research. The number of medical tech startups working on AI around the globe is staggering. This surplus of AI intellectual property applies downward pressure on prices and prematurely commoditizes the technology.

It comes down to one question: Who needs it? Were the usability and desirability of the algorithm validated by potential end users, namely clinicians, before development began? In many cases, an algorithm is the subject of a Ph.D. candidate’s thesis, or a student engineer’s interest or passion. For this reason, we characterize such intellectual property as a “push” technology: it definitely holds some intellectual value, but not necessarily clinical relevance or commercial value. Without proven clinical relevance and a strong physician champion’s endorsement, an algorithm has a low probability of being adopted clinically.

The Long Road to Implementation

Additionally, the process of operationalizing an algorithm in live clinical workflows isn’t as simple as finding a paying customer for it. In many ways, it’s no different from implementing a new PACS or any other clinical system. Even when clinical validity is confirmed, physician champions within hospitals face an uphill battle to make their desired solution a priority for their department.

Hospital CIOs and CISOs have deliberate and complex protocols in place to ensure the integrity and security of patient healthcare data; each new solution introduces the possibility of unforeseen vulnerabilities. As a result, there is only a finite number of solutions that hospital IT can fit into its deployment objectives each fiscal year, constrained not only by clinical validation but also by the availability of IT project management, resource provisioning, security scanning, testing, and, finally, production rollout.

Because of these operational realities, it is naive to believe that healthcare is in a position to massively embrace and absorb the tsunami of AI currently threatening to submerge the industry. As they conceive new solutions, it would behoove technology vendors to acquire a deep understanding of the hoops physicians must jump through to get a new technology adopted and funded.

Artificial Intelligence Funded by Real Money

As a software company supporting mission-critical healthcare workflows, we routinely need to assess the relevance and marketability of a new feature before we allocate development resources in any particular direction. This is sound practice for any business. As entrepreneurs who have chosen to fly with our own wings and not seek outside capital, we have a finite amount of money and time to invest in new features and inventions. By contrast, flush with VC money, many healthcare tech startups are compromising sound business practices for a chip in the game of the century: Artificial Intelligence.

The AI bubble has grown to disquieting proportions, risking a fate similar to that of the dotcom bubble of the late 1990s and early 2000s. It wasn’t that long ago that venture capital groups, for fear of missing out, dangerously lowered the bar in their due-diligence practices and indiscriminately poured billions into pre-revenue internet startups that barely had an articulated business model. When financial markets began to emerge from their dotcom hangover, investors returned to a more conservative approach, and strict due-diligence standards were reinstated.

Our contention is that sound business practices and pioneering AI endeavors are not mutually exclusive. If successful commercialization is the objective, the perspective in healthcare AI needs to shift from an engineering push to a clinical pull. Let the customers tell us what they really need, rather than letting engineers run away with ideas they hope physicians will embrace unconditionally. Hope is not a good business strategy.

Strategic AI Recommendations

Short of pure research, in which the expected return is admittedly not of an immediate financial nature, several key considerations need to be factored into the equation before resources are poured into developing an algorithm:
  1. Identify the problem. What specific problem is the algorithm intended to solve, and how exactly would it help physicians improve the quality of patient care?
  2. Validate the problem. Is the identified problem real? Has it been validated by many physicians as a real problem, or is it a pet project for a handful of entrepreneurial physicians?
  3. Validate the team’s competence to solve the problem. Is there a high likelihood (95% or more) that the algorithm’s creators will effectively address the stated problem?
  4. Validate marketability. Will someone agree to pay money for the algorithm? Are caregiver organizations willing to pay for its use in clinical settings, and how much? Whose budget will pay for the technology: radiology, IT, cardiology, the ER?
  5. Is the success scalable and replicable? What is the deployment methodology for implementing and scaling the technology? Will physicians use the algorithm from within their PACS user interface? Is deployment contingent upon other health IT vendors’ willingness to “play ball,” or can the technology be deployed on a standalone basis (see the sketch below)? Will delivery be on-premises, cloud-based, or hybrid?
There is a crucial common denominator inherent in each of the five elements above: Physicians.
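
To make the deployment question in item 5 concrete, here is a minimal sketch of the standalone, cloud-based pattern: a script submits a DICOM instance to an AI inference endpoint over REST and retrieves the findings. The endpoint URL, API key, and response format are hypothetical placeholders rather than any specific vendor’s API; a PACS-embedded deployment would instead surface results inside the radiologist’s existing worklist.

```python
import requests

# Hypothetical cloud inference endpoint and API key -- placeholders only,
# not a real vendor API. A standalone deployment like this avoids depending
# on other health IT vendors' willingness to integrate.
INFERENCE_URL = "https://ai-vendor.example.com/v1/studies/analyze"
API_KEY = "replace-with-your-key"

def analyze_study(dicom_path: str) -> dict:
    """Submit one DICOM file for analysis and return the JSON findings."""
    with open(dicom_path, "rb") as f:
        resp = requests.post(
            INFERENCE_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"dicom": f},
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    findings = analyze_study("chest_ct_slice.dcm")
    print(findings)
```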

Without clinically relevant input, developing new algorithms could be an exercise in futility. Once physicians get past their skepticism toward AI and recognize its potential benefits in their clinical work, they will become an invaluable source of economic validation.

Conclusion: Add Value, Not Experiments

As software vendors, our stance must be that the market, in this case diagnosticians, should generate the ideas that identify the algorithms that add value to their workflows. Their input would come in the form of a “shopping list” of algorithms they would pay to use, along with what each is worth to them.

Last but not least, physician input should be organized. If we were to loosely survey physicians across all specialties to ask them what algorithms they want, the input would have too much variability to be commercially useful.

Each subspecialty in medicine has a professional society associated with it. For radiology it’s the ACR; for cardiology it’s the ACC. It is incumbent upon each of these professional organizations to become the focal point of input for its physician members. Physician input being as crucial as it is, each professional society should make a deliberate and concerted effort to produce its own list of the “Top 10 desired algorithms.”

Only with this type of structured, validated, and actionable input from physicians should developers of AI roll up their sleeves.

How To Get Involved in AI Today

There is nothing wrong with feeling like you’re late to the game; many players in healthcare AI were too early. We are only at the beginning of developing AI for clinical applications, and collecting data from a wide range of sources enables the healthcare community to develop algorithms with less built-in bias. Google Cloud Platform has been a tremendous accelerator for our endeavors in this exciting new field.

If your organization is contemplating involvement in AI pursuits, specifically in the field of medical imaging, contact Dicom Systems at info@dcmsys.com, or call us directly at +1 (415) 684-8790.

AI Conductor

Unifier with AI Conductor for PACS and EHR drives and conducts AI workflows to get the right information to the right location at the right time and in the right format.
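
Purely as an illustration of what such orchestration involves, here is a minimal sketch of rules-based routing, written with the open-source pydicom and pynetdicom libraries. The routing table, hosts, and AE titles are hypothetical placeholders; this is not Dicom Systems’ actual implementation.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage, MRImageStorage

# Hypothetical routing table: modality -> (host, port, AE title) of the
# AI node that should receive the study. All values are placeholders.
ROUTES = {
    "CT": ("ct-ai.example.org", 11112, "CT_AI_NODE"),
    "MR": ("mr-ai.example.org", 11112, "MR_AI_NODE"),
}

def route_instance(path: str) -> None:
    """Read one DICOM instance and C-STORE it to the matching AI node."""
    ds = dcmread(path)
    dest = ROUTES.get(ds.get("Modality"))
    if dest is None:
        return  # no AI route defined for this modality
    host, port, aet = dest
    ae = AE(ae_title="AI_CONDUCTOR")
    ae.add_requested_context(CTImageStorage)
    ae.add_requested_context(MRImageStorage)
    assoc = ae.associate(host, port, ae_title=aet)
    if assoc.is_established:
        assoc.send_c_store(ds)  # forward the instance to the AI node
        assoc.release()

route_instance("slice001.dcm")
```

A production conductor would layer queuing, retry logic, de-identification, and audit trails on top of routing rules like these.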
