Healthcare AI Marketplaces: Three Radiologists Walk Into a Bar…

“Hey! She’s the fastest ship in the galaxy!…” boasts Han Solo, defending the virtues of the Millennium Falcon to Ben and Luke, as they contemplate the ship with some doubt.
In many ways, AI marketplaces remind me of the Mos Eisley cantina, the most famous dive bar in the Star Wars universe. You don’t know what or who you will encounter. Ben Kenobi and Luke Skywalker were looking to hitch a ride, so they traveled to the cantina with purpose: to find someone who had a transport, and the willingness to fly them to their destination. They found Han Solo but hiring him came with its own set of challenges and baggage, as well as an opportunity cost. But if you don’t know what you’re looking for when stepping in, or if you’re lacking purpose and just browsing what’s being offered, then any answer may be the right one.
Could they have found a different ride in a different dive bar? Most likely, yes. Was the Millennium Falcon the best option? Perhaps the only option at the time? Clearly, Han Solo was the right guy for the Star Wars storyline.

Interior set of the Mos Eisley Cantina with cast of aliens, as seen in Star Wars (1977) Source: Star Wars

Similarly, how can we ensure that we are stepping into the right AI marketplace to look for the right algorithm(s)? The dive bar analogy rings true because choosing AI is a matter of trust. Physicians, provider organizations, and IT and InfoSec professionals must collectively feel that when they choose an AI algorithm for a specific purpose, they are buying the right solution from the right vendor; after all, patient safety is at stake. When you step into a dive bar, you simply cannot assume that anyone there should be trusted until they prove themselves trustworthy.
Note: the dive bar analogy is not entirely fair to AI marketplaces, as they are typically run by well-intentioned professionals with solid competence and a desire to bring value to the market segment. However, the analogy holds true when it comes to the AI they propose to sell through their marketplaces. In their haste to gain a leading position in the most profound technological revolution since the advent of the Internet, AI marketplaces are misguided in their approach and execution.
In the Internet age, there is rarely room for more than two or three dominant brands that become top of mind for consumers and end up controlling the vast majority of a segment’s activity. Google won the search engine wars, and probably the browser wars too (remember Marc Andreessen’s Netscape?). Only a minority of users rely on other search engines or browsers.

Netscape Navigator 1.0N Browser for Mac (1994) Source: Version Museum

The Evolution of Online Marketplaces: Survival Of The Simplest

The first-mover advantage has proven elusive for many in Silicon Valley. eBay easily won the online auction wars while Yahoo’s auctions offering withered away. Uber and Lyft invented and continue to dominate the ride-sharing market, Airbnb owns short-term room rentals, and OpenTable has a virtual lock on restaurant reservations. In the world of social media, Facebook obliterated first-mover MySpace and went on to monopolize much of the planet’s social media activity, including through its acquisitions of WhatsApp and Instagram. TikTok is the clear dominant player in short videos, much to the chagrin of Facebook/Meta. Lime, Bird, and a handful of other companies are making it easy to zip around urban areas on shared electric scooters.

All of these market segments have one key attribute in common: homogeneity. They’re all offering a simple, innovative, and convenient way to solve clearly defined problems. Google solves search; eBay solves auctions; Uber and Lyft solve ride sharing; Lime, Bird, Tier, and Dott solve easy scooter sharing. One way to determine how successful a company has become is whether its brand name becomes a verb in everyday language, for instance, “I just googled it” or “Did you uber to the concert?”

Healthcare AI Marketplaces Are Complex

In healthcare, AI marketplaces such as SymphonyAI and Blackford are attempting to position themselves as the top brand destinations for finding and using AI algorithms for medical imaging. It’s easy to see where the attraction lies, and why these AI marketplaces would attempt to do so: whenever a ‘gold rush’ of any kind takes place, there are usually early entrants who work hard to achieve rapid dominance over the segment and ride the wave to a massive IPO.

Panning for gold in California, 1850. Source: Wikipedia

On a smaller scale, and more relevant to healthcare, the same phenomenon occurred in the early days of Cloud-based image sharing, when Life Image, eMix, and SeeMyRadiology vied for the attention of hospital administrators everywhere, along with a handful of other more obscure names. Life Image was not first to market, but it achieved early market dominance thanks to raising a substantial amount of venture capital early on, staffing up, and quickly expanding its offering to a massive global community of hospitals. (Note: SeeMyRadiology was acquired by Nuance and ultimately became PowerShare.)
Market dominance by one or a few well-known brands was logical even in the medical image sharing segment because it’s a homogeneous, understandable concept that could be easily commoditized in the minds of the target audience.

AI Marketplaces For Radiology Require Vetting and Validation

Now, let’s discuss AI marketplaces and Radiology. There is a big distinction between browser battles and AI marketplaces competing for the attention of Radiologists and the medical imaging community. The advertised value proposition of AI marketplaces is far more complex, and therefore much more challenging to commoditize and distribute in a “portal” approach. The number of variables is staggering when evaluating and navigating AI marketplaces.
First, in order to bring real value to the table, an AI marketplace cannot simply accept any and all algorithms onto its platform. There has to be clinical validation that an algorithm actually adds value to a physician’s workflow without adding undue patient safety risk. Thousands of new algorithms are constantly being developed and validated, as the FDA continues to modernize and streamline its 510(k) clearance processes to accommodate this rapidly emerging segment.
While it may be impressive to have thousands of AI brands associated with a marketplace, it’s better to have 50 well-vetted, curated, and embedded algorithms that add value to a Radiologist’s workflow. While the value of a marketplace would typically increase by virtue of the sheer number of SKUs available to choose from, the opposite is true in Radiology, where less is more. Medical imaging professionals are extremely busy individuals who don’t want to sift through a pile of possible algorithms with the hope of finding something valuable, like someone might browse through Amazon for a new electric toothbrush. They typically know what constitutes a diagnostic or workflow pain point and want a specific answer.
In this labyrinthine AI market, physicians are expected to learn and remember which AI brand delivers what value. Our job as vendors is to make things simpler for clinicians, not more complicated. It’s less about the brand of AI than about its ability to weave itself discreetly into clinical workflows without becoming the main attraction.

The AI 100 is CB Insights’ annual list of the 100 most promising private AI companies in the world. Source: CB Insights

Compounding the issue is that a particular algorithm may be the best today at detecting a specific condition in a medical image, but does that mean it holds the top spot forever? What if an AI development team out of Taiwan or South Africa comes up with an algorithm that is 10% more efficient? What does the marketplace do (if and when it becomes aware of it)? Does it dump the first one and stop servicing it in favor of the new one? That’s not possible, because customers are using the original algorithm in production. So now, theoretically, the same marketplace would need to accommodate two or more competing algorithms doing the same thing. Multiply this issue by the number of algorithms you’ve already curated, and you’ve bought yourself a full-time job for a long, long time.
And what if a hospital system has chosen to work with Marketplace A, but the algorithms they really want are signing up with Marketplace B or C? Is the hospital system going to become a customer of three parallel marketplaces? Considering how overworked IT and Infosec personnel are, it’s highly unlikely they will go through IT and security vetting processes for multiple marketplaces. Frankly, the exercise sounds exhausting, and not even for a finite, definitive result: it is within the DNA of this beast that new and more intelligent AI is constantly being born.

Can Curation Solve The Healthcare AI Marketplace Challenge?

A curated offering provides only a partial solution. It acknowledges that there are too many market entrants and that physicians would be helped by having someone (a marketplace) curate the content so they don’t have to. However, this also means that Radiologists (and other physicians) must place their clinical trust in those doing the curating for the marketplaces. They must trust them to know whether or not an algorithm would improve their workflow. Some physicians oppose the notion that marketplace vendors should choose algorithms on their behalf and decide which algorithms make it onto a marketplace, finding this method of selection highly presumptuous.
Anyone who has spent time in a dark reading room looking over a Radiologist’s shoulder knows that physicians don’t like being told by non-physicians what it’s like to read cases; only Radiologists who have logged countless hours of reading truly know. For the curating to be trustworthy, it should be done by practicing physicians or by individuals with demonstrable knowledge and experience of live clinical practice.
Marketplaces can be a positive factor in that they can do the busy work of aggregating pre-vetted content for review by diagnosticians. Marketplaces can also add value by hosting the algorithms in a Cloud infrastructure to make them available for consumption in clinical workflows, which also comes with another set of challenges.

Technical Considerations for Healthcare AI

We will now examine the unique technical “gotchas” associated with live clinical implementation of AI in existing workflows, especially when the AI is hosted in a Cloud versus on premises. The value must go far beyond the mere curation and hosting of AI. Placing an algorithm into orbit in an existing, live clinical workflow is no small undertaking. Bear in mind that if we ask physicians and other clinical staff to change their behavior in order to accommodate new AI-enabled workflows, the new solutions have a very low likelihood of successful adoption.
Let’s go through a simulation of how such a vetting process might go for each algorithm before it makes it into a marketplace (for the sake of this exercise, we are presuming the algorithm is highly efficient and valid):
  1. Presuming that the algorithm is a legitimate, clinically useful solution, one of the first tasks in the vetting process is to determine whether it was trained using actual DICOM images or JPEG images. The distinction is important because DICOM images carry a rich metadata layer that can be useful in the training process, whereas JPEG images offer no such metadata; they are strictly pixels.
  2. Is the algorithm validated and cleared by the FDA for market release (510(k))?
  3. If the AI developers were educated about DICOM and use it to its full extent, the next element to verify is whether the algorithm is available strictly as a Cloud offering (SaaS). As a matter of corporate policy and information security, some hospital systems don’t use the Cloud as part of their IT infrastructure, although this is rapidly evolving.
  4. How is the connectivity established between the provider’s infrastructure and the algorithm? Is it via VPN? HTTPS? TLS? Is the algorithm capable of communicating via DICOM or DICOMweb? How is the algorithm expecting to receive the input? Is it a push process from PACS/MIMPS? A pull process by the algorithm? How does it know what exams should be handled by a specific algorithm? Is that determination made by the modality? By the PACS/VNA? If there are 50 algorithms available, how do you make sure the right algorithm is conjured up for the right study?
  5. Does the algorithm’s analysis take place in line with PACS/MIMPS workflows, or in parallel? Will the analysis potentially create additional latency in the diagnostics process? Cloud infrastructures these days can offer as high a performance as on-prem infrastructures, if not better. However, the workflow design and the performance should always be benchmarked and validated.
  6. Is the algorithm expecting to receive relevant prior images to also analyze? If so, how is the priors pulling determination made? Is it automated? Manual? The same issue of latency can become problematic if prior images are creating a domino effect in the process, causing the diagnostician to get further and further behind in their TAT (Turnaround Time).
  7. Is the algorithm also performing NLP analysis on unstructured text in reports? How are the reports retrieved? How are they de-identified? How are they transmitted? FHIR? HL7? The answers to some of these questions will determine whether a VPN needs to be established, or whether HTTPS/TLS can be leveraged to communicate with the Cloud-hosted AI. RESTful APIs (including FHIR) run natively over HTTPS/TLS, whereas traditional HL7 v2 interfaces (MLLP) typically require a VPN or a dedicated TLS-wrapped socket.
  8. If the algorithm is available only as a SaaS subscription, another important consideration is whether or not the vendor has provisioned for HIPAA-compliant de-identification of the images, both at the metadata layer and at the pixel level. HIPAA violations can become cost-prohibitive, and few underwriters still offer E&O policies for healthcare-focused software vendors. Is there a one-size-fits-all de-ID engine for all the algorithms in the marketplace? Does each algorithm require its own specific de-ID in order to function properly? Not all algorithms use the same DICOM tags to do their jobs, which suggests that de-ID is also a complex layer that cannot be addressed with a cookie-cutter solution. To make matters worse, some hospitals may require de-ID to take place BEFORE the images are sent to the Cloud, which means the Cloud-based marketplace must also provide a local gateway capable of complex tag-morphing.
  9. Assuming the images have been de-identified properly and the algorithm does its job correctly, the next question is about result formatting. What format did the algorithm creator choose for the output of their AI? Is it an HL7 message? A blob of plain text? A FHIR result? A DICOM SR (Structured Report)? An image overlay/presentation state? The answers to these questions will determine how, and how easily, results can be delivered and presented to the diagnostician in their chosen viewing environment.
  10. Last but not least: results delivery. How does the algorithm output find its way to the field of view and acknowledgement by a physician? Is it a flashing icon on the worklist? Is it a reprioritization of the worklist? What if a finding is a critical result? Is it a change in color (and if so, what color)? Radiologists typically prefer to re-prioritize their own worklists, so the manifestation of an AI result is that much more important.
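The routing question in step 4 (“how do you make sure the right algorithm is conjured up for the right study?”) is usually answered with rule-based matching on DICOM header fields. Here is a minimal sketch in Python; the algorithm names and matching rules are hypothetical, and a real deployment would typically also key on StudyDescription, procedure codes, and site-specific conventions:

```python
# Hypothetical routing table keyed on DICOM Modality and BodyPartExamined.
# The rules and algorithm names below are illustrative assumptions only.
ROUTING_RULES = [
    {"modality": "CT", "body_part": "CHEST", "algorithm": "lung-nodule-detect"},
    {"modality": "CR", "body_part": "CHEST", "algorithm": "pneumothorax-triage"},
    {"modality": "MR", "body_part": "BRAIN", "algorithm": "stroke-detect"},
]

def route_study(modality: str, body_part: str) -> list[str]:
    """Return the algorithms that should receive this study."""
    return [
        rule["algorithm"]
        for rule in ROUTING_RULES
        if rule["modality"] == modality and rule["body_part"] == body_part.upper()
    ]

print(route_study("CT", "chest"))  # matches the CT chest rule
print(route_study("US", "ABDOMEN"))  # no rule matches: no algorithm invoked
```

Even this toy version makes the maintenance burden visible: every new algorithm, and every new version of an existing algorithm, means another set of rules that someone has to curate and test.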
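The tag-morphing concern in step 8 can also be sketched in a few lines. The snippet below performs a minimal de-identification pass over a DICOM header represented as a plain dict; the tag lists and replacement policy are illustrative assumptions, not a complete HIPAA or DICOM PS3.15 confidentiality profile, and a real implementation would operate on parsed DICOM datasets (e.g. with pydicom) and also handle burned-in pixel PHI:

```python
# Illustrative (deliberately incomplete) sets of PHI-bearing DICOM tags.
REMOVE = {"PatientName", "PatientBirthDate", "PatientAddress", "OtherPatientIDs"}
REPLACE = {"PatientID": "ANON", "AccessionNumber": "ANON"}

def deidentify(header: dict) -> dict:
    """Return a copy of the header with PHI tags dropped or replaced."""
    clean = {}
    for tag, value in header.items():
        if tag in REMOVE:
            continue                          # drop the tag entirely
        clean[tag] = REPLACE.get(tag, value)  # replace PHI, keep the rest
    return clean

header = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "CT",
    "StudyDescription": "CHEST W/O CONTRAST",
}
# Clinically useful tags survive; identifiers are gone or anonymized.
print(deidentify(header))
```

Note how the “each algorithm may need its own de-ID” problem shows up immediately: if one algorithm relies on PatientBirthDate to compute age, this one-size-fits-all pass breaks it, which is exactly why a cookie-cutter de-ID engine falls short.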
If the AI marketplaces and the AI vendors they aggregate have considered all these pesky little details, then marketplaces can be a successful approach for AI distribution. Short of a solid answer for all of these elements, they will ultimately fail. I cannot emphasize enough how important it is that AI vendors adhere to the industry standards adopted by the medical imaging ecosystem: DICOM, HL7, FHIR, DICOMweb, and standard RESTful APIs are usually preferred in order to make integration more straightforward.
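To make the standards point concrete, here is a minimal, hypothetical result payload shaped as a FHIR DiagnosticReport, one of the standard output formats mentioned in step 9. The field values and function name are illustrative; a production resource would also carry the subject, an ImagingStudy reference, coded observations, and more:

```python
import json

def ai_result_to_fhir(finding: str, probability: float) -> dict:
    """Wrap a hypothetical AI finding in a minimal FHIR DiagnosticReport.

    This is a sketch: only the skeleton of the resource is shown here.
    """
    return {
        "resourceType": "DiagnosticReport",
        "status": "preliminary",  # AI output pending physician review
        "code": {"text": "AI imaging analysis"},
        "conclusion": f"{finding} (confidence {probability:.0%})",
    }

report = ai_result_to_fhir("Suspected pneumothorax", 0.87)
print(json.dumps(report, indent=2))
```

A marketplace that receives results in a standard shape like this can deliver them to any FHIR-capable viewer or worklist; one that receives a blob of plain text has to build bespoke plumbing per algorithm.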

AI Marketplaces Are Fundamentally Flawed

As you may have surmised from the list above, AI marketplaces can hardly function as effective distributors of commodities because they aren’t dealing in commodities. Each of the questions above points to AI as being a highly complex proposition, from conception to operationalization. In this context, a marketplace is essentially a value-added reseller (VAR), but the real value to be unlocked is in the AI.
This is why I state that AI marketplaces are fundamentally flawed. Because of the myriad possible permutations among the elements presented above, what healthcare providers need is not a marketplace, or even several marketplaces. What they need is mass customization: a simple way to plug validated new AI into their existing workflows and adapt those workflows accordingly. This requires 100% cooperation, collaboration, and interoperability with modality and PACS/VNA vendors, whose software physicians already use in their everyday workflows.

If not marketplaces, then what?

As is often the case, the past is a good predictor of the future as we attempt to imagine and project the future of AI in imaging.
Image enhancement of low-field MRI images offers a parallel. In the early 1990s, the burgeoning field of open MRI was suffering from a serious problem of low signal-to-noise ratio, to the point where radiologists couldn’t reasonably read images that were far too noisy to be clinically viable.
To solve this problem, a short-lived new market segment emerged: post-processing open MRI image enhancement. By placing image enhancement algorithms into the workflow after the images were captured, the post-processed images would become smoothed and sharpened enough to be read by a physician.
For a few years, this niche market did quite well, until open MRI modality vendors such as Hitachi and Toshiba caught on to the fact that image enhancement should be an integral part of generating images, rather than something other vendors post-process and deliver on their behalf. Image enhancement became embedded in open MRI modalities, essentially killing the image-enhancement bubble.
The future of AI points to the gradual embedding of algorithms into modalities, viewers, and diagnostic workflows via PACS/MIMPS. In the analogy above, the current marketplace model amounts to the image post-processing effort, when our collective energy should be directed at integrating and embedding AI within existing workflows.
What this means is that imaging AI will most likely undergo a substantial M&A-driven consolidation, whereby modality and PACS/MIMPS vendors acquire the algorithms with the most clinical relevance, and the AI gradually disappears deeper and deeper into their respective workflows, quietly delivering value with little regard for brand recognition. Ultimately, physicians don’t really care which brand of AI is helping them increase productivity or lower error rates. What they do care about is that the AI is not detrimental to patient safety or disruptive to diagnostic workflows.
The last thing our industry needs is yet another new market segment. AI should be an integral part of workflows, working discreetly in the background, not front and center like walking into Times Square in New York and being inundated with advertising (however enticing at times). We don’t need Radiologists asking themselves, “Wait, what did I come here for?” and realizing, after time has been spent implementing and deploying (insert big-brand AI algorithm company name here)’s application, that it wasn’t what was needed at all… but the logo sure was shiny (wink).