“With the medical imaging AI market still maturing, suggestions of autonomous AI may be a leap of faith radiologists are yet unwilling to take.”
Dr Sanjay Parekh
Lithuanian AI developer Oxipit recently announced that it had secured CE Class IIb certification for its ChestLink autonomous AI solution. According to the vendor, the tool assesses chest X-rays and reports on those scans showing no abnormalities without any radiologist involvement. The vendor claims it is the first autonomous solution of its kind, in which a diagnostic evaluation is carried out solely by an AI tool.
The Signify View
Medical imaging AI has long promised to help relieve the burden faced by overworked radiologists in understaffed hospital networks. Many of the solutions currently available are contributing towards this goal, but do so by enhancing a radiologist, enabling them to do more in the same amount of time. They might, for instance, incrementally accelerate a radiologist’s reading of X-rays, or automatically quantify features on an MR scan, allowing exams to be interpreted more swiftly. However, while these use cases are beneficial, they are not transformative, speeding up existing processes rather than replacing them, or at least complementing them with something new. Delivering this additional value is what several of the most successful medical imaging AI vendors have done, or are at least attempting to do.
One application for medical imaging AI, arguably the most transformative of all, is autonomous solutions. While not designed to replace radiologists, these tools automate some of the more laborious tasks radiologists undertake, effectively promoting radiologists to a supervisory role and allowing them to focus on more complex and demanding cases. Oxipit claims its ChestLink solution is the first such tool to have received regulatory approval, and therefore the first that can be used commercially in Europe.
The tool is designed to scan chest X-rays and rule out healthy patients, automatically sending results for those that display no indicators of pathology. Any image that the tool does not determine to be healthy is flagged for radiologist review. The tool’s use is focused on primary care, where, according to Oxipit, as many as 80% of X-rays are of healthy patients. By ruling out this proportion of patients, ChestLink has the potential to significantly reduce radiologists’ workloads. With more than 70m chest X-rays undertaken in Western Europe in 2021, according to Signify Research’s Diagnostic Imaging Procedure Volumes Database, any automation could have a dramatic impact.
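As a purely illustrative back-of-envelope sketch of that impact, combining the two figures above (and assuming, simplistically, that the 80% primary-care figure applied across the whole Western European volume):

```python
# Illustrative back-of-envelope estimate using the figures cited above.
# Assumes the ~80% 'healthy' share applies across the full Western European
# volume, which is a simplification for illustration only.

annual_chest_xrays = 70_000_000   # Western Europe, 2021 (Signify volumes database)
healthy_share = 0.80              # share of chest X-rays showing no pathology, per Oxipit

auto_reportable = annual_chest_xrays * healthy_share
print(f"Scans that could in principle be auto-reported: {auto_reportable:,.0f}")
# -> 56,000,000 scans a year that a radiologist would not need to read first
```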
Market Malaise
However, while the potential advantages of such autonomous technology are evident, the case for the market being ready for it is less clear. The adoption of medical imaging AI is still very much in its infancy, with many of the most successful tools, even those used under close physician supervision, augmenting rather than replacing established clinical workflows. Stroke solutions from the likes of Viz.ai and RapidAI, for example, identify stroke patients more quickly so they can be prioritised. However, if the tool fails to detect an occlusion, the patient will not be prioritised, but will instead be left positioned on the worklist as though the tool had not been used at all. The worst case is simply the de facto pathway.
The stakes are much higher for an autonomous AI solution, however. If Oxipit’s ChestLink suite is wrong, and incorrectly clears a patient as healthy, the consequences could be severe. Oxipit has made efforts to assure potential customers of its accuracy, highlighting that, prior to its approval, the tool had been operating in a supervised manner at multiple pilot locations for more than a year, processing more than half a million real-world chest X-rays in the process. What’s more, the vendor boasts of its 99% sensitivity metric and the “zero clinically-relevant” errors made during pilot studies. Outwardly, these claims are impressive, although, as with many young AI start-ups, the lack of published clinical validation makes these figures difficult to assess fully.
Regardless, there are other challenges. A sensitivity of 99% implies a false-negative rate of 1% among abnormal scans, which, on the volume of scans such a tool could be assessing, could be significant. For edge cases and ambiguous scans, radiologists can also draw on other contextual information which may have a bearing on a diagnosis; AI suites don’t have that luxury, potentially impacting some diagnoses. Oxipit will no doubt ensure there are systems in place to address these shortcomings, but without a human’s reactive, situationally aware cognizance, there is still scope for problems given the range of unprecedented and unforeseen issues that could arise. This could be a particular barrier when it comes to taking responsibility for errors. If ChestLink is wrong and a patient is harmed, who is liable? Presumably either the autonomous solution must offer such a saving that expensive lawsuits can be stomached as an operating cost, or Oxipit must be held liable; otherwise providers would be unlikely to risk such litigation for the sake of a tool.
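To put that 1% in context, a purely illustrative sketch follows; the abnormal-scan prevalence and the assumption that the tool is applied across the full volume are illustrative assumptions, not measured performance data:

```python
# Illustrative sketch of what a 99% sensitivity figure could mean at scale.
# All inputs are assumptions for illustration, not measured performance data.

annual_chest_xrays = 70_000_000   # Western Europe, 2021, per the figure cited above
abnormal_share = 0.20             # complement of the ~80% 'healthy' share Oxipit cites
sensitivity = 0.99                # vendor-reported sensitivity to abnormality

abnormal_scans = annual_chest_xrays * abnormal_share
false_negatives = abnormal_scans * (1 - sensitivity)
print(f"Abnormal scans per year: {abnormal_scans:,.0f}")
print(f"Potentially cleared as normal in error: {false_negatives:,.0f}")
# -> roughly 140,000 abnormal scans a year could slip through unreviewed
```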
What is Normal Anyway?
One of the central imperatives in medical imaging AI is the creation of value for radiologists. One of the ways vendors are achieving this is by developing increasingly comprehensive AI tools which ‘solve’ a given modality and body area combination, striving to identify every single finding within it. In the future, this could lead to autonomous tools which mimic a radiologist: assessing a scan for any finding and either reporting that finding or, if none is present, reporting a negative.
In contrast, Oxipit claims its ChestEye solution can identify 75 findings, relating to 90% of potential pathologies, which is technically impressive but still far less thorough than would be required as the basis of an autonomous solution. Oxipit’s approach to ensuring that patients with one of the remaining 10% of pathologies, to which ChestLink is essentially blind, are not given a clean bill of health is effectively to make ‘normal’ a positive diagnosis. Images are, in essence, compared to normal images, meaning that anything which does not conform to the trained pattern of normality is considered abnormal and sent for radiologist review.
This is not necessarily problematic. In practice, what looks ‘normal’ is far vaguer than specific findings, but assuming ChestLink can deal with these vagaries, along with the myriad combinations of patient demographics, modality manufacturers, inconsistencies in patient positioning and so on, it will have value as a rule-out tool. However, this will be its only use, in contrast to more sophisticated comprehensive solutions which offer a more nuanced analysis and could, in future, autonomously make positive diagnoses as well as negative diagnoses based on the absence of findings.
Fighting for Firsts
These are issues that must be considered, both in terms of whether such a tool can be usefully deployed at all, and whether providers will choose to deploy it. The latter question may be complicated further by competition in the space. While Oxipit claims its CE Mark Class IIb approval is the first of its kind for such an autonomous solution, in practice the tool appears functionally similar to Behold.ai’s Red Dot solution. That tool was awarded Class IIa approval in 2020, an approval Behold CEO Simon Rasalingham claimed was a “first in kind”, with the tool being the “first autonomous AI algorithm to rule out normal chest X-rays”.
Regardless, Oxipit’s attainment of European approval is an achievement. With the vendor having last secured funding in a seed round, it is likely to be looking for more funding soon, especially if it is to avoid losing touch with competitors such as Lunit and Annalise.ai, which have raised $137.8m and $117m respectively. Such an achievement, and the publicity it brings, will no doubt help in this regard, so the timing is particularly fortuitous.
The ultimate question, however, is whether providers, radiologists and patients will be willing to hand over control to a computer programme. Interim solutions can be implemented; the UK’s Care Quality Commission, for example, insists that all examinations auto-reported by Behold.ai’s tool must be reviewed by a human within 24 hours. Similar stipulations would likely be attached to the use of ChestLink, with the vendor already insisting that the tool’s deployment begin with a phase devoted to retrospective analysis, followed by a second stage in which the AI shadows radiologists; only then, in the third stage, can autonomous operation begin.
These, and other such solutions to very central challenges, are clumsy and inelegant. They are, however, necessary until more sophisticated answers can be found. These fixes rob autonomous AI of some of the potential it holds, limiting the benefits it can bring providers. Despite this, they mean that the tools can be used in hospitals. This makes Oxipit’s clearance significant, something to be celebrated and a factor that could tip the scales in the fight for VC funding. But it is also clear that ChestLink isn’t a complete answer to all the questions posed by autonomous AI; that transformative leap remains, for the time being, out of reach.
About Signify Premium Insights
This Insight is part of your subscription to Signify Premium Insights – Medical Imaging. This content is only available to individuals with an active account for this paid-for service and is the copyright of Signify Research. Content cannot be shared or distributed to non-subscribers or other third parties without express written consent from Signify Research. To view other recent Premium Insights that are part of the service please click here.