The Next Frontier – Medical Imaging AI in the Age of Edge Computing

Published 09/05/2018

As AI gains traction in the medical imaging market, and as healthcare stakeholders better understand its benefits for clinical diagnosis and more accurate early disease detection, the question is no longer whether AI will be used, but how it will be deployed. AI algorithms will be deployed not only on post-processing workstations and cloud-based platforms, but also in medical imaging hardware itself, enabling data to be processed at source. This shift is being driven by the next major revolution in computing – edge computing. From a technological standpoint, edge computing is being fuelled by the recent affordability of GPUs and the tremendous growth of the GPU industry in applications such as autonomous vehicles, IoT and drones, with the likes of NVIDIA, AMD and Intel having claimed large shares of the GPU market early on.

Medical imaging modality vendors are increasingly integrating GPU capabilities into their image acquisition hardware, with a view to adding AI capabilities as in-house and third-party applications are commercialised. This enables imaging OEMs to position themselves as innovators, with a stronger value proposition for their customers. Although the clinical validation and regulatory approvals needed to support greater adoption of AI in clinical settings are only just beginning to materialise, the road ahead for AI in medical imaging is starting to look clearer. After an initial period of hype, the major medical imaging OEMs are firmly backing AI in a variety of ways, from home-grown solutions to value-adding partnerships and collaboration projects.

From Image Reconstruction to Clinical Decision Support

Medical imaging OEMs are already using GPU computing to accelerate image reconstruction and to maximise visual clarity and fidelity. Moving forward, specific use-cases are likely to include:

  • Reducing the radiation dose for patients by improving the reconstruction of noisy, low-dose CT images (see the sketch after this list).
  • Reducing MRI scan times by reconstructing under-sampled scans, where faster acquisition captures less raw data.
  • Reducing patient call-backs and repeat scans due to poor image quality; for example, AI can detect whether the patient is positioned correctly and whether the technician has selected the correct protocol and scan angle.
  • Reducing the need for additional scans by detecting abnormalities beyond the initial target and optimising the scan for the incidental finding.
  • Reducing intra- and inter-operator variability in image acquisition, particularly for ultrasound.
  • Simplifying the set-up and operation of imaging hardware, particularly in emerging markets where skilled imaging technicians and clinicians are in short supply.
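
To make the first use-case concrete, here is a minimal sketch of a residual CNN denoiser for low-dose CT slices, loosely in the spirit of published networks such as DnCNN; the layer sizes, depth and image dimensions are illustrative assumptions rather than any vendor's actual implementation.

```python
# Minimal sketch of a residual CNN denoiser for low-dose CT slices.
# All layer sizes and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class CTDenoiser(nn.Module):
    def __init__(self, channels=1, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Predict the noise and subtract it from the input.
        return x - self.net(x)

# Toy usage: denoise a batch of 512x512 single-channel slices.
model = CTDenoiser()
noisy = torch.randn(2, 1, 512, 512)   # stand-in for low-dose slices
clean_estimate = model(noisy)
print(clean_estimate.shape)           # torch.Size([2, 1, 512, 512])
```

The residual formulation, in which the network predicts the noise and subtracts it from the input, is generally easier to train than mapping a noisy image directly to its clean counterpart.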

In parallel, developments are underway to embed AI for diagnostic decision support. Initially, one of the most compelling use-cases is expected to be detection algorithms that prioritise cases in the radiologist's worklist based on initial AI findings. For example, a CT scanner with embedded AI to detect stroke could immediately send the scan to the radiology, neurology and interventional teams, reducing the time to diagnosis and treatment decisions for this highly time-sensitive condition. Image quantification tools are also likely to be increasingly embedded in imaging hardware, with ultrasound leading the way. Most of the major ultrasound OEMs offer systems with embedded quantification tools for clinical applications that require numerous measurements, such as cardiovascular and OB/GYN, and tools for a broader range of clinical applications will be added in the coming years.
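
To illustrate what worklist prioritisation could look like in practice, here is a hypothetical sketch; the study fields, confidence threshold and team names are invented for the example and not drawn from any vendor's product.

```python
# Illustrative sketch of AI-driven worklist prioritisation.
# All fields, names and thresholds are hypothetical.
from dataclasses import dataclass, field

STROKE_THRESHOLD = 0.9  # assumed confidence cut-off for urgent routing

@dataclass
class Study:
    patient_id: str
    modality: str
    stroke_probability: float  # output of an embedded detection model
    routed_to: list = field(default_factory=list)

def triage(worklist):
    """Route suspected strokes to the relevant teams, then sort the
    worklist so the most urgent cases appear first."""
    for study in worklist:
        if study.stroke_probability >= STROKE_THRESHOLD:
            study.routed_to = ["radiology", "neurology", "interventional"]
    return sorted(worklist, key=lambda s: s.stroke_probability, reverse=True)

worklist = [
    Study("P001", "CT", 0.12),
    Study("P002", "CT", 0.95),  # suspected stroke: jumps the queue
    Study("P003", "CT", 0.48),
]
for study in triage(worklist):
    print(study.patient_id, study.stroke_probability, study.routed_to)
```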

AI at the Point of Care

The embedded AI revolution is expected to extend to imaging devices with smaller form factors, such as ultraportable handheld ultrasound scanners. Algorithms that simplify both the process of obtaining diagnostic-quality images and their subsequent interpretation will make these devices easier to use and expand the use of ultrasound, particularly in developing countries. For example, AI can guide the user on how to find specific body parts, where best to position the transducer and how to optimise the image quality. Once the image has been obtained, AI can automatically detect, segment and quantify imaging biomarkers, such as ejection fraction, segmental wall motion and strain analysis. Even for experienced users, AI can optimise the image quality for individual patients (e.g. bariatric patients) and provide support on more challenging exams.
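
To give a concrete sense of the quantification involved, ejection fraction is conventionally derived from end-diastolic volume (EDV) and end-systolic volume (ESV) as EF = (EDV − ESV) / EDV × 100. The sketch below assumes binary left-ventricle segmentation masks and simple voxel counting, a deliberate simplification of what a commercial tool would do.

```python
# Minimal sketch of a segmentation-based ejection fraction (EF)
# measurement. Voxel counting from binary masks is a simplification
# of what a commercial quantification tool would do.
import numpy as np

def ventricle_volume(mask, voxel_volume_ml):
    """Volume of a binary left-ventricle segmentation mask, in ml."""
    return mask.sum() * voxel_volume_ml

def ejection_fraction(ed_mask, es_mask, voxel_volume_ml):
    """EF (%) = (EDV - ESV) / EDV * 100, from end-diastolic and
    end-systolic segmentations."""
    edv = ventricle_volume(ed_mask, voxel_volume_ml)
    esv = ventricle_volume(es_mask, voxel_volume_ml)
    return (edv - esv) / edv * 100.0

# Toy example: 3D masks with an assumed 0.001 ml voxel volume.
ed = np.zeros((64, 64, 64), dtype=bool); ed[16:48, 16:48, 16:48] = True
es = np.zeros((64, 64, 64), dtype=bool); es[20:44, 20:44, 20:44] = True
print(f"EF = {ejection_fraction(ed, es, 0.001):.1f}%")  # EF = 57.8%
```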

However, the handheld ultrasound market is still relatively small, with fewer than 10K devices sold last year. The relatively high cost of the devices, averaging around $7,500, is one of the major barriers, and OEMs will need to drive the price below $5,000 for this market to take off. This will likely limit the uptake of AI to some extent, with OEMs initially focusing on navigation and image optimisation tools rather than clinical decision support. The cost of obtaining clinical validation and regulatory clearance for the latter may prove prohibitive, at least in the short term.

At the Edge or in the Cloud?

Over the last decade, we’ve seen medical imaging IT vendors move towards a remote-computing model, either building in-house cloud service capabilities or partnering with leading cloud providers such as AWS, Google and Microsoft to take imaging data to the cloud for storage and manipulation. At this year’s GTC conference, NVIDIA announced its plans to build Clara, which leverages virtualisation technology to give any connected medical imaging device access to graphics-intensive applications. Clara can be thought of as a “virtual upgrade” for existing medical imaging systems that lack edge computing capabilities. NVIDIA describes Clara as a medical imaging supercomputer that can serve many imaging systems simultaneously and in parallel, and that can work with any medical imaging modality, including MRI, CT, X-ray and ultrasound.

However, not all devices can be connected to a high-bandwidth network, and many healthcare providers are reluctant to move imaging data to the cloud due to data security concerns. These constraints mean that edge computing for AI is necessary, but it also differs in applicability and purpose from the high-performance computing and deep learning capabilities that the cloud can provide. Whilst edge computing will enable real-time analytics to optimise image acquisition and provide initial clinical decision support, most post-processing applications are likely to remain locally deployed or cloud-based. The result will be an AND game rather than an OR game: a dual edge-remote computing approach will allow solutions to be built with enhanced flexibility, speed and accuracy.
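
The dual approach can be pictured as a simple routing policy: latency-critical tasks stay on the device, while heavier workloads move off it only when bandwidth and data-governance constraints allow. The task names and the rule below are illustrative assumptions, not a description of any shipping product.

```python
# Conceptual sketch of the dual edge/cloud split described above.
# The task names and routing rule are illustrative assumptions.
LATENCY_CRITICAL = {"acquisition_qc", "scan_triage"}    # run at the edge
HEAVYWEIGHT = {"3d_post_processing", "model_training"}  # run remotely

def route(task, network_available, data_cleared_for_cloud):
    """Decide where a task should execute under the constraints in the
    text: limited bandwidth and data security concerns."""
    if task in LATENCY_CRITICAL:
        return "edge"   # real-time analytics stay on the device
    if task in HEAVYWEIGHT and network_available and data_cleared_for_cloud:
        return "cloud"
    return "local"      # fall back to on-premise deployment

for task in ["acquisition_qc", "3d_post_processing", "model_training"]:
    print(task, "->", route(task, network_available=True,
                            data_cleared_for_cloud=False))
```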

The Start of a Long Journey

Edge computing is still in its infancy in medical imaging, but it offers huge potential for the more widespread uptake of AI. In less than a decade, the computing power of GPUs has grown 20x – representing growth of 1.7x per year and far outstripping Moore’s law. However, the shift to AI at the edge will take time, both in terms of the pace of adoption of edge computing by the imaging modality OEMs and the commercialisation of AI-based applications. Regarding the latter, more large-scale clinical validation is needed to demonstrate the efficacy of AI in routine clinical use. The results from early studies are certainly encouraging, and with several validation studies now under way, we expect to see a growing body of supporting evidence as the year progresses and into 2019. Regulatory approval also remains a challenge, but a flurry of approvals by the US FDA this year suggests the regulators are becoming more familiar and comfortable with AI.
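
As a quick sanity check on the compounding, using only the figures quoted above, 1.7x annual growth reaches a 20x gain in roughly

$$ n = \frac{\ln 20}{\ln 1.7} \approx 5.6 \ \text{years}, $$

comfortably within a decade.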

In the longer term (10+ years), medical imaging systems are envisaged to evolve into AI mini-supercomputers with a host of embedded algorithms. Edge computing could well be the answer to making medical imaging AI ubiquitous.