RSNA: Deep Learning Takes Centre Stage, but Beware the Hype
Published: December 8, 2016
Artificial Intelligence was undoubtedly one of the key themes at this year’s RSNA, featuring prominently on the exhibition floor and in several scientific sessions. At least 20 companies displayed products featuring AI technologies and a handful more used AI as a key part of their marketing messages, even if the use case wasn’t entirely clear.
The use of artificial intelligence in medical imaging is not a new trend. The first generation of computer-assisted detection (CADe) products entered the market in the late 1990s and used machine learning techniques such as shallow neural networks and support vector machines. What’s new is the increasing use of deep learning techniques, and in particular, convolutional neural networks.
With traditional machine learning, the image features are hand-crafted, meaning that the programmer essentially hard-codes the system to look for specific features. This is a time-intensive process that requires extensive clinical domain knowledge. Moreover, the performance of the algorithm is limited by the underlying rules and statistical modelling; hence the high number of false positives generated by early CADe systems.
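To make the contrast concrete, the hand-crafted approach can be sketched in a few lines. This is a purely illustrative toy: the features and thresholds below are invented for the example and bear no relation to any real CADe product.

```python
# Illustrative sketch of a hand-crafted (traditional) classifier.
# Both features and both thresholds are invented for illustration;
# real CADe systems used far more elaborate rules and models.

def extract_features(image):
    """Compute hard-coded features from a 2D image (list of rows)."""
    pixels = [p for row in image for p in row]
    mean_intensity = sum(pixels) / len(pixels)
    # Crude edge measure: sum of absolute horizontal intensity differences.
    edge_strength = sum(
        abs(row[i + 1] - row[i]) for row in image for i in range(len(row) - 1)
    )
    return mean_intensity, edge_strength

def classify(image, intensity_thresh=120, edge_thresh=50):
    """Flag a region as suspicious using fixed, programmer-chosen rules."""
    mean_intensity, edge_strength = extract_features(image)
    return mean_intensity > intensity_thresh and edge_strength > edge_thresh
```

The key limitation the article describes is visible here: every feature and threshold is chosen by the programmer, so performance is capped by how well those hand-coded rules capture clinical reality.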
Deep learning techniques feature much larger neural networks (typically 10 layers or more) and the algorithms are trained using large sets of images. This requires considerably more computational power than traditional machine learning, a demand that has been met in recent years by affordable GPU-accelerated computing, which allows the algorithms to run much faster than on CPUs alone. By feeding the algorithms radiologist-annotated images and a “ground truth”, the system learns the image features automatically, rather than being programmed what to look for. As such, deep learning methods typically produce faster and more accurate results than traditional hand-coded classification techniques.
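The basic building block of a convolutional neural network can also be sketched briefly. The vertical-edge kernel below is hard-coded so the example is self-contained; in a real CNN, filters like this are precisely what training on annotated images learns automatically, layer after layer.

```python
# Minimal sketch of the core CNN operation: a 2D convolution
# (strictly, cross-correlation, as most deep learning libraries use).
# In a trained network the kernel weights are learned from
# radiologist-annotated images; here a Sobel-like vertical-edge
# kernel is hard-coded purely for illustration.

def conv2d(image, kernel):
    """Valid-mode 2D convolution over a 2D list of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(acc)
        out.append(row)
    return out

# A 3x3 vertical-edge kernel; a CNN learns many such filters,
# plus deeper filters that combine them into higher-level features.
edge_kernel = [[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]]
```

Applied to an image with a vertical intensity step, the output feature map responds strongly along the edge and stays at zero on flat regions, which is the kind of low-level response early CNN layers learn to produce.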
So how is deep learning being applied in radiology? From walking the exhibition floor at RSNA there were two key themes, as discussed below.
Next Generation CADe
Deep learning has the potential to significantly enhance the performance of existing CADe systems, by offering improved sensitivity without burdening radiologists with a high rate of false positives. Increasingly, CADe systems will supplement detection with automatic quantification of imaging biomarkers. Additionally, the results from computer-assisted detection (and quantification) can be presented alongside patient information extracted from an EHR, such as patient history, laboratory results and prior studies, to provide the clinician with an imaging decision support tool (see next section).
Moreover, deep learning offers improved support in the detection of co-morbidities and incidental findings. For example, research published earlier this year by researchers at the Icahn School of Medicine at Mount Sinai in New York City found that existing mammograms can be used to detect calcified plaques in the breast arteries, an indicator of cardiovascular disease that can lead to heart attack or stroke. In commercial healthcare systems, such as the US, this may help to ensure that opportunities to bill for additional procedures are not missed. The combination of improved accuracy and enhanced functionality will make next-generation CADe systems a far more compelling proposition than earlier systems.
iCAD made a big play on deep learning at RSNA, with a large part of its booth dedicated to PowerLook® Tomo Detection, a CADe system for breast tomosynthesis that is built on deep learning technology. Each image in a tomosynthesis data set is analysed to detect potential areas of interest, and the system blends those areas onto a synthetic 2D image so that they are visible on a single image of the breast. Based on initial trials, the company claims that its CADe software significantly reduces the additional reading time associated with breast tomosynthesis over 2D mammography, by an average of 29.2%. iCAD received CE Mark certification for PowerLook Tomo Detection in April 2016 and is in active dialogue with the US FDA regarding pre-market approval.
Riverain Technologies, best known for its image analysis tools for nodule detection in chest x-rays, used RSNA 2016 for the commercial launch of its ClearRead CT Suite, comprising ClearRead CT Vessel Suppress and ClearRead CT Detect, which together aid in the detection of nodules in chest CT scans. The vessel suppression tool features deep learning technology. Riverain received FDA 510(k) clearance for ClearRead CT in September.
From CADe to CADx and Imaging Decision Support
The theme of this year’s RSNA was Beyond Imaging, to reflect the broadening role that radiologists are playing in the larger medical community. The theme also reflects how radiologists will increasingly be able to leverage non-imaging data extracted from EHRs and other sources to assist in making diagnostic decisions. In addition to patient data, imaging decision support tools can provide radiologists with other supporting information, such as the treatment outcomes of patients who presented with similar conditions. Beyond Imaging also captures how radiology is evolving from a largely qualitative to an increasingly quantitative discipline, with the increasing use of automated quantification tools to provide accurate and repeatable metrics of lesions and tumours, for example.
The first generation of imaging decision support and computer-assisted diagnosis (CADx) products are starting to enter the market and a handful were on show at RSNA. RADLogics presented its Virtual Resident™ decision support solution, based on its AlphaPoint™ cloud-based image analysis platform. The platform incorporates machine learning algorithmic tools for automatic analysis of X-ray and CT images. The results are combined with the patient’s medical record information into a preliminary report, in much the same way that a resident prepares information for a radiologist to review.
HealthMyne used RSNA to preview its QIDS software platform, which provides radiologists with a quantitative imaging dashboard, including time-sequenced Epic EHR information. Laboratory results, treatment details, and the health status of each patient are viewable in a timeline-based longitudinal representation. As an example, a longitudinal representation could feature a plot of tumour size relative to the duration of a course of radiotherapy, with icons to denote the dates of follow-up CT scans from which tumour size was determined. Scans can then be examined by clicking on an icon and opening a viewer. QIDS retrieves prior studies, performs image registration and localizes previously identified lesions. The analytics software, which is not built on deep learning, also provides information such as tumour size, Lung-RADS categories for use in lung cancer screening, and other quantitative metrics. The product will be fully launched in January 2017.
Quantitative Insights (QI) Inc. showed its QuantX breast imaging workstation. Alongside a multi-modality viewer, QuantX provides automatic detection and quantification on MRI images for the characterization of breast lesions, to assist in breast cancer diagnosis. QuantX features a breast imaging decision support system with direct correlation to a database of lesions with known pathology, based on biopsy results. The system generates a QI Score™ to represent the probability of malignancy. QI has submitted a de novo 510(k) application to the FDA and believes that a decision is imminent. Should the company be successful, QuantX will be the first CADx product cleared by the FDA.
IBM gave demos of several Watson-powered initiatives, under both the Merge Healthcare and Watson Health Imaging brands. Examples included a solution for aggregating and filtering electronic health records and technology for automated analysis of cardiac ultrasounds and improved diagnosis of aortic stenosis. The most impressive demo was for a decision support tool code-named Avicenna, which automatically detects and quantifies anatomical features and abnormalities (the demos used a CT scan), and extracts relevant information from a patient’s electronic health record. Avicenna has a cognitive ‘reasoning’ capability that considers the imaging and non-imaging information to suggest possible diagnoses. Big Blue was tight-lipped about the release date for Avicenna, but it will likely need another year at least, and most likely two years, to complete clinical trials and obtain regulatory approval. IBM’s first cognitive solution for radiology to hit the market will be a Cognitive Peer Review Tool, intended to help healthcare professionals reconcile differences between a patient’s clinical evidence and the data in that patient’s EHR, which is due to be released in 1Q 2017.
Separate the Hype from Reality
In addition to the above examples, several start-ups, including Enlitic, Zebra Medical, Lunit and Vuno, used RSNA to showcase how they are applying deep learning to medical imaging. For example, Enlitic gave a demo of a chest x-ray triage product and a solution for lung cancer screening, both powered by deep learning. Enlitic is in the process of gathering clinical validation for its products and does not yet have regulatory clearance to sell.
However, some of the other start-ups were less forthcoming regarding their product development plans, with one company’s booth amounting to no more than a display carrying the company’s logo. Many radiologists remain sceptical of the capabilities of artificial intelligence and some see it as a threat. Moreover, many remember the limitations of early-generation mammography CADe systems. Vendors need to complete and promote clinical studies to validate their claims, otherwise marketing soundbites may impede the acceptance of deep learning in radiology. More customer education is required so that the conversations at next year’s RSNA move on from “what’s deep learning?” to “tell me how deep learning can help me do my job better”.
New Market Report from Signify Research Publishing Soon
This and other issues will be explored in full in Signify Research’s upcoming market report ‘Machine Learning in Radiology – World Market Report’, publishing in January 2017. For further details, please contact Simon.Harris@signifyresearch.net