Applying AI to Improve Interpretation of Medical Imaging
Author and Curator: Dror Nir, PhD
3.5.2.5 Applying AI to Improve Interpretation of Medical Imaging, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 3: AI in Medicine
The idea that we can use machines’ intelligence to help us perform daily tasks is no longer alien. As a consequence, applying AI to improve the assessment of patients’ clinical condition is booming. What used to be the field of daring start-ups has now become a playground for the tech giants: Google, Amazon, Microsoft and IBM.
Interpretation of medical imaging involves standardised workflows and requires analysis of many data items. It is also well established that human subjectivity is a barrier to the reproducibility and transferability of medical-imaging results, as evidenced by reports of high inter-observer variability in image interpretation. Accepting that computers are better suited than humans to performing routine, repeated tasks involving “big data” analysis makes AI a very good candidate to improve this situation. Google’s vision in that respect: “Machine learning has dozens of possible application areas, but healthcare stands out as a remarkable opportunity to benefit people — and working closely with clinicians and medical providers, we’re developing tools that we hope will dramatically improve the availability and accuracy of medical services.”
Google’s commitment to this vision is evident in their TensorFlow initiative. “TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.” Two recent papers describe at length the use of TensorFlow in retrospective studies (supported by Google AI) in which medical images from publicly accessible databases were used:
Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning, Nature Biomedical Engineering, Authors: Ryan Poplin, Avinash V. Varadarajan, Katy Blumer, Yun Liu, Michael V. McConnell, Greg S. Corrado, Lily Peng, and Dale R. Webster
This is a very interesting paper because it demonstrates the benefits that the use of AI in the interpretation of medical imaging can bring. The authors show how they could extract information relevant to assessing the risk of an adverse cardiac event from retinal fundus images that had been collected while managing a totally different medical condition. “Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70).”
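The paper reports its performance using two standard metrics: mean absolute error (MAE) for continuous targets such as age, and the area under the ROC curve (AUC) for binary targets such as smoking status. For readers unfamiliar with these measures, here is a minimal sketch of how each is computed; the patient numbers below are entirely made up for illustration and are not taken from the study.

```python
# Minimal illustration of the two metrics quoted above. All data are
# hypothetical and serve only to show how the metrics are computed.

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between prediction and ground truth."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(labels, scores):
    """AUC computed as the probability that a randomly chosen positive
    case receives a higher score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: predicted vs. true patient age (years)
ages_true = [54, 61, 47, 70]
ages_pred = [51, 65, 47, 66]
print(mean_absolute_error(ages_true, ages_pred))  # 2.75

# Hypothetical example: smoker (1) vs. non-smoker (0) with model scores
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(auc(labels, scores))  # ≈ 0.889
```

An AUC of 1.0 means the model ranks every positive case above every negative case; 0.5 is no better than chance, which puts the reported 0.97 for gender and 0.70 for major adverse cardiac events in context.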
Clearly, if such an algorithm were implemented as a generalised and transferable medical device that could be used in routine practice, it would contribute to the cost-effectiveness of screening programs.
End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography, Nature Medicine, Authors: Diego Ardila, Atilla P. Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J. Reicher, Lily Peng, Daniel Tse , Mozziyar Etemadi, Wenxing Ye, Greg Corrado, David P. Naidich and Shravya Shetty.
This paper is in line with many previously published works demonstrating how AI can increase the accuracy of cancer diagnosis compared with the current state of the art: “Existing challenges include inter-grader variability and high false-positive and false-negative rates. We propose a deep learning algorithm that uses a patient’s current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases.”
The benefit of using an AI-based application for lung cancer screening (if and when such an algorithm is implemented as a generalised and transferable medical device) is well summarised by the authors: “The strong performance of the model at the case level has important potential clinical relevance. The observed increase in specificity could translate to fewer unnecessary follow up procedures. Increased sensitivity in cases without priors could translate to fewer missed cancers in clinical practice, especially as more patients begin screening. For patients with prior imaging exams, the performance of the deep learning model could enable gains in workflow efficiency and consistency as assessment of prior imaging is already a key component of a specialist’s workflow. Given that LDCT screening is in the relatively early phases of adoption, the potential for considerable improvement in patient care in the coming years is substantial. The model’s localization directs follow-up for specific lesion(s) of greatest concern. These predictions are critical for patients proceeding for further work-up and treatment, including diagnostic CT, positron emission tomography (PET)/CT or biopsy. Malignancy risk prediction allows for the possibility of augmenting existing, manually created interpretation guidelines such as Lung-RADS, which are limited to subjective clustering and assessment to approximate cancer risk.”
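The authors’ point about specificity and sensitivity reflects a trade-off inherent to any screening model that outputs a malignancy-risk score: the operating threshold decides which cases are flagged for follow-up. A minimal sketch of that trade-off, using entirely made-up case data:

```python
# A minimal sketch of the sensitivity/specificity trade-off described
# above. All labels and scores below are hypothetical.

def sensitivity_specificity(labels, scores, threshold):
    """labels: 1 = cancer, 0 = benign. A case is flagged for follow-up
    if its risk score is at or above the threshold."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.3, 0.5, 0.2, 0.1, 0.4, 0.05]

# A low threshold catches more cancers (higher sensitivity) at the cost
# of more unnecessary follow-ups (lower specificity), and vice versa.
for thr in (0.25, 0.55):
    sens, spec = sensitivity_specificity(labels, scores, thr)
    print(f"threshold={thr}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

A model with a higher AUC, such as the one reported in the paper, allows a threshold to be chosen that improves one of these quantities without sacrificing as much of the other.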
Notably, the methods sections of these two papers are detailed enough to allow any interested party to reproduce the studies.
For the sake of balanced information, I would like to note that:
- Amazon is encouraging access to its AI platform, Amazon SageMaker: “Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully-managed service that covers the entire machine learning workflow to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost.” Amazon also offers training courses to help programmers gain proficiency in machine learning on its AWS platform: “We offer 30+ digital ML courses totaling 45+ hours, plus hands-on labs and documentation, originally developed for Amazon’s internal use. Developers, data scientists, data platform engineers, and business decision makers can use this training to learn how to apply ML, artificial intelligence (AI), and deep learning (DL) to their businesses unlocking new insights and value. Validate your learning and your years of experience in machine learning on AWS with a new certification.”
- IBM is offering a general-purpose AI platform named Watson. Watson is also promoted as a platform to develop AI applications in the “health” sector with the following positioning: “IBM Watson Health applies data-driven analytics, advisory services and advanced technologies such as AI, to deliver actionable insights that can help you free up time to care, identify efficiencies, and improve population health.”
- Microsoft is offering its AI platform as a tool to accelerate the development of AI solutions. They are also offering an AI school: “Dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI and Cognitive Toolkit. Our platform enables any developer to code in any language and infuse AI into your apps. Whether your solutions are existing or new, this is the intelligence platform to build on.”