
Archive for the ‘Arrhythmia Detection with Machine Learning Algorithms’ Category


Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary article entitled “Medicine contends with how to use artificial intelligence,” Jennifer Couzin-Frankel discusses the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting.  In summary, these barriers result from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images and magnify small changes, such as in cardiac images, is built within one hospital and may not reflect the equipment or standard practices used in other hospital systems.  To address this issue, US scientists and government regulators recently issued guidance, published in the Journal of the American College of Radiology, describing how to convert research-based AI into improved medical imaging practice.  The group suggested greater collaboration among the relevant parties in developing AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled.  For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on a lung scan indicative of pneumonia; when tested on a group of new patients, the algorithm detected pneumonia with 93% accuracy.

However, when the Sinai group tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicative of bias within the training set: in other words, there was something unique about the way Mt. Sinai performs its scans relative to other hospitals.  Indeed, many of the patients Mt. Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than standalone scanners.
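A toy sketch (invented numbers, not the study's code) of why this happens: if a model partly keys on a site-specific artifact, such as the portable-scanner flag, that happens to correlate with disease at the training hospital, its internal test AUC looks excellent, but the advantage evaporates at an external hospital where the artifact is absent.

```python
# Toy illustration of internal vs. external validation. The "model"
# leans on a confounder: portable scanners were used mostly on sicker
# (pneumonia-positive) inpatients at the training hospital.

def auc(scores, labels):
    """Area under the ROC curve via pairwise rank comparison."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def model_score(disease_signal, portable_scanner):
    # 40% of the score comes from the scanner artifact, not disease.
    return 0.6 * disease_signal + 0.4 * portable_scanner

# Tuples: (disease_signal, portable_scanner_flag, true_label).
# Internal test set: the portable flag tracks disease exactly.
internal = [(0.5, 1, 1), (0.6, 1, 1), (0.45, 0, 0), (0.55, 0, 0)]
# External hospital: same disease signals, but no portable scanners.
external = [(0.5, 0, 1), (0.6, 0, 1), (0.45, 0, 0), (0.55, 0, 0)]

for name, data in [("internal", internal), ("external", external)]:
    scores = [model_score(d, p) for d, p, _ in data]
    labels = [y for _, _, y in data]
    print(name, auc(scores, labels))  # internal 1.0, external 0.75
```

The internal AUC is perfect only because the confounder does the work; on external data the model is left with the weak disease signal alone.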

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR1, Badgeley MA2, Liu M2, Costa AB3, Titano JJ4, Oermann EK3.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). 
When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.
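The abstract's most striking number, an AUC of 0.861 from merely sorting by hospital system, can be roughly reproduced from the reported counts and prevalences alone. This back-of-envelope check (my arithmetic, not the authors' code) scores every MSH radiograph 1 and every NIH radiograph 0:

```python
# Prevalence confounding, checked from the abstract's numbers:
# 34.2% pneumonia prevalence at MSH vs. 1.2% at NIH means a
# "classifier" that only knows the source hospital already ranks
# positives above negatives most of the time.

msh_total, msh_prev = 42396, 0.342    # counts/prevalence from the abstract
nih_total, nih_prev = 112120, 0.012

msh_pos, nih_pos = msh_total * msh_prev, nih_total * nih_prev
msh_neg, nih_neg = msh_total - msh_pos, nih_total - nih_pos

# AUC = P(random positive outscores random negative), ties count half.
# A positive beats a negative only when it is from MSH (score 1) and
# the negative is from NIH (score 0); same-hospital pairs are ties.
p_pos_msh = msh_pos / (msh_pos + nih_pos)
p_neg_nih = nih_neg / (msh_neg + nih_neg)
auc = (p_pos_msh * p_neg_nih
       + 0.5 * (p_pos_msh * (1 - p_neg_nih)
                + (1 - p_pos_msh) * p_neg_nih))
print(round(auc, 3))  # prints 0.857, close to the reported 0.861
```

The small gap from 0.861 presumably reflects the paper using exact per-image counts rather than the rounded prevalences quoted here.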

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683


 

 

Surprisingly, few researchers have begun to use data obtained from different hospitals.  The FDA has issued some guidance in the matter, but it treats only “locked,” or unchanging, AI software as a medical device.  However, the agency just announced the development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software usable in multiple clinical settings.  Otherwise, each hospital will need to develop its own software, used only on its own system, which would create a regulatory headache for the FDA.

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 




VIDEO: Editor’s Choice of the Most Innovative New Cardiac Technology at AHA 2018



Heart Murmur Detection done by AI Algorithm (Eko Core and Eko Duo) Devices Outperform most Auscultatory Skills of Cardiologists

Reporter: Aviva Lev-Ari, PhD, RN

 

AI Algorithm Outperforms Most Cardiologists in Heart Murmur Detection

Eko’s heart murmur detection algorithm outperformed four out of five cardiologists in a recent clinical study

Titled “Artificial Intelligence Detects Pediatric Heart Murmurs With Cardiologist-Level Accuracy,” the study demonstrates the power of machine learning and artificial intelligence (AI) to enhance cardiac care.

The neural network AI algorithm was trained on thousands of heart sound recordings. The algorithm was then tested on an independent dataset of pediatric heart sounds and compared to gold-standard echocardiogram imagery. Five pediatric cardiologists also listened to the heart sound recordings and independently made a determination whether a recording contained a murmur. This advancement will help narrow the clinical skill gap between the 27,000 cardiologists in the U.S. — the experts at murmur detection — and the 3.8 million other clinicians who are less experienced in the identification of heart murmurs through a stethoscope.
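A minimal sketch of this evaluation design (labels and reader names invented for illustration; the study used five cardiologists and far more recordings): each recording gets an echocardiogram-confirmed ground truth, and the algorithm's labels are scored against it alongside each reader's labels.

```python
# Hypothetical reader study: murmur (1) or no murmur (0) per recording,
# judged against an echocardiogram gold standard.

truth     = [1, 1, 1, 0, 0, 0, 1, 0]   # echo-confirmed ground truth
algorithm = [1, 1, 0, 0, 0, 0, 1, 0]   # algorithm's calls
readers = {                             # two invented readers for brevity
    "cardiologist_1": [1, 0, 0, 0, 0, 0, 1, 0],
    "cardiologist_2": [1, 1, 0, 0, 1, 0, 1, 0],
}

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

algo_acc = accuracy(algorithm, truth)
beaten = [name for name, pred in readers.items()
          if accuracy(pred, truth) < algo_acc]
print(algo_acc, beaten)  # 0.875 ['cardiologist_1', 'cardiologist_2']
```

In the real study the comparison would use sensitivity and specificity per reader rather than raw accuracy, but the structure, algorithm and humans scored against the same imaging gold standard, is the same.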

A study published in the Journal of the American Medical Association revealed that, on average, internal medicine and family practice physician residents misdiagnose 80 percent of common cardiac events [1]. Cardiologists, on the other hand, can effectively diagnose 90 percent of cardiac events using a stethoscope [2].

Eko’s murmur screening algorithm, when coupled with the company’s U.S. Food and Drug Administration (FDA)-cleared Eko Core and Eko Duo devices, will enable any and all clinicians to more accurately screen for heart murmurs.

Eko is currently pursuing FDA clearance for the algorithm and will be rolling it out with its existing cardiac monitoring devices upon securing regulatory clearance.

For more information: http://www.ekohealth.com

References

1. Mangione S., Nieman L.Z. Cardiac auscultatory skills of internal medicine and family practice trainees. A comparison of diagnostic proficiency. Journal of the American Medical Association, Sept. 3, 1997. doi:10.1001/jama.1997.03550090041030

2. Thompson W.R. In defence of auscultation: a glorious future? Heart Asia, Feb. 1, 2017. doi: 10.1136/heartasia-2016-010796

 

SOURCE

https://www.dicardiology.com/content/ai-algorithm-outperforms-most-cardiologists-heart-murmur-detection?eid=333021707&bid=2308309



Arrhythmias Detection: Speeding Diagnosis and Treatment – New deep learning algorithm can diagnose 14 types of heart rhythm defects by sifting through hours of ECG data generated by iRhythm’s wearable remote monitors

Reporter: Aviva Lev-Ari, PhD, RN

 

Long term, the group hopes this algorithm could be a step toward expert-level arrhythmia diagnosis for people who don’t have access to a cardiologist, as in many parts of the developing world and in other rural areas. More immediately, the algorithm could be part of a wearable device that at-risk people keep on at all times that would alert emergency services to potentially deadly heartbeat irregularities as they’re happening.

Pranav Rajpurkar, a graduate student and co-lead author of the paper, said: “For example, two forms of the arrhythmia known as second-degree atrioventricular block look very similar, but one requires no treatment while the other requires immediate attention.”

To test the accuracy of the algorithm, the researchers gave a group of three expert cardiologists 300 undiagnosed clips and asked them to reach a consensus about any arrhythmias present in the recordings. Working with these annotated clips, the algorithm could then predict how those cardiologists would label every second of other ECGs with which it was presented, in essence giving a diagnosis.
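The committee-consensus idea can be sketched in a few lines (toy labels, not the study's data): each cardiologist labels every second of a clip with a rhythm class, the majority vote becomes ground truth, and the algorithm's per-second predictions are scored against it.

```python
# Per-second consensus annotation and agreement scoring (toy example).
from collections import Counter

expert_labels = [           # one row per cardiologist, one column per second
    ["SINUS", "AFIB", "AFIB", "SINUS"],
    ["SINUS", "AFIB", "SINUS", "SINUS"],
    ["SINUS", "AFIB", "AFIB", "AFIB"],
]
# Majority vote per second becomes the gold-standard label.
consensus = [Counter(col).most_common(1)[0][0] for col in zip(*expert_labels)]

model = ["SINUS", "AFIB", "AFIB", "SINUS"]   # algorithm's per-second calls
agreement = sum(m == c for m, c in zip(model, consensus)) / len(consensus)
print(consensus, agreement)  # ['SINUS', 'AFIB', 'AFIB', 'SINUS'] 1.0
```

Using a committee rather than a single reader matters because, as the quoted example of second-degree atrioventricular block shows, even experts disagree on look-alike rhythms; the vote smooths out individual error.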

http://news.stanford.edu/2017/07/06/algorithm-diagnoses-heart-arrhythmias-cardiologist-level-accuracy/

 iRhythm, maker of portable ECG devices

Image Source:

https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/s/608234/the-machines-are-getting-ready-to-play-doctor/amp/

Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks

We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor. We build a dataset with more than 500 times the number of unique patients than previously studied corpora. On this dataset, we train a 34-layer convolutional neural network which maps a sequence of ECG samples to a sequence of rhythm classes. Committees of board-certified cardiologists annotate a gold standard test set on which we compare the performance of our model to that of 6 other individual cardiologists. We exceed the average cardiologist performance in both recall (sensitivity) and precision (positive predictive value).
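The two metrics the abstract reports, recall (sensitivity) and precision (positive predictive value), are computed per rhythm class over the predicted sequence. A small sketch with invented labels:

```python
# Per-class precision and recall for sequence rhythm predictions
# (toy data; the paper evaluates many more classes and samples).

def precision_recall(pred, truth, cls):
    tp = sum(p == cls and t == cls for p, t in zip(pred, truth))
    fp = sum(p == cls and t != cls for p, t in zip(pred, truth))
    fn = sum(p != cls and t == cls for p, t in zip(pred, truth))
    return tp / (tp + fp), tp / (tp + fn)

truth = ["AFIB", "AFIB", "SINUS", "SVT", "SINUS", "AFIB"]
pred  = ["AFIB", "SINUS", "SINUS", "SVT", "SINUS", "AFIB"]
for cls in ("AFIB", "SINUS", "SVT"):
    print(cls, precision_recall(pred, truth, cls))
```

Here the one missed AFIB second costs recall (2/3) but not precision (1.0) for that class, while the same error shows up as a false positive for SINUS; reporting both metrics per class is what lets the authors compare the network against individual cardiologists fairly.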

Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:1707.01836 [cs.CV]
(or arXiv:1707.01836v1 [cs.CV] for this version)

Submission history

From: Awni Hannun [view email]
[v1] Thu, 6 Jul 2017 15:42:46 GMT (852kb,D)

SOURCE

Active Learning Applied to Patient-Adaptive Heartbeat Classification

Part of: Advances in Neural Information Processing Systems 23 (NIPS 2010)



Abstract

While clinicians can accurately identify different types of heartbeats in electrocardiograms (ECGs) from different patients, researchers have had limited success in applying supervised machine learning to the same task. The problem is made challenging by the variety of tasks, inter- and intra-patient differences, an often severe class imbalance, and the high cost of getting cardiologists to label data for individual patients. We address these difficulties using active learning to perform patient-adaptive and task-adaptive heartbeat classification. When tested on a benchmark database of cardiologist annotated ECG recordings, our method had considerably better performance than other recently proposed methods on the two primary classification tasks recommended by the Association for the Advancement of Medical Instrumentation. Additionally, our method required over 90% less patient-specific training data than the methods to which we compared it.
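The core active-learning loop the abstract describes can be sketched with a deliberately tiny 1-D stand-in for heartbeat features (everything here is invented for illustration; the paper's classifier and query strategy are more sophisticated): instead of labeling every beat, the cardiologist is asked to label only the beat the current classifier is least certain about.

```python
# Uncertainty-sampling active learning on 1-D "heartbeat features".

def train_threshold(labeled):
    """1-D nearest-midpoint classifier: threshold between class means."""
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def oracle(x):
    # Stands in for the cardiologist; the true class boundary is at 5.
    return int(x > 5)

pool = [0.5, 1.0, 2.0, 4.6, 5.2, 8.0, 9.0, 9.5]   # unlabeled beats
labeled = [(1.5, 0), (8.5, 1)]                    # two cheap seed labels

for _ in range(3):
    t = train_threshold(labeled)
    # Query the pool point closest to the decision boundary,
    # i.e. the beat the classifier is least sure about.
    x = min(pool, key=lambda v: abs(v - t))
    pool.remove(x)
    labeled.append((x, oracle(x)))

print(train_threshold(labeled))  # threshold near the true cut of 5
```

With only three queries, all spent near the boundary, the threshold lands close to the true cut, which is the paper's point: patient-adaptive labeling budgets go much further when the expert labels only the informative beats.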

SOURCE

Cardiologist-Level Arrhythmia Detection With Convolutional Neural Networks

Pranav Rajpurkar*, Awni Hannun*, Masoumeh Haghpanahi, Codie Bourn, and Andrew Ng

A collaboration between Stanford University and iRhythm Technologies

https://stanfordmlgroup.github.io/projects/ecg/

JULY 6, 2017

Stanford computer scientists develop an algorithm that diagnoses heart arrhythmias with cardiologist-level accuracy

A new deep learning algorithm can diagnose 14 types of heart rhythm defects, called arrhythmias, better than cardiologists. This could speed diagnosis and improve treatment for people in rural locations.

The Machines Are Getting Ready to Play Doctor

An algorithm that spots heart arrhythmia shows how AI will revolutionize medicine—but patients must trust machines with their lives.

by Will Knight,  July 7, 2017

https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/s/608234/the-machines-are-getting-ready-to-play-doctor/amp/

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

by Will Knight, April 11, 2017

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

 
