
Archive for the ‘Artificial Intelligence in Medicine – Application for Diagnosis’ Category


Cardiac MRI Imaging Breakthrough: HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package, the First AI-assisted Cardiac MRI Scan Solution

Reporter: Aviva Lev-Ari, PhD, RN

 

HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package, the First AI-assisted Cardiac MRI Scan Solution

The future of imaging is here—and FDA cleared.

LOS ALTOS, Calif.–(BUSINESS WIRE)–HeartVista, a pioneer in AI-assisted MRI solutions, today announced that it received 510(k) clearance from the U.S. Food and Drug Administration to deliver its AI-assisted One Click™ MRI acquisition software for cardiac exams. Despite the many advantages of cardiac MRI, or cardiac magnetic resonance (CMR), its use has been largely limited due to a lack of trained technologists, high costs, longer scan time, and complexity of use. With HeartVista’s solution, cardiac MRI is now simple, time-efficient, affordable, and highly consistent.

“HeartVista’s Cardiac Package is a vital tool to enhance the consistency and productivity of cardiac magnetic resonance studies, across all levels of CMR expertise,” said Dr. Raymond Kwong, MPH, Director of Cardiac Magnetic Resonance Imaging at Brigham and Women’s Hospital and Associate Professor of Medicine at Harvard Medical School.

A recent multi-center, outcome-based study (MR-INFORM), published in the New England Journal of Medicine, demonstrated that non-invasive myocardial perfusion cardiovascular MRI was as good as invasive FFR, the previous gold standard method, to guide treatment for patients with stable chest pain, while leading to 20% fewer catheterizations.

“This recent NEJM study further reinforces the clinical literature that cardiac MRI is the gold standard for cardiac diagnosis, even when compared against invasive alternatives,” said Itamar Kandel, CEO of HeartVista. “Our One Click™ solution makes these kinds of cardiac MRI exams practical for widespread adoption. Patients across the country now have access to the only AI-guided cardiac MRI exam, which will deliver continuous imaging via an automated process, minimize errors, and simplify scan operation. Our AI solution generates definitive, accurate and actionable real-time data for cardiologists. We believe it will elevate the standard of care for cardiac imaging, enhance patient experience and access, and improve patient outcomes.”

HeartVista’s FDA-cleared Cardiac Package uses AI-assisted software to prescribe the standard cardiac views with just one click, and in as few as 10 seconds, while the patient breathes freely. A unique artifact detection neural network is incorporated in HeartVista’s protocol to identify when the image quality is below the acceptable threshold, prompting the operator to reacquire the questioned images if desired. Inversion time is optimized with further AI assistance prior to the myocardial delayed-enhancement acquisition. A 4D flow measurement application uses a non-Cartesian, volumetric parallel imaging acquisition to generate high quality images in a fraction of the time. The Cardiac Package also provides preliminary measures of left ventricular function, including ejection fraction, left ventricular volumes, and mass.
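The release describes a quality gate: an artifact-detection network scores each acquired view and prompts the operator to reacquire any view whose quality falls below an acceptable threshold. HeartVista's actual model and threshold are proprietary and not described in the release; the sketch below is only a minimal illustration of that kind of gating logic, with a hypothetical quality score and cutoff.

```python
# Illustrative sketch of a per-view image-quality gate of the kind the release
# describes. The quality model, threshold, and names are hypothetical stand-ins.
import numpy as np

QUALITY_THRESHOLD = 0.8  # assumed acceptance threshold, not HeartVista's value

def quality_score(image):
    """Stand-in for an artifact-detection network: a crude sharpness proxy in [0, 1]."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.clip(np.mean(np.hypot(gx, gy)) / 20.0, 0.0, 1.0))

def views_to_reacquire(views):
    """Return the names of views whose quality falls below the threshold."""
    return [name for name, img in views.items() if quality_score(img) < QUALITY_THRESHOLD]

# Two synthetic "views": one with structure, one flat (no contrast, low score).
rng = np.random.default_rng(0)
views = {"short_axis": rng.normal(100.0, 30.0, (128, 128)),
         "four_chamber": np.full((128, 128), 100.0)}
print(views_to_reacquire(views))  # expected: ['four_chamber']
```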

HeartVista is presenting its new One Click™ Cardiac Package features at the Radiological Society of North America (RSNA) annual meeting in Chicago, on Dec. 4, 2019, at 2 p.m., in the AI Showcase Theater. HeartVista will also be at Booth #11137 for the duration of the conference, from Dec. 1 through Dec. 5.

About HeartVista

HeartVista believes in leveraging artificial intelligence to improve access to MRI and improve patient care. The company’s One Click™ software platform enables real-time MRI for a variety of clinical and research applications. Its AI-driven, one-click cardiac localization method received first place honors at the International Society for Magnetic Resonance in Medicine’s Machine Learning Workshop in 2018. The company’s innovative technology originated at the Stanford Magnetic Resonance Systems Research Laboratory. HeartVista is funded by Khosla Ventures and the National Institutes of Health’s Small Business Innovation Research program.

For more information, visit www.heartvista.ai

SOURCE

Press release received from Kimberly Ha, KKH Advisors (kimberly.ha@kkhadvisors.com), Tuesday, October 29, 2019: “HeartVista Receives FDA Clearance for First AI-assisted Cardiac MRI Solution”



Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis

Yu Fu (1), Alexander W Jung (1), Ramon Viñas Torne (1), Santiago Gonzalez (1,2), Harald Vöhringer (1), Mercedes Jimenez-Linan (3), Luiza Moore (3,4), and Moritz Gerstung (1,5)# (# to whom correspondence should be addressed)

1) European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK. 2) Current affiliation: Institute for Research in Biomedicine (IRB Barcelona), Parc Científic de Barcelona, Barcelona, Spain. 3) Department of Pathology, Addenbrooke’s Hospital, Cambridge, UK. 4) Wellcome Sanger Institute, Hinxton, UK. 5) European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany.

Correspondence:

Dr Moritz Gerstung, European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, CB10 1SA, UK. Tel: +44 (0) 1223 494636. E-mail: moritz.gerstung@ebi.ac.uk

Abstract

Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis

Here we use deep transfer learning to quantify histopathological patterns across 17,396 H&E stained histopathology image slides from 28 cancer types and correlate these with underlying genomic and transcriptomic data. Pan-cancer computational histopathology (PC-CHiP) classifies the tissue origin across organ sites and provides highly accurate, spatially resolved tumor and normal distinction within a given slide. The learned computational histopathological features correlate with a large range of recurrent genetic aberrations, including whole genome duplications (WGDs), arm-level copy number gains and losses, focal amplifications and deletions as well as driver gene mutations within a range of cancer types. WGDs can be predicted in 25/27 cancer types (mean AUC=0.79) including those that were not part of model training. Similarly, we observe associations with 25% of mRNA transcript levels, which enables to learn and localise histopathological patterns of molecularly defined cell types on each slide. Lastly, we find that computational histopathology provides prognostic information augmenting histopathological subtyping and grading in the majority of cancers assessed, which pinpoints prognostically relevant areas such as necrosis or infiltrating lymphocytes on each tumour section. Taken together, these findings highlight the large potential of PC-CHiP to discover new molecular and prognostic associations, which can augment diagnostic workflows and lay out a rationale for integrating molecular and histopathological data.

SOURCE

https://www.biorxiv.org/content/10.1101/813543v1
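The abstract describes a two-step transfer-learning recipe: quantify each H&E tile with features from a convolutional network, then relate those features to genomic traits such as whole-genome duplication, scored by AUC. The sketch below outlines that general recipe with off-the-shelf tools (a torchvision ResNet-50 backbone, logistic regression, and synthetic stand-in data); the paper's actual network, tiling pipeline, and cohort are not reproduced here.

```python
# Sketch of the generic two-step recipe: (1) featurize tiles with a pretrained CNN,
# (2) fit a simple classifier on the features to predict a genomic trait (e.g. WGD)
# and score it by cross-validated AUC. The backbone, tiles, and labels are stand-ins.
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()   # drop the ImageNet head -> 2048-d feature vectors
backbone.eval()

# Synthetic stand-ins so the sketch runs end to end: random "tiles" in place of
# preprocessed H&E crops, and random per-slide whole-genome-duplication labels.
rng = np.random.default_rng(0)
tiles = torch.rand(40, 3, 224, 224)
wgd_labels = rng.integers(0, 2, size=40)

with torch.no_grad():
    features = backbone(tiles).numpy()

auc = cross_val_score(LogisticRegression(max_iter=2000), features, wgd_labels,
                      cv=5, scoring="roc_auc")
print("WGD prediction AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
# With random labels this hovers around 0.5; with real tiles and WGD calls the paper
# reports a mean AUC of about 0.79 across cancer types.
```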

Key points

● Pan-cancer computational histopathology analysis with deep learning extracts histopathological patterns and accurately discriminates 28 cancer and 14 normal tissue types

● Computational histopathology predicts whole genome duplications, focal amplifications and deletions, as well as driver gene mutations

● Wide-spread correlations with gene expression indicative of immune infiltration and proliferation

● Prognostic information augments conventional grading and histopathology subtyping in the majority of cancers

 

Discussion

Here we presented PC-CHiP, a pan-cancer transfer learning approach to extract computational histopathological features across 42 cancer and normal tissue types and their genomic, molecular and prognostic associations. Histopathological features, originally derived to classify different tissues, contained rich histologic and morphological signals predictive of a range of genomic and transcriptomic changes as well as survival. This shows that computer vision not only has the capacity to reproduce predefined tissue labels with high accuracy, but also quantifies diverse histological patterns predictive of a broad range of genomic and molecular traits that were not part of the original training task. As the predictions are exclusively based on standard H&E-stained tissue sections, our analysis highlights the high potential of computational histopathology to digitally augment existing histopathological workflows.

The strongest genomic associations were found for whole genome duplications, which can in part be explained by nuclear enlargement and increased nuclear intensities, but seemingly also stem from tumour grade and other histomorphological patterns contained in the high-dimensional computational histopathological features. Further, we observed associations with a range of chromosomal gains and losses, focal deletions and amplifications as well as driver gene mutations across a number of cancer types. These data demonstrate that genomic alterations change the morphology of cancer cells, as in the case of WGD, but possibly also that certain aberrations preferentially occur in distinct cell types, reflected by the tumour histology. Whatever is the cause or consequence in this equation, these associations lay out a route towards genomically defined histopathology subtypes, which will enhance and refine conventional assessment.

Further, a broad range of transcriptomic correlations was observed, reflecting both immune cell infiltration and cell proliferation that leads to higher tumour densities. These examples illustrate the remarkable property that machine learning not only establishes novel molecular associations from pre-computed histopathological feature sets but also allows the localisation of these traits within a larger image. While this exemplifies the power of a large-scale data analysis to detect and localise recurrent patterns, it is probably not superior to spatially annotated training data. Yet such data can, by definition, only be generated for associations which are known beforehand. This appears straightforward, albeit laborious, for existing histopathology classifications, but more challenging for molecular readouts. Yet novel spatial transcriptomic44,45 and sequencing technologies46 bring within reach spatially matched molecular and histopathological data, which would serve as a gold standard in combining imaging and molecular patterns.

Across cancer types, computational histopathological features showed a good level of prognostic relevance, substantially improving prognostic accuracy over conventional grading and histopathological subtyping in the majority of cancers. It is very remarkable that such predictive signals can be learned in a fully automated fashion. Still, at least at the current resolution, the improvement over a full molecular and clinical workup was relatively small. This might be a consequence of the far-ranging relations between histopathology and molecular phenotypes described here, implying that histopathology is a reflection of the underlying molecular alterations rather than an independent trait. Yet it probably also highlights the challenges of unambiguously quantifying histopathological signals in – and combining signals from – individual areas, which requires very large training datasets for each tumour entity.

From a methodological point of view, the prediction of molecular traits can clearly be improved. In this analysis, we adopted – for reasons of simplicity and to avoid overfitting – a transfer learning approach in which an existing deep convolutional neural network, developed for classification of everyday objects, was fine-tuned to predict cancer and normal tissue types. The implicit imaging feature representation was then used to predict molecular traits and outcomes. Instead of employing this two-step procedure, which risks missing patterns irrelevant for the initial classification task, one might directly employ either training on the molecular trait of interest, or ideally multi-objective learning. Further improvement may also be related to the choice of the CNN architecture. Everyday images have no defined scale due to a variable z-dimension; therefore, the algorithms need to be able to detect the same object at different sizes. This is clearly not the case for histopathology slides, in which one pixel corresponds to a defined physical size at a given magnification. Therefore, possibly less complex CNN architectures may be sufficient for quantitative histopathology analyses, and may also show better generalisation. Here, in our proof-of-concept analysis, we observed a considerable dependence of the feature representation on known and possibly unknown properties of our training data, including the image compression algorithm and its parameters. Some of these issues could be overcome by amending and retraining the network to isolate the effect of confounding factors and by additional data augmentation. Still, given the flexibility of deep learning algorithms and the associated risk of overfitting, one should generally be cautious about the generalisation properties and critically assess whether a new image is appropriately represented.

Looking forward, our analyses revealed the enormous potential of using computer vision alongside molecular profiling. While the eye of a trained human may still constitute the gold standard for recognising clinically relevant histopathological patterns, computers have the capacity to augment this process by sifting through millions of images to retrieve similar patterns and establish associations with known and novel traits. As our analysis showed, this helps to detect histopathology patterns associated with a range of genomic alterations, transcriptional signatures and prognosis, and to highlight areas indicative of these traits on each given slide. It is therefore not too difficult to foresee how this may be utilised in a computationally augmented histopathology workflow enabling more precise and faster diagnosis and prognosis.

Further, the ability to quantify a rich set of histopathology patterns lays out a path to define integrated histopathology and molecular cancer subtypes, as recently demonstrated for colorectal cancers47.

Lastly, our analyses provide proof-of-concept for these principles, and we expect them to be greatly refined in the future based on larger training corpora and further algorithmic refinements.

SOURCE

https://www.biorxiv.org/content/biorxiv/early/2019/10/25/813543.full.pdf

 

Other related articles published in this Open Access Online Scientific Journal include the following: 

 

CancerBase.org – The Global HUB for Diagnoses, Genomes, Pathology Images: A Real-time Diagnosis and Therapy Mapping Service for Cancer Patients – Anonymized Medical Records accessible to anyone on Earth

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/28/cancerbase-org-the-global-hub-for-diagnoses-genomes-pathology-images-a-real-time-diagnosis-and-therapy-mapping-service-for-cancer-patients-anonymized-medical-records-accessible-to/

 

631 articles in this journal have the keyword “Pathology” in their title

https://pharmaceuticalintelligence.com/?s=Pathology

 



Showcase: How Deep Learning could help radiologists spend their time more efficiently

Reporter and Curator: Dror Nir, PhD

 

The debate over the role AI could or should play in modern radiology is lively, presenting a wide spectrum of positive expectations as well as fears.

The article “A Deep Learning Model to Triage Screening Mammograms: A Simulation Study,” published this month, shows the best, and very much feasible, utility for AI in radiology at the present time. It would be of great benefit to radiologists and patients if such applications were incorporated (with all safety precautions taken) into routine practice as soon as possible.

In a simulation study, a deep learning model to triage mammograms as cancer free improves workflow efficiency and significantly improves specificity while maintaining a noninferior sensitivity.

Background

Recent deep learning (DL) approaches have shown promise in improving sensitivity but have not addressed limitations in radiologist specificity or efficiency.

Purpose

To develop a DL model to triage a portion of mammograms as cancer free, improving performance and workflow efficiency.

Materials and Methods

In this retrospective study, 223 109 consecutive screening mammograms performed in 66 661 women from January 2009 to December 2016 were collected with cancer outcomes obtained through linkage to a regional tumor registry. This cohort was split by patient into 212 272, 25 999, and 26 540 mammograms from 56 831, 7021, and 7176 patients for training, validation, and testing, respectively. A DL model was developed to triage mammograms as cancer free and evaluated on the test set. A DL-triage workflow was simulated in which radiologists skipped mammograms triaged as cancer free (interpreting them as negative for cancer) and read mammograms not triaged as cancer free by using the original interpreting radiologists’ assessments. Sensitivities, specificities, and percentage of mammograms read were calculated, with and without the DL-triage–simulated workflow. Statistics were computed across 5000 bootstrap samples to assess confidence intervals (CIs). Specificities were compared by using a two-tailed t test (P < .05) and sensitivities were compared by using a one-sided t test with a noninferiority margin of 5% (P < .05).

Results

The test set included 7176 women (mean age, 57.8 years ± 10.9 [standard deviation]). When reading all mammograms, radiologists obtained a sensitivity and specificity of 90.6% (173 of 191; 95% CI: 86.6%, 94.7%) and 93.5% (24 625 of 26 349; 95% CI: 93.3%, 93.9%). In the DL-simulated workflow, the radiologists obtained a sensitivity and specificity of 90.1% (172 of 191; 95% CI: 86.0%, 94.3%) and 94.2% (24 814 of 26 349; 95% CI: 94.0%, 94.6%) while reading 80.7% (21 420 of 26 540) of the mammograms. The simulated workflow improved specificity (P = .002) and obtained a noninferior sensitivity with a margin of 5% (P < .001).

Conclusion

This deep learning model has the potential to reduce radiologist workload and significantly improve specificity without harming sensitivity.
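The simulated DL-triage workflow described in the Materials and Methods amounts to simple bookkeeping: exams the model triages as cancer free are counted as negative reads, all other exams keep the original radiologist assessment, and sensitivity, specificity, and the fraction of exams read are recomputed over bootstrap resamples. The sketch below illustrates that bookkeeping on synthetic data; the threshold and all numbers are stand-ins, not the study's values.

```python
# Sketch of the simulated DL-triage bookkeeping. Data, score distributions, and the
# triage threshold are synthetic stand-ins chosen only to make the example run.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
cancer = rng.random(n) < 0.007                                   # ground truth (registry linkage)
radiologist_pos = np.where(cancer, rng.random(n) < 0.90, rng.random(n) < 0.065)
dl_score = np.where(cancer, rng.beta(5, 2, n), rng.beta(2, 5, n))  # higher = more suspicious
triage_cancer_free = dl_score < 0.2                              # exams the model triages away

def sens_spec(pred, truth):
    tp = np.sum(pred & truth); fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth); fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# Simulated workflow: triaged exams are called negative, the rest keep the radiologist read.
workflow_pos = np.where(triage_cancer_free, False, radiologist_pos)

print("fraction read by radiologists: %.3f" % (1 - triage_cancer_free.mean()))
print("standard  sens/spec: %.3f / %.3f" % sens_spec(radiologist_pos, cancer))
print("DL-triage sens/spec: %.3f / %.3f" % sens_spec(workflow_pos, cancer))

# Bootstrap confidence interval for the workflow sensitivity (the study used 5000 resamples).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(sens_spec(workflow_pos[idx], cancer[idx])[0])
print("sensitivity 95%% CI: %.3f-%.3f" % tuple(np.percentile(boot, [2.5, 97.5])))
```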



Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Cardiology is a vast field that focuses on a large number of diseases specifically dealing with the heart, the circulatory system, and its functions. As such, similar symptomatologies and diagnostic features may be present in an individual, making it difficult for a doctor to easily isolate the actual heart-related problem. Consequently, the use of artificial intelligence aims to relieve doctors of this hurdle and extend better quality of care to patients. Results of screening tests such as echocardiograms, MRIs, or CT scans have long been proposed as candidates for analysis with more advanced technological methods. As such, while artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

 

The continuous development of the technological sector has enabled the industry to merge with medicine in order to create new integrated, reliable, and efficient methods of providing quality health care. One of the ongoing trends in cardiology at present is the proposed utilization of artificial intelligence (AI) in augmenting and extending the effectiveness of the cardiologist. This is because AI or machine learning would allow for an accurate measure of patient functioning and diagnosis from the beginning up to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology aims to focus on research and development, clinical practice, and population health. Created to be an all-in-one mechanism in cardiac healthcare, AI technologies incorporate complex algorithms in determining relevant steps needed for a successful diagnosis and treatment. The role of artificial intelligence specifically extends to the identification of novel drug therapies, disease stratification or statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectiveness and efficiency.

 

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions. This is done by feeding the computer information from hundreds or thousands of patients, plus instructions (an algorithm) on how to use that information. This information is heart scans, genetic and other test results, and how long each patient survived. These scans are in exquisite detail and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer starts to work out which factors affected the patients’ outlook, so it can make predictions about other patients.

 

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These are based on a series of variables like weight, age, and lifestyle. However, they do not always have the desired levels of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients published in 2017. The researchers utilized cardiac MRI-based algorithms coupled with a 3D systolic cardiac motion pattern to accurately predict the health outcomes of patients with pulmonary hypertension. The experiment proved successful, with the technology being able to pick up 30,000 points within the heart activity of 250 patients. With the success of the aforementioned study, as well as the promise of other research on artificial intelligence, cardiology is seemingly moving towards a more technological practice.

 

One study was conducted in Finland where researchers enrolled 950 patients complaining of chest pain, who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following their initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records including sex, age, smoking status and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables, and was able to learn how the imaging data interacted and identify the patterns which preceded death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was modest, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. The predictive performance increased significantly when CCTA scan data was added to clinical and PET data, with accuracy of 95.4%.
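The workflow described here, feeding successively larger variable sets (clinical, then PET, then CCTA) into a boosted classifier and comparing accuracy, can be reproduced in outline with standard tools. scikit-learn does not ship LogitBoost, so gradient boosting stands in below, and the cohort is synthetic; only the structure of the comparison mirrors the study.

```python
# Sketch of the incremental-variable comparison described above. scikit-learn has no
# LogitBoost, so GradientBoostingClassifier stands in; all data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 950
clinical = rng.normal(size=(n, 10))        # e.g. age, sex, smoking status, diabetes ...
pet = rng.normal(size=(n, 17))             # myocardial blood-flow variables
ccta = rng.normal(size=(n, 58))            # plaque / stenosis / calcification variables

# Synthetic outcome loosely driven by all three variable blocks.
risk = 0.6 * clinical[:, 0] + 0.8 * pet[:, 0] + 1.0 * ccta[:, 0] + rng.normal(size=n)
event = (risk > np.quantile(risk, 0.92)).astype(int)   # ~8% death / MI, as a stand-in

for name, X in [("clinical only", clinical),
                ("clinical + PET", np.hstack([clinical, pet])),
                ("clinical + PET + CCTA", np.hstack([clinical, pet, ccta]))]:
    acc = cross_val_score(GradientBoostingClassifier(random_state=0), X, event,
                          cv=5, scoring="accuracy")
    print(f"{name:>22}: accuracy {acc.mean():.3f}")
```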

 

Findings from another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk for its development in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by the presence of a weak heart pump with a risk of overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity. However, it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded values for the area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, in patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.

 

In recent years, the analysis of large databases combined with deep learning has gradually come to play an important role in biomedical technology. Research on the development and application of artificial intelligence can be observed extensively in the analysis of medical records, images, single nucleotide polymorphism differences, and more. Clinically, patients may receive a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound and optical coherence tomography, and electrophysiology studies. By using an artificial intelligence deep learning system, the investigators hope not only to improve the diagnostic rate but also to predict patient recovery more accurately and improve medical quality in the near future.

 

The primary issue with using artificial intelligence in cardiology, or in any field of medicine for that matter, is the ethical questions it raises. Physicians and healthcare professionals, prior to their practice, swear the Hippocratic Oath, a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are technically left under the care of machines rather than of doctors. Furthermore, as machines may malfunction, the safety of patients is also on the line at all times. As such, while medical practitioners see the promise of artificial intelligence, they remain heavily cautious about its use, safety, and appropriateness in medical practice.

 

Issues and challenges faced by technological innovations in cardiology are being overcome by current research aiming to make artificial intelligence easily accessible and available to all. With that in mind, various projects are currently under study. For example, wearable AI technology aims to provide a mechanism by which patients and doctors can easily access and monitor cardiac activity remotely. An ideal instrument for monitoring, wearable AI technology ensures real-time updates, monitoring, and evaluation. Another direction for AI technology in cardiology is its use to record and validate empirical data to further analyze symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology aim to simplify and expand the scope of knowledge in the field for better patient care and treatment outcomes.

 

References:

 

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx

 

https://www.bhf.org.uk/informationsupport/heart-matters-magazine/research/artificial-intelligence

 

https://www.medicaldevice-network.com/news/heart-attack-artificial-intelligence/

 

https://www.nature.com/articles/s41569-019-0158-5

 

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711980/

 

www.j-pcs.org/article.asp

http://www.onlinejacc.org/content/71/23/2668

http://www.scielo.br/pdf/ijcs/v30n3/2359-4802-ijcs-30-03-0187.pdf

 

https://www.escardio.org/The-ESC/Press-Office/Press-releases/How-artificial-intelligence-is-tackling-heart-disease-Find-out-at-ICNC-2019

 

https://clinicaltrials.gov/ct2/show/NCT03877614

 

https://www.europeanpharmaceuticalreview.com/news/82870/artificial-intelligence-ai-heart-disease/

 

https://www.frontiersin.org/research-topics/10067/current-and-future-role-of-artificial-intelligence-in-cardiac-imaging

 


 

https://www.sciencedaily.com/releases/2019/05/190513104505.htm

 



Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science: Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary article from Jennifer Couzin-Frankel entitled “Medicine contends with how to use artificial intelligence,” the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting are discussed. In summary, these barriers result from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images in order to magnify small changes, such as with cardiac images, is developed within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators recently issued guidance describing how to convert research-based AI into improved medical images, and published this guidance in the Journal of the American College of Radiology. The group suggested greater collaboration among relevant parties in the development of AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on a lung scan indicative of pneumonia, and when tested in a group of new patients the algorithm could detect pneumonia with 93% accuracy.

However, when the group from Mount Sinai tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicative of bias within the training set: in other words, there was something unique about the way Mount Sinai performs its scans relative to other hospitals. Indeed, many of the patients Mount Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than stand-alone scanners.
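The underlying failure mode, a model that partly learns which hospital acquired an image rather than the disease itself, is easy to demonstrate in miniature. In the toy simulation below, a "score" that only encodes the acquiring site still achieves a high AUC on pooled data once the two sites have very different pneumonia prevalence; the 34.2% and 1.2% prevalence figures come from the paper's abstract, everything else is synthetic.

```python
# Toy simulation of site-prevalence confounding (synthetic data, not the study's):
# a score that only encodes which hospital acquired the image still ranks pooled
# cases well when pneumonia prevalence differs sharply between sites.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_msh, n_nih = 8_000, 22_000
site = np.r_[np.ones(n_msh), np.zeros(n_nih)]            # 1 = MSH, 0 = NIH
prevalence = np.where(site == 1, 0.342, 0.012)           # prevalences reported in the paper
pneumonia = rng.random(site.size) < prevalence

# "Model" that contains no disease information at all, only the acquiring site.
print("AUC from knowing the site alone: %.3f" % roc_auc_score(pneumonia, site))
# This lands in the mid-0.80s, close to the 0.861 the authors obtained by merely
# sorting radiographs by hospital system -- the kind of shortcut a CNN can exploit.
```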

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR1, Badgeley MA2, Liu M2, Costa AB3, Titano JJ4, Oermann EK3.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683


 

 

Surprisingly, not many researchers have begun to use data obtained from different hospitals. The FDA has issued some guidance on the matter but considers “locked,” or unchanging, AI software a medical device. However, they have just announced the development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary for the development of AI software that is used in multiple clinical settings. Otherwise, each hospital will need to develop its own software, used only on its own system, which would create a regulatory headache for the FDA.

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 



Prediction of Cardiovascular Risk by Machine Learning (ML) Algorithms: Best-performing algorithms by predictive capacity, ranked by area under the ROC curve (AUC): 1st, quadratic discriminant analysis; 2nd, NaiveBayes; and 3rd, neural networks, far exceeding the conventional risk-scaling methods in clinical use

Reporter: Aviva Lev-Ari, PhD, RN

The three machine-learning methods with the best predictive capacity had area under the ROC curve (AUC) scores of

  • 0.7086 (quadratic discriminant analysis),
  • 0.7084 (NaiveBayes), and
  • 0.7042 (neural networks).

The conventional risk-scaling methods, which are widely used in clinical practice in Spain, fell in at 11th and 12th places, with AUCs below 0.64.

 

Machine learning to predict cardiovascular risk

First published: 01 July 2019

This article has been accepted for publication and undergone full peer review but has not been through the copyediting, typesetting, pagination and proofreading process, which may lead to differences between this version and the Version of Record. Please cite this article as doi:10.1111/ijcp.13389

Abstract

Aims

To analyze the predictive capacity of 15 machine learning methods for estimating cardiovascular risk in a cohort and to compare them with other risk scales.

Methods

We calculated cardiovascular risk by means of 15 machine-learning methods and using the SCORE and REGICOR scales in 38,527 patients in the Spanish ESCARVAL RISK cohort, with five-year follow-up. We considered patients to be at high risk when the risk of a cardiovascular event was over 5% (according to SCORE and the machine learning methods) or over 10% (using REGICOR). The area under the receiver operating curve (AUC) and the C-index were calculated, as well as the diagnostic accuracy rate, error rate, sensitivity, specificity, positive and negative predictive values, positive likelihood ratio, and number needed to treat to prevent a harmful outcome.

Results

The method with the greatest predictive capacity was quadratic discriminant analysis, with an AUC of 0.7086, followed by NaiveBayes and neural networks, with AUCs of 0.7084 and 0.7042, respectively. REGICOR and SCORE ranked 11th and 12th, respectively, in predictive capacity, with AUCs of 0.63. Seven machine learning methods showed a 7% higher predictive capacity (AUC) as well as higher sensitivity and specificity than the REGICOR and SCORE scales.

Conclusions

Ten of the 15 machine learning methods tested have a better predictive capacity for cardiovascular events and better classification indicators than the SCORE and REGICOR risk assessment scales commonly used in clinical practice in Spain. Machine learning methods should be considered in the development of future cardiovascular risk scales.

This article is protected by copyright. All rights reserved.

SOURCE

https://onlinelibrary.wiley.com/doi/abs/10.1111/ijcp.13389
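For readers who want to see how such a comparison is assembled, the sketch below evaluates the three top-ranked model families (quadratic discriminant analysis, naive Bayes, and a small neural network) by cross-validated AUC using scikit-learn on a synthetic cohort. The ESCARVAL RISK data and the SCORE/REGICOR clinical scales are not reproduced; only the comparison scaffolding is shown.

```python
# Sketch of the model-comparison setup described in the Results, on synthetic data.
# QDA, naive Bayes, and a small neural network are compared by cross-validated ROC AUC;
# the real study additionally benchmarked the SCORE and REGICOR clinical scales.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a risk-factor cohort (age, blood pressure, lipids, ...),
# with a rare positive class to mimic five-year cardiovascular events.
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           weights=[0.93], flip_y=0.08, random_state=0)

models = {
    "quadratic discriminant analysis": QuadraticDiscriminantAnalysis(),
    "naive Bayes": GaussianNB(),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>32}: AUC {auc.mean():.3f}")
```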



Tweets, Pictures and Retweets at 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, MIT by @pharma_BI and @AVIVA1950 for #KIsymposium PharmaceuticalIntelligence.com and Social Media

 

Pictures taken in Real Time

 

Notification from Twitter.com on June 14, 2019 and in the 24 hours following the symposium

 

[Twitter notification log: likes, retweets, and replies received June 14-16, 2019, with user handles not preserved in this archive. Recoverable tweet and reply text included: “It was an incredibly touching and ‘metzamrer’ surprise to meet you at MIT”; a note that a machine learning tool characterizes proteins, which are biomarkers of disease development and progression, so scientists can learn more about their relationship to specific diseases and intervene earlier and more precisely; a tweet that machine learning and AI are undergoing dramatic changes and hold great promise for cancer research, diagnostics, and therapeutics (@KIinstitute); and “this needed to be done a long time ago.”]

 

Tweets by @pharma_BI and by @AVIVA1950

&

Retweets and replies by @pharma_BI and @AVIVA1950

eProceedings 18th Symposium 2019 covered in Amazing event, Keynote best talks @avivregev ’er @reginabarzelay

  1. Top lectures by @reginabarzilay @avivaregev

  2. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PMET MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  1.   Retweeted

    eProceedings 18th Symposium 2019 covered in Amazing event, Keynote best talks @avivregev ’er @reginabarzelay

  2. Top lectures by @reginabarzilay @avivaregev

  3. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PMET MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  4. eProceedings & eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PMET MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  5.   Retweeted
  6.   Retweeted

    Einstein, Curie, Bohr, Planck, Heisenberg, Schrödinger… was this the greatest meeting of minds, ever? Some of the world’s most notable physicists participated in the 1927 Solvay Conference. In fact, 17 of the 29 scientists attending were or became Laureates.

  7.   Retweeted

    identification in the will depend on highly

  8. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PMET MIT Kresge Auditorium, Cambridge, MA via

 
