
Archive for the ‘Artificial Intelligence Applications in Health Care’ Category


Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

Reporter: Aviva Lev-Ari, PhD, RN

 

HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package, the First AI-assisted Cardiac MRI Scan Solution

The future of imaging is here—and FDA cleared.

LOS ALTOS, Calif.–(BUSINESS WIRE)–HeartVista, a pioneer in AI-assisted MRI solutions, today announced that it received 510(k) clearance from the U.S. Food and Drug Administration to deliver its AI-assisted One Click™ MRI acquisition software for cardiac exams. Despite the many advantages of cardiac MRI, or cardiac magnetic resonance (CMR), its use has been largely limited by a lack of trained technologists, high costs, long scan times, and complexity of use. With HeartVista’s solution, cardiac MRI is now simple, time-efficient, affordable, and highly consistent.

“HeartVista’s Cardiac Package is a vital tool to enhance the consistency and productivity of cardiac magnetic resonance studies, across all levels of CMR expertise,” said Dr. Raymond Kwong, MPH, Director of Cardiac Magnetic Resonance Imaging at Brigham and Women’s Hospital and Associate Professor of Medicine at Harvard Medical School.

A recent multi-center, outcome-based study (MR-INFORM), published in the New England Journal of Medicine, demonstrated that non-invasive myocardial perfusion cardiovascular MRI was as good as invasive FFR, the previous gold standard method, to guide treatment for patients with stable chest pain, while leading to 20% fewer catheterizations.

“This recent NEJM study further reinforces the clinical literature that cardiac MRI is the gold standard for cardiac diagnosis, even when compared against invasive alternatives,” said Itamar Kandel, CEO of HeartVista. “Our One Click™ solution makes these kinds of cardiac MRI exams practical for widespread adoption. Patients across the country now have access to the only AI-guided cardiac MRI exam, which will deliver continuous imaging via an automated process, minimize errors, and simplify scan operation. Our AI solution generates definitive, accurate and actionable real-time data for cardiologists. We believe it will elevate the standard of care for cardiac imaging, enhance patient experience and access, and improve patient outcomes.”

HeartVista’s FDA-cleared Cardiac Package uses AI-assisted software to prescribe the standard cardiac views with just one click, and in as few as 10 seconds, while the patient breathes freely. A unique artifact detection neural network is incorporated in HeartVista’s protocol to identify when the image quality is below the acceptable threshold, prompting the operator to reacquire the questioned images if desired. Inversion time is optimized with further AI assistance prior to the myocardial delayed-enhancement acquisition. A 4D flow measurement application uses a non-Cartesian, volumetric parallel imaging acquisition to generate high quality images in a fraction of the time. The Cardiac Package also provides preliminary measures of left ventricular function, including ejection fraction, left ventricular volumes, and mass.
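The acquire, quality-check, and reacquire loop described above can be sketched as a simple gating step. Everything below (the threshold value, the scores, the function name) is a hypothetical illustration of the general idea, not HeartVista's actual implementation:

```python
# Hypothetical sketch of the acquire -> quality-check -> reacquire loop.
# The threshold, scores, and function names are illustrative assumptions,
# not HeartVista's software.

QUALITY_THRESHOLD = 0.80  # assumed acceptability cutoff for the quality score

def triage_views(view_scores, threshold=QUALITY_THRESHOLD):
    """Split acquired cardiac views into accepted ones and ones flagged
    for optional reacquisition, based on a per-view quality score in [0, 1]."""
    accepted, flagged = {}, {}
    for view, score in view_scores.items():
        (accepted if score >= threshold else flagged)[view] = score
    return accepted, flagged

# Example: scores an artifact-detection network might emit for standard views.
scores = {"2-chamber": 0.95, "3-chamber": 0.72, "4-chamber": 0.88, "short-axis": 0.91}
accepted, flagged = triage_views(scores)
print(sorted(flagged))  # → ['3-chamber']: the operator is prompted to reacquire
```

The point of the gate is that only sub-threshold views interrupt the operator; acceptable views flow through the automated protocol untouched.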

HeartVista is presenting its new One Click™ Cardiac Package features at the Radiological Society of North America (RSNA) annual meeting in Chicago, on Dec. 4, 2019, at 2 p.m., in the AI Showcase Theater. HeartVista will also be at Booth #11137 for the duration of the conference, from Dec. 1 through Dec. 5.

About HeartVista

HeartVista believes in leveraging artificial intelligence to improve access to MRI and patient care. The company’s One Click™ software platform enables real-time MRI for a variety of clinical and research applications. Its AI-driven, one-click cardiac localization method received first place honors at the International Society for Magnetic Resonance in Medicine’s Machine Learning Workshop in 2018. The company’s innovative technology originated at the Stanford Magnetic Resonance Systems Research Laboratory. HeartVista is funded by Khosla Ventures and the National Institutes of Health’s Small Business Innovation Research program.

For more information, visit www.heartvista.ai

SOURCE

Reply-To: Kimberly Ha <kimberly.ha@kkhadvisors.com>

Date: Tuesday, October 29, 2019 at 11:01 AM

To: Aviva Lev-Ari <AvivaLev-Ari@alum.berkeley.edu>

Subject: HeartVista Receives FDA Clearance for First AI-assisted Cardiac MRI Solution

Read Full Post »


Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis

Yu Fu (1), Alexander W Jung (1), Ramon Viñas Torne (1), Santiago Gonzalez (1,2), Harald Vöhringer (1), Mercedes Jimenez-Linan (3), Luiza Moore (3,4), and Moritz Gerstung (1,5)#

# to whom correspondence should be addressed

1) European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK.
2) Current affiliation: Institute for Research in Biomedicine (IRB Barcelona), Parc Científic de Barcelona, Barcelona, Spain.
3) Department of Pathology, Addenbrooke’s Hospital, Cambridge, UK.
4) Wellcome Sanger Institute, Hinxton, UK.
5) European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany.

Correspondence:

Dr Moritz Gerstung
European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, CB10 1SA, UK.
Tel: +44 (0) 1223 494636
E-mail: moritz.gerstung@ebi.ac.uk

Abstract

Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis

Here we use deep transfer learning to quantify histopathological patterns across 17,396 H&E stained histopathology image slides from 28 cancer types and correlate these with underlying genomic and transcriptomic data. Pan-cancer computational histopathology (PC-CHiP) classifies the tissue origin across organ sites and provides highly accurate, spatially resolved tumor and normal distinction within a given slide. The learned computational histopathological features correlate with a large range of recurrent genetic aberrations, including whole genome duplications (WGDs), arm-level copy number gains and losses, focal amplifications and deletions as well as driver gene mutations within a range of cancer types. WGDs can be predicted in 25/27 cancer types (mean AUC=0.79), including those that were not part of model training. Similarly, we observe associations with 25% of mRNA transcript levels, which makes it possible to learn and localise histopathological patterns of molecularly defined cell types on each slide. Lastly, we find that computational histopathology provides prognostic information augmenting histopathological subtyping and grading in the majority of cancers assessed, which pinpoints prognostically relevant areas such as necrosis or infiltrating lymphocytes on each tumour section. Taken together, these findings highlight the large potential of PC-CHiP to discover new molecular and prognostic associations, which can augment diagnostic workflows and lay out a rationale for integrating molecular and histopathological data.
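The AUC values quoted above (e.g. mean AUC = 0.79 for WGD prediction) can be read as the probability that a randomly chosen positive slide is scored higher than a randomly chosen negative one, i.e. the normalized Mann–Whitney U statistic. A minimal sketch with made-up scores:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: P(score_pos > score_neg) + 0.5 * P(tie),
    i.e. the normalized Mann-Whitney U statistic."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Toy predicted probabilities for slides with / without whole genome duplication.
wgd = [0.9, 0.8, 0.7, 0.35]
no_wgd = [0.6, 0.4, 0.3, 0.2, 0.1]
print(auc(wgd, no_wgd))  # → 0.9
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why a mean of 0.79 across 25/27 cancer types indicates a real, if imperfect, morphological signal of WGD.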

SOURCE

https://www.biorxiv.org/content/10.1101/813543v1

Key points

● Pan-cancer computational histopathology analysis with deep learning extracts histopathological patterns and accurately discriminates 28 cancer and 14 normal tissue types

● Computational histopathology predicts whole genome duplications, focal amplifications and deletions, as well as driver gene mutations

● Widespread correlations with gene expression are indicative of immune infiltration and proliferation

● Prognostic information augments conventional grading and histopathology subtyping in the majority of cancers

 

Discussion

Here we presented PC-CHiP, a pan-cancer transfer learning approach to extract computational histopathological features across 42 cancer and normal tissue types and their genomic, molecular and prognostic associations. Histopathological features, originally derived to classify different tissues, contained rich histologic and morphological signals predictive of a range of genomic and transcriptomic changes as well as survival. This shows that computer vision not only can reproduce predefined tissue labels with high accuracy, but also quantifies diverse histological patterns predictive of a broad range of genomic and molecular traits that were not part of the original training task. As the predictions are exclusively based on standard H&E-stained tissue sections, our analysis highlights the high potential of computational histopathology to digitally augment existing histopathological workflows. The strongest genomic associations were found for whole genome duplications, which can in part be explained by nuclear enlargement and increased nuclear intensities, but seemingly also stem from tumour grade and other histomorphological patterns contained in the high-dimensional computational histopathological features. Further, we observed associations with a range of chromosomal gains and losses, focal deletions and amplifications as well as driver gene mutations across a number of cancer types. These data demonstrate that genomic alterations change the morphology of cancer cells, as in the case of WGD, but possibly also that certain aberrations preferentially occur in distinct cell types, reflected by the tumor histology. Whichever is cause and which is consequence, these associations lay out a route towards genomically defined histopathology subtypes, which will enhance and refine conventional assessment.
Further, a broad range of transcriptomic correlations was observed, reflecting both immune cell infiltration and the higher tumor cell density that results from proliferation. These examples illustrate the remarkable property that machine learning not only establishes novel molecular associations from pre-computed histopathological feature sets but also allows these traits to be localised within a larger image. While this exemplifies the power of large-scale data analysis to detect and localise recurrent patterns, it is probably not superior to spatially annotated training data. Yet such data can, by definition, only be generated for associations which are known beforehand. This appears straightforward, albeit laborious, for existing histopathology classifications, but more challenging for molecular readouts. Novel spatial transcriptomic44,45 and sequencing technologies46 bring within reach spatially matched molecular and histopathological data, which would serve as a gold standard for combining imaging and molecular patterns. Across cancer types, computational histopathological features showed a good level of prognostic relevance, substantially improving prognostic accuracy over conventional grading and histopathological subtyping in the majority of cancers. It is thus very remarkable that such predictive signals can be learned in a fully automated fashion. Still, at least at the current resolution, the improvement over a full molecular and clinical workup was relatively small.
This might be a consequence of the far-ranging relations between histopathology and molecular phenotypes described here, implying that histopathology is a reflection of the underlying molecular alterations rather than an independent trait. Yet it probably also highlights the challenges of unambiguously quantifying histopathological signals in – and combining signals from – individual areas, which requires very large training datasets for each tumour entity. From a methodological point of view, the prediction of molecular traits can clearly be improved. In this analysis, we adopted – for simplicity and to avoid overfitting – a transfer learning approach in which an existing deep convolutional neural network, developed for classification of everyday objects, was fine-tuned to predict cancer and normal tissue types. The implicit imaging feature representation was then used to predict molecular traits and outcomes. Instead of employing this two-step procedure, which risks missing patterns irrelevant for the initial classification task, one might directly employ either training on the molecular trait of interest, or ideally multi-objective learning. Further improvement may also be related to the choice of the CNN architecture. Everyday images have no defined scale due to a variable z-dimension; therefore, the algorithms need to be able to detect the same object at different sizes. This clearly is not the case for histopathology slides, in which one pixel corresponds to a defined physical size at a given magnification. Therefore, possibly less complex CNN architectures may be sufficient for quantitative histopathology analyses, and also show better generalisation. Here, in our proof-of-concept analysis, we observed a considerable dependence of the feature representation on known and possibly unknown properties of our training data, including the image compression algorithm and its parameters.
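The two-step procedure described here (a frozen pretrained feature representation, then a separately trained predictor of a molecular trait) can be sketched on synthetic data. A fixed random projection stands in for the fine-tuned CNN's feature layer; all data and dimensions below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (stand-in for the fine-tuned CNN): a frozen feature extractor.
# A fixed random projection + ReLU plays the role of the pretrained network's
# penultimate layer; in the paper this is a CNN fine-tuned on tissue typing.
W_frozen = rng.normal(size=(8, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen: never updated below

# Toy "slides": 8 raw measurements each; the downstream trait (e.g. WGD status)
# depends on a linear direction in input space.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Step 2: train only a logistic head on the frozen features.
F = extract_features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted probabilities
    grad = p - y                              # logistic-loss gradient signal
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
print(f"training accuracy of the two-step model: {acc:.2f}")
```

Only the head's weights are updated; the extractor never sees the downstream labels, which is exactly the property that "risks missing patterns irrelevant for the initial classification task."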
Some of these issues could be overcome by amending and retraining the network to isolate the effect of confounding factors and additional data augmentation. Still, given the flexibility of deep learning algorithms and the associated risk of overfitting, one should generally be cautious about the generalisation properties and critically assess whether a new image is appropriately represented. Looking forward, our analyses revealed the enormous potential of using computer vision alongside molecular profiling. While the eye of a trained human may still constitute the gold standard for recognising clinically relevant histopathological patterns, computers have the capacity to augment this process by sifting through millions of images to retrieve similar patterns and establish associations with known and novel traits. As our analysis showed this helps to detect histopathology patterns associated with a range of genomic alterations, transcriptional signatures and prognosis – and highlight areas indicative of these traits on each given slide. It is therefore not too difficult to foresee how this may be utilised in a computationally augmented histopathology workflow enabling more precise and faster diagnosis and prognosis. Further, the ability to quantify a rich set of histopathology patterns lays out a path to define integrated histopathology and molecular cancer subtypes, as recently demonstrated for colorectal cancers47 .

Lastly, our analyses provide proof-of-concept for these principles, and we expect them to be greatly refined in the future based on larger training corpora and further algorithmic refinements.

bioRxiv preprint first posted online Oct. 25, 2019; doi: http://dx.doi.org/10.1101/813543. Made available under a CC-BY-NC 4.0 International license.

SOURCE

https://www.biorxiv.org/content/biorxiv/early/2019/10/25/813543.full.pdf

 

Other related articles published in this Open Access Online Scientific Journal include the following: 

 

CancerBase.org – The Global HUB for Diagnoses, Genomes, Pathology Images: A Real-time Diagnosis and Therapy Mapping Service for Cancer Patients – Anonymized Medical Records accessible to anyone on Earth

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/28/cancerbase-org-the-global-hub-for-diagnoses-genomes-pathology-images-a-real-time-diagnosis-and-therapy-mapping-service-for-cancer-patients-anonymized-medical-records-accessible-to/

 

631 articles in this journal include the keyword “Pathology” in their titles

https://pharmaceuticalintelligence.com/?s=Pathology

 

Read Full Post »


@CHI 1st AI World Conference and Expo, October 23 – October 25, 2019, Seaport World Trade Center, Boston, MA.  Presentations by Four Israeli companies explaining how they use AI technologies in their products @ NEIBC Meetup AI World Conference and Expo, 10/24/2019 @6:30PM Waterfront 1

#AIWORLD

@AIWORLDEXPO

@pharma_BI

@AVIVA1950

Reporter: Aviva Lev-Ari, PhD, RN

 

  • When: October 24, 2019
  • Time: 6:30 pm
  • Where: Seaport World Trade Center, Boston, MA
  • Room Location: Waterfront 1

Speakers Include:

Registration:

  • To gain access to the NEIBC Meetup, please RSVP below and use code 1968-EXHP for a complimentary pass to the exhibit
  • To attend the conference, use the NEIBC19 discount code for $200 off conference registration

RSVP NOW

AI World Conference and Expo has become the industry’s largest independent business event focused on the state of the practice of AI in the enterprise. The AI World 3-day program delivers a comprehensive spectrum of content, networking, and business development opportunities, all designed to help you cut through the hype and navigate through the complex landscape of AI business solutions. Attend AI World and learn how innovators are successfully deploying AI and intelligent automation to accelerate innovation efforts, build competitive advantage, drive new business opportunities, and reduce costs.

250+ Speakers

120+ Sponsors

2,700+ Attendees

100+ Sessions

SOURCE

From: “Dan Trajman @ NEIBC” <dan.trajman@neibc.org>

Reply-To: <dan.trajman@neibc.org>

Date: Wednesday, October 23, 2019 at 11:50 AM

To: Aviva Lev-Ari <AvivaLev-Ari@alum.berkeley.edu>

Subject: Israeli Companies Presenting at AI World October 24, 2019

 

Event Brochure

https://aiworld.com/docs/librariesprovider28/agenda/19/aiworld-conference-expo-2019.pdf

 

Plenary Program

WEDNESDAY, OCTOBER 23

9:00 AM SUMMIT KICK OFF: AI Becomes Real

Scott Lundstrom, Group Vice President and General Manager, IDC Government and Health Insights, IDC and AI World, Conference Co-Chair

 

9:10 AM SUMMIT KEYNOTE: Business Strategy with Artificial Intelligence

Sam Ransbotham, PhD, Professor, Information Systems, Boston College; Academic Contributing Editor, MIT Sloan Management Review

 

9:35 AM EXECUTIVE ROUNDTABLE:

AI Drives Innovation in Enterprise Applications

Moderator: Mickey North-Rizza, Research Vice President, Enterprise Applications, IDC

Panelists:

David Castillo, PhD, Managing Vice President, Machine Learning, Capital One

Mukesh Dalal, PhD, Chief Analytics Officer & Chief Data Scientist, Bose Corporation

Madhumita Bhattacharyya, Managing Director – Enterprise Data & Analytics, Protiviti

Sasha Caskey, CTO & Co-Founder, Kasisto

 

10:05 AM KEYNOTE: Evolving Role of CDAOS in the New Era – An Organizational Construct

Anju Gupta, Vice President, Chief Data and Analytics Officer, Enterprise Holdings

 

10:30 – 10:50 AM NETWORKING BREAK

 

10:50 AM EXECUTIVE ROUNDTABLE:

 

The Evolution of Conversational Assistants

 

Moderator:

Reenita Malholtra Hora, Director of Marketing & Communications, SRI International

Panelists:

William Mark, PhD, President, SRI

Karen Myers, Lab Director, SRI International’s AI Center

 

11:20 AM Talk Title to be Announced

Genevy Dimitrion VP, Enterprise Data and Analytics, Humana

 

11:40 AM How AI Maturity Impacts a Winning Corporate Strategy

Ritu Jyoti, Program Vice President, IDC

 

4:20 PM PLENARY KEYNOTE PANEL:

Learning from XPRIZE Startups to Achieve Successful AI Innovation

Moderator:

Devin Krotman, Director, IBM Watson AI XPRIZE, XPRIZE Foundation

 

Panelists: Eleni Miltsakaki, Founder and CEO, Choosito

Ellie Gordon, Founder, CEO, & Designer, Behaivior AI

Daniel Fortin, President, AITera Inc.

 

12:00 PM LUNCHEON KEYNOTE:

Case Studies of Conversational AI – Real Deployments at Scale

Jim Freeze, Chief Marketing Officer, Interactions

 

Sponsored by Ben Bauks, Senior Business Systems Analyst, Constant Contact

 

THURSDAY, OCTOBER 24

 

8:20 AM BREAKFAST KEYNOTE:

The Promise and Pain of Computer Vision in Retail, Healthcare, and Agriculture

Ben Schneider, Vice President, Product, Alegion

 

9:00 AM CONFERENCE INTRODUCTION

Eliot Weinman, Founder & Conference Chair, AI World; Executive Editor, AI Trends

 

9:05 AM INTRODUCTORY REMARKS

Scott Lundstrom, Group Vice President and General Manager, IDC Government and Health Insights, IDC and AI World, Conference Co-Chair

 

9:15 AM KEYNOTE PRESENTATION:

The Human Strategy

Alex Sandy Pentland, PhD, Professor, Engineering, Business, Media Lab, MIT

 

9:45 AM KEYNOTE:

Uber’s Intelligent Insights Assistant

Franziska Bell, PhD, Director, Data Science, Data Science Platforms, Uber

 

10:15 AM KEYNOTE:

AI in Finance: Present and Future, Hype and Reality

Charles Elkan, PhD, Managing Director, Goldman Sachs

 

10:40 – 11:00 AM COFFEE BREAK

 

11:00 AM KEYNOTE:

AI at Work in Legal, News and Tax & Accounting

Khalid Al-Kofahi, PhD, Vice President, Research and Development, Head – Center for AI and Cognitive Computing, Thomson Reuters

 

11:25 AM EXECUTIVE ROUNDTABLE:

Disinformation, Infosec, Cognitive Security and Influence Manipulation

Moderator:

Michael Krigsman, Industry Analyst, CXOTalk

 

Panelists:

Sara-Jayne Terp, Data Scientist, Bodacea Light Industries LLC

Bob Gourley, Co-Founder and CTO, OODA LLC

Pablo Breuer, Director of US Special Operations Command Donovan Group and Senior Military Advisor and Innovation Officer, SOFWERX

Anthony Scriffignano, PhD, SVP, Chief Data Scientist, Dun & Bradstreet

 

Sponsored by

PUSHING THE BOUNDARIES OF AI – Providing the expertise required to accelerate the evolution of human progress in the age of artificial intelligence http://dellemc.com/AI

 

11:30 AM KEYNOTE:

How AI is Helping to Improve Canadian Lives Through AML

Vishal Gossain, Vice President, Global Risk Management, ScotiaBank

 

FRIDAY, OCTOBER 25

 

8:15 AM KEYNOTE:

AI World Society Roundtable on AI-Healthcare

Moderator:

Ed Burns, Site Editor, TechTarget

 

Panelists:

Professor David Silbersweig, Board Member of BGF, Harvard Medical School

Professor Mai Trong Khoa, PhD, Chairman of the Nuclear Medicine and Oncology Council, Director of the Gene-Stem Cell Center, Bach Mai Hospital; Senior Lecturer, Hanoi Medical University; Secretary of the National Council of Professorship in Medicine in Vietnam

Truong Van Phuoc, PhD, Former Acting Chairman, State Inspectory Committee of Finance of Vietnam; Senior Advisor to Chairman, Vietbank

Truong Vinh Long, MD, CEO, Gia An 115 Hospital

 

10:00 AM KEYNOTE:

AI in Pharma: Where we are Today and How we Will Succeed in the Future

Natalija Jovanovic, PhD, Chief Digital Officer, Sanofi Pasteur

 

10:30 AM Startup Awards Announcement

John Desmond, Principal at JD Content Services, Editor AI Trends

 

10:35 – 10:50 AM COFFEE BREAK IN THE EXPO

 

10:50 AM EXECUTIVE ROUNDTABLE:

Enterprise AI Innovations

Moderator:

Nick Patience, Founder & Research Vice President, Software, 451 Research

Rudina Seseri, Founder and Managing Partner, Glasswing Venture

Norbert Monfort, Vice President, IT Transformation and Innovation, Assurant Global Technology

Dawn Fitzgerald, Director of Digital Transformation Data Center Operations, Schneider Electric

 

PLENARY PROGRAM

 

8:45 AM CONFERENCE INTRODUCTION

Scott Lundstrom, Group Vice President and General Manager, IDC Government and Health Insights, IDC and AI World, Conference Co-Chair

 

8:50 AM KEYNOTE:

Artificial Intelligence in Sustainable Development: An Educational Perspective

Enver Yucel, Chairman, Bahçeşehir University

 

9:00 AM KEYNOTE:

Enhancing Human Capability with Intelligent Machine Teammates

Julie Shah, Associate Professor, Dept of Aeronautics and Astronautics, Computer Science and AI Lab, MIT

 

9:30 AM KEYNOTE:

Democracy, Ethics and the Rule of Law in the Age of Artificial Intelligence

Paul F. Nemitz, Principal Advisor in the Directorate-General for Justice and Consumers of the European Commission

 

12:00 PM LUNCHEON KEYNOTE:

How AI/ML is Changing the Face of Enterprise IT

Robert Ames, Senior Director, National Technology Strategy, VMware Research, VMware

 

SOURCE

https://aiworld.com/docs/librariesprovider28/agenda/19/aiworld-conference-expo-2019.pdf

Read Full Post »


Showcase: How Deep Learning could help radiologists spend their time more efficiently

Reporter and Curator: Dror Nir, PhD

 

The debate over the role AI could or should play in modern radiology is lively, presenting a wide spectrum of positive expectations as well as fears.

The article A Deep Learning Model to Triage Screening Mammograms: A Simulation Study, published this month, shows the best, and very much feasible, utility for AI in radiology at the present time. It would be of great benefit to radiologists and patients if such applications were incorporated (with all safety precautions taken) into routine practice as soon as possible.

In a simulation study, a deep learning model to triage mammograms as cancer free improves workflow efficiency and significantly improves specificity while maintaining a noninferior sensitivity.

Background

Recent deep learning (DL) approaches have shown promise in improving sensitivity but have not addressed limitations in radiologist specificity or efficiency.

Purpose

To develop a DL model to triage a portion of mammograms as cancer free, improving performance and workflow efficiency.

Materials and Methods

In this retrospective study, 223 109 consecutive screening mammograms performed in 66 661 women from January 2009 to December 2016 were collected with cancer outcomes obtained through linkage to a regional tumor registry. This cohort was split by patient into 212 272, 25 999, and 26 540 mammograms from 56 831, 7021, and 7176 patients for training, validation, and testing, respectively. A DL model was developed to triage mammograms as cancer free and evaluated on the test set. A DL-triage workflow was simulated in which radiologists skipped mammograms triaged as cancer free (interpreting them as negative for cancer) and read mammograms not triaged as cancer free by using the original interpreting radiologists’ assessments. Sensitivities, specificities, and percentage of mammograms read were calculated, with and without the DL-triage–simulated workflow. Statistics were computed across 5000 bootstrap samples to assess confidence intervals (CIs). Specificities were compared by using a two-tailed t test (P < .05) and sensitivities were compared by using a one-sided t test with a noninferiority margin of 5% (P < .05).
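The bootstrap confidence intervals described above can be sketched in a few lines. The labels below are toy data, not the study's mammograms; only the resampling procedure (patients drawn with replacement, percentile interval over the resampled statistic) mirrors the text:

```python
import random

def sensitivity(truth, pred):
    """Fraction of true cancers (truth == 1) that were called positive."""
    tp = sum(t and p for t, p in zip(truth, pred))
    fn = sum(t and not p for t, p in zip(truth, pred))
    return tp / (tp + fn)

def bootstrap_ci(truth, pred, stat, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a reader-performance statistic,
    resampling patients with replacement (the study used 5000 samples)."""
    rng = random.Random(seed)
    n = len(truth)
    stats = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        stats.append(stat([truth[i] for i in sample], [pred[i] for i in sample]))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy ground truth (cancer yes/no) and reader calls - illustrative only.
truth = [1] * 30 + [0] * 170
pred  = [1] * 27 + [0] * 3 + [1] * 10 + [0] * 160   # 27/30 cancers detected
lo, hi = bootstrap_ci(truth, pred, sensitivity)
print(f"sensitivity {sensitivity(truth, pred):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The same machinery yields the specificity intervals by swapping in a specificity function; the comparison tests in the study are then run on these resampled distributions.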

Results

The test set included 7176 women (mean age, 57.8 years ± 10.9 [standard deviation]). When reading all mammograms, radiologists obtained a sensitivity and specificity of 90.6% (173 of 191; 95% CI: 86.6%, 94.7%) and 93.5% (24 625 of 26 349; 95% CI: 93.3%, 93.9%). In the DL-simulated workflow, the radiologists obtained a sensitivity and specificity of 90.1% (172 of 191; 95% CI: 86.0%, 94.3%) and 94.2% (24 814 of 26 349; 95% CI: 94.0%, 94.6%) while reading 80.7% (21 420 of 26 540) of the mammograms. The simulated workflow improved specificity (P = .002) and obtained a noninferior sensitivity with a margin of 5% (P < .001).

Conclusion

This deep learning model has the potential to reduce radiologist workload and significantly improve specificity without harming sensitivity.

Read Full Post »


Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Cardiology is a vast field that focuses on a large number of diseases specifically dealing with the heart, the circulatory system, and its functions. As such, similar symptomatologies and diagnostic features may be present in an individual, making it difficult for a doctor to easily isolate the actual heart-related problem. Consequently, the use of artificial intelligence aims to relieve doctors of this hurdle and extend better quality of care to patients. Results of screening tests such as echocardiograms, MRIs, or CT scans have long been proposed to be analyzed using more advanced techniques in the field of technology. As such, while artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

 

The continuous development of the technological sector has enabled the industry to merge with medicine in order to create new integrated, reliable, and efficient methods of providing quality health care. One of the ongoing trends in cardiology at present is the proposed utilization of artificial intelligence (AI) in augmenting and extending the effectiveness of the cardiologist. This is because AI or machine learning would allow for an accurate measure of patient functioning and diagnosis from the beginning up to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology aims to focus on research and development, clinical practice, and population health. Created to be an all-in-one mechanism in cardiac healthcare, AI technologies incorporate complex algorithms in determining relevant steps needed for a successful diagnosis and treatment. The role of artificial intelligence specifically extends to the identification of novel drug therapies, disease stratification or statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectiveness and efficiency.

 

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions. This is done by feeding the computer information from hundreds or thousands of patients, plus instructions (an algorithm) on how to use that information. This information includes heart scans, genetic and other test results, and how long each patient survived. These scans are in exquisite detail, and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer starts to work out which factors affected the patients’ outlook, so it can make predictions about other patients.
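The training idea in this paragraph (predicting a new patient's outlook from many past patients) can be illustrated with a minimal nearest-neighbour predictor. The numbers below are toy values, and this is an illustration of supervised learning in general, not any clinical model:

```python
def predict_outcome(new_patient, past_patients, past_outcomes, k=3):
    """Predict a new patient's outcome by majority vote of the k past
    patients with the most similar measurements (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(zip(past_patients, past_outcomes),
                    key=lambda po: dist(po[0], new_patient))
    votes = [outcome for _, outcome in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy training data: (scan-derived measure, test result) -> outcome label.
past = [(0.9, 0.8), (0.85, 0.9), (0.95, 0.7), (0.2, 0.1), (0.1, 0.3), (0.15, 0.2)]
outcomes = ["good", "good", "good", "poor", "poor", "poor"]
print(predict_outcome((0.8, 0.75), past, outcomes))  # → good
```

Real systems replace the hand-picked distance and vote with learned models, but the principle is the same: past patients whose measurements resemble the new patient's drive the prediction.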

 

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These are based on a series of variables like weight, age and lifestyle. However, they do not always have the desired levels of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients published in 2017. The researchers utilized cardiac MRI-based algorithms coupled with a 3D systolic cardiac motion pattern to accurately predict the health outcomes of patients with pulmonary hypertension. The experiment proved to be successful, with the technology being able to pick up 30,000 points within the heart activity of 250 patients. With the success of the aforementioned study, as well as the promise of other research on artificial intelligence, cardiology is seemingly moving towards a more technological practice.

 

One study was conducted in Finland where researchers enrolled 950 patients complaining of chest pain, who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following their initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records including sex, age, smoking status and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables, and was able to learn how the imaging data interacted and identify the patterns which preceded death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was modest, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. The predictive performance increased significantly when CCTA scan data was added to clinical and PET data, with accuracy of 95.4%.
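LogitBoost, named above, fits an additive model by repeatedly adding simple base learners that correct the current ensemble's mistakes. The sketch below uses the closely related least-squares flavour of boosting with decision stumps on toy 1-D data, as a stand-in illustration rather than the study's actual programme or variables:

```python
def best_stump(xs, residuals):
    """Least-squares decision stump on one feature: pick the threshold whose
    two leaf means best fit the current residuals."""
    best = None
    order = sorted(set(xs))
    for t in order[:-1]:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    return best[1:]

def boost(xs, ys, rounds=20):
    """Stagewise additive boosting: each stump is fit to what the ensemble
    so far still gets wrong (the residuals)."""
    ensemble, residuals = [], list(ys)
    for _ in range(rounds):
        t, lmean, rmean = best_stump(xs, residuals)
        ensemble.append((t, lmean, rmean))
        residuals = [r - (lmean if x <= t else rmean)
                     for x, r in zip(xs, residuals)]
    return ensemble

def score(ensemble, x):
    return sum(l if x <= t else r for t, l, r in ensemble)

# Toy 1-D data a single threshold cannot separate: events only mid-range.
xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, 1, 1, -1, -1]        # +1 = event, -1 = no event
model = boost(xs, ys)
preds = [1 if score(model, x) > 0 else -1 for x in xs]
print(preds)  # → [-1, -1, 1, 1, -1, -1]
```

A single stump cannot fit the interval pattern, but the boosted sum of stumps recovers it, which is the mechanism that lets an ensemble learn how many weak variables interact.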

 

The findings of another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk of developing it in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by a weakened heart pump with a risk of progressing to overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity, yet it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded values for the area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, among patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.
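The performance figures quoted above (area under the curve, sensitivity, specificity, accuracy) are standard classification metrics, and how they are computed from a model's raw outputs can be sketched in a few lines. The labels and scores below are made up for illustration, not taken from the study.

```python
def confusion_metrics(labels, preds):
    # labels/preds: 1 = dysfunction present, 0 = absent.
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true cases flagged
    specificity = tn / (tn + fp)   # fraction of healthy correctly cleared
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

def auc(labels, scores):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # probability that a random positive outranks a random negative.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]       # model outputs in [0, 1]
preds = [1 if s >= 0.5 else 0 for s in scores]  # 0.5 decision threshold
```

Note that AUC is threshold-free, while sensitivity, specificity and accuracy all depend on where the decision threshold is set.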

 

In recent years, the analysis of big databases combined with deep learning has gradually come to play an important role in biomedical technology, with research on the development and application of artificial intelligence spanning medical record analysis, image analysis, single nucleotide polymorphism analysis, and more. Clinically, cardiac patients may receive a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound, optical coherence tomography, and electrophysiology studies. By using an artificial intelligence deep learning system, the investigators hope not only to improve diagnostic rates but also to predict patient recovery more accurately and improve medical quality in the near future.

 

The primary issue with using artificial intelligence in cardiology, or in any field of medicine for that matter, is the ethical questions it raises. Physicians and healthcare professionals swear, prior to their practice, the Hippocratic Oath—a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are technically left under the care of machines rather than of doctors. Furthermore, because machines may malfunction, the safety of patients is on the line at all times. As such, while medical practitioners see the promise of artificial intelligence, they remain deeply cautious about its use, safety, and appropriateness in medical practice.

 

The issues and challenges faced by technological innovation in cardiology are being addressed by current research aiming to make artificial intelligence easily accessible and available to all. With that in mind, various projects are currently under study. For example, work on wearable AI technology aims to develop a mechanism by which patients and doctors can easily access and monitor cardiac activity remotely. An ideal instrument for monitoring, wearable AI technology provides real-time updates, monitoring, and evaluation. Another direction for AI in cardiology is using the technology to record and validate empirical data for further analysis of symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology aim to simplify and expand the field's scope of knowledge for better patient care and treatment outcomes.

 

References:

 

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx

 

https://www.bhf.org.uk/informationsupport/heart-matters-magazine/research/artificial-intelligence

 

https://www.medicaldevice-network.com/news/heart-attack-artificial-intelligence/

 

https://www.nature.com/articles/s41569-019-0158-5

 

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711980/

 

www.j-pcs.org/article.asp

http://www.onlinejacc.org/content/71/23/2668

http://www.scielo.br/pdf/ijcs/v30n3/2359-4802-ijcs-30-03-0187.pdf

 

https://www.escardio.org/The-ESC/Press-Office/Press-releases/How-artificial-intelligence-is-tackling-heart-disease-Find-out-at-ICNC-2019

 

https://clinicaltrials.gov/ct2/show/NCT03877614

 

https://www.europeanpharmaceuticalreview.com/news/82870/artificial-intelligence-ai-heart-disease/

 

https://www.frontiersin.org/research-topics/10067/current-and-future-role-of-artificial-intelligence-in-cardiac-imaging

 


 

https://www.sciencedaily.com/releases/2019/05/190513104505.htm

 

Read Full Post »


Artificial throat may give voice to the voiceless

Reporter: Irina Robu, PhD

Flexible sensors have attracted more and more attention as a fundamental component of anthropomorphic robot research, medical diagnosis and physical health monitoring. The fundamental mechanism of such a sensor is the triboelectric effect, which induces electrostatic charges on the surfaces between two different materials. Just like a plate capacitor, current is produced as the capacitance of the parallel-plate structure fluctuates under small mechanical disturbances, generating the output current/voltage.
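The plate-capacitor analogy can be made concrete with a deliberately simplified constant-voltage model: if the gap between the plates fluctuates, the capacitance changes and a current i = V · dC/dt flows. This is a rough sketch of the analogy only, not the actual triboelectric device physics, and all the geometry and drive values below are invented.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area, gap):
    # Parallel-plate capacitance: C = eps0 * A / d.
    return EPS0 * area / gap

def output_current(area, gap_at, voltage, t, dt=1e-7):
    # Under the constant-voltage simplification, i = V * dC/dt;
    # approximate the time derivative with a finite difference.
    dC = capacitance(area, gap_at(t + dt)) - capacitance(area, gap_at(t))
    return voltage * dC / dt

def gap_at(t):
    # A small mechanical disturbance: the gap oscillates around 100 um
    # at 100 Hz with 10% amplitude (made-up numbers).
    return 1e-4 * (1.0 + 0.1 * math.sin(2 * math.pi * 100 * t))
```

When the gap is widening the capacitance falls and the current is negative; when it is closing the current is positive—mirroring the alternating output the mechanical disturbance produces.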

Chinese scientists, combining ultrasensitive motion detectors with thermal sound-emitting technology, have invented an “artificial throat” that could enable speech in people with damaged or non-functioning vocal cords. Team members from a university in Beijing fabricated a homemade circuit board on which to build their dual-mode system combining detection and emitting technologies.

Graphene is considered a wonder material: it is the thinnest material known and the strongest ever measured. A one-atom-thick layer of graphite, it possesses a high Young’s modulus as well as superior thermal and electrical conductivities. Graphene-based sensors have attracted much attention in recent years due to their variety of structures, unique sensing performance, room-temperature operation, and tremendous application prospects.

The skin-like device, the wearable artificial graphene throat (WAGT), is similar to a temporary tattoo, at least as perceived by the wearer. To make the device functional and flexible, the scientists laser-scribed graphene onto a thin sheet of polyvinyl alcohol film. The device is the size of two thumbnails side by side; water is used to attach the film to the skin over the volunteer’s throat, and electrodes connect it to a small armband containing a circuit board, microcomputer, power amplifier and decoder. In the development phase, the system transformed subtle throat movements into simple sounds like “OK” and “No.” During trials of the device, volunteers imitated the throat motions of speech, and the device converted these movements into single-syllable words.

It is believed that with this device, mute people could be trained to generate signals with their throats, which the device would then translate into speech.

SOURCE
https://www.aiin.healthcare/topics/robotics/artificial-throat-may-give-voice-voiceless?utm_source=newsletter

Read Full Post »


Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary article entitled “Medicine contends with how to use artificial intelligence,” Jennifer Couzin-Frankel discusses the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting. In summary, these barriers result from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software being developed to read images and magnify small changes, such as in cardiac images, is developed within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators just recently issued guidance describing how to convert research-based AI into improved medical imaging, publishing it in the Journal of the American College of Radiology. The group suggested greater collaboration among the relevant parties in developing AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on a lung scan indicative of pneumonia; when tested on a group of new patients, the algorithm detected pneumonia with 93% accuracy.

However, when the group from Sinai tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73–80%, indicative of bias within the training set: in other words, there was something unique about the way Mount Sinai does its scans relative to other hospitals. Indeed, many of the patients Mount Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than stand-alone scanners.

The results were published in PLoS Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR1, Badgeley MA2, Liu M2, Costa AB3, Titano JJ4, Oermann EK3.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). 
When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683


 

 

Surprisingly, not many researchers have begun to use data obtained from different hospitals. The FDA has issued some guidance on the matter but considers “locked,” unchanging AI software to be a medical device. However, the agency has just announced the development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software usable in multiple clinical settings. Otherwise, each hospital will need to develop its own software, usable only on its own system, which would create a regulatory headache for the FDA.
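The hospital-system confounding described in the PLoS Medicine abstract can be illustrated with a toy simulation. Only the two prevalence figures are borrowed from the paper (MSH 34.2%, NIH 1.2%); the patient counts and everything else are invented. A “classifier” that ignores the image entirely and scores each radiograph purely by its site still achieves a deceptively high AUC, simply because disease prevalence differs between sites.

```python
import random

random.seed(0)

def simulate_site(n, prevalence, site_score):
    # Each record: (score assigned purely from the site, true label).
    return [(site_score, 1 if random.random() < prevalence else 0)
            for _ in range(n)]

# Loosely mirrors the pneumonia prevalences reported for MSH (34.2%)
# and NIH (1.2%); the patient counts are arbitrary.
records = simulate_site(1000, 0.342, 1.0) + simulate_site(1000, 0.012, 0.0)
scores = [s for s, _ in records]
labels = [l for _, l in records]

def auc(labels, scores):
    # AUC as the probability that a random positive case outranks
    # a random negative one (Mann-Whitney formulation).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

site_only_auc = auc(labels, scores)  # well above the 0.5 of a useless model
```

This is exactly the failure mode the abstract quantifies: a model can score well on pooled internal data by learning which hospital a scan came from, then collapse on truly external data.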

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 

Read Full Post »

Older Posts »