
Archive for the ‘Artificial Intelligence Applications in Health Care’ Category



Google AI improves accuracy of reading mammograms, study finds


Google CFO Ruth Porat has blogged about twice battling breast cancer.

Artificial intelligence was often more accurate than radiologists in detecting breast cancer from mammograms in a study conducted by researchers using Google AI technology.

The study, published in the journal Nature, used mammograms from approximately 90,000 women in which the outcomes were known to train technology from Alphabet Inc’s DeepMind AI unit, now part of Google Health, Yahoo news reported.

The AI system was then used to analyze images from 28,000 other women and often diagnosed early cancers more accurately than the radiologists who originally interpreted the mammograms.

In another test, AI outperformed six radiologists in reading 500 mammograms. However, while the AI system found cancers the humans missed, it also failed to find cancers flagged by all six radiologists, reports The New York Times.

The researchers said the study “paves the way” for further clinical trials.

Writing in Nature, Etta D. Pisano, chief research officer at the American College of Radiology and professor in residence at Harvard Medical School, noted, “The real world is more complicated and potentially more diverse than the type of controlled research environment reported in this study.”

Ruth Porat, senior vice president and chief financial officer of Alphabet Inc., wrote in an October company blog post titled “Breast cancer and tech…a reason for optimism” about twice battling the disease herself, and about the importance of her company’s application of AI to healthcare innovations.

She said that focus had already led to the development of a deep learning algorithm to help pathologists assess tissue associated with metastatic breast cancer.

“By pinpointing the location of the cancer more accurately, quickly and at a lower cost, care providers might be able to deliver better treatment for more patients,” she wrote.

Google also has created algorithms that help medical professionals diagnose lung cancer and eye disease in people with diabetes, per the Times.

Porat acknowledged that Google’s research showed the best results occur when medical professionals and technology work together.

Any insights provided by AI must be “paired with human intelligence and placed in the hands of skilled researchers, surgeons, oncologists, radiologists and others,” she said.

Anne Stych is a staff writer for Bizwomen.


Artificial Intelligence Innovations in Cardiac Imaging

Reporter: Aviva Lev-Ari, PhD, RN

‘CTA-for-All’ fast-tracks intervention, improves LVO detection in stroke patients

A “CTA-for-All” stroke imaging policy improved large vessel occlusion (LVO) detection, fast-tracked intervention and improved outcomes in a recent study of patients with acute ischemic stroke (AIS), researchers reported in Stroke.

“Combined noncontrast computed tomography (NCCT) and CT angiography (CTA) have been championed as the new minimum standard for initial imaging of disabling stroke,” Mayer, a neurologist at Henry Ford Hospital in Detroit, and co-authors wrote in their paper. “Patient selection criteria that impose arbitrary limits on time from last known well (LKW) or baseline National Institutes of Health Stroke Scale (NIHSS) score may delay CTA and the diagnosis of LVO.”

“These findings suggest that a uniform CTA-for-All imaging policy for stroke patients presenting within 24 hours is feasible and safe, improves LVO detection, speeds intervention and can improve outcomes,” the authors wrote. “The benefit appears to primarily affect patients presenting within six hours of symptom onset.”

SOURCE

https://www.cardiovascularbusiness.com/topics/cardiovascular-imaging/cta-all-fast-tracks-intervention-improves-lvo-detection-stroke?utm_source=newsletter&utm_medium=cvb_cardio_imaging

 

How to integrate AI into the cardiac imaging pipeline

Hsiao said physicians can expect “a little bit of generalization” from neural networks, meaning they’ll work okay on data that they’ve never seen, but they’re not going to produce perfect results the first time around. If a model was trained on 3T MRI data, for example, and someone inputs 1.5T MRI data, it might not be able to analyze that information comprehensively. If some 1.5T data were fed into the model’s training algorithm, though, that could change.
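Hsiao’s 3T versus 1.5T point is the classic domain-shift problem: a model generalizes somewhat, but degrades on data unlike its training set until that kind of data is added to training. A toy sketch of this effect, using synthetic two-dimensional “image features,” a 1-nearest-neighbor classifier as a stand-in for a trained network, and a simulated field-strength offset; none of this is the actual imaging pipeline or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=50, sep=2.5, noise=0.3):
    """Two-class synthetic 'image feature' clusters; `shift` mimics a
    field-strength-dependent intensity offset (e.g. 3T vs 1.5T)."""
    x0 = rng.normal([0, 0], noise, (n, 2)) + shift
    x1 = rng.normal([sep, sep], noise, (n, 2)) + shift
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def knn1_predict(X_train, y_train, X_test):
    # 1-nearest-neighbor: a minimal stand-in for a learned classifier
    d = np.linalg.norm(X_test[:, None] - X_train[None, :], axis=2)
    return y_train[d.argmin(axis=1)]

X_3t, y_3t = make_domain(shift=0.0)          # training domain ("3T")
X_15t_tr, y_15t_tr = make_domain(shift=3.0)  # shifted domain ("1.5T")
X_15t_te, y_15t_te = make_domain(shift=3.0)  # held-out shifted data

# Trained on 3T only: the offset pushes 1.5T class-0 points toward the
# 3T class-1 cluster, so accuracy on 1.5T collapses.
acc_3t_only = (knn1_predict(X_3t, y_3t, X_15t_te) == y_15t_te).mean()

# Mix some 1.5T data into training and the same model generalizes.
X_mix = np.vstack([X_3t, X_15t_tr])
y_mix = np.concatenate([y_3t, y_15t_tr])
acc_mixed = (knn1_predict(X_mix, y_mix, X_15t_te) == y_15t_te).mean()

print(acc_3t_only, acc_mixed)
```

The design choice mirrors Hsiao’s remark: nothing about the classifier changes; only the training distribution does.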

According to Hsiao, all of this knowledge means little without clinical validation. He said he and his colleagues are working to integrate algorithms into the clinical environment such that a radiologist could hit a button and AI could auto-prescribe a set of images. Even better, he said, would be the ability to open up a series and have it auto-prescribe itself.

“That’s where we’re moving next, so you don’t have to hit any buttons at all,” he said.

SOURCE

https://www.cardiovascularbusiness.com/topics/cardiovascular-imaging/how-integrate-ai-cardiac-imaging-pipeline?utm_source=newsletter&utm_medium=cvb_cardio_imaging

 

DiA Imaging, IBM pair to take the subjectivity out of cardiac image analysis

SOURCE

https://www.cardiovascularbusiness.com/topics/cardiovascular-imaging/dia-imaging-ibm-partner-cardiac-image-analysis?utm_source=newsletter&utm_medium=cvb_cardio_imaging

 

FDA clears Ultromics’ AI-based CV image analysis system

Smartphone app accurately finds, identifies CV implants—and fast

According to the study, the finalized model achieved 95% sensitivity and 98% specificity.

Ferrick et al. said that since their training sample size was somewhat small and limited to a single institution, it would be valuable to validate the model externally. Still, their neural network was able to accurately identify CIEDs on chest radiographs and translate that ability into a phone app.

“Rather than the conventional ‘bench-to-bedside’ approach of translational research, we demonstrated the feasibility of ‘big data-to-bedside’ endeavors,” the team said. “This research has the potential to facilitate device identification in urgent scenarios in medical settings with limited resources.”
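The reported figures reduce to simple ratios over a confusion matrix. A minimal sketch, with hypothetical counts chosen only to reproduce the 95%/98% figures; the study’s actual counts are not given here:

```python
def sensitivity(tp, fn):
    # true positive rate: of all radiographs with the device, how many were flagged
    return tp / (tp + fn)

def specificity(tn, fp):
    # true negative rate: of all radiographs without it, how many were cleared
    return tn / (tn + fp)

# Hypothetical counts matching the reported 95% / 98% figures.
tp, fn = 95, 5
tn, fp = 98, 2

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # sensitivity = 0.95
print(f"specificity = {specificity(tn, fp):.2f}")  # specificity = 0.98
```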

SOURCE

https://www.cardiovascularbusiness.com/topics/cardiovascular-imaging/smartphone-app-accurately-finds-identifies-cv-implants?utm_source=newsletter&utm_medium=cvb_cardio_imaging

Machine learning cuts cardiac MRI analysis from minutes to seconds

“Cardiovascular MRI offers unparalleled image quality for assessing heart structure and function; however, current manual analysis remains basic and outdated,” Manisty said in a statement. “Automated machine learning techniques offer the potential to change this and radically improve efficiency, and we look forward to further research that could validate its superiority to human analysis.”

It’s estimated that around 150,000 cardiac MRIs are performed in the U.K. each year, she said, and based on that number, her team thinks using AI to read scans could mean saving 54 clinician-days per year at every health center in the country.
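The 54-clinician-day estimate can be sanity-checked with back-of-envelope arithmetic. The only number below taken from the article is the 150,000 annual scans; the per-scan time saved, working-day length, and number of centers are illustrative assumptions, not reported values:

```python
# Back-of-envelope check of the "54 clinician-days per year" figure.
scans_per_year = 150_000        # cardiac MRIs in the U.K. (stated in the article)
minutes_saved_per_scan = 13     # assumed manual analysis time replaced by AI
hours_per_clinician_day = 8     # assumed working day
centers = 75                    # assumed number of U.K. cardiac MRI centers

total_days_saved = scans_per_year * minutes_saved_per_scan / 60 / hours_per_clinician_day
days_per_center = total_days_saved / centers
print(round(days_per_center, 1))  # 54.2
```

Under these assumed inputs the figure is reproducible, which suggests the article’s number rests on a similar per-scan time saving spread across U.K. imaging centers.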

“Our dataset of patients with a range of heart diseases who received scans enabled us to demonstrate that the greatest sources of measurement error arise from human factors,” Manisty said. “This indicates that automated techniques are at least as good as humans, with the potential soon to be ‘superhuman’—transforming clinical and research measurement precision.”

SOURCE

https://www.cardiovascularbusiness.com/topics/cardiovascular-imaging/machine-learning-speeds-cardiac-mri-analysis?utm_source=newsletter&utm_medium=cvb_cardio_imaging

 

General SOURCE

Cardiovascular Business newsletter, “Cardiovascular Imaging | December 2019,” received December 17, 2019, from news@mail.cardiovascularbusiness.com.



AI Acquisitions by Big Tech Firms Are Happening at a Blistering Pace: 2019 Recent Data by CB Insights

Reporter: Stephen J. Williams, Ph.D.

A recent report from CB Insights shows the rapid pace at which the biggest tech firms (Google, Apple, Microsoft, Facebook, and Amazon) are acquiring artificial intelligence (AI) startups, potentially compounding the existing AI talent shortage.

The report is available for free download at https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/

Part of the report:

TECH GIANTS LEAD IN AI ACQUISITIONS

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, & Apple (FAMGA) have all been aggressively acquiring AI startups in the last decade.

Among the FAMGA companies, Apple leads the way, making 20 total AI acquisitions since 2010. It is followed by Google (the frontrunner from 2012 to 2016) with 14 acquisitions and Microsoft with 10.

Apple’s AI acquisition spree, which has helped it overtake Google in recent years, was essential to the development of new iPhone features. For example, FaceID, the technology that allows users to unlock their iPhone X just by looking at it, stems from Apple’s M&A moves in chips and computer vision, including the acquisition of AI company RealFace.

In fact, many of FAMGA’s prominent products and services came out of acquisitions of AI companies — such as Apple’s Siri, or Google’s contributions to healthcare through DeepMind.

That said, tech giants are far from the only companies snatching up AI startups.

Since 2010, there have been 635 AI acquisitions (as of 8/31/2019), as companies aim to build out their AI capabilities and capture sought-after talent.

The pace of these acquisitions has also been increasing. AI acquisitions saw a more than 6x uptick from 2013 to 2018, including last year’s record of 166 AI acquisitions — up 38% year-over-year.

In 2019, there have already been 140+ acquisitions (as of August), putting the year on track to beat the 2018 record at the current run rate.
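These growth figures are internally consistent, as a quick check shows. The counts come from the CB Insights summary above; the implied 2013 and 2017 totals are derived, not reported:

```python
# Sanity-check the reported acquisition figures.
acq_2018 = 166
yoy_growth = 0.38                    # 2018 was up 38% year-over-year
acq_2017_implied = acq_2018 / (1 + yoy_growth)
print(round(acq_2017_implied))       # 120

acq_2013_implied = acq_2018 / 6      # "more than 6x uptick" from 2013
print(round(acq_2013_implied))       # 28

acq_2019_aug = 140                   # 140+ deals in the first 8 months of 2019
run_rate_2019 = acq_2019_aug / 8 * 12
print(round(run_rate_2019))          # 210, on track to beat 2018's 166
```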

Part of this increase in the pace of AI acquisitions can be attributed to a growing diversity in acquirers. Where once AI was the exclusive territory of major tech companies, today, smaller AI startups are becoming acquisition targets for traditional insurance, retail, and healthcare incumbents.

For example, in February 2018, Roche Holding acquired New York-based cancer startup Flatiron Health for $1.9B — one of the largest M&A deals in artificial intelligence. This year, Nike acquired AI-powered inventory management startup Celect, Uber acquired computer vision company Mighty AI, and McDonald’s acquired personalization platform Dynamic Yield.

Despite the increased number of acquirers, however, tech giants are still leading the charge. Acquisitive tech giants have emerged as powerful global corporations with a competitive advantage in artificial intelligence, and startups have played a pivotal role in helping these companies scale their AI initiatives.

Apple, Google, Microsoft, Facebook, Intel, and Amazon are the most active acquirers of AI startups, each acquiring 7+ companies.

To read more on recent acquisitions in the AI space, please see the following articles in this Open Access Online Journal:

Diversification and Acquisitions, 2001 – 2015: Trail known as “Google Acquisitions” – Understanding Alphabet’s Acquisitions: A Sector-By-Sector Analysis

Clarivate Analytics expanded IP data leadership by new acquisition of the leading provider of intellectual property case law and analytics Darts-ip

2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Forbes Opinion: 13 Industries Soon To Be Revolutionized By Artificial Intelligence

Artificial Intelligence and Cardiovascular Disease

Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

 



Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

Reporter: Aviva Lev-Ari, PhD, RN

 

HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package, the First AI-assisted Cardiac MRI Scan Solution

The future of imaging is here—and FDA cleared.

LOS ALTOS, Calif.–(BUSINESS WIRE)–HeartVista, a pioneer in AI-assisted MRI solutions, today announced that it received 510(k) clearance from the U.S. Food and Drug Administration to deliver its AI-assisted One Click™ MRI acquisition software for cardiac exams. Despite the many advantages of cardiac MRI, or cardiac magnetic resonance (CMR), its use has been largely limited due to a lack of trained technologists, high costs, longer scan time, and complexity of use. With HeartVista’s solution, cardiac MRI is now simple, time-efficient, affordable, and highly consistent.

“HeartVista’s Cardiac Package is a vital tool to enhance the consistency and productivity of cardiac magnetic resonance studies, across all levels of CMR expertise,” said Dr. Raymond Kwong, MPH, Director of Cardiac Magnetic Resonance Imaging at Brigham and Women’s Hospital and Associate Professor of Medicine at Harvard Medical School.

A recent multi-center, outcome-based study (MR-INFORM), published in the New England Journal of Medicine, demonstrated that non-invasive myocardial perfusion cardiovascular MRI was as good as invasive FFR, the previous gold standard method, to guide treatment for patients with stable chest pain, while leading to 20% fewer catheterizations.

“This recent NEJM study further reinforces the clinical literature that cardiac MRI is the gold standard for cardiac diagnosis, even when compared against invasive alternatives,” said Itamar Kandel, CEO of HeartVista. “Our One Click™ solution makes these kinds of cardiac MRI exams practical for widespread adoption. Patients across the country now have access to the only AI-guided cardiac MRI exam, which will deliver continuous imaging via an automated process, minimize errors, and simplify scan operation. Our AI solution generates definitive, accurate and actionable real-time data for cardiologists. We believe it will elevate the standard of care for cardiac imaging, enhance patient experience and access, and improve patient outcomes.”

HeartVista’s FDA-cleared Cardiac Package uses AI-assisted software to prescribe the standard cardiac views with just one click, and in as few as 10 seconds, while the patient breathes freely. A unique artifact detection neural network is incorporated in HeartVista’s protocol to identify when the image quality is below the acceptable threshold, prompting the operator to reacquire the questioned images if desired. Inversion time is optimized with further AI assistance prior to the myocardial delayed-enhancement acquisition. A 4D flow measurement application uses a non-Cartesian, volumetric parallel imaging acquisition to generate high quality images in a fraction of the time. The Cardiac Package also provides preliminary measures of left ventricular function, including ejection fraction, left ventricular volumes, and mass.
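The artifact-detection step described above amounts to a confidence gate: if the network’s predicted quality for an acquired series falls below an acceptance threshold, the operator is prompted to reacquire. A schematic sketch; the threshold value, scoring model, and API below are hypothetical, not HeartVista’s implementation:

```python
from dataclasses import dataclass

QUALITY_THRESHOLD = 0.80  # hypothetical acceptance cutoff

@dataclass
class Series:
    name: str
    quality_score: float  # would come from the artifact-detection network

def review_acquisition(series_list):
    """Flag any series whose predicted quality falls below threshold."""
    flagged = [s for s in series_list if s.quality_score < QUALITY_THRESHOLD]
    for s in flagged:
        print(f"{s.name}: quality {s.quality_score:.2f} < {QUALITY_THRESHOLD} "
              f"-> prompt operator to reacquire")
    return flagged

acquired = [
    Series("short-axis cine", 0.95),
    Series("4-chamber cine", 0.62),   # e.g. breathing artifact
]
review_acquisition(acquired)  # flags only the 4-chamber series
```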

HeartVista is presenting its new One Click™ Cardiac Package features at the Radiological Society of North America (RSNA) annual meeting in Chicago, on Dec. 4, 2019, at 2 p.m., in the AI Showcase Theater. HeartVista will also be at Booth #11137 for the duration of the conference, from Dec. 1 through Dec. 5.

About HeartVista

HeartVista believes in leveraging artificial intelligence to improve access to MRI and improve patient care. The company’s One Click™ software platform enables real-time MRI for a variety of clinical and research applications. Its AI-driven, one-click cardiac localization method received first place honors at the International Society for Magnetic Resonance in Medicine’s Machine Learning Workshop in 2018. The company’s innovative technology originated at the Stanford Magnetic Resonance Systems Research Laboratory. HeartVista is funded by Khosla Ventures and the National Institutes of Health’s Small Business Innovation Research program.

For more information, visit www.heartvista.ai

SOURCE

Press announcement “HeartVista Receives FDA Clearance for First AI-assisted Cardiac MRI Solution,” received October 29, 2019, from Kimberly Ha <kimberly.ha@kkhadvisors.com>.



Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis

Yu Fu (1), Alexander W Jung (1), Ramon Viñas Torne (1), Santiago Gonzalez (1,2), Harald Vöhringer (1), Mercedes Jimenez-Linan (3), Luiza Moore (3,4), and Moritz Gerstung (1,5), to whom correspondence should be addressed.

1) European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK.
2) Current affiliation: Institute for Research in Biomedicine (IRB Barcelona), Parc Científic de Barcelona, Barcelona, Spain.
3) Department of Pathology, Addenbrooke’s Hospital, Cambridge, UK.
4) Wellcome Sanger Institute, Hinxton, UK.
5) European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany.

Correspondence:

Dr Moritz Gerstung, European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton CB10 1SA, UK. Tel: +44 (0) 1223 494636. E-mail: moritz.gerstung@ebi.ac.uk

Abstract


Here we use deep transfer learning to quantify histopathological patterns across 17,396 H&E stained histopathology image slides from 28 cancer types and correlate these with underlying genomic and transcriptomic data. Pan-cancer computational histopathology (PC-CHiP) classifies the tissue origin across organ sites and provides highly accurate, spatially resolved tumor and normal distinction within a given slide. The learned computational histopathological features correlate with a large range of recurrent genetic aberrations, including whole genome duplications (WGDs), arm-level copy number gains and losses, focal amplifications and deletions as well as driver gene mutations within a range of cancer types. WGDs can be predicted in 25/27 cancer types (mean AUC=0.79) including those that were not part of model training. Similarly, we observe associations with 25% of mRNA transcript levels, which enables to learn and localise histopathological patterns of molecularly defined cell types on each slide. Lastly, we find that computational histopathology provides prognostic information augmenting histopathological subtyping and grading in the majority of cancers assessed, which pinpoints prognostically relevant areas such as necrosis or infiltrating lymphocytes on each tumour section. Taken together, these findings highlight the large potential of PC-CHiP to discover new molecular and prognostic associations, which can augment diagnostic workflows and lay out a rationale for integrating molecular and histopathological data.

SOURCE

https://www.biorxiv.org/content/10.1101/813543v1

Key points

● Pan-cancer computational histopathology analysis with deep learning extracts histopathological patterns and accurately discriminates 28 cancer and 14 normal tissue types

● Computational histopathology predicts whole genome duplications, focal amplifications and deletions, as well as driver gene mutations

● Wide-spread correlations with gene expression indicative of immune infiltration and proliferation

● Prognostic information augments conventional grading and histopathology subtyping in the majority of cancers

 

Discussion

Here we presented PC-CHiP, a pan-cancer transfer learning approach to extract computational histopathological features across 42 cancer and normal tissue types and their genomic, molecular and prognostic associations. Histopathological features, originally derived to classify different tissues, contained rich histologic and morphological signals predictive of a range of genomic and transcriptomic changes as well as survival. This shows that computer vision not only has the capacity to highly accurately reproduce predefined tissue labels, but also that this quantifies diverse histological patterns, which are predictive of a broad range of genomic and molecular traits, which were not part of the original training task. As the predictions are exclusively based on standard H&E-stained tissue sections, our analysis highlights the high potential of computational histopathology to digitally augment existing histopathological workflows.

The strongest genomic associations were found for whole genome duplications, which can in part be explained by nuclear enlargement and increased nuclear intensities, but seemingly also stem from tumour grade and other histomorphological patterns contained in the high-dimensional computational histopathological features. Further, we observed associations with a range of chromosomal gains and losses, focal deletions and amplifications as well as driver gene mutations across a number of cancer types. These data demonstrate that genomic alterations change the morphology of cancer cells, as in the case of WGD, but possibly also that certain aberrations preferentially occur in distinct cell types, reflected by the tumor histology. Whatever is the cause or consequence in this equation, these associations lay out a route towards genomically defined histopathology subtypes, which will enhance and refine conventional assessment.
Further, a broad range of transcriptomic correlations was observed reflecting both immune cell infiltration and cell proliferation that leads to higher tumor densities. These examples illustrated the remarkable property that machine learning does not only establish novel molecular associations from pre-computed histopathological feature sets but also allows the localisation of these traits within a larger image. While this exemplifies the power of a large scale data analysis to detect and localise recurrent patterns, it is probably not superior to spatially annotated training data. Yet such data can, by definition, only be generated for associations which are known beforehand. This appears straightforward, albeit laborious, for existing histopathology classifications, but more challenging for molecular readouts. Yet novel spatial transcriptomic [44,45] and sequencing technologies [46] bring within reach spatially matched molecular and histopathological data, which would serve as a gold standard in combining imaging and molecular patterns.

Across cancer types, computational histopathological features showed a good level of prognostic relevance, substantially improving prognostic accuracy over conventional grading and histopathological subtyping in the majority of cancers. It is very remarkable that such predictive signals can be learned in a fully automated fashion. Still, at least at the current resolution, the improvement over a full molecular and clinical workup was relatively small.
This might be a consequence of the far-ranging relations between histopathology and molecular phenotypes described here, implying that histopathology is a reflection of the underlying molecular alterations rather than an independent trait. Yet it probably also highlights the challenges of unambiguously quantifying histopathological signals in – and combining signals from – individual areas, which requires very large training datasets for each tumour entity. From a methodological point of view, the prediction of molecular traits can clearly be improved. In this analysis, we adopted – for the reason of simplicity and to avoid overfitting – a transfer learning approach in which an existing deep convolutional neural network, developed for classification of everyday objects, was fine tuned to predict cancer and normal tissue types. The implicit imaging feature representation was then used to predict molecular traits and outcomes. Instead of employing this two-step procedure, which risks missing patterns irrelevant for the initial classification task, one might directly employ either training on the molecular trait of interest, or ideally multi-objective learning. Further improvement may also be related to the choice of the CNN architecture. Everyday images have no defined scale due to a variable z-dimension; therefore, the algorithms need to be able to detect the same object at different sizes. This clearly is not the case for histopathology slides, in which one pixel corresponds to a defined physical size at a given magnification. Therefore, possibly less complex CNN architectures may be sufficient for quantitative histopathology analyses, and also show better generalisation. Here, in our proof-of-concept analysis, we observed a considerable dependence of the feature representation on known and possibly unknown properties of our training data, including the image compression algorithm and its parameters. 
Some of these issues could be overcome by amending and retraining the network to isolate the effect of confounding factors and additional data augmentation. Still, given the flexibility of deep learning algorithms and the associated risk of overfitting, one should generally be cautious about the generalisation properties and critically assess whether a new image is appropriately represented.

Looking forward, our analyses revealed the enormous potential of using computer vision alongside molecular profiling. While the eye of a trained human may still constitute the gold standard for recognising clinically relevant histopathological patterns, computers have the capacity to augment this process by sifting through millions of images to retrieve similar patterns and establish associations with known and novel traits. As our analysis showed, this helps to detect histopathology patterns associated with a range of genomic alterations, transcriptional signatures and prognosis – and highlight areas indicative of these traits on each given slide. It is therefore not too difficult to foresee how this may be utilised in a computationally augmented histopathology workflow enabling more precise and faster diagnosis and prognosis. Further, the ability to quantify a rich set of histopathology patterns lays out a path to define integrated histopathology and molecular cancer subtypes, as recently demonstrated for colorectal cancers [47].
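The two-step procedure the authors describe (a frozen, fine-tuned feature representation, then a separate predictor per molecular trait) can be sketched as a linear probe. Everything below is synthetic and illustrative, not the PC-CHiP code, features, or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1 stand-in: pretend the fine-tuned CNN already mapped each slide to a
# fixed-length feature vector (PC-CHiP uses high-dimensional Inception features;
# we use 20-d synthetic ones).
n_slides, n_feat = 400, 20
features = rng.normal(size=(n_slides, n_feat))

# Synthetic binary trait (e.g. "WGD") weakly driven by a few feature dimensions.
logits = features[:, 0] + 0.5 * features[:, 1] + rng.normal(0, 1.0, n_slides)
labels = (logits > 0).astype(int)

# Step 2: logistic-regression probe on the frozen features (plain gradient descent).
Xtr, ytr = features[:300], labels[:300]
Xte, yte = features[300:], labels[300:]
w, b = np.zeros(n_feat), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))
    w -= 0.5 * (Xtr.T @ (p - ytr)) / len(ytr)
    b -= 0.5 * (p - ytr).mean()

# Rank-based AUC on held-out slides: fraction of positive/negative
# pairs the probe orders correctly.
scores = Xte @ w + b
pos, neg = scores[yte == 1], scores[yte == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(round(auc, 2))
```

The two-step design trades some accuracy for simplicity and overfitting control, which is exactly the limitation the authors note: patterns irrelevant to the initial tissue-classification task may be missed by the frozen features.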

Lastly, our analyses provide proof-of-concept for these principles and we expect them to be greatly refined in the future based on larger training corpora and further algorithmic refinements.

bioRxiv preprint first posted online Oct. 25, 2019; doi: http://dx.doi.org/10.1101/813543. Made available under a CC-BY-NC 4.0 International license.

SOURCE

https://www.biorxiv.org/content/biorxiv/early/2019/10/25/813543.full.pdf

 

Other related articles published in this Open Access Online Scientific Journal include the following: 

 

CancerBase.org – The Global HUB for Diagnoses, Genomes, Pathology Images: A Real-time Diagnosis and Therapy Mapping Service for Cancer Patients – Anonymized Medical Records accessible to anyone on Earth

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/28/cancerbase-org-the-global-hub-for-diagnoses-genomes-pathology-images-a-real-time-diagnosis-and-therapy-mapping-service-for-cancer-patients-anonymized-medical-records-accessible-to/

 

631 articles in this journal include the keyword “Pathology” in their title:

https://pharmaceuticalintelligence.com/?s=Pathology

 



@CHI 1st AI World Conference and Expo, October 23 – October 25, 2019, Seaport World Trade Center, Boston, MA.  Presentations by Four Israeli companies explaining how they use AI technologies in their products @ NEIBC Meetup AI World Conference and Expo, 10/24/2019 @6:30PM Waterfront 1

#AIWORLD

@AIWORLDEXPO

@pharma_BI

@AVIVA1950

Reporter: Aviva Lev-Ari, PhD, RN

 

  • When: October 24, 2019
  • Time: 6:30 pm
  • Where: Seaport World Trade Center, Boston, MA
  • Room Location: Waterfront 1

Speakers Include:

Registration:

  • To gain access to the NEIBC Meetup, please RSVP below and use code 1968-EXHP for a complimentary pass to the exhibit hall
  • To attend the full conference, use discount code NEIBC19 for $200 off conference registration

RSVP NOW

AI World Conference and Expo has become the industry’s largest independent business event focused on the state of the practice of AI in the enterprise. The AI World 3-day program delivers a comprehensive spectrum of content, networking, and business development opportunities, all designed to help you cut through the hype and navigate through the complex landscape of AI business solutions. Attend AI World and learn how innovators are successfully deploying AI and intelligent automation to accelerate innovation efforts, build competitive advantage, drive new business opportunities, and reduce costs.

250+ Speakers

120+ Sponsors

2,700+ Attendees

100+ Sessions

SOURCE

“Israeli Companies Presenting at AI World October 24, 2019,” received October 23, 2019, from “Dan Trajman @ NEIBC” <dan.trajman@neibc.org>.

 

Event Brochure

https://aiworld.com/docs/librariesprovider28/agenda/19/aiworld-conference-expo-2019.pdf

 

Plenary Program

WEDNESDAY, OCTOBER 23

9:00 AM SUMMIT KICK OFF: AI Becomes Real

Scott Lundstrom, Group Vice President and General Manager, IDC Government and Health Insights; AI World Conference Co-Chair

 

9:10 AM SUMMIT KEYNOTE: Business Strategy with Artificial Intelligence

Sam Ransbotham, PhD, Professor, Information Systems, Boston College; Academic Contributing Editor, MIT Sloan Management Review

 

9:35 AM EXECUTIVE ROUNDTABLE:

AI Drives Innovation in Enterprise Applications

Moderator: Mickey North-Rizza, Research Vice President, Enterprise Applications, IDC

Panelists:

David Castillo, PhD, Managing Vice President, Machine Learning, Capital One

Mukesh Dalal, PhD, Chief Analytics Officer & Chief Data Scientist, Bose Corporation

Madhumita Bhattacharyya, Managing Director – Enterprise Data & Analytics, Protiviti

Sasha Caskey, CTO & Co-Founder, Kasisto

 

10:05 AM KEYNOTE: Evolving Role of CDAOS in the New Era – An Organizational Construct

Anju Gupta, Vice President, Chief Data and Analytics Officer, Enterprise Holdings

 

10:30 – 10:50 AM NETWORKING BREAK

 

10:50 AM EXECUTIVE ROUNDTABLE:

 

The Evolution of Conversational Assistants

 

Moderator:

Reenita Malholtra Hora, Director of Marketing & Communications, SRI International

Panelists:

William Mark, PhD, President, SRI

Karen Myers, Lab Director, SRI International’s AI Center

 

11:20 AM Talk Title to be Announced

Genevy Dimitrion VP, Enterprise Data and Analytics, Humana

 

11:40 AM How AI Maturity Impacts a Winning Corporate Strategy

Ritu Jyoti, Program Vice President, IDC

 

12:00 PM LUNCHEON KEYNOTE:

Case Studies of Conversational AI – Real Deployments at Scale

Jim Freeze, Chief Marketing Officer, Interactions

Sponsored by Constant Contact

Ben Bauks, Senior Business Systems Analyst, Constant Contact

 

4:20 PM PLENARY KEYNOTE PANEL:

Learning from XPRIZE Startups to Achieve Successful AI Innovation

Moderator:

Devin Krotman, Director, IBM Watson AI XPRIZE, XPRIZE Foundation

Panelists:

Eleni Miltsakaki, Founder and CEO, Choosito

Ellie Gordon, Founder, CEO, & Designer, Behaivior AI

Daniel Fortin, President, AITera Inc.

 

THURSDAY, OCTOBER 24

 

8:20 AM BREAKFAST KEYNOTE:

The Promise and Pain of Computer Vision in Retail, Healthcare, and Agriculture

Ben Schneider, Vice President, Product, Alegion

 

9:00 AM CONFERENCE INTRODUCTION

Eliot Weinman, Founder & Conference Chair, AI World; Executive Editor, AI Trends

 

9:05 AM INTRODUCTORY REMARKS

Scott Lundstrom, Group Vice President and General Manager, IDC Government and Health Insights; IDC and AI World Conference Co-Chair

 

9:15 AM KEYNOTE PRESENTATION:

The Human Strategy

Alex Sandy Pentland, PhD, Professor, Engineering, Business, Media Lab, MIT

 

9:45 AM KEYNOTE:

Uber’s Intelligent Insights Assistant

Franziska Bell, PhD, Director, Data Science, Data Science Platforms, Uber

 

10:15 AM KEYNOTE:

AI in Finance: Present and Future, Hype and Reality

Charles Elkan, PhD, Managing Director, Goldman Sachs

 

10:40 – 11:00 AM COFFEE BREAK

 

11:00 AM KEYNOTE:

AI at Work in Legal, News and Tax & Accounting

Khalid Al-Kofahi, PhD, Vice President, Research and Development; Head, Center for AI and Cognitive Computing, Thomson Reuters

 

11:25 AM EXECUTIVE ROUNDTABLE:

Disinformation, Infosec, Cognitive Security and Influence Manipulation

Moderator:

Michael Krigsman, Industry Analyst, CXOTalk

 

Panelists:

Sara-Jayne Terp, Data Scientist, Bodacea Light Industries LLC

Bob Gourley, Co-Founder and CTO, OODA LLC

Pablo Breuer, Director of US Special Operations Command Donovan Group and Senior Military Advisor and Innovation Officer, SOFWERX

Anthony Scriffignano, PhD, SVP, Chief Data Scientist, Dun & Bradstreet

 

Sponsored by Dell EMC – Pushing the boundaries of AI: providing the expertise required to accelerate the evolution of human progress in the age of artificial intelligence. http://dellemc.com/AI

 

11:30 AM KEYNOTE:

How AI is Helping to Improve Canadian Lives Through AML

Vishal Gossain, Vice President, Global Risk Management, Scotiabank

 

FRIDAY, OCTOBER 25

 

8:15 AM KEYNOTE:

AI World Society Roundtable on AI-Healthcare

Moderator:

Ed Burns, Site Editor, TechTarget

Panelists:

Professor David Silbersweig, Board Member of BGF, Harvard Medical School

Professor Mai Trong Khoa, PhD, Chairman of the Nuclear Medicine and Oncology Council, Director of the Gene-Stem Cell Center, Bach Mai Hospital; Senior Lecturer, Hanoi Medical University; Secretary of the National Council of Professorship in Medicine in Vietnam

Truong Van Phuoc, PhD, Former Acting Chairman, State Inspectory Committee of Finance of Vietnam; Senior Advisor to Chairman, Vietbank

Truong Vinh Long, MD, CEO, Gia An 115 Hospital

 

8:45 AM CONFERENCE INTRODUCTION

Scott Lundstrom, Group Vice President and General Manager, IDC Government and Health Insights; IDC and AI World Conference Co-Chair

 

8:50 AM KEYNOTE:

Artificial Intelligence in Sustainable Development: An Educational Perspective

Enver Yucel, Chairman, Bahçeşehir University

 

9:00 AM KEYNOTE:

Enhancing Human Capability with Intelligent Machine Teammates

Julie Shah, Associate Professor, Dept of Aeronautics and Astronautics, Computer Science and AI Lab, MIT

 

9:30 AM KEYNOTE:

Democracy, Ethics and the Rule of Law in the Age of Artificial Intelligence

Paul F. Nemitz, Principal Advisor in the Directorate-General for Justice and Consumers of the European Commission

 

10:00 AM KEYNOTE:

AI in Pharma: Where we are Today and How we Will Succeed in the Future

Natalija Jovanovic, PhD, Chief Digital Officer, Sanofi Pasteur

 

10:30 AM Startup Awards Announcement

John Desmond, Principal at JD Content Services; Editor, AI Trends

 

10:35 – 10:50 AM COFFEE BREAK IN THE EXPO

 

10:50 AM EXECUTIVE ROUNDTABLE:

Enterprise AI Innovations

Moderator:

Nick Patience, Founder & Research Vice President, Software, 451 Research

Panelists:

Rudina Seseri, Founder and Managing Partner, Glasswing Ventures

Norbert Monfort, Vice President, IT Transformation and Innovation, Assurant Global Technology

Dawn Fitzgerald, Director of Digital Transformation, Data Center Operations, Schneider Electric

 

12:00 PM LUNCHEON KEYNOTE:

How AI/ML is Changing the Face of Enterprise IT

Robert Ames, Senior Director, National Technology Strategy, VMware Research, VMware

 

SOURCE

https://aiworld.com/docs/librariesprovider28/agenda/19/aiworld-conference-expo-2019.pdf

Read Full Post »


Showcase: How Deep Learning could help radiologists spend their time more efficiently

Reporter and Curator: Dror Nir, PhD

 

The debate over the role AI could or should play in modern radiology is lively, spanning a wide spectrum of positive expectations as well as fears.

The article "A Deep Learning Model to Triage Screening Mammograms: A Simulation Study," published this month, demonstrates what is arguably the best, and very much feasible, use for AI in radiology at the present time. It would greatly benefit radiologists and patients if such applications were incorporated (with all safety precautions taken) into routine practice as soon as possible.

In the simulation study, a deep learning model that triages mammograms as cancer free improved workflow efficiency and significantly improved specificity while maintaining noninferior sensitivity.

Background

Recent deep learning (DL) approaches have shown promise in improving sensitivity but have not addressed limitations in radiologist specificity or efficiency.

Purpose

To develop a DL model to triage a portion of mammograms as cancer free, improving performance and workflow efficiency.

Materials and Methods

In this retrospective study, 223 109 consecutive screening mammograms performed in 66 661 women from January 2009 to December 2016 were collected with cancer outcomes obtained through linkage to a regional tumor registry. This cohort was split by patient into 212 272, 25 999, and 26 540 mammograms from 56 831, 7021, and 7176 patients for training, validation, and testing, respectively. A DL model was developed to triage mammograms as cancer free and evaluated on the test set. A DL-triage workflow was simulated in which radiologists skipped mammograms triaged as cancer free (interpreting them as negative for cancer) and read mammograms not triaged as cancer free by using the original interpreting radiologists’ assessments. Sensitivities, specificities, and percentage of mammograms read were calculated, with and without the DL-triage–simulated workflow. Statistics were computed across 5000 bootstrap samples to assess confidence intervals (CIs). Specificities were compared by using a two-tailed t test (P < .05) and sensitivities were compared by using a one-sided t test with a noninferiority margin of 5% (P < .05).
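The simulated DL-triage workflow described above can be sketched in a few lines: exams the model triages as cancer free are scored as negative regardless of the original read, and the remaining exams keep the interpreting radiologists' assessments. This is a minimal illustration of the general mechanics, assuming per-exam boolean arrays; all function and variable names are illustrative and do not reproduce the authors' actual model or pipeline.

```python
import numpy as np

def triage_metrics(cancer, radiologist_positive, triaged_cancer_free):
    """Sensitivity, specificity, and fraction of exams read under the
    simulated workflow: triaged exams are interpreted as negative, the
    rest keep the original radiologist assessment."""
    cancer = np.asarray(cancer, dtype=bool)
    radiologist_positive = np.asarray(radiologist_positive, dtype=bool)
    triaged_cancer_free = np.asarray(triaged_cancer_free, dtype=bool)

    # Final call: positive only if the radiologist flagged it AND the
    # model did not triage it away as cancer free.
    final_positive = radiologist_positive & ~triaged_cancer_free
    sensitivity = (final_positive & cancer).sum() / cancer.sum()
    specificity = (~final_positive & ~cancer).sum() / (~cancer).sum()
    fraction_read = 1.0 - triaged_cancer_free.mean()
    return sensitivity, specificity, fraction_read

def bootstrap_sensitivity_ci(cancer, rad_pos, triaged,
                             n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for sensitivity, resampling exams
    with replacement (as in the abstract's 5000 bootstrap samples)."""
    rng = np.random.default_rng(seed)
    cancer = np.asarray(cancer, dtype=bool)
    rad_pos = np.asarray(rad_pos, dtype=bool)
    triaged = np.asarray(triaged, dtype=bool)
    n = len(cancer)
    sens = []
    while len(sens) < n_boot:
        idx = rng.integers(0, n, n)
        if cancer[idx].any():  # skip degenerate resamples with no cancers
            sens.append(triage_metrics(cancer[idx], rad_pos[idx], triaged[idx])[0])
    return tuple(np.quantile(sens, [alpha / 2, 1 - alpha / 2]))
```

Because triaged exams can only flip a radiologist's positive call to negative, this construction shows directly why specificity can rise (false positives removed) while sensitivity can only stay flat or drop slightly (true positives removed), matching the trade-off reported in the Results.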

Results

The test set included 7176 women (mean age, 57.8 years ± 10.9 [standard deviation]). When reading all mammograms, radiologists obtained a sensitivity and specificity of 90.6% (173 of 191; 95% CI: 86.6%, 94.7%) and 93.5% (24 625 of 26 349; 95% CI: 93.3%, 93.9%). In the DL-simulated workflow, the radiologists obtained a sensitivity and specificity of 90.1% (172 of 191; 95% CI: 86.0%, 94.3%) and 94.2% (24 814 of 26 349; 95% CI: 94.0%, 94.6%) while reading 80.7% (21 420 of 26 540) of the mammograms. The simulated workflow improved specificity (P = .002) and obtained a noninferior sensitivity with a margin of 5% (P < .001).
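The noninferiority criterion applied to sensitivity above can be illustrated as follows. This is a hedged sketch of one common way to check a 5% noninferiority margin against paired bootstrap replicates, not the authors' exact one-sided t-test procedure; the function name and inputs are illustrative.

```python
import numpy as np

def noninferior_sensitivity(sens_full, sens_triage, margin=0.05, alpha=0.05):
    """Declare the triage workflow's sensitivity noninferior if the
    (1 - alpha) quantile of the paired bootstrap sensitivity loss
    (full read minus triage workflow) stays below the margin."""
    loss = np.asarray(sens_full) - np.asarray(sens_triage)
    return float(np.quantile(loss, 1.0 - alpha)) < margin
```

With the reported sensitivities (90.6% for full reading vs. 90.1% in the simulated workflow), the observed loss of about 0.5 percentage points sits well inside a 5% margin, which is consistent with the noninferiority conclusion.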

Conclusion

This deep learning model has the potential to reduce radiologist workload and significantly improve specificity without harming sensitivity.

Read Full Post »

Older Posts »