
Archive for the ‘Artificial Intelligence in Medicine – Applications in Therapeutics’ Category


Artificial Intelligence in Medicine – Part 3, in Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology

 

Updated on 2/10/2020

Eric Topol
@EricTopol

There have only been 5 randomized clinical trials of #AI in medicine to date. Here’s the summary: 4 in gastroenterology (2 @LancetGastroHep, 2 @Gut_BMJ), 1 in ophthalmology (@EClinicalMed). All were conducted in China. (None in radiology, pathology, dermatology or other specialties.)

Eric Topol
@EricTopol
physician-scientist, author, editor. My new book is #DeepMedicine drerictopol.com

The Lancet Gastroenterology & Hepatology
@LancetGastroHep
The Lancet Gastroenterology & Hepatology publishes high-quality peer-reviewed research and reviews, comment, and news #gastroenterology #hepatology. IF=12.856

Gut Journal
@Gut_BMJ
Leading international journal in gastroenterology with an established reputation for publishing 1st class research. Find us on Facebook: facebook.com/Gut.BMJ

EClinicalMedicine – Published by The Lancet
@EClinicalMed
A new open access clinical journal, published by The Lancet, influencing clinical practice and strengthening health systems


Eric Topol
@EricTopol
While there are now hundreds of in silico, retrospective dataset reports, the number of prospective (non-randomized) trials in a real clinical environment testing #AI performance is limited. I only know of 11. Let me know if I’m missing any.


 

Curators: Stephen J. Williams, PhD, Dror Nir, PhD and Aviva Lev-Ari, PhD, RN

 

 

 

Series Content Consultant:

Larry H. Bernstein, MD, FCAP, Emeritus CSO, LPBI Group

 

Volume Content Consultant:

Prof. Marcus W. Feldman

https://www.youtube.com/watch?v=aT-Jb0lKVT8

BURNET C. AND MILDRED FINLEY WOHLFORD PROFESSOR IN THE SCHOOL OF HUMANITIES AND SCIENCES

Stanford University, Co-Director, Center for Computational, Evolutionary and Human Genetics (2012 – Present)

Latest in Genomics Methodologies for Therapeutics:

Gene Editing, NGS & BioInformatics,

Simulations and the Genome Ontology

2019

Volume Two

https://www.amazon.com/dp/B08385KF87

Product details

  • File Size: 3138 KB
  • Print Length: 217 pages
  • Publisher: Leaders in Pharmaceutical Business Intelligence (LPBI) Group, Boston; 1 edition (December 28, 2019)
  • Publication Date: December 28, 2019
  • Sold by: Amazon Digital Services LLC
  • Language: English
  • ASIN: B08385KF87
  • Text-to-Speech: Enabled
  • X-Ray: Not Enabled
  • Word Wise: Not Enabled
  • Lending: Enabled
  • Enhanced Typesetting: Enabled

Prof. Marcus W. Feldman, PhD, Editor

Prof. Stephen J. Williams, PhD, Editor

and

Aviva Lev-Ari, PhD, RN, Editor

Introduction to Part 3: AI in Medicine – Voice of Aviva Lev-Ari & Professor Williams  

 

There is a current consensus that, of all specialties in Medicine, Radiology will benefit the most from Artificial Intelligence technologies.

What AI can do

Of course, there is still a lot AI can do for radiologists. Soonmee Cha, MD, a neuroradiologist who has served as a program director at the University of California, San Francisco since 2012 and currently oversees 100 radiology trainees, said at RSNA 2019 in Chicago:

“We can see a future where AI is improving image quality, decreasing acquisition times, eliminating artifacts, improving patient communication and even decreasing radiation dose.”

“If AI can detect when machines are being set up incorrectly and alert us, it’s a win for us and for patients,” she said.

https://www.aiin.healthcare/topics/medical-imaging/rsna-ai-imaging-healthcare-costs-radiology-trainees?utm_source=newsletter&utm_medium=ai_news

Radiology societies team up for new statement on ethics of AI

Numerous imaging societies, including the American College of Radiology (ACR) and RSNA, have published a new statement on the ethical use of AI in radiology.

The European Society of Radiology, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics (EuSoMII), Canadian Association of Radiologists and American Association of Physicists in Medicine also co-authored the statement, which is focused on three key areas of AI development: data, algorithms and practice. A condensed summary was shared in the Journal of the American College of Radiology, Radiology, Insights into Imaging and the Canadian Association of Radiologists Journal.

“Radiologists remain ultimately responsible for patient care and will need to acquire new skills to do their best for patients in the new AI ecosystem,” J. Raymond Geis, MD, ACR Data Science Institute senior scientist and one of the document’s leading contributors, said in a prepared statement. “The radiology community needs an ethical framework to help steer technological development, influence how different stakeholders respond to and use AI, and implement these tools to make the best decisions for—and increasingly with—patients.”

“The application of AI tools in radiological practice lies in the hands of the radiologists, which also means that they have to be well-informed not only about the advantages they can offer to improve their services to patients, but also about the potential risks and pitfalls that might occur when implementing them,” said Erik R. Ranschaert, MD, PhD, president of EuSoMII. “This paper is therefore an excellent basis to improve their awareness about the potential issues that might arise, and should stimulate them in thinking proactively on how to answer the existing questions.”

Back in September, the Royal Australian and New Zealand College of Radiologists (RANZCR) published its own guidelines on the ethical application of AI in healthcare. The document, “Ethical Principles for Artificial Intelligence in Medicine,” is available on the RANZCR website.

https://www.radiologybusiness.com/topics/artificial-intelligence/radiology-societies-ethics-ai

Selected examples of applications of AI in the specialty of Radiology include the following:

  • RSNA 2019, the world’s largest radiology conference, kicks off at Chicago’s McCormick Place on Sunday, Dec. 1, 2019, and promises to include more AI content than ever before. There will be an expanded AI Showcase this year, giving attendees access to more than 100 vendors in one location.
  1. “Artificial Intelligence and Precision Education: How AI Can Revolutionize Training in Radiology” | Monday, Dec. 2 | 8:30 – 10 a.m. | Room: E450A
  2. “Learning AI from the Experts: Becoming an AI Leader in Global Radiology (Without Needing a Computer Science Degree)” | Tuesday, Dec. 3 | 4:30-6 p.m. | Room: S406B
  3. “Deep Learning in Radiology: How Do We Do It?” | Wednesday, Dec. 4 | 8:30-10 a.m. | Room: S406B

https://www.aiin.healthcare/topics/medical-imaging/rsna-2019-preview-3-ai-sessions-radiology-imaging?utm_source=newsletter&utm_medium=ai_news

 

  • Interview with George Shih, MD, a radiologist at Weill Cornell Medicine and NewYork-Presbyterian and the co-founder of the healthcare startup MD.ai

It has been an academic gold rush: people are working to apply the latest AI techniques to both existing problems and brand new problems, and it’s all been really great for the field of radiology.

We’re also holding another machine learning competition this year hosted on Kaggle. In previous years, we’ve annotated existing public data that was used for our competition, but this year, we were actually able to acquire high-quality data—more than 25,000 CT examinations that nobody has used or seen before—from four different institutions. The top 10 winning algorithms will also be made public to anyone in the world, which is an amazing way to advance the use of AI in radiology. I think that’s one of the biggest contributions RSNA is making to the academic community this year.

The other exciting part is that our new and improved AI Showcase will include more vendors—more than 100—than any previous year, which shows just how much the market continues to focus on these technologies.

https://www.aiin.healthcare/topics/medical-imaging/radiologist-rsna-2019-ai-radiology-imaging?utm_source=newsletter&utm_medium=ai_news

 

  • AI model could help radiologists diagnose lung cancer

Michael Walter | November 27, 2019 | Medical Imaging

https://www.aiin.healthcare/topics/medical-imaging/ai-model-radiologists-diagnose-lung-cancer-imaging

 

  • AI a hot topic for radiology researchers in 2019

Michael Walter | November 26, 2019 | Medical Imaging

https://www.aiin.healthcare/topics/medical-imaging/ai-radiology-researchers-rsna-citations-downloads?utm_source=newsletter&utm_medium=ai_news

 

  • GE Healthcare launches new program to simplify AI development, implementation

Michael Walter | November 26, 2019 | Business Intelligence

https://www.aiin.healthcare/topics/business-intelligence/ge-healthcare-new-program-simplify-ai-development?utm_source=newsletter&utm_medium=ai_news

 

  • How teleradiologists are helping underserved regions all over the world

Michael Walter | Medical Imaging Review

Sponsored by vRad, a MEDNAX Company

https://www.radiologybusiness.com/sponsored/1065/topics/medical-imaging-review/qa-how-teleradiologists-are-helping-underserved?utm_source=newsletter&utm_medium=ai_news

AI in Healthcare 2020 Leadership Survey Report: 7 Key Findings

Artificial and augmented intelligence are already helping healthcare improve clinically, operationally and financially—and there is extraordinary room for growth. Success starts with leadership, vision and investment and leaders tell us they have all of the above. Here are the top 7 survey findings.

01 C-level healthcare leaders are leading the charge to AI. AI has earned the attention of the C-suite, with 40% of survey respondents saying their strategy is coming from the top down. Chief information officers are most often managing AI across the healthcare enterprise (27%).

02 AI has moved into the mainstream. The future is now. It’s here. Health systems are hiring data scientists and spending on AI and infrastructure. Some 40% of respondents are using AI, with 50% using between one and 10 apps.

03 Health systems are committed to investing in AI. 93% of respondents agree AI is absolutely essential, very important or important to their strategy. There is great willingness to take advantage of intelligent technology and leverage machine intelligence to enhance human intelligence. Administration holds financial responsibility for AI at 43% of facilities, with IT paying the bill at 26% of sites.

04 Fortifying infrastructure is top of mind.

05 Improving care is AI’s greatest benefit. Improving accuracy, efficiency and workflow are the top benefits leaders see coming from AI. AI helps to highlight key findings from the depths of the EMR, identify declines in patient conditions earlier and improve chronic disease management. Cancer, heart disease and stroke are the disease states in which survey respondents see AI holding the greatest promise; they are the 2nd, 1st and 5th leading killers of Americans, respectively.

06 Health systems are both buying and developing AI apps. Some 50% of respondents tell us they are both buying and developing AI apps. About 38% are exclusively opting to purchase commercially developed apps while 13% are developing everything in-house.

07 Radiology is blazing the AI trail. AI apps for imaging outnumber all other categories of FDA-approved apps to date. It’s no surprise then that respondents tell us that rad apps top the list of tools they’re using to enhance breast, chest and cardiovascular imaging.

SOURCE

https://www.aiin.healthcare/sponsored/9667/topics/ai-healthcare-2020-leadership-survey-report/ai-healthcare-2020-leadership-1

 

WATCH VIDEO

https://www.dropbox.com/s/xayeu7ss7f7cahp/AI%20Launch%20v2.mp4?dl=0

 

As in the past, Dr. Eric Topol is again a tour de force

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again 1st Edition

by Eric Topol  (Author)

https://www.amazon.com/gp/product/1541644638/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=wwwsamharris03-20&creative=9325&linkCode=as2&creativeASIN=1541644638&linkId=e8e2d5410e9b5921f1e21883a9c84cff

Dr Mike Warner

5.0 out of 5 stars – Crystal Ball for the Next Era of Healthcare

March 13, 2019

Format: Hardcover | Verified Purchase

Dr. Topol’s new book, Deep Medicine – How Artificial Intelligence Can Make Healthcare Human Again, is an encyclopedia of the emerging Fourth Industrial Age; a crystal ball of what is about to happen in the next era of healthcare. I’m impressed by the detailed references and touching personal and family stories.

Centers for Medicare & Medicaid Services (CMS) policy modifications in the past 10 months reveal sweeping changes that fortify Dr. Topol’s vision: May 2018 medical students can document for attending physicians in the health record (MLN MM10412), 2019 ancillary staff members and patients can document the History/medical interview into the health record, 2021 medical providers can document based only on Medical Decision Making or Time (Federal Register Nov, 23, 2018).

Part of making healthcare human is also making it fun. The joy of practicing medicine is about to return to healthcare delivery, as computers will be used to empower humanistic traits, not overburden medical professionals with clerical tasks. For patients: you will be heard, understood and personally treated. Deep Medicine is not a vision of what will happen in 50 years; much will start to be revealed within the next 5!

Bravo Dr. Topol!
Michael Warner, DO, CPC, CPCO, CPMA, AAPC Fellow

https://www.amazon.com/gp/product/1541644638/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=wwwsamharris03-20&creative=9325&linkCode=as2&creativeASIN=1541644638&linkId=e8e2d5410e9b5921f1e21883a9c84cff#customerReviews

 

AUDIT PODCASTS

  • The perspective of what it truly means to be an AI company and AI platform.

  • How MaxQ AI is reinventing the diagnostic process with AI in time sensitive, life threatening environments.

  • How EnvoyAI is working towards a zero-click approach for physicians to feel confident in their findings.

  • Recognizing the right questions to ask when training algorithms for more accurate results.

  • The value of having a powerful world-class image processing algorithm running on an extensible interoperable platform.

Join Jeff, Gene, and Kevin next time as they continue the conversation on the future of artificial intelligence in healthcare.

https://www.terarecon.com/blog/beyond-the-screen-episode-6-next-generation-ai-companies-providing-physicians-a-starting-point-in-ai?utm_campaign=AuntMinnie%20June%202019&utm_medium=email&utm_source=hs_email

Academic Gallup Poll: The Artificial Intelligence Age, June 2019.

New Northeastern-Gallup poll: People in the US, UK, and Canada want to keep up in the artificial intelligence age. They say employers, educators, and governments are letting them down. – News @ Northeastern

https://news.northeastern.edu/2019/06/27/new-northeastern-gallup-poll-people-in-the-us-uk-and-canada-want-to-keep-up-in-the-artificial-intelligence-age-they-say-employers-educators-and-governments-are-letting-them-down/

 

Dense Map of Artificial Intelligence Startups in Israel

 

Image Source: https://www.startuphub.ai/multinational-corporations-with-artificial-intelligence-research-and-development-centers-in-israel/

(See here for an interactive version of the infographic above).

https://www.forbes.com/sites/gilpress/2018/09/24/the-thriving-ai-landscape-in-israel-and-what-it-means-for-global-ai-competition/#577a107330c5

https://hackernoon.com/israels-artificial-intelligence-landscape-2018-83cdd4f04281

3.1 The Science

VIEW VIDEO

Max Tegmark lecture on Life 3.0 – Being Human in the age of Artificial Intelligence

https://www.youtube.com/watch?v=1MqukDzhlqA

 

3.1.1   World Medical Innovation Forum, Partners Innovations, ARTIFICIAL INTELLIGENCE | APRIL 8–10, 2019 | Westin, BOSTON

https://worldmedicalinnovation.org/agenda/

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/02/14/world-medical-innovation-forum-partners-innovations-artificial-intelligence-april-8-10-2019-westin-boston/

 

 

3.1.2   LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Wednesday, April 10, 2019

Real Time Coverage: Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/04/10/live-day-three-world-medical-innovation-forum-artificial-intelligence-boston-ma-usa-monday-april-10-2019/

 

 

3.1.3   LIVE Day Two – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Tuesday, April 9, 2019

Real Time Coverage: Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/04/09/live-day-two-world-medical-innovation-forum-artificial-intelligence-boston-ma-usa-monday-april-9-2019/

 

 

3.1.4   LIVE Day One – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 8, 2019

Real Time Coverage: Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/04/08/live-day-one-world-medical-innovation-forum-artificial-intelligence-westin-copley-place-boston-ma-usa-monday-april-8-2019/

 

 

3.1.5   2018 Annual World Medical Innovation Forum Artificial Intelligence April 23–25, 2018 Boston, Massachusetts  | Westin Copley Place https://worldmedicalinnovation.org/

Real Time Coverage: Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/2018-annual-world-medical-innovation-forum-artificial-intelligence-april-23-25-2018-boston-massachusetts-westin-copley-place/

 

 

3.1.6   Synopsis Days 1,2,3: 2018 Annual World Medical Innovation Forum Artificial Intelligence April 23–25, 2018 Boston, Massachusetts  | Westin Copley Place

Real Time Coverage: Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/04/26/synopsis-days-123-2018-annual-world-medical-innovation-forum-artificial-intelligence-april-23-25-2018-boston-massachusetts-westin-copley-place/

 

 

3.1.7   Interview with Systems Immunology Expert Prof. Shai Shen-Orr

Reporter: Aviva Lev-Ari, PhD, RN

https://tmrwedition.com/2018/07/19/interview-with-systems-immunology-expert-prof-shai-shen-orr/

 

 

3.1.8   Unique immune-focused AI model creates largest library of inter-cellular communications at CytoReason, used to predict 335 novel cell-cytokine interactions: new clues for drug development.

Reporter: Aviva Lev-Ari, PhD, RN

  • CytoReason features in #DeepKnowledgeVentures’ detailed report on AI in #drugdevelopment: https://lnkd.in/dKV2BB6

https://www.eurekalert.org/pub_releases/2018-06/c-uia061818.php

3.2 Technologies and Methodologies

 

3.2.1   R&D for Artificial Intelligence Tools & Applications: Google’s Research Efforts in 2018

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/01/16/rd-for-artificial-intelligence-tools-applications-googles-research-efforts-in-2018/

 

3.2.2   Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare?

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

 

 

3.2.3   N3xt generation carbon nanotubes

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/12/14/n3xt-generation-carbon-nanotubes/

 

3.2.4   Mindful Discoveries

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/01/28/mindful-discoveries/

 

 

3.2.5   Novel Discoveries in Molecular Biology and Biomedical Science

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/30/novel-discoveries-in-molecular-biology-and-biomedical-science/

 

3.2.6   Imaging of Cancer Cells

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/04/20/imaging-of-cancer-cells/

 

 

3.2.7   Retrospect on HistoScanning: an AI routinely used in diagnostic imaging for over a decade

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/06/22/retrospect-on-histoscanning-an-ai-routinely-used-in-diagnostic-imaging-for-over-a-decade/

 

 

3.2.8    Prediction of Cardiovascular Risk by Machine Learning (ML) Algorithm: Best-performing algorithms ranked by predictive capacity, measured as area under the ROC curve (AUC): 1st, quadratic discriminant analysis; 2nd, Naive Bayes; 3rd, neural networks, all far exceeding conventional risk-scoring methods in clinical use

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/07/04/prediction-of-cardiovascular-risk-by-machine-learning-ml-algorithm-best-performing-algorithm-by-predictive-capacity-had-area-under-the-roc-curve-auc-scores-1st-quadratic-discriminant-analysis/
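The ranking described in the entry above can be illustrated with a short sketch that trains the three named model families and compares them by AUC. This is a toy example on synthetic scikit-learn data, not the study's cohort or code, so the scores and ordering it produces are illustrative only.

```python
# Illustrative sketch: ranking classifiers by ROC AUC, the metric used in the cited study.
# Synthetic data; results do not reproduce the study's findings.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic binary-outcome dataset standing in for a risk-factor table
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "Quadratic discriminant analysis": QuadraticDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
    "Neural network (MLP)": MLPClassifier(hidden_layer_sizes=(32,),
                                          max_iter=1000, random_state=0),
}

aucs = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # roc_auc_score takes the predicted probability of the positive class
    aucs[name] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

for name, auc in sorted(aucs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC = {auc:.3f}")
```

The design point the study makes is that AUC, unlike accuracy, is threshold-free, so it ranks risk models by discrimination regardless of how the decision cutoff is later chosen.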

 

3.2.9   An Intelligent DNA Nanorobot to Fight Cancer by Targeting HER2 Expression

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/07/24/an-intelligent-dna-nanorobot-to-fight-cancer-by-targeting-her2-expression/

3.3   Clinical Aspects

 

Is AI ready for Medical Applications? – The Debate in August 2019 in Nature

 

Eric Topol (@EricTopol)

8/18/19, 2:17 PM

Why I’ve been writing #AI for medicine is long on promise, short of proof

nature.com/articles/s4159… @NatureMedicine

status update in this schematic, among many mismatches pic.twitter.com/mpifYFwlp8

 

The “inconvenient truth” about AI in healthcare

 

However, “the inconvenient truth” is that at present the algorithms that feature prominently in research literature are in fact not, for the most part, executable at the frontlines of clinical practice. This is for two reasons: first, these AI innovations by themselves do not re-engineer the incentives that support existing ways of working.2 A complex web of ingrained political and economic factors as well as the proximal influence of medical practice norms and commercial interests determine the way healthcare is delivered. Simply adding AI applications to a fragmented system will not create sustainable change.

Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.9 For example, an algorithm trained on mostly Caucasian patients is not expected to have the same accuracy when applied to minorities.10 In addition, such rigorous evaluation and re-calibration must continue after implementation to track and capture those patient demographics and practice patterns which inevitably change over time.11 Some of these issues can be addressed through external validation, the importance of which is not unique to AI, and it is timely that existing standards for prediction model reporting are being updated specifically to incorporate standards applicable to this end.12

In the United States, there are islands of aggregated healthcare data in the ICU,13 and in the Veterans Administration.14 These aggregated data sets have predictably catalyzed an acceleration in AI development; but without broader development of data infrastructure outside these islands it will not be possible to generalize these innovations.

https://www.nature.com/articles/s41746-019-0155-4
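The bias interrogation the excerpt calls for can be sketched as a per-cohort evaluation: compute the same performance metric separately for each patient subgroup and flag large gaps. Everything below (the cohort labels, simulated scores and noise levels) is hypothetical and for illustration only; in practice the scores would come from the deployed model and the cohorts from real patient demographics.

```python
# Minimal sketch of interrogating a model for bias: same metric, per cohort.
# All data here is simulated; cohort "B" is deliberately given noisier scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
cohort = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])  # e.g. majority / under-represented group
y_true = rng.integers(0, 2, size=n)                    # simulated outcomes

# Simulated model scores: informative for cohort A, weaker for cohort B
noise = np.where(cohort == "A", 0.3, 0.8)
y_score = np.clip(y_true + rng.normal(0, noise), 0, 1)

aucs = {}
for g in ["A", "B"]:
    mask = cohort == g
    aucs[g] = roc_auc_score(y_true[mask], y_score[mask])
    print(f"Cohort {g}: n={mask.sum()}, AUC={aucs[g]:.3f}")

# A large AUC gap between cohorts signals the model may not perform
# consistently across groups and needs re-training or re-calibration.
```

This is the simplest form of the check; the excerpt also notes it must be repeated after deployment, since patient demographics and practice patterns drift over time.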

3.3.1   9 AI-based initiatives catalyzing immunotherapy in 2018

By Tanima Bose

https://www.prescouter.com/2018/07/9-ai-based-initiatives-catalyzing-immunotherapy-in-2018/

 

 

3.3.2   mRNA Data Survival Analysis

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/06/18/mrna-data-survival-analysis/

 

 

3.3.3   Medcity Converge 2018 Philadelphia: Live Coverage @pharma_BI

Reporter: Stephen J. Williams

https://pharmaceuticalintelligence.com/2018/07/11/medcity-converge-2018-philadelphia-live-coverage-pharma_bi/

 

 

3.3.4   Live Coverage: MedCity Converge 2018 Philadelphia: AI in Cancer and Keynote Address

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2018/07/11/live-coverage-medcity-converge-2018-philadelphia-ai-in-cancer-and-keynote-address/

 

 

3.3.5   VIDEOS: Artificial Intelligence Applications for Cardiology

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/03/11/videos-artificial-intelligence-applications-for-cardiology/

 

 

3.3.6   Artificial Intelligence in Health Care and in Medicine: Diagnosis & Therapeutics

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/01/21/artificial-intelligence-in-health-care-and-in-medicine-diagnosis-therapeutics/

 

 

3.3.7   Digital Therapeutics: A Threat or Opportunity to Pharmaceuticals

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/03/18/digital-therapeutics-a-threat-or-opportunity-to-pharmaceuticals/

 

 

3.3.8   The 3rd STAT4ONC Annual Symposium, April 25-27, 2019, Hilton Hartford, CT, 315 Trumbull St., Hartford, CT 06103

Reporter: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2019/02/26/the-3rd-stat4onc-annual-symposium-april-25-27-2019-hilton-hartford-connecticut/

 

 

3.3.9   2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/05/10/2019-biotechnology-sector-and-artificial-intelligence-in-healthcare/

 

 

3.3.10   Artificial intelligence can be a useful tool to predict Alzheimer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/01/26/artificial-intelligence-can-be-a-useful-tool-to-predict-alzheimer/

 

 

3.3.11   Unlocking the Microbiome

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/02/07/unlocking-the-microbiome/

 

 

3.3.12   Biomarker Development

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/11/16/biomarker-development/

 

 

3.3.13   AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

 

 

3.3.14   AI App for People with Digestive Disorders

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/24/ai-app-for-people-with-digestive-disorders/

 

 

3.3.15   Sepsis Detection using an Algorithm More Efficient than Standard Methods

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/25/sepsis-detection-using-an-algorithm-more-efficient-than-standard-methods/

 

 

3.3.16   How Might Sleep Apnea Lead to Serious Health Concerns like Cardiac Disease and Cancer?

Author: Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2013/03/20/how-might-sleep-apnea-lead-to-serious-health-concerns-like-cardiac-and-cancers/

 

 

3.3.17   An Intelligent DNA Nanorobot to Fight Cancer by Targeting HER2 Expression

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/07/24/an-intelligent-dna-nanorobot-to-fight-cancer-by-targeting-her2-expression/

 

3.3.18   Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/07/26/artificial-intelligence-and-cardiovascular-disease/

 

3.3.19   Using A.I. to Detect Lung Cancer gets an A!

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/08/04/using-a-i-to-detect-lung-cancer-gets-an-a/

 

 

3.3.20   Complex rearrangements and oncogene amplification revealed by long-read DNA and RNA sequencing of a breast cancer cell line

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2019/08/14/complex-rearrangements-and-oncogene-amplification-revealed-by-long-read-dna-and-rna-sequencing-of-a-breast-cancer-cell-line/

 

3.3.21   Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

https://pharmaceuticalintelligence.com/2019/07/21/multiple-barriers-identified-which-may-hamper-use-of-artificial-intelligence-in-the-clinical-setting/

 

3.3.22   Deep Learning–Assisted Diagnosis of Cerebral Aneurysms

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

 

3.3.23   Artificial Intelligence Innovations in Cardiac Imaging

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/12/17/artificial-intelligence-innovations-in-cardiac-imaging/

 

3.4 Business and Legal

Image Source: https://www.linkedin.com/pulse/resources-artificial-intelligence-health-care-note-lev-ari-phd-rn/

 

3.4.1   McKinsey Top Ten Articles on Artificial Intelligence: 2018’s most popular articles – An executive’s guide to AI

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/01/21/mckinsey-top-ten-articles-on-artificial-intelligence-2018s-most-popular-articles-an-executives-guide-to-ai/

 

3.4.2   HOTTEST Artificial Intelligence Hub: Israel’s High Tech Industry – Why?

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/09/30/hottest-artificial-intelligence-hub-israels-high-tech-industry-why/

 

 

3.4.3   The Regulatory challenge in adopting AI

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/04/07/the-regulatory-challenge-in-adopting-ai/

 

 

3.4.4   Healthcare-focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

 

 

3.4.5   IBM’s Watson Health division – What will the Future look like?

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/04/24/ibms-watson-health-division-how-will-the-future-look-like/

 

 

3.4.6   HUBweek 2018, October 8-14, 2018, Greater Boston – “We The Future” – coming together, breaking down barriers, and convening across disciplinary lines to shape our future

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/10/08/hubweek-2018-october-8-14-2018-greater-boston-we-the-future-coming-together-of-breaking-down-barriers-of-convening-across-disciplinary-lines-to-shape-our-future/

 

 

3.4.7   Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care?

Reporter: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2019/01/03/role-of-informatics-in-precision-medicine-can-it-drive-the-next-cost-efficiencies-in-oncology-care/

 

 

3.4.8   Healthcare conglomeration to access Big Data and lower costs

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/01/13/healthcare-conglomeration-to-access-big-data-and-lower-costs/

 

3.4.9   Linguamatics announces the official launch of its AI self-service text-mining solution for researchers.

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/linguamatics-announces-the-official-launch-of-its-ai-self-service-text-mining-solution-for-researchers/

 

3.4.10   Future of Big Data for Societal Transformation

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/12/14/future-of-big-data-for-societal-transformation/

 

 

3.4.11   Deloitte Analysis 2019 Global Life Sciences Outlook

https://www2.deloitte.com/global/en/pages/life-sciences-and-healthcare/articles/global-life-sciences-sector-outlook.html

https://www.cioapplications.com/news/making-a-breakthrough-in-drug-discovery-with-ai-nid-3114.html

https://healthcare.cioapplications.com/cioviewpoint/leveraging-technologies-to-better-position-the-business-nid-1060.html

 

 

3.4.12   OpenAI: $1 Billion to Create Artificial Intelligence Without Profit Motive by Who is Who in the Silicon Valley

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2015/12/26/openai-1-billion-to-create-artificial-intelligence-without-profit-motive-by-who-is-who-in-the-silicon-valley/

 

 

3.4.13   The Health Care Benefits of Combining Wearables and AI

Reporter: Gail S. Thornton, M.A.

https://pharmaceuticalintelligence.com/2019/07/02/the-health-care-benefits-of-combining-wearables-and-ai/

 

 

3.4.14   These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade.

Reporter: Gail S. Thornton, M.A.

https://pharmaceuticalintelligence.com/2019/07/02/top-12-artificial-intelligence-innovations-disrupting-healthcare-by-2020/

 

 

3.4.15   Forbes Opinion: 13 Industries Soon To Be Revolutionized By Artificial Intelligence

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/07/31/forbes-opinion-13-industries-soon-to-be-revolutionized-by-artificial-intelligence/

 

3.4.16   AI Acquisitions by Big Tech Firms Are Happening at a Blistering Pace: 2019 Recent Data by CBI Insights

Reporter: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2019/12/11/ai-acquisitions-by-big-tech-firms-are-happening-at-a-blistering-pace-2019-recent-data-by-cbiinsights/

 

3.5 Machine Learning (ML) Algorithms harnessed for Medical Diagnosis: Pattern Recognition & Prediction of Disease Onset

Introduction by Dr. Dror Nir

 

Icahn School of Medicine at Mount Sinai to Establish World Class Center for Artificial Intelligence – Hamilton and Amabel James Center for Artificial Intelligence and Human Health

First center in New York to seamlessly integrate artificial intelligence, data science and genomic screening to advance clinical practice and patient outcomes.

Integrative Omics and Multi-Scale Disease Modeling— Artificial intelligence and machine learning approaches developed at the Icahn Institute have been extensively used for identification of novel pathways, drug targets, and therapies for complex human diseases such as cancer, Alzheimer’s, schizophrenia, obesity, diabetes, inflammatory bowel disease, and cardiovascular disease. Researchers will combine insights in genomics—including state-of-the-art single-cell genomic data—with ‘omics,’ such as epigenomics, pharmacogenomics, and exposomics, and integrate this information with patient health records and data originating from wearable devices in order to model the molecular, cellular, and circuit networks that facilitate disease progression. “Novel data-driven predictions will be tightly integrated with high-throughput experiments to validate the therapeutic potential of each prediction,” said Adam Margolin, PhD, Professor and Chair of the Department of Genetics and Genomic Sciences and Senior Associate Dean of Precision Medicine at Mount Sinai. “Clinical experts in key disease areas will work side-by-side with data scientists to translate the most promising therapies to benefit patients. We have the potential to transform the way care givers deliver cost-effective, high quality health care to their patients, far beyond providing simple diagnoses. Mount Sinai wants to be on the frontlines of discovery.”

Precision Imaging—Researchers will use artificial intelligence to enhance the diagnostic power of imaging technologies—X-ray, MRI, CT, and PET—and molecular imaging, and accelerate the development of therapies. “We see a huge potential in using algorithms to automate the image interpretation and to acquire images much more quickly at high resolution – so that we can better detect disease and make it less burdensome for the patient,” said Zahi Fayad, PhD, Director of the Translational and Molecular Imaging Institute, and Vice Chair for Research for the Department of Radiology, at Mount Sinai. Dr. Fayad plans to broaden the scope of the Translational and Molecular Imaging Institute by recruiting more engineers and scientists who will create new methods to aid in the diagnosis and early detection of disease, treatment protocol development, drug development, and personalized medicine. Dr. Fayad added, “In addition to AI, we envision advance capabilities in two important areas: computer vision and augmented reality, and next generation medical technology enabling development of new medical devices, sensors and robotics.”

https://www.mountsinai.org/about/newsroom/2019/icahn-school-of-medicine-at-mount-sinai-to-establish-world-class-center-for-artificial-intelligence-hamilton-and-amabel-james-center-for-artificial-intelligence-and-human-health

 

A comprehensive overview of ML algorithms applied in health care is presented in the following article:

Survey of Machine Learning Algorithms for Disease Diagnostic

https://www.scirp.org/journal/PaperInformation.aspx?PaperID=73781

 

3.5.1 Cases in Pathology 

 

3.5.1.1   Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

 

3.5.2 Cases in Radiology

 

3.5.2.1   Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/29/cardiac-mri-imaging-breakthrough-the-first-ai-assisted-cardiac-mri-scan-solution-heartvista-receives-fda-510k-clearance-for-one-click-cardiac-mri-package/

 

3.5.2.2   Disentangling molecular alterations from water-content changes in the aging human brain using quantitative MRI

Reporter: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/08/01/disentangling-molecular-alterations-from-water-content-changes-in-the-aging-human-brain-using-quantitative-mri/

 

3.5.2.3   Showcase: How Deep Learning could help radiologists spend their time more efficiently

Reporter and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/08/22/showcase-how-deep-learning-could-help-radiologists-spend-their-time-more-efficiently/

 

3.5.2.4   CancerBase.org – The Global HUB for Diagnoses, Genomes, Pathology Images: A Real-time Diagnosis and Therapy Mapping Service for Cancer Patients – Anonymized Medical Records accessible to anyone on Earth

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/28/cancerbase-org-the-global-hub-for-diagnoses-genomes-pathology-images-a-real-time-diagnosis-and-therapy-mapping-service-for-cancer-patients-anonymized-medical-records-accessible-to/

 

3.5.2.5   Applying AI to Improve Interpretation of Medical Imaging

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/05/28/applying-ai-to-improve-interpretation-of-medical-imaging/

 

 

3.5.2.6   Imaging: seeing or imagining? (Part 2)

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/04/07/imaging-seeing-or-imagining-part-2-2/

 

 

3.5.3 Cases in Prediction Cancer Onset

 

3.5.3.1  A Deep Learning Mammography-based Model for Improved Breast Cancer Risk Prediction

 

3.5.3.2   Comparison of a Deep Learning Risk Score and Standard Mammographic Density Score for Breast Cancer Risk Prediction

Karin Dembrower, Yue Liu, Hossein Azizpour, Martin Eklund, Kevin Smith, Peter Lindholm, Fredrik Strand

Published Online: Dec 17 2019 https://doi.org/10.1148/radiol.2019190872

See editorial by Manisha Bahl

 

Results

A total of 2283 women, 278 of whom were later diagnosed with breast cancer, were evaluated. The age at mammography (mean, 55.7 years vs 54.6 years; P < .001), the dense area (mean, 38.2 cm2 vs 34.2 cm2; P < .001), and the percentage density (mean, 25.6% vs 24.0%; P < .001) were higher among women diagnosed with breast cancer than in those without a breast cancer diagnosis. The odds ratios and areas under the receiver operating characteristic curve (AUCs) were higher for the age-adjusted DL risk score than for dense area and percentage density: 1.56 (95% confidence interval [CI]: 1.48, 1.64; AUC, 0.65), 1.31 (95% CI: 1.24, 1.38; AUC, 0.60), and 1.18 (95% CI: 1.11, 1.25; AUC, 0.57), respectively (P < .001 for AUC). The false-negative rate was lower: 31% (95% CI: 29%, 34%), 36% (95% CI: 33%, 39%; P = .006), and 39% (95% CI: 37%, 42%; P < .001); this difference was most pronounced for more aggressive cancers.

Conclusion

Compared with density-based models, a deep neural network can more accurately predict which women are at risk for future breast cancer, with a lower false-negative rate for more aggressive cancers.
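The AUC values compared in the results above measure discrimination: the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case (the Mann-Whitney formulation). A minimal sketch, using toy scores rather than the study's data:

```python
from itertools import product

def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one
    (ties counted as 0.5) - the Mann-Whitney formulation."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(scores_pos, scores_neg))
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores (hypothetical, not the study data): a score that
# separates cases from non-cases more cleanly yields a higher AUC.
cases = [0.9, 0.8, 0.6, 0.55]
controls = [0.7, 0.5, 0.4, 0.3]
print(round(auc(cases, controls), 3))  # 0.875
```

A score with AUC 0.65 (the DL risk score) therefore ranks a true case above a non-case 65% of the time, versus 57% for percentage density — a modest but real gain in discrimination.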


 

Summary of ML in Medicine by Dr. Dror Nir

See Introduction to 3.5, above

 

Part 3: Summary – AI in Medicine – Voice of Aviva Lev-Ari & Professor Williams  

AI applications in healthcare

The potential of AI to improve the healthcare delivery system is limitless. It offers a unique opportunity to make sense out of clinical data to enable fully integrated healthcare that is more predictive and precise. Getting all aspects of AI-enabled solutions right requires extensive collaboration between clinicians, data scientists, interaction designers, and other experts. Here are four applications of artificial intelligence to transform healthcare delivery:

1. Improve operational efficiency and performance

On a departmental and enterprise level, the ability of AI to sift through large amounts of data can help hospital administrators to optimize performance, drive productivity, and improve the use of existing resources, generating time and cost savings. For example, in a radiology department, AI could make a difference in the management of referrals, patient scheduling, and exam preparations. Improvements here can help to enhance patient experience and will allow a more effective and efficient use of the facilities at examination sites.

2. Aiding clinical decision support

AI-enabled solutions can help to combine large amounts of clinical data to generate a more holistic view of patients. This supports healthcare providers in their decision making, leading to better patient outcomes and improved population health. “The need for insights and for those insights to lead to clinical operations support is tremendous,” says Dr. Smythe. “Whether that is the accuracy of interventions or the effective use of manpower – these are things that physicians struggle with. That is the imperative.”

3. Enabling population health management

Combining clinical decision support systems with patient self-management, population health management can also benefit from AI. Using predictive analytics with patient populations, healthcare providers will be able to take preventative action, reduce health risk, and save unnecessary costs.

As the population ages, so does a desire to age in place when possible, and to maximize not only disease management, but quality of life as we do so. The possibility of aggregating, analyzing and activating health data from millions of consumers will enable hospitals to see how socio-economic, behavioral, genetic and clinical factors correlate and can offer more targeted, preventative healthcare outside the four walls of the hospital.

4. Empowering consumers, improving patient care

As recently as 2015, patients reported physically carrying x-rays, test results, and other critical health data from one healthcare provider’s office to another. The burden of multiple referrals, explaining symptoms to new physicians, and discovering gaps in their medical history was all too real. Patients now demand more personalized, sophisticated, and convenient healthcare services.

The great motivation behind AI in healthcare is that increasingly, as patients become more engaged with their own healthcare and better understand their own needs, healthcare will have to take steps towards them and meet them where they are, providing them with health services when they need them, not just when they are ill.

SOURCE

https://www.usa.philips.com/healthcare/nobounds/four-applications-of-ai-in-healthcare?origin=1_us_en_auntminnie_aicommunity

 

Our Summary for AI in Medicine presents to the eReader the results of the 2020 survey on that topic; the live links will take the eReader to the report itself. The reference is provided below.

  • AI in Healthcare 2020 Leadership Survey Report: About the Survey

The AI in Healthcare team embarked on this survey to gain a deeper understanding of the current state of artificial and augmented intelligence in use today, and planned, across healthcare in the next few years. We polled readers of AI in Healthcare, AIin.Healthcare and sister brand HealthExec.com over 2 months. All data is presented in this report in aggregate, with individual responses remaining anonymous.

The content in this report reflects the input of 1,238 physicians, executives, IT and administrative leaders in healthcare, medical devices and IT and software development from across the globe, with 75 percent based in the United States. The report focuses on the responses of providers and professionals at the helm of healthcare systems, integrated delivery networks, academic medical centers, hospitals, imaging centers and physician groups across the U.S. For a deeper dive into survey demographics, click here.

Some respondents chose to share more specific demographics that help us better get to know our survey base. Those 165 healthcare leaders work for 38 unique health systems, hospitals, physician groups and imaging or surgery centers, across 39 states and the District of Columbia. They are large, small and mid-sized; for-profit, not-for-profit, academic and government-owned. Respondents, too, hail from all levels of leadership. Here are some of the titles of those who chimed in (and we are thankful they did): CEO, CFO, CMO, CIO, chief innovation officer, chief data officer, chief administrative officer, medical director of quality, senior VP of quality and innovation officer, system director of transformation, VP of service line development, and plenty of physicians, directors of ICU, imaging, cath lab and surgery, nurses and technologists.

In this report we unpack current trends in AI and machine learning, drill into data from various perspectives such as the C-suite and the physician leader, and learn how healthcare systems are using and planning to use AI. Turn the page and see where we are and where we’re going.


Author: Mary C. Tierney, MS, Chief Content Officer, AI in Healthcare magazine and AIin.Healthcare

SOURCE

https://www.aiin.healthcare/sponsored/9667/topics/ai-healthcare-2020-leadership-survey-report/ai-healthcare-2020-leadership-3

Read Full Post »


AI Acquisitions by Big Tech Firms Are Happening at a Blistering Pace: 2019 Recent Data by CBI Insights

Reporter: Stephen J. Williams, Ph.D.

A recent report from CB Insights shows the rapid pace at which the biggest tech firms (Google, Apple, Microsoft, Facebook, and Amazon) are acquiring artificial intelligence (AI) startups, potentially compounding the AI talent shortage that exists.

The link to the report and free download is given here at https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/

Part of the report:

TECH GIANTS LEAD IN AI ACQUISITIONS

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, & Apple (FAMGA) have all been aggressively acquiring AI startups in the last decade.

Among the FAMGA companies, Apple leads the way, making 20 total AI acquisitions since 2010. It is followed by Google (the frontrunner from 2012 to 2016) with 14 acquisitions and Microsoft with 10.

Apple’s AI acquisition spree, which has helped it overtake Google in recent years, was essential to the development of new iPhone features. For example, FaceID, the technology that allows users to unlock their iPhone X just by looking at it, stems from Apple’s M&A moves in chips and computer vision, including the acquisition of AI company RealFace.

In fact, many of FAMGA’s prominent products and services came out of acquisitions of AI companies — such as Apple’s Siri, or Google’s contributions to healthcare through DeepMind.

That said, tech giants are far from the only companies snatching up AI startups.

Since 2010, there have been 635 AI acquisitions, as companies aim to build out their AI capabilities and capture sought-after talent (as of 8/31/2019).

The pace of these acquisitions has also been increasing. AI acquisitions saw a more than 6x uptick from 2013 to 2018, including last year’s record of 166 AI acquisitions — up 38% year-over-year.

In 2019, there have already been 140+ acquisitions (as of August), putting the year on track to beat the 2018 record at the current run rate.
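The run-rate claim above can be checked with simple arithmetic: 140 deals in the first 8 months of 2019 annualizes to 210, comfortably above the 2018 record of 166.

```python
# Annualized run rate for 2019 AI acquisitions: 140 deals through August
# (8 months), versus the 2018 full-year record of 166.
deals_so_far, months_elapsed = 140, 8
record_2018 = 166

annualized = deals_so_far / months_elapsed * 12
print(annualized)  # 210.0
print(annualized > record_2018)  # True: on track to beat the record
```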

Part of this increase in the pace of AI acquisitions can be attributed to a growing diversity in acquirers. Where once AI was the exclusive territory of major tech companies, today, smaller AI startups are becoming acquisition targets for traditional insurance, retail, and healthcare incumbents.

For example, in February 2018, Roche Holding acquired New York-based cancer startup Flatiron Health for $1.9B — one of the largest M&A deals in artificial intelligence. This year, Nike acquired AI-powered inventory management startup Celect, Uber acquired computer vision company Mighty AI, and McDonald’s acquired personalization platform Dynamic Yield.

Despite the increased number of acquirers, however, tech giants are still leading the charge. Acquisitive tech giants have emerged as powerful global corporations with a competitive advantage in artificial intelligence, and startups have played a pivotal role in helping these companies scale their AI initiatives.

Apple, Google, Microsoft, Facebook, Intel, and Amazon are the most active acquirers of AI startups, each acquiring 7+ companies.

To read more on recent Acquisitions in the AI space please see the following articles on this Open Access Online Journal

Diversification and Acquisitions, 2001 – 2015: Trail known as “Google Acquisitions” – Understanding Alphabet’s Acquisitions: A Sector-By-Sector Analysis

Clarivate Analytics expanded IP data leadership by new acquisition of the leading provider of intellectual property case law and analytics Darts-ip

2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Forbes Opinion: 13 Industries Soon To Be Revolutionized By Artificial Intelligence

Artificial Intelligence and Cardiovascular Disease

Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

 

Read Full Post »


Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Cardiology is a vast field that focuses on a large number of diseases specifically dealing with the heart, the circulatory system, and its functions. Similar symptomatologies and diagnostic features may be present in an individual, making it difficult for a doctor to easily isolate the actual heart-related problem. Consequently, the use of artificial intelligence aims to relieve doctors of this hurdle and deliver better-quality care to patients. Results of screening tests such as echocardiograms, MRIs, or CT scans have long been proposed for analysis using more advanced computational techniques. As such, while artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

 

The continuous development of the technological sector has enabled the industry to merge with medicine in order to create new integrated, reliable, and efficient methods of providing quality health care. One of the ongoing trends in cardiology at present is the proposed utilization of artificial intelligence (AI) in augmenting and extending the effectiveness of the cardiologist. This is because AI or machine-learning would allow for an accurate measure of patient functioning and diagnosis from the beginning up to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology aims to focus on research and development, clinical practice, and population health. Created to be an all-in-one mechanism in cardiac healthcare, AI technologies incorporate complex algorithms in determining relevant steps needed for a successful diagnosis and treatment. The role of artificial intelligence specifically extends to the identification of novel drug therapies, disease stratification or statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectivity and efficiency.

 

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions. This is done by feeding the computer information from hundreds or thousands of patients, plus instructions (an algorithm) on how to use that information. This information is heart scans, genetic and other test results, and how long each patient survived. These scans are in exquisite detail and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer starts to work out which factors affected the patients’ outlook, so it can make predictions about other patients.
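The training loop described in the paragraph above (feed the computer labeled patient data plus an algorithm, let it work out which factors affect outcomes) can be illustrated with a minimal logistic-regression sketch. The features and outcomes below are toy values, not real patient data:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic-regression training loop: the computer is 'trained'
    on patient features (X) and known outcomes (y), learning a weight for
    each factor."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted risk
            err = p - yi                     # prediction error
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy data (hypothetical): [normalized scan measurement, test result]
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]  # 1 = poor outcome observed
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])
```

Once trained, the same weights can score a new, unseen patient — this is the "making predictions about other patients" step the paragraph describes.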

 

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These are based on a series of variables such as weight, age, and lifestyle, but they do not always have the desired levels of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients, published in 2017. The researchers utilized cardiac MRI-based algorithms coupled with a 3D systolic cardiac motion pattern to accurately predict the health outcomes of patients with pulmonary hypertension. The experiment proved successful, with the technology able to pick up 30,000 points within the heart activity of 250 patients. With the success of this study, as well as the promise of other research on artificial intelligence, cardiology is moving towards a more technological practice.

 

One study was conducted in Finland where researchers enrolled 950 patients complaining of chest pain, who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following their initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records including sex, age, smoking status and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables, and was able to learn how the imaging data interacted and identify the patterns which preceded death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was modest, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. The predictive performance increased significantly when CCTA scan data was added to clinical and PET data, with accuracy of 95.4%.

 

Findings from another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk for its development in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by the presence of a weak heart pump with a risk of overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity; however, it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded values for the area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, in patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.
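As a back-of-the-envelope check, sensitivity, specificity and accuracy of the kind quoted above all derive from the same confusion-matrix counts. The counts below are hypothetical, chosen only to land near the reported percentages; they are not the study's actual data:

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts:
    tp/fn = true/false results among diseased, tn/fp among healthy."""
    sens = tp / (tp + fn)                 # fraction of diseased caught
    spec = tn / (tn + fp)                 # fraction of healthy cleared
    acc = (tp + tn) / (tp + fn + tn + fp) # overall fraction correct
    return sens, spec, acc

# Hypothetical counts landing near the figures quoted above
# (sensitivity 86.3%, specificity 85.7%) - not the study's actual table.
sens, spec, acc = screening_metrics(tp=863, fn=137, tn=857, fp=143)
print(round(sens, 3), round(spec, 3), round(acc, 2))  # 0.863 0.857 0.86
```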

 

In recent years, the analysis of big-data databases combined with deep learning has gradually come to play an important role in biomedical technology. Research on the development and application of artificial intelligence is extensive across medical-record analysis, image analysis, single-nucleotide-polymorphism difference analysis, and more. Clinically, patients may receive a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound and optical coherence tomography, and electrophysiology. By using an artificial intelligence deep learning system, the investigators hope not only to improve the diagnostic rate but also to more accurately predict patients’ recovery and improve medical quality in the near future.

 

The primary issue with using artificial intelligence in cardiology, or in any field of medicine for that matter, is the ethical questions it raises. Prior to their practice, physicians and healthcare professionals swear the Hippocratic Oath, a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are effectively left under the care of machines rather than doctors. Furthermore, as machines may malfunction, patient safety is on the line at all times. As such, while medical practitioners see the promise of artificial intelligence, they remain heavily constrained regarding its use, safety, and appropriateness in medical practice.

 

The issues and challenges facing technological innovation in cardiology are being met by current research aiming to make artificial intelligence easily accessible and available for all. With that in mind, various projects are currently under study. For example, wearable AI technology aims to develop a mechanism by which patients and doctors could easily access and monitor cardiac activity remotely. An ideal instrument for monitoring, wearable AI technology ensures real-time updates, monitoring, and evaluation. Another direction of AI in cardiology is using the technology to record and validate empirical data to further analyze symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology are aiming to simplify and expand the scope of knowledge in the field for better patient care and treatment outcomes.

 

References:

 

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx

 

https://www.bhf.org.uk/informationsupport/heart-matters-magazine/research/artificial-intelligence

 

https://www.medicaldevice-network.com/news/heart-attack-artificial-intelligence/

 

https://www.nature.com/articles/s41569-019-0158-5

 

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711980/

 

www.j-pcs.org/article.asp

http://www.onlinejacc.org/content/71/23/2668

http://www.scielo.br/pdf/ijcs/v30n3/2359-4802-ijcs-30-03-0187.pdf

 

https://www.escardio.org/The-ESC/Press-Office/Press-releases/How-artificial-intelligence-is-tackling-heart-disease-Find-out-at-ICNC-2019

 

https://clinicaltrials.gov/ct2/show/NCT03877614

 

https://www.europeanpharmaceuticalreview.com/news/82870/artificial-intelligence-ai-heart-disease/

 

https://www.frontiersin.org/research-topics/10067/current-and-future-role-of-artificial-intelligence-in-cardiac-imaging

 

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx

 

https://www.sciencedaily.com/releases/2019/05/190513104505.htm

 

Read Full Post »


scPopCorn: A New Computational Method for Subpopulation Detection and their Comparative Analysis Across Single-Cell Experiments

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Present day technological advances have facilitated unprecedented opportunities for studying biological systems at single-cell level resolution. For example, single-cell RNA sequencing (scRNA-seq) enables the measurement of transcriptomic information of thousands of individual cells in one experiment. Analyses of such data provide information that was not accessible using bulk sequencing, which can only assess average properties of cell populations. Single-cell measurements, however, can capture the heterogeneity of a population of cells. In particular, single-cell studies allow for the identification of novel cell types, states, and dynamics.

 

One of the most prominent uses of the scRNA-seq technology is the identification of subpopulations of cells present in a sample and comparing such subpopulations across samples. Such information is crucial for understanding the heterogeneity of cells in a sample and for comparative analysis of samples from different conditions, tissues, and species. A frequently used approach is to cluster every dataset separately, inspect marker genes for each cluster, and compare these clusters in an attempt to determine which cell types were shared between samples. This approach, however, relies on the existence of predefined or clearly identifiable marker genes and their consistent measurement across subpopulations.
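The "cluster each dataset separately, then compare clusters" baseline described above can be sketched as matching cluster centroids across experiments by similarity; scPopCorn's contribution is to replace this two-step procedure with a joint optimization. The expression profiles below are toy values, not real scRNA-seq data:

```python
def centroid(cells):
    """Mean expression profile of a cluster of cells."""
    n, d = len(cells), len(cells[0])
    return [sum(c[i] for c in cells) / n for i in range(d)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def map_clusters(clusters_a, clusters_b):
    """Match each cluster in experiment A to its most similar cluster in B
    by centroid cosine similarity - the naive two-step approach the text
    contrasts with scPopCorn's joint optimization."""
    cents_a = {k: centroid(v) for k, v in clusters_a.items()}
    cents_b = {k: centroid(v) for k, v in clusters_b.items()}
    return {ka: max(cents_b, key=lambda kb: cosine(ca, cents_b[kb]))
            for ka, ca in cents_a.items()}

# Toy expression profiles (hypothetical): two experiments, two cell types,
# two genes per cell. Type 1 expresses gene 0; type 2 expresses gene 1.
exp_a = {"A1": [[5, 1], [6, 0]], "A2": [[0, 7], [1, 6]]}
exp_b = {"B1": [[0, 5], [1, 6]], "B2": [[6, 1], [5, 0]]}
print(map_clusters(exp_a, exp_b))  # {'A1': 'B2', 'A2': 'B1'}
```

The weakness the text points out is visible here: this mapping is only as good as each experiment's independent clustering, whereas a joint method can borrow signal across experiments while forming the clusters themselves.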

 

Although the aligned data can then be clustered to reveal subpopulations and their correspondence, solving the subpopulation-mapping problem by performing global alignment first and clustering second overlooks the original information about subpopulations existing in each experiment. In contrast, an approach addressing this problem directly might represent a more suitable solution. With this in mind, the researchers developed a computational method, single-cell subpopulations comparison (scPopCorn), that allows for comparative analysis of two or more single-cell populations.

 

The performance of scPopCorn was tested in three distinct settings. First, its potential was demonstrated in identifying and aligning subpopulations from human and mouse pancreatic single-cell data. Next, scPopCorn was applied to the task of aligning biological replicates of mouse kidney single-cell data, where it outperformed previously published tools. Finally, it was applied to compare populations of cells from cancer and healthy brain tissues, revealing the relation of neoplastic cells to neural cells and astrocytes. As a result of this integrative approach, scPopCorn provides a powerful tool for comparative analysis of single-cell populations.

 

scPopCorn is, in essence, a computational method for identifying subpopulations of cells within individual single-cell experiments and mapping these subpopulations across experiments. Unlike other approaches, scPopCorn performs population identification and mapping simultaneously by optimizing a function that combines both objectives. When applied to complex biological data, scPopCorn outperforms previous methods. However, it should be kept in mind that scPopCorn assumes the input single-cell data consist of separable subpopulations; it is not designed for comparative analysis of single-cell trajectory datasets that do not fulfill this constraint.
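scPopCorn itself solves the identification and mapping tasks jointly; as a much simpler illustration of the mapping half of the problem only, the sketch below takes cluster centroids (mean expression profiles) from two independently clustered experiments and greedily pairs them by Pearson correlation. This is not the scPopCorn algorithm; the function names and toy data are our own.

```python
# Simplified illustration (NOT the scPopCorn algorithm): pair subpopulations
# of two single-cell experiments by correlating their centroid profiles.

def pearson(a, b):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def match_clusters(centroids_a, centroids_b):
    """Greedily pair clusters of experiment A with clusters of experiment B."""
    scored = sorted(
        ((pearson(ca, cb), i, j)
         for i, ca in enumerate(centroids_a)
         for j, cb in enumerate(centroids_b)),
        reverse=True)  # best cross-experiment matches first
    pairs, used_a, used_b = [], set(), set()
    for score, i, j in scored:
        if i not in used_a and j not in used_b:
            pairs.append((i, j, score))
            used_a.add(i)
            used_b.add(j)
    return pairs

# Toy mean-expression profiles (rows: clusters, columns: 4 genes).
exp_a = [[9.0, 1.0, 0.5, 0.2], [0.3, 8.0, 7.5, 0.1]]
exp_b = [[0.2, 7.0, 8.0, 0.3], [8.5, 0.8, 0.4, 0.1]]
print(match_clusters(exp_a, exp_b))  # cluster A0 pairs with B1, A1 with B0
```

A method like scPopCorn improves on this kind of two-stage pipeline precisely because it does not commit to per-experiment clusters before mapping them.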

 

Several innovations developed in this work contributed to the performance of scPopCorn. First, unifying the above-mentioned tasks into a single problem statement allowed the signal from different experiments to be integrated while subpopulations were identified within each experiment. This incorporation helps reduce biological and experimental noise. The researchers believe that the ideas introduced in scPopCorn not only enabled the design of a highly accurate subpopulation identification and mapping approach, but can also provide a stepping stone for other tools that interrogate the relationships between single-cell experiments.

 

References:

 

https://www.sciencedirect.com/science/article/pii/S2405471219301887

 

https://www.tandfonline.com/doi/abs/10.1080/23307706.2017.1397554

 

https://ieeexplore.ieee.org/abstract/document/4031383

 

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0927-y

 

https://www.sciencedirect.com/science/article/pii/S2405471216302666

 

 

Read Full Post »


Tweets, Pictures and Retweets at 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, MIT by @pharma_BI and @AVIVA1950 for #KIsymposium PharmaceuticalIntelligence.com and Social Media

 

Pictures taken in Real Time

 

Notification from Twitter.com on June 14, 2019 and in the 24 hours following the symposium

 

  1. Replying to

    It was an incredibly touching and “metzamrer” (thrilling) surprise to meet you at MIT

  2. Amazing event @avivregev @reginabarzilay @pharma_BI Breakthrough in

  3. ’s machine learning tool characterizes proteins, which are biomarkers of disease development and progression. Scientists can know more about their relationship to specific diseases and can intervene earlier and more precisely.

  4. learning and are undergoing dramatic changes and hold great promise for cancer research, diagnostics, and therapeutics. @KIinstitute by

  5. identification in the will depend on highly

  6. this needed to be done a long time ago

 

Tweets by @pharma_BI and by @AVIVA1950

&

Retweets and replies by @pharma_BI and @AVIVA1950

  1. eProceedings 18th Symposium 2019 covered in Amazing event, Keynote best talks @avivregev @reginabarzelay

  2. Top lectures by @reginabarzilay @avivaregev

  3. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PM ET, MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  4.   Retweeted

    Einstein, Curie, Bohr, Planck, Heisenberg, Schrödinger… was this the greatest meeting of minds, ever? Some of the world’s most notable physicists participated in the 1927 Solvay Conference. In fact, 17 of the 29 scientists attending were or became Laureates.

  5.   Retweeted

    identification in the will depend on highly
 

Read Full Post »


Reported by Dror Nir, PhD

Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model

Allison Park, BA; Chris Chute, BS; Pranav Rajpurkar, MS; et al. Original Investigation, Health Informatics, June 7, 2019, JAMA Netw Open. 2019;2(6):e195600. doi:10.1001/jamanetworkopen.2019.5600

Key Points

Question  How does augmentation with a deep learning segmentation model influence the performance of clinicians in identifying intracranial aneurysms from computed tomographic angiography examinations?

Findings  In this diagnostic study of intracranial aneurysms, a test set of 115 examinations was reviewed once with model augmentation and once without in a randomized order by 8 clinicians. The clinicians showed significant increases in sensitivity, accuracy, and interrater agreement when augmented with neural network model–generated segmentations.

Meaning  This study suggests that the performance of clinicians in the detection of intracranial aneurysms can be improved by augmentation using deep learning segmentation models.

 

Abstract

Importance  Deep learning has the potential to augment clinician performance in medical imaging interpretation and reduce time to diagnosis through automated segmentation. Few studies to date have explored this topic.

Objective  To develop and apply a neural network segmentation model (the HeadXNet model) capable of generating precise voxel-by-voxel predictions of intracranial aneurysms on head computed tomographic angiography (CTA) imaging to augment clinicians’ intracranial aneurysm diagnostic performance.

Design, Setting, and Participants  In this diagnostic study, a 3-dimensional convolutional neural network architecture was developed using a training set of 611 head CTA examinations to generate aneurysm segmentations. Segmentation outputs from this support model on a test set of 115 examinations were provided to clinicians. Between August 13, 2018, and October 4, 2018, 8 clinicians diagnosed the presence of aneurysm on the test set, both with and without model augmentation, in a crossover design using randomized order and a 14-day washout period. Head and neck examinations performed between January 3, 2003, and May 31, 2017, at a single academic medical center were used to train, validate, and test the model. Examinations positive for aneurysm had at least 1 clinically significant, nonruptured intracranial aneurysm. Examinations with hemorrhage, ruptured aneurysm, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, surgical clips, coils, catheters, or other surgical hardware were excluded. All other CTA examinations were considered controls.

Main Outcomes and Measures  Sensitivity, specificity, accuracy, time, and interrater agreement were measured. Metrics for clinician performance with and without model augmentation were compared.

Results  The data set contained 818 examinations from 662 unique patients with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms. The 8 clinicians reading the test set ranged in experience from 2 to 12 years. Augmenting clinicians with artificial intelligence–produced segmentation predictions resulted in clinicians achieving statistically significant improvements in sensitivity, accuracy, and interrater agreement when compared with no augmentation. The clinicians’ mean sensitivity increased by 0.059 (95% CI, 0.028-0.091; adjusted P = .01), mean accuracy increased by 0.038 (95% CI, 0.014-0.062; adjusted P = .02), and mean interrater agreement (Fleiss κ) increased by 0.060, from 0.799 to 0.859 (adjusted P = .05). There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16) or time to diagnosis (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19).

Conclusions and Relevance  The deep learning model developed successfully detected clinically significant intracranial aneurysms on CTA. This suggests that integration of an artificial intelligence–assisted diagnostic model may augment clinician performance with dependable and accurate predictions and thereby optimize patient care.

Introduction

Diagnosis of unruptured aneurysms is a critically important clinical task: intracranial aneurysms occur in 1% to 3% of the population and account for more than 80% of nontraumatic life-threatening subarachnoid hemorrhages.1 Computed tomographic angiography (CTA) is the primary, minimally invasive imaging modality currently used for diagnosis, surveillance, and presurgical planning of intracranial aneurysms,2,3 but interpretation is time consuming even for subspecialty-trained neuroradiologists. Low interrater agreement poses an additional challenge for reliable diagnosis.4-7

Deep learning has recently shown significant potential in accurately performing diagnostic tasks on medical imaging.8 Specifically, convolutional neural networks (CNNs) have demonstrated excellent performance on a range of visual tasks, including medical image analysis.9 Moreover, the ability of deep learning systems to augment clinician workflow remains relatively unexplored.10 The development of an accurate deep learning model to help clinicians reliably identify clinically significant aneurysms in CTA has the potential to provide radiologists, neurosurgeons, and other clinicians an easily accessible and immediately applicable diagnostic support tool.

In this study, a deep learning model to automatically detect intracranial aneurysms on CTA and produce segmentations specifying regions of interest was developed to assist clinicians in the interpretation of CTA examinations for the diagnosis of intracranial aneurysms. Sensitivity, specificity, accuracy, time to diagnosis, and interrater agreement for clinicians with and without model augmentation were compared.

Methods

The Stanford University institutional review board approved this study. Owing to the retrospective nature of the study, patient consent or assent was waived. The Standards for Reporting of Diagnostic Accuracy (STARD) reporting guideline was used for the reporting of this study.

Data

A total of 9455 consecutive CTA examination reports of the head or head and neck performed between January 3, 2003, and May 31, 2017, at Stanford University Medical Center were retrospectively reviewed. Examinations with parenchymal hemorrhage, subarachnoid hemorrhage, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, ischemic stroke, nonspecific or chronic vascular findings such as intracranial atherosclerosis or other vasculopathies, surgical clips, coils, catheters, or other surgical hardware were excluded. Examinations of injuries that resulted from trauma or contained images degraded by motion were also excluded on visual review by a board-certified neuroradiologist with 12 years of experience. Examinations with nonruptured clinically significant aneurysms (>3 mm) were included.11

Radiologist Annotations

The reference standard for all examinations in the test set was determined by a board-certified neuroradiologist at a large academic practice with 12 years of experience who determined the presence of aneurysm by review of the original radiology report, double review of the CTA examination, and further confirmation of the aneurysm by diagnostic cerebral angiograms, if available. The neuroradiologist had access to all of the Digital Imaging and Communications in Medicine (DICOM) series, original reports, and clinical histories, as well as previous and follow-up examinations during interpretation to establish the best possible reference standard for the labels. For each of the aneurysm examinations, the radiologist also identified the location of each of the aneurysms. Using the open-source annotation software ITK-SNAP,12 the identified aneurysms were manually segmented on each slice.

Model Development

In this study, we developed a 3-dimensional (3-D) CNN called HeadXNet for segmentation of intracranial aneurysms from CT scans. Neural networks are functions with parameters structured as a sequence of layers to learn different levels of abstraction. Convolutional neural networks are a type of neural network designed to process image data, and 3-D CNNs are particularly well suited to handle sequences of images, or volumes.

HeadXNet is a CNN with an encoder-decoder structure (eFigure 1 in the Supplement), where the encoder maps a volume to an abstract low-resolution encoding, and the decoder expands this encoding to a full-resolution segmentation volume. The segmentation volume is of the same size as the corresponding study and specifies the probability of aneurysm for each voxel, which is the atomic unit of a 3-D volume, analogous to a pixel in a 2-D image. The encoder is adapted from a 50-layer SE-ResNeXt network,13-15 and the decoder is a sequence of 3 × 3 transposed convolutions. Similar to UNet,16 skip connections are used in 3 layers of the encoder to transmit outputs directly to the decoder. The encoder was pretrained on the Kinetics-600 data set,17 a large collection of YouTube videos labeled with human actions; after pretraining the encoder, the final 3 convolutional blocks and the 600-way softmax output layer were removed. In their place, an atrous spatial pyramid pooling18 layer and the decoder were added.

Training Procedure

Subvolumes of 16 slices were randomly sampled from volumes during training. The data set was preprocessed to find contours of the skull, and each volume was cropped around the skull in the axial plane before resizing each slice to 208 × 208 pixels. The slices were then cropped to 192 × 192 pixels (using random crops during training and centered crops during testing), resulting in a final input of size 16 × 192 × 192 per example; the same transformations were applied to the segmentation label. The segmentation output was trained to optimize a weighted combination of the voxelwise binary cross-entropy and Dice losses.19
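The combined objective above can be sketched in plain Python for a flattened voxel grid. The 0.5/0.5 weighting, the smoothing constant, and the function name are our illustrative assumptions; the paper reports only that a weighted combination of voxelwise binary cross-entropy and Dice losses was used.

```python
import math

def bce_dice_loss(probs, labels, bce_weight=0.5, eps=1e-7, smooth=1.0):
    """Weighted sum of voxelwise binary cross-entropy and soft Dice loss.

    probs  - predicted aneurysm probability per voxel (flattened volume)
    labels - ground-truth segmentation (0 or 1 per voxel)
    The weighting and smoothing constants here are assumptions for
    illustration, not values reported by the HeadXNet authors.
    """
    n = len(probs)
    # Voxelwise binary cross-entropy, averaged over the volume.
    bce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
               for p, y in zip(probs, labels)) / n
    # Soft Dice coefficient: overlap between prediction and label masks.
    intersection = sum(p * y for p, y in zip(probs, labels))
    dice = (2 * intersection + smooth) / (sum(probs) + sum(labels) + smooth)
    return bce_weight * bce + (1 - bce_weight) * (1 - dice)

# A confident, mostly correct prediction yields a much smaller loss.
good = bce_dice_loss([0.9, 0.1, 0.95, 0.05], [1, 0, 1, 0])
bad = bce_dice_loss([0.1, 0.9, 0.05, 0.95], [1, 0, 1, 0])
print(good < bad)  # True
```

The Dice term counteracts the extreme foreground/background imbalance of aneurysm voxels, which pure cross-entropy handles poorly.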

Before reaching the model, inputs were clipped to [−300, 700] Hounsfield units, normalized to [−1, 1], and zero-centered. The model was trained on 3 Titan Xp graphics processing units (GPUs) (NVIDIA) using a minibatch of 2 examples per GPU. The parameters of the model were optimized using a stochastic gradient descent optimizer with momentum of 0.9 and a peak learning rate of 0.1 for randomly initialized weights and 0.01 for pretrained weights. The learning rate was scheduled with a linear warm-up from 0 to the peak learning rate for 10 000 iterations, followed by cosine annealing20 over 300 000 iterations. Additionally, the learning rate was fixed at 0 for the first 10 000 iterations for the pretrained encoder. For regularization, L2 weight decay of 0.001 was added to the loss for all trainable parameters and stochastic depth dropout21 was used in the encoder blocks. Standard dropout was not used.
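The intensity preprocessing can be sketched as follows. Mapping the clipped [−300, 700] HU window linearly onto [−1, 1] also zero-centers it around the window midpoint (200 HU); the paper does not spell out the exact scaling constants, so this particular linear map is an assumption.

```python
def preprocess_hu(voxels, lo=-300.0, hi=700.0):
    """Clip raw Hounsfield units to [lo, hi] and rescale to [-1, 1].

    One plausible reading of "clipped, normalized to [-1, 1], and
    zero-centered"; the exact constants are our assumption.
    """
    mid = (lo + hi) / 2.0   # 200 HU for the default window
    half = (hi - lo) / 2.0  # 500 HU
    return [(min(max(v, lo), hi) - mid) / half for v in voxels]

# Air (-1000 HU) clips to the window floor; 700 HU maps to +1.
print(preprocess_hu([-1000.0, 200.0, 700.0]))  # [-1.0, 0.0, 1.0]
```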

To control for class imbalance, 3 methods were used. First, an auxiliary loss was added after the encoder and focal loss was used to encourage larger parameter updates on misclassified positive examples. Second, abnormal training examples were sampled more frequently than normal examples such that abnormal examples made up 30% of training iterations. Third, parameters of the decoder were not updated on training iterations where the segmentation label consisted of purely background (normal) voxels.
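The second strategy, oversampling abnormal examinations to roughly 30% of training iterations, can be sketched like this; the function name and interface are illustrative, not taken from the paper's code.

```python
import random

def sample_training_indices(normal_ids, abnormal_ids, n_iters,
                            abnormal_frac=0.3, seed=0):
    """Draw training examples so abnormal scans fill ~30% of iterations.

    A sketch of the oversampling strategy described above; names and the
    fixed seed are our assumptions for reproducible illustration.
    """
    rng = random.Random(seed)
    picks = []
    for _ in range(n_iters):
        # First pick the class (abnormal with probability abnormal_frac),
        # then pick uniformly within that class.
        pool = abnormal_ids if rng.random() < abnormal_frac else normal_ids
        picks.append(rng.choice(pool))
    return picks

normals = list(range(100))         # toy IDs: 100 normal examinations
abnormals = list(range(100, 120))  # 20 abnormal examinations
picks = sample_training_indices(normals, abnormals, n_iters=10_000)
frac_abnormal = sum(p >= 100 for p in picks) / len(picks)
print(round(frac_abnormal, 2))  # close to 0.30 despite the 5:1 imbalance
```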

To produce a segmentation prediction for the entire volume, the segmentation outputs for sequential 16-slice subvolumes were simply concatenated. If the number of slices was not divisible by 16, the last input volume was padded with 0s and the corresponding output volume was truncated back to the original size.
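The pad-and-truncate stitching described above can be sketched generically; `predict_subvolume` stands in for the trained model and the helper names are ours.

```python
def predict_full_volume(volume, predict_subvolume, chunk=16):
    """Run a fixed-size subvolume model over a whole scan and stitch outputs.

    volume            - list of slices (any per-slice representation)
    predict_subvolume - hypothetical callable taking exactly `chunk` slices
                        and returning one output per slice
    The last chunk is zero-padded on input and truncated on output,
    mirroring the procedure described above.
    """
    n = len(volume)
    out = []
    for start in range(0, n, chunk):
        piece = volume[start:start + chunk]
        pad = chunk - len(piece)
        if pad:  # pad the final subvolume with zero slices
            piece = piece + [0] * pad
        seg = predict_subvolume(piece)
        out.extend(seg[:chunk - pad])  # truncate back to the true length
    return out

# Toy "model": echoes its input, one output per slice.
identity = lambda sub: list(sub)
print(predict_full_volume(list(range(35)), identity) == list(range(35)))  # True
```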

Study Design

We performed a diagnostic accuracy study comparing performance metrics of clinicians with and without model augmentation. Each of the 8 clinicians participating in the study diagnosed a test set of 115 examinations, once with and once without assistance of the model. The clinicians were blinded to the original reports, clinical histories, and follow-up imaging examinations. Using a crossover design, the clinicians were randomly and equally divided into 2 groups. Within each group, examinations were sorted in a fixed random order for half of the group and sorted in reverse order for the other half. Group 1 first read the examinations without model augmentation, and group 2 first read the examinations with model augmentation. After a washout period of 14 days, the augmentation arrangement was reversed such that group 1 performed reads with model augmentation and group 2 read the examinations without model augmentation (Figure 1A).

Clinicians were instructed to assign a binary label for the presence or absence of at least 1 clinically significant aneurysm, defined as having a diameter greater than 3 mm. Clinicians read alone in a diagnostic reading room, all using the same high-definition monitor (3840 × 2160 pixels) displaying CTA examinations on a standard open-source DICOM viewer (Horos).22 Clinicians entered their labels into a data entry software application that automatically logged the time difference between labeling of the previous examination and the current examination.

When reading with model augmentation, clinicians were provided the model’s predictions in the form of region of interest (ROI) segmentations directly overlaid on top of CTA examinations. To ensure an image display interface that was familiar to all clinicians, the model’s predictions were presented as ROIs in a standard DICOM viewing software. At every voxel where the model predicted a probability greater than 0.5, readers saw a semiopaque red overlay on the axial, sagittal, and coronal series (Figure 1C). Readers had access to the ROIs immediately on loading the examinations, and the ROIs could be toggled off to reveal the unaltered CTA images (Figure 1B). The red overlays were the only indication given that a particular CTA examination had been predicted by the model to contain an aneurysm. Readers had the option to take these model results into consideration or to disregard them based on clinical judgment. When readers performed diagnoses without augmentation, no ROIs were present on any of the examinations. Otherwise, the diagnostic tools were identical for augmented and nonaugmented reads.

 

Statistical Analysis

On the binary task of determining whether an examination contained an aneurysm, sensitivity, specificity, and accuracy were used to assess the performance of clinicians with and without model augmentation. Sensitivity denotes the number of true-positive results over total aneurysm-positive cases, specificity denotes the number of true-negative results over total aneurysm-negative cases, and accuracy denotes the number of true-positive and true-negative results over all test cases. The microaverage of these statistics across all clinicians was also computed by measuring each statistic pertaining to the total number of true-positive, false-negative, and false-positive results. In addition, to convert the model’s segmentation output into a binary prediction, a prediction was considered positive if the model predicted at least 1 voxel as belonging to an aneurysm and negative otherwise. The 95% Wilson score confidence intervals were used to assess the variability in the estimates for sensitivity, specificity, and accuracy.23
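The Wilson score interval used for these estimates is a standard formula and can be computed directly; the function name is ours.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion.

    Unlike the naive normal interval, the Wilson interval stays inside
    [0, 1] and behaves well for proportions near 0 or 1.
    """
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 50 true positives out of 100 aneurysm-positive cases.
lo, hi = wilson_ci(50, 100)
print(round(lo, 3), round(hi, 3))  # roughly 0.404 0.596
```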

To assess whether the clinicians achieved significant increases in performance with model augmentation, a 1-tailed t test was performed on the differences in sensitivity, specificity, and accuracy across all 8 clinicians. To determine the robustness of the findings and whether results were due to inclusion of the resident radiologist and neurosurgeon, we performed a sensitivity analysis: we computed the t test on the differences in sensitivity, specificity, and accuracy across board-certified radiologists only.

The average time to diagnosis for the clinicians with and without augmentation was computed as the difference between the mean entry times into the spreadsheet of consecutive diagnoses; 95% t score confidence intervals were used to assess the variability in the estimates. To account for interruptions in the clinical read or time logging errors, the 5 longest and 5 shortest times to diagnosis for each clinician in each reading were excluded. To assess whether model augmentation significantly decreased the time to diagnosis, a 1-tailed t test was performed on the difference in average time with and without augmentation across all 8 clinicians.

The interrater agreement of clinicians and for the radiologist subset was computed using the exact Fleiss κ.24 To assess whether model augmentation increased interrater agreement, a 1-tailed permutation test was performed on the difference between the interrater agreement of clinicians on the test set with and without augmentation. The permutation procedure consisted of randomly swapping clinician annotations with and without augmentation so that a random subset of the test set that had previously been labeled as read with augmentation was now labeled as being read without augmentation, and vice versa; the exact Fleiss κ values (and their difference) were computed on the test set with permuted labels. This permutation procedure was repeated 10 000 times to generate the null distribution of the Fleiss κ difference under the null hypothesis that interrater agreement with augmentation is not higher than without augmentation, and the unadjusted P value was calculated as the proportion of permuted Fleiss κ differences that were higher than the observed Fleiss κ difference.
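The case-by-case label-swapping scheme can be sketched generically. For brevity the demo below plugs in a simple all-readers-agree proportion as the agreement statistic rather than the exact Fleiss κ the paper used; the statistic is pluggable, and all names and toy data are ours.

```python
import random

def permutation_p(stat, reads_a, reads_b, n_perm=10_000, seed=0):
    """One-tailed paired permutation p value for stat(reads_a) - stat(reads_b).

    reads_a / reads_b hold, per test case, the reader annotations made
    under two conditions (e.g., with and without augmentation). Randomly
    swapping the two conditions case by case builds the null distribution
    of the difference, mirroring the procedure described above.
    """
    rng = random.Random(seed)
    observed = stat(reads_a) - stat(reads_b)
    hits = 0
    for _ in range(n_perm):
        pa, pb = [], []
        for a, b in zip(reads_a, reads_b):
            if rng.random() < 0.5:  # swap this case's condition labels
                a, b = b, a
            pa.append(a)
            pb.append(b)
        if stat(pa) - stat(pb) >= observed:
            hits += 1
    return hits / n_perm

def full_agreement(cases):
    """Fraction of cases on which every reader gave the same label."""
    return sum(len(set(c)) == 1 for c in cases) / len(cases)

# Toy data: 3 readers' binary labels on 5 cases, two conditions.
with_aug = [(1, 1, 1), (0, 0, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1)]
without = [(1, 0, 1), (0, 0, 1), (1, 1, 1), (0, 1, 0), (1, 1, 0)]
print(permutation_p(full_agreement, with_aug, without))
```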

To control the familywise error rate, the Benjamini-Hochberg correction was applied to account for multiple hypothesis testing; a Benjamini-Hochberg–adjusted P ≤ .05 indicated statistical significance. All tests were 1-tailed.25
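The Benjamini-Hochberg step-up adjustment is standard and can be sketched directly; the function name is ours.

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p values (step-up procedure).

    Each p value, taken in sorted order, is scaled by n/rank; a running
    minimum from the largest rank downward enforces monotonicity of the
    adjusted values.
    """
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):  # largest p value first
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Adjusted values are returned in the original order of the inputs.
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.002]))
```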

Results

The data set contained 818 examinations from 662 unique patients with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms (Figure 2). Of the 328 aneurysm cases, 20 cases from 15 unique patients contained 2 or more aneurysms. One hundred forty-eight aneurysm cases contained aneurysms between 3 mm and 7 mm, 108 cases had aneurysms between 7 mm and 12 mm, 61 cases had aneurysms between 12 mm and 24 mm, and 11 cases had aneurysms 24 mm or greater. The location of the aneurysms varied according to the following distribution: 99 were located in the internal carotid artery, 78 were in the middle cerebral artery, 50 were cavernous internal carotid artery aneurysms, 44 were basilar tip aneurysms, 41 were in the anterior communicating artery, 18 were in the posterior communicating artery, 16 were in the vertebrobasilar system, and 12 were in the anterior cerebral artery. All examinations were performed either on a GE Discovery, GE LightSpeed, GE Revolution, Siemens Definition, Siemens Sensation, or a Siemens Force scanner, with slice thicknesses of 1.0 mm or 1.25 mm, using standard clinical protocols for head angiogram or head/neck angiogram. There was no difference between the protocols or slice thicknesses between the aneurysm and nonaneurysm examinations. For this study, axial series were extracted from each examination and a segmentation label was produced on every axial slice containing an aneurysm. The number of images per examination ranged from 113 to 802 (mean [SD], 373 [157]).

The examinations were split into a training set of 611 examinations (494 patients; mean [SD] age, 55.8 [18.1] years; 372 [60.9%] female) used to train the model, a development set of 92 examinations (86 patients; mean [SD] age, 61.6 [16.7] years; 59 [64.1%] female) used for model selection, and a test set of 115 examinations (82 patients; mean [SD] age, 57.8 [18.3] years; 74 [64.4%] female) to evaluate the performance of the clinicians when augmented with the model (Figure 2).

Using stratified random sampling, the development and test sets were formed to include 50% aneurysm examinations and 50% normal examinations; the remaining examinations composed the training set, of which 36.5% were aneurysm examinations. Forty-three patients had multiple examinations in the data set due to examinations performed for follow-up of the aneurysm. To account for these repeat patients, examinations were split so that there was no patient overlap between the different sets. Figure 2 contains pathology and patient demographic characteristics for each set.

A total of 8 clinicians, including 6 board-certified practicing radiologists, 1 practicing neurosurgeon, and 1 radiology resident, participated as readers in the study. The radiologists’ years of experience ranged from 3 to 12 years, the neurosurgeon had 2 years of experience as attending, and the resident was in the second year of training at Stanford University Medical Center. Groups 1 and 2 consisted of 3 radiologists each; the resident and neurosurgeon were both in group 1. None of the clinicians were involved in establishing the reference standard for the examinations.

Without augmentation, clinicians achieved a microaveraged sensitivity of 0.831 (95% CI, 0.794-0.862), specificity of 0.960 (95% CI, 0.937-0.974), and an accuracy of 0.893 (95% CI, 0.872-0.912). With augmentation, the clinicians achieved a microaveraged sensitivity of 0.890 (95% CI, 0.858-0.915), specificity of 0.975 (95% CI, 0.957-0.986), and an accuracy of 0.932 (95% CI, 0.913-0.946). The underlying model had a sensitivity of 0.949 (95% CI, 0.861-0.983), specificity of 0.661 (95% CI, 0.530-0.771), and accuracy of 0.809 (95% CI, 0.727-0.870). The performances of the model, individual clinicians, and their microaverages are reported in eTable 1 in the Supplement.

 

With augmentation, there was a statistically significant increase in the mean sensitivity (0.059; 95% CI, 0.028-0.091; adjusted P = .01) and mean accuracy (0.038; 95% CI, 0.014-0.062; adjusted P = .02) of the clinicians as a group. There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16). Performance improvements across clinicians are detailed in the Table, and individual clinician improvement in Figure 3.

Individual performances with and without model augmentation are shown in eTable 1 in the Supplement. The sensitivity analysis confirmed that even among board-certified radiologists, there was a statistically significant increase in mean sensitivity (0.059; 95% CI, 0.013-0.105; adjusted P = .04) and accuracy (0.036; 95% CI, 0.001-0.072; adjusted P = .05). Performance improvements of board-certified radiologists as a group are shown in eTable 2 in the Supplement.

 

The mean diagnosis time per examination without augmentation microaveraged across clinicians was 57.04 seconds (95% CI, 54.58-59.50 seconds). The times for individual clinicians are detailed in eTable 3 in the Supplement, and individual time changes are shown in eFigure 2 in the Supplement.

 

With augmentation, there was no statistically significant decrease in mean diagnosis time (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19). The model took a mean of 7.58 seconds (95% CI, 6.92-8.25 seconds) to process an examination and output its segmentation map.

Confusion matrices, which are tables reporting true- and false-positive results and true- and false-negative results of each clinician with and without model augmentation, are shown in eTable 4 in the Supplement.

There was a statistically significant increase of 0.060 (adjusted P = .05) in the interrater agreement among the clinicians, with an exact Fleiss κ of 0.799 without augmentation and 0.859 with augmentation. For the board-certified radiologists, there was an increase of 0.063 in their interrater agreement, with an exact Fleiss κ of 0.783 without augmentation and 0.847 with augmentation.

Discussion

In this study, the ability of a deep learning model to augment clinician performance in detecting cerebral aneurysms using CTA was investigated with a crossover study design. With model augmentation, clinicians’ sensitivity, accuracy, and interrater agreement significantly increased. There was no statistical change in specificity and time to diagnosis.

Given the potential catastrophic outcome of a missed aneurysm at risk of rupture, an automated detection tool that reliably detects aneurysms and enhances clinicians’ performance is highly desirable. Aneurysm rupture is fatal in 40% of patients and leads to irreversible neurological disability in two-thirds of those who survive; therefore, accurate and timely detection is of paramount importance. In addition to significantly improving accuracy across clinicians while interpreting CTA examinations, an automated aneurysm detection tool, such as the one presented in this study, could also be used to prioritize workflow so that those examinations more likely to be positive could receive timely expert review, potentially leading to a shorter time to treatment and more favorable outcomes.

The significant variability among clinicians in the diagnosis of aneurysms has been well documented and is typically attributed to lack of experience or subspecialty neuroradiology training, complex neurovascular anatomy, or the labor-intensive nature of identifying aneurysms. Studies have shown that interrater agreement of CTA-based aneurysm detection is highly variable, with interrater reliability metrics ranging from 0.37 to 0.85,6,7,26-28 and performance levels that vary depending on aneurysm size and individual radiologist experience.4,6 In addition to significantly increasing sensitivity and accuracy, augmenting clinicians with the model also significantly improved interrater reliability from 0.799 to 0.859.
This implies that augmenting clinicians with varying levels of experience and specialties with models could lead to more accurate and more consistent radiological interpretations. Currently, tools to improve clinician aneurysm detection on CTA include bone subtraction,29 as well as 3-D rendering of intracranial vasculature,30-32 which rely on application of contrast threshold settings to better delineate cerebral vasculature and create a 3-D–rendered reconstruction to assist aneurysm detection. However, using these tools is labor- and time-intensive for clinicians; in some institutions, this process is outsourced to a 3-D lab at additional cost. The tool developed in this study, integrated directly in a standard DICOM viewer, produces a segmentation map on a new examination in only a few seconds. If integrated into the standard workflow, this diagnostic tool could substantially decrease both cost and time to diagnosis, potentially leading to more efficient treatment and more favorable patient outcomes.

Deep learning has recently shown success in various clinical image-based recognition tasks. In particular, studies have shown strong performance of 2-D CNNs in detecting intracranial hemorrhage and other acute brain findings, such as mass effect or skull fractures, on CT head examinations.33-36 Recently, one study10 examined the potential role for deep learning in magnetic resonance angiogram–based detection of cerebral aneurysms, and another study37 showed that providing deep learning model predictions to clinicians when interpreting knee magnetic resonance studies increased specificity in detecting anterior cruciate ligament tears. To our knowledge, prior to this study, deep learning had not been applied to CTA, which is the first-line imaging modality for detecting cerebral aneurysms. Our results demonstrate that deep learning segmentation models may produce dependable and interpretable predictions that augment clinicians and improve their diagnostic performance.
The model implemented and tested in this study significantly increased the sensitivity, accuracy, and interrater reliability of clinicians with varied experience and specialties in detecting cerebral aneurysms on CTA.
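Proportion metrics such as the sensitivity and accuracy discussed here are conventionally reported with Wilson score confidence intervals (the Wilson 1927 method is reference 23 below). A minimal sketch with hypothetical counts, not figures from the study:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (e.g., sensitivity).

    z=1.96 gives an approximate 95% confidence interval.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 90 of 100 true aneurysms detected.
lo, hi = wilson_interval(90, 100)
print(f"sensitivity 0.90, 95% CI ({lo:.3f}, {hi:.3f})")  # → (0.826, 0.945)
```

Unlike the naive normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near 0 or 1, which is why it is a common choice for sensitivity and specificity estimates.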

Limitations

This study has limitations. First, because the study focused only on unruptured aneurysms, model performance has not been investigated for aneurysm detection after rupture, for lesion recurrence after coiling or surgical clipping, or for aneurysms associated with arteriovenous malformations. Second, because examinations containing surgical hardware or devices were excluded, model performance in their presence is unknown. Third, in a clinical environment CTA is typically used to evaluate many types of vascular disease, not just aneurysms; the high prevalence of aneurysms in the test set and the clinicians' binary task could therefore have introduced interpretation bias. Finally, this study was performed on data from a single tertiary care academic institution and may not reflect performance on data from other institutions with different scanners and imaging protocols, such as different slice thicknesses.

Conclusions

A deep learning model was developed to automatically detect clinically significant intracranial aneurysms on CTA. We found that augmentation with this model significantly improved clinicians' sensitivity, accuracy, and interrater reliability. Future work should investigate the performance of this model prospectively and on data from other institutions and hospitals.

Article Information:

Accepted for Publication: April 23, 2019.

Published: June 7, 2019. doi:10.1001/jamanetworkopen.2019.5600

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Park A et al. JAMA Network Open.

Corresponding Author: Kristen W. Yeom, MD, School of Medicine, Department of Radiology, Stanford University, 725 Welch Rd, Ste G516, Palo Alto, CA 94304 (kyeom@stanford.edu).

Author Contributions: Ms Park and Dr Yeom had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Ms Park and Messrs Chute and Rajpurkar are co–first authors. Drs Ng and Yeom are co–senior authors.

Concept and design: Park, Chute, Rajpurkar, Lou, Shpanskaya, Ni, Basu, Lungren, Ng, Yeom.

Acquisition, analysis, or interpretation of data: Park, Chute, Rajpurkar, Lou, Ball, Shpanskaya, Jabarkheel, Kim, McKenna, Tseng, Ni, Wishah, Wittber, Hong, Wilson, Halabi, Patel, Lungren, Yeom.

Drafting of the manuscript: Park, Chute, Rajpurkar, Lou, Ball, Jabarkheel, Kim, McKenna, Hong, Halabi, Lungren, Yeom.

Critical revision of the manuscript for important intellectual content: Park, Chute, Rajpurkar, Ball, Shpanskaya, Jabarkheel, Kim, Tseng, Ni, Wishah, Wittber, Wilson, Basu, Patel, Lungren, Ng, Yeom.

Statistical analysis: Park, Chute, Rajpurkar, Lou, Ball, Lungren.

Administrative, technical, or material support: Park, Chute, Shpanskaya, Jabarkheel, Kim, McKenna, Tseng, Wittber, Hong, Wilson, Lungren, Ng, Yeom.

Supervision: Park, Ball, Tseng, Halabi, Basu, Lungren, Ng, Yeom.

Conflict of Interest Disclosures: Drs Wishah and Patel reported grants from GE and Siemens outside the submitted work. Dr Patel reported participation in the speakers bureau for GE. Dr Lungren reported personal fees from Nines Inc outside the submitted work. Dr Yeom reported grants from Philips outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by National Institutes of Health National Center for Advancing Translational Science Clinical and Translational Science Award UL1TR001085.

Role of the Funder/Sponsor: The National Institutes of Health had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. Jaja BN, Cusimano MD, Etminan N, et al. Clinical prediction models for aneurysmal subarachnoid hemorrhage: a systematic review. Neurocrit Care. 2013;18(1):143-153. doi:10.1007/s12028-012-9792-z
2. Turan N, Heider RA, Roy AK, et al. Current perspectives in imaging modalities for the assessment of unruptured intracranial aneurysms: a comparative analysis and review. World Neurosurg. 2018;113:280-292. doi:10.1016/j.wneu.2018.01.054
3. Yoon NK, McNally S, Taussky P, Park MS. Imaging of cerebral aneurysms: a clinical perspective. Neurovasc Imaging. 2016;2(1):6. doi:10.1186/s40809-016-0016-3
4. Jayaraman MV, Mayo-Smith WW, Tung GA, et al. Detection of intracranial aneurysms: multi-detector row CT angiography compared with DSA. Radiology. 2004;230(2):510-518. doi:10.1148/radiol.2302021465
5. Bharatha A, Yeung R, Durant D, et al. Comparison of computed tomography angiography with digital subtraction angiography in the assessment of clipped intracranial aneurysms. J Comput Assist Tomogr. 2010;34(3):440-445. doi:10.1097/RCT.0b013e3181d27393
6. Lubicz B, Levivier M, François O, et al. Sixty-four-row multisection CT angiography for detection and evaluation of ruptured intracranial aneurysms: interobserver and intertechnique reproducibility. AJNR Am J Neuroradiol. 2007;28(10):1949-1955. doi:10.3174/ajnr.A0699
7. White PM, Teasdale EM, Wardlaw JM, Easton V. Intracranial aneurysms: CT angiography and MR angiography for detection: prospective blinded comparison in a large patient cohort. Radiology. 2001;219(3):739-749. doi:10.1148/radiology.219.3.r01ma16739
8. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol. 2017;10(3):257-273. doi:10.1007/s12194-017-0406-5
9. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. doi:10.1371/journal.pmed.1002686
10. Bien N, Rajpurkar P, Ball RL, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med. 2018;15(11):e1002699. doi:10.1371/journal.pmed.1002699
11. Morita A, Kirino T, Hashi K, et al; UCAS Japan Investigators. The natural course of unruptured cerebral aneurysms in a Japanese cohort. N Engl J Med. 2012;366(26):2474-2482. doi:10.1056/NEJMoa1113260
12. Yushkevich PA, Piven J, Hazlett HC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006;31(3):1116-1128. doi:10.1016/j.neuroimage.2006.01.015
13. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Paper presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 27, 2016; Las Vegas, NV.
14. Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 25, 2017; Honolulu, HI.
15. Hu J, Shen L, Sun G. Squeeze-and-excitation networks. Paper presented at: 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 21, 2018; Salt Lake City, UT.
16. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). Basel, Switzerland: Springer International; 2015:234-241.
17. Carreira J, Zisserman A. Quo vadis, action recognition? A new model and the Kinetics dataset. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 25, 2017; Honolulu, HI.
18. Chen L-C, Papandreou G, Schroff F, Adam H. Rethinking atrous convolution for semantic image segmentation. https://arxiv.org/abs/1706.05587. Published June 17, 2017. Accessed May 7, 2019.
19. Milletari F, Navab N, Ahmadi S-A. V-Net: fully convolutional neural networks for volumetric medical image segmentation. Paper presented at: 2016 Fourth International Conference on 3D Vision (3DV); October 26-28, 2016; Stanford, CA.
20. Loshchilov I, Hutter F. SGDR: stochastic gradient descent with warm restarts. Paper presented at: Fifth International Conference on Learning Representations (ICLR); April 24-26, 2017; Toulon, France.
21. Huang G, Sun Y, Liu Z, Sedra D, Weinberger KQ. Deep networks with stochastic depth. In: European Conference on Computer Vision (ECCV). Basel, Switzerland: Springer International; 2016:646-661.
22. Horos. https://horosproject.org. Accessed May 1, 2019.
23. Wilson EB. Probable inference, the law of succession, and statistical inference. J Am Stat Assoc. 1927;22(158):209-212. doi:10.1080/01621459.1927.10502953
24. Fleiss JL, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973;33(3):613-619. doi:10.1177/001316447303300309
25. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Series B Stat Methodol. 1995;57(1):289-300.
26. Maldaner N, Stienen MN, Bijlenga P, et al. Interrater agreement in the radiologic characterization of ruptured intracranial aneurysms based on computed tomography angiography. World Neurosurg. 2017;103:876-882.e1. doi:10.1016/j.wneu.2017.04.131
27. Wang Y, Gao X, Lu A, et al. Residual aneurysm after metal coils treatment detected by spectral CT. Quant Imaging Med Surg. 2012;2(2):137-138.
28. Yoon YW, Park S, Lee SH, et al. Post-traumatic myocardial infarction complicated with left ventricular aneurysm and pericardial effusion. J Trauma. 2007;63(3):E73-E75. doi:10.1097/01.ta.0000246896.89156.70
29. Tomandl BF, Hammen T, Klotz E, Ditt H, Stemper B, Lell M. Bone-subtraction CT angiography for the evaluation of intracranial aneurysms. AJNR Am J Neuroradiol. 2006;27(1):55-59.
30. Shi W-Y, Li Y-D, Li M-H, et al. 3D rotational angiography with volume rendering: the utility in the detection of intracranial aneurysms. Neurol India. 2010;58(6):908-913. doi:10.4103/0028-3886.73743