Archive for the ‘Image Processing/Computing’ Category

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

   In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for analysis of medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as have issues arising from multi-center trials.  The authors of this perspective article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.


Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable

  1.  Interpretable AI: A typical AI/ML task requires constructing an algorithm that maps vector inputs to an output related to an outcome (such as diagnosing a cardiac event from an image).  Generally the algorithm has to be trained on past data with known parameters.  When an algorithm is called interpretable, this means that it uses a transparent or “white box” function that is easily understandable. An example might be a linear function in which the relationships among parameters are simple rather than complex.  Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML:  This type of algorithm depends on multiple complex parameters: it takes a first round of predictions from a “black box” model, then uses a second algorithm built from an interpretable function to approximate the outputs of the first model.  The second algorithm is trained not on the original data but on the first model’s predictions, over multiple iterations of computing.  This method can be more accurate, or deemed more reliable, in prediction, but it is very complex and not easily understood.  Many medical devices that use an AI/ML algorithm are of this type; examples include deep learning and neural networks.

The purpose of both methodologies is to deal with the problem of opacity: predictions that come from a black box undermine trust in the AI.
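The surrogate-model idea behind explainable AI/ML can be sketched in a few lines. In this toy example the “black box” is a stand-in nonlinear function (a real system would be a trained neural net), and an interpretable linear model is fit to the black box’s predictions rather than to the original data; all names and data are illustrative.

```python
def black_box(x):
    """Stand-in for a complex trained model (e.g. a neural net)."""
    return 0.5 * x ** 2 + x  # nonlinear, hard to read off directly


def fit_linear_surrogate(xs, ys):
    """Closed-form ordinary least squares for y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    b = my - a * mx
    return a, b


# The surrogate is trained on the black box's *predictions*, not the raw data.
xs = [k / 10 for k in range(-20, 21)]
preds = [black_box(x) for x in xs]
slope, intercept = fit_linear_surrogate(xs, preds)
print(f"surrogate: y ~ {slope:.2f}*x + {intercept:.2f}")
```

The linear surrogate is easy to read (one slope, one intercept) but only approximates the black box near the sampled region, which is exactly the trade-off the authors describe.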

For a deeper understanding of these two types of algorithms, see https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post by Jonathan Johnson:

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.
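One concrete way to probe a hypothesized cause-and-effect relationship like those above is to quantify it. A minimal sketch using a hand-rolled Pearson correlation; the noise-exposure and hearing-loss numbers are invented purely for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical readings: daily noise exposure (dB) vs. measured hearing loss (%)
noise_exposure = [60, 70, 80, 90, 100]
hearing_loss = [5, 9, 15, 22, 30]
r = pearson(noise_exposure, hearing_loss)
print(f"correlation r = {r:.2f}")  # near +1: consistent with the hypothesis
```

Correlation alone does not establish causation, of course, but putting a number on the relationship is the first step toward the testable, verifiable internal models the post describes.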

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric for an overall grade, explainability shows how significantly each of the parameters (all the blue nodes) contributes to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
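A breakdown like the one above can be produced by normalizing raw importance scores so they sum to 100%. A minimal sketch; the raw scores here are invented to reproduce the article’s percentages, and real pipelines would derive them from something like permutation importance:

```python
# Raw importance scores, invented so the percentages match the list above.
raw_importance = {
    "career category": 0.8,
    "years smoking": 0.7,
    "age": 0.3,
    "BMI score": 0.2,
}

total = sum(raw_importance.values())
percent = {name: 100 * score / total for name, score in raw_importance.items()}

# Print features from most to least important.
for name, pct in sorted(percent.items(), key=lambda item: -item[1]):
    print(f"{name}: {pct:.0f}%")
```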

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers to engineers is it creates a black box of parameters, like fake additional data points, that allow a model to base its decisions against. These fake data points go unknown to the engineer. The black box, or hidden layers, allow a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.
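Under that definition, a crude first pass at node importance is to compare the magnitudes of each hidden node’s outgoing signal. A toy sketch with invented weights; real attribution methods (permutation- or gradient-based) are far more robust than raw weight magnitudes:

```python
# Outgoing weight from each hidden node to the output; values are invented.
outgoing = {"node_a": 2.5, "node_b": -0.1, "node_c": 1.2}

# A strong signal in either direction counts, so rank by absolute magnitude.
total = sum(abs(w) for w in outgoing.values())
importance = {node: abs(w) / total for node, w in outgoing.items()}

ranked = sorted(importance, key=importance.get, reverse=True)
print("most significant:", ranked[0])
print("least significant:", ranked[-1])
```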

So how does the choice between these two types of algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example by inputting their own test data
  • More interaction between users and model developers
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package


Read Full Post »

Cryo-EM revealed how the D614G mutation changes SARS-CoV-2 spike protein structure.

Reporter: Dr. Premalata Pati, Ph.D., Postdoc

SARS-CoV-2, the virus that causes COVID-19, has had a major impact on human health globally: infecting a massive number of people, around 136,046,262 (Johns Hopkins University); causing severe disease and associated long-term health sequelae; resulting in death and excess mortality, especially among older and vulnerable populations; disrupting routine healthcare services as well as travel, trade, education, and many other societal functions; and, more broadly, harming people’s physical and mental health.

Urgent questions now need answers: What allows the variants of SARS-CoV-2 first detected in the UK, South Africa, and Brazil to spread so quickly? How can current COVID-19 vaccines better protect against them?

Scientists from Harvard Medical School and Boston Children’s Hospital have helped answer these urgent questions. The team reported its findings in the journal Science in a paper entitled “Structural impact on SARS-CoV-2 spike protein by D614G substitution.” The SARS-CoV-2 virus has evolved rapidly over the past few months, especially in the spike (S) protein region, where virologists have observed the greatest number of mutations.

Bing Chen, HMS professor of pediatrics at Boston Children’s, and colleagues analyzed the structural changes introduced by the D614G mutation, which is carried by all three variants. They resolved the structure of the coronavirus spike protein down to the atomic level and thereby revealed why these variants spread so quickly.

This model shows the structure of the spike protein in its closed configuration, in its original D614 form (left) and its mutant form (G614). In the mutant spike protein, the 630 loop (in red) stabilizes the spike, preventing it from flipping open prematurely and rendering SARS-CoV-2 more infectious.

Fig. 1. Cryo-EM structures of the full-length SARS-CoV-2 S protein carrying G614.

(A) Three structures of the G614 S trimer, representing a closed, three RBD-down conformation, an RBD-intermediate conformation and a one RBD-up conformation, were modeled based on corresponding cryo-EM density maps at 3.1-3.5Å resolution. Three protomers (a, b, c) are colored in red, blue and green, respectively. RBD locations are indicated. (B) Top views of superposition of three structures of the G614 S in (A) in ribbon representation with the structure of the prefusion trimer of the D614 S (PDB ID: 6XR8), shown in yellow. NTD and RBD of each protomer are indicated. Side views of the superposition are shown in fig. S8.

IMAGE SOURCE: Bing Chen, Ph.D., Boston Children’s Hospital, https://science.sciencemag.org/content/early/2021/03/16/science.abf2303

The work

The mutant spikes were imaged by cryo-electron microscopy (cryo-EM), which has resolution down to the atomic level. The team found that the D614G mutation (substitution of a single amino acid “letter” in the genetic code for the spike protein) makes the spike more stable than in the original SARS-CoV-2 virus. As a result, more functional spikes are available to bind to our cells’ ACE2 receptors, making the virus more contagious.

Fig. 2. Cryo-EM revealed how the D614G mutation changes SARS-CoV-2 spike protein structure.

IMAGE SOURCE:  Zhang J, et al., Science

“Say the original virus has 100 spikes,” Chen explained. “Because of the shape instability, you may have just 50 percent of them functional. In the G614 variants, you may have 90 percent that are functional. So even though they don’t bind as well, the chances are greater and you will have an infection.”
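Chen’s back-of-the-envelope reasoning can be made explicit. In this sketch the fraction-functional numbers come from the quote, while the per-spike binding strengths are hypothetical placeholders chosen only to illustrate the trade-off:

```python
TOTAL_SPIKES = 100  # from Chen's example


def effective_binding(fraction_functional, per_spike_binding):
    """Functional spikes times per-spike binding strength (arbitrary units)."""
    return TOTAL_SPIKES * fraction_functional * per_spike_binding


d614 = effective_binding(0.50, 1.0)  # original: 50% functional (per the quote)
g614 = effective_binding(0.90, 0.8)  # mutant: 90% functional, weaker binding (assumed)
print(f"D614 effective binding: {d614:.0f}")  # 50
print(f"G614 effective binding: {g614:.0f}")  # 72: higher despite weaker binding
```

Even with the assumed 20% per-spike penalty, the larger pool of functional spikes wins out, which is the intuition behind the mutant’s greater infectivity.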

Forthcoming directions by Bing Chen and Team

The findings suggest that currently approved COVID-19 vaccines, and any vaccines in the works, should include the genetic code for this mutation. Chen said:

“Since most of the vaccines so far—including the Moderna, Pfizer–BioNTech, Johnson & Johnson, and AstraZeneca vaccines—are based on the original spike protein, adding the D614G mutation could make the vaccines better able to elicit protective neutralizing antibodies against the viral variants.”

Chen proposes that redesigned vaccines incorporate the code for this mutant spike protein. He believes the more stable spike shape should make any vaccine based on the spike more likely to elicit protective antibodies. Chen also has his sights set on therapeutics. He and his colleagues are further applying structural biology to better understand how SARS-CoV-2 binds to the ACE2 receptor. That could point the way to drugs that would block the virus from gaining entry to our cells.

In January, the team showed that a structurally engineered “decoy” ACE2 protein binds to SARS-CoV-2 200 times more strongly than the body’s own ACE2. The decoy potently inhibited the virus in cell culture, suggesting it could be an anti-COVID-19 treatment. Chen is now working to advance this research into animal models.

Main Source:


Substitution for aspartic acid by glycine at position 614 in the spike (S) protein of severe acute respiratory syndrome coronavirus 2 appears to facilitate rapid viral spread. The G614 strain and its recent variants are now the dominant circulating forms. We report here cryo-EM structures of a full-length G614 S trimer, which adopts three distinct prefusion conformations differing primarily by the position of one receptor-binding domain. A loop disordered in the D614 S trimer wedges between domains within a protomer in the G614 spike. This added interaction appears to prevent premature dissociation of the G614 trimer, effectively increasing the number of functional spikes and enhancing infectivity, and to modulate structural rearrangements for membrane fusion. These findings extend our understanding of viral entry and suggest an improved immunogen for vaccine development.


Other Related Articles published in this Open Access Online Scientific Journal include the following:

COVID-19-vaccine rollout risks and challenges

Reporter : Irina Robu, PhD


COVID-19 Sequel: Neurological Impact of Social isolation been linked to poorer physical and mental health

Reporter: Aviva Lev-Ari, PhD, RN


Comparing COVID-19 Vaccine Schedule Combinations, or “Com-COV” – First-of-its-Kind Study will explore the Impact of using eight different Combinations of Doses and Dosing Intervals for Different COVID-19 Vaccines

Reporter: Aviva Lev-Ari, PhD, RN


COVID-19 T-cell immune response map, immunoSEQ T-MAP COVID for research of T-cell response to SARS-CoV-2 infection

Reporter: Aviva Lev-Ari, PhD, RN


Tiny biologic drug to fight COVID-19 show promise in animal models

Reporter : Irina Robu, PhD


Miniproteins against the COVID-19 Spike protein may be therapeutic

Reporter: Stephen J. Williams, PhD


Read Full Post »

Rare earth-doped nanoparticles applications in biological imaging and tumor treatment

Reporter: Irina Robu, PhD

Bioimaging aims to interfere as little as possible with life processes and can be used to gain information on the 3D structure of a specimen from the outside. It ranges from the observation of subcellular structures and whole cells, through tissues, up to entire multicellular organisms. The technology uses light, fluorescence, ultrasound, X-rays, and magnetic resonance as imaging sources. The most common modality is fluorescence imaging, which is used to monitor the dynamic interaction between drug molecules and tumor cells and to follow real-time dynamic processes in biological tissues.

Researchers from the Xi’an Institute of Optics and Precision Mechanics (XIOPM) of the Chinese Academy of Sciences (CAS) described recent progress on rare earth-doped nanoparticles (RE-doped NPs) in bioimaging and tumor treatment. It is well known that producing small nanoparticles with good dispersion and exploitable optical properties is highly challenging. According to the researchers, RE-doped NPs can be endowed with additional capabilities, such as water solubility, biocompatibility, drug-loading ability, and the ability to target different tumors, by surface functionalization. Their luminescent properties and structural design were also reviewed.

According to the Chinese researchers, in applying RE-doped NPs to the diagnosis and treatment of tumors, the first goal is to improve water solubility and biocompatibility.  The second is to give the nanoparticles the ability to target tumors through surface functionalization. Lastly, biocompatible, water-soluble, tumor-targeting NPs can be used as carriers to load drugs for the treatment of tumor cells. The review also emphasizes recent progress on improving the fluorescence intensity of NPs, surface modification, and tumor-targeted diagnosis and treatment.



Read Full Post »

Powerful AI Tools Being Developed for the COVID-19 Fight

Curator: Stephen J. Williams, Ph.D.


Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

IBM Releases Novel AI-Powered Technologies to Help Health and Research Community Accelerate the Discovery of Medical Insights and Treatments for COVID-19

April 3, 2020 | Written by: 

IBM Research has been actively developing new cloud and AI-powered technologies that can help researchers across a variety of scientific disciplines accelerate the process of discovery. As the COVID-19 pandemic unfolds, we continue to ask how these technologies and our scientific knowledge can help in the global battle against coronavirus.

Today, we are making available multiple novel, free resources from across IBM to help healthcare researchers, doctors and scientists around the world accelerate COVID-19 drug discovery: from gathering insights, to applying the latest virus genomic information and identifying potential targets for treatments, to creating new drug molecule candidates.

Though some of the resources are still in exploratory stages, IBM is making them available to qualifying researchers at no charge to aid the international scientific investigation of COVID-19.

Today’s announcement follows our recent leadership in launching the U.S. COVID-19 High Performance Computing Consortium, which is harnessing massive computing power in the effort to help confront the coronavirus.

Streamlining the Search for Information

Healthcare agencies and governments around the world have quickly amassed medical and other relevant data about the pandemic. And, there are already vast troves of medical research that could prove relevant to COVID-19. Yet, as with any large volume of disparate data sources, it is difficult to efficiently aggregate and analyze that data in ways that can yield scientific insights.

To help researchers access structured and unstructured data quickly, we are offering a cloud-based AI research resource that has been trained on a corpus of thousands of scientific papers contained in the COVID-19 Open Research Dataset (CORD-19), prepared by the White House and a coalition of research groups, and licensed databases from DrugBank, ClinicalTrials.gov and GenBank. This tool uses our advanced AI and allows researchers to pose specific queries to the collections of papers and to extract critical COVID-19 knowledge quickly. Please note, access to this resource will be granted only to qualified researchers. To learn more and request access, please click here.

Aiding the Hunt for Treatments

The traditional drug discovery pipeline relies on a library of compounds that are screened, improved, and tested to determine safety and efficacy. In dealing with new pathogens such as SARS-CoV-2, there is the potential to enhance the compound libraries with additional novel compounds. To help address this need, IBM Research has recently created a new, AI-generative framework which can rapidly identify novel peptides, proteins, drug candidates and materials.

We have applied this AI technology against three COVID-19 targets to identify 3,000 new small molecules as potential COVID-19 therapeutic candidates. IBM is releasing these molecules under an open license, and researchers can study them via a new interactive molecular explorer tool to understand their characteristics and relationship to COVID-19 and identify candidates that might have desirable properties to be further pursued in drug development.

To streamline efforts to identify new treatments for COVID-19, we are also making the IBM Functional Genomics Platform available for free for the duration of the pandemic. Built to discover the molecular features in viral and bacterial genomes, this cloud-based repository and research tool includes genes, proteins and other molecular targets from sequenced viral and bacterial organisms in one place with connections pre-computed to help accelerate discovery of molecular targets required for drug design, test development and treatment.

Select IBM collaborators from government agencies, academic institutions and other organizations already use this platform for bacterial genomic study. And now, those working on COVID-19 can request the IBM Functional Genomics Platform interface to explore the genomic features of the virus. Access to the IBM Functional Genomics Platform will be prioritized for those conducting COVID-19 research. To learn more and request access, please click here.

Drug and Disease Information

Clinicians and healthcare professionals on the frontlines of care will also have free access to hundreds of pieces of evidence-based, curated COVID-19 and infectious disease content from IBM Micromedex and EBSCO DynaMed. Using these two rich decision support solutions, users will have access to drug and disease information in a single and comprehensive search. Clinicians can also provide patients with consumer-friendly patient education handouts with relevant, actionable medical information. IBM Micromedex is one of the largest online reference databases for medication information and is used by more than 4,500 hospitals and health systems worldwide. EBSCO DynaMed provides peer-reviewed clinical content, including systematic literature reviews in 28 specialties for comprehensive disease topics, health conditions and abnormal findings, to highly focused topics on evaluation, differential diagnosis and management.

The scientific community is working hard to make important new discoveries relevant to the treatment of COVID-19, and we’re hopeful that releasing these novel tools will help accelerate this global effort. This work also outlines our long-term vision for the future of accelerated discovery, where multi-disciplinary scientists and clinicians work together to rapidly and effectively create next generation therapeutics, aided by novel AI-powered technologies.

Learn more about IBM’s response to COVID-19: IBM.com/COVID19.


DiA Imaging Analysis Receives Grant to Accelerate Global Access to its AI Ultrasound Solutions in the Fight Against COVID-19

Source: https://www.grantnews.com/news-articles/?rkey=20200512UN05506&filter=12337

Grant will allow company to accelerate access to its AI solutions and use of ultrasound in COVID-19 emergency settings

TEL AVIV, Israel, May 12, 2020 /PRNewswire-PRWeb/ — DiA Imaging Analysis, a leading provider of AI-based ultrasound analysis solutions, today announced that it has received a government grant from the Israel Innovation Authority (IIA) to develop solutions for ultrasound imaging analysis of COVID-19 patients using artificial intelligence (AI).

Using ultrasound in point-of-care emergency settings has gained momentum since the outbreak of the COVID-19 pandemic. In these settings, which include makeshift hospital COVID-19 departments and triage “tents,” portable ultrasound offers clinicians diagnostic decision support, with the added advantages of being easier to disinfect and eliminating the need to transport patients from one room to another. However, analyzing ultrasound images is still a mostly visual process, leading to a growing market need for automated solutions and decision support.

As the leading provider of AI solutions for ultrasound analysis, backed by Connecticut Innovations, DiA makes ultrasound analysis smarter and accessible to both new and expert ultrasound users with various levels of experience. The company’s flagship LVivo Cardio Toolbox for AI-based cardiac ultrasound analysis enables clinicians to automatically generate objective clinical analysis, with increased accuracy and efficiency, to support decisions about patient treatment and care.

The IIA grant provides a budget of millions of NIS to increase access to DiA’s solutions for users in Israel and globally, and to accelerate R&D focused on new AI solutions for COVID-19 patient management. DiA’s solutions are vendor-neutral and platform-agnostic, and can run in low-processing, mobile environments such as handheld ultrasound devices.

Recent data highlight the importance of monitoring the heart during the progression of COVID-19, with one study citing 20% of patients hospitalized with COVID-19 showing signs of heart damage, along with increased mortality rates in those patients. DiA’s LVivo cardiac analysis solutions automatically generate objective, quantified cardiac ultrasound results, enabling point-of-care clinicians to assess cardiac function on the spot, at the patient’s bedside.

According to Dr. Ami Applebaum, Chairman of the Board of the IIA, “The purpose of IIA’s call was to bring solutions to global markets for fighting COVID-19, with an emphasis on relevancy, fast time to market and collaborations promising continuity of the Israeli economy. DiA meets these requirements with AI innovation for ultrasound.”

DiA has received several FDA/CE clearances and has established distribution partnerships with industry-leading companies including GE Healthcare, IBM Watson and Konica Minolta, currently serving thousands of end users worldwide.

“We see growing use of ultrasound in point-of-care settings, and an urgent need for automated, objective solutions that provide decision support in real time,” said Hila Goldman-Aslan, CEO and co-founder of DiA Imaging Analysis. “Our AI solutions meet this need by immediately helping clinicians on the frontlines to quickly and easily assess COVID-19 patients’ hearts to help guide care delivery.”

About DiA Imaging Analysis:
DiA Imaging Analysis provides advanced AI-based ultrasound analysis technology that makes ultrasound accessible to all. DiA’s automated tools deliver fast and accurate clinical indications to support the decision-making process and offer better patient care. DiA’s AI-based technology uses advanced pattern recognition and machine-learning algorithms to automatically imitate the way the human eye detects image borders and identifies motion. Using DiA’s tools provides automated and objective AI tools, helps reduce variability among users, and increases efficiency. It allows clinicians with various levels of experience to quickly and easily analyze ultrasound images.

For additional information, please visit http://www.dia-analysis.com.

Read Full Post »

Expanding 3D Printing in Cardiology

Reporter: Irina Robu, PhD

3D printing is a fabrication technique that transforms digital objects into physical models, building structures of arbitrary geometry by depositing material in successive layers according to a specific digital design. Even though the use of 3D bioprinting in cardiovascular medicine is a relatively new development, the discipline is advancing at a rapid rate. Most cardiologists once believed the costs were too high for routine use, with the price tag better suited to academic applications.

Now that prices are starting to fall, using 3D-printed models of organs, vessels, and tissues manufactured from CT, MRI, and echocardiography data may be beneficial, according to Dr. Fadi Matar, professor at the University of South Florida. He and his cardiology colleagues use 3D-printed models to view patients’ complex anatomies before deciding which treatments to pursue. The models allow them to calculate the size and exact placement of devices, which has led to shorter procedure times and better outcomes.

In a study published in Academic Radiology, David Ballard, professor at Washington University School of Medicine, appraised the costs of setting up a 3D printing lab, including the commercial printer plus software, lab space, materials, and staffing. According to Ballard’s team, commercial printers start at $12,000 but can cost as much as $500,000.

American Medical Association-approved Category III Current Procedural Terminology (CPT) codes offer cardiology some relief when setting up a new 3D printing lab: Codes 0559T and 0560T cover individually prepared 3D-printed anatomical models with one or more components (including arteries and veins), and Codes 0561T and 0562T cover the production of personalized 3D-printed cutting or drilling tools that use patient imaging data and often are used to guide or facilitate surgery.

These codes have been met with enthusiasm by teams eyeing 3D printing, but there are noteworthy limitations to Category III codes—which are temporary codes describing emerging technologies, services and procedures that are used for tracking effectiveness data. It is important to note that Category III codes are not reimbursed but often are a step toward reimbursement.

New and improved materials also might lead to a sharper focus on 3D printing in cardiology. Dr. Fadi Matar says companies are working on materials that better mimic elements of the heart. Such “mimicry” ought to enhance the value of 3D-printed models, since they will give cardiologists more realistic insights into how specific devices will interact with an individual patient’s heart. Even with the complexities of 3D bioprinting, the obstacles to setting up a 3D bioprinting lab should diminish over time.



Read Full Post »

Artificial Intelligence Innovations in Cardiac Imaging

Reporter: Aviva Lev-Ari, PhD, RN


3.3.23   Artificial Intelligence Innovations in Cardiac Imaging, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

‘CTA-for-All’ fast-tracks intervention, improves LVO detection in stroke patients

A “CTA-for-All” stroke imaging policy improved large vessel occlusion (LVO) detection, fast-tracked intervention and improved outcomes in a recent study of patients with acute ischemic stroke (AIS), researchers reported in Stroke.

“Combined noncontrast computed tomography (NCCT) and CT angiography (CTA) have been championed as the new minimum standard for initial imaging of disabling stroke,” Mayer, a neurologist at Henry Ford Hospital in Detroit, and co-authors wrote in their paper. “Patient selection criteria that impose arbitrary limits on time from last known well (LKW) or baseline National Institutes of Health Stroke Scale (NIHSS) score may delay CTA and the diagnosis of LVO.”

“These findings suggest that a uniform CTA-for-All imaging policy for stroke patients presenting within 24 hours is feasible and safe, improves LVO detection, speeds intervention and can improve outcomes,” the authors wrote. “The benefit appears to primarily affect patients presenting within six hours of symptom onset.”



How to integrate AI into the cardiac imaging pipeline

Hsiao said physicians can expect “a little bit of generalization” from neural networks, meaning they’ll work okay on data that they’ve never seen, but they’re not going to produce perfect results the first time around. If a model was trained on 3T MRI data, for example, and someone inputs 1.5T MRI data, it might not be able to analyze that information comprehensively. If some 1.5T data were fed into the model’s training algorithm, though, that could change.
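Hsiao’s point about partial generalization can be illustrated with a toy experiment: train a classifier on one “scanner” distribution, test it on a shifted distribution, then mix a little of the shifted data into training. This is a minimal sketch on synthetic data; the 3T/1.5T framing is only an analogy, and none of the numbers come from Hsiao’s work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_domain(n, shift):
    """Synthetic 2-class data; `shift` mimics a scanner-dependent offset."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label set before the shift
    return X + shift, y

# "3T-like" source domain and a shifted "1.5T-like" domain (analogy only).
Xa_tr, ya_tr = make_domain(2000, 0.0)
Xa_te, ya_te = make_domain(1000, 0.0)
Xb_tr, yb_tr = make_domain(300, 3.0)
Xb_te, yb_te = make_domain(1000, 3.0)

model = RandomForestClassifier(random_state=0).fit(Xa_tr, ya_tr)
acc_in = model.score(Xa_te, ya_te)    # in-distribution: high
acc_out = model.score(Xb_te, yb_te)   # shifted domain: degraded

# Feed a small amount of shifted-domain data into training, as Hsiao suggests.
model2 = RandomForestClassifier(random_state=0).fit(
    np.vstack([Xa_tr, Xb_tr]), np.concatenate([ya_tr, yb_tr]))
acc_mixed = model2.score(Xb_te, yb_te)  # recovers much of the gap

print(f"in-domain {acc_in:.2f}, shifted {acc_out:.2f}, after mixing {acc_mixed:.2f}")
```

The same pattern is why a model trained only on 3T data stumbles on 1.5T inputs, and why adding even a modest amount of the second distribution to training can change that.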

According to Hsiao, all of this knowledge means little without clinical validation. He said he and his colleagues are working to integrate algorithms into the clinical environment such that a radiologist could hit a button and AI could auto-prescribe a set of images. Even better, he said, would be the ability to open up a series and have it auto-prescribe itself.

“That’s where we’re moving next, so you don’t have to hit any buttons at all,” he said.



DiA Imaging, IBM pair to take the subjectivity out of cardiac image analysis



FDA clears Ultromics’ AI-based CV image analysis system

Smartphone app accurately finds, identifies CV implants—and fast

According to the study, the finalized model achieved 95% sensitivity and 98% specificity.

Ferrick et al. said that since their training sample size was somewhat small and limited to a single institution, it would be valuable to validate the model externally. Still, their neural network was able to accurately identify CIEDs on chest radiographs and translate that ability into a phone app.

“Rather than the conventional ‘bench-to-bedside’ approach of translational research, we demonstrated the feasibility of ‘big data-to-bedside’ endeavors,” the team said. “This research has the potential to facilitate device identification in urgent scenarios in medical settings with limited resources.”
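The reported 95% sensitivity and 98% specificity come straight from a confusion matrix; a minimal sketch of that computation, with made-up counts chosen only to reproduce those rates (not the actual counts from Ferrick et al.):

```python
# Hypothetical confusion-matrix counts for a device-identification model;
# illustrative numbers only, chosen to match the reported rates.
tp, fn = 95, 5    # true positives / false negatives among 100 actual positives
tn, fp = 98, 2    # true negatives / false positives among 100 actual negatives

sensitivity = tp / (tp + fn)  # fraction of actual positives correctly flagged
specificity = tn / (tn + fp)  # fraction of actual negatives correctly cleared

print(sensitivity, specificity)  # 0.95 0.98
```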



Machine learning cuts cardiac MRI analysis from minutes to seconds

“Cardiovascular MRI offers unparalleled image quality for assessing heart structure and function; however, current manual analysis remains basic and outdated,” Manisty said in a statement. “Automated machine learning techniques offer the potential to change this and radically improve efficiency, and we look forward to further research that could validate its superiority to human analysis.”

It’s estimated that around 150,000 cardiac MRIs are performed in the U.K. each year, she said, and based on that number, her team thinks using AI to read scans could mean saving 54 clinician-days per year at every health center in the country.
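The 54 clinician-days figure can be reproduced with back-of-envelope arithmetic, assuming roughly 13 minutes of manual analysis per scan and about 75 health centers; both of those inputs are assumptions back-calculated for illustration, not figures stated in the article.

```python
scans_per_year = 150_000   # U.K. cardiac MRIs per year (from the article)
minutes_per_scan = 13      # assumed manual analysis time automated away
centers = 75               # assumed number of centers (back-calculated)

total_clinician_days = scans_per_year * minutes_per_scan / 60 / 8  # 8-hour days
days_per_center = total_clinician_days / centers
print(round(days_per_center, 1))  # ≈ 54.2
```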

“Our dataset of patients with a range of heart diseases who received scans enabled us to demonstrate that the greatest sources of measurement error arise from human factors,” Manisty said. “This indicates that automated techniques are at least as good as humans, with the potential soon to be ‘superhuman’—transforming clinical and research measurement precision.”



General SOURCE: Cardiovascular Business newsletter <news@mail.cardiovascularbusiness.com>, “Cardiovascular Imaging | December 2019,” December 17, 2019.

Read Full Post »

Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science: Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel


3.3.21   Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

In a commentary article entitled “Medicine contends with how to use artificial intelligence,” Jennifer Couzin-Frankel discusses the barriers to efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting. In summary, these barriers result from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images and magnify small changes, such as in cardiac images, is developed within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators recently issued guidance describing how to convert research-based AI into improved medical images, published in the Journal of the American College of Radiology. The group suggested greater collaboration among relevant parties in the development of AI practices, including software engineers, scientists, clinicians and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on a lung scan indicative of pneumonia; when tested on a group of new patients, the algorithm detected pneumonia with 93% accuracy.

However, when the Sinai group tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicative of bias within the training set: in other words, there was something unique about the way Mt. Sinai does its scans relative to other hospitals. Indeed, many of the patients Mt. Sinai sees are too sick to get out of bed, so radiologists would use portable scanners, which generate different images than standalone scanners.

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR1, Badgeley MA2, Liu M2, Costa AB3, Titano JJ4, Oermann EK3.



There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.


A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). 
When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.


Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.
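The finding that CNNs could identify the hospital system almost perfectly suggests a simple sanity check any group can run: train a classifier to predict the *site* from the same features used for diagnosis; a site-prediction AUC well above 0.5 means site information is leaking into the model. A toy sketch on synthetic features, where one dimension carries a site-specific offset standing in for a scanner or protocol artifact:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

n = 1000
X = rng.normal(size=(n, 10))            # stand-in for image-derived features
site = rng.integers(0, 2, size=n)       # 0 = hospital A, 1 = hospital B
X[:, 0] += 2.0 * site                   # site-specific artifact baked into one feature

# If site is predictable from the features, a diagnosis model can exploit it.
clf = LogisticRegression().fit(X[:500], site[:500])
auc = roc_auc_score(site[500:], clf.predict_proba(X[500:])[:, 1])
print(f"site-prediction AUC: {auc:.2f}")  # well above 0.5 → confound present
```

In the Zech et al. setting this check would have flagged the near-perfect hospital-system detectability before the external-validation drop in performance was observed.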

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683



Surprisingly, not many researchers have begun to use data obtained from different hospitals. The FDA has issued some guidance on the matter but considers “locked,” or unchanging, AI software to be a medical device. However, the agency just announced development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software that can be used in multiple clinical settings. Otherwise, each hospital will need to develop its own software, used only on its own system, which would create a regulatory headache for the FDA.

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

Read Full Post »

Applying AI to Improve Interpretation of Medical Imaging

Author and Curator: Dror Nir, PhD

Applying AI to Improve Interpretation of Medical Imaging, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 3: AI in Medicine


The idea that we can use machines’ intelligence to help us perform daily tasks is no longer alien. As a consequence, applying AI to improve the assessment of patients’ clinical condition is booming. What used to be the field of daring start-ups has now become a playground for the tech giants: Google, Amazon, Microsoft and IBM.

Interpretation of medical imaging involves standardised workflows and requires analysis of many data items. Also, it is well established that human subjectivity is a barrier to reproducibility and transferability of medical-imaging results (evident in reports of high inter-observer variability in imaging interpretation). Accepting the fact that computers are better suited than humans to perform routine, repeated tasks involving “big data” analysis makes AI a very good candidate to improve on this situation. Google’s vision in that respect: “Machine learning has dozens of possible application areas, but healthcare stands out as a remarkable opportunity to benefit people — and working closely with clinicians and medical providers, we’re developing tools that we hope will dramatically improve the availability and accuracy of medical services.”

Google’s commitment to this vision is evident in its TensorFlow initiative. “TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.” Two recent papers describe at length the use of TensorFlow in retrospective studies (supported by Google AI) in which medical images (from publicly accessible databases) were used:

Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning, Nature Biomedical Engineering, Authors: Ryan Poplin, Avinash V. Varadarajan, Katy Blumer, Yun Liu, Michael V. McConnell, Greg S. Corrado, Lily Peng, and Dale R. Webster

As a demonstration of the benefits that the use of AI in interpretation of medical imaging could bring, this is a very interesting paper. The authors show how they could extract information relevant to assessing the risk of an adverse cardiac event from retinal fundus images collected while managing a totally different medical condition. “Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70).”


Clearly, if such an algorithm were implemented as a generalised and transferable medical device usable in routine practice, it would contribute to the cost-effectiveness of screening programs.

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography, Nature Medicine, Authors: Diego Ardila, Atilla P. Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J. Reicher, Lily Peng, Daniel Tse , Mozziyar Etemadi, Wenxing Ye, Greg Corrado, David P. Naidich and Shravya Shetty.

This paper is in line with many previously published works demonstrating how AI can increase the accuracy of cancer diagnosis compared with the current state of the art: “Existing challenges include inter-grader variability and high false-positive and false-negative rates. We propose a deep learning algorithm that uses a patient’s current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases.”


The benefit of using an AI-based application for lung cancer screening (if and when such an algorithm is implemented as a generalised and transferable medical device) is well summarised by the authors: “The strong performance of the model at the case level has important potential clinical relevance. The observed increase in specificity could translate to fewer unnecessary follow up procedures. Increased sensitivity in cases without priors could translate to fewer missed cancers in clinical practice, especially as more patients begin screening. For patients with prior imaging exams, the performance of the deep learning model could enable gains in workflow efficiency and consistency as assessment of prior imaging is already a key component of a specialist’s workflow. Given that LDCT screening is in the relatively early phases of adoption, the potential for considerable improvement in patient care in the coming years is substantial. The model’s localization directs follow-up for specific lesion(s) of greatest concern. These predictions are critical for patients proceeding for further work-up and treatment, including diagnostic CT, positron emission tomography (PET)/CT or biopsy. Malignancy risk prediction allows for the possibility of augmenting existing, manually created interpretation guidelines such as Lung-RADS, which are limited to subjective clustering and assessment to approximate cancer risk.”

BTW: The methods section in these two papers is detailed enough to allow any interested party to reproduce the study.

For the sake of balance-of-information, I would like to note that:

  • Amazon is encouraging access to its AI platform Amazon SageMaker “Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully-managed service that covers the entire machine learning workflow to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost.” Amazon is offering training courses to help programmers get proficiency in Machine-Learning using its AWS platform: “We offer 30+ digital ML courses totaling 45+ hours, plus hands-on labs and documentation, originally developed for Amazon’s internal use. Developers, data scientists, data platform engineers, and business decision makers can use this training to learn how to apply ML, artificial intelligence (AI), and deep learning (DL) to their businesses unlocking new insights and value. Validate your learning and your years of experience in machine learning on AWS with a new certification.”
  • IBM is offering a general-purpose AI platform named Watson. Watson is also promoted as a platform to develop AI applications in the “health” sector with the following positioning: “IBM Watson Health applies data-driven analytics, advisory services and advanced technologies such as AI, to deliver actionable insights that can help you free up time to care, identify efficiencies, and improve population health.”
  • Microsoft is offering its AI platform as a tool to accelerate development of AI solutions. They are also offering an AI school : “Dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI and Cognitive Toolkit. Our platform enables any developer to code in any language and infuse AI into your apps. Whether your solutions are existing or new, this is the intelligence platform to build on.”

Read Full Post »

Live Conference Coverage @Medcitynews Converge 2018 Philadelphia: The Davids vs. the Cancer Goliath Part 2

8:40 – 9:25 AM The Davids vs. the Cancer Goliath Part 2

Startups from diagnostics, biopharma, medtech, digital health and emerging tech will have 8 minutes to articulate their visions on how they aim to tame the beast.

Start Time End Time Company
8:40 8:48 3Derm
8:49 8:57 CNS Pharmaceuticals
8:58 9:06 Cubismi
9:07 9:15 CytoSavvy
9:16 9:24 PotentiaMetrics

Liz Asai, CEO & Co-Founder, 3Derm Systems, Inc. @liz_asai
John M. Climaco, CEO, CNS Pharmaceuticals @cns_pharma 

John Freyhof, CEO, CytoSavvy
Robert Palmer, President & CEO, PotentiaMetrics @robertdpalmer 
Moira Schieke M.D., Founder, Cubismi, Adjunct Assistant Prof UW Madison @cubismi_inc


3Derm Systems

3Derm Systems is an image-analysis firm for dermatologic malignancies. They use a tele-medicine platform to accurately triage out benign lesions observed by the primary care physician, expedite urgent pathology cases to the dermatologist, and rapidly consult with patients over a home or portable device (HIPAA compliant). Their suite also includes a digital dermatology teaching resource, with digital training for students and documentation services.


CNS Pharmaceuticals

CNS Pharmaceuticals is developing drugs against CNS malignancies, spun out of research at MD Anderson. They are focusing on glioblastoma and Berubicin, an anthracycline antibiotic (topoisomerase II inhibitor) that can cross the blood-brain barrier. Berubicin has good activity in a number of animal models. Phase I results were very positive and Phase II is scheduled for later in the year. They hope that the cardiotoxicity profile is less severe than for other anthracyclines. The market opportunity will be in temozolomide-resistant glioblastoma.


Cubismi

They are using machine learning and biomarker-based imaging to visualize tumor heterogeneity. “Data is the new oil” (Intel CEO). We need prediction machines, so they developed a “my body one file” system, a cloud-based, data-rich file containing a 3D map of the human body.




CytoSavvy is a digital pathology company. They feel AI has a fatal flaw: there is no way to tell how a decision was made. Instead, they use a Shape-Based Model Segmentation algorithm, which applies automated image analysis to provide objective, personalized pathology data. They are partnering with three academic centers (OSU, UM, UPMC) to pool data and automate the rule base for image analysis.

CytoSavvy’s patented diagnostic dashboards are intuitive, easy–to-use and HIPAA compliant. Our patented Shape-Based Modeling Segmentation (SBMS) algorithms combine shape and color analysis capabilities to increase reliability, save time, and improve decisions. Specifications and capabilities for our web-based delivery system follow.

link to their white paper: https://www.cytosavvy.com/resources/healthcare-ai-value-proposition.pdf


PotentiaMetrics

They were developing diagnostic software for cardiology epidemiology, measuring outcomes; however, when a family member got a cancer diagnosis, the team felt there was a need for outcomes-based models for cancer treatment and care. They deliver real-world outcomes for personalized patient care, helping patients make decisions on their care by using socioeconomic modeling integrated with real-time clinical data.

Featured in the Wall Street Journal, the informed treatment decisions they generate achieve a 20% cost savings on average. Their research was spun out of Washington University in St. Louis.

They have concentrated on urban markets; however, the CEO mentioned his desire to move into more rural areas of the country, as their models work well for patients in rural settings as well.

Please follow on Twitter at @pharma_BI.

Read Full Post »

Sperm Analysis by Smart Phone, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Sperm Analysis by Smart Phone

Reporter and Curator: Dr. Sudipta Saha, Ph.D.


Low sperm count and motility are markers for male infertility, a condition that is actually a neglected health issue worldwide, according to the World Health Organization. Researchers at Harvard Medical School have developed a very low cost device that can attach to a cell phone and provides a quick and easy semen analysis. The device is still under development, but a study of the machine’s capabilities concludes that it is just as accurate as the elaborate high cost computer-assisted semen analysis machines costing tens of thousands of dollars in measuring sperm concentration, sperm motility, total sperm count and total motile cells.


The Harvard team isn’t the first to develop an at-home fertility test for men, but they are the first able to determine sperm concentration as well as motility. The scientists compared the smart phone sperm tracker to current lab equipment by analyzing the same semen samples side by side. They analyzed over 350 semen samples from both infertile and fertile men. The smart phone system was able to identify abnormal sperm samples with 98 percent accuracy. The results of the study were published in the journal Science Translational Medicine.
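The 98 percent accuracy refers to flagging samples as abnormal against WHO semen-analysis thresholds. A minimal rule-based sketch of that decision is below; the cutoffs used here are the commonly cited WHO 5th-edition reference values (concentration below 15 million/mL or total motility below 40%), which is an assumption about the exact criteria the study applied.

```python
def classify_sample(concentration_m_per_ml, motile_fraction):
    """Flag a semen sample as abnormal under assumed WHO 5th-edition cutoffs:
    concentration < 15 million/mL or total motility < 40%."""
    return concentration_m_per_ml < 15 or motile_fraction < 0.40

print(classify_sample(50, 0.55))  # within both cutoffs → False (normal)
print(classify_sample(10, 0.55))  # low concentration → True (abnormal)
print(classify_sample(50, 0.25))  # low motility → True (abnormal)
```

The actual device measures concentration and motility from short video of the sample; the thresholding step that produces the normal/abnormal call is this simple.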


The device uses an optical attachment for magnification and a disposable microchip for handling the semen sample. With two lenses that require no manual focusing and an inexpensive battery, it slides onto the smart phone’s camera. Total cost for manufacturing the equipment: $4.45, including $3.59 for the optical attachment and 86 cents for the disposable micro-fluidic chip that contains the semen sample.


The software of the app is designed with a simple interface that guides the user through the test with onscreen prompts. After the sample is inserted, the app can photograph it, create a video and report the results in less than five seconds. The test results are stored on the phone so that semen quality can be monitored over time. The device is under consideration for approval from the Food and Drug Administration within the next two years.


With this device at home, a man can avoid the embarrassment and stress of providing a sample in a doctor’s clinic. The device could also be useful for men who get vasectomies, who are supposed to return to the urologist for semen analysis twice in the six months after the procedure. Compliance is typically poor, but with this device, a man could perform his own semen analysis at home and email the result to the urologist. This will make sperm analysis available in the privacy of our home and as easy as a home pregnancy test or blood sugar test.


The device costs about $5 to make in the lab and can be made available in the market at lower than $50 initially. This low cost could help provide much-needed infertility care in developing or underdeveloped nations, which often lack the resources for currently available diagnostics.


Read Full Post »

Older Posts »