
Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for the analysis of medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as have issues arising from multi-center trials.  The authors of this perspective article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1.  Interpretable AI: A typical AI/ML task requires constructing an algorithm that maps vector inputs to an output related to an outcome (such as diagnosing a cardiac event from an image).  Generally the algorithm must be trained on past data with known parameters.  When an algorithm is called interpretable, this means that the algorithm uses a transparent or “white box” function which is easily understandable, for example a linear function relating simple, non-complex parameters.  Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML systems, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML:  This type of approach starts with a “black box” model that depends on multiple complex parameters, then trains a second algorithm, built from an interpretable function, to approximate the outputs of the first model.  The second algorithm is trained not on the original data but on the predictions of the black-box model over multiple iterations of computing.  This method is more accurate, or deemed more reliable, in prediction, but the underlying model is very complex and not easily understandable.  Many medical devices that use an AI/ML algorithm are of this type; deep learning and neural networks are common examples of the underlying black-box models.

The purpose of both methodologies is to deal with the problem of opacity: AI predictions that emerge from a black box undermine trust in the AI.
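The contrast between the two approaches can be sketched in a few lines of code. The data and the "black box" function below are hypothetical stand-ins (a real black box would be, e.g., a deep net); the point is that the transparent surrogate is fit to the black box's predictions rather than to the original data.

```python
# Minimal sketch: an interpretable (linear) surrogate fit to the
# predictions of a black-box model, not to the original outcomes.
# All data and the black_box function are hypothetical.

def black_box(x):
    # Stand-in for a complex, non-transparent model.
    return 0.5 * x * x + 1.0

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (the transparent surrogate).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
preds = [black_box(x) for x in xs]   # black-box predictions
a, b = fit_line(xs, preds)           # surrogate trained on those predictions
print(f"surrogate: y = {a:.2f}*x + {b:.2f}")
```

The surrogate is what gets shown to the user as the "explanation"; the black box still makes the actual predictions, which is exactly the gap the Science authors worry about.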

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post by Jonathan Johnson:

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.
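The back-propagation step mentioned above can be illustrated for a single weight with squared-error loss. All numbers here are hypothetical; a real network repeats this update across all weights and many training examples.

```python
# One back-propagation update for a single weight under squared error.
# Hypothetical values throughout.

w = 0.5                 # current weight
x, target = 2.0, 3.0    # one training example
lr = 0.1                # learning rate

pred = w * x            # forward pass
error = pred - target   # prediction error
grad = error * x        # d(0.5 * error**2)/dw = error * x
w = w - lr * grad       # gradient-descent update on the weight

print(round(w, 4))
```

Each such update nudges the weight to reduce the error function, which is what "the machine learning algorithm assigns values to the empty parameters" amounts to in practice.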

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significantly each of the parameters, all the blue nodes, contributes to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
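A percentage breakdown like the one above is typically produced by normalizing raw importance scores so they sum to 100%. The raw scores below are hypothetical numbers chosen to reproduce the example percentages:

```python
# Normalizing raw (hypothetical) importance scores into percentages.
raw = {
    "career category": 0.8,
    "years smoking": 0.7,
    "age": 0.3,
    "BMI score": 0.2,
}
total = sum(raw.values())
importance = {k: 100 * v / total for k, v in raw.items()}
for name, pct in importance.items():
    print(f"{name}: {pct:.0f}% important")
```

In practice the raw scores would come from a method such as permutation importance or attribution over the trained model, not from hand-picked values.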

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers engineers is that it creates a black box of parameters, like fake additional data points, against which a model can base its decisions. These fake data points go unknown to the engineer. The black box, or hidden layers, allows a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.

So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example inputting their own test data
  • More interaction between users and model generators
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 


Yet another Success Story: Machine Learning to predict immunotherapy response

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Immunotherapy with immune-checkpoint blockers (ICBs) appears promising for various cancer types, offering a durable therapeutic advantage, but only a fraction of cancer patients respond to this therapy. Biomarkers are required to adequately predict patient responses. This article addresses the issue with a systems approach that characterizes the anti-tumor immune response based on the entire tumor microenvironment. The researchers build mechanistic biomarkers and cancer-specific response models using interpretable machine learning to predict patients' responses to ICB.

The lymphatic and immunological systems help the body defend itself by combating infection. The immune system functions as the body's own personal police force, hunting down and eliminating pathogenic baddies.

According to Federica Eduati, Department of Biomedical Engineering at TU/e, “The immune system of the body is quite adept at detecting abnormally behaving cells. Cells that potentially grow into tumors or cancer in the future are included in this category. Once identified, the immune system attacks and destroys the cells.”

Immunotherapy and machine learning are combining to help the immune system solve one of its most vexing problems: detecting hidden tumorous cells in the human body.

It is the fundamental responsibility of our immune system not only to identify and remove alien invaders like bacteria and viruses, but also to identify risks within the body, such as cancer. However, cancer cells have sophisticated ways of escaping death by shutting off immune cells. Immunotherapy can reverse the process, but not for all patients and types of cancer. To unravel the mystery, Eindhoven University of Technology researchers turned to machine learning. They developed a model to predict whether immunotherapy will be effective for a patient using a simple trick. Even better, the model outperforms conventional clinical approaches.

The outcomes of this research are published on 30th June, 2021 in the journal Patterns in an article entitled “Interpretable systems biomarkers predict response to immune-checkpoint inhibitors”.

The Study

  • Characterization of the tumor microenvironment from RNAseq and prior knowledge
  • Multi-task machine-learning models for predicting antitumor immune responses
  • Identification of cancer-type-specific, interpretable biomarkers of immune responses
  • EaSIeR is a tool to predict biomarker-based immunotherapy response from RNA-seq

“Tumor also contains multiple types of immune and fibroblast cells which can play a role in favor of or anti-tumor, and communicates among themselves,” said Oscar Lapuente-Santana, a doctoral researcher in the computational biology group. “We had to learn how complicated regulatory mechanisms in the micro-environment of the tumor affect the ICB response. We have used RNA sequencing datasets to depict numerous components of the Tumor Microenvironment (TME) in a high-level illustration.”

Using computational algorithms and datasets from previous clinical patient care, the researchers investigated the TME.

Eduati explained

While RNA-sequencing databases are publicly available, information on which patients responded to ICB therapy is only available for a limited group of patients and cancer types. So, to tackle the data problem, we used a trick.

All 100 models learned in the randomized cross-validation were included in the EaSIeR tool. For each validation dataset, we used the corresponding cancer-type-specific model: SKCM for the melanoma Gide, Auslander, Riaz, and Liu cohorts; STAD for the gastric cancer Kim cohort; BLCA for the bladder cancer Mariathasan cohort; and GBM for the glioblastoma Cloughesy cohort. To make predictions for each task, the average of the 100 cancer-type-specific models was employed. The predictions of each dataset's cancer-type-specific models were also compared to models generated for the remaining 17 cancer types.
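The ensemble step described above reduces, for one patient, to averaging the scores of the cancer-type-specific models. The per-model scores below are simulated; in EaSIeR they would come from the 100 models trained in cross-validation:

```python
# Sketch of ensemble averaging over cancer-type-specific models.
# Per-model scores are simulated stand-ins for trained-model outputs.
import random

random.seed(0)
n_models = 100
# Hypothetical response scores for one patient, one per model.
model_scores = [0.6 + random.uniform(-0.1, 0.1) for _ in range(n_models)]

ensemble_score = sum(model_scores) / n_models
print(f"ensemble prediction: {ensemble_score:.3f}")
```

Averaging over many cross-validation models is a standard way to stabilize predictions when each individual model was trained on a different random split of limited data.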

From the same datasets, the researchers selected several surrogate immunological responses to be used as a measure of ICB effectiveness.

Lapuente-Santana stated

One of the most difficult aspects of our job was properly training the machine learning models. We were able to fix this by looking at alternative immune responses during the training process.

Some of the researchers employed the machine learning approach given in the paper to participate in the “Anti-PD1 Response Prediction DREAM Challenge.”

DREAM is an organization that carries out crowd-based tasks with biomedical algorithms. “We were the first to compete in one of the sub-challenges under the name cSysImmunoOnco team,” Eduati remarks.

The researchers noted,

We applied machine learning to seek for connections between the obtained system-based attributes and the immune response, estimated using 14 predictors (proxies) derived from previous publications. We treated these proxies as individual tasks to be predicted by our machine learning models, and we employed multi-task learning algorithms to jointly learn all tasks.

The researchers discovered that their machine learning model surpasses biomarkers that are already utilized in clinical settings to evaluate ICB therapies.

But why are Eduati, Lapuente-Santana, and their colleagues using mathematical models to tackle a medical treatment problem? Is this going to take the place of the doctor?

Eduati explains

Mathematical models can provide an overview of the interconnection between individual molecules and cells while at the same time predicting a particular patient's tumor behavior. This implies that immunotherapy with ICB can be personalized in a patient's clinical setting. The models can aid physicians with their decisions about optimum therapy; it is vital to note that they will not replace them.

Furthermore, the model aids in determining which biological mechanisms are relevant for the biological response.

The researchers noted

Another advantage of our concept is that it does not need a dataset with known patient responses to immunotherapy for model training.

Further testing is required before these findings may be implemented in clinical settings.

Main Source:

Lapuente-Santana, Ó., van Genderen, M., Hilbers, P. A., Finotello, F., & Eduati, F. (2021). Interpretable systems biomarkers predict response to immune-checkpoint inhibitors. Patterns, 100293. https://www.cell.com/patterns/pdfExtended/S2666-3899(21)00126-4

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Inhibitory CD161 receptor recognized as a potential immunotherapy target in glioma-infiltrating T cells by single-cell analysis

Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/02/20/inhibitory-cd161-receptor-identified-in-glioma-infiltrating-t-cells-by-single-cell-analysis-2/

Immunotherapy may help in glioblastoma survival

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/03/16/immunotherapy-may-help-in-glioblastoma-survival/

Deep Learning for In-silico Drug Discovery and Drug Repurposing: Artificial Intelligence to search for molecules boosting response rates in Cancer Immunotherapy: Insilico Medicine @John Hopkins University

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/17/deep-learning-for-in-silico-drug-discovery-and-drug-repurposing-artificial-intelligence-to-search-for-molecules-boosting-response-rates-in-cancer-immunotherapy-insilico-medicine-john-hopkins-univer/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Cancer detection and therapeutics

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/02/cancer-detection-and-therapeutics/


Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance by utilizing routinely collected medical data. The researchers described their AI model as notable and distinctive because of its specific design, which performs assessments at the population level.

The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained with data from approximately 1.6 million patients, validated with data from about 243,000 patients, and evaluated with data from more than 236,000 patients. The data used to build the algorithm included each patient's medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test-set area under the receiver operating characteristic (ROC) curve of 80.26%.
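The area under the ROC curve reported here has a simple interpretation: it is the probability that the model scores a randomly chosen patient who develops diabetes higher than a randomly chosen patient who does not. A small sketch, with hypothetical scores and labels (not the study's data):

```python
# Rank-based computation of the area under the ROC curve (AUC).
# Scores and labels are hypothetical illustration data.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairs where a positive outranks a negative (ties count half).
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # model risk scores
labels = [1,   1,   0,   1,   0,   0]     # 1 = developed diabetes
print(f"AUC = {auc(scores, labels):.4f}")
```

An AUC of 50% corresponds to random ranking, so the study's 80.26% indicates substantially better-than-chance discrimination.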

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases, targeting specific cohorts that may face adverse outcomes.

Research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

