Posts Tagged ‘Artificial intelligence’

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

   In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for analysis of medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as well as issues arising from multi-center trials.  The authors of this perspective article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.


Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1.  Interpretable AI: A typical AI/ML task requires constructing an algorithm that maps vector inputs to an output related to an outcome (such as diagnosing a cardiac event from an image).  Generally the algorithm has to be trained on past data with known parameters.  When an algorithm is called interpretable, it uses a transparent or “white box” function that is easily understandable.  An example might be a linear function for determining relationships, whose parameters are simple rather than complex.  Although interpretable algorithms may not be as accurate as the more complex black-box models paired with explainable AI/ML, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML:  This approach depends on many complex parameters: it takes a first round of predictions from a “black box” model and then fits a second, interpretable algorithm to approximate the outputs of the first.  The explaining algorithm is trained not on the original data but on the black-box model’s predictions, resembling multiple iterations of computing.  This method can be more accurate, or is deemed more reliable, in prediction; however, the underlying model is very complex and not easily understandable.  Many medical devices that use an AI/ML algorithm are of this type.  Deep learning and neural networks are examples of the underlying black-box models.

The purpose of both methodologies is to deal with the problem of opacity: predictions that come from a black box can undermine trust in the AI.
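The distinction can be sketched in a few lines of code.  Everything below is invented for illustration (toy models, coefficients, and data); the point is only the workflow: an interpretable model’s parameters are read directly, while explainable AI fits a second, interpretable surrogate to a black-box model’s predictions rather than to the original data.

```python
import math
import random

random.seed(0)

# 1. Interpretable ("white box") model: the parameters ARE the explanation.
#    (Coefficients here are invented for illustration.)
def interpretable_risk(age, bmi):
    return 0.02 * age + 0.03 * bmi  # risk rises 0.02/year of age, 0.03/BMI unit

# 2. A "black box": we may query it, but we treat its inner logic as opaque.
def black_box_risk(age, bmi):
    return 0.02 * age + 0.03 * bmi + 0.05 * math.sin(age * bmi)

# 3. Explainable AI: fit a simple linear surrogate to the black box's
#    PREDICTIONS (not to the original data), as in the two-step scheme above.
samples = [(random.uniform(30, 80), random.uniform(18, 40)) for _ in range(200)]
preds = [black_box_risk(a, b) for a, b in samples]

# Closed-form least squares for a two-coefficient surrogate (no intercept).
saa = sum(a * a for a, _ in samples)
sbb = sum(b * b for _, b in samples)
sab = sum(a * b for a, b in samples)
say = sum(a * y for (a, _), y in zip(samples, preds))
sby = sum(b * y for (_, b), y in zip(samples, preds))
det = saa * sbb - sab * sab
w_age = (sbb * say - sab * sby) / det
w_bmi = (saa * sby - sab * say) / det

# The surrogate is readable even though the black box is not.
print(f"surrogate: risk ~ {w_age:.3f}*age + {w_bmi:.3f}*bmi")
```

Here the surrogate happens to approximate the black box well; in general a post-hoc surrogate is only an approximation of the black box’s behavior, which is precisely the caveat the Policy Forum article raises.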

For a deeper understanding of these two types of algorithms, see this blog post: https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post by Jonathan Johnson:

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow nodes; the outputs are the orange ones. Like a rubric to an overall grade, explainability shows how significantly each of the parameters (all the blue nodes) contributes to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
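As a toy illustration of how such a percentage breakdown might be produced, the snippet below normalizes a set of invented raw importance scores (none of these numbers come from a real model; only the normalization step is the point):

```python
# Hypothetical raw importance scores for the four inputs above.
raw_importance = {
    "career category": 0.8,
    "years smoking": 0.7,
    "age": 0.3,
    "BMI score": 0.2,
}

# Normalize so the importances sum to 100%.
total = sum(raw_importance.values())
percentages = {k: round(100 * v / total) for k, v in raw_importance.items()}

print(percentages)
```

With these invented scores the normalization reproduces the 40/35/15/10 split quoted above.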

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers engineers is that it creates a black box of parameters, like fake additional data points, against which the model can base its decisions. These fake data points are unknown to the engineer. The black box, or hidden layers, allows the model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespans of these individuals and puts a placeholder in the deep net to associate them. If we were to examine the individual nodes in the black box, we could note that this clustering interprets water-related careers as high-risk jobs.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.

So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example inputting their own test data
  • More interaction between users and model developers
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package


Read Full Post »

AI is on the way to lead critical ED decisions on CT

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Artificial intelligence (AI) has infiltrated many organizational processes, raising concerns that robotic systems will eventually replace many humans in decision-making. The advent of AI as a tool for improving health care provides new prospects to improve patient and clinical team’s performance, reduce costs, and impact public health. Examples include, but are not limited to, automation; information synthesis for patients, “fRamily” (friends and family unpaid caregivers), and health care professionals; and suggestions and visualization of information for collaborative decision making.

In the emergency department (ED), patients with Crohn’s disease (CD) are routinely subjected to Abdomino-Pelvic Computed Tomography (APCT). It is necessary to diagnose clinically actionable findings (CAF), since these may require immediate intervention, which is typically surgical. Repeated APCTs, on the other hand, result in higher ionizing radiation exposure. Most APCT performance guidance is clinical and empiric, and emergency surgeons struggle to identify which Crohn’s disease patients actually require a CT scan to determine the source of acute abdominal distress.

Image Courtesy: Jim Coote via Pixabay https://www.aiin.healthcare/media/49446

Aid seems to be on the way. Researchers employed machine learning to accurately distinguish these patients from Crohn’s patients who present with the same complaint but may safely avoid the repeated exposure to contrast materials and ionizing radiation that CT would otherwise impose on them.

The study entitled “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department” was published on July 9 in Digestive and Liver Disease by gastroenterologists and radiologists at Tel Aviv University in Israel.

Retrospectively, Jacob Ollech and his fellow researchers analyzed 101 emergency treatments of patients with Crohn’s disease who underwent abdominopelvic CT.

They were looking for examples where a scan revealed clinically actionable results. These were classified as intestinal blockage, perforation, intra-abdominal abscess, or complex fistula by the researchers.

On CT, 44 (43.5%) of the 101 cases reviewed had such findings.

Ollech and colleagues utilized a machine-learning technique to design a decision-support tool that required only four basic clinical factors to test an AI approach for making the call.

The approach was successful in categorizing patients into low- and high-risk groups, allowing the researchers to risk-stratify patients by the likelihood of clinically actionable findings on abdominopelvic CT.
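The study’s actual model, its four clinical factors, and its thresholds are described in the paper; purely as a sketch of the risk-stratification idea, one can imagine a simple logistic score over hypothetical inputs:

```python
import math

# Hedged sketch only: the factors, coefficients, and cutoffs below are
# invented for illustration and are NOT the study's model.
def crohn_ct_risk(wbc, crp, prior_surgery, days_since_flare):
    # Toy logistic score over four hypothetical clinical factors.
    z = (0.04 * wbc + 0.02 * crp
         + 0.8 * prior_surgery - 0.01 * days_since_flare)
    return 1 / (1 + math.exp(-z))  # probability-like score in (0, 1)

def stratify(prob, low=0.2, high=0.5):
    if prob < low:
        return "low risk: CT may be safely deferred"
    if prob > high:
        return "high risk: proceed to abdominopelvic CT"
    return "intermediate: standard clinical workup"

# A hypothetical high-scoring patient.
p = crohn_ct_risk(wbc=14.0, crp=60.0, prior_surgery=1, days_since_flare=3)
print(f"score={p:.2f} -> {stratify(p)}")
```

The intermediate band mirrors the authors’ observation that some patients still require a standard workup to guide the CT decision.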

Ollech and co-authors admit that their limited sample size, retrospective strategy, and lack of external validation are shortcomings.

Moreover, several patients fell into an intermediate risk category, implying that a standard workup would have been required to guide CT decision-making in a real-world situation anyhow.

Consequently, they generate the following conclusion:

We believe this study shows that a machine learning-based tool is a sound approach for better-selecting patients with Crohn’s disease admitted to the ED with acute gastrointestinal complaints about abdominopelvic CT: reducing the number of CTs performed while ensuring that patients with high risk for clinically actionable findings undergo abdominopelvic CT appropriately.

Main Source:

Konikoff, Tom, Idan Goren, Marianna Yalon, Shlomit Tamir, Irit Avni-Biron, Henit Yanai, Iris Dotan, and Jacob E. Ollech. “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department.” Digestive and Liver Disease (2021). https://www.sciencedirect.com/science/article/abs/pii/S1590865821003340

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI App for People with Digestive Disorders

Reporter: Irina Robu, Ph.D.


Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc


AI System Used to Detect Lung Cancer

Reporter: Irina Robu, Ph.D.


Artificial Intelligence: Genomics & Cancer


Yet another Success Story: Machine Learning to predict immunotherapy response

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc


Systemic Inflammatory Diseases as Crohn’s disease, Rheumatoid Arthritis and Longer Psoriasis Duration May Mean Higher CVD Risk

Reporter: Aviva Lev-Ari, PhD, RN


Autoimmune Inflammatory Bowel Diseases: Crohn’s Disease & Ulcerative Colitis: Potential Roles for Modulation of Interleukins 17 and 23 Signaling for Therapeutics

Curators: Larry H Bernstein, MD FCAP and Aviva Lev-Ari, PhD, RN https://pharmaceuticalintelligence.com/2016/01/23/autoimmune-inflammtory-bowl-diseases-crohns-disease-ulcerative-colitis-potential-roles-for-modulation-of-interleukins-17-and-23-signaling-for-therapeutics/

Inflammatory Disorders: Inflammatory Bowel Diseases (IBD) – Crohn’s and Ulcerative Colitis (UC) and Others

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN


Read Full Post »

This AI Just Evolved From Companion Robot To Home-Based Physician Helper

Reporter: Ethan Coomber, Research Assistant III, Data Science and Podcast Library Development 

Article Author: Gil Press Senior Contributor Enterprise & Cloud @Forbes 

Twitter: @GilPress I write about technology, entrepreneurs and innovation.

Intuition Robotics announced today that it is expanding its mission of improving the lives of older adults to include enhancing their interactions with their physicians. The Israeli startup has developed the AI-based, award-winning proactive social robot ElliQ which has spent over 30,000 days in older adults’ homes over the past two years. Now ElliQ will help increase patient engagement while offering primary care providers continuous actionable data and insights for early detection and intervention.

The very big challenge Intuition Robotics set out to solve was to “understand how to create a relationship between a human and a machine,” says co-founder and CEO Dor Skuler. Unlike a number of unsuccessful high-profile social robots (e.g., Pepper) that tried to perform multiple functions in multiple settings, ElliQ has focused exclusively on older adults living alone. Understanding empathy and how to grow a trusting relationship were the key objectives of Intuition Robotics’ research project, as well as how to continuously learn the specific (and changing) behavioral characteristics, habits, and preferences of the older adults participating in the experiment.

The results are impressive: 90% of users engage with ElliQ every day, with no deterioration in engagement over time. When ElliQ proactively initiates deep conversational interactions with its users, there is a 70% response rate. Most important, the participants share something personal with ElliQ almost every day. “She has picked up my attitude… she’s figured me out,” says Deanna Dezern, an ElliQ user who describes her robot companion as “my sister from another mother.”


Higher patient engagement leads to lower costs of delivering care and the quality of the physician-patient relationship is positively associated with improved functional health, studies have found. Typically, however, primary care physicians see their patients anywhere from once a month to once a year, even though about 85% of seniors in the U.S. have at least one chronic health condition. ElliQ, with the consent of its users, can provide data on the status of patients in between office visits and facilitate timely and consistent communications between physicians and their patients.

Supporting the notion of a home-based physician assistant robot is the transformation of healthcare delivery in the U.S. More and more primary care physicians are moving from a fee-for-service business model, where doctors are paid according to the procedures used to treat a patient, to “capitation,” where doctors are paid a set amount for each patient they see. This shift in how doctors are compensated is gaining momentum as a key solution for reducing the skyrocketing costs of healthcare: “…inadequate, unnecessary, uncoordinated, and inefficient care and suboptimal business processes eat up at least 35%—and maybe over 50%—of the more than $3 trillion that the country spends annually on health care. That suggests more than $1 trillion is being squandered,” states “The Case for Capitation,” a Harvard Business Review article.

Under this new business model, physicians have a strong incentive to reduce or eliminate visits to the ER and hospitalization, so ElliQ’s assistance in early intervention and support of proactive and preventative healthcare is highly valuable. ElliQ’s “new capabilities provide physicians with visibility into the patient’s condition at home while allowing seamless communication… can assist me and my team in early detection and mitigation of health issues, and it increases patients’ involvement in their care through more frequent engagement and communication,” says Dr. Peter Barker of Family Doctors, a Mass General Brigham-affiliated practice in Swampscott, MA, that is working with Intuition Robotics, in a statement.

With the new stage in its evolution, ElliQ becomes “a conversational agent for self-reported data on how people are doing based on what the doctor is telling us to look for and, at the same time, a super-simple communication channel between the physician and the patient,” says Skuler. As only 20% of the individual’s health has to do with the administration of healthcare, Skuler says the balance is already taken care of by ElliQ—encouraging exercise, watching nutrition, keeping mentally active, connecting to the outside world, and promoting a sense of purpose.

A recent article in Communications of the ACM pointed out that “usability concerns have for too long overshadowed questions about the usefulness and acceptability of digital technologies for older adults.” Specifically, the authors challenge the long-held assumption that accessibility and aging research “fall under the same umbrella despite the fact that aging is neither an illness nor a disability.”

For Skuler, a “pyramid of value” is represented in Intuition Robotics’ offering. At the foundation is the physical product, easy to use and operate and doing what it is expected to do. Then there is the layer of “building relationships based on trust and empathy,” with a lot of humor and social interaction and activities for the users. On top are specific areas of value to older adults, and the first one is healthcare. There will be more in the future, anything that could help older adults live better lives, such as direct connections to the local community. “Healthcare is an interesting experiment and I’m very much looking forward to see what else the future holds for ElliQ,” says Skuler.

Original. Reposted with permission, 7/7/2021.

Other related articles published in this Open Access Online Scientific Journal include the Following:

The Future of Speech-Based Human-Computer Interaction
Reporter: Ethan Coomber

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I

Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance by utilizing routinely collected medical data. The researchers described their AI model as notable and distinctive because of its specific design, which performs assessments at the population level.

The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated in the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained with data from approximately 1.6 million patients, validated with data from about 243,000 patients, and tested with data from more than 236,000 patients. The data used to train the algorithm included each patient’s medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the ROC curve (AUC) of 80.26%.

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”
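The AUC figure quoted above can be computed directly from predicted scores and observed outcomes. Below is a minimal sketch with invented toy data (not the study’s code or data); the rank-based formulation used here is equivalent to the area under the ROC curve:

```python
def roc_auc(labels, scores):
    # Probability that a randomly chosen positive outranks a randomly
    # chosen negative (ties count as half) -- equivalent to the ROC AUC.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented toy data: 1 = developed type 2 diabetes within five years.
labels = [0, 0, 1, 0, 1, 1, 0, 1]
scores = [0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.2, 0.9]

print(f"AUC = {roc_auc(labels, scores):.4f}")  # 0.9375 on this toy data
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why the reported 80.26% indicates useful but imperfect discrimination.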

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases and could be aimed at specific cohorts that may face poorer outcomes.

Research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”




Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD


Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.


HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN


AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN


Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN


New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD


Read Full Post »

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Seeing “through” the cancer with the power of data analysis — possible with the help of artificial intelligence. Credit: MPI f. Molecular Genetics/ Ella Maru Studio
Image Source: https://medicalxpress.com/news/2021-04-sum-mutations-cancer-genes-machine.html

Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. Early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as they can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high- or low-risk groups has led many research teams, from both the biomedical and bioinformatics fields, to study the application of machine learning (ML) and artificial intelligence (AI) methods. These techniques have therefore been utilized to model the progression and treatment of cancerous conditions with newly developed prediction algorithms.

In the majority of human cancers, heritable loss of gene function through cell division may be mediated as often by epigenetic as by genetic abnormalities. Epigenetic modification occurs through a process of interrelated changes in CpG island methylation and histone modifications. Candidate gene approaches of cell cycle, growth regulatory and apoptotic genes have shown epigenetic modification associated with loss of cognate proteins in sporadic pituitary tumors.

On 11th November 2020, researchers from the University of California, Irvine advanced the understanding of epigenetic mechanisms in tumorigenesis and revealed a previously undetected repertoire of cancer driver genes. The study was published in Science Advances.

Researchers were able to identify novel tumor suppressor genes (TSGs) and oncogenes (OGs), particularly those with rare mutations, by using a new prediction algorithm called DORGE (Discovery of Oncogenes and tumor suppressor genes using Genetic and Epigenetic features), which integrates the most comprehensive collection of genetic and epigenetic data.
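As a loose sketch of the feature-integration idea, the snippet below combines one genetic feature with two epigenetic ones into a single score. DORGE’s real features, weights, and model are described in the paper; everything here is invented for illustration.

```python
# Hypothetical sketch only: toy weights and toy gene feature values.
# All inputs are assumed to be scaled to [0, 1].
def driver_gene_score(mutation_freq, promoter_methylation, h3k4me3_breadth):
    # Invented weighted combination of genetic and epigenetic evidence.
    return (0.4 * mutation_freq
            + 0.3 * promoter_methylation
            + 0.3 * h3k4me3_breadth)

genes = {
    "GENE_A": (0.9, 0.8, 0.7),  # invented feature values
    "GENE_B": (0.1, 0.2, 0.1),
}

for name, feats in genes.items():
    s = driver_gene_score(*feats)
    verdict = "candidate driver" if s > 0.5 else "unlikely driver"
    print(f"{name}: score={s:.2f} -> {verdict}")
```

The point is only that epigenetic signals contribute alongside mutation data, so a gene with rare mutations can still score highly if its epigenetic features are strong.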

The senior author Wei Li, Ph.D., the Grace B. Bell chair and professor of bioinformatics in the Department of Biological Chemistry at the UCI School of Medicine said

Existing bioinformatics algorithms do not sufficiently leverage epigenetic features to predict cancer driver genes, even though epigenetic alterations are known to be associated with cancer driver genes.

The Study

This study demonstrated that the cancer driver genes predicted by DORGE included both known cancer driver genes and novel driver genes not reported in the current literature. In addition, the researchers found that the novel dual-functional genes, which DORGE predicted as both TSGs and OGs, are highly enriched at hubs in protein-protein interaction (PPI) and drug/compound-gene networks.

Prof. Li explained that the DORGE algorithm successfully leveraged public data to discover the genetic and epigenetic alterations that play significant roles in cancer driver gene dysregulation, and that it could be instrumental in improving cancer prevention, diagnosis, and treatment efforts in the future.

Another new algorithmic approach to identifying cancer genes by machine learning has been carried out by a team of researchers at the Max Planck Institute for Molecular Genetics (MPIMG) in Berlin and the Institute of Computational Biology of Helmholtz Zentrum München. Combining a wide variety of data and analyzing it with artificial intelligence, they identified numerous cancer genes. They termed the algorithm EMOGI (Explainable Multi-Omics Graph Integration). EMOGI can predict which genes cause cancer, even if their DNA sequence is not changed. This opens up new perspectives for targeted cancer therapy in personalized medicine and for the development of biomarkers. The research was published in Nature Machine Intelligence on 12th April 2021.

In cancer, cells get out of control. They proliferate and push their way into tissues, destroying organs and thereby impairing essential vital functions. This unrestricted growth is usually induced by an accumulation of DNA changes in cancer genes—i.e. mutations in these genes that govern the development of the cell. But some cancers have only very few mutated genes, which means that other causes lead to the disease in these cases.

The Study

Overlap of EMOGI’s positive predictions with known cancer genes (KCGs) and candidate cancer genes
Image Source: https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-021-00325-y/MediaObjects/42256_2021_325_MOESM1_ESM.pdf

The aims of the study are represented under four main headings:

  • Additional targets for personalized medicine
  • Better results by combination
  • In search of hints for further studies
  • Suitable for other types of diseases as well

The team, headed by Annalisa Marsico, used the algorithm to identify 165 previously unknown cancer genes. The sequences of these genes are not necessarily altered; apparently, dysregulation of these genes alone can lead to cancer. All of the newly identified genes interact closely with well-known cancer genes and were found to be essential for the survival of tumor cells in cell culture experiments. EMOGI can also explain the relationships in the cell’s machinery that make a gene a cancer gene. The software integrates tens of thousands of data sets generated from patient samples. In addition to sequence data with mutations, these contain information about DNA methylation, the activity of individual genes, and the interactions of proteins within cellular pathways. In these data, a deep-learning algorithm detects the patterns and molecular principles that lead to the development of cancer.

Marsico says

Ideally, we obtain a complete picture of all cancer genes at some point, which can have a different impact on cancer progression for different patients

Unlike traditional cancer treatments such as chemotherapy, personalized treatments are tailored to the exact type of tumor. “The goal is to choose the best treatment for each patient, the most effective treatment with the fewest side effects. In addition, molecular properties can be used to identify cancers that are already in the early stages.”

Roman Schulte-Sasse, a doctoral student on Marsico’s team and the first author of the publication says

To date, most studies have focused on pathogenic changes in sequence, or cell blueprints; at the same time, it has recently become clear that epigenetic perturbation or dysregulation of gene activity can also lead to cancer.

For this reason, the researchers merged sequence data that reflects blueprint failures with information that represents events in cells. Initially, the scientists confirmed that mutations, or amplification of genomic segments, were the leading cause of cancer. Then, in a second step, they identified gene candidates that are less directly related to the genes that cause cancer.

Clues for future directions

The researchers’ new program adds a considerable number of new entries to the list of suspected cancer genes, which has grown to between 700 and 1,000 in recent years. It was only through a combination of bioinformatics analysis and the newest artificial intelligence (AI) methods that the researchers were able to track down the hidden genes.

Schulte-Sasse says, “The interactions of proteins and genes can be mapped as a mathematical network, known as a graph.” He explains with the example of a railroad network: each station corresponds to a protein or gene, and each interaction between them is a train connection. With the help of deep learning, the very algorithms that have helped artificial intelligence make a breakthrough in recent years, the researchers were able to discover even those train connections that had previously gone unnoticed. Schulte-Sasse had the computer analyze tens of thousands of different network maps from 16 different cancer types, each containing between 12,000 and 19,000 data points.
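The graph propagation behind this "railroad map" analogy can be sketched as a single graph convolution layer. This is a minimal illustration of the generic GCN update, ReLU(Â H W), not the EMOGI implementation; the adjacency matrix, features, and weights below are made up:

```python
import numpy as np

# A toy protein-interaction "railroad map": 4 genes, undirected edges.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Per-gene multi-omics features (columns might be mutation frequency,
# promoter methylation, expression change -- illustrative values only).
H = np.array([[0.9, 0.2, 0.1],
              [0.1, 0.8, 0.3],
              [0.5, 0.5, 0.5],
              [0.0, 0.1, 0.9]])

# Symmetric normalization with self-loops, as in a standard GCN layer:
# Â = D^(-1/2) (A + I) D^(-1/2)
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

W = np.random.default_rng(1).normal(size=(3, 2))  # learnable weights
H_next = np.maximum(0, A_norm @ H @ W)            # ReLU(Â H W)

# Each gene's new representation now mixes its neighbors' features --
# how "train connections" influence which stations look cancer-related.
print(H_next.shape)  # → (4, 2)
```

Stacking several such layers lets information flow along multi-step paths in the network, which is how distant but connected genes can influence a candidate gene's score.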

Many more interesting details are hidden in the data. Patterns were seen that depend on the particular cancer and tissue. The researchers interpreted this as evidence that tumors are triggered by different molecular mechanisms in different organs.

Marsico explains

The EMOGI program is not limited to cancer, the researchers emphasize. “In theory, it can be used to integrate diverse sets of biological data and find patterns there. It could be useful to apply our algorithm to similarly complex diseases for which multifaceted data are collected and where genes play an important role. An example might be complex metabolic diseases such as diabetes.”

Main Source

New prediction algorithm identifies previously undetected cancer driver genes


Integration of multiomics data with graph convolutional networks to identify new cancer genes and their associated molecular mechanisms


Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD


Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN


Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis

Curator & Reporter: Aviva Lev-Ari, PhD, RN


Cancer detection and therapeutics

Curator: Larry H. Bernstein, MD, FCAP


Free Bio-IT World Webinar: Machine Learning to Detect Cancer Variants

Reporter: Stephen J. Williams, PhD


Artificial Intelligence: Genomics & Cancer


Premalata Pati, PhD, PostDoc in Biological Sciences, Medical Text Analysis with Machine Learning


Read Full Post »

Artificial intelligence predicts the immunogenic landscape of SARS-CoV-2

Reporter: Irina Robu, PhD

Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns. Scientists from NEC OncoImmunity use artificial intelligence to design universal vaccines for COVID-19 that contain a broad spectrum of T-cell epitopes capable of providing coverage and protection across the global population. To test their hypothesis, they profiled the entire SARS-CoV-2 proteome across the 100 most frequent HLA-A, HLA-B, and HLA-DR alleles in the human population, using the host-infected-cell surface antigen and immunogenicity predictors from the NEC Immune Profiler suite of tools, and generated comprehensive epitope maps. They used the epitope maps as the starting point for a Monte Carlo simulation intended to identify the most significant epitope hotspots in the virus. They then analyzed the antigen presentation and immunogenic landscape to recognize a trend whereby SARS-CoV-2 mutations are expected to have minimized the potential of epitopes to be presented by host-infected cells and subsequently noticed by the host immune system. A sequence conservation analysis then removed epitope hotspots that occurred in less-conserved regions of the viral proteome.

By merging the antigen presentation and immunogenicity estimates of the NEC Immune Profiler with Monte Carlo and digital twin simulations, the researchers have mapped the entire SARS-CoV-2 proteome and recognized a subset of epitope hotspots that could be used in a vaccine formulation to provide wide-ranging coverage across the global population.

They then used a database of the HLA haplotypes of approximately 22,000 individuals to design a “digital twin” type simulation that models how efficiently various combinations of hotspots would work in a varied human population.
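A toy version of such a coverage simulation, with invented epitope-to-HLA bindings and a randomly generated population (nothing here comes from the NEC study), might look like this:

```python
import random

# Hypothetical epitope -> set of HLA alleles it is predicted to bind.
epitope_hla = {
    "EP1": {"A*02:01", "A*01:01"},
    "EP2": {"B*07:02", "A*02:01"},
    "EP3": {"DRB1*15:01", "B*08:01"},
    "EP4": {"A*24:02"},
}

# Simulated individuals, each carrying a small set of HLA alleles
# (a real "digital twin" cohort would use ~22,000 typed haplotypes).
random.seed(0)
alleles = sorted({a for s in epitope_hla.values() for a in s})
population = [set(random.sample(alleles, 3)) for _ in range(1000)]

def coverage(epitopes):
    """Fraction of individuals presenting at least one chosen epitope."""
    covered = 0
    for person in population:
        if any(epitope_hla[e] & person for e in epitopes):
            covered += 1
    return covered / len(population)

# Monte Carlo: randomly sample candidate 2-epitope combinations and keep
# the one that covers the largest share of the simulated population.
best = max((random.sample(list(epitope_hla), 2) for _ in range(200)),
           key=coverage)
print(best, round(coverage(best), 2))
```

The design choice this illustrates is that population coverage is a property of the epitope *set*, not of individual epitopes, which is why a combinatorial search over combinations is needed at all.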



Read Full Post »

AI-controlled sensors could save lives in smart hospitals and homes

Reporter: Irina Robu, PhD

Arnold Milstein, professor of medicine and director of Stanford’s Clinical Excellence Research Center, along with computer science professor Fei-Fei Li and graduate student Albert Haque, believes that building technologies into the physical spaces where health care is delivered can minimize the rate of fatal errors that occur due to the sheer volume of patients and the complexity of their care. Even though the technology is very promising, it also raises legal and regulatory issues as well as privacy concerns.

They believe that AI can alert clinicians and patient visitors when they fail to sanitize their hands before entering a hospital room, for example. AI tools can also be built into smart homes, where the technology can monitor the frail elderly for behavioral clues of a health crisis and enable in-home caregivers, remotely located clinicians, and patients to make life-saving interventions.

Li and Milstein co-direct the 8-year-old Stanford Partnership in AI-Assisted Care (PAC), one of a growing number of centers, including those at Johns Hopkins University and the University of Toronto, where technologists and clinicians have teamed up to develop ambient intelligence technologies to help health care providers manage patient volumes; roughly 24 million Americans required an overnight hospital stay in 2018.

Haque, who compiled the 170 scientific papers cited in the Nature article, points to two enabling developments: the availability of infrared sensors that are inexpensive enough to build into high-risk care-giving environments, and the rise of machine learning systems as a way to use sensor input to train specialized AI applications in health care.

The infrared technologies are of two types. The first is active infrared, such as the invisible light beams used by TV remote controls. However, instead of simply beaming invisible light in one direction like a TV remote, new active infrared systems use AI to compute the time it takes the invisible rays to bounce back to the source, like a light-based form of radar that maps the 3D outlines of a person or object.
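The time-of-flight principle behind such active infrared sensors reduces to one formula: since the pulse travels out and back, the one-way distance is d = c·t/2. A minimal sketch (the nanosecond value is just an illustrative example):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """One-way distance from an active-infrared round-trip time.

    The pulse travels to the target and back, so d = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after ~13.34 nanoseconds corresponds to roughly 2 meters,
# about the distance from a ceiling sensor to a patient's bed.
print(round(tof_distance(13.34e-9), 2))  # → 2.0
```

The nanosecond timescales here are why these sensors need dedicated timing hardware; a per-pixel array of such measurements is what yields the 3D outline described above.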

These alert systems are being tested to see if they can reduce the number of ICU patients who acquire nosocomial infections because other people in the hospital fail to fully adhere to infection prevention protocols.

The second type of infrared technology is passive detection, the kind that allows night vision goggles to generate thermal images from the infrared rays given off by body heat. In a hospital setting, a thermal sensor above an ICU bed would allow the governing AI to sense twitching beneath the sheets and alert clinical team members to a forthcoming health crisis without their continuously going from room to room.

Constant monitoring by ambient intelligence systems in a home environment could also be used to detect clues of serious illness or potential accidents, and alert caregivers to make timely interventions. Researchers are still developing activity recognition algorithms that can sift through infrared sensing data to detect variations in habitual behaviors and help caregivers get a more holistic view of patient health.




Read Full Post »

Systems Biology analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Curator: Stephen J. Williams, Ph.D.

In the June 2020 issue of the journal Science, writer Roxanne Khamsi has an interesting article, “Computing Cancer’s Weak Spots; An algorithm to unmask tumors’ molecular linchpins is tested in patients” [1], describing some early successes in incorporating cancer genome sequencing, in conjunction with artificial intelligence algorithms, into personalized clinical treatment decisions for various tumor types.  In 2016, oncologist Amy Tiersten collaborated with systems biologist Andrea Califano and cell biologist Jose Silva at Mount Sinai Hospital to develop a systems biology approach, which determined that the drug ruxolitinib, a STAT3 inhibitor, would be effective against one of her patient’s aggressively recurring, Herceptin-resistant breast tumors.  Dr. Califano, instead of defining networks of driver mutations, focused on identifying a few transcription factors that act as ‘linchpins’ or master controllers of transcriptional networks within tumor cells, in doing so hoping to, in essence, ‘bottleneck’ the transcriptional machinery of potential oncogenic products. As Dr. Califano states

“targeting those master regulators and you will stop cancer in its tracks, no matter what mutation initially caused it.”

It is important to note that this approach also relies on the ability to sequence tumors by RNA-seq to determine the underlying mutations that dictate which master regulators are pertinent in any one tumor.  And given the wide heterogeneity within tumor samples, this sequencing effort may have to involve multiple biopsies (as discussed in earlier posts on tumor heterogeneity in renal cancer).

As stated in the article, Califano co-founded a company called DarwinHealth in 2015 to guide doctors by identifying the key transcription factors in a patient’s tumor and suggesting personalized therapeutics against those identified molecular targets (OncoTarget™).  He has collaborated with the Jackson Laboratory and, most recently, Columbia University to conduct a $15 million, 3,000-patient clinical trial.  This was a bit of a stretch from his initial training as a physicist; in 1986, IBM hired him for some artificial intelligence projects.  He then landed at Columbia in 2003 and has been working on identifying the transcriptional nodes that govern cancer survival and tumorigenicity.  Dr. Califano had figured that the number of genetic mutations which could potentially be drivers was too vast:

A 2018 study which analyzed more than 9000 tumor samples reported over 1.5 million mutations[2]

and impossible to develop therapeutics against. He reasoned that one would instead have to identify the common connections between these pathways, or transcriptional nodes, which he termed master regulators.

A Pan-Cancer Analysis of Enhancer Expression in Nearly 9000 Patient Samples

Chen H, Li C, Peng X, et al. Cell. 2018;173(2):386-399.e12.


The role of enhancers, a key class of non-coding regulatory DNA elements, in cancer development has increasingly been appreciated. Here, we present the detection and characterization of a large number of expressed enhancers in a genome-wide analysis of 8928 tumor samples across 33 cancer types using TCGA RNA-seq data. Compared with matched normal tissues, global enhancer activation was observed in most cancers. Across cancer types, global enhancer activity was positively associated with aneuploidy, but not mutation load, suggesting a hypothesis centered on “chromatin-state” to explain their interplay. Integrating eQTL, mRNA co-expression, and Hi-C data analysis, we developed a computational method to infer causal enhancer-gene interactions, revealing enhancers of clinically actionable genes. Having identified an enhancer ∼140 kb downstream of PD-L1, a major immunotherapy target, we validated it experimentally. This study provides a systematic view of enhancer activity in diverse tumor contexts and suggests the clinical implications of enhancers.


A diagram of how concentrating on these transcriptional linchpins or nodes may be more therapeutically advantageous as only one pharmacologic agent is needed versus multiple agents to inhibit the various upstream pathways:



From: Khamsi R: Computing cancer’s weak spots. Science 2020, 368(6496):1174-1177.


VIPER Algorithm (Virtual Inference of Protein activity by Enriched Regulon Analysis)

The algorithm that Califano and DarwinHealth developed is a systems biology approach that uses a tumor’s RNA-Seq data to determine controlling nodes of transcription.  They have recently used the VIPER algorithm to look at RNA-Seq data from more than 10,000 tumor samples from TCGA and identified 407 transcription factor genes that acted as these linchpins across all tumor types.  Only 20 to 25 of them were implicated in just one tumor type, so these potential nodes are common to many forms of cancer.
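The intuition behind VIPER, inferring a transcription factor's activity from how its regulon behaves in an expression signature, can be caricatured in a few lines. This is only a sign-corrected average, not the rank-based aREA statistic VIPER actually uses, and all gene names and values are hypothetical:

```python
# Hypothetical single-sample gene expression signature: z-scores of each
# gene in a tumor relative to a reference (positive = up-regulated).
signature = {"G1": 2.1, "G2": 1.8, "G3": -0.4, "G4": -1.9,
             "G5": 0.2, "G6": 1.5, "G7": -2.2, "G8": 0.1}

# A regulon: targets of a transcription factor TF_X, with the mode of
# regulation (+1 = activated by TF_X, -1 = repressed by TF_X).
regulon = {"G1": +1, "G2": +1, "G6": +1, "G7": -1}

def activity_score(signature, regulon):
    """Crude regulon-enrichment proxy: mean signature value of the
    targets, sign-corrected by the mode of regulation. VIPER's aREA
    statistic is a rank-based enrichment test; this keeps the intuition
    that a TF is "active" when its activated targets are up and its
    repressed targets are down."""
    vals = [sign * signature[g] for g, sign in regulon.items()]
    return sum(vals) / len(vals)

# A clearly positive score: TF_X is inferred to be active in this sample
# even if its own mRNA level or mutation status says nothing.
print(round(activity_score(signature, regulon), 2))  # → 1.9
```

This is exactly the leverage described in the abstract below: protein activity is read off the downstream targets, so a dysregulated master regulator can be detected even without a mutation in its own gene.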

Other institutions, like the Cold Spring Harbor Laboratory, have been using VIPER in their patient tumor analyses, and linchpins for other tumor types have been found.  For instance, VIPER identified the transcription factors IKZF1 and IKZF3 as linchpins in multiple myeloma.  But currently approved therapeutics are hard to come by for targets which are transcription factors, as most pharma has concentrated on inhibiting easier targets like kinases and their associated activity.  In general, developing transcription factor inhibitors is a more difficult undertaking, for multiple reasons.

Network-based inference of protein activity helps functionalize the genetic landscape of cancer. Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A. Nature Genetics 2016, 48(8):838-847 [3]


Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible. To address this problem we introduce and experimentally validate a new algorithm, VIPER (Virtual Inference of Protein-activity by Enriched Regulon analysis), for the accurate assessment of protein activity from gene expression data. We use VIPER to evaluate the functional relevance of genetic alterations in regulatory proteins across all TCGA samples. In addition to accurately inferring aberrant protein activity induced by established mutations, we also identify a significant fraction of tumors with aberrant activity of druggable oncoproteins—despite a lack of mutations, and vice-versa. In vitro assays confirmed that VIPER-inferred protein activity outperforms mutational analysis in predicting sensitivity to targeted inhibitors.





Figure 1 

Schematic overview of the VIPER algorithm From: Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A: Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nature genetics 2016, 48(8):838-847.

(a) Molecular layers profiled by different technologies. Transcriptomics measures steady-state mRNA levels; Proteomics quantifies protein levels, including some defined post-translational isoforms; VIPER infers protein activity based on the protein’s regulon, reflecting the abundance of the active protein isoform, including post-translational modifications, proper subcellular localization and interaction with co-factors. (b) Representation of VIPER workflow. A regulatory model is generated from ARACNe-inferred context-specific interactome and Mode of Regulation computed from the correlation between regulator and target genes. Single-sample gene expression signatures are computed from genome-wide expression data, and transformed into regulatory protein activity profiles by the aREA algorithm. (c) Three possible scenarios for the aREA analysis, including increased, decreased or no change in protein activity. The gene expression signature and its absolute value (|GES|) are indicated by color scale bars, induced and repressed target genes according to the regulatory model are indicated by blue and red vertical lines. (d) Pleiotropy Correction is performed by evaluating whether the enrichment of a given regulon (R4) is driven by genes co-regulated by a second regulator (R4∩R1). (e) Benchmark results for VIPER analysis based on multiple-samples gene expression signatures (msVIPER) and single-sample gene expression signatures (VIPER). Boxplots show the accuracy (relative rank for the silenced protein), and the specificity (fraction of proteins inferred as differentially active at p < 0.05) for the 6 benchmark experiments (see Table 2). Different colors indicate different implementations of the aREA algorithm, including 2-tail (2T) and 3-tail (3T), Interaction Confidence (IC) and Pleiotropy Correction (PC).

 Other articles from Andrea Califano on VIPER algorithm in cancer include:

Resistance to neoadjuvant chemotherapy in triple-negative breast cancer mediated by a reversible drug-tolerant state.

Echeverria GV, Ge Z, Seth S, Zhang X, Jeter-Jones S, Zhou X, Cai S, Tu Y, McCoy A, Peoples M, Sun Y, Qiu H, Chang Q, Bristow C, Carugo A, Shao J, Ma X, Harris A, Mundi P, Lau R, Ramamoorthy V, Wu Y, Alvarez MJ, Califano A, Moulder SL, Symmans WF, Marszalek JR, Heffernan TP, Chang JT, Piwnica-Worms H.Sci Transl Med. 2019 Apr 17;11(488):eaav0936. doi: 10.1126/scitranslmed.aav0936.PMID: 30996079

An Integrated Systems Biology Approach Identifies TRIM25 as a Key Determinant of Breast Cancer Metastasis.

Walsh LA, Alvarez MJ, Sabio EY, Reyngold M, Makarov V, Mukherjee S, Lee KW, Desrichard A, Turcan Ş, Dalin MG, Rajasekhar VK, Chen S, Vahdat LT, Califano A, Chan TA.Cell Rep. 2017 Aug 15;20(7):1623-1640. doi: 10.1016/j.celrep.2017.07.052.PMID: 28813674

Inhibition of the autocrine IL-6-JAK2-STAT3-calprotectin axis as targeted therapy for HR-/HER2+ breast cancers.

Rodriguez-Barrueco R, Yu J, Saucedo-Cuevas LP, Olivan M, Llobet-Navas D, Putcha P, Castro V, Murga-Penas EM, Collazo-Lorduy A, Castillo-Martin M, Alvarez M, Cordon-Cardo C, Kalinsky K, Maurer M, Califano A, Silva JM.Genes Dev. 2015 Aug 1;29(15):1631-48. doi: 10.1101/gad.262642.115. Epub 2015 Jul 30.PMID: 26227964

Master regulators used as breast cancer metastasis classifier.

Lim WK, Lyashenko E, Califano A.Pac Symp Biocomput. 2009:504-15.PMID: 19209726 Free


Additional References


  1. Khamsi R: Computing cancer’s weak spots. Science 2020, 368(6496):1174-1177.
  2. Chen H, Li C, Peng X, Zhou Z, Weinstein JN, Liang H: A Pan-Cancer Analysis of Enhancer Expression in Nearly 9000 Patient Samples. Cell 2018, 173(2):386-399 e312.
  3. Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A: Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nature genetics 2016, 48(8):838-847.


Other articles of Note on this Open Access Online Journal Include:

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing


Read Full Post »

Live Notes, Real Time Conference Coverage AACR 2020: Tuesday June 23, 2020 3:00 PM-5:30 PM Educational Sessions

Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using




Register for FREE at https://www.aacr.org/

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Tumor Biology, Bioinformatics and Systems Biology

The Clinical Proteomic Tumor Analysis Consortium: Resources and Data Dissemination

This session will provide information regarding methodologic and computational aspects of proteogenomic analysis of tumor samples, particularly in the context of clinical trials. Availability of comprehensive proteomic and matching genomic data for tumor samples characterized by the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) and The Cancer Genome Atlas (TCGA) program will be described, including data access procedures and informatic tools under development. Recent advances on mass spectrometry-based targeted assays for inclusion in clinical trials will also be discussed.

Amanda G Paulovich, Shankha Satpathy, Meenakshi Anurag, Bing Zhang, Steven A Carr

Methods and tools for comprehensive proteogenomic characterization of bulk tumor to needle core biopsies

Shankha Satpathy
  • TCGA has 11,000 cancers with >20,000 somatic alterations but only 128 proteins, as proteomics was still a young field
  • CPTAC is NCI proteomic effort
  • Chemical labeling approach now method of choice for quantitative proteomics
  • Looked at ovarian and breast cancers: to measure PTM like phosphorylated the sample preparation is critical


Data access and informatics tools for proteogenomics analysis

Bing Zhang
  • Raw and processed data (raw MS data) with linked clinical data can be extracted in CPTAC
  • Python scripts are available for bioinformatic programming


Pathways to clinical translation of mass spectrometry-based assays

Meenakshi Anurag

  • Using kinase inhibitor pulldown (KIP) assay to identify unique kinome profiles
  • Found single strand break repair defects in endometrial luminal cases, especially with immune checkpoint prognostic tumors
  • Paper: JNCI 2019 analyzed 20,000 genes correlated with ET resistance in luminal B cases (selected for a list of 30 genes)
  • Validated in METABRIC dataset
  • KIP assay uses magnetic beads to pull out kinases to determine druggable kinases
  • Looked in xenografts and was able to pull out differential kinomes
  • Matched with PDX data so good clinical correlation
  • Were able to detect ESR1 fusion correlated with ER+ tumors

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session

Artificial Intelligence and Machine Learning from Research to the Cancer Clinic

The adoption of omic technologies in the cancer clinic is giving rise to an increasing number of large-scale high-dimensional datasets recording multiple aspects of the disease. This creates the need for frameworks for translatable discovery and learning from such data. Like artificial intelligence (AI) and machine learning (ML) for the cancer lab, methods for the clinic need to (i) compare and integrate different data types; (ii) scale with data sizes; (iii) prove interpretable in terms of the known biology and batch effects underlying the data; and (iv) predict previously unknown experimentally verifiable mechanisms. Methods for the clinic, beyond the lab, also need to (v) produce accurate actionable recommendations; (vi) prove relevant to patient populations based upon small cohorts; and (vii) be validated in clinical trials. In this educational session we will present recent studies that demonstrate AI and ML translated to the cancer clinic, from prognosis and diagnosis to therapy.
NOTE: Dr. Fish’s talk is not eligible for CME credit to permit the free flow of information of the commercial interest employee participating.

Ron C. Anafi, Rick L. Stevens, Orly Alter, Guy Fish

Overview of AI approaches in cancer research and patient care

Rick L. Stevens
  • Deep learning is less likely to saturate as data increases
  • Deep learning attempts to learn multiple layers of information
  • The ultimate goal is prediction but this will be the greatest challenge for ML
  • ML models can integrate data validation and cross database validation
  • What limits the performance of cross validation is the internal noise of data (reproducibility)
  • Learning curves: it is not more data but more reproducible data that is important
  • Neural networks can outperform classical methods
  • Important to measure validation accuracy in training set. Class weighting can assist in development of data set for training set especially for unbalanced data sets
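The class-weighting idea mentioned in the last bullet can be sketched as inverse-frequency weights, a common recipe for unbalanced training sets (the labels here are invented):

```python
from collections import Counter

# Hypothetical unbalanced training labels (e.g., responder vs. non-responder).
labels = ["neg"] * 90 + ["pos"] * 10

counts = Counter(labels)
n, k = len(labels), len(counts)

# Inverse-frequency weighting: each class contributes equally to the
# training loss, so the rare class is not drowned out by the common one.
class_weight = {c: n / (k * counts[c]) for c in counts}
print(class_weight)  # neg ≈ 0.56, pos = 5.0
```

Frameworks typically accept such a dictionary directly (e.g., as a per-class loss multiplier), which is the mechanism the speaker points to for training on unbalanced clinical datasets.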

Discovering genome-scale predictors of survival and response to treatment with multi-tensor decompositions

Orly Alter
  • Finding patterns using SVD component analysis. Gene and SVD patterns match 1:1
  • Comparative spectral decompositions can be used for global datasets
  • Validation of CNV data using this strategy
  • Found Ras, Shh and Notch pathways with altered CNV in glioblastoma which correlated with prognosis
  • These predictors were significantly better than independent prognostic indicators like age at diagnosis
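The SVD-based pattern finding described in these notes can be illustrated with NumPy's `svd`; the matrix below is random stand-in data, not the glioblastoma CNV dataset from the talk:

```python
import numpy as np

# Hypothetical patients-x-genes copy-number matrix (6 patients, 20 genes).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 20))

# SVD factors the data into orthogonal patient patterns (U), gene
# patterns (Vt), and their weights (s), largest pattern first.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Fraction of the variation captured by the leading pattern -- in the
# talk's framing, such dominant components are the candidates to compare
# against clinical outcomes like survival.
leading = s[0] ** 2 / np.sum(s ** 2)
print(U.shape, s.shape, Vt.shape)
```

Because U and Vt are paired column by column, each patient pattern matches exactly one gene pattern, which is the "gene and SVD patterns match 1:1" point in the notes.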


Identifying targets for cancer chronotherapy with unsupervised machine learning

Ron C. Anafi
  • Many clinicians have noticed that some patients do better when chemo is given at certain times of the day and felt there may be a circadian rhythm or chronotherapeutic effect with respect to side effects or with outcomes
  • ML used to determine if there is indeed this chronotherapy effect or can we use unstructured data to determine molecular rhythms?
  • Found a circadian transcription in human lung
  • Most datasets in cancer are from one clinical trial, so more trials might need to be conducted to take circadian rhythms into consideration

Stratifying patients by live-cell biomarkers with random-forest decision trees


Guy Fish, CEO, Cellanyx Diagnostics


Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Tumor Biology, Molecular and Cellular Biology/Genetics, Bioinformatics and Systems Biology, Prevention Research

The Wound Healing that Never Heals: The Tumor Microenvironment (TME) in Cancer Progression

This educational session focuses on the chronic wound healing, fibrosis, and cancer “triad.” It emphasizes the similarities and differences seen in these conditions and attempts to clarify why sustained fibrosis commonly supports tumorigenesis. Importance will be placed on cancer-associated fibroblasts (CAFs), vascularity, extracellular matrix (ECM), and chronic conditions like aging. Dr. Dvorak will provide an historical insight into the triad field focusing on the importance of vascular permeability. Dr. Stewart will explain how chronic inflammatory conditions, such as the aging tumor microenvironment (TME), drive cancer progression. The session will close with a review by Dr. Cukierman of the roles that CAFs and self-produced ECMs play in enabling the signaling reciprocity observed between fibrosis and cancer in solid epithelial cancers, such as pancreatic ductal adenocarcinoma.

Harold F Dvorak, Sheila A Stewart, Edna Cukierman


The importance of vascular permeability in tumor stroma generation and wound healing

Harold F Dvorak

Aging in the driver’s seat: Tumor progression and beyond

Sheila A Stewart

Why won’t CAFs stay normal?

Edna Cukierman


Tuesday, June 23

3:00 PM – 5:00 PM EDT








Other Articles on this Open Access  Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM

Read Full Post »

Live Notes, Real Time Conference Coverage AACR 2020 #AACR20: Tuesday June 23, 2020 Noon-2:45 Educational Sessions

Live Notes, Real Time Conference Coverage AACR 2020: Tuesday June 23, 2020 Noon-2:45 Educational Sessions

Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using




Register for FREE at https://www.aacr.org/


Presidential Address

Elaine R Mardis, William N Hait


Welcome and introduction

William N Hait


Improving diagnostic yield in pediatric cancer precision medicine

Elaine R Mardis
  • Advent of genomics have revolutionized how we diagnose and treat lung cancer
  • We are currently needing to understand the driver mutations and variants where we can personalize therapy
  • PD-L1 and other checkpoint therapy have not really been used in pediatric cancers even though CAR-T have been successful
  • The incidence rates and mortality rates of pediatric cancers are rising
  • Large-scale study of over 700 pediatric cancers shows cancers driven by epigenetic drivers or fusion proteins. Need for transcriptomics. The study also demonstrated that we have underestimated germline mutations and hereditary factors.
  • They put together a database to nominate patients on their IGM Cancer protocol. Involves genetic counseling and obtaining germ line samples to determine hereditary factors.  RNA and protein are evaluated as well as exome sequencing. RNASeq and Archer Dx test to identify driver fusions
  • PECAN curated database from St. Jude used to determine driver mutations. They use multiple databases and overlap within these databases and knowledge base to determine or weed out false positives
  • They have used these studies to understand the immune infiltrate into recurrent cancers (CytoCure)
  • They found 40 germline cancer predisposition genes, 47 driver somatic fusion proteins, 81 potential actionable targets, 106 CNV, 196 meaningful somatic driver mutations



Tuesday, June 23

12:00 PM – 12:30 PM EDT

Awards and Lectures

NCI Director’s Address

Norman E Sharpless, Elaine R Mardis


Introduction: Elaine Mardis


NCI Director Address: Norman E Sharpless
  • They are functioning well at NCI with respect to grant reviews, research, and general functions in spite of the COVID pandemic and the massive demonstrations, while also focusing on the disparities which occur in the cancer research field and in cancer care
  • There are ongoing efforts at NCI to make a positive difference in racial injustice, diversity in the cancer workforce, and for patients as well
  • Need a diverse workforce across the cancer research and care spectrum
  • Data show that areas where the clinicians are successful in putting African Americans on clinical trials are areas (geographic and site specific) where health disparities are narrowing
  • Grants through NCI new SeroNet for COVID-19 serologic testing funded by two RFAs through NIAD (RFA-CA-30-038 and RFA-CA-20-039) and will close on July 22, 2020


Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Immunology, Tumor Biology, Experimental and Molecular Therapeutics, Molecular and Cellular Biology/Genetics

Tumor Immunology and Immunotherapy for Nonimmunologists: Innovation and Discovery in Immune-Oncology

This educational session will update cancer researchers and clinicians about the latest developments in the detailed understanding of the types and roles of immune cells in tumors. It will summarize current knowledge about the types of T cells, natural killer cells, B cells, and myeloid cells in tumors and discuss current knowledge about the roles these cells play in the antitumor immune response. The session will feature some of the most promising up-and-coming cancer immunologists who will inform about their latest strategies to harness the immune system to promote more effective therapies.

Judith A Varner, Yuliya Pylayeva-Gupta



Judith A Varner
New techniques reveal critical roles of myeloid cells in tumor development and progression
  • Cell types beyond T cells, such as myeloid cells, are becoming targets for immune checkpoint therapy
  • In T-cell-excluded or "desert" tumors, T cells are held at the periphery while myeloid cells can still infiltrate; macrophages, the most abundant immune cell type in tumors, might therefore be effective targets in these T-cell-naïve tumors
  • CXCL chemokines are potential targets
  • PI3K-delta inhibitors reduce the infiltrate of suppressive myeloid cells such as macrophages
  • A key open question is when to give myeloid-directed versus T-cell-directed therapy
Judith A Varner
Novel strategies to harness T-cell biology for cancer therapy
Positive and negative roles of B cells in cancer
Yuliya Pylayeva-Gupta
New approaches in cancer immunotherapy: Programming bacteria to induce systemic antitumor immunity



Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Cancer Chemistry

Chemistry to the Clinic: Part 2: Irreversible Inhibitors as Potential Anticancer Agents

There are numerous examples of highly successful covalent drugs such as aspirin and penicillin that have been in use for a long period of time. Despite historical success, there was a period of reluctance among many to pursue covalent drugs based on concerns about toxicity. With advances in understanding features of a well-designed covalent drug, new techniques to discover and characterize covalent inhibitors, and clinical success of new covalent cancer drugs in recent years, there is renewed interest in covalent compounds. This session will provide a broad look at covalent probe compounds and drug development, including a historical perspective, examination of warheads and electrophilic amino acids, the role of chemoproteomics, and case studies.

Benjamin F Cravatt, Richard A. Ward, Sara J Buhrlage


Discovering and optimizing covalent small-molecule ligands by chemical proteomics

Benjamin F Cravatt
  • Multiple approaches are being investigated to find new covalent inhibitors: 1) cysteine reactivity mapping, 2) mapping cysteine ligandability, and 3) functional screening of electrophilic compounds in phenotypic assays
  • Fluorescent activity probes in proteomic screens are broadly usable across the proteome yet can be made specific
  • They screened quiescent versus stimulated T cells to determine reactive cysteines in a phenotypic screen analyzed by MS proteomics (cysteine reactivity profiling); 15,000 to 20,000 reactive cysteines can be quantitated
  • Isocitrate dehydrogenase 1 and the adapter protein LCP-1 are two examples of changes in reactive cysteines seen with this method
  • They use scout molecules to target ligands or proteins with reactive cysteines
  • For phenotypic screens they first use a cytotoxicity assay to screen out toxic compounds that simply kill cells without causing T-cell activation (measured, e.g., by IL-10 secretion)
  • Interestingly, coupling these MS reactive-cysteine screens with phenotypic screens can reveal noncanonical mechanisms for many of these target proteins (many of the compounds hit targets that were not predicted or known)
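The quantitative core of competitive cysteine-reactivity profiling can be sketched as a ratio calculation: a cysteine engaged by a compound shows reduced probe labeling, giving a high control-to-treated intensity ratio. This is a minimal sketch, assuming hypothetical intensity values and a commonly used ratio cutoff; it is not Cravatt lab code.

```python
# Sketch of competitive cysteine-reactivity profiling: compute the ratio of
# probe-labeling intensity between a control sample and a compound-treated
# sample, then flag cysteines whose labeling dropped sharply (liganded sites).
# All intensities below are hypothetical.

def reactivity_ratios(control, treated, pseudo=1e-6):
    """Control/treated labeling ratio per cysteine (pseudo avoids divide-by-zero)."""
    return {cys: control[cys] / (treated[cys] + pseudo) for cys in control}

control = {"IDH1_C269": 100.0, "LCP1_C42": 95.0, "GAPDH_C152": 98.0}
treated = {"IDH1_C269": 10.0,  "LCP1_C42": 90.0, "GAPDH_C152": 97.0}

ratios = reactivity_ratios(control, treated)
liganded = [c for c, r in ratios.items() if r >= 4.0]  # illustrative cutoff
print(liganded)
```

A high ratio means the electrophile blocked probe labeling at that cysteine, which is how unexpected (noncanonical) targets surface in these screens.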

Electrophilic warheads and nucleophilic amino acids: A chemical and computational perspective on covalent modifiers

The covalent targeting of cysteine residues in drug discovery and its application to the discovery of Osimertinib

Richard A. Ward
  • Cysteine activation: the thiolate form of cysteine is a strong nucleophile
  • The thiolate form is preferred in a polar environment
  • Activation can be assisted by neighboring residues; pKa affects deprotonation
  • The pKa values of cysteines vary across EGFR
  • Cysteines that are too reactive give toxicity, while those that are not reactive enough are ineffective
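The link between cysteine pKa and reactivity follows directly from the Henderson-Hasselbalch relation: the lower the pKa, the larger the fraction of nucleophilic thiolate at physiological pH. A small sketch, with illustrative pKa values:

```python
# Sketch: fraction of a cysteine present as the nucleophilic thiolate at a
# given pH, from the Henderson-Hasselbalch relation. Illustrates why pKa
# shifts from neighboring residues change reactivity. pKa values are
# illustrative, not measured EGFR values.

def thiolate_fraction(pka, ph=7.4):
    """Fraction deprotonated: 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for pka in (8.5, 7.4, 6.0):  # roughly: free cysteine vs activated cysteine
    print(f"pKa {pka}: {thiolate_fraction(pka):.2%} thiolate at pH 7.4")
```

A pKa shift of just two units changes the thiolate fraction by more than an order of magnitude, which is why neighboring residues have such a strong effect on covalent-drug kinetics.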


Accelerating drug discovery with lysine-targeted covalent probes


Tuesday, June 23

12:45 PM – 2:15 PM EDT

Virtual Educational Session

Molecular and Cellular Biology/Genetics


Tumor Biology, Immunology

Metabolism and Tumor Microenvironment

This Educational Session aims to guide discussion on the heterogeneous cells and metabolism in the tumor microenvironment. It is now clear that the diversity of cells in tumors each require distinct metabolic programs to survive and proliferate. Tumors, however, are genetically programmed for high rates of metabolism and can present a metabolically hostile environment in which nutrient competition and hypoxia can limit antitumor immunity.

Jeffrey C Rathmell, Lydia Lynch, Mara H Sherman, Greg M Delgoffe


T-cell metabolism and metabolic reprogramming antitumor immunity

Jeffrey C Rathmell


Jeffrey C Rathmell

Metabolic functions of cancer-associated fibroblasts

Mara H Sherman

Tumor microenvironment metabolism and its effects on antitumor immunity and immunotherapeutic response

Greg M Delgoffe
  • Multiple metabolites and reactive oxygen species are present within the tumor microenvironment; is there heterogeneity within the TME metabolome that can predict immunosensitivity?
  • They took melanoma cells and profiled metabolism with Seahorse (glycolysis): there was vast heterogeneity among melanoma tumor cells; some perform only OXPHOS with no glycolytic metabolism (an inverse Warburg effect)
  • Profiling whole tumors, they could separate out the metabolism of each cell type within the tumor, compare T cells versus stromal CAFs or tumor cells, and characterize cells as indolent or metabolic
  • T cells from less glycolytic tumors were fine, but T cells from highly glycolytic tumors were more indolent
  • When they knocked down the glucose transporter, the cells became more glycolytic
  • Patients whose tumors had high oxidative metabolism had low sensitivity to PD-L1 therapy
  • This result was also shown in head and neck cancer
  • With metformin, a complex I inhibitor that is less toxic than most mitochondrial OXPHOS inhibitors, T cells experience less hypoxia and can remodel the TME and stimulate the immune response
  • Metformin is now in clinical trials
  • T cells nevertheless seem metabolically restricted; T cells that infiltrate tumors have low mitochondrial oxidative phosphorylation
  • T cells from tumors have defective mitochondria and little respiratory capacity
  • They have preliminary findings that metabolic inhibitors may help with CAR-T therapy
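The glycolytic-versus-oxidative classification described above can be sketched from Seahorse-style readouts, where ECAR tracks glycolysis and OCR tracks oxidative phosphorylation. This is a crude illustration with a hypothetical cutoff and made-up numbers, not the speaker's analysis pipeline.

```python
# Sketch: labeling tumor samples as glycolytic vs oxidative from Seahorse-style
# measurements (ECAR ~ extracellular acidification / glycolysis; OCR ~ oxygen
# consumption / OXPHOS). Cutoff and sample values are hypothetical.

def metabolic_phenotype(ecar, ocr, ratio_cutoff=1.0):
    """Crude classification from the ECAR/OCR ratio."""
    return "glycolytic" if ecar / ocr > ratio_cutoff else "oxidative"

samples = {
    "melanoma_A": (80.0, 20.0),  # high acidification, low respiration
    "melanoma_B": (15.0, 60.0),  # low acidification, high respiration
}
for name, (ecar, ocr) in samples.items():
    print(name, metabolic_phenotype(ecar, ocr))
```

Such a per-sample label is the kind of TME metabolic readout the talk suggests could stratify tumors for immunotherapy sensitivity.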

Obesity, lipids and suppression of anti-tumor immunity

Lydia Lynch
  • Hypothesis: obesity impairs antitumor immunity
  • Obese people have fewer NK cells, which also produce less IFN-gamma
  • RNA-Seq in NOD mice: granzymes and perforins were at the top of the list of genes downregulated in obesity
  • The upregulated genes were involved in lipid metabolism
  • All were PPAR target genes
  • NK cells from obese patients take up palmitate, which reduces their glycolysis; OXPHOS is also reduced. They think increased free fatty acids essentially overload the mitochondria
  • PPAR alpha/gamma activation mimics obesity



Tuesday, June 23

12:45 PM – 2:45 PM EDT

Virtual Educational Session

Clinical Research Excluding Trials

The Evolving Role of the Pathologist in Cancer Research

Long recognized for their role in cancer diagnosis and prognostication, pathologists are beginning to leverage a variety of digital imaging technologies and computational tools to improve both clinical practice and cancer research. Remarkably, the emergence of artificial intelligence (AI) and machine learning algorithms for analyzing pathology specimens is poised to not only augment the resolution and accuracy of clinical diagnosis, but also fundamentally transform the role of the pathologist in cancer science and precision oncology. This session will discuss what pathologists are currently able to achieve with these new technologies, present their challenges and barriers, and overview their future possibilities in cancer diagnosis and research. The session will also include discussions of what is practical and doable in the clinic for diagnostic and clinical oncology in comparison to technologies and approaches primarily utilized to accelerate cancer research.


Jorge S Reis-Filho, Thomas J Fuchs, David L Rimm, Jayanta Debnath




High-dimensional imaging technologies in cancer research

David L Rimm

  • Combining old and new methods: with cell counting, you first find the cells and then phenotype them; with quantification approaches such as AQUA, densitometry of the positive signal sets a threshold that determines the presence of a cell for counting
  • Hiplex versus multiplex imaging, where ten channels are measured by cycling fluorophores on antibodies (up to 20-plex is possible)
  • Hiplex can be coupled with mass spectrometry (imaging mass spectrometry, based on heavy-metal tags on mAbs)
  • However, it will still take a trained pathologist to define regions of interest or the desired field of view
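The threshold-based counting step can be sketched in a few lines: a marker intensity is measured per cell, and cells above a positivity threshold are counted. A minimal illustration in the spirit of the AQUA-style densitometry described above; the intensities and threshold are hypothetical stand-ins for real image data.

```python
# Sketch: threshold-based positive-cell counting on a marker-intensity channel.
# Per-cell intensities and the threshold are hypothetical.

def count_positive(intensities, threshold):
    """Count cells whose marker signal exceeds the positivity threshold."""
    return sum(1 for v in intensities if v > threshold)

cell_intensities = [12, 85, 40, 210, 5, 130, 95]  # per-cell marker signal
print(count_positive(cell_intensities, threshold=90))  # 210, 130, 95 -> 3
```

In practice the threshold itself is what the densitometry step calibrates, and a pathologist still defines where in the slide these measurements are taken.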



Jayanta Debnath

Challenges and barriers of implementing AI tools for cancer diagnostics

Jorge S Reis-Filho

Implementing robust digital pathology workflows into clinical practice and cancer research

Jayanta Debnath

Invited Speaker

Thomas J Fuchs
  • Founder of a spinout of Memorial Sloan Kettering
  • Separates AI from computational/algorithmic approaches
  • We are dealing not just with machines but with integrating human intelligence
  • Making decisions for patients must involve human decision-making as well
  • How do we get experts to make these decisions faster?
  • AI in pathology: what is difficult? Scenarios range from sandbox settings where machines are great, to curated datasets, to human decision-support systems or maps, to trying to predict nature
  • Four scenarios: 1) learning rules made by humans (a human-to-human scenario); 2) constrained nature; 3) unconstrained nature, such as images or behavior; 4) predicting nature's response to nature itself
  • In the sandbox scenario the rules are set in stone and machines are great, as in chess
  • In the second scenario, a computer can be trained to predict what a human would predict
  • The third scenario is like driving cars
  • A system working on constrained nature or a constrained dataset will take a long time for a computer to reach a decision
  • The fourth category requires long-term data collection projects
  • He finds it is still difficult to predict nature: going from clinical findings to prognosis still lacks good predictability with AI alone, and human involvement is needed
  • End-to-end partnering (EPL) is a new approach in which humans get more involved with the algorithm and help address the problem of constrained data
  • An example pathology workflow (Campanella et al., 2019, Nature Medicine): obtain digital images (they digitized a million slides), train on a massive dataset with high-throughput computing (which required a lot of time and a big software-development effort), and train using input from the best expert pathologists (nature-to-human, and unconstrained because no data curation was done)
  • This led to the first clinical-grade machine learning system (Camelyon16 was the challenge for detecting metastatic cells in lymph tissue; tested on 12,000 patients from 45 countries)
  • The first big hurdle was moving from manually annotated slides (a major bottleneck) to data automatically extracted from pathology reports
  • The remaining problem is prediction: how can we bridge the gap from predicting humans to predicting nature?
  • With an AI system, pathologists drastically improved their ability to detect very small lesions
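The escape from manual slide annotation in the Campanella et al. workflow rests on a weak-supervision idea, multiple-instance learning: a slide is called positive if its most suspicious tile is positive, so only slide-level labels from pathology reports are needed. The sketch below shows only that slide-level aggregation rule; the tile scores are hypothetical and stand in for the output of a trained tile classifier.

```python
# Sketch of the multiple-instance-learning aggregation behind weakly
# supervised whole-slide classification: the slide-level call comes from the
# maximum tile-level tumor probability. Scores below are hypothetical.

def slide_prediction(tile_scores, cutoff=0.5):
    """Slide-level call from the most suspicious tile."""
    top = max(tile_scores)
    return ("positive" if top >= cutoff else "negative"), top

benign_slide = [0.02, 0.10, 0.07, 0.15]  # tile tumor probabilities
met_slide = [0.03, 0.08, 0.92, 0.11]     # one tile covers a small lesion

print(slide_prediction(benign_slide))  # ('negative', 0.15)
print(slide_prediction(met_slide))     # ('positive', 0.92)
```

Because a single high-scoring tile flips the slide label, this aggregation is well matched to detecting the very small metastatic lesions mentioned above.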


Virtual Educational Session


Cancer Increases in Younger Populations: Where Are They Coming from?

Incidence rates of several cancers (e.g., colorectal, pancreatic, and breast cancers) are rising in younger populations, which contrasts with either declining or more slowly rising incidence in older populations. Early-onset cancers are also more aggressive and have different tumor characteristics than those in older populations. Evidence on risk factors and contributors to early-onset cancers is emerging. In this Educational Session, the trends and burden, potential causes, risk factors, and tumor characteristics of early-onset cancers will be covered. Presenters will focus on colorectal and breast cancer, which are among the most common causes of cancer deaths in younger people. Potential mechanisms of early-onset cancers and racial/ethnic differences will also be discussed.

Stacey A. Fedewa, Xavier Llor, Pepper Jo Schedin, Yin Cao

Cancers that are and are not increasing in younger populations

Stacey A. Fedewa


  • Early-onset cancers, pediatric cancers, and colon cancers are increasing in younger adults
  • Younger people are more likely to be uninsured, and these are their most productive years, so a cancer diagnosis is a devastating life event for a young adult. They face more financial hardship, and most (70%) of young adults with cancer have had financial difficulties.  It is especially hard for women, who are in their childbearing years, adding further stress
  • The types of early-onset cancer vary by age as well as geographic location. For example, in the 20s thyroid cancer is more common, but in the 30s it is breast cancer.  Colorectal and testicular cancers are most common in the US.
  • SCC is decreasing, but adenocarcinoma of the cervix is increasing in women in their 40s, potentially due to changing sexual behaviors
  • Breast cancer is increasing in younger women; it may be etiologically distinct (e.g., triple negative), with larger racial disparities in younger African American women
  • Increased obesity among younger people is becoming a factor in the rising incidence of early-onset cancers



Other Articles on this Open Access Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM

