
Posts Tagged ‘Artificial intelligence’


Systems Biology analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Curator: Stephen J. Williams, Ph.D.

In the June 2020 issue of the journal Science, writer Roxanne Khamsi has an interesting article, “Computing Cancer’s Weak Spots; An algorithm to unmask tumors’ molecular linchpins is tested in patients”[1], describing some early successes in combining cancer genome sequencing with artificial intelligence algorithms to reach personalized clinical treatment decisions for various tumor types.  In 2016, oncologist Amy Tiersten collaborated with systems biologist Andrea Califano and cell biologist Jose Silva at Mount Sinai Hospital on a systems biology approach which determined that the drug ruxolitinib, a JAK1/2 inhibitor that shuts down STAT3 signaling, would be effective against one of her patient’s aggressively recurring, Herceptin-resistant breast tumors.  Dr. Califano, instead of defining networks of driver mutations, focused on identifying a few transcription factors that act as ‘linchpins’, or master controllers, of transcriptional networks within tumor cells, hoping in essence to ‘bottleneck’ the transcriptional machinery driving potential oncogenic products. As Dr. Califano states

“targeting those master regulators and you will stop cancer in its tracks, no matter what mutation initially caused it.”

It is important to note that this approach also relies on the ability to sequence tumors by RNA-Seq to determine the underlying mutations that dictate which master regulators are pertinent in any one tumor.  And given the wide heterogeneity within tumor samples, this sequencing effort may have to involve multiple biopsies (as discussed in earlier posts on tumor heterogeneity in renal cancer).

As stated in the article, Califano co-founded a company called DarwinHealth in 2015 to guide doctors by identifying the key transcription factors in a patient’s tumor and suggesting personalized therapeutics against those molecular targets (OncoTarget™).  He has collaborated with the Jackson Laboratory and, most recently, Columbia University to conduct a $15 million, 3,000-patient clinical trial.  This was a bit of a stretch from his initial training as a physicist; in 1986, IBM hired him for artificial intelligence projects, and in 2003 he landed at Columbia, where he has been working on identifying the transcriptional nodes that govern cancer survival and tumorigenicity.  Dr. Califano had figured that the number of genetic mutations that could potentially be drivers was too vast:

A 2018 study that analyzed nearly 9,000 tumor samples reported over 1.5 million mutations[2]

and so impossible to develop therapeutics against individually.  He reasoned that one would instead have to identify the common connections between these pathways, the transcriptional nodes, and termed them master regulators.

A Pan-Cancer Analysis of Enhancer Expression in Nearly 9000 Patient Samples

Chen H, Li C, Peng X, et al. Cell. 2018;173(2):386-399.e12.

Abstract

The role of enhancers, a key class of non-coding regulatory DNA elements, in cancer development has increasingly been appreciated. Here, we present the detection and characterization of a large number of expressed enhancers in a genome-wide analysis of 8928 tumor samples across 33 cancer types using TCGA RNA-seq data. Compared with matched normal tissues, global enhancer activation was observed in most cancers. Across cancer types, global enhancer activity was positively associated with aneuploidy, but not mutation load, suggesting a hypothesis centered on “chromatin-state” to explain their interplay. Integrating eQTL, mRNA co-expression, and Hi-C data analysis, we developed a computational method to infer causal enhancer-gene interactions, revealing enhancers of clinically actionable genes. Having identified an enhancer ∼140 kb downstream of PD-L1, a major immunotherapy target, we validated it experimentally. This study provides a systematic view of enhancer activity in diverse tumor contexts and suggests the clinical implications of enhancers.

 

A diagram of how concentrating on these transcriptional linchpins or nodes may be more therapeutically advantageous, as only one pharmacologic agent is needed rather than multiple agents to inhibit the various upstream pathways:

 

 

From: Khamsi R: Computing cancer’s weak spots. Science 2020, 368(6496):1174-1177.

 

VIPER Algorithm (Virtual Inference of Protein activity by Enriched Regulon Analysis)

The algorithm that Califano and DarwinHealth developed is a systems biology approach that uses a tumor’s RNA-Seq data to determine the controlling nodes of transcription.  They have recently used the VIPER algorithm on RNA-Seq data from more than 10,000 tumor samples from TCGA and identified 407 transcription factor genes that act as these linchpins across all tumor types.  Only 20 to 25 of them were implicated in just one tumor type, so these potential nodes are common to many forms of cancer.
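The core idea, scoring a transcription factor by the enrichment of its regulon in a tumor’s differential expression signature, can be illustrated with a toy calculation. The sketch below is conceptual Python only; the published VIPER implementation is an R/Bioconductor package, and the regulon, gene names, and expression values here are invented for illustration.

```python
# Conceptual sketch of regulon-based protein-activity inference (VIPER-like).
# NOT the published VIPER implementation; the regulon and expression values
# below are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = [f"gene{i}" for i in range(200)]

# Hypothetical regulon for one transcription factor: targets it activates or represses.
regulon = {"activated": genes[:15], "repressed": genes[15:25]}

# Toy tumor-vs-normal expression (log2 scale); activated targets are shifted up in tumor.
normal = rng.normal(5.0, 1.0, size=(len(genes), 10))
tumor = rng.normal(5.0, 1.0, size=(len(genes), 10))
tumor[:15] += 1.5    # activated targets go up in the tumor
tumor[15:25] -= 1.5  # repressed targets go down

# Gene expression signature: per-gene t-statistic of tumor vs. normal.
signature = stats.ttest_ind(tumor, normal, axis=1).statistic

# Flip the sign of repressed targets so "consistent with TF activity" is always positive.
idx = {g: i for i, g in enumerate(genes)}
target_scores = np.array(
    [signature[idx[g]] for g in regulon["activated"]]
    + [-signature[idx[g]] for g in regulon["repressed"]]
)
background = np.array([signature[idx[g]] for g in genes
                       if g not in regulon["activated"] + regulon["repressed"]])

# Enrichment of the regulon in the signature serves as the inferred TF activity.
u_stat, p_value = stats.mannwhitneyu(target_scores, background, alternative="greater")
print(f"Inferred activity enrichment: U={u_stat:.0f}, p={p_value:.2e}")
```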

Other institutions, like Cold Spring Harbor Laboratory, have been using VIPER in their patient tumor analyses, and linchpins for other tumor types have been found.  For instance, VIPER identified the transcription factors IKZF1 and IKZF3 as linchpins in multiple myeloma.  But currently approved therapeutics are hard to come by for targets that are transcription factors, as most pharma has concentrated on inhibiting easier targets such as kinases and their associated activity.  In general, developing transcription factor inhibitors is a more difficult undertaking, for multiple reasons.

Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A. Nature Genetics 2016, 48(8):838-847 [3]

Abstract

Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible. To address this problem we introduce and experimentally validate a new algorithm, VIPER (Virtual Inference of Protein-activity by Enriched Regulon analysis), for the accurate assessment of protein activity from gene expression data. We use VIPER to evaluate the functional relevance of genetic alterations in regulatory proteins across all TCGA samples. In addition to accurately inferring aberrant protein activity induced by established mutations, we also identify a significant fraction of tumors with aberrant activity of druggable oncoproteins—despite a lack of mutations, and vice-versa. In vitro assays confirmed that VIPER-inferred protein activity outperforms mutational analysis in predicting sensitivity to targeted inhibitors.

 

 

 

 

Figure 1 

Schematic overview of the VIPER algorithm. From: Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A: Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nature Genetics 2016, 48(8):838-847.

(a) Molecular layers profiled by different technologies. Transcriptomics measures steady-state mRNA levels; Proteomics quantifies protein levels, including some defined post-translational isoforms; VIPER infers protein activity based on the protein’s regulon, reflecting the abundance of the active protein isoform, including post-translational modifications, proper subcellular localization and interaction with co-factors. (b) Representation of VIPER workflow. A regulatory model is generated from ARACNe-inferred context-specific interactome and Mode of Regulation computed from the correlation between regulator and target genes. Single-sample gene expression signatures are computed from genome-wide expression data, and transformed into regulatory protein activity profiles by the aREA algorithm. (c) Three possible scenarios for the aREA analysis, including increased, decreased or no change in protein activity. The gene expression signature and its absolute value (|GES|) are indicated by color scale bars, induced and repressed target genes according to the regulatory model are indicated by blue and red vertical lines. (d) Pleiotropy Correction is performed by evaluating whether the enrichment of a given regulon (R4) is driven by genes co-regulated by a second regulator (R4∩R1). (e) Benchmark results for VIPER analysis based on multiple-samples gene expression signatures (msVIPER) and single-sample gene expression signatures (VIPER). Boxplots show the accuracy (relative rank for the silenced protein), and the specificity (fraction of proteins inferred as differentially active at p < 0.05) for the 6 benchmark experiments (see Table 2). Different colors indicate different implementations of the aREA algorithm, including 2-tail (2T) and 3-tail (3T), Interaction Confidence (IC) and Pleiotropy Correction (PC).

Other articles from Andrea Califano on the VIPER algorithm in cancer include:

Resistance to neoadjuvant chemotherapy in triple-negative breast cancer mediated by a reversible drug-tolerant state.

Echeverria GV, Ge Z, Seth S, Zhang X, Jeter-Jones S, Zhou X, Cai S, Tu Y, McCoy A, Peoples M, Sun Y, Qiu H, Chang Q, Bristow C, Carugo A, Shao J, Ma X, Harris A, Mundi P, Lau R, Ramamoorthy V, Wu Y, Alvarez MJ, Califano A, Moulder SL, Symmans WF, Marszalek JR, Heffernan TP, Chang JT, Piwnica-Worms H. Sci Transl Med. 2019 Apr 17;11(488):eaav0936. doi: 10.1126/scitranslmed.aav0936. PMID: 30996079

An Integrated Systems Biology Approach Identifies TRIM25 as a Key Determinant of Breast Cancer Metastasis.

Walsh LA, Alvarez MJ, Sabio EY, Reyngold M, Makarov V, Mukherjee S, Lee KW, Desrichard A, Turcan Ş, Dalin MG, Rajasekhar VK, Chen S, Vahdat LT, Califano A, Chan TA. Cell Rep. 2017 Aug 15;20(7):1623-1640. doi: 10.1016/j.celrep.2017.07.052. PMID: 28813674

Inhibition of the autocrine IL-6-JAK2-STAT3-calprotectin axis as targeted therapy for HR-/HER2+ breast cancers.

Rodriguez-Barrueco R, Yu J, Saucedo-Cuevas LP, Olivan M, Llobet-Navas D, Putcha P, Castro V, Murga-Penas EM, Collazo-Lorduy A, Castillo-Martin M, Alvarez M, Cordon-Cardo C, Kalinsky K, Maurer M, Califano A, Silva JM. Genes Dev. 2015 Aug 1;29(15):1631-48. doi: 10.1101/gad.262642.115. Epub 2015 Jul 30. PMID: 26227964

Master regulators used as breast cancer metastasis classifier.

Lim WK, Lyashenko E, Califano A. Pac Symp Biocomput. 2009:504-15. PMID: 19209726

 

Additional References

 

  1. Khamsi R: Computing cancer’s weak spots. Science 2020, 368(6496):1174-1177.
  2. Chen H, Li C, Peng X, Zhou Z, Weinstein JN, Liang H: A Pan-Cancer Analysis of Enhancer Expression in Nearly 9000 Patient Samples. Cell 2018, 173(2):386-399.e12.
  3. Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A: Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nature Genetics 2016, 48(8):838-847.

 

Other articles of Note on this Open Access Online Journal Include:

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing

 



Live Notes, Real Time Conference Coverage AACR 2020: Tuesday June 23, 2020 3:00 PM-5:30 PM Educational Sessions

Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using

#AACR20

@pharma_BI

@AACR

Register for FREE at https://www.aacr.org/

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Tumor Biology, Bioinformatics and Systems Biology

The Clinical Proteomic Tumor Analysis Consortium: Resources and Data Dissemination

This session will provide information regarding methodologic and computational aspects of proteogenomic analysis of tumor samples, particularly in the context of clinical trials. Availability of comprehensive proteomic and matching genomic data for tumor samples characterized by the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) and The Cancer Genome Atlas (TCGA) program will be described, including data access procedures and informatic tools under development. Recent advances on mass spectrometry-based targeted assays for inclusion in clinical trials will also be discussed.

Amanda G Paulovich, Shankha Satpathy, Meenakshi Anurag, Bing Zhang, Steven A Carr

Methods and tools for comprehensive proteogenomic characterization of bulk tumor to needle core biopsies

Shankha Satpathy
  • TCGA has 11,000 cancers with >20,000 somatic alterations, but only 128 proteins measured, as proteomics was still a young field
  • CPTAC is the NCI’s proteomic effort
  • Chemical labeling approaches are now the method of choice for quantitative proteomics
  • Looked at ovarian and breast cancers: to measure PTMs such as phosphorylation, the sample preparation is critical

 

Data access and informatics tools for proteogenomics analysis

Bing Zhang
  • Raw and processed MS data with linked clinical data can be extracted from CPTAC
  • Python scripts are available for bioinformatic programming (a minimal example of this kind of scripting is sketched below)
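As a minimal illustration of the kind of Python scripting mentioned above, the sketch below merges a proteomics matrix with a linked clinical table using pandas; the file names and column names are hypothetical placeholders, not the actual CPTAC export format.

```python
# Illustrative only: merging a proteomics matrix with linked clinical data,
# as one might do with tables exported from the CPTAC data portal.
# File names and column names here are hypothetical placeholders.
import pandas as pd

# Rows = samples, columns = protein abundances (hypothetical export).
proteomics = pd.read_csv("cptac_proteomics.tsv", sep="\t", index_col="sample_id")

# Linked clinical annotations for the same samples (hypothetical export).
clinical = pd.read_csv("cptac_clinical.tsv", sep="\t", index_col="sample_id")

# Join on sample ID so each row carries both protein levels and clinical variables.
merged = proteomics.join(clinical, how="inner")
print(merged.shape)

# Example downstream step: compare one protein's abundance across tumor stages.
print(merged.groupby("tumor_stage")["TP53"].describe())
```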

 

Pathways to clinical translation of mass spectrometry-based assays

Meenakshi Anurag

  • Using a kinase inhibitor pulldown (KIP) assay to identify unique kinome profiles
  • Found single-strand break repair defects in endometrial luminal cases, especially in immune checkpoint prognostic tumors
  • Paper: a JNCI 2019 study analyzed 20,000 genes correlated with endocrine therapy (ET) resistance in luminal B cases (selected down to a list of 30 genes)
  • Validated in the METABRIC dataset
  • The KIP assay uses magnetic beads to pull out kinases and determine which are druggable
  • Looked in xenografts and was able to pull out differential kinomes
  • Matched with PDX data, so good clinical correlation
  • Were able to detect an ESR1 fusion correlated with ER+ tumors

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Survivorship

Artificial Intelligence and Machine Learning from Research to the Cancer Clinic

The adoption of omic technologies in the cancer clinic is giving rise to an increasing number of large-scale high-dimensional datasets recording multiple aspects of the disease. This creates the need for frameworks for translatable discovery and learning from such data. Like artificial intelligence (AI) and machine learning (ML) for the cancer lab, methods for the clinic need to (i) compare and integrate different data types; (ii) scale with data sizes; (iii) prove interpretable in terms of the known biology and batch effects underlying the data; and (iv) predict previously unknown experimentally verifiable mechanisms. Methods for the clinic, beyond the lab, also need to (v) produce accurate actionable recommendations; (vi) prove relevant to patient populations based upon small cohorts; and (vii) be validated in clinical trials. In this educational session we will present recent studies that demonstrate AI and ML translated to the cancer clinic, from prognosis and diagnosis to therapy.
NOTE: Dr. Fish’s talk is not eligible for CME credit to permit the free flow of information of the commercial interest employee participating.

Ron C. Anafi, Rick L. Stevens, Orly Alter, Guy Fish

Overview of AI approaches in cancer research and patient care

Rick L. Stevens
  • Deep learning is less likely to saturate as data increases
  • Deep learning attempts to learn multiple layers of information
  • The ultimate goal is prediction, but this will be the greatest challenge for ML
  • ML models can integrate data validation and cross-database validation
  • What limits the performance of cross-validation is the internal noise of the data (reproducibility)
  • Learning curves: it is not simply more data but more reproducible data that is important
  • Neural networks can outperform classical methods
  • It is important to measure validation accuracy against the training set; class weighting can assist in developing training sets, especially for unbalanced data sets (a small weighting sketch follows this list)
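As a concrete illustration of the class-weighting point, here is a small scikit-learn sketch on a synthetic, unbalanced dataset; the data and model choice are arbitrary and only meant to show the mechanics.

```python
# Illustration of class weighting for an unbalanced data set (synthetic data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# 95%/5% class imbalance, mimicking a rare-outcome cohort.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for weighting in (None, "balanced"):
    model = LogisticRegression(max_iter=1000, class_weight=weighting)
    model.fit(X_train, y_train)
    score = balanced_accuracy_score(y_test, model.predict(X_test))
    print(f"class_weight={weighting}: balanced accuracy = {score:.3f}")
```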

Discovering genome-scale predictors of survival and response to treatment with multi-tensor decompositions

Orly Alter
  • Finding patterns using SVD component analysis; gene and SVD patterns match 1:1 (a minimal SVD sketch follows this list)
  • Comparative spectral decompositions can be used for global datasets
  • Validation of CNV data using this strategy
  • Found Ras, Shh, and Notch pathways with altered CNV in glioblastoma, which correlated with prognosis
  • These predictors were significantly better than independent prognostic indicators such as age at diagnosis
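A toy numpy sketch of the SVD pattern-finding idea follows; the expression matrix is synthetic, and real comparative spectral or tensor decompositions are considerably more involved.

```python
# Toy illustration of SVD-based pattern discovery in an expression matrix.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_patients = 500, 40

# Synthetic expression matrix with one planted pattern shared by half the patients.
pattern_genes = rng.normal(size=n_genes)
pattern_patients = np.r_[np.ones(20), np.zeros(20)]
expression = np.outer(pattern_genes, pattern_patients) + rng.normal(
    scale=0.5, size=(n_genes, n_patients))

# SVD: each left singular vector is a gene pattern, each right one a patient pattern.
U, S, Vt = np.linalg.svd(expression, full_matrices=False)
print("Variance captured by the top component:", (S[0] ** 2 / np.sum(S ** 2)).round(3))
print("Top patient pattern (first 5 values):", Vt[0, :5].round(2))
```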

 

Identifying targets for cancer chronotherapy with unsupervised machine learning

Ron C. Anafi
  • Many clinicians have noticed that some patients do better when chemo is given at certain times of the day and felt there may be a circadian rhythm or chronotherapeutic effect with respect to side effects or with outcomes
  • ML used to determine if there is indeed this chronotherapy effect or can we use unstructured data to determine molecular rhythms?
  • Found a circadian transcription in human lung
  • Most dataset in cancer from one clinical trial so there might need to be more trials conducted to take into consideration circadian rhythms

Stratifying patients by live-cell biomarkers with random-forest decision trees


Guy Fish CEO Cellanyx Diagnostics

 

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Tumor Biology, Molecular and Cellular Biology/Genetics, Bioinformatics and Systems Biology, Prevention Research

The Wound Healing that Never Heals: The Tumor Microenvironment (TME) in Cancer Progression

This educational session focuses on the chronic wound healing, fibrosis, and cancer “triad.” It emphasizes the similarities and differences seen in these conditions and attempts to clarify why sustained fibrosis commonly supports tumorigenesis. Importance will be placed on cancer-associated fibroblasts (CAFs), vascularity, extracellular matrix (ECM), and chronic conditions like aging. Dr. Dvorak will provide an historical insight into the triad field focusing on the importance of vascular permeability. Dr. Stewart will explain how chronic inflammatory conditions, such as the aging tumor microenvironment (TME), drive cancer progression. The session will close with a review by Dr. Cukierman of the roles that CAFs and self-produced ECMs play in enabling the signaling reciprocity observed between fibrosis and cancer in solid epithelial cancers, such as pancreatic ductal adenocarcinoma.

Harold F Dvorak, Sheila A Stewart, Edna Cukierman

 

The importance of vascular permeability in tumor stroma generation and wound healing

Harold F Dvorak

Aging in the driver’s seat: Tumor progression and beyond

Sheila A Stewart

Why won’t CAFs stay normal?

Edna Cukierman

 


Other Articles on this Open Access  Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM




Live Notes, Real Time Conference Coverage AACR 2020: Tuesday June 23, 2020 Noon-2:45 Educational Sessions

Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using

#AACR20

@pharma_BI

@AACR

Register for FREE at https://www.aacr.org/

 

Presidential Address

Elaine R Mardis, William N Hait

DETAILS

Welcome and introduction

William N Hait

 

Improving diagnostic yield in pediatric cancer precision medicine

Elaine R Mardis
  • The advent of genomics has revolutionized how we diagnose and treat lung cancer
  • We currently need to understand the driver mutations and variants for which we can personalize therapy
  • PD-L1 and other checkpoint therapies have not really been used in pediatric cancers, even though CAR-T has been successful
  • The incidence rates and mortality rates of pediatric cancers are rising
  • A large-scale study of over 700 pediatric cancers showed cancers driven by epigenetic drivers or fusion proteins, hence the need for transcriptomics.  The study also demonstrated that we have underestimated germline mutations and hereditary factors.
  • They put together a database to nominate patients onto their IGM Cancer protocol. It involves genetic counseling and obtaining germline samples to determine hereditary factors.  RNA and protein are evaluated as well as exome sequencing; RNA-Seq and the Archer Dx test identify driver fusions
  • The PECAN curated database from St. Jude is used to determine driver mutations. They use multiple databases and the overlap between these databases and their knowledge base to weed out false positives (a minimal overlap sketch follows this list)
  • They have used these studies to understand the immune infiltrate into recurrent cancers (CytoCure)
  • They found 40 germline cancer predisposition genes, 47 driver somatic fusion proteins, 81 potential actionable targets, 106 CNVs, and 196 meaningful somatic driver mutations
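The database-overlap step can be pictured as a simple set intersection across curated sources; the variant lists and source names in the sketch below are invented placeholders, not the actual PECAN or knowledge-base contents.

```python
# Toy illustration of corroborating candidate driver mutations across databases
# to weed out likely false positives. Variant and database names are invented.
candidates = {"TP53 R175H", "ALK F1174L", "NRAS Q61K", "ZZZ3 A10T"}

databases = {
    "pecan_like": {"TP53 R175H", "ALK F1174L", "NRAS Q61K"},
    "cosmic_like": {"TP53 R175H", "NRAS Q61K", "BRAF V600E"},
    "internal_knowledge_base": {"TP53 R175H", "ALK F1174L", "NRAS Q61K"},
}

# Keep a candidate only if it is supported by at least two independent sources.
support = {variant: sum(variant in db for db in databases.values())
           for variant in candidates}
corroborated = {v for v, n in support.items() if n >= 2}
print("Support counts:", support)
print("Corroborated drivers:", corroborated)
```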

 

 

Tuesday, June 23

12:00 PM – 12:30 PM EDT

Awards and Lectures

NCI Director’s Address

Norman E Sharpless, Elaine R Mardis

DETAILS

Introduction: Elaine Mardis

 

NCI Director Address: Norman E Sharpless
  • NCI is functioning well with respect to grant reviews, research, and general operations in spite of the COVID pandemic; the mass demonstrations have also focused attention on the disparities that occur in the cancer research field and in cancer care
  • There are ongoing efforts at NCI to make a positive difference in racial injustice, diversity in the cancer workforce, and for patients as well
  • Need a diverse workforce across the cancer research and care spectrum
  • Data show that areas where clinicians are successful in putting African Americans on clinical trials are areas (geographic and site-specific) where health disparities are narrowing
  • Grants through NCI’s new SeroNet for COVID-19 serologic testing are funded by two RFAs through NIAID (RFA-CA-30-038 and RFA-CA-20-039) that will close on July 22, 2020

 

Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Immunology, Tumor Biology, Experimental and Molecular Therapeutics, Molecular and Cellular Biology/Genetics

Tumor Immunology and Immunotherapy for Nonimmunologists: Innovation and Discovery in Immune-Oncology

This educational session will update cancer researchers and clinicians about the latest developments in the detailed understanding of the types and roles of immune cells in tumors. It will summarize current knowledge about the types of T cells, natural killer cells, B cells, and myeloid cells in tumors and discuss current knowledge about the roles these cells play in the antitumor immune response. The session will feature some of the most promising up-and-coming cancer immunologists who will inform about their latest strategies to harness the immune system to promote more effective therapies.

Judith A Varner, Yuliya Pylayeva-Gupta

 

Introduction

Judith A Varner
New techniques reveal critical roles of myeloid cells in tumor development and progression
  • Different type of cells are becoming targets for immune checkpoint like myeloid cells
  • In T cell excluded or desert tumors T cells are held at periphery so myeloid cells can infiltrate though so macrophages might be effective in these immune t cell naïve tumors, macrophages are most abundant types of immune cells in tumors
  • CXCLs are potential targets
  • PI3K delta inhibitors,
  • Reduce the infiltrate of myeloid tumor suppressor cells like macrophages
  • When should we give myeloid or T cell therapy is the issue
Judith A Varner
Novel strategies to harness T-cell biology for cancer therapy
Positive and negative roles of B cells in cancer
Yuliya Pylayeva-Gupta
New approaches in cancer immunotherapy: Programming bacteria to induce systemic antitumor immunity

 

 

Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Cancer Chemistry

Chemistry to the Clinic: Part 2: Irreversible Inhibitors as Potential Anticancer Agents

There are numerous examples of highly successful covalent drugs, such as aspirin and penicillin, that have been in use for a long period of time. Despite this historical success, there was a period of reluctance among many to pursue covalent drugs based on concerns about toxicity. With advances in understanding the features of a well-designed covalent drug, new techniques to discover and characterize covalent inhibitors, and the clinical success of new covalent cancer drugs in recent years, there is renewed interest in covalent compounds. This session will provide a broad look at covalent probe compounds and drug development, including a historical perspective, examination of warheads and electrophilic amino acids, the role of chemoproteomics, and case studies.

Benjamin F Cravatt, Richard A. Ward, Sara J Buhrlage

 

Discovering and optimizing covalent small-molecule ligands by chemical proteomics

Benjamin F Cravatt
  • Multiple approaches are being investigated to find new covalent inhibitors, such as: 1) cysteine reactivity mapping, 2) mapping cysteine ligandability, and 3) functional screening in phenotypic assays for electrophilic compounds
  • Using fluorescent activity probes in proteomic screens; these have broad usability across the proteome but can be specific
  • They screened quiescent versus stimulated T cells to determine reactive cysteines in a phenotypic screen and analyzed them by MS proteomics (cysteine reactivity profiling); they can quantitate 15,000 to 20,000 reactive cysteines
  • Isocitrate dehydrogenase 1 and the adapter protein LCP-1 are two examples of changes in reactive cysteines they have seen using this method
  • They use scout molecules to target ligands or proteins with reactive cysteines
  • For phenotypic screens they first use a cytotoxicity assay to screen out toxic compounds that just kill cells without causing T cell activation (such as IL-10 secretion)
  • Interestingly, coupling these MS reactive cysteine screens with phenotypic screens can reveal noncanonical mechanisms for many of these target proteins (many of the compounds hit targets that were not predicted or known)

Electrophilic warheads and nucleophilic amino acids: A chemical and computational perspective on covalent modifier

The covalent targeting of cysteine residues in drug discovery and its application to the discovery of Osimertinib

Richard A. Ward
  • Cysteine activation: the thiolate form of cysteine is a strong nucleophile
  • The thiolate form is preferred in a polar environment
  • Activation can be assisted by neighboring residues; pKa will have an effect on deprotonation
  • The pKa values of cysteines vary across EGFR
  • Cysteines that are too reactive give toxicity, while those not reactive enough are ineffective

 

Accelerating drug discovery with lysine-targeted covalent probes

 

Tuesday, June 23

12:45 PM – 2:15 PM EDT

Virtual Educational Session

Molecular and Cellular Biology/Genetics

Virtual Educational Session

Tumor Biology, Immunology

Metabolism and Tumor Microenvironment

This Educational Session aims to guide discussion on the heterogeneous cells and metabolism in the tumor microenvironment. It is now clear that the diversity of cells in tumors each require distinct metabolic programs to survive and proliferate. Tumors, however, are genetically programmed for high rates of metabolism and can present a metabolically hostile environment in which nutrient competition and hypoxia can limit antitumor immunity.

Jeffrey C Rathmell, Lydia Lynch, Mara H Sherman, Greg M Delgoffe

 

T-cell metabolism and metabolic reprogramming antitumor immunity

Jeffrey C Rathmell

Introduction

Jeffrey C Rathmell

Metabolic functions of cancer-associated fibroblasts

Mara H Sherman

Tumor microenvironment metabolism and its effects on antitumor immunity and immunotherapeutic response

Greg M Delgoffe
  • Multiple metabolites and reactive oxygen species are present within the tumor microenvironment; is there heterogeneity within the TME metabolome that can predict a tumor’s ability to be immunosensitive?
  • Took melanoma cells and looked at metabolism using Seahorse (glycolysis): there was vast heterogeneity in melanoma tumor cells; some just do oxphos and no glycolytic metabolism (inverse Warburg)
  • As they profiled whole tumors they could separate out the metabolism of each cell type within the tumor, look at T cells versus stromal CAFs or tumor cells, and characterize cells as indolent or metabolic
  • T cells from less glycolytic tumors were fine, but T cells from highly glycolytic tumors were more indolent
  • When the glucose transporter is knocked down, the cells become more glycolytic
  • If a patient had high oxidative metabolism, they had low PD-L1 sensitivity
  • Showed this result in head and neck cancer as well
  • With metformin, a complex I inhibitor that is not as toxic as most mitochondrial oxphos inhibitors, the T cells experience less hypoxia and can remodel the TME and stimulate the immune response
  • Metformin is now in clinical trials
  • T cells, though, seem metabolically restricted; T cells that infiltrate tumors are low mitochondrial-oxphos cells
  • T cells from tumors have defective mitochondria or little respiratory capacity
  • They have some preliminary findings that metabolic inhibitors may help with CAR-T therapy

Obesity, lipids and suppression of anti-tumor immunity

Lydia Lynch
  • Hypothesis: obesity causes issues with anti tumor immunity
  • Less NK cells in obese people; also produce less IFN gamma
  • RNASeq on NOD mice; granzymes and perforins at top of list of obese downregulated
  • Upregulated genes that were upregulated involved in lipid metabolism
  • All were PPAR target genes
  • NK cells from obese patients takes up palmitate and this reduces their glycolysis but OXPHOS also reduced; they think increased FFA basically overloads mitochondria
  • PPAR alpha gamma activation mimics obesity

 

 

Tuesday, June 23

12:45 PM – 2:45 PM EDT

Virtual Educational Session

Clinical Research Excluding Trials

The Evolving Role of the Pathologist in Cancer Research

Long recognized for their role in cancer diagnosis and prognostication, pathologists are beginning to leverage a variety of digital imaging technologies and computational tools to improve both clinical practice and cancer research. Remarkably, the emergence of artificial intelligence (AI) and machine learning algorithms for analyzing pathology specimens is poised to not only augment the resolution and accuracy of clinical diagnosis, but also fundamentally transform the role of the pathologist in cancer science and precision oncology. This session will discuss what pathologists are currently able to achieve with these new technologies, present their challenges and barriers, and overview their future possibilities in cancer diagnosis and research. The session will also include discussions of what is practical and doable in the clinic for diagnostic and clinical oncology in comparison to technologies and approaches primarily utilized to accelerate cancer research.

 

Jorge S Reis-Filho, Thomas J Fuchs, David L Rimm, Jayanta Debnath

DETAILS

Tuesday, June 23

12:45 PM – 2:45 PM EDT

 

High-dimensional imaging technologies in cancer research

David L Rimm

  • Using both old and new methods; for cell counting you first find the cells and then phenotype them; for quantification, as with AQUA, densitometry of the positive signal is used to set a threshold that determines the presence of a cell for counting (a minimal thresholding sketch follows this list)
  • High-plex versus multiplex imaging, where you have ten channels to measure by cycling the fluor on the antibody (can get up to 20-plex)
  • High-plex imaging can be coupled with mass spectrometry (imaging mass spectrometry, based on heavy-metal tags on mAbs)
  • However, it will still take a trained pathologist to define regions of interest or the desired field of view
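A minimal sketch of densitometry-style thresholding is shown below using scikit-image on a synthetic image; it illustrates the general idea of a data-driven intensity threshold for counting positive cells and is not the AQUA implementation.

```python
# Illustrative sketch of threshold-based counting of "positive" cells,
# in the spirit of the approach described above. Uses a synthetic image
# so it runs without any slide data; not the AQUA implementation.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label

rng = np.random.default_rng(2)
image = rng.normal(0.1, 0.02, size=(256, 256))       # dim background signal
for r, c in rng.integers(20, 236, size=(30, 2)):      # 30 bright "cells"
    image[r - 4:r + 4, c - 4:c + 4] += 0.5
image = gaussian(image, sigma=1)

# Data-driven intensity threshold on the positive signal.
threshold = threshold_otsu(image)
positive_mask = image > threshold

# Connected components above threshold are counted as positive cells.
n_cells = label(positive_mask).max()
print(f"Otsu threshold = {threshold:.3f}, positive cells counted = {n_cells}")
```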

 

Introduction

Jayanta Debnath

Challenges and barriers of implementing AI tools for cancer diagnostics

Jorge S Reis-Filho

Implementing robust digital pathology workflows into clinical practice and cancer research

Jayanta Debnath

Invited Speaker

Thomas J Fuchs
  • Founder of a spinout of Memorial Sloan Kettering
  • Separates AI from computational/algorithmic approaches
  • Dealing not just with machines but with integrating human intelligence
  • Making decisions for patients must involve human decision making as well
  • How do we get experts to make these decisions faster?
  • AI in pathology: what is difficult? → sandbox scenarios where machines are great; curated datasets; human decision support systems or maps; or trying to predict nature
  • Four scenarios: 1) learn rules made by humans (the human-to-human scenario); 2) constrained nature; 3) unconstrained nature, like images or behavior; 4) predict nature’s response to nature
  • In the sandbox scenario the rules are set in stone and machines are great, as in chess playing
  • In the second scenario you can train a computer to predict what a human would predict
  • The third scenario is like driving cars
  • A system working on constrained nature or a constrained dataset will take a long time for the computer to reach a decision
  • The fourth category is a long-term data collection project
  • He is finding it is still difficult to predict nature, so going from a clinical finding to prognosis still does not have good predictability with AI alone; there is a need for human involvement
  • End-to-end partnering (EPL) is a new way for humans to get more involved with the algorithm and assist with the problem of constrained data
  • An example workflow for pathology, from Campanella et al. 2019 Nature Medicine, is as follows: obtain digital images (they digitized a million slides), train on a massive dataset with high-throughput computing (needing a lot of time and a big software development effort), and then train it using input from the best expert pathologists (nature-to-human, and unconstrained because no data curation was done); a minimal weak-supervision sketch follows this list
  • This led to the first clinical-grade machine learning system (Camelyon16 was the challenge for detecting metastatic cells in lymph tissue; tested on 12,000 patients from 45 countries)
  • The first big hurdle was moving from manually annotated slides (which was a big bottleneck) to data automatically extracted from path reports
  • Now the problem is in prediction: how can we bridge the gap from predicting humans to predicting nature?
  • With an AI system, pathologists drastically improved their ability to detect very small lesions
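A minimal sketch of the weak-supervision idea behind that workflow follows: tiles inherit their slide’s label for training, and the slide score is the maximum tile probability. This is a generic baseline on synthetic features, evaluated on its own training slides for brevity, and is not the Campanella et al. system.

```python
# Minimal weak-supervision baseline for slide-level labels (illustrative only).
# Tiles inherit their slide's label for training; at inference the slide
# score is the maximum tile probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_slides, tiles_per_slide, n_features = 60, 50, 16
slide_labels = rng.integers(0, 2, size=n_slides)

tiles, tile_labels, slide_of_tile = [], [], []
for s, y in enumerate(slide_labels):
    feats = rng.normal(size=(tiles_per_slide, n_features))
    if y == 1:                                  # positive slides contain a few "tumor" tiles
        feats[:5, 0] += 3.0
    tiles.append(feats)
    tile_labels.extend([y] * tiles_per_slide)   # weak labels: every tile inherits the slide label
    slide_of_tile.extend([s] * tiles_per_slide)

X = np.vstack(tiles)
slide_of_tile = np.array(slide_of_tile)
clf = LogisticRegression(max_iter=1000).fit(X, tile_labels)

# Slide-level score = the maximum tile probability within each slide.
tile_probs = clf.predict_proba(X)[:, 1]
slide_scores = np.array([tile_probs[slide_of_tile == s].max() for s in range(n_slides)])
accuracy = ((slide_scores > 0.5).astype(int) == slide_labels).mean()
print(f"Slide-level accuracy with weak tile labels: {accuracy:.2f}")
```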

 

Virtual Educational Session

Epidemiology

Cancer Increases in Younger Populations: Where Are They Coming from?

Incidence rates of several cancers (e.g., colorectal, pancreatic, and breast cancers) are rising in younger populations, which contrasts with either declining or more slowly rising incidence in older populations. Early-onset cancers are also more aggressive and have different tumor characteristics than those in older populations. Evidence on risk factors and contributors to early-onset cancers is emerging. In this Educational Session, the trends and burden, potential causes, risk factors, and tumor characteristics of early-onset cancers will be covered. Presenters will focus on colorectal and breast cancer, which are among the most common causes of cancer deaths in younger people. Potential mechanisms of early-onset cancers and racial/ethnic differences will also be discussed.

Stacey A. Fedewa, Xavier Llor, Pepper Jo Schedin, Yin Cao

Cancers that are and are not increasing in younger populations

Stacey A. Fedewa

 

  • Early-onset cancers, pediatric cancers, and colon cancers are increasing in younger adults
  • Younger people are more likely to be uninsured, and these are their most productive years, so a cancer diagnosis is a terrible life event for a young adult. They will have more financial hardship, and most (70%) of young adults with cancer have had financial difficulties.  It is very hard for women, as these are their childbearing years, which adds stress
  • The types of early-onset cancer vary by age as well as by geographic location. For example, in the 20s thyroid cancer is more common, but in the 30s it is breast cancer.  Colorectal and testicular cancers are most common in the US.
  • Cervical SCC is decreasing but adenocarcinoma of the cervix is increasing in women in their 40s, potentially due to changing sexual behaviors
  • Breast cancer is increasing in younger women: it may be etiologically distinct, like triple-negative disease, with larger racial disparities in younger African American women
  • Increased obesity among younger people is becoming a factor in the rising incidence of early-onset cancers

 

 

Other Articles on this Open Access  Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM

 



Powerful AI Tools Being Developed for the COVID-19 Fight

Curator: Stephen J. Williams, Ph.D.

 

Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

IBM Releases Novel AI-Powered Technologies to Help Health and Research Community Accelerate the Discovery of Medical Insights and Treatments for COVID-19

April 3, 2020 | Written by: 

IBM Research has been actively developing new cloud and AI-powered technologies that can help researchers across a variety of scientific disciplines accelerate the process of discovery. As the COVID-19 pandemic unfolds, we continue to ask how these technologies and our scientific knowledge can help in the global battle against coronavirus.

Today, we are making available multiple novel, free resources from across IBM to help healthcare researchers, doctors and scientists around the world accelerate COVID-19 drug discovery: from gathering insights, to applying the latest virus genomic information and identifying potential targets for treatments, to creating new drug molecule candidates.

Though some of the resources are still in exploratory stages, IBM is making them available to qualifying researchers at no charge to aid the international scientific investigation of COVID-19.

Today’s announcement follows our recent leadership in launching the U.S. COVID-19 High Performance Computing Consortium, which is harnessing massive computing power in the effort to help confront the coronavirus.

Streamlining the Search for Information

Healthcare agencies and governments around the world have quickly amassed medical and other relevant data about the pandemic. And, there are already vast troves of medical research that could prove relevant to COVID-19. Yet, as with any large volume of disparate data sources, it is difficult to efficiently aggregate and analyze that data in ways that can yield scientific insights.

To help researchers access structured and unstructured data quickly, we are offering a cloud-based AI research resource that has been trained on a corpus of thousands of scientific papers contained in the COVID-19 Open Research Dataset (CORD-19), prepared by the White House and a coalition of research groups, and on licensed databases from DrugBank, ClinicalTrials.gov and GenBank. This tool uses our advanced AI and allows researchers to pose specific queries to the collections of papers and to extract critical COVID-19 knowledge quickly. Please note, access to this resource will be granted only to qualified researchers. To learn more and request access, please click here.

Aiding the Hunt for Treatments

The traditional drug discovery pipeline relies on a library of compounds that are screened, improved, and tested to determine safety and efficacy. In dealing with new pathogens such as SARS-CoV-2, there is the potential to enhance the compound libraries with additional novel compounds. To help address this need, IBM Research has recently created a new, AI-generative framework which can rapidly identify novel peptides, proteins, drug candidates and materials.

We have applied this AI technology against three COVID-19 targets to identify 3,000 new small molecules as potential COVID-19 therapeutic candidates. IBM is releasing these molecules under an open license, and researchers can study them via a new interactive molecular explorer tool to understand their characteristics and relationship to COVID-19 and identify candidates that might have desirable properties to be further pursued in drug development.

To streamline efforts to identify new treatments for COVID-19, we are also making the IBM Functional Genomics Platform available for free for the duration of the pandemic. Built to discover the molecular features in viral and bacterial genomes, this cloud-based repository and research tool includes genes, proteins and other molecular targets from sequenced viral and bacterial organisms in one place with connections pre-computed to help accelerate discovery of molecular targets required for drug design, test development and treatment.

Select IBM collaborators from government agencies, academic institutions and other organizations already use this platform for bacterial genomic study. And now, those working on COVID-19 can request the IBM Functional Genomics Platform interface to explore the genomic features of the virus. Access to the IBM Functional Genomics Platform will be prioritized for those conducting COVID-19 research. To learn more and request access, please click here.

Drug and Disease Information

Clinicians and healthcare professionals on the frontlines of care will also have free access to hundreds of pieces of evidence-based, curated COVID-19 and infectious disease content from IBM Micromedex and EBSCO DynaMed. Using these two rich decision support solutions, users will have access to drug and disease information in a single and comprehensive search. Clinicians can also provide patients with consumer-friendly patient education handouts with relevant, actionable medical information. IBM Micromedex is one of the largest online reference databases for medication information and is used by more than 4,500 hospitals and health systems worldwide. EBSCO DynaMed provides peer-reviewed clinical content, including systematic literature reviews in 28 specialties for comprehensive disease topics, health conditions and abnormal findings, to highly focused topics on evaluation, differential diagnosis and management.

The scientific community is working hard to make important new discoveries relevant to the treatment of COVID-19, and we’re hopeful that releasing these novel tools will help accelerate this global effort. This work also outlines our long-term vision for the future of accelerated discovery, where multi-disciplinary scientists and clinicians work together to rapidly and effectively create next generation therapeutics, aided by novel AI-powered technologies.

Learn more about IBM’s response to COVID-19: IBM.com/COVID19.

Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

DiA Imaging Analysis Receives Grant to Accelerate Global Access to its AI Ultrasound Solutions in the Fight Against COVID-19

Source: https://www.grantnews.com/news-articles/?rkey=20200512UN05506&filter=12337

Grant will allow company to accelerate access to its AI solutions and use of ultrasound in COVID-19 emergency settings

TEL AVIV, Israel, May 12, 2020 /PRNewswire-PRWeb/ — DiA Imaging Analysis, a leading provider of AI-based ultrasound analysis solutions, today announced that it has received a government grant from the Israel Innovation Authority (IIA) to develop solutions for ultrasound imaging analysis of COVID-19 patients using Artificial Intelligence (AI). Using ultrasound in point-of-care emergency settings has gained momentum since the outbreak of the COVID-19 pandemic. In these settings, which include makeshift hospital COVID-19 departments and triage “tents,” portable ultrasound offers clinicians diagnostic decision support, with the added advantage of being easier to disinfect and eliminating the need to transport patients from one room to another. However, analyzing ultrasound images is a process that is still mostly done visually, leading to a growing market need for automated solutions and decision support. As the leading provider of AI solutions for ultrasound analysis, backed by Connecticut Innovations, DiA makes ultrasound analysis smarter and accessible to both new and expert ultrasound users with various levels of experience. The company’s flagship LVivo Cardio Toolbox for AI-based cardiac ultrasound analysis enables clinicians to automatically generate objective clinical analysis, with increased accuracy and efficiency, to support decisions about patient treatment and care.

The IIA grant provides a budget of millions of NIS to increase access to DiA’s solutions for users in Israel and globally, and to accelerate R&D with a focus on new AI solutions for COVID-19 patient management. DiA’s solutions are vendor-neutral and platform-agnostic, and are designed to run in low-processing, mobile environments like handheld ultrasound. Recent data highlight the importance of looking at the heart during the progression of COVID-19, with one study citing 20% of patients hospitalized with COVID-19 showing signs of heart damage and increased mortality rates in those patients. DiA’s LVivo cardiac analysis solutions automatically generate objective, quantified cardiac ultrasound results to enable point-of-care clinicians to assess cardiac function on the spot, near the patient’s bedside.

According to Dr. Ami Applebaum, the Chairman of the Board of the IIA, “The purpose of IIA’s call was to bring solutions to global markets for fighting COVID-19, with an emphasis on relevancy, fast time to market and collaborations promising continuity of the Israeli economy. DiA meets these requirements with AI innovation for ultrasound.” DiA has received several FDA/CE clearances and established distribution partnerships with industry-leading companies including GE Healthcare, IBM Watson and Konica Minolta, and currently serves thousands of end users worldwide. “We see growing use of ultrasound in point of care settings, and an urgent need for automated, objective solutions that provide decision support in real time,” said Hila Goldman-Aslan, CEO and Co-founder of DiA Imaging Analysis. “Our AI solutions meet this need by immediately helping clinicians on the frontlines to quickly and easily assess COVID-19 patients’ hearts to help guide care delivery.”

About DiA Imaging Analysis:
DiA Imaging Analysis provides advanced AI-based ultrasound analysis technology that makes ultrasound accessible to all. DiA’s automated tools deliver fast and accurate clinical indications to support the decision-making process and offer better patient care. DiA’s AI-based technology uses advanced pattern recognition and machine-learning algorithms to automatically imitate the way the human eye detects image borders and identifies motion. Using DiA’s tools provides automated and objective AI tools, helps reduce variability among users, and increases efficiency. It allows clinicians with various levels of experience to quickly and easily analyze ultrasound images.

For additional information, please visit http://www.dia-analysis.com.



Worldwide trial uses AI to quickly identify ideal Covid-19 treatments

Reporter : Irina Robu, PhD

The novel coronavirus SARS-CoV-2, which has been spreading around the world, can cause a respiratory illness that can be severe. The disease, COVID-19, appears to have a fatality rate of less than 2 percent and is forcing doctors to choose between two equally unappealing options: try an unproven therapy and hope that it works, or treat patients with standard supportive care for severe respiratory disease until a vaccine is developed.

Randomized controlled trials have now started in dozens of hospitals around the world that fuse two approaches, using artificial intelligence to home in on the most effective treatments for respiratory infections. These are adaptive trials, in which scientists adjust the treatment protocols and/or statistical procedures based on the outcomes of participants. They are seen as a way to detect promising treatments and to make trials more flexible than traditional randomized trials, which force patients and trial sponsors to wait for an outcome that often turns out to be disappointing. The inadequacies of the traditional randomized approach have been thrown into sharp relief during the pandemic, as thousands of patients cannot wait for gold-standard science to play out while they lie dying in intensive care units.
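A toy sketch of outcome-adaptive allocation (here, Thompson sampling over simulated binary outcomes) illustrates how arms that perform better receive more of the subsequent patients; the response rates are invented and this is not the REMAP-CAP algorithm.

```python
# Toy outcome-adaptive (Thompson sampling) allocation across three treatment arms.
# Response rates are invented; arms that do better get more of the next patients.
import numpy as np

rng = np.random.default_rng(4)
true_response = {"standard care": 0.30, "drug A": 0.35, "drug B": 0.50}  # hypothetical
successes = {arm: 0 for arm in true_response}
failures = {arm: 0 for arm in true_response}

for patient in range(500):
    # Sample each arm's response rate from its Beta posterior and pick the best draw.
    draws = {arm: rng.beta(successes[arm] + 1, failures[arm] + 1) for arm in true_response}
    arm = max(draws, key=draws.get)
    # Simulated patient outcome updates that arm's posterior.
    if rng.random() < true_response[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

allocation = {arm: successes[arm] + failures[arm] for arm in true_response}
print("Patients allocated per arm:", allocation)
print("Observed response rates:",
      {arm: round(successes[arm] / max(allocation[arm], 1), 2) for arm in allocation})
```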

By analyzing data from more than 50 hospitals, researchers hope to supply quick answers to pressing questions such as whether the antimalarial drug hydroxychloroquine is an effective therapy and, if so, for which types of patients. The trial will also allow the researchers to test multiple therapies at once. While the approach seems a reasonable way to get answers during a pandemic, it still has many challenges, including the necessity to rapidly assemble and analyze data from dozens of hospitals with differing record-keeping systems on three continents, and then to update the protocols accordingly, all during a crisis that is draining clinical resources.

Since several treatments are being tested, carrying out these trials is particularly complicated. But progress in the computing resources required to share data and analyze it swiftly using artificial intelligence has started to make these designs more practical.

The World Health Organization and the U.S. Food and Drug Administration, along with groups like the Gates Foundation, have offered increasing support for adaptive trial designs in recent years, particularly as a way to evaluate therapies during epidemics.

Nonetheless, that doesn’t mean this specific effort is going to yield results in time to save the first wave of extremely ill patients. Once a promising treatment is recognized, more patients will be allocated to receive it during each successive round of therapy. So far, about 130 ICU patients with Covid-19 have been enrolled, in addition to hundreds of other hospitalized patients.

The goal in the REMAP-CAP trial, once all the trial sites are up and running, is to analyze results and change treatments on a weekly basis.

SOURCE

International trial uses AI to rapidly identify optimal Covid-19 treatments

Other Resources

Chinese Hospitals Deploy AI to Help Diagnose Covid-19

Software that reads CT lung scans had been used primarily to detect cancer. Now it’s retooled to look for signs of pneumonia caused by coronavirus.

https://www.wired.com/story/chinese-hospitals-deploy-ai-help-diagnose-covid-19/

 

Artificial Intelligence against COVID-19: An Early Review

AI has not yet made an impact, but data scientists have taken up the challenge

https://towardsdatascience.com/artificial-intelligence-against-covid-19-an-early-review-92a8360edaba

 

Can AI Find a Cure for COVID-19?

Alex Woodie

https://www.datanami.com/2020/03/23/can-ai-find-a-cure-for-covid-19/

 

Scientists are racing to find the best drugs to treat COVID-19

The WHO is launching a multicountry trial to collect good data

By Nicole Wetsman Mar 23, 2020, 9:21am EDT

https://www.theverge.com/2020/3/23/21188167/coronavirus-treatment-clinical-trials-drugs-remdesivir-chloroquine-covid

 

Data Scientists Use Machine Learning to Discover COVID-19 Treatments

Researchers are using machine learning algorithms capable of generating millions of therapeutic antibodies to quickly find treatments for COVID-19.

https://healthitanalytics.com/news/data-scientists-use-machine-learning-to-discover-covid-19-treatments




Google AI improves accuracy of reading mammograms, study finds


Google CFO Ruth Porat has blogged about twice battling breast cancer.

Artificial intelligence was often more accurate than radiologists in detecting breast cancer from mammograms in a study conducted by researchers using Google AI technology.

The study, published in the journal Nature, used mammograms from approximately 90,000 women in which the outcomes were known to train technology from Alphabet Inc’s DeepMind AI unit, now part of Google Health, Yahoo news reported.

The AI system was then used to analyze images from 28,000 other women and often diagnosed early cancers more accurately than the radiologists who originally interpreted the mammograms.

In another test, AI outperformed six radiologists in reading 500 mammograms. However, while the AI system found cancers the humans missed, it also failed to find cancers flagged by all six radiologists, reports The New York Times.

The researchers said the study “paves the way” for further clinical trials.

Writing in Nature, Etta D. Pisano, chief research officer at the American College of Radiology and professor in residence at Harvard Medical School, noted, “The real world is more complicated and potentially more diverse than the type of controlled research environment reported in this study.”

Ruth Porat, senior vice president and chief financial officer of Alphabet, Inc., wrote in a company blog post titled “Breast cancer and tech…a reason for optimism” in October about twice battling the disease herself, and about the importance of her company’s application of AI to healthcare innovations.

She said that focus had already led to the development of a deep learning algorithm to help pathologists assess tissue associated with metastatic breast cancer.

“By pinpointing the location of the cancer more accurately, quickly and at a lower cost, care providers might be able to deliver better treatment for more patients,” she wrote.

Google also has created algorithms that help medical professionals diagnose lung cancer, and eye disease in people with diabetes, per the Times.

Porat acknowledged that Google’s research showed the best results occur when medical professionals and technology work together.

Any insights provided by AI must be “paired with human intelligence and placed in the hands of skilled researchers, surgeons, oncologists, radiologists and others,” she said.

Anne Stych is a staff writer for Bizwomen.



The Future of Synthetic Biology

Reporter: Irina Robu, PhD

With an estimated global valuation of around $14 billion US, synthetic biology is a rapidly accelerating market. Nonetheless, while the growth of the market has been remarkable, its true impact has not yet been seen. The era of AI will quickly increase the pace of discovery and produce materials not seen in nature, through extrapolation and generative design. The extraordinary is now possible: producing spider silk without spiders, egg proteins without chickens, and fragrances without flowers.

Synthetic biology companies are partnering with fashion designers as well as forming ‘organism foundries’. Rapidly, AI will use its learning of the natural world to make guided inferences that produce entirely new materials. From a technology perspective, we’re experiencing an explosion of capability that will be pervasive in the next 3-5 years. Language models have come a long way, to the point where full models are being kept private so as not to endanger the public.

Already today, the average person has the ability to start their own commercial space venture for less than the cost of a juice franchise. PwC Australia’s Charmaine Green believes secret trends can hide among obvious ones. She outlines three trends leading to her hypothesis that Australia is well placed to become the global creative hub for video game development.

Economies like Australia’s are well situated to capitalize on this trend, and video game development could become a permanent and substantial part of the economy. Australia, Green argues, has all the basic elements needed: high ingenuity, creative risk taking, and the freedom and flexibility that come with the country’s small-to-mid-sized studios.

SOURCE

Tech trends that will change the world by 2025

Read Full Post »


AI Acquisitions by Big Tech Firms Are Happening at a Blistering Pace: 2019 Recent Data by CBI Insights

Reporter: Stephen J. Williams, Ph.D.

A recent report from CBI Insights shows the rapid pace at which the biggest tech firms (Google, Apple, Microsoft, Facebook, and Amazon) are acquiring artificial intelligence (AI) startups, potentially compounding the existing AI talent shortage.

The link to the report and free download is given here at https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/

Part of the report:

TECH GIANTS LEAD IN AI ACQUISITIONS

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, & Apple (FAMGA) have all been aggressively acquiring AI startups in the last decade.

Among the FAMGA companies, Apple leads the way, making 20 total AI acquisitions since 2010. It is followed by Google (the frontrunner from 2012 to 2016) with 14 acquisitions and Microsoft with 10.

Apple’s AI acquisition spree, which has helped it overtake Google in recent years, was essential to the development of new iPhone features. For example, FaceID, the technology that allows users to unlock their iPhone X just by looking at it, stems from Apple’s M&A moves in chips and computer vision, including the acquisition of AI company RealFace.

In fact, many of FAMGA’s prominent products and services came out of acquisitions of AI companies — such as Apple’s Siri, or Google’s contributions to healthcare through DeepMind.

That said, tech giants are far from the only companies snatching up AI startups.

Since 2010, there have been 635 AI acquisitions, as companies aim to build out their AI capabilities and capture sought-after talent (as of 8/31/2019).

The pace of these acquisitions has also been increasing. AI acquisitions saw a more than 6x uptick from 2013 to 2018, including last year’s record of 166 AI acquisitions — up 38% year-over-year.

In 2019, there have already been 140+ acquisitions (as of August), putting the year on track to beat the 2018 record at the current run rate.
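The “on track” claim is simple run-rate arithmetic. A back-of-the-envelope Python sketch, using only the figures quoted above, makes the extrapolation and the counts implied by the growth figures explicit:

# Figures quoted above (CBI Insights, as of 8/31/2019).
record_2018 = 166      # record number of AI acquisitions in 2018
ytd_2019 = 140         # acquisitions through August 2019 ("140+")
months_elapsed = 8

# Annualise the 2019 pace: roughly 210 acquisitions, well above the 2018 record.
projected_2019 = ytd_2019 / months_elapsed * 12

# The ">6x uptick from 2013 to 2018" implies roughly 166 / 6, i.e. ~28 deals in 2013.
implied_2013 = record_2018 / 6

# "Up 38% year-over-year" implies roughly 166 / 1.38, i.e. ~120 deals in 2017.
implied_2017 = record_2018 / 1.38

print(f"Projected 2019 total at the current run rate: {projected_2019:.0f}")
print(f"Implied 2013 count (from the >6x figure): ~{implied_2013:.0f}")
print(f"Implied 2017 count (from the 38% YoY figure): ~{implied_2017:.0f}")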

Part of this increase in the pace of AI acquisitions can be attributed to a growing diversity in acquirers. Where once AI was the exclusive territory of major tech companies, today, smaller AI startups are becoming acquisition targets for traditional insurance, retail, and healthcare incumbents.

For example, in February 2018, Roche Holding acquired New York-based cancer startup Flatiron Health for $1.9B — one of the largest M&A deals in artificial intelligence. This year, Nike acquired AI-powered inventory management startup Celect, Uber acquired computer vision company Mighty AI, and McDonald’s acquired personalization platform Dynamic Yield.

Despite the increased number of acquirers, however, tech giants are still leading the charge. Acquisitive tech giants have emerged as powerful global corporations with a competitive advantage in artificial intelligence, and startups have played a pivotal role in helping these companies scale their AI initiatives.

Apple, Google, Microsoft, Facebook, Intel, and Amazon are the most active acquirers of AI startups, each acquiring 7+ companies.

To read more on recent Acquisitions in the AI space please see the following articles on this Open Access Online Journal

Diversification and Acquisitions, 2001 – 2015: Trail known as “Google Acquisitions” – Understanding Alphabet’s Acquisitions: A Sector-By-Sector Analysis

Clarivate Analytics expanded IP data leadership by new acquisition of the leading provider of intellectual property case law and analytics Darts-ip

2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Forbes Opinion: 13 Industries Soon To Be Revolutionized By Artificial Intelligence

Artificial Intelligence and Cardiovascular Disease

Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

 

Read Full Post »


Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Cardiology is a vast field that focuses on a large number of diseases, specifically those of the heart and the circulatory system and its functions. As such, similar symptomatologies and diagnostic features may be present in an individual, making it difficult for a doctor to easily isolate the actual heart-related problem. Consequently, the use of artificial intelligence aims to relieve doctors of this hurdle and extend better-quality care to patients. It has long been proposed that the results of screening tests such as echocardiograms, MRIs, or CT scans be analyzed using more advanced computational techniques. As such, while artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

 

The continuous development of the technology sector has enabled the industry to merge with medicine in order to create new integrated, reliable, and efficient methods of providing quality health care. One of the ongoing trends in cardiology at present is the proposed use of artificial intelligence (AI) to augment and extend the effectiveness of the cardiologist. This is because AI, or machine learning, would allow for an accurate measure of patient functioning and diagnosis from the beginning to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology focuses on research and development, clinical practice, and population health. Created to be an all-in-one mechanism in cardiac healthcare, AI technologies incorporate complex algorithms to determine the relevant steps needed for a successful diagnosis and treatment. The role of artificial intelligence specifically extends to the identification of novel drug therapies, disease stratification and statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectiveness and efficiency.

 

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions. This is done by feeding the computer information from hundreds or thousands of patients, plus instructions (an algorithm) on how to use that information. This information includes heart scans, genetic and other test results, and how long each patient survived. These scans are in exquisite detail, and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer works out which factors affected the patients’ outlook, so it can make predictions about other patients.
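To make that training loop concrete, here is a minimal Python sketch using scikit-learn. The three per-patient features, the outcome rule, and the logistic-regression model are synthetic assumptions chosen only for illustration, not the data or method of any study discussed here:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 2000

# Hypothetical per-patient features: an imaging-derived measurement,
# a genetic risk score, and a blood-test value.
X = rng.normal(size=(n_patients, 3))

# Hypothetical outcome (e.g., alive at five years), loosely driven by the features.
risk = 0.9 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = (risk + rng.normal(size=n_patients) > 0).astype(int)

# 'Train' on most patients, then predict for patients the model has not seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print(f"Held-out AUC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.3f}")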

 

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These are based on a series of variables such as weight, age, and lifestyle, but they do not always have the desired level of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients published in 2017. The researchers used cardiac MRI-based algorithms coupled with a 3D systolic cardiac motion pattern to accurately predict the health outcomes of patients with pulmonary hypertension. The experiment proved successful, with the technology able to pick up 30,000 points within the heart activity of 250 patients. With the success of this study, as well as the promise of other research on artificial intelligence, cardiology is seemingly moving towards a more technological practice.

 

One study was conducted in Finland where researchers enrolled 950 patients complaining of chest pain, who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following their initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records including sex, age, smoking status and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables, and was able to learn how the imaging data interacted and identify the patterns which preceded death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was modest, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. The predictive performance increased significantly when CCTA scan data was added to clinical and PET data, with accuracy of 95.4%.
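A schematic sketch of that incremental design (fit the same model on clinical variables alone, then clinical plus PET, then clinical plus PET plus CCTA, and compare held-out accuracy) might look like the following. The data are synthetic, and scikit-learn’s gradient boosting is used only as a stand-in for the LogitBoost programme named in the study:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 950  # same order of magnitude as the Finnish cohort; everything else is synthetic

clinical = rng.normal(size=(n, 10))   # stand-ins for the 10 clinical variables
pet = rng.normal(size=(n, 17))        # stand-ins for the 17 PET blood-flow variables
ccta = rng.normal(size=(n, 58))       # stand-ins for the 58 CCTA variables

# Synthetic event label loosely driven by one variable from each block.
signal = clinical[:, 0] + 0.8 * pet[:, 0] + 0.8 * ccta[:, 0]
y = (signal + rng.normal(size=n) > 0).astype(int)

feature_sets = {
    "clinical only": clinical,
    "clinical + PET": np.hstack([clinical, pet]),
    "clinical + PET + CCTA": np.hstack([clinical, pet, ccta]),
}
for name, X in feature_sets.items():
    acc = cross_val_score(GradientBoostingClassifier(), X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.3f}")

In the toy data, accuracy rises as each block of variables is added, which is the qualitative pattern the investigators reported; the absolute numbers here carry no meaning.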

 

Findings from another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk of developing it in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by the presence of a weak heart pump with a risk of overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity; however, it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded values for the area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, in patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.
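For readers unfamiliar with these metrics, the short sketch below shows how AUC, sensitivity, specificity, and accuracy are computed from a model’s scores on an independent test set; the labels, scores, and threshold are synthetic placeholders, not the ECG study’s data:

import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
n = 5000

# Synthetic ground truth and model scores for an independent test set.
y_true = rng.integers(0, 2, size=n)
scores = y_true * 1.5 + rng.normal(size=n)   # higher scores tend to accompany true positives

threshold = 0.75
y_pred = (scores >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / n

print(f"AUC:         {roc_auc_score(y_true, scores):.3f}")
print(f"Sensitivity: {sensitivity:.1%}")
print(f"Specificity: {specificity:.1%}")
print(f"Accuracy:    {accuracy:.1%}")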

 

In recent years, the analysis of big databases combined with deep learning has gradually come to play an important role in biomedical technology, and research on the development and application of artificial intelligence can be observed extensively across medical-record analysis, image analysis, single nucleotide polymorphism difference analysis, and more. In the clinic, patients may receive a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound, optical coherence tomography, and electrophysiology. By applying deep learning systems to these data, investigators hope not only to improve diagnostic rates but also to predict patient recovery more accurately and to improve the quality of care in the near future.

 

The primary issue with using artificial intelligence in cardiology, or in any field of medicine for that matter, is the ethical questions it raises. Physicians and healthcare professionals swear to the Hippocratic Oath before they practice: a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are technically left under the care of machines rather than of doctors. Furthermore, because machines may malfunction, the safety of patients is also on the line at all times. As such, while medical practitioners see the promise of artificial intelligence, they remain highly cautious about its use, safety, and appropriateness in medical practice.

 

The issues and challenges faced by technological innovation in cardiology are being met by current research aiming to make artificial intelligence easily accessible and available to all. With that in mind, various projects are currently under study. For example, wearable AI technology aims to give patients and doctors a way to access and monitor cardiac activity remotely; as an ideal instrument for monitoring, it offers real-time updates, monitoring, and evaluation. Another direction for AI technology in cardiology is its use to record and validate empirical data in order to further analyze symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology aim to simplify and expand the scope of knowledge in the field for better patient care and treatment outcomes.

 

References:

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx
https://www.bhf.org.uk/informationsupport/heart-matters-magazine/research/artificial-intelligence
https://www.medicaldevice-network.com/news/heart-attack-artificial-intelligence/
https://www.nature.com/articles/s41569-019-0158-5
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711980/
www.j-pcs.org/article.asp
http://www.onlinejacc.org/content/71/23/2668
http://www.scielo.br/pdf/ijcs/v30n3/2359-4802-ijcs-30-03-0187.pdf
https://www.escardio.org/The-ESC/Press-Office/Press-releases/How-artificial-intelligence-is-tackling-heart-disease-Find-out-at-ICNC-2019
https://clinicaltrials.gov/ct2/show/NCT03877614
https://www.europeanpharmaceuticalreview.com/news/82870/artificial-intelligence-ai-heart-disease/
https://www.frontiersin.org/research-topics/10067/current-and-future-role-of-artificial-intelligence-in-cardiac-imaging
https://www.sciencedaily.com/releases/2019/05/190513104505.htm

 

Read Full Post »


Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary article by Jennifer Couzin-Frankel entitled “Medicine contends with how to use artificial intelligence,” the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting are discussed. In summary, these barriers result from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images and magnify small changes, such as in cardiac images, is built within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators recently issued guidance describing how to convert research-based AI into improved medical imaging, published in the Journal of the American College of Radiology. The group suggested greater collaboration among the relevant parties in developing AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and Mount Sinai developed an AI algorithm to detect spots on a lung scan indicative of pneumonia; when tested on a group of new patients, the algorithm could detect pneumonia with 93% accuracy.

However, when the Sinai group tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicative of bias within the training set: in other words, there was something unique about the way Mount Sinai performs its scans relative to other hospitals. Indeed, many of the patients Mount Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than standalone scanners.

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR1, Badgeley MA2, Liu M2, Costa AB3, Titano JJ4, Oermann EK3.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683


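The confound the abstract describes, in which a model keys on hospital-system signatures and site-specific disease prevalence rather than on the disease itself, can be illustrated with a deliberately simplified sketch. The two simulated features and the logistic regression below are illustrative assumptions and stand-ins, not the convolutional-network pipeline of Zech et al.:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def make_site(n, prevalence, marker):
    """Simulate one hospital: a weak genuine disease signal plus a site-specific
    'scanner signature' feature that is nearly constant within the site."""
    y = rng.binomial(1, prevalence, size=n)
    disease_signal = y + rng.normal(scale=1.5, size=n)
    scanner_signature = marker + rng.normal(scale=0.1, size=n)
    return np.column_stack([disease_signal, scanner_signature]), y

# Two training sites with very different pneumonia prevalence (as at MSH vs NIH).
X_a, y_a = make_site(4000, prevalence=0.34, marker=1.0)
X_b, y_b = make_site(4000, prevalence=0.01, marker=-1.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Internal test: new pooled data from the same two sites.
X_ia, y_ia = make_site(2000, prevalence=0.34, marker=1.0)
X_ib, y_ib = make_site(2000, prevalence=0.01, marker=-1.0)
X_int, y_int = np.vstack([X_ia, X_ib]), np.concatenate([y_ia, y_ib])

# External test: a third hospital with different prevalence and scanner signature.
X_ext, y_ext = make_site(4000, prevalence=0.10, marker=0.0)

print(f"Internal AUC: {roc_auc_score(y_int, model.predict_proba(X_int)[:, 1]):.3f}")
print(f"External AUC: {roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]):.3f}")

Even in this toy setting, the pooled internal AUC is flattered by the site signature, while performance at the third site falls back to whatever the genuine disease signal supports, mirroring the internal-versus-external gap reported in the paper.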

 

 

Surprisingly, not many researchers have begun to use data obtained from different hospitals. The FDA has issued some guidance on the matter but treats “locked,” or unchanging, AI software as a medical device. However, the agency has just announced the development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software that can be used in multiple clinical settings. Otherwise, each hospital will need to develop its own software, usable only on its own system, which would create a regulatory headache for the FDA.

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 

Read Full Post »
