Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Use of Systems Biology for Design of Inhibitors of Galectins as Cancer Therapeutics – Strategy and Software
Curator: Stephen J. Williams, Ph.D.
Below is a slide representation of the overall mission to produce a PROTAC to inhibit Galectins 1, 3, and 9.
Using A Priori Knowledge of Galectin Receptor Interaction to Create a BioModel of Galectin 3 Binding
After collecting literature from PubMed with the query “galectin-3” AND “binding”, chosen to retrieve articles containing kinetic data, we generate a WordCloud from the resulting articles.
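As a sketch of this step, the term-frequency computation behind such a WordCloud can be done with the Python standard library (the sample sentence below is illustrative; in practice the input would be the downloaded PubMed abstracts, and the resulting frequencies would be passed to a rendering package such as the third-party wordcloud library):

```python
# Minimal sketch: term frequencies for a word cloud from abstract text.
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "in", "to", "with", "for", "a", "is", "as", "by"}

def word_frequencies(text):
    """Tokenize abstract text, drop common stopwords, count term frequencies."""
    tokens = re.findall(r"[a-z][a-z0-9-]+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

# Illustrative input; the real input would be the retrieved abstracts.
freqs = word_frequencies(
    "Galectin-3 binding to the CRD involves van der Waals "
    "and electrostatic interactions; galectin-3 binding is modulated by LacNAc."
)
print(freqs.most_common(3))
```

The frequency dictionary can then be handed to a renderer (e.g. `WordCloud.generate_from_frequencies` in the wordcloud package) to produce the image shown.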
The following file contains the articles needed for BioModels generation.
From the WordCloud we can see that this corpus of articles describes galectin binding to the CRD (carbohydrate recognition domain). Interestingly, many articles describe van der Waals as well as electrostatic interactions. Certain carbohydrate modifications, such as LacNAc and Gal 1,4, may be important. Many articles describe the bonding as well as surface interactions. Many studies have been performed with galectin inhibitors such as thio-digalactosides (TDGs), including TAZTDG (3-deoxy-3-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside). This led to an interesting article:
Dual thio-digalactoside-binding modes of human galectins as the structural basis for the design of potent and selective inhibitors
Human galectins are promising targets for cancer immunotherapeutic and fibrotic disease-related drugs. We report herein the binding interactions of three thio-digalactosides (TDGs) including TDG itself, TD139 (3,3′-deoxy-3,3′-bis-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside, recently approved for the treatment of idiopathic pulmonary fibrosis), and TAZTDG (3-deoxy-3-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside) with human galectins-1, -3 and -7 as assessed by X-ray crystallography, isothermal titration calorimetry and NMR spectroscopy. Five binding subsites (A-E) make up the carbohydrate-recognition domains of these galectins. We identified novel interactions between an arginine within subsite E of the galectins and an arene group in the ligands. In addition to the interactions contributed by the galactosyl sugar residues bound at subsites C and D, the fluorophenyl group of TAZTDG preferentially bound to subsite B in galectin-3, whereas the same group favored binding at subsite E in galectins-1 and -7. The characterised dual binding modes demonstrate how binding potency, reported as decreased Kd values of the TDG inhibitors from μM to nM, is improved and also offer insights into the development of selective inhibitors for individual galectins.
Figures
Figure 1. Chemical structures of L3, TDG…
Figure 2. Structural comparison of the carbohydrate…
NCCN Shares Latest Expert Recommendations for Prostate Cancer in Spanish and Portuguese
Reporter: Stephen J. Williams, Ph.D.
Currently, many biomedical texts and US government agency guidelines are offered only in English, or in other languages only upon request. However, Spanish is spoken in many countries worldwide, and medical texts in that language would address an under-served need. In addition, Portuguese is the main language of Brazil, the largest country in South America.
The LPBI Group and others have noticed this need for medical translation into other languages. Currently LPBI Group is translating their medical e-book offerings into Spanish (for more details see https://pharmaceuticalintelligence.com/vision/).
Below is an article on The National Comprehensive Cancer Network’s decision to offer their cancer treatment guidelines in Spanish and Portuguese.
PLYMOUTH MEETING, PA [8 September, 2021] — The National Comprehensive Cancer Network® (NCCN®)—a nonprofit alliance of leading cancer centers in the United States—announces recently-updated versions of evidence- and expert consensus-based guidelines for treating prostate cancer, translated into Spanish and Portuguese. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) feature frequently updated cancer treatment recommendations from multidisciplinary panels of experts across NCCN Member Institutions. Independent studies have repeatedly found that following these recommendations correlates with better outcomes and longer survival.
“Everyone with prostate cancer should have access to care that is based on current and reliable evidence,” said Robert W. Carlson, MD, Chief Executive Officer, NCCN. “These updated translations—along with all of our other translated and adapted resources—help us to define and advance high-quality, high-value, patient-centered cancer care globally, so patients everywhere can live better lives.”
Prostate cancer is the second most commonly occurring cancer in men, impacting more than a million people worldwide every year.[1] In 2020, the NCCN Guidelines® for Prostate Cancer were downloaded more than 200,000 times by people outside of the United States. Approximately 47 percent of registered users for NCCN.org are located outside the U.S., with Brazil, Spain, and Mexico among the top ten countries represented.
“NCCN Guidelines are incredibly helpful resources in the work we do to ensure cancer care across Latin America meets the highest standards,” said Diogo Bastos, MD, and Andrey Soares, MD, Chair and Scientific Director of the Genitourinary Group of The Latin American Cooperative Oncology Group (LACOG). The organization has worked with NCCN in the past to develop Latin American editions of the NCCN Guidelines for Breast Cancer, Colon Cancer, Non-Small Cell Lung Cancer, Prostate Cancer, Multiple Myeloma, and Rectal Cancer, and co-hosted a webinar on “Management of Prostate Cancer for Latin America” earlier this year. “We appreciate all of NCCN’s efforts to make sure these gold-standard recommendations are accessible to non-English speakers and applicable for varying circumstances.”
NCCN also publishes NCCN Guidelines for Patients®, containing the same treatment information in non-medical terms, intended for patients and caregivers. The NCCN Guidelines for Patients: Prostate Cancer were found to be among the most trustworthy sources of information online according to a recent international study. These patient guidelines have been divided into two books, covering early and advanced prostate cancer; both have been translated into Spanish and Portuguese as well.
NCCN collaborates with organizations across the globe on resources based on the NCCN Guidelines that account for local accessibility, consideration of metabolic differences in populations, and regional regulatory variation. They can be downloaded free-of-charge for non-commercial use at NCCN.org/global or via the Virtual Library of NCCN Guidelines App. Learn more and join the conversation with the hashtag #NCCNGlobal.
[1] Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global Cancer Statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin, in press. The online GLOBOCAN 2018 database is accessible at http://gco.iarc.fr/, as part of IARC’s Global Cancer Observatory.
About the National Comprehensive Cancer Network
The National Comprehensive Cancer Network® (NCCN®) is a not-for-profit alliance of leading cancer centers devoted to patient care, research, and education. NCCN is dedicated to improving and facilitating quality, effective, efficient, and accessible cancer care so patients can live better lives. The NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) provide transparent, evidence-based, expert consensus recommendations for cancer treatment, prevention, and supportive services; they are the recognized standard for clinical direction and policy in cancer management and the most thorough and frequently-updated clinical practice guidelines available in any area of medicine. The NCCN Guidelines for Patients® provide expert cancer treatment information to inform and empower patients and caregivers, through support from the NCCN Foundation®. NCCN also advances continuing education, global initiatives, policy, and research collaboration and publication in oncology. Visit NCCN.org for more information and follow NCCN on Facebook @NCCNorg, Instagram @NCCNorg, and Twitter @NCCN.
Please see LPBI Group’s efforts in medical text translation and Natural Language Processing of Medical Text at
From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery
Curator: Stephen J. Williams, PhD
Marc W. Kirschner*
Department of Systems Biology Harvard Medical School
Boston, Massachusetts 02115
With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.
As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.
That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. 
The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.
Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.
High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.
Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.
You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.
Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.
5.1.5. Large-Scale Proteomics
While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced both by cytotoxic and non-cytotoxic compounds, which can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
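As an illustration of this kind of aggregation analysis (a minimal sketch, not the published platform; the protein names and log2 fold-change values below are invented), one can flag drugs whose annotated target changes abundance across several cell lines:

```python
# Sketch: does a drug change the abundance of its annotated target
# consistently across cell lines? Values are illustrative only.
def target_regulated(profiles, target, threshold=1.0, min_lines=2):
    """profiles: {cell_line: {protein: log2 fold change vs vehicle}}.
    Return True if |log2FC| of `target` reaches `threshold` in at
    least `min_lines` cell lines."""
    hits = sum(1 for p in profiles.values()
               if abs(p.get(target, 0.0)) >= threshold)
    return hits >= min_lines

example = {  # hypothetical log2 fold changes after drug treatment
    "A549":  {"NAMPT": -1.8, "MAP2K1": 0.1},
    "H1299": {"NAMPT": -1.2, "MAP2K1": 0.3},
}
print(target_regulated(example, "NAMPT"))   # target abundance drops in both lines
print(target_regulated(example, "MAP2K1"))  # unaffected protein
```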
All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.
5.2. Genetic Approaches
Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has been long realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights in the compound’s mode of action.
Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.
5.2.1. Resistance Cloning
The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.
In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a following study, they optimized their pipeline “DrugTargetSeqR”, by counter-screening for these types of multidrug resistance mechanisms so that these clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
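The recurrence logic behind this kind of clone analysis can be sketched in a few lines (an illustrative reconstruction, not the DrugTargetSeqR pipeline; the mutation calls below are invented):

```python
# Sketch: genes recurrently mutated across independent resistant
# clones are candidate drug targets; singletons are more likely
# passengers or general resistance mechanisms.
from collections import Counter

def candidate_targets(clone_mutations, min_clones=2):
    """clone_mutations: {clone_id: set of genes with protein-altering
    mutations}. Return genes mutated in >= min_clones clones."""
    counts = Counter(g for genes in clone_mutations.values() for g in genes)
    return {g for g, n in counts.items() if n >= min_clones}

clones = {  # illustrative mutation calls from RNA-seq of resistant clones
    "clone1": {"PLK1", "TP53"},
    "clone2": {"PLK1"},
    "clone3": {"ABCB1"},  # efflux-pump resistance, not the direct target
}
print(candidate_targets(clones))
```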
While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutations in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several KPT-9274 (an anticancer agent with unknown target)-resistant clones, and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].
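The core readout of such a screen, scoring which sgRNAs are enriched in the resistant population, can be sketched as follows (the read counts are invented; a real analysis would normalize for sequencing depth and apply a statistical model):

```python
# Sketch: per-sgRNA log2 enrichment in resistant clones vs control.
import math

def sgrna_enrichment(resistant, control, pseudocount=1.0):
    """Log2 ratio of read counts, resistant vs untreated control.
    No depth normalization or variance modeling; illustrative only."""
    return {g: math.log2((resistant[g] + pseudocount) /
                         (control.get(g, 0) + pseudocount))
            for g in resistant}

reads_resistant = {"NAMPT_sg1": 900, "NAMPT_sg2": 700, "AURKA_sg1": 12}
reads_control   = {"NAMPT_sg1": 100, "NAMPT_sg2": 80,  "AURKA_sg1": 110}
enrichment = sgrna_enrichment(reads_resistant, reads_control)
print(max(enrichment, key=enrichment.get))  # most enriched guide
```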
When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].
While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.
5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens
When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ascertain that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (i.e., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
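A minimal sketch of how such screen data are classified into suppressor and synergistic interactions (the gene names, log2 fold changes, and cutoff below are invented for illustration):

```python
# Sketch: call gene-drug interactions from a chemogenetic screen.
def classify_interactions(drug_lfc, cutoff=1.0):
    """drug_lfc: {gene: log2 fold change of sgRNA abundance in
    drug-treated vs vehicle-treated cells at a sublethal dose}.
    Positive = perturbation protects (suppressor interaction);
    negative = perturbation sensitizes (synergistic interaction)."""
    calls = {}
    for gene, lfc in drug_lfc.items():
        if lfc >= cutoff:
            calls[gene] = "suppressor"
        elif lfc <= -cutoff:
            calls[gene] = "synergistic"
        else:
            calls[gene] = "neutral"
    return calls

calls = classify_interactions({"GENE_A": 2.3, "GENE_B": -1.7, "GENE_C": 0.2})
print(calls)
```

In practice the cutoff would be replaced by a statistical significance threshold, and the screen repeated at several doses and timepoints as noted above.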
An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high complexity, genome-wide shRNA library, with STF-118804 (4 rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].
The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].
Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haplo-insufficiency (HIP)”-like approach [124]. Haploinsuffiency profiling is a well-established system in yeast which is performed in a ~50% protein background by heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, thereby showing that direct target identification through CRISPR-knockout screens is indeed possible [126].
An alternative strategy was employed by the Weissman lab, where they combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits that had opposite action in both screens, as in sensitizing in one but protective in the other, which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule destabilizing agents, rationalizing that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs, based on the most high-ranking hits in the rigosertib genome-wide CRISPRi screen, and compared the focused screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-571, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin—the same site as occupied by ABT-571 [127].
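The profile-comparison idea, that compounds with the same target have correlated drug-gene interaction profiles, can be sketched with a plain Pearson correlation (all profile values below are invented):

```python
# Sketch: compare chemical-genetic profiles of compounds over one
# shared focused sgRNA set; shared-target compounds should correlate.
import math

def pearson(xs, ys):
    """Pearson correlation of two equally long score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

profile_query     = [2.1, -1.5, 0.3, 1.8, -0.9]   # compound under study
profile_same_mech = [1.9, -1.2, 0.1, 2.0, -1.1]   # known reference compound
profile_other     = [-0.5, 1.1, 0.2, -1.3, 0.6]   # unrelated mechanism

print(round(pearson(profile_query, profile_same_mech), 2))
print(round(pearson(profile_query, profile_other), 2))
```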
From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.
SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY
Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence
The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.
1. Introduction
The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast number of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore the characteristics of cancers from a systems biology and a systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5].
Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. To deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.
Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets addressing various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research. Additionally, there are also independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques were applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)
Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).
Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16].
2. Systems Biology in Cancer Research
Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been concentrated into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes were investigated, and, based on that, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand the heterogeneous characteristics of tumors and for cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].
In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.
2.1. Biological Network Analysis for Biomarker Validation
The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Systems biology therefore offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked with an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68].
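As a concrete illustration of gene-set-based analysis, the over-representation test underlying many GSA tools reduces to a hypergeometric tail probability. The sketch below is a minimal, pure-Python version of that test; the gene names and counts in the usage example are invented for illustration, and real tools such as DAVID or g:Profiler add multiple-testing correction and curated annotation databases:

```python
from math import comb

def enrichment_pvalue(hits, gene_set, background_size):
    """Over-representation p-value for a candidate gene list against one gene set.

    hits: set of candidate (e.g., differentially expressed) genes
    gene_set: set of genes annotated to a pathway or GO term
    background_size: total number of assayed genes
    """
    k = len(hits & gene_set)            # observed overlap
    K, n, M = len(gene_set), len(hits), background_size
    # P(X >= k) for a hypergeometric draw of n genes from M, K of which are in the set
    total = comb(M, n)
    return sum(comb(K, i) * comb(M - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Hypothetical example: 3 candidate genes, all inside a 4-gene set, 10 assayed genes
p = enrichment_pvalue({'A', 'B', 'C'}, {'A', 'B', 'C', 'D'}, 10)
```

With these toy numbers the overlap is far larger than chance predicts, so the p-value is small; in practice one would run this test against every gene set in a collection and correct for multiple testing.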
During the past few decades, the focus of cancer systems biology extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called pathway enrichment analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the limited coverage of pathway data needs to be considered: because pathway databases do not yet cover all known genes, integrating omics data with pathways can lead to a substantial loss of genes from the analysis. Genes that cannot be mapped to any pathway are called ‘pathway orphans.’ Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers consider gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
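The ‘pathway orphan’ problem described above can be quantified directly: given a list of measured genes and a pathway database, the orphans are a simple set difference. A minimal sketch (the toy gene and pathway names are hypothetical):

```python
def pathway_orphans(measured_genes, pathways):
    """Identify 'pathway orphans': measured genes that map to no known pathway.

    measured_genes: iterable of gene names from an omics experiment
    pathways: dict mapping pathway name -> set of member gene names
    Returns the orphan set and the orphan fraction.
    """
    covered = set()
    for members in pathways.values():
        covered |= members                      # union of all pathway members
    orphans = set(measured_genes) - covered     # genes with no pathway annotation
    return orphans, len(orphans) / len(measured_genes)

# Hypothetical example: four measured genes, two toy single-gene pathways
orphans, frac = pathway_orphans(['A', 'B', 'C', 'D'],
                                {'p1': {'A'}, 'p2': {'B'}})
```

Reporting this fraction alongside a PEA result makes explicit how much of the data the pathway-based analysis actually used.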
2.2. De Novo Construction of Biological Networks
While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and can guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico network building and mining contributes to expanding our knowledge of biological systems. For instance, a gene co-expression network helps discover gene modules with similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction and has led the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, the integration of these two types of data remains tricky. Ballouz et al. compared microarray- and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to find disease-specific co-expression gene modules. Thus, various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from gene co-expression networks when data from various conditions in the same organism are available.
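The weighted co-expression adjacency at the heart of WGCNA-style analysis can be sketched in a few lines: compute gene–gene Pearson correlations across samples and raise their absolute values to a soft-thresholding power. This is only the first step of the real WGCNA workflow (module detection via topological overlap and hierarchical clustering follows in the actual package), and the power value below is an illustrative default, not a recommendation:

```python
import numpy as np

def coexpression_adjacency(expr, beta=6):
    """Weighted co-expression adjacency in the spirit of WGCNA.

    expr: samples x genes expression matrix (np.ndarray)
    beta: soft-thresholding power emphasizing strong correlations
    """
    corr = np.corrcoef(expr, rowvar=False)   # gene x gene Pearson correlation
    adj = np.abs(corr) ** beta               # soft thresholding
    np.fill_diagonal(adj, 0.0)               # no self-edges
    return adj

# Toy matrix: gene 0 and gene 1 are perfectly correlated, gene 2 is nearly unrelated
expr = np.array([[1, 1, 5],
                 [2, 2, 1],
                 [3, 3, 4],
                 [4, 4, 2],
                 [5, 5, 3]], dtype=float)
adj = coexpression_adjacency(expr)
```

Soft thresholding (rather than a hard correlation cutoff) preserves the continuous nature of co-expression while suppressing weak, likely spurious edges.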
Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or include different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylation, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on particular experimental setups. For instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
2.3. Network Based Machine Learning
A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks became suitable for integration into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering both the values of nodes and edges and the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed independently before integration with a network; subsequently, omics data and networks are merged and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of the various omics data types, multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when machine learning approaches try to integrate two or more data types to find novel biological insights, one common solution is to reduce the search space to the gene or protein level and then integrate the heterogeneous data types [25,88].
In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.
3. Network-Based Learning in Cancer Research
As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.
3.1. Molecular Characterization with Network Information
Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet. The algorithm uses a heat-diffusion model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by the pair is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
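Network propagation as described above can be illustrated with a random-walk-with-restart diffusion, conceptually similar to (but far simpler than) the heat-diffusion model used by HotNet. The adjacency matrix and seed scores below are toy values; real applications propagate mutation frequencies over genome-scale interaction networks:

```python
import numpy as np

def network_propagation(adj, seed_scores, restart=0.5, n_iter=50):
    """Random-walk-with-restart propagation of gene scores over a network.

    adj: symmetric adjacency matrix (genes x genes)
    seed_scores: initial scores, e.g., per-gene mutation frequency
    restart: probability of jumping back to the seed distribution
    """
    deg = adj.sum(axis=0)
    W = adj / np.where(deg > 0, deg, 1)     # column-normalized transition matrix
    f = seed_scores.astype(float)
    for _ in range(n_iter):                  # iterate to (near) convergence
        f = (1 - restart) * W @ f + restart * seed_scores
    return f

# Toy path graph 0 - 1 - 2 with the "mutation signal" seeded at gene 0
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
scores = network_propagation(adj, np.array([1., 0., 0.]))
```

After propagation, the score decays with network distance from the seed, which is exactly the behavior that lets such methods rank genes near known drivers.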
Furthermore, in transcriptome research, network information is used to measure pathway activity, with applications in cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to the gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome and other omics data together with the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics data in the analysis to infer a patient-specific status of pathways [94]. A recent benchmark study with pan-cancer data revealed that using network structure can improve performance [57]. In conclusion, while the incompleteness of biological networks causes some loss of data, their integration has improved performance and increased interpretability in many cases.
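A deliberately simple stand-in for pathway activity scoring is the per-sample mean z-score of a pathway's member genes; published methods such as PARADIGM are far more elaborate and exploit pathway topology, but this sketch conveys the basic idea of collapsing gene-level data to pathway-level features (gene and pathway names are hypothetical):

```python
import numpy as np

def pathway_activity(expr, genes, pathways):
    """Per-sample pathway activity as the mean z-score of member genes.

    expr: samples x genes expression matrix
    genes: list of gene names matching the columns of expr
    pathways: dict mapping pathway name -> set of member gene names
    """
    z = (expr - expr.mean(axis=0)) / expr.std(axis=0)   # z-score each gene
    idx = {g: i for i, g in enumerate(genes)}
    return {
        name: z[:, [idx[g] for g in members if g in idx]].mean(axis=1)
        for name, members in pathways.items()
    }

# Toy data: 3 samples, 2 genes, one 2-gene pathway 'P'
expr = np.array([[1., 1.], [2., 2.], [3., 3.]])
act = pathway_activity(expr, ['A', 'B'], {'P': {'A', 'B'}})
```

Such pathway-level features are lower-dimensional and more interpretable than raw gene expression, which is why they are popular inputs for subtype classifiers.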
3.2. Tumor Heterogeneity Study with Network Information
Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto a pathway, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between such genes can be essential for an organism. Therefore, models analyzing such alterations integrate network-based analysis [98].
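The mutual-exclusivity idea used by tools like CoMEt and MEMo can be illustrated with a one-sided Fisher-style test asking whether two genes are co-mutated less often than chance predicts. The sketch below uses only the standard library and toy mutation calls; the actual algorithms operate on whole networks and use permutation-based statistics:

```python
from math import comb

def exclusivity_pvalue(mut_a, mut_b):
    """One-sided test for mutual exclusivity of mutations in two genes.

    mut_a, mut_b: boolean mutation calls per patient (same length).
    A small p-value means fewer co-mutated patients than expected by chance.
    """
    n = len(mut_a)
    a = sum(mut_a)                                    # patients with gene A mutated
    b = sum(mut_b)                                    # patients with gene B mutated
    both = sum(x and y for x, y in zip(mut_a, mut_b)) # co-mutated patients
    total = comb(n, a)
    # Hypergeometric lower tail: P(co-mutations <= observed)
    return sum(comb(b, k) * comb(n - b, a - k) for k in range(both + 1)) / total

# Toy cohort of 10 patients: A mutated in the first five, B in the last five
a_calls = [True] * 5 + [False] * 5
b_calls = [False] * 5 + [True] * 5
p = exclusivity_pvalue(a_calls, b_calls)
```

With these toy calls the two genes never co-occur, so the test flags them as significantly exclusive, the pattern expected when two alterations hit the same essential pathway.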
In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, the detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor–immune reactions. For example, McGrail et al. identified a relationship between DNA damage response proteins and immune cell infiltration in cancer; the analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining gene subnetworks beyond immune cell infiltration by taking tumor purity into account [102].
3.3. Drug Target Identification with Network Information
In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug–target protein network with genomic and chemical information. The proposed approaches investigated such drug–target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods for studying drug targets and drug response by integrating networks with chemical and multi-omics datasets. In a recent survey study, Chen et al. compared 13 computational methods for drug response prediction and found that gene expression profiles are crucial information for drug response prediction [105].
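As a minimal illustration of expression-based drug response prediction, a regularized linear model mapping expression profiles to a response value (e.g., an IC50) can be written in closed form. The 13 methods surveyed by Chen et al. are far more sophisticated; this sketch with invented toy data only shows the basic supervised setup:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: expression profiles -> drug response.

    X: cell lines x genes expression matrix
    y: per-cell-line drug response (e.g., log IC50)
    alpha: L2 regularization strength (essential when genes >> samples)
    """
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Toy data generated from a known weight vector so the fit can be checked
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 1.]])
w_true = np.array([2., -1.])
w = ridge_fit(X, X @ w_true, alpha=1e-8)
prediction = X @ w       # predicted responses for the training cell lines
```

In a realistic setting with tens of thousands of genes and a few hundred cell lines, the regularization term (and cross-validated choice of `alpha`) is what makes the problem solvable at all.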
Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models, aiming to discover potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and related drug datasets can be integrated to combine anticancer therapeutic strategies with non-cancer drug repurposing.
4. Deep Learning in Cancer Research
DNN models have developed rapidly and become increasingly sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models, which require a large amount of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].
In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1); therefore, an integration of GEO data with other data requires careful preprocessing. Overall, this increasing amount of datasets facilitates the development of current deep learning in bioinformatics research [115].
4.1. Challenges for Deep Learning in Cancer Research
Many studies in biology and medicine have used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116]. It has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information from different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, the GAN algorithm has been receiving more attention in single-cell transcriptomics because it has been recognized as a complementary technique to overcome the limitations of scRNA-seq [123].
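A minimal, assumption-laden sketch of omics imputation is k-nearest-neighbor imputation: missing entries in a sample are filled in from the most similar samples, with similarity computed over mutually observed features. DNN-based imputers such as TDimpute learn far richer cross-omics mappings; the toy matrix below is purely illustrative:

```python
import numpy as np

def knn_impute(X, k=2):
    """Impute NaNs in a samples x genes matrix from the k most similar samples.

    Similarity = Euclidean distance over features observed in both samples.
    A deliberately minimal sketch; real imputers scale and weight neighbors.
    """
    X = X.astype(float)
    out = X.copy()
    for i in range(X.shape[0]):
        missing = np.isnan(X[i])
        if not missing.any():
            continue
        dists = []
        for j in range(X.shape[0]):
            if j == i:
                continue
            shared = ~np.isnan(X[i]) & ~np.isnan(X[j])   # features observed in both
            if shared.any():
                dists.append((np.linalg.norm(X[i, shared] - X[j, shared]), j))
        dists.sort()
        neighbors = [j for _, j in dists[:k]]
        for col in np.where(missing)[0]:
            vals = [X[j, col] for j in neighbors if not np.isnan(X[j, col])]
            if vals:
                out[i, col] = np.mean(vals)              # average over neighbors
    return out

# Toy matrix: sample 1 is missing one gene; its nearest neighbor is sample 0
X = np.array([[1., 2., 3.],
              [1., 2., np.nan],
              [10., 10., 10.]])
filled = knn_impute(X, k=1)
```

The core assumption, that similar samples have similar values for the missing gene, is the same one that motivates the far more flexible DNN-based imputers.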
In contrast to data imputation and generation, other machine learning approaches aim to cope with limited datasets in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space using similar but distinct datasets and to guide the model towards a specific set of problems [124]. These approaches pre-train models on data that are similar in characteristics and type to, yet different from, the problem set; after pre-training, the model can be fine-tuned on the dataset of interest [125,126]. Thus, researchers are introducing few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes from gene expression profiles [129].
Figure 3. (a) In various studies, NGS data are transformed into different forms. The 2-D transformed form serves as input for convolution layers. Omics data are transformed to the pathway level, GO enrichment scores, or functional spectra. (b) Different DNN approaches to handle the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and then fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples: a GCN is applied to aggregate neighbor information. (Created with BioRender.com).
4.2. Molecular Characterization with Network and DNN Models
DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier; they implemented data sparsity reduction methods and trained the DNN model on somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring gene mutation profiles serves as the input for the convolution layer, which aggregates the mutation information of the k-nearest neighboring genes [11]. Instead of embedding into a 2-D image, DeepCC transformed gene expression data into functional spectra; the resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
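The chromosome-order embedding used by Lyu et al. and DeepGx amounts to reshaping a 1-D, position-sorted expression vector into a zero-padded 2-D array that a convolution layer can consume. A minimal sketch (the width parameter is an arbitrary illustrative choice, not the value used in the cited papers):

```python
import numpy as np

def embed_2d(profile, width=4):
    """Reshape a 1-D gene expression profile into a zero-padded 2-D array.

    profile: 1-D array of expression values, pre-sorted (e.g., by chromosome
    position) so that adjacent pixels correspond to genomically nearby genes.
    """
    height = int(np.ceil(len(profile) / width))
    padded = np.zeros(height * width)       # zero-pad the tail of the last row
    padded[:len(profile)] = profile
    return padded.reshape(height, width)

# Toy profile of 10 "genes" mapped onto a 3 x 4 image
img = embed_2d(np.arange(10), width=4)
```

The point of the embedding is that 2-D convolution kernels can then pick up local patterns among genomically adjacent genes, exactly what a 1-D dense layer would ignore.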
Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retained the signature mutation pattern of their original cancer. In this context, their DNN model achieved better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].
4.3. Tumor Heterogeneity with Network and DNN Model
As described in Section 4.1, cancer heterogeneity raises several issues, e.g., the tumor microenvironment. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed Scaden, a DNN model for the investigation of intratumor heterogeneity that deconvolves cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]. However, this information is typically not available. Recently, single-cell sequencing-based studies were conducted to tackle this problem. Because of technical limitations, single-cell sequencing data contain substantial amounts of missing data, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data; they aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser, and at the same time they embed high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
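The encode-then-reconstruct idea behind such autoencoders can be illustrated with its linear special case, truncated SVD (i.e., PCA): cells are embedded into a low-dimensional latent space, and the reconstruction from that space acts as a denoised version of the input. Models such as scDeepCluster replace this linear map with deep non-linear networks and add explicit count-noise modeling; the rank-deficient toy matrix below is for illustration only:

```python
import numpy as np

def linear_autoencode(X, n_latent=2):
    """Linear 'autoencoder' via truncated SVD (a PCA sketch of the idea).

    X: cells x genes expression matrix.
    Returns the latent embedding (encoder output) and the reconstruction
    from that embedding (decoder output, acting as a denoised X).
    """
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    latent = U[:, :n_latent] * S[:n_latent]          # cells x n_latent embedding
    reconstruction = latent @ Vt[:n_latent] + mu     # back to cells x genes
    return latent, reconstruction

# Toy rank-1 "expression" matrix: one latent dimension reconstructs it exactly
X = np.array([[1., 2.], [2., 4.], [3., 6.]])
latent, rec = linear_autoencode(X, n_latent=1)
```

In the non-linear deep version, the same two outputs exist, a compact per-cell embedding used for clustering, and a reconstruction whose deviation from the input drives training.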
4.4. Drug Target Identification with Networks and DNN Models
In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied for repurposing non-oncology drugs for cancer treatment. Such drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders; it used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model validated on an independent drug–disease dataset [15].
The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets together with other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can show higher prediction accuracy than single-omics methods. MOLI, for instance, integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].
4.5. Graph Neural Network Model
In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre-/post-integration of a network, recently developed graph neural networks use biological networks as the base structure for the learning network itself. For instance, various pathway or interactome information can be integrated as the learning structure of a DNN, and heterogeneous information can be aggregated. In a GNN, the convolution is performed on the provided network structure of the data; convolution on a biological network therefore allows the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d). Consequently, this model can aggregate information from far-distant neighbors and thus can outperform other machine learning models [142].
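The graph convolution step described here can be sketched as a single Kipf-and-Welling-style GCN layer: the adjacency matrix, with self-loops added and symmetrically normalized, mixes each gene's features with those of its neighbors before a learned linear map and non-linearity. The toy adjacency, identity features, and identity weights below are for illustration only:

```python
import numpy as np

def gcn_layer(adj, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    adj: genes x genes adjacency matrix of the biological network
    H: genes x in_features node feature matrix (e.g., omics values)
    W: in_features x out_features learned weight matrix
    """
    A_hat = adj + np.eye(adj.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric norm
    return np.maximum(A_norm @ H @ W, 0.0)                # ReLU activation

# Toy path graph 0 - 1 - 2 with one-hot node features and identity weights
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
out = gcn_layer(adj, np.eye(3), np.eye(3))
```

After one layer, each gene's representation already contains contributions from its direct neighbors; stacking such layers extends the receptive field to far-distant neighbors, which is the mechanism behind the performance gains described above.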
In the context of the gene expression inference problem, the main question is whether a gene's expression level can be explained by aggregating its neighboring genes. A single-gene inference study by Dutil et al. showed that the GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures with RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways and using them for subtype classification [147]. Furthermore, GNNs are also attracting attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both the chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c). Each of the proposed applications specializes in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3D structures of chemical molecules and proteins.
The third challenge is integrating heterogeneous network information. Drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve this integrative analysis. Lastly, although GNNs operate on graphs, their stacked layers still make the models hard to interpret [148].
4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge
The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, they are hardly a panacea for all research questions. In the following, we discuss potential limitations of DNN models. In general, DNN models with NGS data have two significant issues: (i) data requirements and (ii) interpretability. Deep learning usually needs a large amount of training data for reasonable performance, which is more difficult to achieve with biomedical omics data than with, for instance, image data. Today, there are few NGS datasets that are well curated and annotated for deep learning, which may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes: highly stacked layers obscure the model's decision-making rationale. Although methodologies to understand and interpret deep learning models have improved, the ambiguity of a DNN's decision-making hinders the transition of deep learning models into translational medicine [149,150].
As described before, biological networks are employed in various computational analyses for cancer research, and DNN studies have demonstrated many different ways to use such prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be established. The LINCS program analyzed data from the Connectivity Map (CMap) project to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels are inferable from only about 1000 genes, which they called 'landmark genes'. Subsequently, Chen et al. started with these 978 landmark genes and predicted other gene expression levels with DNN models. Integrating public large-scale NGS data yielded better performance than a linear regression model. The authors conclude that the performance advantage originates from the DNN's ability to model non-linear relationships between genes [153].
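The LINCS-style inference task can be sketched with synthetic data: a linear least-squares baseline maps landmark genes to target genes, with a non-linear (tanh) dependence standing in for the relationships a DNN could capture but a linear model cannot. All sizes and data here are toy stand-ins, not LINCS data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the LINCS setup: predict the rest of the
# transcriptome from landmark genes (here 10 landmarks, 4 target
# genes, 200 synthetic profiles; all sizes illustrative).
n_samples, n_landmark, n_target = 200, 10, 4
landmarks = rng.normal(size=(n_samples, n_landmark))
true_map = rng.normal(size=(n_landmark, n_target))
targets = np.tanh(landmarks @ true_map)   # non-linear dependence

# Linear baseline (ordinary least squares), analogous to the
# comparison model in Chen et al.
coef, *_ = np.linalg.lstsq(landmarks, targets, rcond=None)
pred = landmarks @ coef
mse = np.mean((pred - targets) ** 2)
# A DNN can fit the tanh non-linearity that the linear model
# misses, which is the reported source of its advantage.
assert coef.shape == (n_landmark, n_target)
assert mse > 0.0
```

The residual error of the linear fit comes entirely from the non-linearity, illustrating why a non-linear model has room to improve on this task.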
Following this study, Beltin et al. extensively investigated various biological networks in the same context, the inference of gene expression levels. They set up a simplified representation of gene expression status and solved a binary classification task. To assess the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in the study incorporating TCGA and GTEx datasets, the random-network model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be regarded as a panacea, and careful evaluation is required for each dataset and task. In particular, this result may not reflect biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles because they mix expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.
“However, although recently sophisticated applications of deep learning have shown improved accuracy, this does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, and a particular research question, the technology and network data have to be chosen carefully.”
Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
Hutter, C.; Zenklusen, J.C. The cancer genome atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.
Use of Systems Biology in Anti-Microbial Drug Development
Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965
In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.
Genome Sequences and Proteomic Structural Databases
In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has led to essential genes of the bacteria being revealed and to a better understanding of the genetic diversity in different strains that might lead to a selective advantage (Coll et al., 2018). This will help with our understanding of the mode of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have structures determined experimentally.
Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.
There is a dearth of information related to structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the Protein Data Bank (PDB). However, the high sequence similarity in protein-coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single-template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and for building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability, and impacts of mutations.
We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example of this is the RNA-polymerase holoenzyme complex from M. leprae. We first modeled the structure of this hetero-hexamer complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a known drug used to treat tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “Computational Saturation Mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.
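The enumeration at the heart of a computational saturation mutagenesis scan can be sketched as follows; the stability scoring done by a predictor such as mCSM is not reproduced here, and the sequence is a toy example:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def saturation_mutations(sequence):
    """Enumerate all 19 possible substitutions at every position of a
    protein sequence: the input that a stability predictor (e.g., mCSM)
    scores one by one in a computational saturation mutagenesis scan.
    (Illustrative sketch; the scoring step itself is omitted.)"""
    for pos, wild_type in enumerate(sequence, start=1):
        for mutant in AMINO_ACIDS:
            if mutant != wild_type:
                yield f"{wild_type}{pos}{mutant}"   # e.g. S437H

muts = list(saturation_mutations("SQR"))
assert len(muts) == 3 * 19   # 19 substitutions per residue
assert muts[0] == "S1A"
```

Per-residue aggregates of the resulting scores (e.g., the maximum destabilizing effect among the 19 substitutions) are what drive color maps like the one in Figure 2A.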
FIGURE 2
Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the ß-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which grades from red (high destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the ß-subunit of RNA polymerase, the S437H substitution, which resulted in the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and rifampin affinity for the ß-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen bond interactions. Ring-ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).
Examples of Understanding and Combatting Resistance
The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.
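The screening step can be sketched as a lookup of called variants against a catalogue of known resistance mutations; the two catalogue entries below are illustrative placeholders, not a validated catalogue:

```python
# Minimal sketch: screen called variants from a clinical isolate
# against a catalogue of known resistance mutations. The gene names,
# variants, and drug associations are illustrative placeholders.
RESISTANCE_CATALOGUE = {
    ("rpoB", "S450L"): "rifampicin",
    ("katG", "S315T"): "isoniazid",
}

def screen_isolate(variants):
    """Split variants into known-resistance hits and novel variants;
    the latter are the candidates for follow-up in silico analysis."""
    hits, novel = [], []
    for gene, variant in variants:
        drug = RESISTANCE_CATALOGUE.get((gene, variant))
        (hits if drug else novel).append((gene, variant, drug))
    return hits, novel

hits, novel = screen_isolate([("rpoB", "S450L"), ("gyrA", "D94G")])
assert hits == [("rpoB", "S450L", "rifampicin")]
assert novel == [("gyrA", "D94G", None)]
```

Variants that fall outside the catalogue are exactly the ones the text suggests prioritizing for structural and experimental characterization.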
FIGURE 3
Figure 3.(A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).
Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:
Improving diagnostic yield in pediatric cancer precision medicine
Elaine R Mardis
The advent of genomics has revolutionized how we diagnose and treat lung cancer
We currently need to understand the driver mutations and variants so that we can personalize therapy
PD-L1 and other checkpoint therapy have not really been used in pediatric cancers even though CAR-T have been successful
The incidence rates and mortality rates of pediatric cancers are rising
A large-scale study of over 700 pediatric cancers showed cancers driven by epigenetic drivers or fusion proteins, underscoring the need for transcriptomics. The study also demonstrated that we have underestimated germline mutations and hereditary factors.
They put together a database to nominate patients for their IGM Cancer protocol. This involves genetic counseling and obtaining germline samples to determine hereditary factors. RNA and protein are evaluated as well as exome sequencing; RNA-Seq and the Archer Dx test are used to identify driver fusions
The PECAN curated database from St. Jude is used to determine driver mutations. They use multiple databases, overlapping hits across these databases and a knowledge base, to confirm drivers or weed out false positives
They have used these studies to understand the immune infiltrate into recurrent cancers (CytoCure)
They found 40 germline cancer predisposition genes, 47 driver somatic fusion proteins, 81 potential actionable targets, 106 CNV, 196 meaningful somatic driver mutations
Despite the COVID pandemic, NCI is functioning well with respect to grant reviews, research, and general operations; amid the mass demonstrations, there is also a focus on the disparities that occur in the cancer research field and in cancer care
There are ongoing efforts at NCI to make a positive difference in racial injustice, diversity in the cancer workforce, and for patients as well
Need a diverse workforce across the cancer research and care spectrum
Data show that areas where the clinicians are successful in putting African Americans on clinical trials are areas (geographic and site specific) where health disparities are narrowing
Grants through NCI's new SeroNet for COVID-19 serologic testing are funded by two RFAs through NIAID (RFA-CA-30-038 and RFA-CA-20-039), which will close on July 22, 2020
Tuesday, June 23
12:45 PM – 1:46 PM EDT
Virtual Educational Session
Immunology, Tumor Biology, Experimental and Molecular Therapeutics, Molecular and Cellular Biology/Genetics
This educational session will update cancer researchers and clinicians about the latest developments in the detailed understanding of the types and roles of immune cells in tumors. It will summarize current knowledge about the types of T cells, natural killer cells, B cells, and myeloid cells in tumors and discuss current knowledge about the roles these cells play in the antitumor immune response. The session will feature some of the most promising up-and-coming cancer immunologists who will inform about their latest strategies to harness the immune system to promote more effective therapies.
Judith A Varner, Yuliya Pylayeva-Gupta
Introduction
Judith A Varner
New techniques reveal critical roles of myeloid cells in tumor development and progression
Different type of cells are becoming targets for immune checkpoint like myeloid cells
In T-cell-excluded or "desert" tumors, T cells are held at the periphery, but myeloid cells can still infiltrate, so macrophage-targeted approaches might be effective in these T-cell-naïve tumors; macrophages are the most abundant immune cell type in tumors
CXCLs are potential targets
PI3K delta inhibitors reduce the infiltration of suppressive myeloid cells such as macrophages
When to give myeloid-targeted versus T-cell therapy is the open issue
Judith A Varner
Novel strategies to harness T-cell biology for cancer therapy
Positive and negative roles of B cells in cancer
Yuliya Pylayeva-Gupta
New approaches in cancer immunotherapy: Programming bacteria to induce systemic antitumor immunity
There are numerous examples of highly successful covalent drugs such as aspirin and penicillin that have been in use for a long period of time. Despite historical success, there was a period of reluctance among many to purse covalent drugs based on concerns about toxicity. With advances in understanding features of a well-designed covalent drug, new techniques to discover and characterize covalent inhibitors, and clinical success of new covalent cancer drugs in recent years, there is renewed interest in covalent compounds. This session will provide a broad look at covalent probe compounds and drug development, including a historical perspective, examination of warheads and electrophilic amino acids, the role of chemoproteomics, and case studies.
Benjamin F Cravatt, Richard A. Ward, Sara J Buhrlage
Discovering and optimizing covalent small-molecule ligands by chemical proteomics
Benjamin F Cravatt
Multiple approaches are being investigated to find new covalent inhibitors, such as: 1) cysteine reactivity mapping, 2) mapping cysteine ligandability, and 3) functional screening in phenotypic assays for electrophilic compounds
Using fluorescent activity probes in proteomic screens; have broad useability in the proteome but can be specific
They screened quiescent versus stimulated T cells to determine reactive cysteines in a phenotypic screen and analyzed them by MS proteomics (cysteine reactivity profiling); this can quantitate 15,000 to 20,000 reactive cysteines
Isocitrate dehydrogenase 1 and adapter protein LCP-1 are two examples of changes in reactive cysteines they have seen using this method
They use scout molecules to target ligands or proteins with reactive cysteines
For phenotypic screens they first use a cytotoxic assay to screen out toxic compounds which just kill cells without causing T cell activation (like IL10 secretion)
Interestingly, coupling these MS reactive-cysteine screens with phenotypic screens can uncover noncanonical mechanisms for many of these target proteins (many of the compounds hit targets that were not predicted or known)
Electrophilic warheads and nucleophilic amino acids: A chemical and computational perspective on covalent modifier
The covalent targeting of cysteine residues in drug discovery and its application to the discovery of Osimertinib
Richard A. Ward
Cysteine activation: thiolate form of cysteine is a strong nucleophile
Thiolate form preferred in polar environment
Activation can be assisted by neighboring residues; the pKa affects deprotonation
The pKas of cysteines vary across EGFR
Cysteines that are too reactive cause toxicity, while those that are not reactive enough are ineffective
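The link between cysteine pKa and nucleophilicity follows the Henderson–Hasselbalch relation; a short sketch (the pKa values used are illustrative of the effect of local environment, not measured EGFR values):

```python
def thiolate_fraction(pka, ph=7.4):
    """Fraction of a cysteine in the nucleophilic thiolate (S-) form
    at a given pH, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

typical = thiolate_fraction(8.5)    # unperturbed cysteine (illustrative pKa)
activated = thiolate_fraction(6.5)  # pKa lowered by neighboring residues
assert activated > typical          # lower pKa -> more thiolate -> more reactive
```

A two-unit pKa shift changes the thiolate fraction from a few percent to the large majority, which is why neighboring-residue activation matters so much for covalent targeting.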
Accelerating drug discovery with lysine-targeted covalent probes
This Educational Session aims to guide discussion on the heterogeneous cells and metabolism in the tumor microenvironment. It is now clear that the diversity of cells in tumors each require distinct metabolic programs to survive and proliferate. Tumors, however, are genetically programmed for high rates of metabolism and can present a metabolically hostile environment in which nutrient competition and hypoxia can limit antitumor immunity.
Jeffrey C Rathmell, Lydia Lynch, Mara H Sherman, Greg M Delgoffe
T-cell metabolism and metabolic reprogramming antitumor immunity
Jeffrey C Rathmell
Introduction
Jeffrey C Rathmell
Metabolic functions of cancer-associated fibroblasts
Mara H Sherman
Tumor microenvironment metabolism and its effects on antitumor immunity and immunotherapeutic response
Greg M Delgoffe
Multiple metabolites and reactive oxygen species exist within the tumor microenvironment; is there heterogeneity within the TME metabolome that can predict a tumor's immunosensitivity?
They took melanoma cells and looked at metabolism using Seahorse (glycolysis): there was vast heterogeneity among melanoma tumor cells; some perform only oxphos and no glycolytic metabolism (inverse Warburg)
As they profiled whole tumors, they could separate out the metabolism of each cell type within the tumor, compare T cells versus stromal CAFs or tumor cells, and characterize cells as indolent or metabolic
T cells from hypoglycolytic tumors were fine, but T cells from highly glycolytic tumors were more indolent
When the glucose transporter is knocked down, the cells become more glycolytic
Patients whose tumors had high oxidative metabolism showed low sensitivity to PD-L1 therapy
Showed this result in head and neck cancer as well
With metformin, a complex I inhibitor that is less toxic than most mitochondrial oxphos inhibitors, T cells have less hypoxia and can remodel the TME and stimulate the immune response
Metformin now in clinical trials
T cells, though, seem metabolically restricted; T cells that infiltrate tumors show low mitochondrial oxidative phosphorylation
T cells from tumors have defective mitochondria and little respiratory capacity
They have some preliminary findings that metabolic inhibitors may help with CAR-T therapy
Obesity, lipids and suppression of anti-tumor immunity
Lydia Lynch
Hypothesis: obesity causes issues with anti tumor immunity
Obese people have fewer NK cells, which also produce less IFN-gamma
RNA-Seq on NOD mice: granzymes and perforins were at the top of the list of genes downregulated in obesity
The upregulated genes were involved in lipid metabolism
All were PPAR target genes
NK cells from obese patients take up palmitate, and this reduces their glycolysis; OXPHOS is also reduced; they think increased FFA essentially overloads the mitochondria
Long recognized for their role in cancer diagnosis and prognostication, pathologists are beginning to leverage a variety of digital imaging technologies and computational tools to improve both clinical practice and cancer research. Remarkably, the emergence of artificial intelligence (AI) and machine learning algorithms for analyzing pathology specimens is poised to not only augment the resolution and accuracy of clinical diagnosis, but also fundamentally transform the role of the pathologist in cancer science and precision oncology. This session will discuss what pathologists are currently able to achieve with these new technologies, present their challenges and barriers, and overview their future possibilities in cancer diagnosis and research. The session will also include discussions of what is practical and doable in the clinic for diagnostic and clinical oncology in comparison to technologies and approaches primarily utilized to accelerate cancer research.
Jorge S Reis-Filho, Thomas J Fuchs, David L Rimm, Jayanta Debnath
Old and new methods: with cell counting you first find the cells and then phenotype them; with quantification (e.g., AQUA), densitometry of the positive signal is used to set a threshold that determines the presence of a cell for counting
Hiplex versus multiplex imaging, where you have ten channels measured by cycling the fluor on the antibody (can get up to 20-plex)
Hiplex can be coupled with Mass spectrometry (Imaging Mass spectrometry, based on heavy metal tags on mAbs)
However it will still take a trained pathologist to define regions of interest or field of desired view
Introduction
Jayanta Debnath
Challenges and barriers of implementing AI tools for cancer diagnostics
Jorge S Reis-Filho
Implementing robust digital pathology workflows into clinical practice and cancer research
Jayanta Debnath
Invited Speaker
Thomas J Fuchs
Founder of spinout of Memorial Sloan Kettering
Separates AI from computational/algorithmic approaches
Dealing with not just machines but integrating human intelligence
Making decision for the patients must involve human decision making as well
How do we get experts to do these decisions faster
AI in pathology: what is difficult? Four scenarios: sandbox settings where machines are great; curated datasets; human decision-support systems or maps; or trying to predict nature
1) learning rules made by humans (a human-to-human scenario); 2) constrained nature; 3) unconstrained nature, like images or behavior; 4) predicting nature's response to nature's response to itself
In sandbox scenario the rules are set in stone and machines are great like chess playing
In second scenario can train computer to predict what a human would predict
So third scenario is like driving cars
A system working on constrained nature or a constrained dataset will take a long time for the computer to reach a decision
Fourth category is long term data collection project
He finds it is still difficult to predict nature, so going from clinical finding to prognosis still does not have good predictability with AI alone; there is a need for human involvement
End to end partnering (EPL) is a new way where humans can get more involved with the algorithm and assist with the problem of constrained data
An example of a pathology workflow, from Campanella et al. 2019 Nature Medicine: obtain digital images (they digitized a million slides), train on a massive dataset with high-throughput computing (which needed a lot of time and a big software development effort), and then train it with input from the best expert pathologists (nature-to-human, and unconstrained because no data curation was done)
This led to the first clinical-grade machine learning system (Camelyon16 was the challenge for detecting metastatic cells in lymph tissue; tested on 12,000 patients from 45 countries)
The first big hurdle was moving from manually annotated slides (a big bottleneck) to automatically extracted data from pathology reports.
Now problem is in prediction: How can we bridge the gap from predicting humans to predicting nature?
With an AI system pathologist drastically improved the ability to detect very small lesions
Incidence rates of several cancers (e.g., colorectal, pancreatic, and breast cancers) are rising in younger populations, which contrasts with either declining or more slowly rising incidence in older populations. Early-onset cancers are also more aggressive and have different tumor characteristics than those in older populations. Evidence on risk factors and contributors to early-onset cancers is emerging. In this Educational Session, the trends and burden, potential causes, risk factors, and tumor characteristics of early-onset cancers will be covered. Presenters will focus on colorectal and breast cancer, which are among the most common causes of cancer deaths in younger people. Potential mechanisms of early-onset cancers and racial/ethnic differences will also be discussed.
Stacey A. Fedewa, Xavier Llor, Pepper Jo Schedin, Yin Cao
Cancers that are and are not increasing in younger populations
Stacey A. Fedewa
Early-onset cancers, pediatric cancers, and colon cancers are increasing in younger adults
Younger people are more likely to be uninsured, and these are their most productive years, so a cancer diagnosis is a devastating life event for a young adult. They face more financial hardship, and most (70%) of young adults with cancer have had financial difficulties. It is especially hard for women, who are in their childbearing years, adding further stress
The types of early-onset cancer vary by age as well as geographic location. For example, in the 20s thyroid cancer is more common, while in the 30s it is breast cancer. Colorectal and testicular cancers are most common in the US.
SCC is decreasing by adenocarcinoma of the cervix is increasing in women’s 40s, potentially due to changing sexual behaviors
Breast cancer is increasing in younger women: maybe etiologic distinct like triple negative and larger racial disparities in younger African American women
Increased obesity among younger people is becoming a factor in this increasing incidence of early onset cancers
Other Articles on this Open Access Online Journal on Cancer Conferences and Conference Coverage in Real Time Include
June 22-24: Free Registration for AACR Members, the Cancer Community, and the Public
This virtual meeting will feature more than 120 sessions and 4,000 e-posters, including sessions on cancer health disparities and the impact of COVID-19 on clinical trials
This Virtual Meeting is Part II of the AACR Annual Meeting. Part I was held online in April and was centered only on clinical findings. This Part II of the virtual meeting will contain all the Sessions and Abstracts pertaining to basic and translational cancer research as well as clinical trial findings.
The prestigious Pezcoller Foundation-AACR International Award for Extraordinary Achievement in Cancer Research was established in 1997 to annually recognize a scientist of international renown who has made a major scientific discovery in basic cancer research OR who has made significant contributions to translational cancer research; who continues to be active in cancer research and has a record of recent, noteworthy publications; and whose ongoing work holds promise for continued substantive contributions to progress in the field of cancer. For more information regarding the 2020 award recipient go to aacr.org/awards.
Princess Margaret Cancer Centre, Toronto, Ontario
For determining how stem cells contribute to normal and leukemic hematopoeisis
not every cancer cell is equal in its cancer hallmarks
how do we monitor and measure clonal dynamics?
Bayard Clarkson did pivotal work on this
most cancer cells are post-mitotic, but minor populations of cells are dormant and survive chemotherapy
only about 1 in a million cells can regenerate and is transplantable in mice; experiments with flow cytometry resolved the question of potency, showing that only a small percentage of cells repopulate and undergo long-term clonal expansion
so instead of going to cell lines and using thousands of shRNAs, they looked at clinical data and deconvoluted the genetic information (RNA-Seq data) to determine progenitor and mature populations (how much is stem versus how much is mature)
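As a rough illustration of the deconvolution idea described above (not the speaker's actual pipeline), a bulk expression profile can be decomposed into stem and mature fractions with a least-squares fit against reference cell-type signatures. All numbers below are made up for demonstration:

```python
import numpy as np

# Hypothetical reference signatures: rows = genes, columns = cell types
# (stem, mature). Values are illustrative expression levels only.
signatures = np.array([
    [9.0, 1.0],   # stemness gene, high in stem cells
    [8.0, 2.0],
    [1.5, 7.5],   # differentiation gene, high in mature cells
    [0.5, 9.5],
])

def deconvolute(bulk, S):
    """Least-squares estimate of cell-type fractions from a bulk profile."""
    f, *_ = np.linalg.lstsq(S, bulk, rcond=None)
    f = np.clip(f, 0, None)          # fractions cannot be negative
    return f / f.sum()               # normalize so fractions sum to 1

true_fractions = np.array([0.2, 0.8])     # 20% stem, 80% mature
bulk = signatures @ true_fractions        # simulated bulk RNA-Seq measurement

est = deconvolute(bulk, signatures)
print(est)  # ≈ [0.2, 0.8]
```

Real deconvolution tools use curated signature matrices and more robust estimators, but the underlying linear-mixture assumption is the same.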
in leukemic patients they have seen massive expansion of a single stem cell population, so only one cell is needed in AML if the stem cells acquire the mutational hits early in their development
finding the “seeds of relapse”: finding the small subpopulation of stem cells that will relapse
they looked in B-ALL; there are cells resistant to L-asparaginase, dexamethasone, and vincristine
many OXPHOS-related genes (in DRIs) may be the genes involved in this resistance
In a wonderful note of acknowledgement, he dedicated this award to all of his past and present trainees who, as he said, made this field into what it is and took it in directions none of them could foresee
Monday, June 22
1:30 PM – 3:30 PM EDT
Virtual Educational Session
Experimental and Molecular Therapeutics, Drug Development, Cancer Chemistry
How can one continue to deliver innovative medicines to patients when biological targets are becoming ever scarcer and less amenable to therapeutic intervention? Are there sound strategies in place that can clear the path to targets previously considered “undruggable”? Recent advances in lead finding methods and novel technologies such as covalent screening and targeted protein degradation have enriched the toolbox at the disposal of drug discovery scientists to expand the druggable target space.
Stefan N Gradl, Elena S Koltun, Scott D Edmondson, Matthew A. Marx, Joachim Rudolph
Cancer researchers are faced with a deluge of high-throughput data. Using these data to advance understanding of cancer biology and improve clinical outcomes increasingly requires effective use of computational and informatics tools. This session will introduce informatics resources that support the data management, analysis, visualization, and interpretation. The primary focus will be on high-throughput genomic data and imaging data. Participants will be introduced to fundamental concepts
Rachel Karchin, Daniel Marcus, Andriy Fedorov, Obi Lee Griffith
Precision medicine refers to the use of prevention and treatment strategies that are tailored to the unique features of each individual and their disease. In the context of cancer this might involve the identification of specific mutations shown to predict response to a targeted therapy. The biomedical literature describing these associations is large and growing rapidly. Currently these interpretations exist largely in private or encumbered databases resulting in extensive repetition of effort.
CIViC’s Role in Precision Medicine
Realizing precision medicine will require this information to be centralized, debated and interpreted for application in the clinic. CIViC is an open access, open source, community-driven web resource for Clinical Interpretation of Variants in Cancer. Our goal is to enable precision medicine by providing an educational forum for dissemination of knowledge and active discussion of the clinical significance of cancer genome alterations. For more details refer to the 2017 CIViC publication in Nature Genetics.
U24 funding announced: We are excited to announce that the Informatics Technology for Cancer Research (ITCR) program of the National Cancer Institute (NCI) has awarded funding to the CIViC team! Starting this year, a five-year, $3.7 million U24 award (CA237719) will support CIViC to develop Standardized and Genome-Wide Clinical Interpretation of Complex Genotypes for Cancer Precision Medicine.
Informatics tools for high-throughput analysis of cancer mutations
Rachel Karchin
CRAVAT is a platform to annotate, categorize, and curate cancer mutations and cancer-related variants
adding new tools used to be hard, but an open architecture allows for modular growth and easy integration of other tools
so they are actively building an open network using social media
While LOD has had some uptake across the web, the number of databases using this protocol compared to the other technologies is still modest. But whether or not we use LOD, we do need to ensure that databases are designed specifically for the web and for reuse by humans and machines. To provide guidance for creating such databases independent of the technology used, the FAIR principles were issued through FORCE11: the Future of Research Communications and e-Scholarship. The FAIR principles put forth characteristics that contemporary data resources, tools, vocabularies and infrastructures should exhibit to assist discovery and reuse by third parties through the web (Wilkinson et al., 2016). FAIR stands for: Findable, Accessible, Interoperable and Re-usable. The definition of FAIR is provided in Table 1:
F: Findable
F1: (meta)data are assigned a globally unique and persistent identifier
F2: data are described with rich metadata
F3: metadata clearly and explicitly include the identifier of the data it describes
F4: (meta)data are registered or indexed in a searchable resource
A: Accessible
A1: (meta)data are retrievable by their identifier using a standardized communications protocol
A1.1: the protocol is open, free, and universally implementable
A1.2: the protocol allows for an authentication and authorization procedure, where necessary
A2: metadata are accessible, even when the data are no longer available
I: Interoperable
I1: (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation
I2: (meta)data use vocabularies that follow FAIR principles
I3: (meta)data include qualified references to other (meta)data
R: Reusable
R1: (meta)data are richly described with a plurality of accurate and relevant attributes
R1.1: (meta)data are released with a clear and accessible data usage license
R1.2: (meta)data are associated with detailed provenance
R1.3: (meta)data meet domain-relevant community standards
A detailed explanation of each of these is included in the Wilkinson et al., 2016 article, and the Dutch Techcenter for Life Sciences has a set of excellent tutorials, so we won’t go into too much detail here.
for outside vendors to access their data, a signed Material Transfer Agreement is required, but NCI has formulated a framework to facilitate sharing of data using the DICOM standard for imaging data
Monday, June 22
1:30 PM – 3:01 PM EDT
Virtual Educational Session
Experimental and Molecular Therapeutics, Cancer Chemistry, Drug Development, Immunology
The engineering and physical science disciplines have been increasingly involved in the development of new approaches to investigate, diagnose, and treat cancer. This session will address many of these efforts, including therapeutic methods such as improvements in drug delivery/targeting, new drugs and devices to effect immunomodulation and to synergize with immunotherapies, and intraoperative probes to improve surgical interventions. Imaging technologies and probes, sensors, and biomaterials will also be covered.
Claudia Fischbach, Ronit Satchi-Fainaro, Daniel A Heller
How should we think about exceptional and super responders to cancer therapy? What biologic insights might ensue from considering these cases? What are ways in which considering super responders may lead to misleading conclusions? What are the pros and cons of the quest to locate exceptional and super responders?
Alice P Chen, Vinay K Prasad, Celeste Leigh Pearce
The reprogramming of cellular metabolism is a hallmark feature observed across cancers. Contemporary research in this area has led to the discovery of tumor-specific metabolic mechanisms and illustrated ways that these can serve as selective, exploitable vulnerabilities. In this session, four international experts in tumor metabolism will discuss new findings concerning the rewiring of metabolic programs in cancer that support metabolic fitness, biosynthesis, redox balance, and the regulation of these processes.
Costas Andreas Lyssiotis, Gina M DeNicola, Ayelet Erez, Oliver Maddocks
The Opening Ceremony will include the following presentations: Welcome from AACR CEO Margaret Foti, PhD, MD (hc)
CHIEF EXECUTIVE OFFICER
MARGARET FOTI, PHD, MD (HC)
American Association for Cancer Research
Philadelphia, Pennsylvania
Dr. Foti mentions that AACR is making progress in including more ethnic and gender equality in cancer research, and she feels that the disparities seen in health care, and in cancer care, are related to the disparities seen in the cancer research profession
AACR is very focused now on blood cancers and creating innovation summits on this matter
In 2019 awarded over 60 grants but feel they will be able to fund more research in 2020
Government funding is insufficient at current levels
Remarks from AACR Immediate Past President Elaine R. Mardis, PhD, FAACR
involved in planning and success of the first virtual meeting (it was really well done)
the number of registrants was unprecedented
the scope for this meeting will be wider than the first meeting
they have included special sessions including COVID19 and health disparities
70 educational and methodology workshops on over 70 channels
AACR Award for Lifetime Achievement in Cancer Research
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 27, 2020 Minisymposium on AACR Project Genie & Bioinformatics 4:00 PM – 6:00 PM
April 27, 2020, 4:00 PM – 6:00 PM
Virtual Meeting: All Session Times Are U.S. EDT
Session Type
Virtual Minisymposium
Track(s)
Bioinformatics and Systems Biology
17 Presentations
4:00 PM – 6:00 PM
– Chairperson Gregory J. Riely. Memorial Sloan Kettering Cancer Center, New York, NY
4:00 PM – 4:01 PM
– Introduction Gregory J. Riely. Memorial Sloan Kettering Cancer Center, New York, NY
Precision medicine requires an end-to-end learning healthcare system, wherein the treatment decisions for patients are informed by the prior experiences of similar patients. Oncology is currently leading the way in precision medicine because the genomic and other molecular characteristics of patients and their tumors are routinely collected at scale. A major challenge to realizing the promise of precision medicine is that no single institution is able to sequence and treat sufficient numbers of patients to improve clinical-decision making independently. To overcome this challenge, the AACR launched Project GENIE (Genomics Evidence Neoplasia Information Exchange).
AACR Project GENIE is a publicly accessible international cancer registry of real-world data assembled through data sharing between 19 of the leading cancer centers in the world. Through the efforts of strategic partners Sage Bionetworks (https://sagebionetworks.org) and cBioPortal (www.cbioportal.org), the registry aggregates, harmonizes, and links clinical-grade, next-generation cancer genomic sequencing data with clinical outcomes obtained during routine medical practice from cancer patients treated at these institutions. The consortium and its activities are driven by openness, transparency, and inclusion, ensuring that the project output remains accessible to the global cancer research community for the benefit of all patients. AACR Project GENIE fulfills an unmet need in oncology by providing the statistical power necessary to improve clinical decision-making, particularly in the case of rare cancers and rare variants in common cancers. Additionally, the registry can power novel clinical and translational research.
Because we collect data from nearly every patient sequenced at participating institutions and have committed to sharing only clinical-grade data, the GENIE registry contains enough high-quality data to power decision making on rare cancers or rare variants in common cancers. We see the GENIE data providing another knowledge turn in the virtuous cycle of research, accelerating the pace of drug discovery, improving the clinical trial design, and ultimately benefiting cancer patients globally.
The first set of cancer genomic data aggregated through AACR Project Genomics Evidence Neoplasia Information Exchange (GENIE) was available to the global community in January 2017. The seventh data set, GENIE 7.0-public, was released in January 2020 adding more than 9,000 records to the database. The combined data set now includes nearly 80,000 de-identified genomic records collected from patients who were treated at each of the consortium’s participating institutions, making it among the largest fully public cancer genomic data sets released to date. These data will be released to the public every six months. The public release of the eighth data set, GENIE 8.0-public, will take place in July 2020.
The combined data set now includes data for over 80 major cancer types, including data from greater than 12,500 patients with lung cancer, nearly 11,000 patients with breast cancer, and nearly 8,000 patients with colorectal cancer.
For more details about the data, analyses, and summaries of the data attributes from this release, GENIE 7.0-public, consult the data guide.
Users can access the data directly via cbioportal, or download the data directly from Sage Bionetworks. Users will need to create an account for either site and agree to the terms of access.
For frequently asked questions, visit our FAQ page.
In fall of 2019, AACR announced the BioPharma Collaborative, which collects pan-cancer data in conjunction with, and with the support of, a host of big pharma and biotech companies
they have a goal to expand to more than 6 cancer types and more than 50,000 records, including smoking habits, lifestyle data, etc.
They have started with NSCLC and have done mutational analysis on these cases
tumor mutational burden is included, and using cBioPortal one is able to explore the genomic data even further
treatment data is included as well
they need to collect highly CURATED data with a PRISM backbone to get more than outcome data, like progression data
they might look to incorporate digital pathology, but they are not there yet; good artificial intelligence systems will be needed
4:01 PM – 4:15 PM
– Invited Speaker Gregory J. Riely. Memorial Sloan Kettering Cancer Center, New York, NY
4:15 PM – 4:20 PM
– Discussion
4:20 PM – 4:30 PM
1092 – A systematic analysis of BRAF mutations and their sensitivity to different BRAF inhibitors: Zohar Barbash, Dikla Haham, Liat Hafzadi, Ron Zipor, Shaul Barth, Arie Aizenman, Lior Zimmerman, Gabi Tarcic. Novellusdx, Jerusalem, Israel
Abstract: The MAPK-ERK signaling cascade is among the most frequently mutated pathways in human cancer, with the BRAF V600 mutation being the most common alteration. FDA-approved BRAF inhibitors as well as combination therapies of BRAF and MEK inhibitors are available and provide survival benefits to patients with a BRAF V600 mutation in several indications. Yet non-V600 BRAF mutations are found in many cancers and are even more prevalent than V600 mutations in certain tumor types. As the use of NGS profiling in precision oncology is becoming more common, novel alterations in BRAF are being uncovered. This has led to the classification of BRAF mutations, which depends on their biochemical properties and affects their sensitivity to inhibitors. Therefore, annotation of these novel variants is crucial for assigning correct treatment. Using a high throughput method for functional annotation of MAPK activity, we profiled 151 different BRAF mutations identified in the AACR Project GENIE dataset, and their response to 4 different BRAF inhibitors: vemurafenib and 3 different exploratory 2nd generation inhibitors. The system is based on rapid synthesis of the mutations and expression of the mutated protein together with fluorescently labeled reporters in a cell-based assay. Our results show that of the 151 different BRAF mutations, ~25% were found to activate the MAPK pathway. All of the class 1 and 2 mutations tested were found to be active, providing positive validation for the method. Additionally, many novel activating mutations were identified, some outside of the known domains. When testing the response of the active mutations to different classes of BRAF inhibitors, we show that while vemurafenib efficiently inhibited V600 mutations, other types of mutations and specifically BRAF fusions were not inhibited by this drug.
Alternatively, the second-generation experimental inhibitors were effective against both V600 as well as non-V600 mutations. Using this large-scale approach to characterize BRAF mutations, we were able to functionally annotate the largest number of BRAF mutations to date. Our results show that the number of activating variants is large and that they possess differential sensitivity to different types of direct inhibitors. This data can serve as a basis for rational drug design as well as more accurate treatment options for patients.
Molecular profiling is becoming imperative for successful targeted therapies
there are over 500 unique mutations in BRAF, so a bioinformatic pipeline is needed; start with NGS panels, then cluster according to different subtypes or class-specific patterns
certain mutations, like V600E, show distinct clustering in tumor types
25% of mutations occur with other mutations; mutations may not be functional; they used a high-throughput system to analyze other BRAF mutations to determine if they are functional
active yet uncharacterized BRAF mutations are seen in a major proportion of human tumors
using genomic drug data, they found that many inhibitors, like vemurafenib, are specific to a particular mutation, but other inhibitors that are not specific to a cleft can inhibit other BRAF mutants
40% of 135 mutants were functionally active
USE of functional profiling instead of just genomic profiling
Q&A: They have already used this platform and analysis for RTKs and other genes as well, successfully
Q&A: How do you deal with co-occurring mutations? The platform is able to do RTKs plus signaling proteins
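The class-dependent inhibitor sensitivity reported in this talk can be sketched as a toy rule-based annotator. The class assignments and the mapping to sensitivity below are simplified assumptions for illustration, not the presenters' platform, which derives activity from functional assays:

```python
# Toy class assignments for a few well-known BRAF variants.
BRAF_CLASS = {
    "V600E": 1, "V600K": 1,   # class 1: RAS-independent, act as monomers
    "K601E": 2, "G469A": 2,   # class 2: RAS-independent, act as dimers
    "G466V": 3, "D594G": 3,   # class 3: kinase-impaired, RAS-dependent
}

def predicted_sensitivity(variant: str) -> dict:
    """Predict inhibitor sensitivity from a simple class lookup.

    Mirrors the reported pattern: vemurafenib works on V600 (class 1)
    mutations, while fusions and many non-V600 mutations were only
    inhibited by second-generation agents.
    """
    if "fusion" in variant.lower():
        return {"vemurafenib": False, "second_gen": True}
    cls = BRAF_CLASS.get(variant)
    if cls is None:
        # Unannotated variant: functional profiling would be needed.
        return {"vemurafenib": False, "second_gen": False}
    return {"vemurafenib": cls == 1, "second_gen": True}

print(predicted_sensitivity("V600E"))            # class 1: both drug types
print(predicted_sensitivity("G469A"))            # class 2: second-gen only
print(predicted_sensitivity("AGK-BRAF fusion"))  # fusion: second-gen only
```

A real pipeline would replace the lookup table with measured MAPK reporter activity per variant, which is the point of the functional profiling approach described above.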
4:30 PM – 4:35 PM
– Discussion
4:35 PM – 4:45 PM
1093 – Calibration Tool for Genomic Aggregates (CTGA): A deep learning framework for calibrating somatic mutation profiling data from conventional gene panel data. Jordan Anaya, Craig Cummings, Jocelyn Lee, Alexander Baras. Johns Hopkins Sidney Kimmel Comprehensive Cancer Center, MD, Genentech, Inc., CA, AACR, Philadelphia, PA
Abstract: It has been suggested that aggregate genomic measures such as mutational burden can be associated with response to immunotherapy. Arguably, the gold standard for deriving such aggregate genomic measures (AGMs) would be from exome level sequencing. While many clinical trials run exome level sequencing, the vast majority of routine genomic testing performed today, as seen in AACR Project GENIE, is targeted / gene-panel based sequencing.
Despite the smaller size of these gene panels focused on clinically targetable alterations, it has been shown they can estimate, to some degree, exomic mutational burden, usually by normalizing mutation count by the relevant size of the panels. These smaller gene panels exhibit significant variability both in terms of accuracy relative to exomic measures and in comparison to other gene panels. While many genes are common to the panels in AACR Project GENIE, hundreds are not. These differences in extent of coverage and genomic loci examined can result in biases that may negatively impact panel-to-panel comparability.
To address these issues we developed a deep learning framework to model exomic AGMs, such as mutational burden, from gene panel data as seen in AACR Project GENIE. This framework can leverage any available sample and variant level information, in which variants are featurized to effectively re-weight their importance when estimating a given AGM, such as mutational burden, through the use of multiple instance learning techniques in this form of weakly supervised data.
Using TCGA data in conjunction with AACR Project GENIE gene panel definitions, as a proof of concept, we first applied this framework to learn expected variant features such as codons and genomic position from mutational data (greater than 99.9% accuracy observed). Having established the validity of the approach, we then applied this framework to somatic mutation profiling data in which we show that data from gene panels can be calibrated to exomic TMB and thereby improve panel-to-panel compatibility. We observed approximately 25% improvements in mean squared error and R-squared metrics when using our framework over conventional approaches to estimate TMB from gene panel data across the 9 tumor types examined (spanning melanoma, lung cancer, colon cancer, and others). This work highlights the application of sophisticated machine learning approaches towards the development of needed calibration techniques across seemingly disparate gene panel assays used clinically today.
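For context, the conventional panel-based TMB estimate that the abstract's deep learning framework improves upon is simply the mutation count normalized by the panel's sequenced footprint in megabases. The panel sizes below are hypothetical:

```python
# Conventional tumor mutational burden (TMB) estimate from a gene panel:
# mutations per megabase of covered territory.

def tmb_per_mb(mutation_count: int, panel_size_bp: int) -> float:
    """Mutations per megabase over the panel's covered footprint."""
    return mutation_count / (panel_size_bp / 1e6)

# Two hypothetical panels profiling the same tumor report different raw
# mutation counts simply because they cover different territory; the
# per-megabase normalization makes them roughly comparable.
print(tmb_per_mb(12, 1_200_000))  # a 1.2 Mb panel: ~10 mut/Mb
print(tmb_per_mb(4, 400_000))     # a 0.4 Mb panel: ~10 mut/Mb
```

As the abstract notes, this simple normalization ignores which loci each panel covers, which is why panel-specific biases remain and a learned calibration can do better.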
4:45 PM – 4:50 PM
– Discussion
4:50 PM – 5:00 PM
1094 – Genetic determinants of EGFR-driven lung cancer growth and therapeutic response in vivo. Giorgia Foggetti, Chuan Li, Hongchen Cai, Wen-Yang Lin, Deborah Ayeni, Katherine Hastings, Laura Andrejka, Dylan Maghini, Robert Homer, Dmitri A. Petrov, Monte M. Winslow, Katerina Politi. Yale School of Medicine, New Haven, CT, Stanford University School of Medicine, Stanford, CA, Stanford University School of Medicine, Stanford, CA, Yale School of Medicine, New Haven, CT, Stanford University School of Medicine, Stanford, CA, Yale School of Medicine, New Haven, CT
5:00 PM – 5:05 PM
– Discussion
5:05 PM – 5:15 PM
1095 – Comprehensive pan-cancer analyses of RAS genomic diversity. Robert Scharpf, Gregory Riely, Mark Awad, Michele Lenoue-Newton, Biagio Ricciuti, Julia Rudolph, Leon Raskin, Andrew Park, Jocelyn Lee, Christine Lovly, Valsamo Anagnostou. Johns Hopkins Sidney Kimmel Comprehensive Cancer Center, Baltimore, MD, Memorial Sloan Kettering Cancer Center, New York, NY, Dana-Farber Cancer Institute, Boston, MA, Vanderbilt-Ingram Cancer Center, Nashville, TN, Amgen, Inc., Thousand Oaks, CA, AACR, Philadelphia, PA
5:15 PM – 5:20 PM
– Discussion
5:20 PM – 5:30 PM
1096 – Harmonization standards from the Variant Interpretation for Cancer Consortium. Alex H. Wagner, Reece K. Hart, Larry Babb, Robert R. Freimuth, Adam Coffman, Yonghao Liang, Beth Pitel, Angshumoy Roy, Matthew Brush, Jennifer Lee, Anna Lu, Thomas Coard, Shruti Rao, Deborah Ritter, Brian Walsh, Susan Mockus, Peter Horak, Ian King, Dmitriy Sonkin, Subha Madhavan, Gordana Raca, Debyani Chakravarty, Malachi Griffith, Obi L. Griffith. Washington University School of Medicine, Saint Louis, MO, Reece Hart Consulting, CA, Broad Institute, Boston, MA, Mayo Clinic, Rochester, MN, Washington University School of Medicine, Saint Louis, MO, Washington University School of Medicine, Saint Louis, MO, Baylor College of Medicine, Houston, TX, Oregon Health and Science University, Portland, OR, National Cancer Institute, Bethesda, MD, Georgetown University, Washington, DC, The Jackson Laboratory for Genomic Medicine, Farmington, CT, National Center for Tumor Diseases, Heidelberg, Germany, University of Toronto, Toronto, ON, Canada, University of Southern California, Los Angeles, CA, Memorial Sloan Kettering Cancer Center, New York, NY
Abstract: The use of clinical gene sequencing is now commonplace, and genome analysts and molecular pathologists are often tasked with the labor-intensive process of interpreting the clinical significance of large numbers of tumor variants. Numerous independent knowledge bases have been constructed to alleviate this manual burden, however these knowledgebases are non-interoperable. As a result, the analyst is left with a difficult tradeoff: for each knowledgebase used the analyst must understand the nuances particular to that resource and integrate its evidence accordingly when generating the clinical report, but for each knowledgebase omitted there is increased potential for missed findings of clinical significance. The Variant Interpretation for Cancer Consortium (VICC; cancervariants.org) was formed as a driver project of the Global Alliance for Genomics and Health (GA4GH; ga4gh.org) to address this concern. VICC members include representatives from several major somatic interpretation knowledgebases including CIViC, OncoKB, Jax-CKB, the Weill Cornell PMKB, the IRB-Barcelona Cancer Biomarkers Database, and others. Previously, the VICC built and reported on a harmonized meta-knowledgebase of 19,551 biomarker associations of harmonized variants, diseases, drugs, and evidence across the constituent resources. In that study, we analyzed the frequency with which the tumor samples from the AACR Project GENIE cohort would match to harmonized associations. Variant matches increased dramatically from 57% to 86% when broader matching to regions describing categorical variants were allowed. Unlike precise sequence variants with specified alternate alleles, categorical variants describe a collection of potential variants with a common feature, such as “V600” (non-valine alleles at the 600 residue), “Exon 20 mutations” (all non-silent mutations in exon 20), or “Gain-of-function” (hypermorphic alterations that activate or amplify gene activity).
However, matching observed sequence variants to categorical variants is challenging, as the latter are typically only described as unstructured text. Here we describe the expressive and computational GA4GH Variation Representation specification (vr-spec.readthedocs.io), which we co-developed as members of the GA4GH Genomic Knowledge Standards work stream. This specification provides a schema for common, precise forms of variation (e.g. SNVs and Indels) and the method for computing identifiers from these objects. We highlight key aspects of the specification and our work to apply it to the characterization of categorical variation, showcasing the variant terminology and classification tools developed by the VICC to support this effort. These standards and tools are free, open-source, and extensible, overcoming barriers to standardized variant knowledge sharing and search.
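A minimal sketch of the categorical-variant matching problem described above, e.g. deciding whether an observed allele falls under "V600". This assumes a simplified protein-level notation rather than the full GA4GH Variation Representation data model:

```python
import re

def matches_categorical(observed: str, categorical: str) -> bool:
    """True if a protein variant like 'V600K' falls under a category like 'V600'.

    Simplified illustration: a categorical variant 'V600' means any
    non-valine alternate allele at residue 600.
    """
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z*]?)", observed)
    c = re.fullmatch(r"([A-Z])(\d+)", categorical)
    if not (m and c):
        return False
    # Same reference residue and position, and an alternate allele
    # that actually differs from the reference.
    return (m.group(1) == c.group(1)
            and m.group(2) == c.group(2)
            and m.group(3) != m.group(1))

print(matches_categorical("V600K", "V600"))  # True
print(matches_categorical("V600E", "V600"))  # True
print(matches_categorical("K601E", "V600"))  # False
```

The real specification handles far more than this (genomic coordinates, indels, fusions, unstructured category text), which is exactly why a shared schema and computed identifiers are needed.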
they store information from different databases by curating and classifying it, then harmonizing it into common values
they harmonize each variant across the knowledgebases, at any level of evidence
29% of patient variants matched when compared across many knowledgebases, versus only 13% when using individual databases
they are also trying to curate the database so a variant will have one code instead of various RefSeq or protein codes
VICC is an open consortium
5:30 PM – 5:35 PM
– Discussion
5:35 PM – 5:45 PM
1097 – FGFR2 in-frame indels: A novel targetable alteration in intrahepatic cholangiocarcinoma. Yvonne Y. Li, James M. Cleary, Srivatsan Raghavan, Liam F. Spurr, Qibiao Wu, Lei Shi, Lauren K. Brais, Maureen Loftus, Lipika Goyal, Anuj K. Patel, Atul B. Shinagare, Thomas E. Clancy, Geoffrey Shapiro, Ethan Cerami, William R. Sellers, William C. Hahn, Matthew Meyerson, Nabeel Bardeesy, Andrew D. Cherniack, Brian M. Wolpin. Dana-Farber Cancer Institute, Boston, MA, Dana-Farber Cancer Institute, Boston, MA, Massachusetts General Hospital, Boston, MA, Brigham and Women’s Hospital, Boston, MA, Dana-Farber Cancer Institute, Boston, MA, Dana-Farber Cancer Institute, Boston, MA, Broad Institute of MIT and Harvard, Cambridge, MA, Massachusetts General Hospital, Boston, MA
5:45 PM – 5:50 PM
– Discussion
5:50 PM – 6:00 PM
– Closing Remarks Gregory J. Riely. Memorial Sloan Kettering Cancer Center, New York, NY
Personalized Medicine, Omics, and Health Disparities in Cancer: Can Personalized Medicine Help Reduce the Disparity Problem?
Curator: Stephen J. Williams, PhD
In a Science Perspectives article, Timothy Rebbeck discusses health disparities, specifically the cancer disparities existing in sub-Saharan African (SSA) nations, highlighting the disparities in cancer incidence which exist compared with cancer incidence in high income areas of the world [1]. The sub-Saharan African nations display a much higher incidence of prostate, breast, and cervical cancer, and these cancers are predicted to double within the next twenty years, according to IARC [2]. Most importantly,
the histopathologic and demographic features of these tumors differ from those in high-income countries
meaning that the differences seen in incidence may reflect a true health disparity, as increased rates of these cancers are not seen in high income countries (HIC).
Most frequent male cancers in SSA include prostate, lung, liver, leukemia, non-Hodgkin’s lymphoma, and Kaposi’s sarcoma (a cancer frequently seen in HIV infected patients [3]). In SSA women, breast and cervical cancer are the most common and these display higher rates than seen in high income countries. In fact, liver cancer is seen in SSA females at twice the rate, and in SSA males almost three times the rate as in high income countries.
Reasons for cancer disparity in SSA
Patients with cancer are often diagnosed at a late stage in SSA countries. This contrasts with patients from high-income countries, whose cancers are usually diagnosed at an earlier stage; with many cancers, such as breast[4], ovarian[5, 6], and colon, detecting the tumor at an early stage is critical for a favorable outcome and prognosis[7-10]. In addition, late diagnosis limits many therapeutic options for the cancer patient, and disease at a later stage is much harder to manage, especially with respect to unresponsiveness and/or resistance to many therapies. Furthermore, treatments must be performed in low-resource settings in SSA, where availability of clinical lab work and imaging technologies may be limited.
Molecular differences in SSA versus HIC cancers which may account for disparities
Emerging evidence suggests that there are distinct molecular signatures in SSA tumors with respect to histotype and pathology. For example, Dr. Rebbeck notes that Nigerian breast cancers were defined by increased mutational signatures associated with deficiency of the homologous recombination DNA repair pathway, pervasive mutations in the tumor suppressor gene TP53, mutations in GATA binding protein 3 (GATA3), and a greater mutational burden, compared with breast tumors from African Americans or Caucasians[11]. However, more research will be required to understand the etiology and causal factors underlying this molecular distinction in mutational spectra.
It is believed that there is a higher rate of hereditary cancers in SSA, and many SSA cancers exhibit a more aggressive phenotype than in other parts of the world. For example, breast tumors in black SSA cases are twice as likely as those in Caucasian SSA cases to be of the triple-negative phenotype, which is generally more aggressive and tougher to detect and treat, as triple-negative cancers are HER2-negative and therefore are not candidates for Herceptin. BRCA1/2 mutations are also more frequent in black SSA cases than in Caucasian SSA cases [12, 13].
Initiatives to Combat Health Disparities in SSA
Multiple initiatives have been proposed or are already in action to bring personalized medicine to the sub-Saharan African nations. These include:
H3Africa empowers African researchers to be competitive in genomic sciences, establishes and nurtures effective collaborations among African researchers on the African continent, and generates unique data that could be used to improve both African and global health.
There is currently a global effort to apply genomic science and associated technologies to further the understanding of health and disease in diverse populations. These efforts work to identify individuals and populations who are at risk for developing specific diseases, and to better understand underlying genetic and environmental contributions to that risk. Given the large amount of genetic diversity on the African continent, there exists an enormous opportunity to utilize such approaches to benefit African populations and to inform global health.
The Human Heredity and Health in Africa (H3Africa) consortium facilitates fundamental research into diseases on the African continent while also developing infrastructure, resources, training, and ethical guidelines to support a sustainable African research enterprise – led by African scientists, for the African people. The initiative consists of 51 African projects that include population-based genomic studies of common, non-communicable disorders such as heart and renal disease, as well as communicable diseases such as tuberculosis. These studies are led by African scientists and use genetic, clinical, and epidemiologic methods to identify hereditary and environmental contributions to health and disease. To establish a foundation for African scientists to continue this essential work into the future, the consortium also supports many crucial capacity building elements, such as: ethical, legal, and social implications research; training and capacity building for bioinformatics; capacity for biobanking; and coordination and networking.
Advancing precision medicine in a way that is equitable and beneficial to society means ensuring that healthcare systems can adopt the most scientifically and technologically appropriate approaches to a more targeted and personalized way of diagnosing and treating disease. In certain instances, countries or institutions may be able to bypass, or “leapfrog”, legacy systems or approaches that prevail in developed country contexts.
The World Economic Forum’s Leapfrogging with Precision Medicine project will develop a set of tools and case studies demonstrating how a precision medicine approach in countries with greenfield policy spaces can potentially transform their healthcare delivery and outcomes. Policies and governance mechanisms that enable leapfrogging will be iterated and scaled up to other projects.
Successes in personalized genomic research in SSA
As Dr. Rebbeck states:
Because of the underlying genetic and genomic relationships between Africans and members of the African diaspora (primarily in North America and Europe), knowledge gained from research in SSA can be used to address health disparities that are prevalent in members of the African diaspora.
For example, West African heritage and genomic ancestry has been reported to confer the highest genomic risk for prostate cancer of any worldwide population [14].
Science 03 Jan 2020:
Vol. 367, Issue 6473, pp. 27-28
DOI: 10.1126/science.aay474
Summary/Abstract
Cancer is an increasing global public health burden. This is especially the case in sub-Saharan Africa (SSA); high rates of cancer—particularly of the prostate, breast, and cervix—characterize cancer in most countries in SSA. The number of these cancers in SSA is predicted to more than double in the next 20 years (1). Both the explanations for these increasing rates and the solutions to address this cancer epidemic require SSA-specific data and approaches. The histopathologic and demographic features of these tumors differ from those in high-income countries (HICs). Basic knowledge of the epidemiology, clinical features, and molecular characteristics of cancers in SSA is needed to build prevention and treatment tools that will address the future cancer burden. The distinct distribution and determinants of cancer in SSA provide an opportunity to generate knowledge about cancer risk factors, genomics, and opportunities for prevention and treatment globally, not only in Africa.
Parkin DM, Ferlay J, Jemal A, Borok M, Manraj S, N’Da G, Ogunbiyi F, Liu B, Bray F: Cancer in Sub-Saharan Africa: International Agency for Research on Cancer; 2018.
Chinula L, Moses A, Gopal S: HIV-associated malignancies in sub-Saharan Africa: progress, challenges, and opportunities. Current opinion in HIV and AIDS 2017, 12(1):89-95.
Colditz GA: Epidemiology of breast cancer. Findings from the nurses’ health study. Cancer 1993, 71(4 Suppl):1480-1489.
Hamilton TC, Penault-Llorca F, Dauplat J: [Natural history of ovarian adenocarcinomas: from epidemiology to experimentation]. Contracept Fertil Sex 1998, 26(11):800-804.
Garner EI: Advances in the early detection of ovarian carcinoma. J Reprod Med 2005, 50(6):447-453.
Brockbank EC, Harry V, Kolomainen D, Mukhopadhyay D, Sohaib A, Bridges JE, Nobbenhuis MA, Shepherd JH, Ind TE, Barton DP: Laparoscopic staging for apparent early stage ovarian or fallopian tube cancer. First case series from a UK cancer centre and systematic literature review. European journal of surgical oncology : the journal of the European Society of Surgical Oncology and the British Association of Surgical Oncology 2013, 39(8):912-917.
Kolligs FT: Diagnostics and Epidemiology of Colorectal Cancer. Visceral medicine 2016, 32(3):158-164.
Rocken C, Neumann U, Ebert MP: [New approaches to early detection, estimation of prognosis and therapy for malignant tumours of the gastrointestinal tract]. Zeitschrift fur Gastroenterologie 2008, 46(2):216-222.
Srivastava S, Verma M, Henson DE: Biomarkers for early detection of colon cancer. Clinical cancer research : an official journal of the American Association for Cancer Research 2001, 7(5):1118-1126.
Pitt JJ, Riester M, Zheng Y, Yoshimatsu TF, Sanni A, Oluwasola O, Veloso A, Labrot E, Wang S, Odetunde A et al: Characterization of Nigerian breast cancer reveals prevalent homologous recombination deficiency and aggressive molecular features. Nature communications 2018, 9(1):4181.
Zheng Y, Walsh T, Gulsuner S, Casadei S, Lee MK, Ogundiran TO, Ademola A, Falusi AG, Adebamowo CA, Oluwasola AO et al: Inherited Breast Cancer in Nigerian Women. Journal of clinical oncology : official journal of the American Society of Clinical Oncology 2018, 36(28):2820-2825.
Rebbeck TR, Friebel TM, Friedman E, Hamann U, Huo D, Kwong A, Olah E, Olopade OI, Solano AR, Teo SH et al: Mutational spectrum in a worldwide study of 29,700 families with BRCA1 or BRCA2 mutations. Human mutation 2018, 39(5):593-620.
Lachance J, Berens AJ, Hansen MEB, Teng AK, Tishkoff SA, Rebbeck TR: Genetic Hitchhiking and Population Bottlenecks Contribute to Prostate Cancer Disparities in Men of African Descent. Cancer research 2018, 78(9):2432-2443.
Other articles on Cancer Health Disparities and Genomics on this Online Open Access Journal Include:
AACR and Dr. Margaret Foti Announce Free Virtual Annual Meeting for April 27, 28 2020 and other Free Resources
Reporter: Stephen J. Williams, PhD
Please see the following email from Dr. Foti and the AACR on VIRTUAL MEETING to be conducted April 27 and 28, 2020.
This is truly a wonderful job by the AACR. In a previous posting I had considered the need for moving international scientific meetings to an online format, which would make the information available to a wider audience as well as to those who do not have the opportunity to travel to a meeting site. At @pharma_BI we will curate and live-tweet the talks in order to enhance meeting engagement, as part of our usual eConference Proceedings.
Again Great Job by the AACR!
Dear Colleagues,
We hope you are staying safe and well and are adjusting to the challenges of the COVID-19 global pandemic. During this crisis, we remain steadfast in supporting our members and our mission.
I am pleased to announce a number of actions that we are taking to disseminate innovative cancer science and medicine to the global cancer research community:
AACR Virtual Annual Meeting 2020: Selected Presentations. We were excited to receive more than 225 clinical trials for presentation at the Annual Meeting. Due to the time-sensitive nature of these trials—many of which are practice-changing—we are making them available to the community at the time of the original April meeting. Therefore, as per our recent announcement, the AACR will host a slate of selected sessions online featuring these cutting-edge data.
This Virtual Annual Meeting will be held on April 27 and 28, 2020, and will include more than 30 oral presentations in several clinical trial plenary sessions along with commentaries from expert discussants, as well as clinical trial poster sessions consisting of short videos providing the authors’ perspectives. The Virtual Meeting will feature a New Drugs on the Horizon session as well as nine minisymposia that will showcase a broad sample of basic and translational science. Topics will include genomics, tumor microenvironment, novel targets, drug discovery, therapeutics, immunotherapy, biomarkers, and cancer prevention. A special minisymposium titled “Advancing Cancer Research Through an International Cancer Registry” will feature use cases of data available through AACR Project GENIE.
This Virtual Meeting will be available free to everyone, although attendees will be asked to register to participate. The session and presentation titles for the Virtual Meeting, as well as a link to the registration site, will be posted to the AACR website by Monday, April 13.
Release of Abstracts. All of the abstracts scheduled for presentation in the Virtual Meeting—and any other clinical trial abstracts that are scheduled for presentation at the rescheduled meeting—will be posted online on Monday, April 27. All other abstracts that have been accepted for presentation at the rescheduled meeting will be posted online on Friday, May 15.
AACR Annual Meeting 2019: Free Webcast Presentations. The complete webcasts of the AACR Annual Meeting are typically made freely available 15 months after the conclusion of the meeting. However, we have made these webcast presentations available free effective immediately, so that you can review the most compelling science from the Annual Meeting 2019, which was held in Atlanta.
Free Access to AACR Journals. To ensure that all members of the cancer research community have access to the information they need during this challenging time, we have opened access to our nine highly esteemed journals effective today through the end of the virtual meeting. Please be sure to visit the AACR journals webpage for journal highlights, and to sign up for eTOC alerts.
Rescheduled AACR Annual Meeting. We are planning to reschedule the Annual Meeting for late August while at the same time closely monitoring the developments surrounding COVID-19. An official announcement of the rescheduled meeting will be made in the near future.
We hope that these plans will enable you to continue your important work during this global health crisis. Thank you for all you do to accelerate progress against cancer, and thank you for your loyalty to the AACR.
Sincerely,
Margaret Foti, PhD, MD (hc)
Chief Executive Officer
American Association for Cancer Research
For more information on Virtual Meetings please see
Leading thoracic oncologists from the United States and Milan, Italy shared their opinions and views on treating lung cancer patients during the COVID-19 pandemic. Included in the panel was a thoracic oncologist from Milan, Italy, who gave special insights into the difficulties they face, the procedures they are using to help control the spread of infection within this high-risk patient population, and changes to current treatment strategy in light of the virus outbreak. Live notes are included below, and the discussion can be followed on Twitter at #LungCancerandCOVID19. Also included below is the recording of the Zoom session.
UPDATED 3/29/2020
Leading lung cancer oncologists from around the world are meeting to discuss concerns for lung cancer patients and oncologists during the novel coronavirus (SARS-CoV-2; COVID-19) pandemic. The town hall “COVID-19 and the Impact on Thoracic Oncology” will be held on Zoom on Saturday, March 28, 2020 at 10:00 – 11:30 AM EST, sponsored by Axiom Healthcare Strategies. You can register at
Anne Chiang, MD, PhD, Associate Professor; Chief Network Officer and Deputy Chief Medical Officer, Smilow Cancer Network
Roy S. Herbst, MD, PhD, Ensign Professor of Medicine (Medical Oncology) and Professor of Pharmacology; Chief of Medical Oncology, Yale Cancer Center and Smilow Cancer Hospital; Associate Cancer Center Director for Translational Research, Yale Cancer Center
Kurt Schalper, MD, PhD Assistant Professor of Pathology; Director, Translational Immuno-oncology Laboratory
Martin J. Edelman, MD, Chair, Department of Hematology/Oncology, Fox Chase Cancer Center
Corey J. Langer, MD, Professor of Medicine, University of Pennsylvania
Hossain Borghaei, DO, MS, Chief of Thoracic Medical Oncology and Director of Lung Cancer Risk Assessment, Fox Chase Cancer Center
Marina Garassino, MD, Fondazione IRCCS Istituto Nazionale dei Tumori
Kristen Ashley Marrone, MD, Thoracic Medical Oncologist, Johns Hopkins Bayview Medical Center
Taofeek Owonikoko, MD, PhD, MSCR, Medical Oncologist, Emory University School of Medicine
Jeffrey D. Bradley, MD, FACR, FASTRO, Emory University School of Medicine
— Hossein Borghaei, DO (@HosseinBorghaei) March 27, 2020
UPDATED 3/29/2020
Below is a collection of live tweets from this meeting, as well as some notes and comments from each of the speakers and panelists. The recording of this Town Hall will be posted on this site when available. The Town Hall was well attended, with over 250 participants.
Town Hall Notes
The following represent some notes taken at this Town Hall.
Dr. Owonikoko: 1-2% lethality in China; for patients newly diagnosed with lung cancer: 1) limit contact between patient, physician, and healthcare facility (telemedicine and oral chemotherapy suggested); 2) for immunotherapy given i.v., health must be monitored carefully
Dr. Kurt Schalper: on COVID19 testing: Three types of tests each having pros and cons.
viral culture: not always practical, as a large amount of specimen is needed
ELISA: looks for circulating antibodies, but not always specific for the type of coronavirus
RT-PCR: most sensitive, but right now there is little clarity on the best primers to use; he noted a 15% variance in test results using different primers targeting different COVID-19 genes
Dr. Marina Garassino: The Lombardy outbreak was the first in Italy and took them by surprise. She admits they were about one month behind in preparation and did not have enough masks as late as January 31. It was impractical to socially distance given Italian customs in greeting each other. In addition, they had to determine which facilities would be COVID-negative and which COVID-positive, and this required access to testing. Right now they are only testing symptomatic patients, and healthcare workers have to test negative multiple times. Regarding therapy for lung cancer patients, they have been delaying the initiation of therapy as much as possible. Patients on immunotherapy and immunosuppressive drugs are being monitored by CT scan more often during this pandemic; as instances of pneumonitis began increasing, it was unclear whether these patients are at increased risk of COVID-19 infection or whether this reflects a bias from more frequent screening, so their risk remains uncertain. Dr. Garassino also felt we need to move from hospital-based to community-based measures of prevention against COVID-19 infection (social distancing, more vigilant citizens). She noted that cancer patients are usually more careful with respect to preventative measures than the general populace. Healthcare workers have to test negative twice in three days if they have been in close contact with a COVID-positive patient. However, her hospital is still running at 80% capacity, so patients are getting treated, though there are ethical issues as to who gets treated, who gets respirators, and other questions related to the unfortunate rationing of care.
Dr. Anne Chiang: Scheduled visits have notably decreased. They have seen patient visits decrease from 4,500 to 2,300 in two weeks, but telemedicine or virtual visits have increased to 1,000 and are replacing the on-site visits. She also said they are trying to reduce or eliminate the most immunosuppressive drugs from chemotherapy regimens; for example, they are removing pemetrexed from standard regimens and also considering neoadjuvant chemotherapy. As far as biopsies, liquid biopsies can be obtained in the home, so they are preferred, as patients do not have to come in for a biopsy.
Dr. Edelman: Fox Chase is somewhat unique in being an NCI center which only does oncology so they rely on neighboring Jeanes Hospital of the Temple University Health System for a lot of their outpatient and surgical and general medicine needs. Patients who will be transferred back to Fox Chase are screened for COVID19.
Brendon Stiles: Lung cancer surgeries have ground to a halt; he did only one last week. The hospital wants to conserve resources and considers lung cancer surgery too great a COVID-19 risk. They have shut down elective surgeries, and no clinical trials are being conducted. He said that lung cancer research will be negatively impacted by the pandemic as resources are diverted to COVID-19 research efforts.
#lungcancerandcovid19 @Annechiangmd talks on cutting pemetrexed from tx regimen and considering more neoadjuvant tx. Also liquid biopsy can be performed in home setting
We currently have a post op lung cancer patient from another cancer center intubated in our #ICU with #COVID19 and a horrible air leak. Also had a patient w/ part solid tumor who I recommended to wait, tell me he is having surgery elsewhere.
Very thoughtful discussion about need for having prognosis and end of life discussions with lung cancer patients prior to going into hospital. Coordination with the long term treatment team, acute care team, and family are critical. #lcsm #COVID19 https://t.co/7ni9YIbrsB
4) Surgery not w/o risk. Last week I only did 1 lung cancer case – on a woman w/ clear local progression on interval CT. She was healthy enough to go home pod#1 after lobectomy. However, we subsequently found out she was exposed to #COVID19. Please proceed carefully.