

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Curator: Stephen J. Williams, PhD

Marc W. Kirschner

Department of Systems Biology
Harvard Medical School

Boston, Massachusetts 02115

With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.

As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.

That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.

Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.

High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.

Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.

You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.

Source: Marc W. Kirschner, “The Meaning of Systems Biology,” Cell, Vol. 121, 503–504, May 20, 2005, DOI 10.1016/j.cell.2005.05.005

Old High-throughput Screening, Once the Gold Standard in Drug Development, Gets a Systems Biology Facelift

From Phenotypic Hit to Chemical Probe: Chemical Biology Approaches to Elucidate Small Molecule Action in Complex Biological Systems

Quentin T. L. Pasquer, Ioannis A. Tsakoumagkos and Sascha Hoogendoorn 

Molecules 2020, 25(23), 5702; https://doi.org/10.3390/molecules25235702

Abstract

Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.

5.1.5. Large-Scale Proteomics

While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced by both cytotoxic and non-cytotoxic compounds, and that these changes can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
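
As a toy illustration of this kind of aggregation step (not the published platform; the drugs, proteins, and threshold below are all hypothetical), one could tabulate per-cell-line abundance changes and flag drugs whose annotated target shifts consistently:

```python
# Hedged sketch: flag drugs whose annotated protein target changes abundance
# in most profiled cell lines. All names and numbers are invented.
import pandas as pd

# Hypothetical log2 fold-changes (drug vs. DMSO) per protein per cell line.
profiles = pd.DataFrame({
    "drug":      ["drugA"] * 4 + ["drugB"] * 4,
    "cell_line": ["L1", "L2", "L3", "L4"] * 2,
    "protein":   ["TP1"] * 4 + ["TP2"] * 4,
    "log2fc":    [-1.2, -0.9, -1.5, -1.1, 0.1, -0.05, 0.2, 0.0],
})
annotated_target = {"drugA": "TP1", "drugB": "TP2"}  # assumed annotation table

def target_regulated(df, drug, target, threshold=0.5):
    """True if the target's abundance shifts beyond the threshold
    in the majority of profiled cell lines."""
    sub = df[(df.drug == drug) & (df.protein == target)]
    return (sub.log2fc.abs() > threshold).mean() > 0.5

for drug, target in annotated_target.items():
    print(drug, "->", target_regulated(profiles, drug, target))
# drugA -> True (target abundance changes); drugB -> False
```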

All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.

5.2. Genetic Approaches

Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has long been realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in the genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights into the compound’s mode of action.

Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.

5.2.1. Resistance Cloning

The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method, based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that make them resistant to the compound. With recent advances in deep sequencing, it is now possible to scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.

In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a follow-up study, they optimized their pipeline, “DrugTargetSeqR”, by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
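
The recurrence logic behind this clustering analysis is easy to sketch. In the minimal example below (clone clusters and gene lists are made up), genes mutated independently in two or more clusters are nominated as target candidates, while known multidrug-resistance genes are filtered out, analogous to the counter-screening step of DrugTargetSeqR:

```python
# Hedged sketch of recurrent-mutation analysis across resistant-clone clusters.
from collections import defaultdict

clusters = {                             # hypothetical cluster -> mutated genes
    "c1": {"PLK1", "TTN"},
    "c2": {"PLK1", "MUC16"},
    "c3": {"ABCB1"},                     # efflux-pump clone: generic resistance
}
multidrug_resistance_genes = {"ABCB1"}   # excluded by counter-screening

gene_hits = defaultdict(set)
for cid, genes in clusters.items():
    for gene in genes - multidrug_resistance_genes:
        gene_hits[gene].add(cid)

candidates = [g for g, cids in gene_hits.items() if len(cids) >= 2]
print("target candidates:", candidates)  # -> ['PLK1']
```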

While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 resulted in hypermutation in these normally mutationally silent cells, leading to the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274, an anticancer agent with an unknown target; subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutation rates [105,106].

When there is already a hypothesis about the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response and used this cell line for target deconvolution of ISRIB, a small-molecule inhibitor of this pathway. Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of the guanine nucleotide exchange factor eIF2B [107].

While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.

5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens

When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction) or, conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose to ensure that both types of interactions can be found in the final dataset, and it is often necessary to use a variety of compound doses (e.g., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
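
A minimal sketch of how suppressor and synergistic interactions could be scored from sgRNA counts is shown below; the counts and gene names are invented, library-size normalization is omitted, and real pipelines add replicate handling and statistics:

```python
# Hedged sketch: per-gene enrichment/depletion of sgRNAs at a sublethal dose.
import numpy as np
import pandas as pd

counts = pd.DataFrame({                 # hypothetical sgRNA read counts
    "gene":    ["G1", "G1", "G2", "G2", "G3", "G3"],
    "drug":    [800, 760, 40, 55, 210, 190],
    "vehicle": [200, 190, 180, 170, 200, 210],
})

pseudo = 1.0                            # pseudocount to stabilize low counts
counts["lfc"] = np.log2((counts.drug + pseudo) / (counts.vehicle + pseudo))
per_gene = counts.groupby("gene").lfc.mean()
print(per_gene)
# Positive score: knockout enriches under drug (suppressor interaction).
# Negative score: knockout sensitizes (synergistic interaction).
```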

An early example of the successful coupling of a phenotypic screen with downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high-complexity, genome-wide shRNA library, with STF-118804 (four rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyltransferase (NAMPT) [122].

The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of the two screens were complementary: the shRNA screen produced hits pointing to the direct compound target, while the CRISPR screen gave information on the cellular mechanisms of action of the compound. A likely reason for this is the level of protein depletion reached by each method: shRNAs reduce protein levels without abolishing them, which is advantageous when studying essential genes. However, knockdown may not produce a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].

Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haploinsufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast, performed in a ~50% protein background generated by heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of the phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, showing that direct target identification through CRISPR-knockout screens is indeed possible [126].

An alternative strategy was employed by the Weissman lab, where they combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits with opposite actions in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, reasoning that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the genome-wide rigosertib CRISPRi screen, and compared the focused-screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
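
The profile-comparison step can be illustrated in a few lines: each compound is represented by a vector of drug-gene interaction scores, and correlated profiles suggest a shared target. The score vectors below are fabricated for illustration:

```python
# Hedged sketch: relate compounds by correlating chemical-genetic profiles.
import numpy as np

profiles = {                       # hypothetical gene-score vectors
    "rigosertib": np.array([2.1, -1.3, 0.4, -0.8]),
    "ABT-751":    np.array([1.9, -1.1, 0.5, -0.7]),
    "nocodazole": np.array([0.2, 1.4, -2.0, 0.9]),
}

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

for name, vec in profiles.items():
    if name != "rigosertib":
        r = pearson(profiles["rigosertib"], vec)
        print(f"rigosertib vs {name}: r = {r:.2f}")
# Compounds sharing a target should show highly correlated profiles.
```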

From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.

SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY

Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence

Youngjun Park, Dominik Heider and Anne-Christin Hauschild. Cancers 2021, 13(13), 3148; https://doi.org/10.3390/cancers13133148

Abstract

The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNNs) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.

1. Introduction

The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, they led to an accumulation of large-scale data sets that opened a vast number of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical application: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, which are elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of tumors and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore the characteristics of cancers from a systems biology and systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.

Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets to solve various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research for over a decade. Additionally, there are independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques were applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)

Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experimental results are considered ground truth. Statistical analysis of next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of biological networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new, unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrate biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).

Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16]. 

2. Systems Biology in Cancer Research

Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes have been investigated, and, based on these, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis, which incorporates two different types of data, pathways and omics data, has been developed to understand the heterogeneous characteristics of tumors and for cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].

In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.

2.1. Biological Network Analysis for Biomarker Validation

The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Systems biology therefore offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked by an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology has extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the limited coverage of pathway data needs to be considered: because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a significant fraction of genes when incorporating pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans’. In this manner, Rahmati et al. introduced a possible solution to overcome the ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers consider gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
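
At its core, a gene-set over-representation test of the kind GSA tools perform reduces to a hypergeometric tail probability. A minimal sketch with arbitrary example counts:

```python
# Hedged sketch: over-representation of a hit list in one pathway/gene set.
from scipy.stats import hypergeom

N = 20000   # background genes
K = 150     # genes in the pathway
n = 300     # genes in the hit list
k = 12      # overlap between hit list and pathway

p_value = hypergeom.sf(k - 1, N, K, n)   # P(overlap >= k) by chance
print(f"enrichment p-value: {p_value:.2e}")
```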

2.2. De Novo Construction of Biological Networks

While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and can guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico network building and mining contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules with similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction, which led the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, the integration of these two types of data remains tricky. Ballouz et al. compared microarray- and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to find disease-specific co-expression gene modules. Thus, various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from gene co-expression networks when data from various conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or include different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylation, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on particular experimental setups. For instance, functional genomics and CRISPR technology enable the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
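
The heart of a WGCNA-style analysis, building a weighted adjacency matrix by soft-thresholding gene-gene correlations, can be sketched on random stand-in data:

```python
# Hedged sketch of a weighted (unsigned) co-expression network, WGCNA-style.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 200))          # hypothetical: 50 samples x 200 genes

corr = np.corrcoef(expr, rowvar=False)     # gene-gene Pearson correlations
beta = 6                                   # soft-thresholding power
adjacency = np.abs(corr) ** beta           # weighted network edges
np.fill_diagonal(adjacency, 0.0)

connectivity = adjacency.sum(axis=0)       # per-gene network connectivity
print("most connected (hub) gene index:", int(connectivity.argmax()))
```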

2.3. Network Based Machine Learning

A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks were suited to be integrated into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering both the values of nodes and edges and the network topology. In pre-processing integration, pathway or other network information is typically processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed separately before integration with a network; subsequently, omics data and networks are merged and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of various omics data types, a multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when different machine learning approaches try to integrate two or more different data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and integrate the heterogeneous data types there [25,88].

In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.

3. Network-Based Learning in Cancer Research

As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.

3.1. Molecular Characterization with Network Information

Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet. The algorithm uses a thermodynamic model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by the two genes is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
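
The propagation idea underlying such methods can be sketched as a random walk with restart on a toy network; this is an illustrative simplification, not the exact HotNet procedure:

```python
# Hedged sketch: random walk with restart spreads an alteration score
# from a mutated gene to its network neighborhood.
import numpy as np

A = np.array([[0, 1, 1, 0],                # toy gene-gene interaction network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0, keepdims=True)       # column-normalized transitions

p0 = np.array([1.0, 0.0, 0.0, 0.0])       # gene 0 carries the mutation score
alpha = 0.5                                # restart probability
p = p0.copy()
for _ in range(100):                       # iterate to near-convergence
    p = (1 - alpha) * (W @ p) + alpha * p0

print(np.round(p, 3))   # smoothed scores highlight gene 0's neighborhood
```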

Furthermore, in transcriptome research, network information is used to measure pathway activity and applied to cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A benchmark study with pan-cancer data recently revealed that using network structure can improve performance [57]. In conclusion, while some data are lost due to the incompleteness of biological networks, their integration has improved performance and increased interpretability in many cases.

3.2. Tumor Heterogeneity Study with Network Information

Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto pathways, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between such genes can be essential for an organism; therefore, models analyzing such alterations integrate network-based analysis [98].
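
A simplified stand-in for such a mutual-exclusivity analysis (the dedicated tools use more elaborate exact tests on networks) is a one-sided Fisher's exact test on a 2x2 co-alteration table; all alteration calls below are toy data:

```python
# Hedged sketch: test whether two genes are altered mutually exclusively.
from scipy.stats import fisher_exact

gene_a = [1, 0, 1, 0, 1, 0, 1, 0, 0, 0]   # hypothetical per-sample calls
gene_b = [0, 1, 0, 1, 0, 1, 0, 1, 0, 0]

both    = sum(a and b for a, b in zip(gene_a, gene_b))
only_a  = sum(a and not b for a, b in zip(gene_a, gene_b))
only_b  = sum(b and not a for a, b in zip(gene_a, gene_b))
neither = len(gene_a) - both - only_a - only_b

# alternative="less": co-occurrence rarer than expected suggests exclusivity
odds, p = fisher_exact([[both, only_a], [only_b, neither]], alternative="less")
print(f"odds ratio: {odds:.2f}, exclusivity p-value: {p:.3f}")
```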

In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, the detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor–immune interactions. For example, McGrail et al. identified a relationship between DNA damage response proteins and immune cell infiltration in cancer; the analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining gene subnetworks beyond immune cell infiltration by taking tumor purity into account [102].

3.3. Drug Target Identification with Network Information

In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomic and chemical information. The proposed approaches used this drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug targets and drug responses by integrating networks with chemical and multi-omics datasets. In a recent survey, Chen et al. compared 13 computational methods for drug response prediction and found that gene expression profiles are crucial information for drug response prediction [105].

Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models, aiming to discover potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic-lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic-lethality and related drug datasets can be integrated to combine anticancer therapeutic strategies with non-cancer drug repurposing.

4. Deep Learning in Cancer Research

DNN models have developed rapidly and become more sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models requiring large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been incorporated into many different biological questions [111].

In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1); therefore, integrating GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of deep learning in bioinformatics research [115].

4.1. Challenges for Deep Learning in Cancer Research

Many studies in biology and medicine used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116]. It has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that are different from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, the GAN algorithm has been receiving more attention in single-cell transcriptomics because it has been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches aim to cope with a limited dataset in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but unrelated datasets and to guide the model to solve a specific set of problems [124]. These approaches pre-train models on data of similar characteristics and types that are distinct from the problem set; after pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are trying to introduce few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes with gene expression profiles [129].
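
As a generic illustration of the imputation idea (a simple k-nearest-neighbor stand-in, not TDimpute or a GAN), missing entries of an omics matrix can be borrowed from the most similar samples:

```python
# Hedged sketch: kNN imputation of missing values in a samples-x-genes matrix.
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],          # hypothetical omics values
              [1.1, np.nan, 3.0],
              [0.9, 2.1, 2.9],
              [5.0, 6.0, 7.0]])

imputer = KNNImputer(n_neighbors=2)        # borrow from the 2 nearest samples
print(np.round(imputer.fit_transform(X), 2))
```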

Figure 3. (a) In various studies, NGS data are transformed into different forms: a 2-D transformed form serves as input to convolution layers, and omics data are transformed into pathway-level scores, GO enrichment scores, or functional spectra. (b) DNN approaches to handle the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; and transfer learning, which pre-trains a model on other datasets and fine-tunes it on the dataset of interest. (c) Various types of information in biology. (d) Graph neural network examples: a GCN is applied to aggregate neighbor information. (Created with BioRender.com).

4.2. Molecular Characterization with Network and DNN Model

DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier. They implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer. A predefined number of neighboring gene mutation profiles was the input for the convolution layer, which aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding into a 2-D image, DeepCC transformed gene expression data into functional spectra. The resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
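
The chromosome-order embedding trick is easy to sketch: a 1-D expression vector, assumed to be pre-sorted by genomic position, is zero-padded and reshaped into a square array that a 2-D convolution layer can consume. The values below are placeholders:

```python
# Hedged sketch: reshape a chromosome-ordered expression vector into a 2-D
# "image" for a convolutional network.
import numpy as np

expr = np.arange(12, dtype=float)          # stand-in 1-D expression profile

side = int(np.ceil(np.sqrt(expr.size)))    # smallest square that fits all genes
padded = np.zeros(side * side)
padded[:expr.size] = expr
image = padded.reshape(side, side)
print(image.shape)                         # -> (4, 4)
```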

Another DNN model was trained to infer the origin of tissue from single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using the TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model achieved better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].

4.3. Tumor Heterogeneity with Network and DNN Model

As described in Section 4.1, cancer heterogeneity, e.g., the tumor microenvironment, poses several issues. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ‘Scaden’, a DNN model for the investigation of intratumor heterogeneity, to deconvolve cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data from single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]. However, this information is typically not available. Recently, single-cell sequencing-based studies were conducted to tackle this problem. Because of technical limitations, single-cell sequencing data contain a lot of missing values, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data. They aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser while embedding high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
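
A stripped-down version of such an autoencoder embedding (in PyTorch, on random stand-in data; scDeepCluster additionally uses a count-based noise model and a clustering loss) could look like this:

```python
# Hedged sketch: autoencoder that embeds cells into a low-dimensional space.
import torch
import torch.nn as nn

X = torch.randn(500, 2000)                 # hypothetical: 500 cells x 2000 genes

class AutoEncoder(nn.Module):
    def __init__(self, n_genes, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_genes))
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder(X.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                     # short demonstration loop
    loss = nn.functional.mse_loss(model(X), X)   # reconstruction objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()

embedding = model.encoder(X).detach()      # per-cell latent vectors
print(embedding.shape)                     # -> torch.Size([500, 32])
```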

4.4. Drug Target Identification with Networks and DNN Models

In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurpose non-oncology drugs for cancer treatment. Such drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders. It used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug-disease dataset [15].

The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets and other drug and genomics datasets, showing that DNN models can improve computational predictions of drug sensitivity [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can show higher prediction accuracy than single-omics methods. MOLI integrated genomic data and transcriptomic data to predict the drug responses of TCGA patients [141].

4.5. Graph Neural Network Model

In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of integrating a network before or after learning, recently developed graph neural networks (GNNs) use the biological network as the base structure of the learning network itself. For instance, pathway or interactome information can be integrated as the learning structure of a DNN and aggregated as heterogeneous information. In a GNN, the convolution is performed over the provided network structure of the data, which lets the model focus on the relationships among neighboring genes. In the graph convolution layer, the convolution integrates the information of neighboring genes and learns the topology (Figure 3d). Consequently, such models can aggregate information from distant neighbors and thus outperform other machine learning models [142].
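
A single graph-convolution step can be written out directly. The sketch below implements the widely used symmetric-normalization propagation rule, ReLU(Â X W), on a toy gene network; the network, features, and weights are random placeholders for illustration.

```python
# One graph-convolution step (Kipf-Welling style): each gene's feature
# vector is updated by averaging over its neighbors (plus itself) and
# applying a learned linear map.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)     # toy gene-gene network
X = np.random.rand(3, 4)                   # 3 genes x 4 input features
W = np.random.rand(4, 2)                   # learned weights (4 -> 2 dims)

A_hat = A + np.eye(3)                      # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization

H = np.maximum(0, A_norm @ X @ W)          # ReLU(A_norm X W): one GCN layer
print(H.shape)                             # (3, 2): new per-gene features
```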

In the context of gene expression inference, the main question is whether a gene’s expression level can be explained by aggregating those of its neighboring genes. A single-gene inference study by Dutil et al. showed that a GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures derived from RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways that were then used for subtype classification [147]. Furthermore, GNNs are receiving growing attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both the chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c), with each proposed application specializing in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. Third, integrating heterogeneous network information remains difficult; drug discovery usually requires multi-modal integrative analysis with various networks, and GNNs could improve such analyses. Lastly, although GNNs operate on graphs, their stacked layers still make the models hard to interpret [148].

4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge

The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, deep learning is hardly a panacea for all research questions. In the following, we discuss potential limitations of DNN models. In general, DNN models applied to NGS data face two significant issues: (i) data requirements and (ii) interpretability. Deep learning usually needs a large amount of training data to reach reasonable performance, which is more difficult to obtain for biomedical omics data than for, say, image data. Today, few NGS datasets are well curated and annotated for deep learning, which helps explain why most DNN studies are concentrated in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically treated as black boxes: the highly stacked layers obscure the model’s decision-making rationale. Although methodology for understanding and interpreting deep learning models has improved, the remaining ambiguity in DNN decision-making has hindered the transition of deep learning models into translational medicine [149,150].

As described before, biological networks are employed in various computational analyses in cancer research, and the studies applying DNNs demonstrated many different ways of using prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be examined. The LINCS program analyzed data from The Connectivity Map (CMap) project to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels can be inferred from only about 1000 genes, which they called ‘landmark genes’. Subsequently, Chen et al. started from these 978 landmark genes and predicted the remaining gene expression levels with DNN models. Integrating large-scale public NGS data, their models showed better performance than a linear regression model; the authors conclude that this advantage originates from the DNN’s ability to model non-linear relationships between genes [153].
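
As a toy version of this inference task, the sketch below compares linear regression with a small DNN for predicting one target gene from 978 landmark-gene inputs; the data are synthetic with a built-in non-linearity, so the numbers illustrate the idea rather than reproduce the published results.

```python
# Predict one target gene's expression from 978 landmark genes, comparing
# linear regression with a small DNN. Synthetic data with a non-linear
# target so the DNN has something to gain over the linear baseline.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 978))                            # landmark profiles
y = np.tanh(X[:, :10]).sum(axis=1) + 0.1 * rng.normal(size=3000)  # target gene

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

lin = LinearRegression().fit(X_tr, y_tr)
dnn = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=100,
                   random_state=1).fit(X_tr, y_tr)

print("linear R^2:", lin.score(X_te, y_te))
print("DNN R^2:   ", dnn.score(X_te, y_te))
```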

Following this study, Beltin et al. extensively investigated various biological networks in the same context of gene expression inference. They used a simplified representation of gene expression status and solved a binary classification task. To probe the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in a study incorporating TCGA and GTEx datasets, the random network model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this result shows that they cannot be regarded as a panacea, and a careful evaluation is required for each dataset and task. In particular, the result may not reflect full biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles because they mix expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.

“However, although recently sophisticated applications of deep learning have shown improved accuracy, this does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, and a particular research question, the technology and network data have to be chosen carefully.”

References

  1. Janes, K.A.; Yaffe, M.B. Data-driven modelling of signal-transduction networks. Nat. Rev. Mol. Cell Biol. 2006, 7, 820–828.
  2. Kreeger, P.K.; Lauffenburger, D.A. Cancer systems biology: A network modeling perspective. Carcinogenesis 2010, 31, 2–8.
  3. Vucic, E.A.; Thu, K.L.; Robison, K.; Rybaczyk, L.A.; Chari, R.; Alvarez, C.E.; Lam, W.L. Translating cancer ‘omics’ to improved outcomes. Genome Res. 2012, 22, 188–195.
  4. Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
  5. Hutter, C.; Zenklusen, J.C. The cancer genome atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
  6. Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
  7. Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
  8. Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
  9. Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
  10. Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
  11. Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
  12. Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
  13. Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
  14. Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
  15. Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
  16. Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
  17. Weinstein, J.N.; Collisson, E.A.; Mills, G.B.; Shaw, K.R.M.; Ozenberger, B.A.; Ellrott, K.; Shmulevich, I.; Sander, C.; Stuart, J.M.; Network, C.G.A.R.; et al. The cancer genome atlas pan-cancer analysis project. Nat. Genet. 2013, 45, 1113.
  18. The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
  19. King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
  20. Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
  21. Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
  22. Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
  23. Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
  24. Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
  25. Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
  26. Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.

Use of Systems Biology in Anti-Microbial Drug Development

Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965

In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.

Genome Sequences and Proteomic Structural Databases

In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then the three-dimensional structures and functional interactions of these gene products. This work has revealed the essential genes of the bacterium and provided a better understanding of the genetic diversity among strains that might confer a selective advantage (Coll et al., 2018). This will help us understand the modes of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have experimentally determined structures.

Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.

There is a dearth of information on the structural aspects of M. leprae proteins and their oligomeric and hetero-oligomeric organization, which has limited our understanding of the physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the Protein Data Bank (PDB). However, the high sequence similarity of protein-coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of M. leprae proteins. Mainly monomeric models built by single-template modeling have been defined and deposited in the SWISS-MODEL repository (Bienert et al., 2017), in ModBase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and for building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability, and impacts of mutations.

We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need to understand the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae: we first modeled the structure of this hetero-hexameric complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is an established drug for treating tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “Computational Saturation Mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to relate the predicted impacts of mutations on the structure to phenotypic rifampin-resistance outcomes in leprosy.
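
In spirit, computational saturation mutagenesis is a simple loop over positions and amino acids. The sketch below shows that loop; `predict_ddg` is a hypothetical stand-in for a stability predictor such as mCSM, and its interface is an assumption, not the real tool’s API.

```python
# Saturation mutagenesis loop: mutate every residue to all 19 alternatives
# and record the most destabilizing predicted stability change (ΔΔG).
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def saturation_mutagenesis(sequence, predict_ddg):
    """Return the most destabilizing predicted ΔΔG at each position."""
    worst = {}
    for pos, wild_type in enumerate(sequence, start=1):
        ddgs = [predict_ddg(pos, wild_type, mutant)
                for mutant in AMINO_ACIDS if mutant != wild_type]
        worst[pos] = min(ddgs)     # most negative = most destabilizing
    return worst

# Toy stand-in predictor for demonstration only (real work would query a
# structure-based tool such as mCSM).
scores = saturation_mutagenesis("MKTAYIAK",
                                lambda p, wt, mut: random.uniform(-2, 1))
print(scores)
```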

FIGURE 2

Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the ß-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for a color map graded from red (highly destabilizing) to white (neutral to stabilizing) (Vedithi et al., 2020). (B) One of the known mutations in the ß-subunit of RNA polymerase, the S437H substitution, which produced the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can alter the shape of the rifampin binding pocket and rifampin’s affinity for the ß-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen-bond interactions; ring-ring and intergroup interactions are depicted in cyan; aromatic interactions are shown in sky blue and carbonyl interactions as pink dotted lines; green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).

Examples of Understanding and Combatting Resistance

The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.

FIGURE 3

Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).

Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:

20th Anniversary and the Evolution of Computational Biology – International Society for Computational Biology

Featuring Computational and Systems Biology Program at Memorial Sloan Kettering Cancer Center, Sloan Kettering Institute (SKI), The Dana Pe’er Lab

Quantum Biology And Computational Medicine

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »


This AI Just Evolved From Companion Robot To Home-Based Physician Helper

Reporter: Ethan Coomber, Research Assistant III, Data Science and Podcast Library Development 

Article Author: Gil Press Senior Contributor Enterprise & Cloud @Forbes 

Twitter: @GilPress I write about technology, entrepreneurs and innovation.

Intuition Robotics announced today that it is expanding its mission of improving the lives of older adults to include enhancing their interactions with their physicians. The Israeli startup has developed the AI-based, award-winning proactive social robot ElliQ, which has spent over 30,000 days in older adults’ homes over the past two years. Now ElliQ will help increase patient engagement while offering primary care providers continuous, actionable data and insights for early detection and intervention.

The very big challenge Intuition Robotics set out to solve was to “understand how to create a relationship between a human and a machine,” says co-founder and CEO Dor Skuler. Unlike a number of unsuccessful high-profile social robots (e.g., Pepper) that tried to perform multiple functions in multiple settings, ElliQ has focused exclusively on older adults living alone. Understanding empathy and how to grow a trusting relationship were the key objectives of Intuition Robotics’ research project, as was continuously learning the specific (and changing) behavioral characteristics, habits, and preferences of the older adults participating in the experiment.

The results are impressive: 90% of users engage with ElliQ every day, without deterioration in engagement over time. When ElliQ proactively initiates deep conversational interactions with its users, there is a 70% response rate. Most importantly, the participants share something personal with ElliQ almost every day. “She has picked up my attitude… she’s figured me out,” says Deanna Dezern, an ElliQ user who describes her robot companion as “my sister from another mother.”

Studies have found that higher patient engagement leads to lower costs of delivering care and that the quality of the physician-patient relationship is positively associated with improved functional health. Typically, however, primary care physicians see their patients anywhere from once a month to once a year, even though about 85% of seniors in the U.S. have at least one chronic health condition. ElliQ, with the consent of its users, can provide data on the status of patients between office visits and facilitate timely and consistent communication between physicians and their patients.

Supporting the notion of a home-based physician assistant robot is the transformation of healthcare delivery in the U.S. More and more primary care physicians are moving from a fee-for-service business model, where doctors are paid according to the procedures used to treat a patient, to “capitation,” where doctors are paid a set amount for each patient they see. This shift in how doctors are compensated is gaining momentum as a key solution for reducing the skyrocketing costs of healthcare: “…inadequate, unnecessary, uncoordinated, and inefficient care and suboptimal business processes eat up at least 35%—and maybe over 50%—of the more than $3 trillion that the country spends annually on health care. That suggests more than $1 trillion is being squandered,” states “The Case for Capitation,” a Harvard Business Review article.

Under this new business model, physicians have a strong incentive to reduce or eliminate ER visits and hospitalizations, so ElliQ’s assistance in early intervention and support of proactive, preventative healthcare is highly valuable. ElliQ’s “new capabilities provide physicians with visibility into the patient’s condition at home while allowing seamless communication… can assist me and my team in early detection and mitigation of health issues, and it increases patients’ involvement in their care through more frequent engagement and communication,” says Dr. Peter Barker of Family Doctors, a Mass General Brigham-affiliated practice in Swampscott, MA, that is working with Intuition Robotics, in a statement.

With this new stage in its evolution, ElliQ becomes “a conversational agent for self-reported data on how people are doing based on what the doctor is telling us to look for and, at the same time, a super-simple communication channel between the physician and the patient,” says Skuler. As only 20% of an individual’s health has to do with the administration of healthcare, Skuler says the balance is already taken care of by ElliQ: encouraging exercise, watching nutrition, keeping mentally active, connecting to the outside world, and promoting a sense of purpose.

A recent article in Communications of the ACM pointed out that “usability concerns have for too long overshadowed questions about the usefulness and acceptability of digital technologies for older adults.” Specifically, the authors challenge the long-held assumption that accessibility and aging research “fall under the same umbrella despite the fact that aging is neither an illness nor a disability.”

For Skuler, Intuition Robotics’ offering represents a “pyramid of value.” At the foundation is the physical product, easy to use and operate and doing what it is expected to do. Then there is the layer of “building relationships based on trust and empathy,” with a lot of humor, social interaction, and activities for the users. On top are specific areas of value to older adults, the first of which is healthcare. There will be more in the future: anything that could help older adults live better lives, such as direct connections to the local community. “Healthcare is an interesting experiment and I’m very much looking forward to seeing what else the future holds for ElliQ,” says Skuler.

Original. Reposted with permission, 7/7/2021.

Other related articles published in this Open Access Online Scientific Journal include the Following:

The Future of Speech-Based Human-Computer Interaction
Reporter: Ethan Coomber
https://pharmaceuticalintelligence.com/2021/06/23/the-future-of-speech-based-human-computer-interaction/

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Read Full Post »


Yet another Success Story: Machine Learning to predict immunotherapy response

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Immunotherapy with immune-checkpoint blockers (ICBs) appears promising for various cancer types, offering a durable therapeutic advantage, yet only a fraction of cancer patients respond to this therapy. Biomarkers are required to adequately predict patients’ responses. The study discussed here addresses this issue with a systems approach that characterizes the anti-tumor immune response on the basis of the entire tumor microenvironment. The researchers built mechanistic biomarkers and cancer-specific response models using interpretable machine learning to predict patients’ responses to ICB.

The lymphatic and immune systems help the body defend itself by combating pathogens. The immune system functions as the body’s own police force, hunting down and eliminating pathogenic invaders.

According to Federica Eduati, Department of Biomedical Engineering at TU/e, “The immune system of the body is quite adept at detecting abnormally behaving cells. Cells that potentially grow into tumors or cancer in the future are included in this category. Once identified, the immune system attacks and destroys the cells.”

Immunotherapy and machine learning are joining forces to help the immune system solve one of its most vexing problems: detecting hidden tumorous cells in the human body.

It is the fundamental responsibility of our immune system to identify and remove alien invaders like bacteria or viruses, but also to identify risks within the body, such as cancer. However, cancer cells have sophisticated ways of escaping death by shutting off immune cells. Immunotherapy can reverse the process, but not for all patients and types of cancer. To unravel the mystery, Eindhoven University of Technology researchers used machine learning. They developed a model to predict whether immunotherapy will be effective for a patient using a simple trick. Even better, the model outperforms conventional clinical approaches.

The outcomes of this research were published on 30 June 2021 in the journal Patterns, in an article entitled “Interpretable systems biomarkers predict response to immune-checkpoint inhibitors”.

The Study

  • Characterization of the tumor microenvironment from RNAseq and prior knowledge
  • Multi-task machine-learning models for predicting antitumor immune responses
  • Identification of cancer-type-specific, interpretable biomarkers of immune responses
  • EaSIeR is a tool to predict biomarker-based immunotherapy response from RNA-seq

“Tumors also contain multiple types of immune and fibroblast cells, which can play pro- or anti-tumor roles and communicate among themselves,” said Oscar Lapuente-Santana, a doctoral researcher in the computational biology group. “We had to learn how the complicated regulatory mechanisms in the tumor microenvironment affect the ICB response. We used RNA-sequencing datasets to depict numerous components of the Tumor Microenvironment (TME) in a high-level illustration.”

Using computational algorithms and datasets from previous clinical patient care, the researchers investigated the TME.

Eduati explained

While RNA-sequencing databases are publicly available, information on which patients responded to ICB therapy is only available for a limited group of patients and cancer types. So, to tackle the data problem, we used a trick.

All 100 models learned in the randomized cross-validation were included in the EaSIeR tool. For each validation dataset, we used the corresponding cancer-type-specific model: SKCM for the melanoma Gide, Auslander, Riaz, and Liu cohorts; STAD for the gastric cancer Kim cohort; BLCA for the bladder cancer Mariathasan cohort; and GBM for the glioblastoma Cloughesy cohort. To make predictions for each task, the average of the 100 cancer-type-specific models was employed. The predictions of each dataset’s cancer-type-specific models were also compared to models generated for the remaining 17 cancer types.
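
As a concrete illustration of the averaging step described above, here is a minimal sketch; the `models_by_type` dictionary and cohort variables are hypothetical placeholders, not EaSIeR’s actual interface.

```python
# Average the predictions of the 100 models trained for one cancer type.
import numpy as np

def ensemble_predict(models, X):
    """Score patients with the mean prediction of a cancer-type ensemble."""
    preds = np.stack([m.predict(X) for m in models])   # (n_models, n_patients)
    return preds.mean(axis=0)

# Hypothetical usage: models_by_type["SKCM"] would hold the 100 melanoma
# models, so melanoma cohorts (Gide, Riaz, ...) are scored with that ensemble:
# scores = ensemble_predict(models_by_type["SKCM"], X_melanoma)
```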

From the same datasets, the researchers selected several surrogate immunological responses to be used as a measure of ICB effectiveness.

Lapuente-Santana stated

One of the most difficult aspects of our job was properly training the machine learning models. We were able to fix this by looking at alternative immune responses during the training process.

Some of the researchers employed the machine learning approach given in the paper to participate in the “Anti-PD1 Response Prediction DREAM Challenge.”

DREAM is an organization that runs crowdsourced challenges for benchmarking biomedical algorithms. “We were the first to compete in one of the sub-challenges under the name cSysImmunoOnco team,” Eduati remarks.

The researchers noted,

We applied machine learning to seek connections between the obtained system-based attributes and the immune response, estimated using 14 predictors (proxies) derived from previous publications. We treated these proxies as individual tasks to be predicted by our machine learning models, and we employed multi-task learning algorithms to jointly learn all tasks.
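
To make the multi-task setup tangible, the sketch below builds a small shared-trunk network with 14 output heads, one per proxy, in TensorFlow/Keras; the feature count, layer sizes, and random placeholder data are assumptions for illustration, not the published architecture.

```python
# Multi-task learning sketch: a shared trunk feeds 14 output heads that
# jointly predict the 14 immune-response proxies.
import numpy as np
import tensorflow as tf

n_features, n_tasks = 100, 14
X = np.random.rand(300, n_features).astype("float32")   # placeholder features
Y = np.random.rand(300, n_tasks).astype("float32")      # placeholder proxies

inputs = tf.keras.Input(shape=(n_features,))
shared = tf.keras.layers.Dense(64, activation="relu")(inputs)   # shared trunk
heads = [tf.keras.layers.Dense(1, name=f"proxy_{i}")(shared)
         for i in range(n_tasks)]                               # one head/task
model = tf.keras.Model(inputs, heads)
model.compile(optimizer="adam", loss="mse")   # one MSE loss per head, summed
model.fit(X, [Y[:, i:i + 1] for i in range(n_tasks)], epochs=3, verbose=0)
```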

The researchers discovered that their machine learning model surpasses biomarkers that are already utilized in clinical settings to evaluate ICB therapies.

But why are Eduati, Lapuente-Santana, and their colleagues using mathematical models to tackle a medical treatment problem? Is this going to take the place of the doctor?

Eduati explains

Mathematical models can provide an overview of the interconnections between individual molecules and cells while at the same time predicting a particular patient’s tumor behavior. This implies that immunotherapy with ICB can be personalized in a patient’s clinical setting. The models can aid physicians in their decisions about optimal therapy; it is vital to note, however, that they will not replace them.

Furthermore, the model helps determine which biological mechanisms are relevant for the immune response.

The researchers noted

Another advantage of our concept is that it does not need a dataset with known patient responses to immunotherapy for model training.

Further testing is required before these findings may be implemented in clinical settings.

Main Source:

Lapuente-Santana, Ó., van Genderen, M., Hilbers, P. A., Finotello, F., & Eduati, F. (2021). Interpretable systems biomarkers predict response to immune-checkpoint inhibitors. Patterns, 100293. https://www.cell.com/patterns/pdfExtended/S2666-3899(21)00126-4

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Inhibitory CD161 receptor recognized as a potential immunotherapy target in glioma-infiltrating T cells by single-cell analysis

Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/02/20/inhibitory-cd161-receptor-identified-in-glioma-infiltrating-t-cells-by-single-cell-analysis-2/

Immunotherapy may help in glioblastoma survival

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/03/16/immunotherapy-may-help-in-glioblastoma-survival/

Deep Learning for In-silico Drug Discovery and Drug Repurposing: Artificial Intelligence to search for molecules boosting response rates in Cancer Immunotherapy: Insilico Medicine @John Hopkins University

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/17/deep-learning-for-in-silico-drug-discovery-and-drug-repurposing-artificial-intelligence-to-search-for-molecules-boosting-response-rates-in-cancer-immunotherapy-insilico-medicine-john-hopkins-univer/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Cancer detection and therapeutics

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/02/cancer-detection-and-therapeutics/

Read Full Post »


The Future of Speech-Based Human-Computer Interaction

Reporter: Ethan Coomber, Research Assistant III

2021 LPBI Summer Internship in Data Science and Podcast Library Development
This article reports on research conducted at the Tokyo Institute of Technology, published on 9 June 2021.

As technology continues to advance, the human-computer relationship develops alongside it. As researchers and developers find new ways to improve a computer’s ability to recognize the distinct pitches that compose a human voice, the potential of the technology pushes past what people previously thought possible. This constant improvement has also allowed us to identify new challenges in voice-based technological interaction.

When humans interact with one another, we do not convey our message with our voices alone. There are a multitude of complexities to our emotional states and personality that cannot be captured simply through the sound coming out of our mouths. Aspects of our communication such as rhythm, tone, and pitch are essential to our understanding of one another. This presents a challenge to artificial intelligence, as technology cannot yet reliably pick up on these cues.

https://www.eurekalert.org/pub_releases/2021-06/tiot-tro060121.php

In the modern day, our interactions with voice-based devices and services continue to increase. In this light, researchers at Tokyo Institute of Technology and RIKEN, Japan, have performed a meta-synthesis to understand how we perceive and interact with the voice (and the body) of various machines. Their findings have generated insights into human preferences, and can be used by engineers and designers to develop future vocal technologies.

– Katie Seaborn

While it will always be difficult for technology to perfectly replicate human interaction, the inclusion of filler terms such as “I mean…”, “um”, and “like…” has been shown to improve humans’ interaction and comfort when communicating with technology. Humans prefer communicating with agents that match their personality and overall communication style. The illusion of making the artificial intelligence appear human has a dramatic effect on the overall comfort of the person interacting with the technology. Communication has also been shown to improve when the artificial intelligence comes across as happy or empathetic, for instance through a higher-pitched voice.

Using machine learning, computers are able to recognize patterns within human speech rather than requiring explicit programming for each specific pattern. This allows the technology to adapt to human tendencies as it continues to observe them. Over time, humans develop nuances in the way they speak and communicate, which frequently results in a tendency to shorten certain words. One of the more common examples is the expression “I don’t know”, which is frequently reduced to the phrase “dunno”. Using machine learning, computers can recognize this pattern and infer the human’s intention.

With advances in technology and the arrival of voice assistants in our lives, we are expanding our interactions to include computer interfaces and environments. While many advances are still needed to reach the desired level of communication, developers have identified the necessary steps toward achieving natural human-computer interaction.

Sources:

Tokyo Institute of Technology. “The role of computer voice in the future of speech-based human-computer interaction.” ScienceDaily. ScienceDaily, 9 June 2021.

Rev. “Speech Recognition Trends to Watch in 2021 and Beyond: Responsible AI.” Rev, 2 June 2021, http://www.rev.com/blog/artificial-intelligence-machine-learning-speech-recognition.

“The Role of Computer Voice in the Future of Speech-Based Human-Computer Interaction.” EurekAlert!, 1 June 2021, http://www.eurekalert.org/pub_releases/2021-06/tiot-tro060121.php.

Other related articles published in this Open Access Online Scientific Journal include the Following:

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2019/12/08/evolution-of-the-human-cell-genome-biology-field-of-gene-expression-gene-regulation-gene-regulatory-networks-and-application-of-machine-learning-algorithms-in-large-scale-biological-data-analysis/

The Human Genome Project
Reporter: Larry H Bernstein, MD, FCAP, Curator
https://pharmaceuticalintelligence.com/2015/09/09/the-human-genome-project/

Read Full Post »


Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy

Reporter: Srinivas Sriram, Research Assistant I
Research Team: Srinivas Sriram, Abhisar Anand

2021 LPBI Summer Intern in Data Science and Website Construction
This article reports on a research study conducted from January 2021 to May 2021.
This research was completed before the 2021 LPBI Summer Internship, which began on 6/15/2021.

The main aim of this study was to utilize the dataset to develop a DL network that could accurately predict new seizures from incoming data. To begin the study, our research group performed exploratory data analysis on the dataset and recognized its key defining pattern, visible in the line graph shown in The Study section below: traces recorded during seizures show major spikes to extreme hertz values, while traces from normal patients remain stable without such spikes. We utilized this pattern as the baseline for our model.

Conclusions and Future Improvements:

Through our system, we were able to create a prototype solution that predicts when seizures happen in a patient, using an accurate LSTM network and a reliable hardware system. This research could be implemented in hospitals with patients suffering from epilepsy, helping them as soon as they experience a seizure in order to prevent damage. However, future improvements need to be made to make this solution more viable in the healthcare industry, as listed below.

  • Needs to be implemented on a more reliable EEG headset (one that covers all regions of the brain and is less prone to the electrical disruptions seen in the prototype).
  • Needs to be tested on live patients to determine whether the approach is viable as a real solution to the problem.
  • The network can always be fine-tuned to maximize performance.
  • A better alert system can be implemented to provide as much help as possible.

These improvements, when implemented, can help provide a real solution to one of the most common diseases faced in the world. 

Background Information:

Epilepsy is described as a brain-disorder diagnostic category for multiple occurrences of seizures that happen within recurrent and/or brief timespans. According to the World Health Organization, seizure disorders, including epilepsy, are among the most common neurological diseases. Those who suffer seizures have a threefold higher risk of premature death. Epilepsy is often treatable, especially when physicians can provide the necessary treatment quickly. When untreated, however, seizures can cause physical, psychological, and emotional harm, including isolation from others. Quick diagnosis and treatment prevent suffering and save lives. The importance of a quick diagnosis of epilepsy led our research team to develop Deep Learning (DL) algorithms for the sole purpose of detecting epileptic seizures as soon as they occur.

Throughout the years, one common means of detecting epilepsy has emerged in the form of the electroencephalogram (EEG). EEGs can detect and compile “normal” and “abnormal” brain wave activity and “indicate brain activity or inactivity that correlates with physical, emotional, and intellectual activities”. EEG waves are classified mainly by brain wave frequency (EEG, 2020). The most commonly studied are delta, theta, alpha, sigma, and beta waves. Alpha waves, 8 to 12 hertz, are the key waves that occur in normal awake people and are the defining factor for the everyday function of the adult brain. Beta waves, 13 to 30 hertz, are the most common type of wave in both children and adults; they are found in the frontal and central areas of the brain and occur at a characteristic frequency which, if slowed, is likely to indicate dysfunction. Theta waves, 4 to 7 hertz, are also found in the front of the brain, but they slowly move backward as drowsiness increases and the brain enters the early stages of sleep; theta waves are known to be active during focal seizures. Delta waves, 0.5 to 4 hertz, are found in the frontal areas of the brain during deep sleep. Sigma waves, 12 to 16 hertz, occur during sleep. EEG detection of electrical brain-wave frequencies can be used to detect and diagnose seizures based on their deviation from usual brain-wave patterns.
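
As a small illustration of these band definitions, the following function maps a dominant frequency to its named band; boundary handling between the overlapping published ranges (e.g., sigma overlapping alpha and beta) is simplified.

```python
# Map a dominant EEG frequency (in hertz) to its named band, using the
# ranges described above with simplified, non-overlapping boundaries.
def eeg_band(freq_hz):
    if freq_hz < 0.5:
        return "sub-delta"
    if freq_hz < 4:
        return "delta"
    if freq_hz < 8:
        return "theta"
    if freq_hz < 13:
        return "alpha"
    return "beta"

for f in (2, 6, 10, 20):
    print(f, "Hz ->", eeg_band(f))
```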

In this particular research project, our research group aimed to develop a DL algorithm that, when implemented on a live, portable EEG brain-wave capturing device, could accurately detect that a patient was suffering a seizure as soon as it occurred. This would be accomplished by creating a network that could detect when the brain frequencies deviated from the normal frequency ranges.

The Study:

Line Graph representing EEG Brain Waves from a Seizure versus EEG Brain Waves from a normal individual. 

Source Dataset: https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition

To expand on the dataset: it is an EEG dataset compiled by Qiuyi Wu and Ernest Fokoue (2021) from the work of medical researchers R. Andrzejak, M.D., et al. (2001), which had been made public domain through the UCI Machine Learning Repository. We also confirmed fair-use permission with UCI. The data had been gathered by Andrzejak during examinations of 500 patients with a chronic seizure disorder. R.G. Andrzejak et al. (2001) recorded each entry in the EEG dataset over 23.6 seconds in a time-series data structure, and each row in the dataset represents one recorded patient. The continuous variables in the dataset are single EEG data points at specific points in time during the measuring period. At the end of each row, a y-variable indicates whether or not the patient had a seizure during the recording period. The continuous EEG variables for each patient varied widely based on whether the patient was experiencing a seizure at the time. The Wu & Fokoue dataset (2021) consists of one file of 11,500 rows, each with 178 sequential data points, concatenated from the original dataset of 5 data folders, each including 100 files of 23.6-second EEG recordings containing 4097 data points. Each folder contained a single, original subset: Subset 1 contained EEG data gathered during epileptic seizures; Subset 2 contained EEG data from brain tumor sites; Subset 3, data from a healthy site in patients in whom tumors had been located; and Subsets 4 and 5, data from non-seizure patients at rest with eyes open and closed, respectively.
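
To make the preprocessing concrete, here is a minimal sketch of loading the UCI table and binarizing the label for seizure detection; the file name and exact column layout (an index column, 178 signal columns, then y) are assumptions about the published CSV.

```python
# Prepare the Wu & Fokoue UCI table for binary classification: each row
# holds 178 EEG samples plus a label y in 1..5, where only y == 1 marks
# seizure activity.
import pandas as pd

df = pd.read_csv("epileptic_seizure_recognition.csv")   # placeholder path
X = df.iloc[:, 1:179].to_numpy()           # 178 sequential EEG points per row
y = (df["y"] == 1).astype(int).to_numpy()  # 1 = seizure, 0 = all other states

X = X.reshape(-1, 178, 1)                  # (samples, time steps, channels)
print(X.shape, y.mean())                   # seizure fraction should be ~0.2
```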

Based on the described data, our team recognized that a Recurrent Neural Network (RNN) was needed to take the sequential data as input and return an output of whether the sequence represented a seizure or not. However, plain RNN models are known to struggle as sequences grow long, with training becoming slow and unstable. To address this, our group decided to implement a long short-term memory (LSTM) model. After deciding on the architecture, we trained the model in two different DL frameworks in Python, TensorFlow and PyTorch. Through various rounds of retesting and redesigning, we trained an accurate model in each framework that not only fit the training data well but also predicted new data in the testing set accurately (98 percent accuracy on the unseen data). These LSTM networks classify EEG data as normal when the brain waves are stable, and immediately flag seizure data when a dramatic spike occurs.
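
A minimal sketch of such an LSTM classifier in TensorFlow/Keras is shown below, assuming inputs shaped (samples, 178, 1) and a binary seizure label; the layer sizes are illustrative rather than the exact configuration used in the project.

```python
# Binary seizure classifier: one LSTM layer over a 178-point EEG window,
# followed by a small dense head that outputs P(seizure).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(178, 1)),           # one EEG window
    tf.keras.layers.LSTM(64),                        # captures spike patterns
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(seizure)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# Training on the arrays prepared above:
# model.fit(X, y, validation_split=0.2, epochs=10)
```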

After training our model, we implemented it in a real-life prototype scenario using a Single-Board Computer (SBC), the Raspberry Pi 4, and a live-capturing EEG headset, the Muse 2 Headband. The two hardware components sync over Bluetooth, and the headband returns EEG data to the Raspberry Pi, which processes it. Through the muselsl API in Python, we were able to retrieve this EEG data in a format similar to that used during training. The new input data are fed into our LSTM network (the TensorFlow model was chosen for the prototype owing to its better performance than the PyTorch network), which then outputs the result for the live-captured EEG data in small intervals. This constant cycle can accurately flag a seizure as soon as it occurs, as batches of EEG data are fed into the LSTM network. Part of the reason our research group chose the Muse headband was not only its compatibility with Python but also the fact that it allowed us to simulate seizure-like data. Because none of our members had epilepsy, we had to find a reliable way of testing the model on new data; through electrical disruptions in the wearable Muse headband, we were able to simulate seizures that exercised our network’s predictions. In our program, we implemented an alert system that emails the patient’s doctor as soon as a seizure is detected.
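
The live loop might look like the following hedged sketch: the Muse 2 streams EEG over LSL (started separately with the muselsl tool), pylsl pulls samples on the Raspberry Pi, and fixed-size windows are scored by the trained LSTM (`model` from the previous sketch). The window size, single-channel simplification, and the `send_alert_email` stand-in are assumptions, not the project’s exact implementation.

```python
# Live scoring loop: pull EEG samples from an LSL stream, batch them into
# 178-point windows, and score each window with the trained LSTM.
import numpy as np
from pylsl import StreamInlet, resolve_byprop

def send_alert_email(message):
    print("ALERT:", message)   # hypothetical stand-in for the email alert

streams = resolve_byprop("type", "EEG", timeout=10)   # stream opened via muselsl
assert streams, "no EEG stream found; start one with `muselsl stream`"
inlet = StreamInlet(streams[0])

window = []
while True:
    sample, _ = inlet.pull_sample()
    window.append(sample[0])              # first channel only, for simplicity
    if len(window) == 178:                # same window length as training rows
        x = np.array(window).reshape(1, 178, 1)
        # `model` is the LSTM from the previous sketch
        if model.predict(x, verbose=0)[0, 0] > 0.5:
            send_alert_email("Possible seizure detected")
        window = []
```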

Individual wearing the Muse 2 Headband

Image Source: https://www.techguide.com.au/reviews/gadgets-reviews/muse-2-review-device-help-achieve-calm-meditation/

Sources Cited:

Wu, Q. & Fokoue, E. (2021).  Epileptic seizure recognition data set: Data folder & Data set description. UCI Machine Learning Repository: Epileptic Seizure Recognition. Jan. 30. Center for Machine Learning and Intelligent Systems, University of California Irvine.

Nayak, C. S. (2020). EEG normal waveforms. StatPearls [Internet]. U.S. National Library of Medicine, 31 Jul. 2020, www.ncbi.nlm.nih.gov/books/NBK539805/#.

Epilepsy. (2019). World Health Organization Fact Sheet. Jun. https://www.who.int/news-room/fact-sheets/detail/epilepsy

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves

Reporter: Abhisar Anand, Research Assistant I

https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

Deep Learning-Assisted Diagnosis of Cerebral Aneurysms

Reporter: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

A new treatment for depression and epilepsy – Approval of external Trigeminal Nerve Stimulation (eTNS) in Europe

Reporter: Howard Donohue, PhD (EAW)

https://pharmaceuticalintelligence.com/2012/10/07/a-new-treatment-for-depression-and-epilepsy-approval-of-external-trigeminal-nerve-stimulation-etns-in-europe/

Mutations in a Sodium-gated Potassium Channel Subunit Gene related to a subset of severe Nocturnal Frontal Lobe Epilepsy

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2012/10/22/mutations-in-a-sodium-gated-potassium-channel-subunit-gene-to-a-subset-of-severe-nocturnal-frontal-lobe-epilepsy/

Read Full Post »


Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves

Reporter: Abhisar Anand, Research Assistant I
Research Team: Abhisar Anand, Srinivas Sriram

2021 LPBI Summer Internship in Data Science and Website construction.
This article reports on a research study conducted through December 2020.
The research was completed before the 2021 LPBI Summer Internship, which began on 6/15/2021.

As the field of Artificial Intelligence progresses, various algorithms have been implemented by researchers to classify emotions from EEG signals. Researchers from China and Singapore released a paper (“An Investigation of Deep Learning Models for EEG-Based Emotion Recognition”) analyzing different types of DL model architectures, including deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM) networks, and a hybrid of CNN and LSTM (CNN-LSTM). The dataset used in this investigation was the DEAP dataset, which consists of EEG signals from participants who watched 40 one-minute music videos and then rated them in terms of arousal, valence, like/dislike, dominance, and familiarity. The investigation found that the CNN (90.12%) and CNN-LSTM (94.7%) models had the highest performance of the batch of DL models. The DNN model, by contrast, trained very quickly but could not match the accuracy of the other models. The LSTM model also underperformed, and its training was much slower because convergence was difficult to achieve.
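
To give a sense of what such a hybrid looks like, here is a hedged TensorFlow/Keras sketch of a CNN-LSTM of the kind the paper found most accurate; the DEAP-like input shape (8064 time steps × 32 channels) follows the public dataset description, but all layer sizes and the two-class output are illustrative assumptions.

```python
# CNN-LSTM hybrid: 1-D convolutions extract local waveform features,
# pooling shortens the sequence, and an LSTM models the temporal order.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8064, 32)),         # time steps x channels
    tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., low/high valence
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```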

This research into the various model architectures provides a sense of what the future of emotion classification with AI holds. These deep learning models can be implemented in a variety of scenarios across the world to help detect emotions where doing so is otherwise difficult. However, more research on model training is needed to ensure that classification accuracy remains high. In addition, newer and more reliable hardware could provide easy-to-access, portable EEG collection devices usable in many different settings. Overall, although future improvements are needed, the prospect of accurately detecting emotions in all people is looking much brighter thanks to the innovation of AI in the neuroscience field.

Emotions are a key factor in any person’s day-to-day life. Most of the time, we as humans can detect these emotions through physical cues such as movements, facial expressions, and tone of voice. In certain individuals, however, it can be hard to identify emotions through visible physical cues. Recent studies in machine learning and AI demonstrate progress in detecting emotions through brainwaves, more specifically EEG signals. Researchers across the world are using this combination of EEG and AI to predict the emotional state an individual is in at any given moment.

Emotion classification based on brain wave: a survey (Figure 4)

Image Source: https://hcis-journal.springeropen.com/articles/10.1186/s13673-019-0201-x

EEGs can detect and compile normal and abnormal brain wave activity and indicate brain activity or inactivity that correlates with physical, emotional, and intellectual activities. EEG signals are classified mainly by brain wave frequency. The most commonly studied are delta, theta, alpha, sigma, and beta waves. Alpha waves, 8 to 12 hertz, are the key waves that occur in normal awake people and are the defining factor for the everyday function of the adult brain. Beta waves, 13 to 30 hertz, are the most common type of wave in both children and adults; they are found in the frontal and central areas of the brain and occur at a characteristic frequency which, if slowed, is likely to indicate dysfunction. Theta waves, 4 to 7 hertz, are also found in the front of the brain, but they slowly move backward as drowsiness increases and the brain enters the early stages of sleep; theta waves are known to be active during focal seizures. Delta waves, 0.5 to 4 hertz, are found in the frontal areas of the brain during deep sleep. Sigma waves, 12 to 16 hertz, occur during sleep. These EEG signals can help with the detection of emotions, based on the frequency bands in which the signals occur and on the activity of the signals (whether they are active or relatively calm).

Sources:

Zhang, Yaqing, et al. “An Investigation of Deep Learning Models for EEG-Based Emotion Recognition.” Frontiers in Neuroscience, vol. 14, 2020. Crossref, doi:10.3389/fnins.2020.622759.

Nayak, Chetan S., and Arayamparambil C. Anilkumar. "EEG Normal Waveforms." National Center for Biotechnology Information, StatPearls Publishing LLC., 4 May 2021, http://www.ncbi.nlm.nih.gov/books/NBK539805.

Other related articles published in this Open Access Online Scientific Journal include the Following:

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy
Reporter: Srinivas Sriram, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-the-instant-prediction-of-patients-with-epilepsy/

Prediction of Cardiovascular Risk by Machine Learning (ML) Algorithm: Best performing algorithm by predictive capacity had area under the ROC curve (AUC) scores: 1st, quadratic discriminant analysis; 2nd, NaiveBayes and 3rd, neural networks, far exceeding the conventional risk-scaling methods in Clinical Use
Curator: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2019/07/04/prediction-of-cardiovascular-risk-by-machine-learning-ml-algorithm-best-performing-algorithm-by-predictive-capacity-had-area-under-the-roc-curve-auc-scores-1st-quadratic-discriminant-analysis/

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes
Reporter: Amandeep Kaur, B.Sc., M.Sc.
https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Deep Learning-Assisted Diagnosis of Cerebral Aneurysms
Reporter: Dror Nir, PhD
https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

Mutations in a Sodium-gated Potassium Channel Subunit Gene related to a subset of severe Nocturnal Frontal Lobe Epilepsy
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2012/10/22/mutations-in-a-sodium-gated-potassium-channel-subunit-gene-to-a-subset-of-severe-nocturnal-frontal-lobe-epilepsy/

A new treatment for depression and epilepsy – Approval of external Trigeminal Nerve Stimulation (eTNS) in Europe
Reporter: Howard Donohue, PhD (EAW)
https://pharmaceuticalintelligence.com/2012/10/07/a-new-treatment-for-depression-and-epilepsy-approval-of-external-trigeminal-nerve-stimulation-etns-in-europe/

Read Full Post »


Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance using routinely collected medical data. The researchers described their AI model as notable and distinctive because it is specifically designed to perform assessments at the population level.

The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated that "The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care."

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and tested on data from more than 236,000 patients. The data used to build the algorithm included each patient's medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the ROC curve (AUC) of 80.26%, i.e., 0.8026 on the conventional 0-to-1 scale.
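For readers unfamiliar with the metric, the snippet below shows how a test AUC is computed for a binary onset-risk model on held-out data. The features, labels, and model here are synthetic stand-ins, not the study's pipeline.

```python
# Compute test AUC for a toy binary risk model (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))             # stand-in for administrative features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1).astype(int)  # toy onset label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]    # predicted onset probability
print(f"test AUC: {100 * roc_auc_score(y_test, risk):.2f}")  # on a 0-100 scale
```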

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases by targeting specific cohorts that may be at risk of adverse outcomes.

The research group stated that "Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities."

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »


Renal tumor macrophages linked to recurrence are identified using single-cell protein activity analysis

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

When malignancy returns after a period of remission, it is called a cancer recurrence. This can happen weeks, months, or even years after the initial, or primary, cancer has been treated. The likelihood of recurrence depends on the type of primary cancer. Cancer can recur because small patches of cancer cells may remain in the body after treatment; over time, these cells may multiply and grow large enough to cause symptoms or show up on tests. The type of cancer determines when and where cancer recurs, and some malignancies have a predictable recurrence pattern.

Recurrent cancer is named for the site where the primary cancer first appeared, even if it recurs in a different part of the body. If breast cancer recurs distantly in the liver, for example, it is still referred to as breast cancer rather than liver cancer; doctors call it metastatic breast cancer. Despite treatment, many people with kidney cancer eventually develop cancer recurrence and incurable metastatic illness.

The most frequent type of kidney cancer is Renal Cell Carcinoma (RCC), which is responsible for over 90% of all kidney malignancies. The appearance of cancer cells under a microscope distinguishes the various forms of RCC. Knowing the RCC subtype can help the doctor assess whether the cancer is caused by an inherited genetic condition and choose the best treatment option. The three most prevalent RCC subtypes are as follows:

  • Clear cell RCC
  • Papillary RCC
  • Chromophobe RCC

Clear Cell RCC (ccRCC) is the most prevalent subtype of RCC. The cells are clear or pale in appearance and are referred to as clear cell or conventional RCC. Around 70% of people with renal cell cancer have ccRCC. These cells may grow slowly or rapidly. According to the American Society of Clinical Oncology (ASCO), clear cell RCC responds favorably to treatments like immunotherapy and therapies that target specific proteins or genes.

Researchers at Columbia University’s Vagelos College of Physicians and Surgeons have developed a novel method for identifying which patients are most likely to have cancer relapse following surgery.

The study

Their findings are detailed in a study published in the journal Cell entitled, “Single-Cell Protein Activity Analysis Identifies Recurrence-Associated Renal Tumor Macrophages.” The researchers show that the presence of a previously unknown type of immune cell in kidney tumors can predict who will have cancer recurrence.

According to co-senior author Charles Drake, MD, PhD, adjunct professor of medicine at Columbia University Vagelos College of Physicians and Surgeons and the Herbert Irving Comprehensive Cancer Center,

the findings imply that the existence of these cells could be used to identify individuals at high risk of disease recurrence following surgery who may be candidates for more aggressive therapy.

As Aleksandar Obradovic, an MD/PhD student at Columbia University Vagelos College of Physicians and Surgeons and the study’s co-first author, put it,

it’s like looking down over Manhattan and seeing that enormous numbers of people from all over travel into the city every morning. We need deeper details to understand how these different commuters engage with Manhattan residents: who are they, what do they enjoy, where do they go, and what are they doing?

To learn more about the immune cells that invade kidney cancers, the researchers employed single-cell RNA sequencing. Obradovic remarked,

In many investigations, single-cell RNA sequencing misses up to 90% of gene activity, a phenomenon known as gene dropout.

The researchers next tackled gene dropout by designing a prediction algorithm that can identify which genes are active based on the expression of other genes in the same family. “Even when a lot of data is absent owing to dropout, we have enough evidence to estimate the activity of the upstream regulator gene,” Obradovic explained. “It’s like when playing ‘Wheel of Fortune,’ because I can generally figure out what’s on the board even if most of the letters are missing.”

The meta-VIPER algorithm is based on the VIPER algorithm developed in Andrea Califano's group; Califano is head of the Herbert Irving Comprehensive Cancer Center's JP Sulzberger Columbia Genome Center and the Clyde and Helen Wu Professor of Chemical and Systems Biology. The researchers believe that by including meta-VIPER, they can reliably detect the activity of 70% to 80% of all regulatory genes in each cell, mitigating cell-to-cell dropout.
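A highly simplified illustration of the intuition behind VIPER-style inference is sketched below: even if a regulator's own transcript drops out, its activity can be estimated from the expression of its known targets. Real VIPER and metaVIPER use enrichment statistics over signed, weighted regulons; the gene names and the mean-expression proxy here are hypothetical.

```python
# Toy regulon-activity inference: estimate a regulator's activity from its targets.
import numpy as np
import pandas as pd

# toy single-cell expression matrix: genes x cells (already normalized)
expr = pd.DataFrame(
    np.random.default_rng(1).poisson(2, size=(6, 4)).astype(float),
    index=["TF1", "tgtA", "tgtB", "tgtC", "tgtD", "tgtE"],
    columns=[f"cell{i}" for i in range(4)],
)
expr.loc["TF1", "cell2"] = 0.0            # simulate dropout of the regulator itself

regulon = {"TF1": ["tgtA", "tgtB", "tgtC", "tgtD"]}   # hypothetical target set

for tf, targets in regulon.items():
    target_expr = expr.loc[targets]
    # activity proxy: mean target expression, z-scored across cells
    activity = target_expr.mean(axis=0)
    activity = (activity - activity.mean()) / activity.std()
    print(f"{tf} inferred activity per cell:\n{activity.round(2)}")
```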

Using these two methods, the researchers were able to examine 200,000 tumor cells and normal cells in surrounding tissues from eleven patients with ccRCC who underwent surgery at Columbia’s urology department.

The researchers discovered a unique subpopulation of immune cells that can only be found in tumors and is linked to disease relapse after initial treatment. The top genes that control the activity of these immune cells were discovered through the VIPER analysis. This “signature” was validated in the second set of patient data obtained through a collaboration with Vanderbilt University researchers; in this second set of over 150 patients, the signature strongly predicted recurrence.

"These findings raise the intriguing possibility that these macrophages are not only markers of more risky disease, but may also be responsible for the disease's recurrence and progression," Obradovic said, adding that targeting these cells could improve clinical outcomes.

Drake said,

Our research shows that when the two techniques are combined, they are extremely effective at characterizing cells within a tumor and in surrounding tissues, and they should have a wide range of applications, even beyond cancer research.

Main Source

Single-cell protein activity analysis identifies recurrence-associated renal tumor macrophages

https://www.cell.com/cell/fulltext/S0092-8674(21)00573-0

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

Renal (Kidney) Cancer: Connections in Metabolism at Krebs cycle  and Histone Modulation

Curator: Demet Sag, PhD, CRA, GCP

https://pharmaceuticalintelligence.com/2015/10/14/renal-kidney-cancer-connections-in-metabolism-at-krebs-cycle-through-histone-modulation/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Bioinformatic Tools for Cancer Mutational Analysis: COSMIC and Beyond

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2015/12/02/bioinformatic-tools-for-cancer-mutational-analysis-cosmic-and-beyond-2/

Deep-learning AI algorithm shines new light on mutations in once obscure areas of the genome

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2014/12/24/deep-learning-ai-algorithm-shines-new-light-on-mutations-in-once-obscure-areas-of-the-genome/

Premalata Pati, PhD, PostDoc in Biological Sciences, Medical Text Analysis with Machine Learning

https://pharmaceuticalintelligence.com/2021-medical-text-analysis-nlp/premalata-pati-phd-postdoc-in-pharmaceutical-sciences-medical-text-analysis-with-machine-learning/

Read Full Post »


Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Seeing “through” the cancer with the power of data analysis — possible with the help of artificial intelligence. Credit: MPI f. Molecular Genetics/ Ella Maru Studio
Image Source: https://medicalxpress.com/news/2021-04-sum-mutations-cancer-genes-machine.html

Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. Early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as they can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high- or low-risk groups has led many research teams, from the biomedical and bioinformatics fields, to apply machine learning (ML) and Artificial Intelligence (AI) methods. These techniques have therefore been used to build new predictive algorithms that model the progression and treatment of cancerous conditions.

In the majority of human cancers, heritable loss of gene function through cell division may be mediated as often by epigenetic as by genetic abnormalities. Epigenetic modification occurs through a process of interrelated changes in CpG island methylation and histone modifications. Candidate gene approaches of cell cycle, growth regulatory and apoptotic genes have shown epigenetic modification associated with loss of cognate proteins in sporadic pituitary tumors.

On 11th November 2020, researchers from the University of California, Irvine, advanced the understanding of epigenetic mechanisms in tumorigenesis and revealed a previously undetected repertoire of cancer driver genes. The study was published in "Science Advances".

Researchers were able to identify novel tumor suppressor genes (TSGs) and oncogenes (OGs), particularly those with rare mutations, using a new prediction algorithm called DORGE (Discovery of Oncogenes and tumor suppressor genes using Genetic and Epigenetic features), which integrates the most comprehensive collection of genetic and epigenetic data.

The senior author, Wei Li, Ph.D., the Grace B. Bell chair and professor of bioinformatics in the Department of Biological Chemistry at the UCI School of Medicine, said

Existing bioinformatics algorithms do not sufficiently leverage epigenetic features to predict cancer driver genes, even though epigenetic alterations are known to be associated with cancer driver genes.

The Study

This study demonstrated how cancer driver genes, predicted by DORGE, included both known cancer driver genes and novel driver genes not reported in current literature. In addition, researchers found that the novel dual-functional genes, which DORGE predicted as both TSGs and OGs, are highly enriched at hubs in protein-protein interaction (PPI) and drug/compound-gene networks.

Prof. Li explained that the DORGE algorithm successfully leveraged public data to discover the genetic and epigenetic alterations that play significant roles in cancer driver gene dysregulation, and that it could be instrumental in improving cancer prevention, diagnosis, and treatment efforts in the future.
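As a generic sketch of the kind of prediction DORGE performs, the snippet below trains a penalized classifier to separate labeled driver genes from neutral genes using combined genetic and epigenetic features. The feature names and data are hypothetical, and this is not the authors' actual pipeline.

```python
# Toy driver-gene classifier over combined genetic and epigenetic features.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 400
features = pd.DataFrame({
    "missense_mut_rate": rng.gamma(2.0, 1.0, n),     # genetic features (hypothetical)
    "truncating_mut_rate": rng.gamma(1.5, 1.0, n),
    "promoter_methylation": rng.beta(2, 5, n),        # epigenetic features (hypothetical)
    "H3K4me3_peak_length": rng.gamma(3.0, 2.0, n),
})
is_driver = rng.integers(0, 2, n)                     # toy labels: known driver vs neutral

X = StandardScaler().fit_transform(features)
clf = LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                         solver="saga", max_iter=5000).fit(X, is_driver)

scores = clf.predict_proba(X)[:, 1]                   # ranked driver probability per gene
print("top candidate indices:", np.argsort(scores)[::-1][:5])
```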

Another new algorithmic approach to identifying cancer genes by machine learning has been carried out by a team of researchers at the Max Planck Institute for Molecular Genetics (MPIMG) in Berlin and the Institute of Computational Biology of Helmholtz Zentrum München, who combined a wide variety of data, analyzed it with "Artificial Intelligence", and identified numerous cancer genes. They termed the algorithm EMOGI (Explainable Multi-Omics Graph Integration). EMOGI can predict which genes cause cancer, even if their DNA sequence is not changed. This opens up new perspectives for targeted cancer therapy in personalized medicine and for the development of biomarkers. The research was published in Nature Machine Intelligence on 12th April 2021.

In cancer, cells get out of control. They proliferate and push their way into tissues, destroying organs and thereby impairing essential vital functions. This unrestricted growth is usually induced by an accumulation of DNA changes in cancer genes—i.e. mutations in these genes that govern the development of the cell. But some cancers have only very few mutated genes, which means that other causes lead to the disease in these cases.

The Study

Overlap of EMOGI’s positive predictions with known cancer genes (KCGs) and candidate cancer genes
Image Source: https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-021-00325-y/MediaObjects/42256_2021_325_MOESM1_ESM.pdf

The aims of the study are represented under four main headings:

  • Additional targets for personalized medicine
  • Better results by combination
  • In search of hints for further studies
  • Suitable for other types of diseases as well

The team was headed by Annalisa Marsico. The team used the algorithm to identify 165 previously unknown cancer genes. The sequences of these genes are not necessarily altered; apparently, dysregulation of these genes alone can lead to cancer. All of the newly identified genes interact closely with well-known cancer genes and were shown to be essential for the survival of tumor cells in cell culture experiments. EMOGI can also explain the relationships in the cell's machinery that make a gene a cancer gene. The software integrates tens of thousands of data sets generated from patient samples, containing information about DNA methylation, the activity of individual genes, and the interactions of proteins within cellular pathways, in addition to sequence data with mutations. In these data, a deep-learning algorithm detects the patterns and molecular principles that lead to the development of cancer.

Marsico says

Ideally, we obtain a complete picture of all cancer genes at some point, which can have a different impact on cancer progression for different patients

Unlike traditional cancer treatments such as chemotherapy, personalized treatments are tailored to the exact type of tumor. "The goal is to choose the best treatment for each patient, the most effective treatment with the fewest side effects. In addition, molecular properties can be used to identify cancers that are already in the early stages."

Roman Schulte-Sasse, a doctoral student on Marsico’s team and the first author of the publication says

To date, most studies have focused on pathogenic changes in sequence, or cell blueprints. At the same time, it has recently become clear that epigenetic perturbation or dysregulation of gene activity can also lead to cancer.

For this reason, the researchers merged sequence data reflecting blueprint failures with information representing events in cells. Initially, the scientists confirmed that mutations, or amplification of genomic segments, were the leading cause of cancer. In a second step, they identified gene candidates that are less directly related to the genes that cause cancer.

Clues for future directions

The researchers' new program adds a considerable number of new entries to the list of suspected cancer genes, which has grown to between 700 and 1,000 in recent years. It was only by combining bioinformatics analysis with the newest Artificial Intelligence (AI) methods that the researchers were able to track down the hidden genes.

Schulte-Sasse says, "The interactions of proteins and genes can be mapped as a mathematical network, known as a graph." He explained by giving the example of a railroad network: each station corresponds to a protein or gene, and each interaction between them is a train connection. With the help of deep learning, the very algorithms that have helped artificial intelligence make a breakthrough in recent years, the researchers were able to discover even those train connections that had previously gone unnoticed. Schulte-Sasse had the computer analyze tens of thousands of different network maps from 16 different cancer types, each containing between 12,000 and 19,000 data points.
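Staying with the railroad analogy, the sketch below implements one graph-convolution step of the kind a GCN like EMOGI stacks: each gene's multi-omics feature vector is updated by a normalized average over its network neighbors. Shapes, data, and weights are toy stand-ins, not the published model.

```python
# One graph-convolution step over a toy protein-protein interaction network.
import numpy as np

n_genes, n_feats, n_hidden = 5, 4, 8
rng = np.random.default_rng(3)

A = np.array([[0, 1, 1, 0, 0],            # adjacency of the toy "railroad map"
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(n_genes, n_feats))   # per-gene features: mutations, methylation, expression...
W = rng.normal(size=(n_feats, n_hidden))  # learnable weight matrix

A_hat = A + np.eye(n_genes)               # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization (Kipf & Welling)

H = np.maximum(0, A_norm @ X @ W)         # one GCN layer with ReLU
print(H.shape)                            # (5, 8): hidden representation per gene
```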

Many more interesting details are hidden in the data. Patterns dependent on the particular cancer and tissue were seen, which the researchers interpret as evidence that tumors are triggered by different molecular mechanisms in different organs.

Marsico explains

The EMOGI program is not limited to cancer, the researchers emphasize. In theory, it can be used to integrate diverse sets of biological data and find patterns there. It could be useful to apply our algorithm for similarly complex diseases for which multifaceted data are collected and where genes play an important role. An example might be complex metabolic diseases such as diabetes.

Main Source

New prediction algorithm identifies previously undetected cancer driver genes

https://advances.sciencemag.org/content/6/46/eaba6784  

Integration of multiomics data with graph convolutional networks to identify new cancer genes and their associated molecular mechanisms

https://www.nature.com/articles/s42256-021-00325-y#citeas

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis

Curator & Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/12/08/evolution-of-the-human-cell-genome-biology-field-of-gene-expression-gene-regulation-gene-regulatory-networks-and-application-of-machine-learning-algorithms-in-large-scale-biological-data-analysis/

Cancer detection and therapeutics

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/02/cancer-detection-and-therapeutics/

Free Bio-IT World Webinar: Machine Learning to Detect Cancer Variants

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2016/05/04/free-bio-it-world-webinar-machine-learning-to-detect-cancer-variants/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Premalata Pati, PhD, PostDoc in Biological Sciences, Medical Text Analysis with Machine Learning

https://pharmaceuticalintelligence.com/2021-medical-text-analysis-nlp/premalata-pati-phd-postdoc-in-pharmaceutical-sciences-medical-text-analysis-with-machine-learning/

Read Full Post »


Fighting Chaos with care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur, BSc, MSc (Exp. 6/2021)

According to the Global Health Security Index released by Johns Hopkins University in October 2019, in collaboration with the Nuclear Threat Initiative (NTI) and The Economist Intelligence Unit (EIU), the United States was ranked the country best prepared to tackle a future pandemic or health emergency.

The tables turned within one year of the outbreak of the novel coronavirus COVID-19. By the end of March 2021, the country with the highest COVID-19 case and death counts in the world was the United States. According to the latest numbers provided by the World Health Organization (WHO), there were more than 540,000 deaths and more than 30 million confirmed cases in the United States.

Joia Mukherjee, associate professor of global health and social medicine in the Blavatnik Institute at Harvard Medical School, said,

“When we think about how to balance control of an epidemic over chaos, we have to double down on care and concern for the people and communities who are hardest hit”.

She also added that the U.S. possesses all the necessary building blocks required for a health system to work, but lacks the trust, leadership, engagement, and care to assemble them into a working system.

Mukherjee noted issues with the Index: it undervalued the organized, integrated systems necessary to help the public meet their needs for clinical care. Another underestimated element of real health security was clear messaging and social support, which make preventive public health measures effective and sustainable.

Mukherjee is chief medical officer at Partners In Health (PIH), an organization focused on strengthening community-based health care delivery. She is also one of the core HMS community members playing an important role in constructing a more comprehensive response to the pandemic across the U.S. With years of experience, they are training global health care workers, analyzing results, and building an integrated health system to fight the widespread health emergency caused by the coronavirus around the world.

Mukherjee encouraged strengthening community consensus to contain this infectious disease epidemic. She identified several crucial steps: testing people with symptoms of coronavirus infection, isolating infected individuals while providing them with necessary resources, and providing clinical treatment and care to those in need. Community engagement and material support, Mukherjee said, are not just idealistic goals; they are essential components of a functioning health care system during a coronavirus outbreak.

Continued vigilance, including social distancing and tracing the personal contacts of infected individuals, remains important because old-school public health approaches cannot simply be replaced by new technologies such as smartphone applications or biomedical advances.

Public health specialists emphasized that limiting infection is the single most vital strategy for controlling the outbreak in the near future, even as the population is being vaccinated. Slowing the spread of disease is crucial to restrict the natural emergence of more dangerous variants that could potentially escape the immune protection generated by the new vaccines as well as by natural immune defenses.

Making Crucial connections

Treatment is more expensive and complicated in areas with fewer health facilities, said Paul Farmer, the Kolokotrones University Professor at Harvard and chair of the HMS Department of Global Health and Social Medicine; he called this situation treatment nihilism. Where resources are short, most energy is focused on public health care and prevention efforts. The U.S. has the resources to cope with the increasing demand for hospital space and is developing vaccines, but, as many experts have noted, it suffers from a form of containment nihilism: the belief that prevention and infection containment are unattainable.

Farmer said that integrating the necessary elements, clinical care, therapies, vaccines, preventive measures, and social support, into a single comprehensive plan is the best approach for a better response to COVID-19. As one of the founders of Partners In Health, he understands the importance of community trust and an integrated health care system in fighting this pandemic, drawing on years of experience with his colleagues from HMS and PIH fighting epidemics of HIV, Ebola, cholera, tuberculosis, and other infectious and non-infectious diseases.

PIH launched the Massachusetts Community Tracing Collaborative (CTC), a statewide contact tracing initiative run in partnership with several state bodies, local boards of health, and PIH. The CTC was set up in April 2020 by Governor Charlie Baker, with leadership from HMS faculty, to build a unified response to COVID-19 and create a foundation for a long-term movement toward a more integrated, community-based health care system.

Contact tracing involves reaching out to individuals who are COVID-19 positive, identifying the people who came into close contact with them, screening those contacts for coronavirus symptoms, and encouraging them to seek testing and take the necessary precautions to break the chain of infection in the community.
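As a toy illustration, this process can be pictured as a one-step traversal of a contact graph: start from confirmed cases, collect their close contacts, and flag them for outreach and testing. The names and graph below are hypothetical, not the CTC's actual tooling.

```python
# Toy contact-tracing step over an undirected contact graph.
from collections import defaultdict

contacts = defaultdict(set)
for a, b in [("Ana", "Ben"), ("Ana", "Cara"), ("Ben", "Dev"), ("Eli", "Fay")]:
    contacts[a].add(b)
    contacts[b].add(a)

confirmed = {"Ana"}                          # individuals who tested positive
to_notify = set()
for case in confirmed:
    to_notify |= contacts[case] - confirmed  # close contacts not already positive

print(sorted(to_notify))                     # ['Ben', 'Cara'] -> outreach and testing
```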

In the initial phase of the outbreak, the CTC comprised contact tracers and health care coordinators who spoke 23 different languages, including social workers, public health practitioners, nurses, and staff members from local board of health agencies with deep links to the communities they were helping. The CTC worked with 339 of the state's 351 municipalities; some local public health agencies relied completely on the CTC, whereas some cities and towns called on it only occasionally for backup. According to one report, CTC members performed up to 80 percent of contact tracing in hard-hit, resource-deprived communities such as New Bedford.

Putting COVID-19 in context

Based on generations of experience helping people survive some of the deadliest epidemic and endemic outbreaks in places like Haiti, Mexico, Rwanda, and Peru, the staff knew that people in poor social and economic conditions have less space in which to quarantine, have more difficulty following other public health safety measures, and are among the people at highest risk in a pandemic.

Contact tracers reported that infected individuals, and individuals at risk of infection with SARS-CoV-2, had many questions about when to seek a doctor's help and where to get tested. People were worried about losing work for two weeks, and some immigrants, far from family and friends, worried about basic supplies.

The CTC team received more than 7,000 requests for social support assistance in the first three months. Staff members and contact tracers actively connected people in need with available resources and filled the gap when their own resources ran short.

Farmer said, “COVID is a misery-seeking missile that has targeted the most vulnerable.”

The reality that infected individuals worry about lacking basic household items, food, and access to childcare underscores the urgency of rudimentary social care and community support in fighting the pandemic. Farmer said that to break the chain of infection and reopen society, it is mandatory to meet all of people's elementary needs.

"What kinds of help are people asking for?" Farmer said, adding, "It's important to listen to what your patients are telling you."

An outbreak of care

After the launch of the Massachusetts CTC, PIH began receiving requests from around the country for help starting contact tracing programs. In May 2020, the organization announced the launch of a U.S. public health accompaniment unit to meet that need.

The unit has included team members in nearly two dozen state and municipal health departments across the country, working in collaboration with local organizations. PIH provided technical support on matters such as choosing and implementing tools and software for contact tracing. To spread awareness and new understanding more rapidly, a learning collaborative was established with more than 200 members from more than 100 organizations. The team worked to meet the needs of populations at higher risk of infection by advocating for a stronger, more reliable public health response.

The PIH public health team helped train contact tracers in the Navajo Nation and worked to strengthen the coordination between SARS-CoV-2 testing, prevention efforts, clinical care delivery, and social support in vulnerable communities around the U.S.

“For us to reopen our schools, our churches, our workplaces,” Mukherjee said, “we have to know where the virus is spreading so that we don’t just continue on this path.”

SOURCE:

https://hms.harvard.edu/news/fighting-chaos-care?utm_source=Silverpop&utm_medium=email&utm_term=field_news_item_1&utm_content=HMNews04052021

Other related articles were published in this Open Access Online Scientific Journal, including the following:

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

The WHO team is expected to soon publish a 300-page final report on its investigation, after scrapping plans for an interim report on the origins of SARS-CoV-2 — the new coronavirus responsible for killing 2.7 million people globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/27/the-who-team-is-expected-to-soon-publish-a-300-page-final-report-on-its-investigation-after-scrapping-plans-for-an-interim-report-on-the-origins-of-sars-cov-2-the-new-coronavirus-responsibl/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Mechanistic link between SARS-CoV-2 infection and increased risk of stroke using 3D printed models and human endothelial cells

Reporter: Adina Hazan, PhD

https://pharmaceuticalintelligence.com/2020/12/28/mechanistic-link-between-sars-cov-2-infection-and-increased-risk-of-stroke-using-3d-printed-models-and-human-endothelial-cells/

Artificial intelligence predicts the immunogenic landscape of SARS-CoV-2

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2021/02/04/artificial-intelligence-predicts-the-immunogenic-landscape-of-sars-cov-2/

Read Full Post »
