
From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Curator: Stephen J. Williams, PhD

Marc W. Kirschner*

Department of Systems Biology
Harvard Medical School

Boston, Massachusetts 02115

With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like spe­cies, arise by descent with modification, so in their ear­liest forms even the founders of great dynasties are only marginally different than their sister fields and spe­cies. It is only in retrospect that we can recognize the significant founding events. Before embarking on a def­inition of systems biology, it may be worth remember­ing that confusion and controversy surrounded the in­troduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retro­spect molecular biology was new and different. It intro­duced both new subject matter and new technological approaches, in addition to a new style.

As a point of departure for systems biology, consider the quintessential experiment in the founding of molec­ular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonle­thal mutation in these genes in a multicellular organ­ism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiologi­cal. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previ­ous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.

That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been per­fected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new con­tinent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the sim­plistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organ­isms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combina­tions, something that is not assured in chemistry. It also downplays the significant regulatory features that in­volve interactions between gene products, their local­ization, binding, posttranslational modification, degra­dation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the con­served genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different pheno­types in different tissues of metazoan organisms. These circuits may have certain robustness, but more impor­tant they have adaptability and versatility. 
The ease of putting conserved processes under regulatory control is an inherent design feature of the processes them­selves. Among other things it loads the deck in evolu­tionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.

Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the pheno­type. One aspect of systems biology is the develop­ment of techniques to examine broadly the level of pro­tein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.

High-throughput biology has opened up another im­portant area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and repro­ducible environment. The real world of ecology, evolu­tion, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later ex­tended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some ge­neticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and pro­tein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quan­titative effects, partially masked or accentuated by other genetic and environmental conditions. To under­stand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environ­mental variation.

Extracts and explants are relatively accessible to syn­thetic manipulation. Next there is the explicit recon­struction of circuits within cells or the deliberate modifi­cation of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of de­scribing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, pro­teins, cells in tissues, and whole organisms in their en­vironment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.

You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biologi­cal organization and processes in terms of the molecu­lar constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the impor­tance of defining a succession of physiological states in that process, and on evolutionary biology and ecol­ogy for the appreciation that all aspects of the organ­ism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology gen­erates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of sys­tems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.

Source: “Meaning of Systems Biology” Cell, Vol. 121, 503–504, May 20, 2005, DOI 10.1016/j.cell.2005.05.005

Old High-throughput Screening, Once the Gold Standard in Drug Development, Gets a Systems Biology Facelift

From Phenotypic Hit to Chemical Probe: Chemical Biology Approaches to Elucidate Small Molecule Action in Complex Biological Systems

Quentin T. L. Pasquer, Ioannis A. Tsakoumagkos and Sascha Hoogendoorn 

Molecules 2020, 25(23), 5702; https://doi.org/10.3390/molecules25235702

Abstract

Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.

5.1.5. Large-Scale Proteomics

While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced by both cytotoxic and non-cytotoxic compounds, which can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
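The underlying aggregation logic can be illustrated with a minimal sketch (a toy illustration, not the authors’ pipeline; all compound names, protein names, and numbers below are invented): for each drug, ask whether its annotated target ranks among the proteins with the largest abundance change.

```python
def target_among_top_changed(lfc_by_drug, drug_targets, top_n=5):
    """lfc_by_drug: {drug: {protein: median log2 fold-change across cell lines}}.
    drug_targets: {drug: annotated target protein}.
    Returns the fraction of drugs whose target is among the top_n proteins
    with the largest absolute abundance change (cf. the ~25% reported)."""
    n_hit = 0
    for drug, profile in lfc_by_drug.items():
        ranked = sorted(profile, key=lambda p: abs(profile[p]), reverse=True)
        if drug_targets[drug] in ranked[:top_n]:
            n_hit += 1
    return n_hit / len(lfc_by_drug)
```

With toy profiles for two hypothetical drugs, the function simply reports how often the nominal target dominates the proteomic response.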

All in all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.

5.2. Genetic Approaches

Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has long been realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights into the compound’s mode of action.

Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.

5.2.1. Resistance Cloning

The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.

In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a follow-up study, they optimized their pipeline “DrugTargetSeqR” by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].

While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutations in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274, an anticancer agent with an unknown target, and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutation rates [105,106].

When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].

While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.

5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens

When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ensure that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (e.g., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
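A minimal sketch of how such suppressor and synergistic interactions are scored (an illustrative toy calculation, assuming gene-level sgRNA read counts from treated and vehicle-control arms; real pipelines add replicate handling and statistics):

```python
import math

def interaction_scores(treated_counts, control_counts, pseudocount=1.0):
    """Per-gene chemogenetic interaction score: log2 ratio of normalized
    sgRNA read counts in drug-treated vs. vehicle-treated cells.
    Positive scores suggest suppressor interactions (knockout enriched
    under drug, i.e., resistance); negative scores suggest synergistic
    interactions (knockout depleted, i.e., hypersensitivity)."""
    t_total = sum(treated_counts.values())
    c_total = sum(control_counts.values())
    scores = {}
    for gene in treated_counts:
        t = (treated_counts[gene] + pseudocount) / t_total
        c = (control_counts.get(gene, 0) + pseudocount) / c_total
        scores[gene] = math.log2(t / c)
    return scores
```

Running the screen at a sublethal dose is what keeps both tails populated: at full lethality only resistant (positive-scoring) genes survive to be counted.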

An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high complexity, genome-wide shRNA library, with STF-118804 (4 rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].

The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].

Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haplo-insufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast which is performed in a ~50% protein background by heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, thereby showing that direct target identification through CRISPR-knockout screens is indeed possible [126].

An alternative strategy was employed by the Weissman lab, where they combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits that had opposite action in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. Next, they created chemical-genetic profiles of a variety of microtubule destabilizing agents, rationalizing that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs, based on the highest-ranking hits in the rigosertib genome-wide CRISPRi screen, and compared the focused screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
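The “compounds with the same target give similar drug-gene interaction profiles” step reduces to correlating score vectors across compounds. A hedged sketch (toy vectors, invented compound names; the published analysis uses full focused-screen data and clustering):

```python
import numpy as np

def profile_similarity(profiles):
    """profiles: {compound: list of gene-level interaction scores from a
    focused sgRNA screen, same gene order for every compound}.
    Returns compound names and their pairwise Pearson correlation matrix;
    compounds sharing a target are expected to correlate highly."""
    names = sorted(profiles)
    mat = np.vstack([np.asarray(profiles[n], dtype=float) for n in names])
    return names, np.corrcoef(mat)
```

Hierarchical clustering on (1 - correlation) would then group the query compound with reference agents of known mechanism, as rigosertib clustered with the colchicine-site binder.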

From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.

SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY

Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence

Youngjun Park, Dominik Heider and Anne-Christin Hauschild. Cancers 2021, 13(13), 3148; https://doi.org/10.3390/cancers13133148

Abstract

The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNNs) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.

1. Introduction

The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5].
Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.

Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets to solve various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research for decades. Additionally, there are also independent tumor datasets, and, frequently, they are analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques were applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)

Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).

Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16]. 

2. Systems Biology in Cancer Research

Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes were investigated, and, based on that, a pathway can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding the cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand heterogeneous characteristics of the tumor and cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].

In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.

2.1. Biological Network Analysis for Biomarker Validation

The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on the genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate such markers. Systems biology therefore offers in silico solutions to validate these findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61], and other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, other methods focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighborhoods [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked by an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among their neighbors according to their connectivity or distance from specific genes such as hubs [67,68].
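The gene-set over-representation idea behind GSA-style tools can be sketched with a plain hypergeometric tail test. This is a generic illustration, not the actual implementation behind DAVID or g:Profiler, and the gene counts in the toy call are invented:

```python
from math import comb

def overrepresentation_p(hits, gene_set_size, list_size, universe_size):
    """Hypergeometric upper tail P(X >= hits): the probability of drawing at
    least `hits` gene-set members in a list of `list_size` genes sampled
    without replacement from a universe of `universe_size` genes."""
    total = comb(universe_size, list_size)
    return sum(
        comb(gene_set_size, k) * comb(universe_size - gene_set_size, list_size - k)
        for k in range(hits, min(gene_set_size, list_size) + 1)
    ) / total

# Toy example: 8 of 20 deregulated genes fall into a 50-gene set,
# drawn from a 1000-gene universe (far more overlap than expected by chance).
p = overrepresentation_p(8, 50, 20, 1000)
```

In practice, the resulting p-values would also be corrected for multiple testing across all gene sets considered.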
During the past few decades, the focus of cancer systems biology has extended towards the analysis of cancer-related pathways, since pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks; at the same time, however, the limited coverage of pathway data needs to be considered. Because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a significant number of genes when incorporating pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans’, and Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers use gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods depend strongly on the quality of the external gene-set and pathway data [72].

2.2. De Novo Construction of Biological Networks

While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico construction and mining of biological networks contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps to discover gene modules with similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model that uses weighted correlation for network construction and has led the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, the integration of these two data types remains tricky: Ballouz et al. compared microarray- and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to finding disease-specific co-expression gene modules, and various studies based on TCGA cancer co-expression networks discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from gene co-expression networks when data from various conditions in the same organism are available.
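The weighted co-expression idea can be illustrated in a few lines of numpy. This is only a sketch of the soft-thresholding step, not the WGCNA package itself, and the expression matrix below is simulated (five co-regulated genes plus five unrelated ones):

```python
import numpy as np

def coexpression_adjacency(expr, power=6):
    """expr: genes x samples matrix. Returns a weighted adjacency in the
    spirit of WGCNA: |Pearson correlation| raised to a soft-thresholding
    power, with self-edges removed."""
    r = np.corrcoef(expr)           # gene-gene Pearson correlation matrix
    adj = np.abs(r) ** power        # soft threshold emphasizes strong links
    np.fill_diagonal(adj, 0.0)      # no self-edges
    return adj

# Simulate a 5-gene co-expression module plus 5 unrelated genes, 100 samples.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
expr = np.vstack([base + rng.normal(scale=0.1, size=100) for _ in range(5)]
                 + [rng.normal(size=100) for _ in range(5)])
adj = coexpression_adjacency(expr)
```

Module detection would then proceed by clustering this adjacency (e.g., hierarchical clustering of a topological-overlap matrix in WGCNA proper).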
Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or of different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylation, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on particular experimental setups; for instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].

2.3. Network Based Machine Learning

A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks became suited for integration into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches, according to their usage, into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering the values of nodes and edges as well as the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. In post-analysis integration, by contrast, omics data are processed on their own before being merged with a network and interpreted. The network-based model has advantages in multi-omics integrative analysis: due to the different sensitivity and coverage of various omics data types, such an analysis is challenging, but focusing on gene-level or protein-level information enables straightforward integration [86,87]. Consequently, when machine learning approaches try to integrate two or more data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and then integrate the heterogeneous data types [25,88].

In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.

3. Network-Based Learning in Cancer Research

As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.

3.1. Molecular Characterization with Network Information

Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet: the algorithm uses a heat-diffusion model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by the two genes is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
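Network propagation of this kind is often implemented as a random walk with restart; the following is a minimal sketch in that spirit, not the actual HotNet code, using a toy four-gene path network in which gene 0 carries the alteration:

```python
import numpy as np

def propagate(adj, seed_scores, restart=0.5, tol=1e-8):
    """Random walk with restart: iteratively diffuse seed scores (e.g.,
    mutation frequencies) over a column-normalized adjacency matrix until
    convergence, in the spirit of HotNet-style network propagation."""
    w = adj / adj.sum(axis=0, keepdims=True)   # column-normalize the network
    p = seed_scores.copy()
    while True:
        p_next = (1 - restart) * w @ p + restart * seed_scores
        if np.abs(p_next - p).max() < tol:
            return p_next
        p = p_next

# Toy path network 0 - 1 - 2 - 3; only gene 0 is altered.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
scores = propagate(adj, np.array([1.0, 0.0, 0.0, 0.0]))
```

The propagated score decays with network distance from the altered gene, which is exactly the behavior that lets such methods rank a mutated gene's neighbors.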

Furthermore, in transcriptome research, network information is used to measure pathway activity, with applications in cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50] and is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A benchmark study with pan-cancer data recently revealed that using network structure can improve performance [57]. In conclusion, although some information is lost due to the incompleteness of biological networks, their integration improved performance and increased interpretability in many cases.
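A deliberately simple form of pathway activity scoring, far simpler than PARADIGM, is the per-sample mean of z-scored member-gene expression; the gene names and pathway label below are purely illustrative:

```python
import numpy as np

def pathway_activity(expr, genes, pathways):
    """Per-sample pathway activity as the mean z-scored expression of each
    pathway's member genes. expr: genes x samples; pathways: name -> gene list.
    A minimal stand-in illustrating the idea of summarizing omics per pathway."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    idx = {g: i for i, g in enumerate(genes)}
    return {name: z[[idx[g] for g in members if g in idx]].mean(axis=0)
            for name, members in pathways.items()}

# Toy data: 3 genes, 4 samples; one illustrative two-gene "pathway".
genes = ["TP53", "MYC", "EGFR"]
expr = np.array([[1, 2, 3, 4],
                 [2, 4, 6, 8],
                 [5, 5, 5, 6]], dtype=float)
act = pathway_activity(expr, genes, {"toy_pathway": ["TP53", "MYC"]})
```

Genes absent from the gene list are silently skipped, which is where the coverage ('pathway orphan') issue discussed above shows up in practice.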

3.2. Tumor Heterogeneity Study with Network Information

Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto a pathway, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, such relationships between genes can be essential for an organism; therefore, models analyzing these alterations integrate network-based analysis [98].

In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, the detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor–immune reactions. For example, McGrail et al. identified a relationship between DNA damage response proteins and immune cell infiltration in cancer, based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model that, by accounting for tumor purity, mines subnetworks of genes beyond those involved in immune cell infiltration [102].

3.3. Drug Target Identification with Network Information

In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomic and chemical information, and used this drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods for studying drug targets and drug responses by integrating networks with chemical and multi-omics datasets. In a recent survey, Chen et al. compared 13 computational methods for drug response prediction and found that gene expression profiles are crucial information for drug response prediction [105].

Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models, aiming to discover potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and related drug datasets can be combined into an effective anticancer therapeutic strategy based on non-cancer drug repurposing.

4. Deep Learning in Cancer Research

DNN models have developed rapidly and become more sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale image and video data. While most datasets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models, which require large amounts of training data [108]. For instance, in 2019, Samiei et al. established TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer datasets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].

In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large amounts of biological data produced under various experimental setups (Figure 1); therefore, integrating GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of deep learning in bioinformatics research [115].

4.1. Challenges for Deep Learning in Cancer Research

Many studies in biology and medicine have used NGS and produced large amounts of data during the past few decades, moving the field into the big-data era. Nevertheless, researchers still face a lack of data, particularly when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling datasets with missing values [116] and has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, GAN algorithms have been receiving more attention in single-cell transcriptomics because they have been recognized as a complementary technique to overcome the limitations of scRNA-seq [123].
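As a minimal illustration of distance-based imputation (a generic sketch, not TDimpute or any published tool), missing values can be filled from the most similar samples; the matrix below is a toy samples x genes table:

```python
import numpy as np

def knn_impute(x, k=2):
    """Fill NaNs in a samples x genes matrix with the mean of the k most
    similar complete-at-those-genes samples, where similarity is Euclidean
    distance over the genes both samples observed."""
    x = x.copy()
    for i in np.where(np.isnan(x).any(axis=1))[0]:
        missing = np.isnan(x[i])
        dists = []
        for j in range(len(x)):
            if j == i:
                continue
            shared = ~np.isnan(x[i]) & ~np.isnan(x[j])
            # Candidate neighbors must observe the genes we need to fill.
            if shared.any() and not np.isnan(x[j][missing]).any():
                dists.append((np.linalg.norm(x[i, shared] - x[j, shared]), j))
        neighbors = [j for _, j in sorted(dists)[:k]]
        x[i, missing] = x[neighbors][:, missing].mean(axis=0)
    return x

# Sample 1 is missing its third gene; samples 0 and 3 are its nearest neighbors.
x = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, np.nan],
              [10.0, 10.0, 10.0],
              [1.1, 2.1, 3.1]])
out = knn_impute(x)
```

DNN-based imputers such as the ones cited above replace this neighbor-mean step with a learned, non-linear mapping across omics layers.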
In contrast to data imputation and generation, other machine learning approaches aim to cope with limited datasets in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but unrelated datasets and to guide the model toward a specific set of problems [124]. These approaches train models on data of similar characteristics and types that are nevertheless distinct from the problem set; after pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are introducing few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes from gene expression profiles [129].

Figure 3. (a) In various studies, NGS data are transformed into different forms: the 2-D transformed form is used for the convolution layer, and omics data are transformed to the pathway level, GO enrichment scores, or functional spectra. (b) Different DNN approaches to handling the lack of data: imputation for missing data in multi-omics datasets, GANs for data imputation and in silico data simulation, and transfer learning, which pre-trains the model on other datasets and fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples; a GCN is applied to aggregate neighbor information. (Created with BioRender.com).

4.2. Molecular Characterization with Network and DNN Models

DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier; they implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring gene mutation profiles forms the input, and the DNN aggregates the mutation information of the k-nearest neighboring genes [11]. Instead of embedding into a 2-D image, DeepCC transformed gene expression data into functional spectra; the resulting model was able to capture molecular characteristics by training on cancer subtypes [14].

Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model achieved better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].

4.3. Tumor Heterogeneity with Network and DNN Models

As described in Section 4.1, cancer heterogeneity, e.g., the tumor microenvironment, raises several issues; thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ‘Scaden’, a DNN model for the investigation of intratumor heterogeneity that deconvolves cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]; however, this information is typically not available. Recently, single-cell sequencing-based studies were conducted to tackle this problem. Because of technical limitations, single-cell sequencing data contain a large amount of missing values, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data, aiming at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser while embedding high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
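As a rough linear stand-in for the autoencoder embedding idea (a linear autoencoder with squared loss recovers the same subspace as truncated SVD/PCA), cells can be embedded into a low-dimensional latent space as follows; the two cell populations are simulated:

```python
import numpy as np

def embed_cells(expr, dim=2):
    """Embed cells (rows) into a dim-dimensional latent space via truncated
    SVD of the centered expression matrix. A linear sketch of the latent
    embedding that autoencoder models like scDeepCluster learn non-linearly."""
    centered = expr - expr.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, :dim] * s[:dim]

# Two simulated cell populations: population `a` over-expresses the
# first 25 of 50 genes; population `b` does not.
rng = np.random.default_rng(1)
a = rng.normal(0, 1, size=(20, 50)) + np.r_[np.ones(25) * 5, np.zeros(25)]
b = rng.normal(0, 1, size=(20, 50))
z = embed_cells(np.vstack([a, b]))
```

In the latent space the two populations separate cleanly along the first coordinate, which is the property clustering methods exploit downstream.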

4.4. Drug Target Identification with Networks and DNN Models

In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurpose non-oncology drugs for cancer treatment. Such drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial for overcoming the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders, using a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug-disease dataset [15].

The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets and other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-target DNN studies can show higher prediction accuracy than single-omics methods. MOLI, for example, integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].

4.5. Graph Neural Network Model

In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks (GNNs) use biological networks as the base structure for the learning network itself. For instance, various pathway or interactome information can be integrated as the learning structure of a DNN and aggregated as heterogeneous information. In a GNN, the convolution is performed on the provided network structure of the data; convolution on a biological network therefore allows the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates the information of neighboring genes and learns topological information (Figure 3d). Consequently, this model can aggregate information even from distant neighbors and can thus outperform other machine learning models [142].
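A single graph-convolution layer in the style popularized by Kipf and Welling can be written directly in numpy; this sketch shows the neighbor-aggregation step on a toy three-gene path network with made-up features and an identity weight matrix:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: add self-loops, symmetrically normalize
    the adjacency, aggregate neighbor features, apply a linear map and ReLU."""
    a_hat = adj + np.eye(len(adj))                      # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))      # D^{-1/2}
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0, a_norm @ features @ weights)  # ReLU activation

# Toy: 3 genes in a path network (0 - 1 - 2), 2 input features per gene.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.eye(2)          # identity weights: output shows pure aggregation
h = gcn_layer(adj, x, w)
```

Stacking such layers is what lets a GNN pull in information from successively more distant neighbors on the biological network.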

In the context of the gene expression inference problem, the main question is whether a gene's expression level can be explained by aggregating its neighboring genes. A single-gene inference study by Dutil et al. showed that a GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures with RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways and using them for subtype classification [147]. Furthermore, GNNs are receiving more attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c), and each of the proposed applications specializes in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also pointed out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. Third, integrating heterogeneous network information is challenging: drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve this integrative analysis. Lastly, although GNNs use graphs, their stacked layers still make the models hard to interpret [148].

4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge

The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, deep learning is hardly a panacea for all research questions. In the following, we discuss potential limitations of DNN models. In general, DNN models applied to NGS data have two significant issues: (i) data requirements and (ii) interpretability. Usually, deep learning needs a large amount of training data for reasonable performance, which is more difficult to achieve with biomedical omics data than with, for instance, image data. Today, there are not many NGS datasets that are well curated and annotated for deep learning, which may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes: the highly stacked layers make it hard to interpret a model's decision-making rationale. Although methodologies to understand and interpret deep learning models have improved, the ambiguity in DNN models' decision-making has hindered the transition from deep learning models to translational medicine [149,150].

As described before, biological networks are employed in various computational analyses for cancer research. The studies applying DNNs demonstrated many different approaches to using prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be shown. The LINCS program analyzed data of ‘The Connectivity Map (CMap) project’ to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels are inferable from only about 1000 genes, which they called the ‘landmark genes’. Subsequently, Chen et al. started with these 978 landmark genes and predicted the expression levels of the remaining genes with DNN models. Integrating public large-scale NGS data showed better performance than a linear regression model. The authors concluded that the performance advantage originates from the DNN's ability to model non-linear relationships between genes [153].
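The linear-regression baseline that the DNN was compared against can be sketched with ordinary least squares; the data below are simulated, with the target gene constructed as an exact linear mix of two hypothetical landmark genes (a DNN is only needed when this relationship is non-linear):

```python
import numpy as np

def fit_landmark_regression(landmark_expr, target_expr):
    """Least-squares baseline in the spirit of the LINCS setup: model a
    target gene's expression as a linear combination of landmark-gene
    expression plus an intercept."""
    x = np.hstack([landmark_expr, np.ones((len(landmark_expr), 1))])
    coef, *_ = np.linalg.lstsq(x, target_expr, rcond=None)
    return coef

def predict(landmark_expr, coef):
    x = np.hstack([landmark_expr, np.ones((len(landmark_expr), 1))])
    return x @ coef

# Toy data: target = 2*landmark_1 - landmark_2 + 0.5, no noise.
rng = np.random.default_rng(2)
lm = rng.normal(size=(100, 2))
target = 2.0 * lm[:, 0] - 1.0 * lm[:, 1] + 0.5
coef = fit_landmark_regression(lm, target)
pred = predict(lm, coef)
```

On noiseless linear data this baseline is exact; the DNN's advantage reported above appears precisely where gene-gene relationships deviate from this linear form.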

Following this study, Beltin et al. extensively investigated various biological networks in the same context of gene expression inference. They set up a simplified representation of gene expression status and solved a binary classification task. To show the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in this study, which incorporated TCGA and GTEx datasets, the random-network model outperformed the model built on a known biological network such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be seen as a panacea, and careful evaluation is required for each dataset and task. In particular, this result may not represent biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles because they consist of expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.

However, although sophisticated applications of deep learning have recently shown improved accuracy, this does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, an appropriate approach and specific deep learning algorithms need to be chosen. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, and a particular research question, the technology and network data have to be chosen carefully.


Use of Systems Biology in Anti-Microbial Drug Development

Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965

In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.

Genome Sequences and Proteomic Structural Databases

In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has led to essential genes of the bacteria being revealed and to a better understanding of the genetic diversity in different strains that might lead to a selective advantage (Coll et al., 2018). This will help with our understanding of the mode of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have structures determined experimentally.

Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.

There is a dearth of information related to structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the protein data bank (PDB). However, the high sequence similarity in protein coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability and impacts of mutations.

We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae. We first modeled the structure of this hetero-hexameric complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is an established drug for treating tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “computational saturation mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.


Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the ß-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which gradients from red (high destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the ß-subunit of RNA polymerase, the S437H substitution, resulted in the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and rifampin affinity to the ß-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen-bond interactions. Ring-ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky-blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).
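Computational saturation mutagenesis, as described, scores all 19 possible substitutions at every residue position and takes the most destabilizing value per position as the weighting factor for a map like panel (A). A minimal sketch of that bookkeeping (the sequence and the stability scores below are invented; a real analysis would obtain predictions from a tool such as mCSM):

```python
import numpy as np

rng = np.random.default_rng(2)
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
sequence = "MSKLF"  # invented toy fragment, not the actual ß-subunit sequence

# Hypothetical predicted stability changes (kcal/mol), one per substitution.
# Random numbers stand in for predictor output purely to show the procedure.
ddg = {}
for pos, wt in enumerate(sequence):
    for aa in AMINO_ACIDS:
        if aa != wt:
            ddg[(pos, wt, aa)] = rng.normal(loc=-0.5, scale=1.0)

# Per-position weighting factor: the most destabilizing (lowest) value among
# the 19 possible substitutions at that position.
worst = {}
for (pos, wt, aa), val in ddg.items():
    if pos not in worst or val < worst[pos][1]:
        worst[pos] = (f"{wt}{pos + 1}{aa}", val)

for pos in sorted(worst):
    mut, val = worst[pos]
    print(f"position {pos + 1}: most destabilizing {mut} ({val:+.2f} kcal/mol)")
```

The S437H example in panel (B) is exactly one such per-position minimum, picked out from the 19 candidate substitutions at residue 437.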

Examples of Understanding and Combatting Resistance

The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.


Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).

Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:

20th Anniversary and the Evolution of Computational Biology – International Society for Computational Biology

Featuring Computational and Systems Biology Program at Memorial Sloan Kettering Cancer Center, Sloan Kettering Institute (SKI), The Dana Pe’er Lab

Quantum Biology And Computational Medicine

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

The Vibrant Philly Biotech Scene: Focus on Computer-Aided Drug Design and Gfree Bio, LLC

Curator and Interviewer: Stephen J. Williams, Ph.D.

This post is the second in a series highlighting interviews with Philadelphia-area biotech startup CEOs, showing how a vibrant biotech startup scene is evolving in the city and across the Delaware Valley. Philadelphia has been home to some of the nation's oldest biotechs, including Cephalon and Centocor, and to hundreds of spinouts from a multitude of universities, as well as the first cloned animal (a frog), the first transgenic mouse, and Nobel laureates in molecular biology and genetics. Although recent disheartening news about Philadelphia's fall in the biotech-hub rankings and remarks by CEOs of former area companies have dominated the news, biotech incubators like the University City Science Center and the Bucks County Biotechnology Center, together with a reinvigorated investment community (such as PCCI and MABA), are bringing Philadelphia back. And although much work is needed to restore the Philadelphia area to its former glory (including political will at the state level), there are many bright spots, such as the innovative young companies outlined in these posts.

In today’s post, I had the opportunity to talk with molecular modeler Charles H. Reynolds, Ph.D., founder and CEO of Gfree Bio LLC, a computational structure-based design and modeling company based in the Pennsylvania Biotech Center of Bucks County. Gfree is one of a few molecular modeling companies at the center (I previously highlighted another, RAbD Biotech, which uses structural computational methods to design antibody therapeutics).

Below is the interview with Dr. Reynolds of Gfree Bio LLC and Leaders in Pharmaceutical Business Intelligence (LPBI):

LPBI: Could you briefly explain, for non-molecular modelers, your business and the advantages you offer over other molecular modeling programs (either academic programs or other biotech companies)? As big pharma outsources more are you finding that your company is filling a needed niche market?

GfreeBio: Gfree develops and deploys innovative computational solutions to accelerate drug discovery. We can offer academic labs a proven partner for developing SBIR/STTR proposals that include a computational or structure-based design component. This can be very helpful in developing a successful proposal. We also provide the same modeling and structure-based design input for small biotechs that do not have these capabilities internally. Working with Gfree is much more cost-effective than trying to develop these capabilities internally. We have helped several small biotechs in the Philadelphia region assess their modeling needs and apply computational tools to advance their discovery programs. (see publication and collaboration list here).

LPBI: Could you offer more information on the nature of your 2014 STTR award?

GfreeBio: Gfree has been involved in three successful SBIR/STTR awards in 2014.   I am the PI for an STTR with Professor Burgess of Texas A&M that is focused on new computational and synthetic approaches to designing inhibitors for protein-protein interactions. Gfree is also collaborating with the Wistar Institute and Phelix Therapeutics on two other Phase II proposals in the areas of oncology and infectious disease.

LPBI: Why did you choose the Bucks County Pennsylvania Biotechnology Center?

GfreeBio: I chose to locate my company at the Biotech Center because it is a regional hub for small biotech companies and it provides a range of shared resources that are very useful to the company. Many of my most valuable collaborations have resulted from contacts at the center.

LPBI: The Blumberg Institute and Natural Products Discovery Institute has acquired a massive phytochemical library. How does this resource benefit the present and future plans for GfreeBio?

GfreeBio: To date, Gfree Bio has not been an active collaborator with the Natural Products Institute, but I have a good relationship with the Director and that could change at any time.

LPBI: Did the state of Pennsylvania and local industry groups support GfreeBio’s move into the Doylestown incubator? Has the partnership with Ben Franklin Partners and the Center provided you with investment and partnership opportunities?

GfreeBio: Gfree Bio has not been actively seeking outside investors, at least to date. We have been focused on growing the company through collaborations and consulting relationships. However, we have benefitted from being part of the Keystone Innovation Zone, a state program that provides incentives for small technology-based businesses in Pennsylvania.

LPBI: You will be speaking at a conference in the UK on reinventing the drug discovery process through tighter collaborations between biotech, academia, and non-profit organizations.  How do you feel the Philadelphia area can increase this type of collaboration to enhance not only the goals and missions of nonprofits, invigorate the Pennsylvania biotech industry, but add much needed funding to the local academic organizations?

GfreeBio: I think this type of collaboration across sectors appears to be one of the most important emerging models for drug discovery.   The Philadelphia region has been in many ways hard hit by the shift of drug discovery from large vertically integrated pharmaceutical companies to smaller biotechs, since this area was at the very center of “Big Pharma.” But I think the region is bouncing back as it shifts more to being a center for biotech. The three ingredients for success in the new pharma model are great universities, a sizeable talent pool, and access to capital. The last item may be the biggest challenge locally. The KIZ program (Keystone Innovation Zone) is a good start, but the region and state could do more to help promote innovation and company creation. Some other states are being much more aggressive.

LPBI: In addition, the Pennsylvania Biotechnology Center in Bucks County appears to have this ecosystem: nonprofit organizations, biotechs, and academic researchers. Does this diversity of researchers/companies under one roof foster the type of collaboration needed, as will be discussed at the UK conference? Do you feel collaborations which are in close physical proximity are more effective and productive than a “virtual-style” (online) collaboration model? Could you comment on some of the collaborations GfreeBio is doing with other area biotechs and academics?

GfreeBio: I do think the “ecosystem” at the Pennsylvania Biotechnology Center is important in fostering new innovative companies. It promotes collaborations that might not happen otherwise, and I think close proximity is always a big plus. As I mentioned before, many of the current efforts of Gfree have come from contacts at the center.   This includes SBIR/STTR collaborations and contract work for local small biotech companies.

LPBI: Thompson Reuters just reported that China’s IQ (Innovation Quotient) has risen dramatically with the greatest patents for pharmaceuticals and compounds from natural products. Have you or your colleagues noticed more competition or business from Chinese pharmaceutical companies?

GfreeBio: The rise of Asia, particularly China, has been one of the most significant recent trends in the pharmaceutical industry. Initially, this was almost exclusively in the CRO space, but now China is aggressively building a fully integrated domestic pharmaceutical industry.

LPBI: How can the Philadelphia ecosystem work closer together to support greater innovation?

GfreeBio: A lot has happened in recent years to promote innovation and company creation in the region. There could always be more opportunities for networking and collaboration within the Philadelphia community. Of course the biggest obstacle in this business is often financing. Philadelphia needs more public and private sources for investment in startups.

LPBI: Thank you Dr. Reynolds.

Please look for future posts in this series on the Philly Biotech Scene on this site

Also, if you would like your Philadelphia biotech startup to be highlighted in this series please contact me: sjwilliamspa@comcast.net or @StephenJWillia2.
Our site is read by ~570,000 readers, among them thousands of international readers daily, and is followed by thousands of Twitter followers.

 

Other posts on this site in this VIBRANT PHILLY BIOTECH SCENE SERIES OR referring to PHILADELPHIA BIOTECH include:

RAbD Biotech Presents at 1st Pitch Life Sciences-Philadelphia

The Vibrant Philly Biotech Scene: Focus on Vaccines and Philimmune, LLC

What VCs Think about Your Pitch? Panel Summary of 1st Pitch Life Science Philly

1st Pitch Life Science- Philadelphia- What VCs Really Think of your Pitch

LytPhage Presents at 1st Pitch Life Sciences-Philadelphia

Hastke Inc. Presents at 1st Pitch Life Sciences-Philadelphia

PCCI’s 7th Annual Roundtable “Crowdfunding for Life Sciences: A Bridge Over Troubled Waters?” May 12 2014 Embassy Suites Hotel, Chesterbrook PA 6:00-9:30 PM

Pfizer Cambridge Collaborative Innovation Events: ‘The Role of Innovation Districts in Metropolitan Areas to Drive the Global an | Basecamp Business

Mapping the Universe of Pharmaceutical Business Intelligence: The Model developed by LPBI and the Model of Best Practices LLC

Read Full Post »

Preface to Metabolomics as a Discipline in Medicine

Author: Larry H. Bernstein, MD, FCAP

 

The family of ‘omics fields has rapidly outpaced its siblings over the decade since the completion of the Human Genome Project. It has derived much benefit from the development of proteomics, which has recently completed a first draft of the human proteome. Since genomics, transcriptomics, and proteomics have matured considerably, it has become apparent that the search for a driver or drivers of cellular signaling and metabolic pathways could not depend on a full clarity of the genome. There have been unresolved issues that are not solely comprehended from assumptions about mutations.

The most common diseases affecting mankind are derangements of metabolic pathways; they develop at specific ages, often in adulthood or in the geriatric period, and lie at the intersection of signaling pathways. Moreover, the organs involved and the systemic features are heavily influenced by physical activity, and by the air we breathe and the water we drink.

The emergence of the new science is also driven by a large body of work on protein structure, mechanisms of enzyme action, the modulation of gene expression, and the pH-dependent effects on protein binding and conformation. Beyond this, a significant portion of DNA has been designated as “dark matter”. It turns out to have enormous importance in gene regulation: although it does not encode proteins, it acts in a modulatory way through noncoding RNAs. Metabolomics is the comprehensive analysis of small-molecule metabolites. These might be substrates of sequenced enzyme reactions, or they might be the “inhibiting” RNAs just mentioned. In either case, they occur in the substructures of the cell called organelles, in the cytoplasm, and in the cytoskeleton.

The reactions are orchestrated, and the flow of metabolites through them can be modified by pH, temperature, membrane structural modifications, and modulators. Since most metabolites are generated by enzymatic proteins that result from gene expression, and metabolites give organisms their biochemical characteristics, the metabolome links genotype with phenotype.

Metabolomics is still developing, and its continued development has relied on two major advances. The first is chromatographic separation coupled with mass spectrometry (MS and MS/MS), together with advances in ultrasensitive fluorescence-based optical photonic methods; the second, equally crucial, is the development of computational biology. The continuation of this trend brings expectations of an impact on pharmaceutical and nutraceutical development, which will in turn have an impact on medical practice. What has lagged behind, and may continue to contribute to the lag, is the failure to develop a suitable electronic medical record to assist the physician confronted with so much as-yet hidden data, the ready availability of which could guide more effective diagnosis and management of the patient. Put all of this together, and we can meet serious challenges as the research community interprets and integrates the complex data it is acquiring.


Read Full Post »

RAbD Biotech Presents at 1st Pitch Life Sciences-Philadelphia-September 16, 2014

RAbD is a new biotechnology company founded by Fox Chase Cancer Center investigators Gregory Adams, Ph.D., Matthew Robinson, Ph.D., and Roland Dunbrack, Ph.D., that is focused on the knowledge-based design of antibodies that bind to key functional, often highly conserved and difficult-to-target epitopes. The company uses homology modeling, crystal structures, protein docking, and design software and algorithms to drive combinatorial sampling of CDRs, computationally designing new antibodies and then expressing, validating, and performing further design in an iterative manner. Brian Smith, Ph.D., MBA, is RAbD Biotech’s Business Development Lead.

Contact information for RAbD Biotech:

Website  http://rabdbiotech.com/

LinkedIn

Twitter @RAbDBiotech

The overall goal of RAbD is to

“drug the undruggable”

The company uses in silico design methods to produce novel antibodies and biomimetics. It is developing a first-in-class biomimetic, RaD-003, for the treatment of ovarian cancer. Ovarian cancer is one of the most deadly of all women’s cancers, with very low 5-year survival rates. An expected 22,000 US women will be diagnosed each year, and an expected 16,000 will die every year. Cisplatin/paclitaxel therapy is the only approved and effective chemotherapy for ovarian cancer, yet resistance develops quickly and is common. RaD-003 targets the MISII receptor (Mullerian Inhibiting Substance Type II Receptor), which is expressed on ovarian cancer cells but not on normal ovarian epithelium.

It has been shown that activation of this receptor by the Mullerian Inhibiting Substance (MIS) has antitumor activity in ovarian cancer.

The MISII receptor had been considered undruggable as

  • MIS is too expensive and difficult to produce
  • previous attempts to develop therapeutic antibodies to MISIIR have proven difficult

Therefore, the company used their computational platform to produce a “first in class” chimeric biomimetic to more effectively target and activate MISIIR.

For  more information about this meeting and the Mid-Atlantic Bioangels and 1st Pitch please see posting on this site

Read Full Post »

Metabolomic analysis of two leukemia cell lines. II.

Larry H. Bernstein, MD, FCAP, Reviewer and Curator

Leaders in Pharmaceutical Intelligence

 

In Part I of metabolomics of two leukemia cell lines, we have established a major premise for the study, an insight into the use of an experimental model, and some insight into questions raised.

I here return to examine these before pursuing more detail in the study.

Q1. What strong metabolic pathways come into focus in this study?

Answer – The aerobic and anaerobic glycolytic pathways, with a difference measured in the extent of participation of mitochondrial oxidative phosphorylation.

Q2. Would we expect to also gain insight into the effect, on balance, played by a suppressed ubiquitin pathway?

Answer – let’s look into this in Part II.

Q3. Would the synthesis of phospholipids and the maintenance of membrane structures, which require NADPH (in effect a reversal of the TCA cycle at a cost of delta G in catabolic energy), be consistent with an increased dependence on anaerobic glycolysis under unchecked replication?

Answer: Part II might show this, as the direction and the difference between the cell lines is consistent with a Warburg (Pasteur) effect.

Recall that the model is based on experimental results from lymphocytic leukemia cell lines in cell culture. The internal metabolic state is inferred from measurement of external metabolites.

The classification of the lymphocytic leukemias in humans is based on T-cell and B-cell lineages, but in practice uses cluster of differentiation (CD) markers on the cell surface for recognition. It is only a conjecture that if the cell lines were highly anaplastic, they might not be sustainable in cell culture in perpetuity.
The analogue of these cells that I would expect to see in humans is SLL, with the characteristic marker CD5; see http://www.pathologyoutlines.com/topic/lymphomaSLL.html

Micro description
=======================================================

● Effacement of nodal architecture by pale staining pseudofollicles or proliferation centers with ill-defined borders, containing small round mature lymphocytes, prolymphocytes (larger than small lymphocytes, abundant basophilic cytoplasm, prominent nucleoli), paraimmunoblasts (larger cells with distinct nucleoli) and many smudge cells
● Pseudofollicular centers are highlighted by decreasing light through the condenser at low power; cells have pale cytoplasm but resemble soccer balls or smudge cells on peripheral smear (cytoplasm is bubbly in mantle cell lymphoma); may have plasmacytoid features
● May have marginal zone, perifollicular or interfollicular patterns, but these cases also have proliferation centers (Mod Pathol 2000;13:1161)
● Interfollicular pattern: large, reactive germinal centers; resembles follicular lymphoma but germinal centers are bcl2 negative and tumor cells resemble SLL by morphology and immunostains
(Am J Clin Path 2000;114:41)
● Paraimmunoblastic variant: diffuse proliferation of paraimmunoblasts (normally just in pseudoproliferation centers); rare, <30 reported cases; usually multiple lymphadenopathies and rapid disease progression; case report in a 69-year-old man (Hum Pathol 2002;33:1145); consider as mantle cell lymphoma if t(11;14)(q13;q32) is present; may also represent CD5+ diffuse large B cell lymphoma
Bone marrow: small focal aggregates of variable size with irregular, poorly circumscribed outlines; lymphocytes are well differentiated, small, round with minimal atypia; may have foci of transformation; rarely has granulomas (J Clin Pathol 2005;58:815)
● Marrow infiltrative patterns are also described as diffuse (unmutated IgH genes, ZAP-70+, more aggressive), nodular (associated with IgH hypermutation, ZAP-70 negative) or mixed (variable mutation of IgH, variable ZAP-70, Hum Pathol 2006;37:1153)

 

Positive stains
=======================================================

● CD5, CD19, CD20 (dim), CD23, surface Ig light chain, surface IgM (dim)
● Also CD43, CD79a, CD79b (dim in 20%, Arch Pathol Lab Med 2003;127:561), bcl2
● Variable CD11c, FMC7 (42%)
Negative stains
=======================================================

● CD10, cyclin D1
Molecular
=======================================================

● Trisomy 12 (30%, associated with atypical CLL and CD79b), deletion 13q14 (25-50%),
deletion of 11q23 (worse prognosis, 10-20%)

 

Results

We set up a pipeline that could be used to infer intracellular metabolic states from semi-quantitative data regarding metabolites exchanged between cells and their environment.

Our pipeline combined the following four steps:

  1. data acquisition,
  2. data analysis,
  3. metabolic modeling, and
  4. experimental validation of the model predictions (Fig. 1A).
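The bridge from step 1 to step 3 is the conversion of measured spent-medium metabolite concentrations into exchange rates that can constrain a metabolic model. A minimal sketch of that conversion (the concentrations, incubation time, and cell density below are invented for illustration; the actual exo-metabolomic workflow of Paglia et al. is considerably more involved):

```python
def exchange_rates(medium, t_hours, cells_per_l):
    """Convert fresh vs. spent medium concentrations (mM) into per-cell
    exchange rates in fmol/cell/h; negative = uptake, positive = secretion."""
    rates = {}
    for met, (c_fresh, c_spent) in medium.items():
        delta_mmol_per_l = c_spent - c_fresh          # change over the culture period
        mol_per_cell_h = delta_mmol_per_l * 1e-3 / (cells_per_l * t_hours)
        rates[met] = mol_per_cell_h * 1e15            # mol -> fmol
    return rates

# Invented readout: (fresh mM, spent mM) after 48 h at 1e9 cells per liter.
medium = {"glucose": (11.0, 4.0), "lactate": (0.0, 12.5), "glutamine": (4.0, 1.5)}
rates = exchange_rates(medium, t_hours=48.0, cells_per_l=1.0e9)
for met, r in sorted(rates.items()):
    direction = "uptake" if r < 0 else "secretion"
    print(f"{met}: {r:+.1f} fmol/cell/h ({direction})")
```

Rates of this form (glucose uptake paired with lactate secretion, for example) are exactly the exchange-flux constraints that distinguish a glycolytic from a respiratory phenotype in the condition-specific models.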

We demonstrated the pipeline and its predictive potential

  • to predict metabolic alterations in diseases such as cancer
  • based on two lymphoblastic leukemia cell lines.

The resulting Molt-4 and CCRF-CEM condition-specific cell line models were able

  • to explain metabolite uptake and secretion
  • by predicting the distinct utilization of central metabolic pathways by the two cell lines.

Whereas the CCRF-CEM model

  • resembled more a glycolytic, commonly referred to as ‘Warburg’ phenotype,
  • our predictions suggested  a more respiratory phenotype for the Molt-4  model.

We found these predictions to be in agreement with measured gene expression differences

  • at key regulatory steps in the central metabolic pathways, and
  • they were also consistent with  data regarding the energy and redox states of the cells.

After a brief discussion of the data generation and analysis steps, the results

  • derived from model generation and analysis will be described in detail.

 

2.1 Pipeline for generation of condition-specific metabolic cell line models

2.1.1 Generation of experimental data

We monitored the growth and viability of the lymphoblastic leukemia cell lines in serum-free medium (File S2, Fig. S1). Multiple omics data sets were derived from these cells. Extracellular metabolomic (exo-metabolomic) data, comprising measurements of the metabolites in the spent medium of the cell cultures (Paglia et al. 2012a), were collected along with transcriptomic data, and these data sets were used to construct the models.

 

2.1.4 Condition-specific models for CCRF-CEM and Molt-4 cells

To determine whether we had obtained two distinct models, we evaluated the reactions, metabolites, and genes of the two models. Both the Molt-4 and CCRF-CEM models contained approximately half of the reactions and metabolites present in the global model (Fig. 1C), and they were very similar to each other in terms of their reactions, metabolites, and genes (File S1, Table S5A–C). The Molt-4 model contained seven reactions that were not present in the CCRF-CEM model (CoA biosynthesis pathway and exchange reactions). In contrast, the CCRF-CEM model contained 31 unique reactions (arginine and proline metabolism, vitamin B6 metabolism, fatty acid activation, transport, and exchange reactions). There were 2 and 15 unique metabolites in the Molt-4 and CCRF-CEM models, respectively (File S1, Table S5B). Approximately three quarters of the global model genes remained in the condition-specific cell line models (Fig. 1C). The Molt-4 model contained 15 unique genes, and the CCRF-CEM model had 4 unique genes (File S1, Table S5C).

Both models lacked NADH dehydrogenase (complex I of the electron transport chain, ETC), as determined by the absence of expression of a mandatory subunit (NDUFB3, Entrez Gene ID 4709). Instead, the ETC was fueled by FADH2 originating from succinate dehydrogenase and from fatty acid oxidation, which through flavoprotein electron transfer could contribute to the same ubiquinone pool as complex I and complex II (succinate dehydrogenase).

Despite their different in vitro growth rates (which differed by 11 %, see File S2, Fig. S1) and their differences in exo-metabolomic (Fig. 1B) and transcriptomic data, the internal networks were largely conserved in the two condition-specific cell line models.

 

2.1.5 Condition-specific cell line models predict distinct metabolic strategies

Despite the overall similarity of the metabolic models, differences in their cellular uptake and secretion patterns suggested distinct metabolic states in the two cell lines (Fig. 1B and see "Materials and methods" section for more detail). To interrogate the metabolic differences, we sampled the solution space of each model using an artificial centering hit-and-run (ACHR) sampler (Thiele et al. 2005). For this analysis, additional constraints were applied to emphasize the quantitative differences in commonly taken-up and secreted metabolites: the maximum possible uptake and secretion flux rates were reduced according to the measured relative differences between the cell lines (Fig. 1D, see "Materials and methods" section). We then plotted the number of sample points containing a particular flux rate for each reaction; the resulting binned histograms can be read as the probability that a particular reaction carries a certain flux value.
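The sampling-and-binning idea can be sketched on a toy two-reaction flux polytope. This is a minimal, illustrative hit-and-run sampler, not the ACHR implementation used in the study (which operates in the null space of the full stoichiometric matrix); the bounds and the coupling constraint are made-up numbers:

```python
import random

# Toy flux polytope: v = (v2, v3) with 0 <= v2, v3 <= 10 and a coupling
# constraint v2 + v3 <= 12 (a shared upstream uptake limit). Names and
# numbers are illustrative placeholders.

def inside(p):
    v2, v3 = p
    return 0 <= v2 <= 10 and 0 <= v3 <= 10 and v2 + v3 <= 12

def hit_and_run(n_samples, seed=0):
    rng = random.Random(seed)
    point = [1.0, 1.0]  # feasible starting point
    samples = []
    for _ in range(n_samples):
        d = [rng.gauss(0, 1), rng.gauss(0, 1)]  # random direction
        # Find how far we can move along +d and -d by bisection.
        def farthest(sign):
            lo, hi = 0.0, 100.0
            for _ in range(60):
                mid = (lo + hi) / 2
                q = [point[i] + sign * mid * d[i] for i in range(2)]
                if inside(q):
                    lo = mid
                else:
                    hi = mid
            return lo
        t = rng.uniform(-farthest(-1), farthest(+1))
        point = [point[i] + t * d[i] for i in range(2)]
        samples.append(tuple(point))
    return samples

def flux_histogram(samples, idx, n_bins=10, lo=0.0, hi=10.0):
    """Bin one reaction's sampled flux values (Fig. S2-style histogram)."""
    counts = [0] * n_bins
    for s in samples:
        b = min(int((s[idx] - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[b] += 1
    return counts

samples = hit_and_run(5000)
print(flux_histogram(samples, 0))
```

Dividing each bin count by the number of samples gives the empirical probability that the reaction carries a flux in that bin, which is how the histograms of the two models are compared.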

A comparison of the sample points obtained for the Molt-4 and CCRF-CEM models revealed a considerable shift in the distributions, suggesting a higher utilization of glycolysis by the CCRF-CEM model (File S2, Fig. S2). This result was further supported by differences in the medians calculated from the sampling points (File S1, Table S6). The shift persisted throughout all reactions of the pathway and was induced by the higher glucose uptake (35 %) from the extracellular medium in CCRF-CEM cells; the sampling median for glucose uptake was 34 % higher in the CCRF-CEM model than in the Molt-4 model (File S2, Fig. S2).

The usage of the TCA cycle was also distinct in the two condition-specific cell line models (Fig. 2); in particular, the models used succinate dehydrogenase differently (Figs. 2, 3). The Molt-4 model utilized the associated reaction to generate FADH2, whereas in the CCRF-CEM model the histogram was shifted in the opposite direction, toward the generation of succinate. Additionally, there was a higher efflux of citrate toward amino acid and lipid metabolism in the CCRF-CEM model (Fig. 2). There was also higher flux through anaplerotic and cataplerotic reactions in the CCRF-CEM model than in the Molt-4 model (Fig. 2); these reactions include

  1. the efflux of citrate through ATP-citrate lyase,
  2. uptake of glutamine,
  3. generation of glutamate from glutamine,
  4. transamination of pyruvate and glutamate to alanine and 2-oxoglutarate,
  5. secretion of nitrogen, and
  6. secretion of alanine.

The Molt-4 model showed higher utilization of oxidative phosphorylation (Fig. 3), supported by an elevated median flux through ATP synthase (36 %) and other enzymes that contributed to higher oxidative metabolism. The sampling analysis therefore revealed different usage of central metabolic pathways by the condition-specific models.

 

Fig. 2

Differences in the use of the TCA cycle by the CCRF-CEM model (red) and the Molt-4 model (blue). The table provides the median values of the sampling results. Negative values in the histograms and in the table describe reversible reactions with flux in the reverse direction. There are multiple reversible reactions for the transformation of (1) isocitrate and α-ketoglutarate, (2) malate and fumarate, and (3) succinyl-CoA and succinate. These reactions are unbounded, and therefore histograms are not shown. The details of participating cofactors have been removed.

atp ATP, cit citrate, adp ADP, pi phosphate, oaa oxaloacetate, accoa acetyl-CoA, coa coenzyme A, icit isocitrate, αkg α-ketoglutarate, succoa succinyl-CoA, succ succinate, fum fumarate, mal malate, oxa oxaloacetate, pyr pyruvate, lac lactate, ala alanine, gln glutamine, ETC electron transport chain.

 

Electronic supplementary material The online version of this article (http://dx.doi.org/10.1007/s11306-014-0721-3) contains supplementary material, which is available to authorized users.

  1. M. K. Aurich, G. Paglia, Ó. Rolfsson, S. Hrafnsdóttir, S. Magnúsdóttir, B. Ø. Palsson, R. M. T. Fleming, I. Thiele. Center for Systems Biology, University of Iceland, Reykjavik, Iceland
  2. M. K. Aurich, R. M. T. Fleming, I. Thiele (corresponding author). Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Campus Belval, Esch-sur-Alzette, Luxembourg. e-mail: ines.thiele@uni.lu
  3. M. Stefaniak. School of Health Science, Faculty of Food Science and Nutrition, University of Iceland, Reykjavik, Iceland
  4. B. Ø. Palsson. Department of Bioengineering, University of California San Diego, La Jolla, CA, USA

http://link.springer.com/static-content/images/404/art%253A10.1007%252Fs11306-014-0721-3/MediaObjects/11306_2014_721_Fig3_HTML.gif

 

Fig. 3

Fatty acid oxidation and ETC. Sampling reveals different utilization of oxidative phosphorylation by the generated models. Different distributions are observed for the CCRF-CEM model (red) and the Molt-4 model (blue); the Molt-4 model has a higher median flux through ETC complexes II–IV. The table provides the median values of the sampling results. Negative values in the histograms and in the table describe reversible reactions with flux in the reverse direction. Both models lack complex I of the ETC because of constraints arising from the mapping of transcriptomic data. Electron transfer flavoprotein and electron transfer flavoprotein-ubiquinone oxidoreductase also both carry higher flux in the Molt-4 model.

 

2.1.6 Experimental validation of energy and redox status of CCRF-CEM and Molt-4 cells

Cancer cells have to balance their needs for energy and biosynthetic precursors, and they have to maintain redox homeostasis to proliferate (Cairns et al. 2011). We conducted enzymatic assays of cell lysates to measure the levels and/or ratios of ATP, NADPH + NADP, NADH + NAD, and glutathione, and used these measurements to provide support for the in silico predicted metabolic differences (Fig. 4). Additionally, an oxygen radical absorbance capacity (ORAC) assay was used to evaluate the cellular antioxidant status (Fig. 4B).

The total concentrations of NADH + NAD, GSH + GSSG, NADPH + NADP, and ATP were higher in Molt-4 cells (Fig. 4A). The higher ATP concentration in Molt-4 cells could result either from high production rates or from intracellular accumulation connected to high or low reaction fluxes (Fig. 4A). Our simplified expectation that the oxidative Molt-4 cells would produce less ATP was contradicted by the higher ATP concentrations measured (Fig. 4L). We emphasize, however, that concentrations cannot be compared to flux values, since we are modeling at steady state. The NADH/NAD+ ratios of both cell lines were shifted toward NADH (Fig. 4D, E), but the shift toward NADH was more pronounced in CCRF-CEM cells (Fig. 4E), which matched our expectation based on the higher utilization of glycolysis and 2-oxoglutarate dehydrogenase in the CCRF-CEM model (Fig. 4L).

 

Fig. 4 (not shown)

A–K  Experimentally determined ATP, NADH + NAD, NADPH + NADP, and GSH + GSSG concentrations, and ROS detoxification in the CCRF-CEM and Molt-4 cells.

L Expectations for cellular energy and redox states. Expectations are based on predicted metabolic differences of the Molt-4 and CCRF-CEM models

2.1.7 Comparison of network utilization and alteration in gene expression

Assuming that differential expression of particular genes would cause reaction flux changes, we determined how the differences in gene expression between CCRF-CEM and Molt-4 cells compared to the flux differences observed in the models. Specifically, we checked whether the reactions associated with upregulated genes (significantly more expressed in CCRF-CEM cells than in Molt-4 cells) were indeed more utilized by the CCRF-CEM model, and whether downregulated genes were associated with reactions more utilized by the Molt-4 model. The set of downregulated genes was associated with 15 reactions, and the set of 49 upregulated genes was associated with 113 reactions in the models. Reactions were defined as differently utilized if the difference in flux exceeded 10 % (considering only non-loop reactions). Of the reactions associated with upregulated genes, 72.57 % were more utilized by the CCRF-CEM model and 2.65 % were more utilized by the Molt-4 model (File S1, Table S7). In contrast, all 15 reactions associated with the 12 downregulated genes were more utilized in the CCRF-CEM model (File S1, Table S8).

After this initial analysis, we approached the question from a different angle, asking whether the majority of the reactions associated with each individual gene upregulated in CCRF-CEM were more utilized by the CCRF-CEM model; this was the case for 77.55 % of the upregulated genes. The majority of reactions associated with two (16.67 %) downregulated genes were more utilized by the Molt-4 model. Taken together, our comparisons of the direction of gene expression with the fluxes of the two cancer cell line models confirmed that reactions associated with genes upregulated in CCRF-CEM cells were generally more utilized by the CCRF-CEM model.
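The 10 % utilization comparison described above can be sketched as follows. The gene-reaction mapping and median flux values here are invented for illustration and are not the actual Table S7/S8 data:

```python
# Hypothetical sampling medians (absolute flux) per reaction, per model,
# and a toy mapping from upregulated genes to their reactions.

def more_utilized(flux_a, flux_b, threshold=0.10):
    """True if model A's flux exceeds model B's by more than `threshold` (relative)."""
    ref = max(abs(flux_a), abs(flux_b))
    if ref == 0:
        return False
    return (abs(flux_a) - abs(flux_b)) / ref > threshold

median_ccrf = {"HEX1": 1.4, "PYK": 2.0, "PFK": 1.2, "ATPS4m": 0.9}
median_molt4 = {"HEX1": 1.0, "PYK": 1.5, "PFK": 1.1, "ATPS4m": 1.6}

# Reactions associated with genes upregulated in CCRF-CEM cells (illustrative).
upregulated_gene_reactions = {"PKM": ["PYK"], "HK1": ["HEX1"]}

hits, total = 0, 0
for gene, rxns in upregulated_gene_reactions.items():
    for r in rxns:
        total += 1
        if more_utilized(median_ccrf[r], median_molt4[r]):
            hits += 1
print(f"{hits}/{total} reactions of upregulated genes more utilized by CCRF-CEM")
```

The same helper, with the arguments swapped, tests whether reactions of downregulated genes are more utilized by the Molt-4 model.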

2.1.8 Accumulation of DEGs and AS genes at key metabolic steps

After we confirmed that most reactions associated with upregulated genes were more utilized by the CCRF-CEM model, we checked the locations of the DEGs within the network, paying special attention to the central metabolic pathways that we had found to be distinctively utilized by the two models. Several DEGs and AS events were associated with glycolysis, the ETC, pyruvate metabolism, and the PPP (Table 1).

 

Table 1

DEGs and AS events of central metabolic and cancer-related pathways

Full lists of DEGs and AS are provided in the supplementary material.

Upregulated: significantly more expressed in CCRF-CEM compared to Molt-4 cells

PPP pentose phosphate pathway, OxPhos oxidative phosphorylation, Glycolysis/gluconeo glycolysis/gluconeogenesis, Pyruvate met. pyruvate metabolism

Moreover, in glycolysis, the DEGs and/or AS genes were associated with all three rate-limiting steps, i.e., the steps mediated by hexokinase, pyruvate kinase, and phosphofructokinase. Of these key enzymes, hexokinase 1 (Entrez Gene ID: 3098) was alternatively spliced, and pyruvate kinase (PKM, Entrez Gene ID: 5315) was significantly more expressed in the CCRF-CEM cells (Table 1), in agreement with the higher in silico predicted flux. However, in contrast to the observed higher utilization of glycolysis in the CCRF-CEM model, the gene associated with the rate-limiting glycolysis step, phosphofructokinase (Entrez Gene ID: 5213), was significantly upregulated in Molt-4 cells relative to CCRF-CEM cells. This higher expression was detected for only a single isozyme, however; two of the three genes associated with phosphofructokinase were also subject to alternative splicing (Table 1). In addition to the key enzymes, fructose-bisphosphate aldolase (Entrez Gene ID: 230) was also significantly upregulated in Molt-4 cells relative to CCRF-CEM cells, in contrast to the predicted higher utilization of glycolysis in the CCRF-CEM model.

Additionally, glucose-6-phosphate dehydrogenase (G6PD), which catalyzes the first and committed step of the PPP, was an AS gene (Table 1). A second AS gene associated with the PPP, linked to the deoxyribokinase reaction, was RBKS (Entrez Gene ID: 64080). This gene is also associated with ribokinase, but ribokinase was removed because of the lack of ribose uptake or secretion. Single AS genes were associated with different complexes of the ETC (Table 1).

A literature query revealed that at least 13 genes associated with alternative splicing events had previously been mentioned in connection with both alternative splicing and cancer (File S1, Table S14), and 37 genes were associated with cancer, e.g., upregulated or downregulated at the level of mRNA or protein, or otherwise connected to cancer metabolism and signaling. One general observation was a surprising accumulation of metabolite transporters among the AS genes. Overall, the high incidence of differential gene expression events at metabolic control points increases the plausibility of the in silico predictions.

 

2.1.9 Single gene deletion

Analyses of essential genes in metabolic models have been used to predict candidate drug targets for cancer cells (Folger et al. 2011). Here, we conducted an in silico gene deletion study for all model genes to identify a unique set of knock-out (KO) genes for each condition-specific cell line model. The analysis yielded 63 shared lethal KO genes as well as distinct sets of KO genes for the CCRF-CEM model (11 genes) and the Molt-4 model (3 genes). Three of the unique CCRF-CEM KO genes were only present in the CCRF-CEM model (File S1, Table S9).
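A single-gene-deletion screen of this kind can be sketched with a toy network. The gene-protein-reaction (GPR) rules, gene names, and "biomass" requirement below are illustrative placeholders, not Recon 2 content; the real analysis tests growth by flux balance analysis rather than the simple reachability check used here:

```python
# A reaction is active if its GPR rule evaluates True given the set of
# non-deleted genes. Rules and gene names are hypothetical.
GPR = {
    "R_glycolysis": lambda g: "HK1" in g and ("PKM" in g or "PKLR" in g),
    "R_serine":     lambda g: "PHGDH" in g,
    "R_oxphos":     lambda g: "SDHA" in g and "ATP5A1" in g,
}

ALL_GENES = {"HK1", "PKM", "PKLR", "PHGDH", "SDHA", "ATP5A1"}

def grows(active):
    # Toy "biomass": at least one ATP-producing route plus serine biosynthesis.
    atp = "R_glycolysis" in active or "R_oxphos" in active
    return atp and "R_serine" in active

def lethal_knockouts():
    lethal = []
    for gene in sorted(ALL_GENES):
        remaining = ALL_GENES - {gene}
        active = {r for r, rule in GPR.items() if rule(remaining)}
        if not grows(active):
            lethal.append(gene)
    return lethal

print(lethal_knockouts())  # only the gene with no isozyme or bypass is lethal
```

In this toy example only the PHGDH deletion is lethal, because PKM has an isozyme (PKLR) and glycolysis and oxidative phosphorylation can substitute for one another, mirroring why unique lethal KO genes tend to sit on unbranched, model-specific routes.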

 

The essential genes for both models were then related to the cell-line-specific differences in metabolite uptake and secretion (Fig. 1B). The CCRF-CEM model needed to generate putrescine from ornithine (ORNDC, Entrez Gene ID: 4953) to subsequently produce 5-methylthioadenosine for secretion (Fig. 1B); S-adenosylmethioninamine, produced by adenosylmethionine decarboxylase (arginine and proline metabolism, associated with Entrez Gene ID: 262), is a substrate required for the generation of 5-methylthioadenosine. Another example of a KO gene connected to an enforced exchange reaction was glutamic-oxaloacetic transaminase 1 (GOT1, Entrez Gene ID: 2805): without GOT1, the CCRF-CEM model was forced to secrete 4-hydroxyphenylpyruvate (Fig. 1B), the second product of tyrosine transaminase, which is produced only by that enzyme.

One KO gene in the Molt-4 model (Entrez Gene ID: 26227) was associated with phosphoglycerate dehydrogenase (PGDH), which catalyzes the conversion of 3-phospho-D-glycerate to 3-phosphohydroxypyruvate while generating NADH from NAD+. This KO gene is particularly interesting, given the involvement of this reaction in a novel pathway for ATP generation in rapidly proliferating cells (Locasale et al. 2011; Vander Heiden 2011; Vazquez et al. 2011). Reactions associated with unique KO genes were in many cases utilized more by the model in which the gene KO was lethal, underlining the potential importance of these reactions for the models. Thus, single gene deletion provided unique sets of lethal genes that could be specifically targeted to kill these cells.

 

3 Discussion

In the current study, we explored the possibility of semi-quantitatively integrating metabolomic data with the human genome-scale reconstruction to facilitate analysis. By constructing condition-specific cell line models to provide a structured framework, we derived insights that could not have been obtained from data analysis alone. We derived condition-specific cell line models for CCRF-CEM and Molt-4 cells that were able to explain the observed exo-metabolomic differences (Fig. 1B). Despite the overall similarities between the models, the analysis revealed distinct usage of central metabolic pathways (Figs. 2, 3, 4), which we validated based on experimental data and differential gene expression. The additional data sufficiently supported the metabolic differences between the cell lines, providing confidence in the generated models and the model-based predictions. We used the validated models to predict unique sets of lethal genes to identify weak links in each model; these weak links may represent potential drug targets.

Integrating omics data with the human genome-scale reconstruction provides a structured framework (i.e., pathways) that is based on careful consideration of the available biochemical literature (Thiele and Palsson 2010). This network context can simplify omics data analysis, and it allows even non-experts in biochemistry to gain fast and comprehensive insights into the metabolic aspects of omics data sets.

Compared to transcriptomic data, methods for the integration and analysis of metabolomic data in the context of metabolic models are less well established, although this is an active field of research (Li et al. 2013; Paglia et al. 2012b). In contrast to other studies, our approach emphasizes the representation of experimental conditions rather than the reconstruction of a generic, cell-line-specific network, which would require the combination of data sets from many experimental conditions and extensive manual curation. Rather, our way of model construction allowed us to efficiently assess the metabolic characteristics of cells. Despite the fact that only a limited number of exchanged metabolites can be measured by available metabolomics platforms at a reasonable time scale, and that the pathways of some measured metabolites are still unknown (File S1, Tables S2–S3), our methods have the potential to reveal metabolic characteristics of cells that could be useful for biomedicine and personalized health.

The reasons why some cancers respond to certain treatments and not others remain unclear, and choosing a treatment for a specific patient is often difficult (Vander Heiden 2011). One potential application of our approach could be the characterization of cancer phenotypes to explore how cancer cells, or other cell types with particular metabolic characteristics, respond to drugs.

The generation of our condition-specific cell line models involved only limited manual curation, making this approach a fast way to place metabolomic data into a network context. Model building mainly involves the rigid reduction of metabolite exchanges to match the observed metabolite exchange pattern with as few additional metabolite exchanges as possible. It should be noted that this reduction determines which pathways can be utilized by the model.
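This exchange-reduction step can be sketched as follows, assuming the common FBA convention that a negative lower bound permits uptake and a positive upper bound permits secretion; the reaction identifiers and bound magnitudes are illustrative:

```python
# Close all exchange reactions except those observed experimentally,
# plus an assumed minimal medium. IDs and magnitudes are placeholders.

observed_uptake = {"EX_glc", "EX_gln"}
observed_secretion = {"EX_lac", "EX_ala"}
minimal_medium = {"EX_o2", "EX_pi", "EX_h2o"}  # always open in both directions

all_exchanges = {"EX_glc", "EX_gln", "EX_lac", "EX_ala",
                 "EX_o2", "EX_pi", "EX_h2o", "EX_fru", "EX_rib"}

bounds = {}
for ex in sorted(all_exchanges):
    lb = -1000.0 if ex in (observed_uptake | minimal_medium) else 0.0
    ub = 1000.0 if ex in (observed_secretion | minimal_medium) else 0.0
    bounds[ex] = (lb, ub)

print(bounds["EX_fru"])  # unobserved metabolite: closed in both directions
```

Because unobserved exchanges are closed in both directions, any pathway that depends on them is silently removed from the model's feasible space, which is exactly why the choice of measured metabolites determines which pathways the model can utilize.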

Our approach mostly conserved the internal network redundancy, although a more significant reduction may be achieved using different data. Generally, a trade-off exists between the reduction of the internal network and the increasing number of network gaps that need to be curated by using additional omics data, such as transcriptomics and proteomics. One way to prevent the emergence of network gaps would be to use mapping algorithms that conserve network functionality, such as GIMME (Becker and Palsson 2008). However, several additional methods exist for the integration of transcriptomic data (Blazier and Papin 2012), and which model-building method is best depends on the available data.

Interestingly, the lack of a significant contribution of our gene expression data to the reduction of network size suggests that transcriptomic data are not strictly necessary to identify distinct metabolic strategies; rather, the integration of exo-metabolomic data alone may provide sufficient insight. However, sampling of the cell line models constrained according to the exo-metabolomic profiles only, or increasing the cutoff for the generation of absent and present calls (p < 0.01), did not yield the same insights as presented herein (File S1, Table S18). Only recently did Gene Inactivation Moderated by Metabolism, Metabolomics and Expression (GIM(3)E) become available; it enforces a minimum turnover of detected metabolites based on intracellular metabolomics data as well as gene expression microarray data (Schmidt et al. 2013). In contrast to this approach, we emphasized the relative differences in the exo-metabolomic data of the two cell lines. GIM(3)E constitutes an alternative integration method for analyses that emphasize intracellular metabolomics data (Schmidt et al. 2013).

The metabolic differences predicted by the models are generally plausible. Cancers are known to be heterogeneous (Cairns et al. 2011), and the contribution of oxidative phosphorylation to cellular ATP production may vary (Zu and Guppy 2004). Moreover, leukemia cell lines have been shown to depend on glucose, glutamine, and fatty acids to varying extents to support proliferation; such dependence may cause the cells to adapt their metabolism to the environmental conditions (Suganuma et al. 2010).

In addition to identifying supporting data in the literature, we performed several analyses to validate the models and model predictions. Our expectations regarding the levels and ratios of metabolites relevant to the energy and redox state were largely met (Fig. 4L). The more pronounced shift of the NADH/NAD+ ratio toward NADH in the CCRF-CEM cells was in agreement with the predicted Warburg phenotype (Fig. 4), and the higher lactate secretion in the CCRF-CEM cells (File S2, Fig. S2) implies an increase in NADH relative to NAD+ (Chiarugi et al. 2012; Nikiforov et al. 2011), again matching the known Warburg phenotype.

ROS production is enhanced in certain types of cancer (Droge 2002; Ha et al. 2000), and the generation of ROS is thought to contribute to mutagenesis, tumor promotion, and tumor progression (Dreher and Junod 1996; Ha et al. 2000). However, decreased mitochondrial glucose oxidation and a transition to aerobic glycolysis protect cells against ROS damage during biosynthesis and cell division (Brand and Hermfisse 1997). The higher ROS detoxification capability of Molt-4 cells, in combination with the higher superoxide dismutase utilization by the Molt-4 model (Fig. 4), provided a consistent picture of the predicted respiratory phenotype (Fig. 4L).

The control of NADPH maintains the redox potential through GSH and protects against oxidative stress, yet changes in the NADPH ratio in response to oxidative damage are not well understood (Ogasawara et al. 2009). Under stress conditions, as assumed for Molt-4 cells, the NADPH/NADP+ ratio is expected to decrease because of the continuous reduction of GSSG (Fig. 4L), and this was confirmed in the Molt-4 cells (Fig. 4). The higher amounts of GSH found in Molt-4 cells in vitro may reflect an additional need for ROS scavengers because of a greater reliance on oxidative metabolism.

Cancer is related to metabolic reprogramming, which results from alterations of gene expression and the expression of specific isoforms or splice forms that support proliferation (Cortes-Cros et al. 2013; Marin-Hernandez et al. 2009).

The gene expression differences detected between the two cell lines in this study supported the existence of metabolic differences between them, particularly because key steps of the metabolic pathways central to cancer metabolism appeared to be differentially regulated (Table 1). A detailed analysis of the effects of these differences on the pathway fluxes exceeds the scope of this study, which was to demonstrate the potential of integrating exo-metabolomic data into the network context.

We found discrepancies between differential gene regulation and the flux differences between the two models, as well as in the utilization of AS gene-associated reactions. This is not surprising, since a detailed analysis of the system would be required to draw further conclusions about the impact that differential regulation or splicing might have on reaction flux, given that isozymes exist for many of the enzymes concerned, or that only one of multiple subunits of a protein complex was affected. Additionally, reaction fluxes are regulated by numerous post-translational factors, e.g., protein modification and inhibition by proteins or metabolites (Lenzen 2014), which are outside the scope of constraint-based steady-state modeling. Rather, the results of the presented approach demonstrate how the models can be used to generate informed hypotheses that can guide experimental work. The combination of our tailored metabolic models and differential gene expression analysis seems well suited to identifying the potential drivers of metabolic differences between cells. Such information could be valuable for drug discovery, especially when more peripheral metabolic pathways are considered. Statistical comparisons of gene expression data with sampling-derived flux data could be useful in future studies (Mardinoglu et al. 2013).

A single-gene-deletion analysis revealed that PGDH was a lethal KO gene for the Molt-4 model only. Differences in PGDH protein levels correspond to the amount of glycolytic carbon diverted into glycine biosynthesis. Rapidly proliferating cells may use an alternative glycolytic pathway for ATP generation, which may provide an advantage in the case of extensive oxidative phosphorylation and proliferation (Locasale et al. 2011; Vander Heiden 2011; Vazquez et al. 2011). For breast cancer cell lines, a variable dependency on the expression of PGDH has already been demonstrated (Locasale et al. 2011). This example of a unique KO gene demonstrates how in silico gene deletion in metabolomics-driven models can identify the metabolic pathways used by cancer cells, and this approach can provide valuable information for drug discovery.

In conclusion, our contextualization method produced metabolic models that agreed in many ways with the validation data sets. The analyses described in this study have great potential to reveal the mechanisms of metabolic reprogramming, not only in cancer cells but also in other cells affected by disease, and for drug discovery in general.

 

4.3 Analysis of the extracellular metabolome

Mass spectrometry analysis of the exo-metabolome was performed by
Metabolon®, Inc. (Durham, NC, USA) using a standardized analytical platform.
In total, 75 extracellular metabolites were detected in the initial data set for at
least 1 of the 2 cell lines (Paglia et al. 2012a). Of these metabolites, 15 were not
part of our global model and were discarded. Apart from being absent in our
global model, an independent search in HMDB (Wishart et al. 2013) revealed no
pathway information was available for most of these metabolites (File S1, Tables S2–S3).
It should be noted that some metabolites, e.g., N-acetylisoleucine, N-acetylmethionine, and pseudouridine, constitute protein and RNA degradation products, which were outside the scope of the metabolic network.

Thiamin (vitamin B1) was part of the minimal medium of essential compounds supplied to both models. Riboflavin (vitamin B2) and trehalose were excluded, since these compounds cannot be produced by human cells. Erythrose and fructose were also excluded. In contrast, 46 of the detected metabolites were part of the global model. The data set included two different time points, which allowed us to treat the increase or decrease of a metabolite signal between time points as evidence for uptake or secretion when the change was greater than 5 % relative to what was observed in the control (File S1, Tables S2–S3).

We found 12 metabolites that were taken up by both cell lines and
10 metabolites that were commonly secreted by both cell lines over
the course of the experiment.
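The two-time-point classification rule can be sketched as a small helper. The signal values and control correction below are illustrative; only the 5 % threshold is taken from the text:

```python
# Classify one exchanged metabolite from two-time-point exo-metabolomic
# signals, correcting for the change seen in the cell-free control medium.

def classify(sample_t0, sample_t1, control_t0, control_t1, threshold=0.05):
    """Return 'uptake', 'secretion', or 'unchanged' for one metabolite."""
    sample_change = (sample_t1 - sample_t0) / sample_t0
    control_change = (control_t1 - control_t0) / control_t0
    delta = sample_change - control_change  # change beyond the control
    if delta < -threshold:
        return "uptake"      # spent-medium signal dropped -> consumed
    if delta > threshold:
        return "secretion"   # signal rose -> released by the cells
    return "unchanged"

print(classify(100.0, 60.0, 100.0, 98.0))    # strong decrease vs control
print(classify(100.0, 150.0, 100.0, 101.0))  # strong increase vs control
print(classify(100.0, 103.0, 100.0, 100.0))  # within the 5 % threshold
```

Running this rule over all detected metabolites for each cell line yields the shared and unique uptake/secretion sets described above.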

Molt-4 cells took up three metabolites not taken up by CCRF-CEM cells and secreted one metabolite not secreted by CCRF-CEM cells. Two of the three uniquely taken-up metabolites were essential amino acids: valine and methionine. It is unlikely that these metabolites were truly not taken up by the CCRF-CEM cells, so the CCRF-CEM model was also allowed to take them up, and no quantitative constraints were applied to them in the sampling analysis.
CCRF-CEM cells had four uniquely taken-up and seven uniquely secreted metabolites (exchange not detected in Molt-4 cells).

 

4.4 Network refinement based on exo-metabolic data

Despite its comprehensiveness, the human metabolic reconstruction is not complete with respect to extracellular metabolite transporters (Sahoo et al. 2014; Thiele et al. 2013). Accordingly, we identified metabolite transport systems from the literature for metabolites that were already part of the global model but whose extracellular transport was not yet accounted for. Diffusion reactions were included whenever a respective transporter could not be identified.

In total, 34 reactions [11 exchange reactions, 16 transport reactions and 7 demand reactions
(File S1, Table S11)] were added to Recon 2 (Thiele et al. 2013), and 2 additional reactions
were added to the global model (File S1, Table S10).
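To make concrete what adding such reactions entails, here is a minimal sketch using hypothetical dictionary-based data structures (not the actual COBRA toolbox or Recon 2 API): connecting an extracellular metabolite typically requires an exchange reaction across the system boundary, a transport (or diffusion) reaction into the cytosol, and sometimes a demand reaction:

```python
# Each reaction is represented as a stoichiometry dict: metabolite -> coefficient.
# Negative coefficients are consumed, positive ones produced.

def add_extracellular_exchange(model, met):
    """Add exchange, transport, and demand reactions for metabolite `met`."""
    model[f"EX_{met}_e"] = {f"{met}_e": -1}                       # exchange: met_e <=>
    model[f"{met}_transport"] = {f"{met}_e": -1, f"{met}_c": 1}   # extracellular -> cytosol
    model[f"DM_{met}_c"] = {f"{met}_c": -1}                       # demand: met_c ->
    return model

model = {}
add_extracellular_exchange(model, "thm")  # thiamin, as an example
print(sorted(model))  # ['DM_thm_c', 'EX_thm_e', 'thm_transport']
```

The reaction-naming convention (EX_, DM_ prefixes, _e/_c compartment suffixes) mirrors the one commonly used in genome-scale reconstructions, but the data structures here are purely illustrative.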

4.5 Expression profiling

Molt-4 and CCRF-CEM cells were grown in advanced RPMI 1640 and 2 mM
GlutaMax, and the cells were resuspended in medium containing DMSO
(0.67 %) at a concentration of 5 × 10⁵ cells/mL. The cell suspension (2 mL)
was seeded in 12-well plates in triplicate. After 48 h of growth, the cells
were collected by centrifugation at 201×g for 5 min. Cell pellets were snap-frozen
in liquid N2 and kept frozen until RNA extraction and analysis by Aros
(Aarhus, Denmark).

4.6 Analysis of transcriptomic data

We used the Affymetrix GeneChip Human Exon 1.0 ST Array to measure whole
genome exon expression. We generated detection above background (DABG) calls
using ROOT (version 22) and the XPS package for R (version 11.1), with Robust
Multi-array Analysis summarization. Calls for data mapping were assigned based
on p < 0.05 as the cutoff probability to distinguish presence versus absence for
the 1,278 model genes (File S1, Table S12).
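The presence/absence mapping is a simple thresholding step on the DABG p-values. A minimal sketch (gene names and p-values below are invented for illustration):

```python
def dabg_calls(pvalues, alpha=0.05):
    """Map DABG p-values to present/absent calls for model genes.

    A p-value below alpha means the probe signal is distinguishable from
    background, so the gene is called 'present'.
    """
    return {gene: ("present" if p < alpha else "absent")
            for gene, p in pvalues.items()}

calls = dabg_calls({"HK1": 0.001, "PKLR": 0.40, "G6PD": 0.049})
print(calls)  # {'HK1': 'present', 'PKLR': 'absent', 'G6PD': 'present'}
```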

Differential gene expression and alternative splicing analyses were performed by
using AltAnalyse software (v2.02beta) with default options on the raw data files
(CEL files). The Homo sapiens Ensembl 65 database was used, probe set filtering
was kept as DABG p < 0.05, and non-log expression < 70 was used for
constitutive probe sets to determine gene expression levels. For the comparison,
CCRF-CEM was the experimental group and Molt-4 was the baseline group. The
set of DEGs between cell lines was identified based on a p < 0.05 FDR cutoff
(File S1, Table S13A–B). Alternative splicing analysis was performed on core probe sets
with a minimum alternative exon score of 2 and a maximum absolute gene
expression change of 3, because alternative splicing is a less critical factor among highly differentially expressed genes (File S1, Table S14). Gene expression data, complete lists of DABG p-values,
DEGs and alternative splicing events have been deposited in the Gene
Expression Omnibus (GEO) database (Accession number: GSE53123).

 

4.7 Deriving cell-type-specific subnetworks

Transcriptomic data were mapped to the model in a manual fashion (COBRA
function: deleteModelGenes). Specifically, reactions dependent on gene products
that were called as “absent” were constrained to zero, such that fluxes through
these reactions were disabled. Submodels were extracted based on the set of
reactions carrying flux (network pruning) by running fastFVA
(Gudmundsson and Thiele 2010) after mapping the metabolomic and
transcriptomic data using the COBRA toolbox (Schellenberger et al. 2011).
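The pruning logic can be sketched as follows, using toy data structures rather than the actual COBRA/fastFVA interfaces: reactions dependent on absent gene products are disabled, and any remaining reaction whose flux range collapses to zero is dropped from the submodel. The flux ranges below stand in for fastFVA output and are invented for illustration:

```python
def prune_model(reactions, absent_genes, flux_ranges):
    """Extract a cell-type-specific submodel.

    reactions:   {reaction_id: gene}   (simplified one-gene-per-reaction mapping)
    flux_ranges: {reaction_id: (min_flux, max_flux)} from a variability analysis
    """
    submodel = {}
    for rxn, gene in reactions.items():
        if gene in absent_genes:
            continue  # gene product absent -> flux constrained to zero, disabled
        lo, hi = flux_ranges[rxn]
        if lo == 0 and hi == 0:
            continue  # reaction cannot carry flux -> pruned from the network
        submodel[rxn] = gene
    return submodel

reactions = {"HEX1": "HK1", "PYK": "PKLR", "G6PDH": "G6PD"}
ranges = {"HEX1": (0.0, 10.0), "PYK": (0.0, 5.0), "G6PDH": (0.0, 0.0)}
print(sorted(prune_model(reactions, {"PKLR"}, ranges)))  # ['HEX1']
```

In the real pipeline the gene–reaction mapping is a Boolean rule (handled by deleteModelGenes) and the flux ranges come from fastFVA after the metabolomic and transcriptomic constraints are applied; this sketch only shows the filtering step.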

 

…..

 

Electronic supplementary material

Below is the link to the electronic supplementary material.

File S1. Supplementary material 1 (XLSX 915 kb)

File S2. Supplementary material 2 (DOCX 448 kb)

References

Antonucci, R., Pilloni, M. D., Atzori, L., & Fanos, V. (2012). Pharmaceutical research and metabolomics in the newborn. Journal of Maternal-Fetal and Neonatal Medicine, 25, 22–26.

Barrett, T., Troup, D. B., Wilhite, S. E., Ledoux, P., Evangelista, C., Kim, I. F., et al. (2011). NCBI GEO: archive for functional genomics data sets—10 years on. Nucleic Acids Research, 39, D1005–D1010.

Beck, M., Schmidt, A., Malmstroem, J., Claassen, M., Ori, A., Szymborska, A., et al. (2011). The quantitative proteome of a human cell line. Molecular Systems Biology, 7, 549.

Becker, S. A., & Palsson, B. O. (2008). Context-specific metabolic networks are consistent with experiments. PLoS Computational Biology, 4, e1000082.

Blazier, A. S., & Papin, J. A. (2012). Integration of expression data in genome-scale metabolic network reconstructions. Frontiers in Physiology, 3, 299.

Bordbar, A., Lewis, N. E., Schellenberger, J., Palsson, B. O., & Jamshidi, N. (2010). Insight into human alveolar macrophage and M. tuberculosis interactions via metabolic reconstructions. Molecular Systems Biology, 6, 422.

Bordbar, A., & Palsson, B. O. (2012). Using the reconstructed genome-scale human metabolic network to study physiology and pathology. Journal of Internal Medicine, 271, 131–141.

Brand, K. A., & Hermfisse, U. (1997). Aerobic glycolysis by proliferating cells: a protective strategy against reactive oxygen species. FASEB Journal, 11, 388–395.

Cairns, R. A., Harris, I. S., & Mak, T. W. (2011). Regulation of cancer cell metabolism. Nature Reviews Cancer, 11, 85–95.

Chance, B., Sies, H., & Boveris, A. (1979). Hydroperoxide metabolism in mammalian organs. Physiological Reviews, 59, 527–605.

Chapman, E. H., Kurec, A. S., & Davey, F. R. (1981). Cell volumes of normal and malignant mononuclear cells. Journal of Clinical Pathology, 34, 1083–1090.

Chiarugi, A., Dolle, C., Felici, R., & Ziegler, M. (2012). The NAD metabolome—a key determinant of cancer cell biology. Nature Reviews Cancer, 12, 741–752.

Cortes-Cros, M., Hemmerlin, C., Ferretti, S., Zhang, J., Gounarides, J. S., Yin, H., et al. (2013). M2 isoform of pyruvate kinase is dispensable for tumor maintenance and growth. Proceedings of the National Academy of Sciences of the United States of America, 110, 489–494.

Dreher, D., & Junod, A. F. (1996). Role of oxygen free radicals in cancer development. European Journal of Cancer, 32a, 30–38.

Droge, W. (2002). Free radicals in the physiological control of cell function. Physiological Reviews, 82, 47–95.

Duarte, N. C., Becker, S. A., Jamshidi, N., Thiele, I., Mo, M. L., Vo, T. D., et al. (2007). Global reconstruction of the human metabolic network based on genomic and bibliomic data. Proceedings of the National Academy of Sciences of the United States of America, 104, 1777–1782.

Durot, M., Bourguignon, P. Y., & Schachter, V. (2009). Genome-scale models of bacterial metabolism: Reconstruction and applications. FEMS Microbiology Reviews, 33, 164–190.

Fleming, R. M., Thiele, I., & Nasheuer, H. P. (2009). Quantitative assignment of reaction directionality in constraint-based models of metabolism: Application to Escherichia coli. Biophysical Chemistry, 145, 47–56.

Folger, O., Jerby, L., Frezza, C., Gottlieb, E., Ruppin, E., & Shlomi, T. (2011). Predicting selective drug targets in cancer through metabolic networks. Molecular Systems Biology, 7, 501.

Frezza, C., Zheng, L., Folger, O., Rajagopalan, K. N., MacKenzie, E. D., Jerby, L., et al. (2011). Haem oxygenase is synthetically lethal with the tumour suppressor fumarate hydratase. Nature, 477, 225–228.

Ganske, F., & Dell, E. J. (2006). ORAC assay on the FLUOstar OPTIMA to determine antioxidant capacity. BMG LABTECH.

Gudmundsson, S., & Thiele, I. (2010). Computationally efficient flux variability analysis. BMC Bioinformatics, 11, 489.

Ha, H. C., Thiagalingam, A., Nelkin, B. D., & Casero, R. A., Jr. (2000). Reactive oxygen species are critical for the growth and differentiation of medullary thyroid carcinoma cells. Clinical Cancer Research, 6, 3783–3787.

Hyduke, D. R., Lewis, N. E., & Palsson, B. O. (2013). Analysis of omics data with genome-scale models of metabolism. Molecular BioSystems, 9, 167–174.

Jerby, L., & Ruppin, E. (2012). Predicting drug targets and biomarkers of cancer via genome-scale metabolic modeling. Clinical Cancer Research, 18, 5572–5584.

Jerby, L., Shlomi, T., & Ruppin, E. (2010). Computational reconstruction of tissue-specific metabolic models: Application to human liver metabolism. Molecular Systems Biology, 6, 401.

Jerby, L., Wolf, L., Denkert, C., Stein, G. Y., Hilvo, M., Oresic, M., et al. (2012). Metabolic associations of reduced proliferation and oxidative stress in advanced breast cancer. Cancer Research, 72, 5712–5720.

Lenzen, S. (2014). A fresh view of glycolysis and glucokinase regulation: History and current status. Journal of Biological Chemistry, 289, 12189–12194.

Lewis, N. E., Nagarajan, H., & Palsson, B. O. (2012). Constraining the metabolic genotype–phenotype relationship using a phylogeny of in silico methods. Nature Reviews Microbiology, 10, 291–305.

Lewis, N. E., Schramm, G., Bordbar, A., Schellenberger, J., Andersen, M. P., Cheng, J. K., et al. (2010). Large-scale in silico modeling of metabolic interactions between cell types in the human brain. Nature Biotechnology, 28, 1279–1285.

Li, S., Park, Y., Duraisingham, S., Strobel, F. H., Khan, N., Soltow, Q. A., et al. (2013). Predicting network activity from high throughput metabolomics. PLoS Computational Biology, 9, e1003123.

Locasale, J. W., Grassian, A. R., Melman, T., Lyssiotis, C. A., Mattaini, K. R., Bass, A. J., et al. (2011). Phosphoglycerate dehydrogenase diverts glycolytic flux and contributes to oncogenesis. Nature Genetics, 43, 869–874.

Mardinoglu, A., Agren, R., Kampf, C., Asplund, A., Nookaew, I., Jacobson, P., et al. (2013). Integration of clinical data with a genome-scale metabolic model of the human adipocyte. Molecular Systems Biology, 9, 649.

Marin-Hernandez, A., Gallardo-Perez, J. C., Ralph, S. J., Rodriguez-Enriquez, S., & Moreno-Sanchez, R. (2009). HIF-1alpha modulates energy metabolism in cancer cells by inducing over-expression of specific glycolytic isoforms. Mini Reviews in Medicinal Chemistry, 9, 1084–1101.

Mir, M., Wang, Z., Shen, Z., Bednarz, M., Bashir, R., Golding, I., et al. (2011). Optical measurement of cycle-dependent cell growth. Proceedings of the National Academy of Sciences of the United States of America, 108, 13124–13129.

Mo, M. L., Palsson, B. O., & Herrgard, M. J. (2009). Connecting extracellular metabolomic measurements to intracellular flux states in yeast. BMC Systems Biology, 3, 37.

Nikiforov, A., Dolle, C., Niere, M., & Ziegler, M. (2011). Pathways and subcellular compartmentation of NAD biosynthesis in human cells: From entry of extracellular precursors to mitochondrial NAD generation. The Journal of Biological Chemistry, 286, 21767–21778.

Ogasawara, Y., Funakoshi, M., & Ishii, K. (2009). Determination of reduced nicotinamide adenine dinucleotide phosphate concentration using high-performance liquid chromatography with fluorescence detection: Ratio of the reduced form as a biomarker of oxidative stress. Biological & Pharmaceutical Bulletin, 32, 1819–1823.

Paglia, G., Hrafnsdottir, S., Magnusdottir, M., Fleming, R. M., Thorlacius, S., Palsson, B. O., et al. (2012a). Monitoring metabolites consumption and secretion in cultured cells using ultra-performance liquid chromatography quadrupole-time of flight mass spectrometry (UPLC-Q-ToF-MS). Analytical and Bioanalytical Chemistry, 402, 1183–1198.

Paglia, G., Palsson, B. O., & Sigurjonsson, O. E. (2012b). Systems biology of stored blood cells: Can it help to extend the expiration date? Journal of Proteomics, 76, 163–167.

Price, N. D., Schellenberger, J., & Palsson, B. O. (2004). Uniform sampling of steady-state flux spaces: Means to design experiments and to interpret enzymopathies. Biophysical Journal, 87, 2172–2186.

Reed, J. L., Famili, I., Thiele, I., & Palsson, B. O. (2006). Towards multidimensional genome annotation. Nature Reviews Genetics, 7, 130–141.

Sahoo, S., Aurich, M. K., Jonsson, J. J., & Thiele, I. (2014). Membrane transporters in a human genome-scale metabolic knowledgebase and their implications for disease. Frontiers in Physiology, 5, 91.

Sahoo, S., & Thiele, I. (2013). Predicting the impact of diet and enzymopathies on human small intestinal epithelial cells. Human Molecular Genetics, 22, 2705–2722.

Schellenberger, J., & Palsson, B. O. (2009). Use of randomized sampling for analysis of metabolic networks. The Journal of Biological Chemistry, 284, 5457–5461.

Schellenberger, J., Que, R., Fleming, R. M., Thiele, I., Orth, J. D., Feist, A. M., et al. (2011). Quantitative prediction of cellular metabolism with constraint-based models: The COBRA Toolbox v2.0. Nature Protocols, 6, 1290–1307.

Schmidt, B. J., Ebrahim, A., Metz, T. O., Adkins, J. N., Palsson, B. O., & Hyduke, D. R. (2013). GIM3E: Condition-specific models of cellular metabolism developed from metabolomics and expression data. Bioinformatics, 29, 2900–2908.

Suganuma, K., Miwa, H., Imai, N., Shikami, M., Gotou, M., Goto, M., et al. (2010). Energy metabolism of leukemia cells: Glycolysis versus oxidative phosphorylation. Leukemia & Lymphoma, 51, 2112–2119.

Thiele, I., & Palsson, B. O. (2010). A protocol for generating a high-quality genome-scale metabolic reconstruction. Nature Protocols, 5, 93–121.

Thiele, I., Price, N. D., Vo, T. D., & Palsson, B. O. (2005). Candidate metabolic network states in human mitochondria: Impact of diabetes, ischemia, and diet. The Journal of Biological Chemistry, 280, 11683–11695.

Thiele, I., Swainston, N., Fleming, R. M., Hoppe, A., Sahoo, S., Aurich, M. K., et al. (2013). A community-driven global reconstruction of human metabolism. Nature Biotechnology, 31, 419–425.

Uhlen, M., Oksvold, P., Fagerberg, L., Lundberg, E., Jonasson, K., Forsberg, M., et al. (2010). Towards a knowledge-based human protein atlas. Nature Biotechnology, 28, 1248–1250.

Vander Heiden, M. G. (2011). Targeting cancer metabolism: A therapeutic window opens. Nature Reviews Drug Discovery, 10, 671–684.

Vazquez, A., Markert, E. K., & Oltvai, Z. N. (2011). Serine biosynthesis with one carbon catabolism and the glycine cleavage system represents a novel pathway for ATP generation. PLoS ONE, 6, e25881.

Wishart, D. S., Jewison, T., Guo, A. C., Wilson, M., Knox, C., Liu, Y., et al. (2013). HMDB 3.0—The human metabolome database in 2013. Nucleic Acids Research, 41, D801–D807.

Zu, X. L., & Guppy, M. (2004). Cancer metabolism: Facts, fantasy, and fiction. Biochemical and Biophysical Research Communications, 313, 459–465.

 


Summary of Translational Medicine – e-Series A: Cardiovascular Diseases, Volume Four – Part 1


Author and Curator: Larry H Bernstein, MD, FCAP

and

Curator: Aviva Lev-Ari, PhD, RN

 

Part 1 of Volume 4 in the e-series A: Cardiovascular Diseases and Translational Medicine provides a foundation for grasping a rapidly developing scientific endeavor that is transcending laboratory hypothesis testing and providing guidelines to:

  • Target genomes and multiple nucleotide sequences involved in either coding or in regulation that might have an impact on complex diseases, not necessarily genetic in nature.
  • Target signaling pathways that are demonstrably maladjusted, activated or suppressed in many common and complex diseases, or in their progression.
  • Enable a reduction in failure due to toxicities in the later stages of clinical drug trials as a result of this science-based understanding.
  • Enable a reduction in complications from the improvement of mechanical devices that have already had an impact on the practice of interventional procedures in cardiology, cardiac surgery, and radiological imaging, as well as improving laboratory diagnostics at the molecular level.
  • Enable the discovery of new drugs in the continuing emergence of drug resistance.
  • Enable the construction of critical pathways and better guidelines for patient management based on population outcomes data, which will be critically dependent on computational methods and large databases.

What has been presented can be essentially viewed in the following Table:

 

Summary Table for TM – Part 1

 

 

 

There are some developments that deserve additional development:

1. The importance of mitochondrial function in cellular work (combustion) is understood, and impairments of function are identified in diseases of muscle, cardiac contraction, nerve conduction, ion transport, water balance, and the cytoskeleton – beyond the disordered metabolism in cancer. A more detailed explanation of the energetics, elucidated from the electron transport chain, might also be in order.

2. The processes that are enabling a more full application of technology to a host of problems in the environment we live in and in disease modification is growing rapidly, and will change the face of medicine and its allied health sciences.

 

Electron Transport and Bioenergetics

Deferred for metabolomics topic

Synthetic Biology

Introduction to Synthetic Biology and Metabolic Engineering

Kristala L. J. Prather: Part-1    <iBiology > iBioSeminars > Biophysics & Chemical Biology >

http://www.ibiology.org Lecturers generously donate their time to prepare these lectures. The project is funded by NSF and NIGMS, and is supported by the ASCB and HHMI.
Prather 1: Synthetic Biology and Metabolic Engineering (2/6/14)

Lecture Overview: In the first part of her lecture, Dr. Prather explains that synthetic biology involves applying engineering principles to biological systems to build “biological machines”. The key material in building these machines is synthetic DNA. Synthetic DNA can be added in different combinations to biological hosts, such as bacteria, turning them into chemical factories that can produce small molecules of choice. In Part 2, Prather describes how her lab used design principles to engineer E. coli that produce glucaric acid from glucose. Glucaric acid is not naturally produced in bacteria, so Prather and her colleagues “bioprospected” enzymes from other organisms and expressed them in E. coli to build the needed enzymatic pathway. Prather walks us through the many steps of optimizing the timing, localization and levels of enzyme expression to produce the greatest yield.

Speaker Bio: Kristala Jones Prather received her S.B. degree from the Massachusetts Institute of Technology and her PhD from the University of California, Berkeley, both in chemical engineering. Upon graduation, Prather joined the Merck Research Labs for four years before returning to academia. Prather is now an Associate Professor of Chemical Engineering at MIT and an investigator with the multi-university Synthetic Biology Engineering Research Center (SynBERC). Her lab designs and constructs novel synthetic pathways in microorganisms, converting them into tiny factories for the production of small molecules. Dr. Prather has received numerous awards both for her innovative research and for excellence in teaching.

VIEW VIDEOS

https://www.youtube.com/watch?feature=player_embedded&v=ndThuqVumAk

 

II. Regulatory Effects of Mammalian microRNAs

Calcium Cycling in Synthetic and Contractile Phasic or Tonic Vascular Smooth Muscle Cells

in INTECH: Current Basic and Pathological Approaches to the Function of Muscle Cells and Tissues – From Molecules to Humans
by Larissa Lipskaia, Isabelle Limon, Regis Bobe and Roger Hajjar
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/48240
1. Introduction
Calcium ions (Ca2+) are present in low concentrations in the cytosol (~100 nM) and in high concentrations (in the mM range) in both the extracellular medium and intracellular stores (mainly the sarco/endo/plasmic reticulum, SR). This differential allows calcium ions to act as a messenger carrying information as diverse as contraction, metabolism, apoptosis, proliferation and/or hypertrophic growth. The mechanisms responsible for generating a Ca2+ signal differ greatly from one cell type to another.
In the different types of vascular smooth muscle cells (VSMC), enormous variations exist with regard to the mechanisms responsible for generating the Ca2+ signal. In each VSMC phenotype (synthetic/proliferating and contractile [1], tonic or phasic), the Ca2+ signaling system is adapted to its particular function and reflects the specific patterns of expression and regulation of Ca2+-handling proteins.
For instance, in contractile VSMCs, the initiation of contractile events is driven by membrane depolarization, and the principal entry point for extracellular Ca2+ is the voltage-operated L-type calcium channel (LTCC). In contrast, in synthetic/proliferating VSMCs, the principal way in for extracellular Ca2+ is the store-operated calcium (SOC) channel.
Whatever the cell type, the calcium signal consists of elevations of cytosolic free calcium ions limited in time and space. The calcium pump, sarco/endoplasmic reticulum Ca2+ ATPase (SERCA), has a critical role in determining the frequency of SR Ca2+ release by uploading Ca2+ into the SR, thereby setting the sensitivity of the SR calcium channels, the ryanodine receptor (RyR) and the inositol trisphosphate receptor (IP3R).
Synthetic VSMCs have a fibroblast appearance, proliferate readily, and synthesize increased levels of various extracellular matrix components, particularly fibronectin, collagen types I and III, and tropoelastin [1].
Contractile VSMCs have a muscle-like or spindle-shaped appearance and well-developed contractile apparatus resulting from the expression and intracellular accumulation of thick and thin muscle filaments [1].


 

Figure 1. Schematic representation of Calcium Cycling in Contractile and Proliferating VSMCs.

Left panel: schematic representation of calcium cycling in quiescent/contractile VSMCs. The contractile response is initiated by extracellular Ca2+ influx due to activation of receptor-operated Ca2+ channels (through a phosphoinositol-coupled receptor) or of L-type calcium channels (through an increase in luminal pressure). A small increase of cytosolic Ca2+, due to IP3 binding to IP3R (a “puff”) or to RyR activation by LTCC- or ROC-dependent Ca2+ influx, leads to a large SR Ca2+ release through IP3R or RyR clusters (“Ca2+-induced Ca2+ release”). Cytosolic Ca2+ is then re-sequestered by SR calcium pumps (both SERCA2a and SERCA2b are expressed in quiescent VSMCs), maintaining the high concentration of Ca2+ in the SR and setting the sensitivity of RyR or IP3R for the next spike.
Contraction of VSMCs occurs during the oscillatory Ca2+ transient.
Middle panel: schematic representation of the atherosclerotic vessel wall. Contractile VSMCs are located in the media layer; synthetic VSMCs are located in the sub-endothelial intima.
Right panel: schematic representation of calcium cycling in synthetic/proliferating VSMCs. Agonist binding to a phosphoinositol-coupled receptor leads to the activation of IP3R, resulting in a large increase in cytosolic Ca2+, which is re-sequestered by SR calcium pumps (only SERCA2b, having low turnover and low affinity to Ca2+, is expressed). SR Ca2+ depletion leads to translocation of the SR Ca2+ sensor STIM1 towards the PM, resulting in extracellular Ca2+ influx through opening of the store-operated channel (CRAC). The resulting steady-state Ca2+ transient is critical for activation of proliferation-related transcription factors (NFAT).
Abbreviations: PLC – phospholipase C; PM – plasma membrane; PP2B – Ca2+/calmodulin-activated protein phosphatase 2B (calcineurin); ROC – receptor-operated channel; IP3 – inositol-1,4,5-trisphosphate; IP3R – inositol-1,4,5-trisphosphate receptor; RyR – ryanodine receptor; NFAT – nuclear factor of activated T-lymphocytes; VSMC – vascular smooth muscle cell; SERCA – sarco(endo)plasmic reticulum Ca2+ ATPase; SR – sarcoplasmic reticulum.

 

Time for New DNA Synthesis and Sequencing Cost Curves

By Rob Carlson

I’ll start with the productivity plot, as this one isn’t new. For a discussion of the substantial performance increase in sequencing compared to Moore’s Law, as well as the difficulty of finding this data, please see this post. If nothing else, keep two features of the plot in mind: 1) the consistency of the pace of Moore’s Law and 2) the inconsistent pace of sequencing productivity. Illumina appears to be the primary driver, and beneficiary, of improvements in productivity at the moment, especially if you are looking at share prices. It looks like the recently announced NextSeq and HiSeq instruments will provide substantially higher productivities (hand waving, I would say the next datum will come in another order of magnitude higher), but I think I need a bit more data before officially putting another point on the plot.

 

cost-of-oligo-and-gene-synthesis

Illumina’s instruments are now responsible for such a high percentage of sequencing output that the company is effectively setting prices for the entire industry. Illumina is being pushed by competition to increase performance, but this does not necessarily translate into lower prices. It doesn’t behoove Illumina to drop prices at this point, and we won’t see any substantial decrease until a serious competitor shows up and starts threatening Illumina’s market share. The absence of real competition is the primary reason sequencing prices have flattened out over the last couple of data points.

Note that the oligo prices above are for column-based synthesis, and that oligos synthesized on arrays are much less expensive. However, array synthesis comes with the usual caveat that the quality is generally lower, unless you are getting your DNA from Agilent, which probably means you are getting your dsDNA from Gen9.

Note also that the distinction between the price of oligos and the price of double-stranded sDNA is becoming less useful. Whether you are ordering from Life/Thermo or from your local academic facility, the cost of producing oligos is now, in most cases, independent of their length. That’s because the cost of capital (including rent, insurance, labor, etc) is now more significant than the cost of goods. Consequently, the price reflects the cost of capital rather than the cost of goods. Moreover, the cost of the columns, reagents, and shipping tubes is certainly more than the cost of the atoms in the sDNA you are ostensibly paying for. Once you get into longer oligos (substantially larger than 50-mers) this relationship breaks down and the sDNA is more expensive. But, at this point in time, most people aren’t going to use longer oligos to assemble genes unless they have a tricky job that doesn’t work using short oligos.
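A toy pricing model makes the point concrete: when a fixed cost-of-capital term dominates, the quoted price is nearly independent of oligo length, and only beyond roughly 50-mers does a per-base cost of goods take over. All numbers below are invented for illustration and do not reflect any vendor's actual pricing:

```python
def oligo_price(length, fixed_capital=8.0, per_base=0.02,
                long_threshold=50, long_per_base=0.50):
    """Toy oligo price in dollars: a fixed cost-of-capital term plus a small
    per-base cost of goods; above `long_threshold` bases, a much higher
    per-base cost applies."""
    if length <= long_threshold:
        return fixed_capital + per_base * length
    return (fixed_capital + per_base * long_threshold
            + long_per_base * (length - long_threshold))

# Price barely moves across short lengths, then climbs steeply for long oligos:
print(round(oligo_price(20), 2))   # 8.4
print(round(oligo_price(50), 2))   # 9.0
print(round(oligo_price(150), 2))  # 59.0
```

The near-flat region for short oligos is the "price reflects cost of capital" regime described above; the steep region is where synthesis chemistry, not overhead, dominates.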

Looking forward, I suspect oligos aren’t going to get much cheaper unless someone sorts out how to either 1) replace the requisite human labor and thereby reduce the cost of capital, or 2) finally replace the phosphoramidite chemistry that the industry relies upon.

IDT’s gBlocks come at prices that are constant across quite substantial ranges in length. Moreover, part of the decrease in price for these products is embedded in the fact that you are buying smaller chunks of DNA that you then must assemble and integrate into your organism of choice.

Someone who has purchased and assembled an absolutely enormous amount of sDNA over the last decade, suggested that if prices fell by another order of magnitude, he could switch completely to outsourced assembly. This is a potentially interesting “tipping point”. However, what this person really needs is sDNA integrated in a particular way into a particular genome operating in a particular host. The integration and testing of the new genome in the host organism is where most of the cost is. Given the wide variety of emerging applications, and the growing array of hosts/chassis, it isn’t clear that any given technology or firm will be able to provide arbitrary synthetic sequences incorporated into arbitrary hosts.


 

Startup to Strengthen Synthetic Biology and Regenerative Medicine Industries with Cutting Edge Cell Products

28 Nov 2013 | PR Web

Dr. Jon Rowley and Dr. Uplaksh Kumar, Co-Founders of RoosterBio, Inc., a newly formed biotech startup located in Frederick, are paving the way for even more innovation in the rapidly growing fields of Synthetic Biology and Regenerative Medicine. Synthetic Biology combines engineering principles with basic science to build biological products, including regenerative medicines and cellular therapies. Regenerative medicine is a broad definition for innovative medical therapies that will enable the body to repair, replace, restore and regenerate damaged or diseased cells, tissues and organs. Regenerative therapies that are in clinical trials today may enable repair of damaged heart muscle following heart attack, replacement of skin for burn victims, restoration of movement after spinal cord injury, regeneration of pancreatic tissue for insulin production in diabetics and provide new treatments for Parkinson’s and Alzheimer’s diseases, to name just a few applications.

While the potential of the field is promising, the pace of development has been slow. One main reason for this is that the living cells required for these therapies are cost-prohibitive and not supplied at volumes that support many research and product development efforts. RoosterBio will manufacture large quantities of standardized primary cells at high quality and low cost, which will quicken the pace of scientific discovery and translation to the clinic. “Our goal is to accelerate the development of products that incorporate living cells by providing abundant, affordable and high quality materials to researchers that are developing and commercializing these regenerative technologies,” says Dr. Rowley.

 

Life at the Speed of Light

http://kcpw.org/?powerpress_pinw=92027-podcast

NHMU Lecture featuring – J. Craig Venter, Ph.D.
Founder, Chairman, and CEO – J. Craig Venter Institute; Co-Founder and CEO, Synthetic Genomics Inc.

J. Craig Venter, Ph.D., is Founder, Chairman, and CEO of the J. Craig Venter Institute (JCVI), a not-for-profit research organization dedicated to human, microbial, plant, synthetic and environmental research. He is also Co-Founder and CEO of Synthetic Genomics Inc. (SGI), a privately-held company dedicated to commercializing genomic-driven solutions to address global needs.

In 1998, Dr. Venter founded Celera Genomics to sequence the human genome using new tools and techniques he and his team developed. This research culminated with the February 2001 publication of the human genome in the journal Science. Dr. Venter and his team at JCVI continue to blaze new trails in genomics. They have sequenced and created a bacterial cell constructed with synthetic DNA, putting humankind at the threshold of a new phase of biological research. Whereas we could previously only read the genetic code (sequencing genomes), we can now write the genetic code for designing new species.

The science of synthetic genomics will have a profound impact on society, including new methods for chemical and energy production, human health and medical advances, clean water, and new food and nutritional products. One of the most prolific scientists of the 21st century, known for his numerous pioneering advances in genomics, he guides us through this emerging field, detailing its origins, current challenges, and potential positive advances.

His work on synthetic biology truly embodies the theme of “pushing the boundaries of life.”  Essentially, Venter is seeking to “write the software of life” to create microbes designed by humans rather than only through evolution. The potential benefits and risks of this new technology are enormous. It also requires us to examine, both scientifically and philosophically, the question of “What is life?”

J Craig Venter wants to digitize DNA and transmit the signal to teleport organisms

http://pharmaceuticalintelligence.com/2013/11/01/j-craig-venter-wants-to-digitize-dna-and-transmit-the-signal-to-teleport-organisms/

2013 Genomics: The Era Beyond the Sequencing of the Human Genome: Francis Collins, Craig Venter, Eric Lander, et al.

http://pharmaceuticalintelligence.com/2013/02/11/2013-genomics-the-era-beyond-the-sequencing-human-genome-francis-collins-craig-venter-eric-lander-et-al/

Human Longevity Inc (HLI) – $70M in Financing of Venter’s New Integrative Omics and Clinical Bioinformatics

http://pharmaceuticalintelligence.com/2014/03/05/human-longevity-inc-hli-70m-in-financing-of-venters-new-integrative-omics-and-clinical-bioinformatics/

 

 

Where Will the Century of Biology Lead Us?

By Randall Mayes

A technology trend analyst offers an overview of synthetic biology, its potential applications, obstacles to its development, and prospects for public approval.

  • In addition to boosting the economy, synthetic biology projects currently in development could have profound implications for the future of manufacturing, sustainability, and medicine.
  • Before society can fully reap the benefits of synthetic biology, however, the field requires development and faces a series of hurdles in the process. Do researchers have the scientific know-how and technical capabilities to develop the field?

Biology + Engineering = Synthetic Biology

Bioengineers aim to build synthetic biological systems using compatible standardized parts that behave predictably. Bioengineers synthesize DNA parts—oligonucleotides of 50–100 bases—which form specialized components that ultimately make up a biological system. As biology becomes a true engineering discipline, bioengineers will create genomes using mass-produced modular units, much as the microelectronics and computer industries do.
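The parts-based idea can be sketched in a few lines of code. Everything here — the part names, the placeholder sequences, and the registry itself — is invented for illustration and does not come from any real parts registry:

```python
# Toy illustration of assembling a genetic construct from standardized parts.
# Part names and sequences are invented placeholders, not real registry entries.

PARTS = {
    "promoter":   "TTGACAGCTAGCTCAGTCCT",  # constitutive promoter (placeholder)
    "rbs":        "AGGAGG",                # ribosome binding site (placeholder)
    "cds":        "ATGAAAGGTACCTAA",       # short open reading frame (placeholder)
    "terminator": "TTCAGCCAAAAAACTTAAGA",  # transcription terminator (placeholder)
}

def assemble(*part_names):
    """Concatenate named parts into one construct, validating the DNA alphabet."""
    seq = "".join(PARTS[name] for name in part_names)
    assert set(seq) <= set("ACGT"), "non-DNA character in construct"
    return seq

construct = assemble("promoter", "rbs", "cds", "terminator")
print(len(construct))  # total length of the assembled construct
```

Real registries (for example, the BioBricks Registry of Standard Biological Parts) work on the same principle, but with curated sequences and assembly standards that also handle restriction sites and junction scars.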

Currently, bioengineering projects cost millions of dollars and take years to develop products. For synthetic biology to become a Schumpeterian revolution, smaller companies will need to be able to afford to use bioengineering concepts for industrial applications. This will require standardized and automated processes.

A major challenge to developing synthetic biology is the complexity of biological systems. When bioengineers assemble synthetic parts, they must prevent cross talk with signals in other biological pathways. Until researchers better understand these undesired interactions, which nature has already worked out how to avoid, applications such as gene therapy will have unwanted side effects. Scientists do not fully understand the effects of environmental and developmental interactions on gene expression. Currently, bioengineers must rely on repeated trial and error to create predictable systems.

As in physics, synthetic biology requires the ability to model systems and quantify relationships between variables at the molecular level.

The second major challenge to ensuring the success of synthetic biology is the development of enabling technologies. With genomes containing billions of nucleotides, this requires fast, powerful, and cost-efficient computers. Moore’s law, named for Intel co-founder Gordon Moore, posits that computing power progresses at a predictable rate, with the number of components in integrated circuits doubling approximately every two years until physical limits are reached. Since Moore’s prediction, computing power has increased at an exponential rate while prices have declined.

DNA sequencers and synthesizers are necessary to identify genes and make synthetic DNA sequences. Bioengineer Robert Carlson calculated that the capabilities of DNA sequencers and synthesizers have followed a pattern similar to computing. This pattern, referred to as the Carlson Curve, projects that scientists are approaching the ability to sequence a human genome for $1,000, perhaps by 2020. Carlson calculated that the costs of reading and writing new genes and genomes are falling by a factor of two every 18–24 months (see Carlson’s recent comment on the requirement to read and write under a variety of limiting conditions).
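The cost trend Carlson describes is simple exponential decay, which can be sketched as follows; the starting cost and halving period below are illustrative round numbers, not Carlson’s actual data:

```python
# Illustrative projection of per-genome cost under the Carlson-curve assumption
# that cost falls by a factor of two every 18-24 months. The $10M starting cost
# and 1.5-year halving period are round numbers chosen only for illustration.

def projected_cost(start_cost, years, halving_period_years):
    """Cost after `years`, halving every `halving_period_years`."""
    return start_cost / 2 ** (years / halving_period_years)

# From an assumed $10,000,000 per genome, halving every 1.5 years:
for year in range(0, 13, 3):
    print(year, round(projected_cost(10_000_000, year, 1.5)))
```

Ten halvings cut the cost by a factor of about 1,000, which is how a multi-million-dollar genome can plausibly approach the $1,000 mark within roughly a decade and a half under this assumption.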

Startup to Strengthen Synthetic Biology and Regenerative Medicine Industries with Cutting Edge Cell Products

http://pharmaceuticalintelligence.com/2013/11/28/startup-to-strengthen-synthetic-biology-and-regenerative-medicine-industries-with-cutting-edge-cell-products/

Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging

http://pharmaceuticalintelligence.com/2013/05/17/synthetic-biology-on-advanced-genome-interpretation-for-gene-variants-and-pathways-what-is-the-genetic-base-of-atherosclerosis-and-loss-of-arterial-elasticity-with-aging/

Synthesizing Synthetic Biology: PLOS Collections

http://pharmaceuticalintelligence.com/2012/08/17/synthesizing-synthetic-biology-plos-collections/

Capturing ten-color ultrasharp images of synthetic DNA structures resembling numerals 0 to 9

http://pharmaceuticalintelligence.com/2014/02/05/capturing-ten-color-ultrasharp-images-of-synthetic-dna-structures-resembling-numerals-0-to-9/

Silencing Cancers with Synthetic siRNAs

http://pharmaceuticalintelligence.com/2013/12/09/silencing-cancers-with-synthetic-sirnas/

Genomics Now—and Beyond the Bubble

Futurists have touted the twenty-first century as the century of biology based primarily on the promise of genomics. Medical researchers aim to use variations within genes as biomarkers for diseases, personalized treatments, and drug responses. Currently, we are experiencing a genomics bubble, but with advances in understanding biological complexity and the development of enabling technologies, synthetic biology is reviving optimism in many fields, particularly medicine.

BY MICHAEL BROOKS    17 APR, 2014     http://www.newstatesman.com/

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is The Secret Anarchy of Science.

The basic idea is that we take an organism – a bacterium, say – and re-engineer its genome so that it does something different. You might, for instance, make it ingest carbon dioxide from the atmosphere, process it and excrete crude oil.

That project is still under construction, but others, such as using synthesised DNA for data storage, have already been achieved. As evolution has proved, DNA is an extraordinarily stable medium that can preserve information for millions of years. In 2012, the Harvard geneticist George Church proved its potential by taking a book he had written, encoding it in a synthesised strand of DNA, and then making DNA sequencing machines read it back to him.
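The idea of DNA as a digital medium can be illustrated with a toy codec that packs two bits into each base. Note that this is a simplification for illustration only: Church’s actual scheme encoded one bit per base and split the data across many short, addressed fragments.

```python
# Toy DNA data-storage codec: two bits per base. A simplified illustration,
# not the encoding Church's group actually used.

B2D = {"00": "A", "01": "C", "10": "G", "11": "T"}
D2B = {v: k for k, v in B2D.items()}

def encode(text):
    """Map UTF-8 bytes to a DNA string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(B2D[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna):
    """Invert encode(): bases back to bits, bits back to UTF-8 text."""
    bits = "".join(D2B[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

dna = encode("Hi")
print(dna)          # "Hi" -> 16 bits -> 8 bases
print(decode(dna))  # round-trips back to "Hi"
```

Each byte becomes exactly four bases, so an entire book is only a few million bases — well within what synthesis and sequencing machines handle.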

When we first started achieving such things it was costly and time-consuming and demanded extraordinary resources, such as those available to the millionaire biologist Craig Venter. Venter’s team spent most of the past two decades and tens of millions of dollars creating the first artificial organism, nicknamed “Synthia”. Using computer programs and robots that process the necessary chemicals, the team rebuilt the genome of the bacterium Mycoplasma mycoides from scratch. They also inserted a few watermarks and puzzles into the DNA sequence, partly as an identifying measure for safety’s sake, but mostly as a publicity stunt.

What they didn’t do was redesign the genome to do anything interesting. When the synthetic genome was inserted into an eviscerated bacterial cell, the new organism behaved exactly the same as its natural counterpart. Nevertheless, the fact that Synthia was, as Venter put it at the press conference announcing the research in 2010, “the first self-replicating species we’ve had on the planet whose parent is a computer” made it a standout achievement.

Today, however, we have entered another era in synthetic biology and Venter faces stiff competition. The Steve Jobs to Venter’s Bill Gates is Jef Boeke, who researches yeast genetics at New York University.

Boeke wanted to redesign the yeast genome so that he could strip out various parts to see what they did. Because it took a private company a year to complete just a small part of the task, at a cost of $50,000, he realised he should go open-source. By teaching an undergraduate course on how to build a genome and teaming up with institutions all over the world, he has assembled a skilled workforce that, tinkering together, has made a synthetic chromosome for baker’s yeast.

 

Stepping into DIYbio and Synthetic Biology at ScienceHack

Posted April 22, 2014 by Heather McGaw and Kyrie Vala-Webb

We got a crash course on genetics and protein pathways, and then set out to design and build our own pathways using both the “Genomikon: Violacein Factory” kit and Synbiota platform. With Synbiota’s software, we dragged and dropped the enzymes to create the sequence that we were then going to build out. After a process of sketching ideas, mocking up pathways, and writing hypotheses, we were ready to start building!

The night stretched long, and at midnight we were forced to vacate the school. Not quite finished, we loaded our delicate bacteria, incubator, and boxes of gloves onto the bus and headed back to complete our bacterial transformation in one of our hotel rooms. Jammed in between the beds and the mini-fridge, we heat-shocked our bacteria in the hotel ice bucket. It was a surreal moment.

While waiting for our bacteria, we held an “unconference” where we explored bioethics, security and risk related to synthetic biology, 3D printing on Mars, patterns in juggling (with live demonstration!), and even did a Google Hangout with Rob Carlson. Every few hours, we would excitedly check in on our bacteria, looking for bacterial colonies and the purple hue characteristic of violacein.

Most impressive was the wildly successful and seamless integration of a diverse set of people: in a matter of hours, we were transformed from individual experts and practitioners in assorted fields into cohesive and passionate teams of DIY biologists and science hackers. The ability of everyone to connect and learn was a powerful experience, and over the course of just one weekend we were able to challenge each other and grow.

Returning to work on Monday, we were hungry for more. We wanted to find a way to bring the excitement and energy from the weekend into the studio and into the projects we’re working on. It struck us that there are strong parallels between design and DIYbio, and we knew there was an opportunity to bring some of the scientific approaches and curiosity into our studio.

 

 

Read Full Post »

Modeling Targeted Therapy

Reporter: Larry H. Bernstein, MD, FCAP
http://pharmaceuticalintelligence.com/2013/03/02/modeling-targeted-therapy/

Some Perspectives on Network Modeling in Therapeutic Target Prediction
R Albert, B DasGupta and N Mobasheri
Biomedical Engineering and Computational Biology Insights 2013; 5: 17–24. http://dx.doi.org/BECBI/Albert_DasGupta_Mobasheri
Key steps of a typical therapeutic target identification problem include synthesizing or inferring the complex network of interactions relevant to the disease, connecting this network to the disease-specific behavior, and predicting which components are key mediators of that behavior.
http://www.la-press.com/Some_Perspectives_on_Network_Modeling_in_Therapeutic_Target_Prediction/


Read Full Post »

Author and Reporter: Anamika Sarkar, Ph.D.

Nitric Oxide (NO) is highly regulated in the blood such that it can be released as a vasodilator when needed. The importance and pathways of NO have been nicely reviewed in “Discovery of NO and its effects of vascular biology”. Other good readings on the importance of NO are: a) regulation of glycolysis, b) NO in cardiovascular disease, c) NO and Immune responses Part I and Part II, and d) NO signaling pathways. The effects of NO in diseased states have been reviewed in “Crucial role of Nitric Oxide in Cancer” and “Nitric Oxide and Sepsis, Hemodynamic Collapse, and the Search for Therapeutic Options”. (Also, please see Sources for more articles on NO and its significance.)

Computational models are very efficient tools for understanding the complex reactions of NO under physiological conditions. Among these conditions, wall shear stress is one of the major factors, and it is reviewed in the article “Differential Distribution of Nitric Oxide – A 3-D Mathematical Model”.

Moreover, a decrease in the availability of NO can lead to many complications, such as pulmonary hypertension. Some identified causes of decreased NO are clinical hypertension, right ventricular overload (which can lead to heart failure), low levels of zinc, and high levels of cardiac necrosis.

Patients with sickle cell disease, a hereditary disorder, are also known to have decreased levels of NO, which can become physiologically challenging. In the USA alone, 90,000 people are affected by sickle cell disease.

Sickle cell disease involves breakage of the red blood cell (RBC) membrane and the resulting release of hemoglobin (Hb) into the blood plasma, a process known as hemolysis. The disease is caused by a single mutation in Hb that changes RBCs from a round shape to sickle or crescent shapes (Figure 1).


Figure 1 (A) shows normal red blood cells flowing freely through veins. The inset shows a cross-section of a normal red blood cell with normal hemoglobin. Figure 1 (B) shows abnormal, sickled red blood cells. The inset shows a cross-section of a sickle cell with long polymerized HbS strands stretching and distorting the cell shape. Image Source: http://en.wikipedia.org/wiki/Sickle-cell_disease

Sickle cell RBCs have a much shorter lifespan of 10-20 days, compared with the 100-120 day lifespan of normal RBCs. The shorter lifespan of sickle cell RBCs is compensated by bone marrow generation of new RBCs. However, new blood cell generation often cannot keep pace with the short lifespan of sickle cell RBCs, causing the pathological condition of anemia.

RBCs generally break down and release Hb into the blood plasma after they reach the end of their lifespan. Thus, in sickle cell disease, there is more cell free Hb than normal. Furthermore, NO is known to have a very high affinity for Hb, which is one of the ways free NO is regulated in blood. As a result, the presence of larger amounts of cell free Hb in sickle cell disease leads to less availability of NO.

However, a question remained: what is the quantitative relationship between cell free Hb and the depletion of NO? Deonikar and Kavdia (J. Appl. Physiol., 2012) addressed this question by developing a two-dimensional mathematical model of a single idealized arteriole, with different layers of blood vessels diffusing nutrients to tissue layers (Figure 2: Deonikar and Kavdia, Figure 1).


Figure 2: Cell free Hb in two-dimensional representations of blood vessels.

The authors used steady-state partial differential equations in a circular geometry to represent the diffusion of NO in blood and in tissues. They used first- and second-order biochemical reactions to represent the reactions between NO and RBCs and the NO autooxidation processes. Some of their reaction model parameters were obtained from the literature; the rest were fitted to published experimental results. The model and its parameters are explained in a previous paper by the same authors (Deonikar and Kavdia, Annals of Biomed., 2010). The authors found that the reaction rate between NO and RBCs is 0.2 x 10^5 M^-1 s^-1, rather than the 1.4 x 10^5 M^-1 s^-1 reported earlier by Butler et al., Biochim. Biophys. Acta, 1998.
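The qualitative effect of Hb scavenging can be sketched without the full two-dimensional PDE model. In a minimal well-mixed balance, a constant NO production rate Q is removed by first-order scavenging by free Hb, so at steady state [NO] = Q / (k_on · [Hb]). The production rate below is an assumed illustrative value; only the rate constant comes from the article:

```python
# Minimal well-mixed sketch of NO scavenging by cell-free Hb. This is NOT the
# Deonikar-Kavdia 2-D PDE model: it keeps only the zeroth-order balance in
# which production Q is consumed by first-order scavenging k_on*[Hb]*[NO].

K_ON = 0.2e5  # NO-Hb reaction rate, M^-1 s^-1 (fitted value quoted in the text)
Q = 1e-9      # NO production rate, M/s (illustrative assumption)

def steady_no(hb_molar):
    """Steady-state [NO] when scavenging by free Hb dominates NO removal."""
    return Q / (K_ON * hb_molar)

for hb_uM in (0.05, 0.1, 0.5):
    print(f"[Hb] = {hb_uM} uM -> [NO] = {steady_no(hb_uM * 1e-6):.2e} M")
```

In this crude sketch [NO] falls inversely with [Hb], so a tenfold rise in free Hb gives a tenfold drop in NO; the full two-dimensional model predicts a milder 3- to 7-fold drop, since diffusion and the other reaction terms also shape the NO profile.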

Their results show that even a small increase in cell free Hb, to 0.5 uM, can decrease NO concentrations approximately 3- to 7-fold (compare Fig. 1(b) and 1(d) of Deonikar and Kavdia, 2012, shown as Figure 2 of this article). Moreover, their mathematical analysis shows that increasing the diffusion resistance of NO from the vascular lumen to the cell free zone has no effect on NO distribution and concentration at the available levels of cell free Hb.

Deonikar and Kavdia’s mathematical model is a simple representation of the actual physiological scenario. Nevertheless, their results show that for sickle cell disease patients the decrease in bioavailable NO is attributable to cell free Hb, which is abundant in these patients, and that a small increase of 0.5 uM in cell free Hb can cause a large decrease in NO concentration.

These interesting insights from the model can aid further understanding of the physiological context, by replicating the experiments in vivo and then relating them to other conditions common in sickle cell disease patients, such as anemia and pulmonary hypertension. Furthermore, drugs could be targeted at decreasing cell free Hb to maintain the balance of available NO, which in turn may help with related conditions such as pulmonary hypertension in sickle cell disease patients.

Sources:

Deonikar and Kavdia (2012) :http://www.ncbi.nlm.nih.gov/pubmed/22223452

Previous model explaining mathematical representation and parameters used in the model :Deonikar and Kavdia, Annals of Biomed., 2010.

Previous paper stating reaction rate of Hb and NO: Butler et.al., Biochim. Biophys. Acta, 1998.

Causes of decrease in NO

Clinical Hypertension : http://www.ncbi.nlm.nih.gov/pubmed/11311074

Right ventricular overload : http://www.ncbi.nlm.nih.gov/pubmed/9559613

Low levels of zinc and high levels of cardiac necrosis : http://www.ncbi.nlm.nih.gov/pubmed/11243421

Sickle Cell Source:

http://en.wikipedia.org/wiki/Sickle-cell_disease

http://www.nhlbi.nih.gov/health/health-topics/topics/sca/

NO Source:

Differential Distribution of Nitric Oxide – A 3-D Mathematical Model:

Discovery of NO and its effects of vascular biology

Nitric Oxide has a ubiquitous role in the regulation of glycolysis -with a concomitant influence on mitochondrial function

Nitric oxide: role in Cardiovascular health and disease

NO signaling pathways

Nitric Oxide and Immune Responses: Part 1

Nitric Oxide and Immune Responses: Part 2

Statins’ Nonlipid Effects on Vascular Endothelium through eNOS Activation

http://pharmaceuticalintelligence.com/2012/10/08/statins-nonlipid-effects-on-vascular-endothelium-through-enos-activation/

Inhibition of ET-1, ETA and ETA-ETB, Induction of NO production, stimulation of eNOS and Treatment Regime with PPAR-gamma agonists (TZD): cEPCs Endogenous Augmentation for Cardiovascular Risk Reduction – A Bibliography

Nitric Oxide, Platelets, Endothelium and Hemostasis

Crucial role of Nitric Oxide in Cancer

The rationale and use of inhaled NO in Pulmonary Artery Hypertension and Right Sided Heart Failure

Nitric Oxide and Sepsis, Hemodynamic Collapse, and the Search for Therapeutic Options

NO Nutritional remedies for hypertension and atherosclerosis. It’s 12 am: do you know where your electrons are?

Clinical Trials Results for Endothelin System: Pathophysiological role in Chronic Heart Failure, Acute Coronary Syndromes and MI – Marker of Disease Severity or Genetic Determination?

Endothelial Function and Cardiovascular Disease

Interaction of Nitric Oxide and Prostacyclin in Vascular Endothelium

Endothelial Dysfunction, Diminished Availability of cEPCs,  Increasing  CVD Risk – Macrovascular Disease – Therapeutic Potential of cEPCs

Cardiovascular Disease (CVD) and the Role of agent alternatives in endothelial Nitric Oxide Synthase (eNOS) Activation and Nitric Oxide Production

 

Read Full Post »

Author and Reporter: Anamika Sarkar, Ph.D.

Early in September, Nature published 30 research papers on the results of the ambitious, and at one time considered risky, project named ENCODE (Encyclopedia of DNA Elements). The results of ENCODE revealed that 80% of the human genome is not “junk”, as thought before, but rather acts as regulatory domains for further signaling events.

When the human genome was first sequenced, more than a decade ago, scientists were surprised by the low ratio of gene-coding regions to the total number of bases in human DNA. Out of 3 billion bases in human DNA, scientists found only about 21,000 genes. This unexpected finding led to a few basic questions:

  • Why do humans have so many base pairs?
  • How can the highly regulated, complex behaviors of biochemical, cellular, and physiological processes be translated to regulation at the genetic level?
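The surprise behind these questions can be made concrete with a back-of-the-envelope calculation; the average coding length per gene used here is an assumed round figure, not a measured value:

```python
# Back-of-the-envelope estimate of the protein-coding fraction of the genome.
# The ~1.3 kb average coding sequence per gene is a rough assumed figure.

GENOME_BASES = 3_000_000_000
GENES = 21_000
AVG_CODING_BASES = 1_300   # assumed average coding length per gene

coding_fraction = GENES * AVG_CODING_BASES / GENOME_BASES
print(f"{coding_fraction:.1%}")   # on the order of 1% of the genome
```

Under this assumption, only about one percent of the 3 billion bases codes for protein, which is exactly why the remaining ~99% was long dismissed as “junk” and why ENCODE’s finding of widespread regulatory function was so striking.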

The ENCODE project results reveal how limited our knowledge of the human genome has been until now. They open up new ways of thinking about human DNA and its functional domains. They also bring huge challenges, for both experimental development and data-driven computational approaches, toward better understanding and application of these new findings.

To gain insight from large-scale data and to identify key players in a large pool of data, bioinformatics approaches will probably be the only way forward. This also means developing new algorithms capable of linking regulatory functions with gene regulation. Presently, most algorithms are targeted toward identifying genes and their connections in a linear fashion. However, regulatory domains and their functional activities may be nonlinear, something that will be revealed by many more experimental results in the coming years.

The functional characterization of the human genome will also lead to a better understanding of the genetic differences between normal states and disease states. Moreover, with proper identification of the functional characteristics of a particular gene’s regulation, drugs can be targeted with much more precision in the future. However, making a success of such a complicated problem will require visionary design and execution, with experimental and computational biology teams working together.

It is already well recognized that bioinformatics approaches can greatly help in identifying key players in the regulation of genes. However, it is often not easy to translate information at the genetic level directly to the cellular or physiological level. Some of the main reasons are: a) the complex cross talk between proteins that leads to intracellular signaling events, and b) the highly nonlinear information sharing among receptors and ligands in extracellular signaling processes. To achieve an efficient understanding of the functional characteristics of the non-coding regions of DNA in the context of gene regulation, an effort should be made to map the functional network of gene regulation onto the signaling pathways of protein networks. This will require the development of experimental as well as computational approaches that capture genetic and proteomic analyses together. Furthermore, for a better understanding of cellular and physiological decisions, the mapping between gene regulation and intracellular signaling pathways should be extended to dynamic analysis over time.

The extraordinary findings of the ENCODE project pose many challenges for the decade ahead, but they also provide answers to some basic questions that have haunted the scientific world for almost a decade.

Sources:

News and Views- ENCODE explained:  http://www.nature.com/nature/journal/v489/n7414/full/489052a.html

News and Analysis – ENCODE Project writes Eulogy for Junk DNA : http://www.sciencemag.org/content/337/6099/1159.summary?sid=835cf304-a61f-45d5-8d77-ad44b454e448

ENCODE Project (Nature Article): http://www.nature.com/nature/journal/v489/n7414/full/nature11247.html

 

Read Full Post »

Reported by: Dr. Venkat S. Karra, Ph.D.

“Comprehensive computer models of entire cells have the potential to advance our understanding of cellular function and, ultimately, to inform new approaches for the diagnosis and treatment of disease.” Not only does the model allow researchers to address questions that aren’t practical to examine otherwise, it represents a stepping-stone toward its use in bioengineering and medicine.

A team led by Stanford bioengineering Professor Markus Covert used data from more than 900 scientific papers to account for every molecular interaction that takes place in the life cycle of Mycoplasma genitalium. Mycoplasma genitalium is a humble parasitic bacterium, known mainly for showing up uninvited in human urogenital and respiratory tracts. But the pathogen also has the distinction of containing the smallest genome of any free-living organism – only 525 genes, as opposed to the 4,288 of E. coli, a more traditional laboratory bacterium.

“This is potentially the new Human Genome Project,” said Karr, a co-first author and Stanford biophysics graduate student. “It’s to understand biology generally.”

“It’s going to take a really large community effort to get close to a human model.”

This is a breakthrough effort for computational biology, the world’s first complete computer model of an organism. “This achievement demonstrates a transforming approach to answering questions about fundamental biological processes,” said James M. Anderson, director of the National Institutes of Health Division of Program Coordination, Planning, and Strategic Initiatives.

Study results were published by Stanford researchers in the journal Cell.

The research was partially funded by an NIH Director’s Pioneer Award from the National Institutes of Health Common Fund.

Source:

http://www.dddmag.com/news/2012/07/first-complete-computer-model-bacteria?et_cid=2783229&et_rid=45527476&linkid=http%3a%2f%2fwww.dddmag.com%2fnews%2f2012%2f07%2ffirst-complete-computer-model-bacteria

Read Full Post »