
Reporter: Stephen J. Williams, Ph.D.

From: Heidi L. Rehm et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. (2021): Cell Genomics, Volume 1, Issue 2.

Source: DOI: https://doi.org/10.1016/j.xgen.2021.100029

Highlights

  • Siloing genomic data in institutions/jurisdictions limits learning and knowledge
  • GA4GH policy frameworks enable responsible genomic data sharing
  • GA4GH technical standards ensure interoperability, broad access, and global benefits
  • Data sharing across research and healthcare will extend the potential of genomics

Summary

The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

In order for genomic and personalized medicine to come to fruition, it is imperative that data silos around the world are broken down, allowing international collaboration for the collection, storage, transfer, access, and analysis of molecular and health-related data.

We have discussed in numerous articles on this site the problems that data silos produce. By data silos we mean that not only data but also intellectual output are held behind physical, electronic, and intellectual walls, inaccessible to scientists who do not belong to a particular institution or collaborative network.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Standardization and harmonization of data are key to this effort to share electronic records. The EU has taken bold action in this matter. The following section is about the General Data Protection Regulation of the EU and can be found at the following link:

https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en

Fundamental rights

The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.

Protection of personal data

Legislation

The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU and regardless of where their data is processed.

The General Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.

The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.

The regulation entered into force on 24 May 2016 and has applied since 25 May 2018. More information for companies and individuals.

Information about the incorporation of the General Data Protection Regulation (GDPR) into the EEA Agreement.

EU Member States notifications to the European Commission under the GDPR

The Data Protection Law Enforcement Directive

Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.

The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.

The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.

The following paper by the organization The Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has over 2,000 subscribers in over 90 countries, encompassing over 60 organizations.

Enabling responsible genomic data sharing for the benefit of human health

The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.

GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.

From the article in Cell Genomics, “GA4GH: International policies and standards for data sharing across genomic research and healthcare”:

Source: Open Access. DOI: https://doi.org/10.1016/j.xgen.2021.100029

The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013, the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org). In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives (“Driver Projects”). We review how GA4GH is addressing the major areas in which genomics is currently deployed, including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.

GA4GH organization

GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.

Figure 1. Matrix structure of the Global Alliance for Genomics and Health


Box 1. GA4GH Work Stream focus areas

The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs listed in the web resources).

  • (1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country [9,10]
  • (2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge [18]
  • (3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
  • (4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
  • (5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
  • (6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR)
  • (7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data are made most impactful through rich metadata collected in a standardized way
  • (8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing

For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

UK Biobank Makes Available 200,000 whole genomes Open Access

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology


From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Curator: Stephen J. Williams, PhD

Marc W. Kirschner*

Department of Systems Biology
Harvard Medical School

Boston, Massachusetts 02115

With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.

As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.

That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.

Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.

High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.

Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.

You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.

Source: “The Meaning of Systems Biology” Cell, Vol. 121, 503–504, May 20, 2005, DOI 10.1016/j.cell.2005.05.005

Old High-throughput Screening, Once the Gold Standard in Drug Development, Gets a Systems Biology Facelift

From Phenotypic Hit to Chemical Probe: Chemical Biology Approaches to Elucidate Small Molecule Action in Complex Biological Systems

Quentin T. L. Pasquer, Ioannis A. Tsakoumagkos and Sascha Hoogendoorn 

Molecules 2020, 25(23), 5702; https://doi.org/10.3390/molecules25235702

Abstract

Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.

5.1.5. Large-Scale Proteomics

While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced by both cytotoxic and non-cytotoxic compounds, and that these changes can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
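To make the aggregation idea concrete, below is a minimal Python sketch of clustering compound-induced proteome profiles so that compounds with similar mechanisms of action group together. The input file name, matrix layout, and clustering threshold are assumptions for illustration, not details from the cited study.

    import pandas as pd
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # rows: proteins, columns: compounds; values: log2(treated / DMSO control)
    profiles = pd.read_csv("proteome_log2fc.csv", index_col=0)

    # correlation distance between compound profiles: compounds sharing a
    # target should induce similar abundance changes
    dist = pdist(profiles.T.values, metric="correlation")
    tree = linkage(dist, method="average")
    clusters = fcluster(tree, t=0.7, criterion="distance")

    for compound, cluster_id in zip(profiles.columns, clusters):
        print(f"{compound}\tMOA cluster {cluster_id}")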

All in all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and in the analysis of complex and extensive data sets.

5.2. Genetic Approaches

Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has long been realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights into the compound’s mode of action.

Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.

5.2.1. Resistance Cloning

The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.
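The downstream analysis logic is straightforward to sketch: genes mutated independently in several resistant clones are stronger target candidates than genes mutated in a single clone. The following minimal Python example uses made-up clone data to illustrate that counting step only; it is not the pipeline of any cited study.

    from collections import Counter

    # hypothetical per-clone sets of mutated genes from sequencing
    resistant_clones = {
        "clone_1": {"PLK1", "TP53", "KRAS"},
        "clone_2": {"PLK1", "MYC"},
        "clone_3": {"PLK1", "ABCB1"},  # efflux-pump upregulation would be a
                                       # generic resistance mechanism, not a target
    }

    counts = Counter(g for genes in resistant_clones.values() for g in genes)
    for gene, n in counts.most_common():
        if n > 1:  # recurrently mutated across independent clones
            print(f"{gene}: mutated in {n}/{len(resistant_clones)} clones -> candidate target")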

In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a follow-up study, they optimized their pipeline, “DrugTargetSeqR”, by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].

While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutations in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational neoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274 (an anticancer agent with an unknown target), and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].

When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor of this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of the guanine nucleotide exchange factor eIF2B [107].

While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.

5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens

When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ascertain that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (i.e., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
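As a rough sketch of how such a screen is scored, the Python snippet below computes gene-level log2 fold changes of sgRNA abundance between drug- and vehicle-treated populations; positive scores point to suppressor interactions and negative scores to sensitizing (synergistic) ones. Column names and the simple mean aggregation are assumptions; real pipelines (e.g., MAGeCK) use more rigorous statistics.

    import numpy as np
    import pandas as pd

    counts = pd.read_csv("sgrna_counts.csv")  # columns: sgRNA, gene, drug, control

    pseudo = 0.5  # pseudocount to avoid division by zero
    counts["lfc"] = np.log2((counts["drug"] + pseudo) / (counts["control"] + pseudo))

    # aggregate sgRNA-level fold changes into one score per gene
    gene_scores = counts.groupby("gene")["lfc"].mean().sort_values()

    print("Top sensitizers (synergistic):", gene_scores.head(5).to_dict())
    print("Top suppressors (resistance):", gene_scores.tail(5).to_dict())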

An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study by Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells stably transduced with a high-complexity, genome-wide shRNA library with STF-118804 (four rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].

The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].

Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haplo-insufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast, performed in a ~50% protein background through heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, thereby showing that direct target identification through CRISPR-knockout screens is indeed possible [126].

An alternative strategy was employed by the Weissman lab, who combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits that had opposite effects in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, rationalizing that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the rigosertib genome-wide CRISPRi screen, and compared the focused screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin—the same site as occupied by ABT-751 [127].
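The profile-matching step lends itself to a compact sketch: correlate the chemical-genetic profile of a query compound against reference compound profiles and rank the matches. The data layout below is hypothetical.

    import pandas as pd

    # rows: genes (sgRNA targets), columns: compounds; values: interaction scores
    profiles = pd.read_csv("chemgen_profiles.csv", index_col=0)

    query = "rigosertib"
    matches = profiles.corr()[query].drop(query).sort_values(ascending=False)
    print(matches.head(3))  # best-matching reference compounds share a likely target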

From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.

SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY

Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence

Youngjun Park, Dominik Heider and Anne-Christin Hauschild. Cancers 2021, 13(13), 3148; https://doi.org/10.3390/cancers13133148

Abstract

The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.

1. Introduction

The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and a systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.

Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets to solve various challenges of cancer research, molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research for more than a decade. Additionally, there are also independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques were applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)

Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).

Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16]. 

2. Systems Biology in Cancer Research

Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been concentrated into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes were investigated, and, based on that, a pathway can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, was developed to understand the heterogeneous characteristics of the tumor and cancer molecular subtyping. Due to its advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools for understanding complex diseases such as cancer [55,56,57].

In this section, we will discuss how two related research fields, systems biology and machine learning, can be integrated with three different approaches (see Figure 2): biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.

2.1. Biological Network Analysis for Biomarker Validation

The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Therefore, systems biology offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked with an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules in various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the lack of coverage in pathway data needs to be considered: because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a significant number of genes when incorporated with pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans.’ In this manner, Rahmati et al. introduced a possible solution to overcome the ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers consider gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
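For readers unfamiliar with gene set analysis, the core of a simple over-representation test can be written in a few lines: count the overlap between a candidate gene list and a pathway gene set, then ask how surprising that overlap is under a hypergeometric null. The universe size and gene sets below are illustrative only.

    from scipy.stats import hypergeom

    universe = 20000                      # assumed number of measured genes
    pathway = {"TP53", "MDM2", "CDKN1A", "ATM", "CHEK2"}
    candidates = {"TP53", "ATM", "BRCA1", "CHEK2", "EGFR", "MYC"}

    overlap = len(pathway & candidates)
    # survival function gives P(X >= overlap) under random sampling
    p_value = hypergeom.sf(overlap - 1, universe, len(pathway), len(candidates))
    print(f"overlap = {overlap}, p = {p_value:.2e}")

Note that full GSEA, as implemented in the cited tools, ranks all genes and computes a running enrichment statistic rather than a simple overlap test; the sketch above corresponds to the simpler GSA-style analysis.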

2.2. De Novo Construction of Biological Networks

While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and can guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico biological network building and mining contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules having similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction that led the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, integration of these two types of data remains tricky. Ballouz et al. compared microarray and NGS-based co-expression networks and found the existence of a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to finding disease-specific co-expression gene modules. Thus, various studies based on TCGA cancer co-expression networks discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from the gene co-expression network when various data from different conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or include different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylations, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on a particular experimental setup; for instance, functional genomics or CRISPR technology enables high-resolution regulatory networks in an organism [84]. Beyond gene co-expression or regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
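A minimal sketch of WGCNA-style network construction helps illustrate the idea: correlations between gene expression profiles are raised to a soft-thresholding power, yielding a weighted adjacency matrix whose connectivity (row sums) highlights hub genes. The input file and the power beta = 6 (a common default) are assumptions; the real WGCNA package additionally selects beta via a scale-free topology criterion and computes a topological overlap matrix.

    import numpy as np
    import pandas as pd

    expr = pd.read_csv("expression.csv", index_col=0)  # samples x genes

    corr = np.corrcoef(expr.values.T)                  # gene x gene correlation
    beta = 6
    adjacency = np.abs(corr) ** beta                   # unsigned weighted adjacency
    np.fill_diagonal(adjacency, 0)

    # connectivity (sum of edge weights) often highlights hub genes
    connectivity = pd.Series(adjacency.sum(axis=0), index=expr.columns)
    print(connectivity.sort_values(ascending=False).head(10))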

2.3. Network Based Machine Learning

A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these biological networks were suited to be integrated into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering both the values of nodes and edges and the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed on their own before integration with a network; subsequently, omics data and networks are merged and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of various omics data types, a multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when different machine learning approaches try to integrate two or more different data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and integrate heterogeneous data types there [25,88].

In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.

3. Network-Based Learning in Cancer Research

As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.

3.1. Molecular Characterization with Network Information

Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved relative to non-network models. A prominent example is HotNet. The algorithm uses a thermodynamics model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show an exclusive pattern if they complement each other and the function carried by those two genes is essential to an organism [91]. This unique exclusive pattern among genomic alterations is further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in model organisms of genetics, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
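Network propagation itself is simple to demonstrate. The sketch below runs a random walk with restart on a toy adjacency matrix so that a mutation score placed on one gene diffuses into its network neighborhood, which is the principle exploited by HotNet-like methods; the graph, restart probability, and iteration count are arbitrary choices for illustration.

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)   # toy gene network
    W = A / A.sum(axis=0, keepdims=True)        # column-normalized transition matrix

    p0 = np.array([1.0, 0.0, 0.0, 0.0])         # initial heat: gene 0 is mutated
    restart = 0.5                               # restart probability

    p = p0.copy()
    for _ in range(100):                        # iterate to (approximate) convergence
        p = (1 - restart) * W @ p + restart * p0

    print(np.round(p, 3))  # propagated scores implicate gene 0's neighborhood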

Furthermore, in transcriptome research, network information is used to measure pathway activity and for application in cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to the gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A benchmark study with pan-cancer data recently revealed that using network structure can improve performance [57]. In conclusion, while some data are lost due to the incompleteness of biological networks, network integration improved performance and increased interpretability in many cases.

3.2. Tumor Heterogeneity Study with Network Information

Tumor heterogeneity can originate from two directions, clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. While de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto a pathway, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between genes can be essential for an organism. Therefore, models analyzing such alterations integrate network-based analysis [98].
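The mutual-exclusivity idea can be illustrated with a pairwise test: for two genes, tabulate co-mutated versus exclusively mutated samples and apply Fisher's exact test, where an odds ratio below 1 suggests exclusivity. This is only the core counting logic; CoMEt and MEMo add permutation schemes and network context on top of it, and the mutation calls below are made up.

    import numpy as np
    from scipy.stats import fisher_exact

    # binary mutation matrix: rows = samples, columns = (geneA, geneB)
    muts = np.array([[1, 0], [1, 0], [0, 1], [0, 1],
                     [1, 1], [0, 0], [1, 0], [0, 1]])
    a, b = muts[:, 0].astype(bool), muts[:, 1].astype(bool)

    table = [[np.sum(a & b),  np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    odds, p = fisher_exact(table, alternative="less")  # "less": co-occurrence depleted
    print(f"odds ratio = {odds:.2f}, p = {p:.3f}")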

In contrast, tumor purity is dependent on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanism in tumor and immune reactions. For example, McGrail et al. identified a relationship between the DNA damage response protein and immune cell infiltration in cancer. The analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining subnetworks of genes other than immune cell infiltration by considering tumor purity [102].

3.3. Drug Target Identification with Network Information

In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomic and chemical information. The proposed approaches investigated such drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug targets and drug response, integrating networks with chemical and multi-omics datasets. In a recent survey study, Chen et al. compared 13 computational methods for drug response prediction, and it turned out that gene expression profiles are crucial information for drug response prediction [105].
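As a baseline for the kind of model compared in that survey, the sketch below fits a regularized linear model mapping gene expression profiles to a continuous drug response readout. The data are simulated placeholders; published methods typically work with CCLE/GDSC-style expression and IC50 tables and add network or pathway features on top.

    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))    # 200 cell lines x 500 genes (simulated)
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)  # toy drug response

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = ElasticNetCV(cv=5).fit(X_tr, y_tr)
    print(f"R^2 on held-out cell lines: {model.score(X_te, y_te):.2f}")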

Moreover, drug–target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models, aiming to discover potential new cancer drugs with a higher success probability than de novo drug design [16,106]. Specifically, drug-repurposing studies can address various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and drug datasets can be integrated to combine anticancer therapeutic strategies with non-cancer drug repurposing.

4. Deep Learning in Cancer Research

DNN models have developed rapidly and become more sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most datasets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS has made them suitable for DNN models, which require large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer datasets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].

In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1); therefore, integrating GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of deep learning in bioinformatics research [115].
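
As a concrete, if simplified, illustration of why such preprocessing matters, the sketch below standardizes each (simulated) GEO series separately before pooling, so study-level scale differences do not dominate downstream learning. Real integrations typically add explicit batch-effect correction, which is omitted here.

```python
# Per-dataset z-scoring before pooling expression matrices from
# different studies; a minimal stand-in for real batch handling.
import numpy as np

def zscore_per_dataset(datasets):
    out = []
    for X in datasets:  # each X: samples x genes from one series
        mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-8
        out.append((X - mu) / sd)
    return np.vstack(out)

geo_a = np.random.rand(20, 100) * 10   # simulated series, one platform
geo_b = np.random.rand(30, 100) + 5    # simulated series, another platform
merged = zscore_per_dataset([geo_a, geo_b])
print(merged.shape)  # (50, 100)
```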

4.1. Challenges for Deep Learning in Cancer Research

Many studies in biology and medicine have used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a variety of potential solutions to overcome this data scarcity, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling datasets with missing values [116] and has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data; its authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as a proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because such an integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, the GAN algorithm has been receiving more attention in single-cell transcriptomics because it has been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches cope with limited datasets in different ways. Transfer learning and few-shot learning, for instance, aim to reduce the search space with similar but unrelated datasets and guide the model toward a specific set of problems [124]. These approaches pre-train models with data that are similar in characteristics and type to, but distinct from, the problem set; after pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are introducing few-shot learning models and meta-learning approaches into omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network model [127] to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes from gene expression profiles [129].
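
A minimal sketch of the transfer learning recipe described above, using PyTorch with placeholder dimensions and random data: an encoder assumed to be pre-trained on a large public dataset is frozen, and only a new task head is fine-tuned on the small cohort of interest.

```python
# Transfer-learning sketch: freeze a (supposedly pre-trained) encoder
# and fine-tune only a fresh classification head on a small cohort.
import torch
import torch.nn as nn

n_genes, n_classes = 1000, 2

encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                        nn.Linear(256, 64), nn.ReLU())
# ... assume `encoder` was pre-trained on a large public dataset ...

for p in encoder.parameters():      # freeze pre-trained weights
    p.requires_grad = False

head = nn.Linear(64, n_classes)     # new task-specific layer
model = nn.Sequential(encoder, head)

x = torch.randn(8, n_genes)                 # small cohort (simulated)
y = torch.randint(0, n_classes, (8,))
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                             # gradients flow to head only
opt.step()
```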

Figure 3. (a) In various studies, NGS data are transformed into different forms: the 2-D transformed form feeds a convolution layer, and omics data are mapped to pathway-level scores, GO enrichment scores, or functional spectra. (b) DNN applications for handling the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and then fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples; a GCN is applied to aggregate neighbor information. (Created with BioRender.com).

4.2. Molecular Characterization with Network and DNN Model

DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier; they implemented data sparsity reduction methods and trained the DNN model on somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to feed the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring gene mutation profiles serves as input, and the convolution layer aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding into a 2-D image, DeepCC transformed gene expression data into functional spectra, and the resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
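
The chromosome-order embedding used by Lyu et al. and DeepGx can be illustrated with a short sketch. The gene ordering, array size, and data below are placeholders; the original methods define their own layouts.

```python
# Embed a 1-D gene expression profile into a 2-D array so a CNN's
# convolution layer can operate on it. Genes are assumed pre-sorted
# by chromosomal position; the profile is zero-padded to a square.
import numpy as np

def to_image(expr, gene_order, side=100):
    """expr: dict gene -> expression; gene_order: genes sorted by position."""
    flat = np.zeros(side * side, dtype=np.float32)
    vec = np.array([expr.get(g, 0.0) for g in gene_order], dtype=np.float32)
    n = min(len(vec), side * side)
    flat[:n] = vec[:n]               # zero-pad or truncate to side^2 entries
    return flat.reshape(side, side)  # 2-D input for a convolution layer

genes = [f"gene_{i}" for i in range(9500)]        # assumed ordered by position
profile = {g: float(np.random.rand()) for g in genes}
print(to_image(profile, genes).shape)  # (100, 100)
```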

Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model on TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of a cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model achieved even better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].

4.3. Tumor Heterogeneity with Network and DNN Model

As described in Section 4.1, cancer heterogeneity raises several issues, e.g., the tumor microenvironment. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed 'Scaden', a DNN model for investigating intratumor heterogeneity by deconvolving cell types in bulk-cell sequencing data. To overcome the lack of training datasets, in silico bulk-cell sequencing data were simulated from single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]; however, this information is typically not available. Recently, single-cell sequencing-based studies were conducted to tackle this problem. Because of technical limitations, single-cell sequencing data contain a great deal of missing data, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data, aiming to map single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser while embedding high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based approach can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
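
The autoencoder embedding described above can be sketched compactly. This is not scDeepCluster itself, which adds a ZINB noise model and a clustering layer; it is a minimal PyTorch illustration of how an encoder/decoder pair learns low-dimensional cell embeddings, with simulated data.

```python
# Minimal autoencoder: compress single-cell expression profiles to a
# latent space and reconstruct them; the reconstruction objective makes
# the encoder/decoder act as a denoiser during training.
import torch
import torch.nn as nn

n_genes, latent = 2000, 32
encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, latent))
decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, n_genes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(64, n_genes)          # simulated expression matrix
recon = decoder(encoder(x))
loss = nn.MSELoss()(recon, x)         # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()

z = encoder(x).detach()               # low-dimensional cell embeddings
print(z.shape)  # torch.Size([64, 32])
```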

4.4. Drug Target Identification with Networks and DNN Models

In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurpose non-oncology drugs for cancer treatment. Such drug repurposing is faster than de novo drug discovery, and combination therapy with a non-oncology drug can help overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders, using a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated on an independent drug–disease dataset [15].

The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets and other drug and genomics datasets, showing that DNN models can improve computational predictions of drug sensitivity [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can achieve higher prediction accuracy than single-omics methods; MOLI, for example, integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].

4.5. Graph Neural Network Model

In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks (GNNs) use biological networks as the base structure of the learning network itself. For instance, various pathway or interactome information can be integrated as the learning structure of a DNN and aggregated as heterogeneous information. In a GNN, the convolution is performed over the provided network structure of the data; convolution on a biological network therefore allows the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d). Consequently, by stacking layers, such a model can aggregate information from far-distant neighbors and thus outperform other machine learning models [142].
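
For readers unfamiliar with graph convolutions, a single Kipf–Welling-style GCN layer can be written out directly: each gene's features are averaged over its network neighbors (plus itself) and passed through a learnable transformation. The three-gene network and random features below are toy placeholders.

```python
# One graph convolution step: normalize the adjacency matrix, aggregate
# neighbor features, and apply a learnable weight matrix with ReLU.
import numpy as np

def gcn_layer(A, X, W):
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0)         # aggregate + ReLU

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy 3-gene path
X = np.random.rand(3, 4)                         # per-gene input features
W = np.random.rand(4, 2)                         # learnable weights
print(gcn_layer(A, X, W).shape)  # (3, 2)
```

Stacking several such layers is what lets information from far-distant neighbors reach a node, as noted above.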

In the context of the gene expression inference problem, the main question is whether a gene's expression level can be explained by aggregating the expression of its neighboring genes. A single-gene inference study by Dutil et al. showed that the GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures from RNA-seq data: Lee et al. implemented a GNN for cancer subtyping, tested it on five cancer types, and selected informative pathways for subtype classification [147]. Furthermore, GNNs are receiving more attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c), with each proposed application specializing in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. Third, heterogeneous network information must be integrated: drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve such analyses. Lastly, although GNNs use graphs, their stacked layers still make the models hard to interpret [148].

4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge

The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, deep learning is hardly a panacea for all research questions. In the following, we discuss potential limitations of DNN models. In general, DNN models with NGS data have two significant issues: (i) data requirements and (ii) interpretability. Usually, deep learning needs a large amount of training data for reasonable performance, which is more difficult to achieve with biomedical omics data than with, for instance, image data. Today, there are not many NGS datasets that are well curated and annotated for deep learning, which may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes: highly stacked layers obscure the model's decision-making rationale. Although methodologies for understanding and interpreting deep learning models have improved, the ambiguity in DNN decision-making has hindered the transfer of deep learning models into translational medicine [149,150].

As described before, biological networks are employed in various computational analyses for cancer research, and studies applying DNNs have demonstrated many different approaches to using prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be established. The LINCS program analyzed data from The Connectivity Map (CMap) project to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels are inferrable from only about 1000 genes, a list they called 'landmark genes'. Subsequently, Chen et al. started with these 978 landmark genes and predicted other gene expression levels with DNN models; integrating public large-scale NGS data, their model outperformed linear regression. The authors conclude that the performance advantage originates from the DNN's ability to model non-linear relationships between genes [153].
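
The setup of this landmark-gene experiment, predicting held-out gene expression from a fixed panel of predictor genes, can be sketched as follows. The data are simulated with a mild non-linearity that a linear model cannot capture; the real studies trained far larger DNNs on public expression compendia.

```python
# Sketch: infer a target gene's expression from 978 predictor genes,
# comparing a linear model with a small neural network on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 978))                      # landmark-style profiles
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=500)   # non-linear target gene

linear = LinearRegression().fit(X[:400], y[:400])
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(X[:400], y[:400])
print("linear R^2:", linear.score(X[400:], y[400:]))
print("MLP R^2:", mlp.score(X[400:], y[400:]))
```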

Following this study, Beltin et al. extensively investigated various biological networks in the same context of gene expression inference. They set up a simplified representation of gene expression status and solved a binary classification task. To assess the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in this study incorporating TCGA and GTEx datasets, the random network model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be seen as a panacea, and a careful evaluation is required for each dataset and task. In particular, this result may not represent biological complexity, because the oversimplified problem setup did not consider relative gene expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles, because they consist of expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.

“However, although sophisticated recent applications of deep learning have shown improved accuracy, this does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, and a particular research question, the technology and network data have to be chosen carefully.”

References

  1. Janes, K.A.; Yaffe, M.B. Data-driven modelling of signal-transduction networks. Nat. Rev. Mol. Cell Biol. 2006, 7, 820–828.
  2. Kreeger, P.K.; Lauffenburger, D.A. Cancer systems biology: A network modeling perspective. Carcinogenesis 2010, 31, 2–8.
  3. Vucic, E.A.; Thu, K.L.; Robison, K.; Rybaczyk, L.A.; Chari, R.; Alvarez, C.E.; Lam, W.L. Translating cancer ‘omics’ to improved outcomes. Genome Res. 2012, 22, 188–195.
  4. Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
  5. Hutter, C.; Zenklusen, J.C. The Cancer Genome Atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
  6. Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
  7. Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
  8. Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
  9. Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
  10. Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
  11. Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
  12. Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
  13. Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
  14. Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
  15. Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
  16. Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
  17. Weinstein, J.N.; Collisson, E.A.; Mills, G.B.; Shaw, K.R.M.; Ozenberger, B.A.; Ellrott, K.; Shmulevich, I.; Sander, C.; Stuart, J.M.; Network, C.G.A.R.; et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat. Genet. 2013, 45, 1113.
  18. The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
  19. King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
  20. Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
  21. Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
  22. Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
  23. Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
  24. Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
  25. Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
  26. Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.

Use of Systems Biology in Anti-Microbial Drug Development

Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965

In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.

Genome Sequences and Proteomic Structural Databases

In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has led to essential genes of the bacteria being revealed and to a better understanding of the genetic diversity in different strains that might lead to a selective advantage (Coll et al., 2018). This will help with our understanding of the mode of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have structures determined experimentally.

Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.

There is a dearth of information related to structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the Protein Data Bank (PDB). However, the high sequence similarity in protein coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability and impacts of mutations.

We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae: we first modeled the structure of this hetero-hexameric complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a drug used to treat both tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “Computational Saturation Mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.
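
The bookkeeping behind a computational saturation mutagenesis scan is simple to sketch, even though the stability predictor itself (mCSM in the original work) is the hard part. In the placeholder below, predict_ddg is a hypothetical stand-in that returns random values; only the enumeration of all 19 substitutions per position and the per-residue worst case, as used for the Figure 2A color map, is illustrated.

```python
# Saturation mutagenesis bookkeeping: mutate every residue to each of
# the 19 alternatives, record a stability change, and keep the most
# destabilizing value per position for downstream structure coloring.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_ddg(position, wild_type, mutant):
    # Placeholder for an mCSM-like stability predictor (kcal/mol)
    return random.uniform(-2.0, 1.0)

def saturation_scan(sequence):
    worst = {}
    for pos, wt in enumerate(sequence, start=1):
        ddgs = [predict_ddg(pos, wt, aa) for aa in AMINO_ACIDS if aa != wt]
        worst[pos] = min(ddgs)   # maximum destabilizing effect at this site
    return worst

print(saturation_scan("MSEQL"))  # toy 5-residue sequence
```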

FIGURE 2

Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the β-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which gradients from red (high destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the β-subunit of RNA polymerase, the S437H substitution, which resulted in the maximum destabilizing effect [−1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring–ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and rifampin affinity for the β-subunit [−0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen bond interactions. Ring–ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky-blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).

Examples of Understanding and Combatting Resistance

The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.

FIGURE 3

Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).

Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:

20th Anniversary and the Evolution of Computational Biology – International Society for Computational Biology

Featuring Computational and Systems Biology Program at Memorial Sloan Kettering Cancer Center, Sloan Kettering Institute (SKI), The Dana Pe’er Lab

Quantum Biology And Computational Medicine

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

Structure-guided Drug Discovery: (1) The Coronavirus 3CL hydrolase (Mpro) enzyme (main protease), essential for proteolytic maturation of the virus, and (2) the second viral protease, the RNA polymerase, the viral spike protein, and a viral RNA as promising targets for discovery of inhibitors of cleavage of the viral spike polyprotein, preventing the coronavirus virion from spreading infection

 

Curators and Reporters: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN

 

Therapeutical options to coronavirus (2019-nCoV) include consideration of the following:

(a) Monoclonal and polyclonal antibodies

(b)  Vaccines

(c)  Small molecule treatments (e.g., chloroquinolone and derivatives), including compounds already approved for other indications 

(d)  Immuno-therapies derived from human or other sources

 

 

Structure of the nCoV trimeric spike

The World Health Organization has declared the outbreak of a novel coronavirus (2019-nCoV) to be a public health emergency of international concern. The virus binds to host cells through its trimeric spike glycoprotein, making this protein a key target for potential therapies and diagnostics. Wrapp et al. determined a 3.5-angstrom-resolution structure of the 2019-nCoV trimeric spike protein by cryo–electron microscopy. Using biophysical assays, the authors show that this protein binds at least 10 times more tightly than the corresponding spike protein of severe acute respiratory syndrome (SARS)–CoV to their common host cell receptor. They also tested three antibodies known to bind to the SARS-CoV spike protein but did not detect binding to the 2019-nCoV spike protein. These studies provide valuable information to guide the development of medical counter-measures for 2019-nCoV. [Bold Face Added by ALA]

Science, this issue p. 1260

Abstract

The outbreak of a novel coronavirus (2019-nCoV) represents a pandemic threat that has been declared a public health emergency of international concern. The CoV spike (S) glycoprotein is a key target for vaccines, therapeutic antibodies, and diagnostics. To facilitate medical countermeasure development, we determined a 3.5-angstrom-resolution cryo–electron microscopy structure of the 2019-nCoV S trimer in the prefusion conformation. The predominant state of the trimer has one of the three receptor-binding domains (RBDs) rotated up in a receptor-accessible conformation. We also provide biophysical and structural evidence that the 2019-nCoV S protein binds angiotensin-converting enzyme 2 (ACE2) with higher affinity than does severe acute respiratory syndrome (SARS)-CoV S. Additionally, we tested several published SARS-CoV RBD-specific monoclonal antibodies and found that they do not have appreciable binding to 2019-nCoV S, suggesting that antibody cross-reactivity may be limited between the two RBDs. The structure of 2019-nCoV S should enable the rapid development and evaluation of medical countermeasures to address the ongoing public health crisis.

SOURCE
Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation
  1. Department of Molecular Biosciences, The University of Texas at Austin, Austin, TX 78712, USA.

  2. Vaccine Research Center, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD 20892, USA.
  1. Corresponding author. Email: jmclellan@austin.utexas.edu
  1. * These authors contributed equally to this work.

Science  13 Mar 2020:
Vol. 367, Issue 6483, pp. 1260-1263
DOI: 10.1126/science.abb2507

 

02/04/2020

New Coronavirus Protease Structure Available

PDB data provide a starting point for structure-guided drug discovery

A high-resolution crystal structure of COVID-19 (2019-nCoV) coronavirus 3CL hydrolase (Mpro) has been determined by Zihe Rao and Haitao Yang’s research team at ShanghaiTech University. Rapid public release of this structure of the main protease of the virus (PDB 6lu7) will enable research on this newly-recognized human pathogen.

Recent emergence of the COVID-19 coronavirus has resulted in a WHO-declared public health emergency of international concern. Research efforts around the world are working towards establishing a greater understanding of this particular virus and developing treatments and vaccines to prevent further spread.

While PDB entry 6lu7 is currently the only public-domain 3D structure from this specific coronavirus, the PDB contains structures of the corresponding enzyme from other coronaviruses. The 2003 outbreak of the closely-related Severe Acute Respiratory Syndrome-related coronavirus (SARS) led to the first 3D structures, and today there are more than 200 PDB structures of SARS proteins. Structural information from these related proteins could be vital in furthering our understanding of coronaviruses and in discovery and development of new treatments and vaccines to contain the current outbreak.

The coronavirus 3CL hydrolase (Mpro) enzyme, also known as the main protease, is essential for proteolytic maturation of the virus. It is thought to be a promising target for discovery of small-molecule drugs that would inhibit cleavage of the viral polyprotein and prevent spread of the infection.

Comparison of the protein sequence of the COVID-19 coronavirus 3CL hydrolase (Mpro) against the PDB archive identified 95 PDB proteins with at least 90% sequence identity. Furthermore, these related protein structures contain approximately 30 distinct small molecule inhibitors, which could guide discovery of new drugs. Of particular significance for drug discovery is the very high amino acid sequence identity (96%) between the COVID-19 coronavirus 3CL hydrolase (Mpro) and the SARS virus main protease (PDB 1q2w). Summary data about these closely-related PDB structures are available (CSV) to help researchers more easily find this information. In addition, the PDB houses 3D structure data for more than 20 unique SARS proteins represented in more than 200 PDB structures, including a second viral protease, the RNA polymerase, the viral spike protein, a viral RNA, and other proteins (CSV).
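
The identity screen described here reduces, at its core, to computing percent sequence identity against candidate structures and keeping hits above a threshold. The toy function below assumes pre-aligned, equal-length sequences; real searches against the PDB archive use proper alignment tools such as BLAST, and the sequences shown are made up.

```python
# Toy percent-identity screen over pre-aligned, equal-length sequences.
def percent_identity(a, b):
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

query = "SGFRKMAFPSGKVEGCMVQV"   # hypothetical Mpro fragment
hit   = "SGFRKMAFPSGKVEGCMVQA"   # hypothetical PDB template fragment
if percent_identity(query, hit) >= 90.0:
    print("candidate template:", percent_identity(query, hit), "% identity")
```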

Public release of the COVID-19 coronavirus 3CL hydrolase (Mpro), at a time when this information can prove most vital and valuable, highlights the importance of open and timely availability of scientific data. The wwPDB strives to ensure that 3D biological structure data remain freely accessible for all, while maintaining as comprehensive and accurate an archive as possible. We hope that this new structure, and those from related viruses, will help researchers and clinicians address the COVID-19 coronavirus global public health emergency.

Update: Released COVID-19-related PDB structures include

  • PDB structure 6lu7 (X. Liu, B. Zhang, Z. Jin, H. Yang, Z. Rao Crystal structure of COVID-19 main protease in complex with an inhibitor N3 doi: 10.2210/pdb6lu7/pdb) Released 2020-02-05
  • PDB structure 6vsb (D. Wrapp, N. Wang, K.S. Corbett, J.A. Goldsmith, C.-L. Hsieh, O. Abiona, B.S. Graham, J.S. McLellan (2020) Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation Science doi: 10.1126/science.abb2507) Released 2020-02-26
  • PDB structure 6lxt (Y. Zhu, F. Sun Structure of post fusion core of 2019-nCoV S2 subunit doi: 10.2210/pdb6lxt/pdb) Released 2020-02-26
  • PDB structure 6lvn (Y. Zhu, F. Sun Structure of the 2019-nCoV HR2 Domain doi: 10.2210/pdb6lvn/pdb) Released 2020-02-26
  • PDB structure 6vw1
    J. Shang, G. Ye, K. Shi, Y.S. Wan, H. Aihara, F. Li Structural basis for receptor recognition by the novel coronavirus from Wuhan doi: 10.2210/pdb6vw1/pdb
    Released 2020-03-04
  • PDB structure 6vww
    Y. Kim, R. Jedrzejczak, N. Maltseva, M. Endres, A. Godzik, K. Michalska, A. Joachimiak, Center for Structural Genomics of Infectious Diseases Crystal Structure of NSP15 Endoribonuclease from SARS CoV-2 doi: 10.2210/pdb6vww/pdb
    Released 2020-03-04
  • PDB structure 6y2e
    L. Zhang, X. Sun, R. Hilgenfeld Crystal structure of the free enzyme of the SARS-CoV-2 (2019-nCoV) main protease doi: 10.2210/pdb6y2e/pdb
    Released 2020-03-04
  • PDB structure 6y2f
    L. Zhang, X. Sun, R. Hilgenfeld Crystal structure (monoclinic form) of the complex resulting from the reaction between SARS-CoV-2 (2019-nCoV) main protease and tert-butyl (1-((S)-1-(((S)-4-(benzylamino)-3,4-dioxo-1-((S)-2-oxopyrrolidin-3-yl)butan-2-yl)amino)-3-cyclopropyl-1-oxopropan-2-yl)-2-oxo-1,2-dihydropyridin-3-yl)carbamate (alpha-ketoamide 13b) doi: 10.2210/pdb6y2f/pdb
    Released 2020-03-04
  • PDB structure 6y2g
    L. Zhang, X. Sun, R. Hilgenfeld Crystal structure (orthorhombic form) of the complex resulting from the reaction between SARS-CoV-2 (2019-nCoV) main protease and tert-butyl (1-((S)-1-(((S)-4-(benzylamino)-3,4-dioxo-1-((S)-2-oxopyrrolidin-3-yl)butan-2-yl)amino)-3-cyclopropyl-1-oxopropan-2-yl)-2-oxo-1,2-dihydropyridin-3-yl)carbamate (alpha-ketoamide 13b) doi: 10.2210/pdb6y2g/pdb
    Released 2020-03-04

Abstract

Coronavirus disease 2019 (COVID-19) is a global pandemic impacting nearly 170 countries/regions and more than 285,000 patients worldwide. COVID-19 is caused by the Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2), which invades cells through the angiotensin converting enzyme 2 (ACE2) receptor. Among those with COVID-19, there is a higher prevalence of cardiovascular disease and more than 7% of patients suffer myocardial injury from the infection (22% of the critically ill). Despite ACE2 serving as the portal for infection, the role of ACE inhibitors or angiotensin receptor blockers requires further investigation. COVID-19 poses a challenge for heart transplantation, impacting donor selection, immunosuppression, and post-transplant management. Thankfully there are a number of promising therapies under active investigation to both treat and prevent COVID-19. Key Words: COVID-19; myocardial injury; pandemic; heart transplant

SOURCE

https://www.ahajournals.org/doi/pdf/10.1161/CIRCULATIONAHA.120.046941

ACE2

  • Towler P, Staker B, Prasad SG, Menon S, Tang J, Parsons T, Ryan D, Fisher M, Williams D, Dales NA, Patane MA, Pantoliano MW (Apr 2004). “ACE2 X-ray structures reveal a large hinge-bending motion important for inhibitor binding and catalysis”. The Journal of Biological Chemistry. 279 (17): 17996–8007. doi:10.1074/jbc.M311191200. PMID 14754895.

  • Turner AJ, Tipnis SR, Guy JL, Rice G, Hooper NM (Apr 2002). “ACEH/ACE2 is a novel mammalian metallocarboxypeptidase and a homologue of angiotensin-converting enzyme insensitive to ACE inhibitors”. Canadian Journal of Physiology and Pharmacology. 80 (4): 346–53. doi:10.1139/y02-021. PMID 12025971.

  • Zhang, Haibo; Penninger, Josef M.; Li, Yimin; Zhong, Nanshan; Slutsky, Arthur S. (3 March 2020). “Angiotensin-converting enzyme 2 (ACE2) as a SARS-CoV-2 receptor: molecular mechanisms and potential therapeutic target”. Intensive Care Medicine. Springer Science and Business Media LLC. doi:10.1007/s00134-020-05985-9. ISSN 0342-4642. PMID 32125455.

  • Gurwitz, David (2020). “Angiotensin receptor blockers as tentative SARS‐CoV‐2 therapeutics”. Drug Development Research. doi:10.1002/ddr.21656. PMID 32129518.

Angiotensin converting enzyme 2 (ACE2)

is an exopeptidase that catalyses the conversion of angiotensin I to the nonapeptide angiotensin 1–9[5] or the conversion of angiotensin II to angiotensin 1–7.[6][7] ACE2 has direct effects on cardiac function and is expressed predominantly in vascular endothelial cells of the heart and the kidneys.[8] ACE2 is not sensitive to the ACE inhibitor drugs used to treat hypertension.[9]

ACE2 receptors have been shown to be the entry point into human cells for some coronaviruses, including the SARS virus.[10] A number of studies have identified that the entry point is the same for SARS-CoV-2,[11] the virus that causes COVID-19.[12][13][14][15]

Some have suggested that a decrease in ACE2 could be protective against Covid-19 disease,[16] but others have suggested the opposite: that angiotensin II receptor blocker drugs could be protective against Covid-19 disease via increasing ACE2, and that these hypotheses need to be tested by data mining of clinical patient records.[17]

REFERENCES

https://en.wikipedia.org/wiki/Angiotensin-converting_enzyme_2

 

FOLDING@HOME TAKES UP THE FIGHT AGAINST COVID-19 / 2019-NCOV

We need your help! Folding@home is joining researchers around the world working to better understand the 2019 Coronavirus (2019-nCoV) to accelerate the open science effort to develop new life-saving therapies. By downloading Folding@home, you can donate your unused computational resources to the Folding@home Consortium, where researchers are working to advance our understanding of the structures of potential drug targets for 2019-nCoV that could aid in the design of new therapies. The data you help us generate will be quickly and openly disseminated as part of an open science collaboration of multiple laboratories around the world, giving researchers new tools that may unlock new opportunities for developing lifesaving drugs.

2019-nCoV is a close cousin to SARS coronavirus (SARS-CoV), and acts in a similar way. For both coronaviruses, the first step of infection occurs in the lungs, when a protein on the surface  of the virus binds to a receptor protein on a lung cell. This viral protein is called the spike protein, depicted in red in the image below, and the receptor is known as ACE2. A therapeutic antibody is a type of protein that can block the viral protein from binding to its receptor, therefore preventing the virus from infecting the lung cell. A therapeutic antibody has already been developed for SARS-CoV, but to develop therapeutic antibodies or small molecules for 2019-nCoV, scientists need to better understand the structure of the viral spike protein and how it binds to the human ACE2 receptor required for viral entry into human cells.

Proteins are not stagnant—they wiggle and fold and unfold to take on numerous shapes.  We need to study not only one shape of the viral spike protein, but all the ways the protein wiggles and folds into alternative shapes in order to best understand how it interacts with the ACE2 receptor, so that an antibody can be designed. Low-resolution structures of the SARS-CoV spike protein exist and we know the mutations that differ between SARS-CoV and 2019-nCoV.  Given this information, we are uniquely positioned to help model the structure of the 2019-nCoV spike protein and identify sites that can be targeted by a therapeutic antibody. We can build computational models that accomplish this goal, but it takes a lot of computing power.

This is where you come in! With many computers working towards the same goal, we aim to help develop a therapeutic remedy as quickly as possible. By downloading Folding@home here [LINK] and selecting to contribute to “Any Disease”, you can help provide us with the computational power required to tackle this problem. One protein from 2019-nCoV, a protease encoded by the viral RNA, has already been crystallized. Although the 2019-nCoV spike protein of interest has not yet been resolved bound to ACE2, our objective is to use the homologous structure of the SARS-CoV spike protein to identify therapeutic antibody targets.

This illustration, created at the Centers for Disease Control and Prevention (CDC), reveals ultrastructural morphology exhibited by coronaviruses. Note the spikes that adorn the outer surface of the virus, which impart the look of a corona surrounding the virion, when viewed electron microscopically. A novel coronavirus virus was identified as the cause of an outbreak of respiratory illness first detected in Wuhan, China in 2019.

Image and Caption Credit: Alissa Eckert, MS; Dan Higgins, MAM available at https://phil.cdc.gov/Details.aspx?pid=23311

Structures of the closely related SARS-CoV spike protein bound by therapeutic antibodies may help rapidly design better therapies. The three monomers of the SARS-CoV spike protein are shown in different shades of red; the antibody is depicted in green. [PDB: 6NB7 https://www.rcsb.org/structure/6nb7]

(post authored by Ariana Brenner Clerkin)

References:

PDB 6lu7 structure summary ‹ Protein Data Bank in Europe (PDBe) ‹ EMBL-EBI https://www.ebi.ac.uk/pdbe/entry/pdb/6lu7 (accessed Feb 5, 2020).

Tian, X.; Li, C.; Huang, A.; Xia, S.; Lu, S.; Shi, Z.; Lu, L.; Jiang, S.; Yang, Z.; Wu, Y.; et al. Potent Binding of 2019 Novel Coronavirus Spike Protein by a SARS Coronavirus-Specific Human Monoclonal Antibody; preprint; Microbiology, 2020. https://doi.org/10.1101/2020.01.28.923011.

Walls, A. C.; Xiong, X.; Park, Y. J.; Tortorici, M. A.; Snijder, J.; Quispe, J.; Cameroni, E.; Gopal, R.; Dai, M.; Lanzavecchia, A.; et al. Unexpected Receptor Functional Mimicry Elucidates Activation of Coronavirus Fusion. Cell 2019, 176, 1026–1039.e15. https://doi.org/10.2210/pdb6nb7/pdb.

SOURCE

https://foldingathome.org/2020/02/27/foldinghome-takes-up-the-fight-against-covid-19-2019-ncov/

UPDATED 3/13/2020

I am reposting the following Science blog post from Derek Lowe as is, and I encourage people to browse the comments on his Science blog In the Pipeline because, as Dr. Lowe states, in the current crisis it is important to disseminate good information as quickly as possible; I wanted the readers here to be able to read his excellent post on this matter of Covid-19. I would also like to direct readers to the journal Science’s opinion letter concerning how important it is to rebuild trust in good science and the scientific process. The full link for the following In the Pipeline post is: https://blogs.sciencemag.org/pipeline/archives/2020/03/06/covid-19-small-molecule-therapies-reviewed

A Summary of current potential repurposed therapeutics for COVID-19 Infection from In the Pipeline: A Science blog from Derek Lowe

Covid-19 Small Molecule Therapies Reviewed

Let’s take inventory on the therapies that are being developed for the coronavirus epidemic. Here is a very thorough list of at Biocentury, and I should note that (like Stat and several other organizations) they’re making all their Covid-19 content free to all readers during this crisis. I’d like to zoom in today on the potential small-molecule therapies, since some of these have the most immediate prospects for use in the real world.

The ones at the front of the line are repurposed drugs that are already approved for human use, for a lot of obvious reasons. The Biocentury list doesn’t cover these, but here’s an article at Nature Biotechnology that goes into detail. Clinical trials are a huge time sink – they sort of have to be, in most cases, if they’re going to be any good – and if you’ve already done all that stuff it’s a huge leg up, even if the drug itself is not exactly a perfect fit for the disease. So what do we have? The compound that is most advanced is probably remdesivir from Gilead, at right. This has been in development for a few years as an RNA virus therapy – it was originally developed for Ebola, and has been tried out against a whole list of single-strand RNA viruses. That includes the related coronaviruses SARS and MERS, so Covid-19 was an obvious fit.

The compound is a prodrug – that phosphoramidate gets cleaved off completely, leaving the active 5-OH compound GS-441524. Its mechanism of action is to get incorporated into viral RNA, since it’s taken up by RNA polymerase and it largely seems to evade proofreading. This causes RNA termination trouble later on, since that alpha-nitrile C-nucleoside is not exactly what the virus is expecting in its genome at that point, and thus viral replication is inhibited.

There are five clinical trials underway (here’s an overview at Biocentury). The NIH has an adaptive-design Phase II trial that has already started in Nebraska, with doses to be changed according to Bayesian readouts along the way. There are two Phase III trials underway at China-Japan Friendship Hospital in Hubei, double-blinded and placebo-controlled (since placebo is, as far as drug therapy goes, the current standard of care). And Gilead themselves are starting two open-label trials, one with no control arm and one with an (unblinded) standard-of-care comparison arm. Those might read out first, depending on when they get off the ground, but will be only rough readouts due to the fast-and-loose trial design. The two Hubei trials and the NIH one will add some rigor to the process, but I’m not sure when they’re going to report. My personal opinion is that I like the chances of this drug more than anything else on this list, but it’s still unlikely to be a game-changer.

There’s an RNA polymerase inhibitor (favipiravir) from Toyama, at right, that’s in a trial in China. It’s a thought – a broad-spectrum agent of this sort would be the sort of thing to try. But unfortunately, from what I can see, it has already turned up as ineffective in in vitro tests. The human trial that’s underway is honestly the sort of thing that would only happen under circumstances like the present: a developing epidemic with a new pathogen and no real standard of care. I hold out little hope for this one, but given that there’s nothing else at present, it probably should be tried. As you’ll see, this is far from the only situation like this.

One of the screens of known drugs in China that also flagged remdesivir noted that the old antimalarial drug chloroquine seemed to be effective in vitro. It had been reported some years back as a possible antiviral, working through more than one mechanism, probably both at viral entry and intracellularly thereafter. That part shouldn’t be surprising – chloroquine’s actual mode(s) of action against malaria parasites are still not completely worked out, either, and some of what people thought they knew about it has turned out to be wrong. There are several trials underway with it at Chinese facilities, some in combination with other agents like remdesivir. Chloroquine has of course been taken for many decades as an antimalarial, but it has a number of liabilities, including seizures, hearing damage, retinopathy and sudden effects on blood glucose. So it’s going to be important to establish just how effective it is and what doses will be needed. Just as with vaccine candidates, it’s possible to do more harm with a rushed treatment than the disease is doing itself.

There are several other known antiviral drugs being tried in China, but I don’t have too much hope for those, either. The neuraminidase inhibitors such as oseltamivir (better known as Tamiflu) were tried against SARS and were ineffective; there is no reason to expect anything versus Covid-19, although these drugs are a component of some drug cocktail trials. The HIV protease therapies such as darunavir and the combination therapy Kaletra are in trials, but that’s also a rather desperate long shot, since there’s no particular reason to think that they will have any such protease inhibition against what this new virus has to offer (and indeed, such agents weren’t much help against SARS in the end, either). The classic interferon/ribavirin combination seems to have had some activity against SARS and MERS, and is in two trials from what I can see. That’s not an awful idea by any means, but it’s not a great one, either: if your viral disease has interferon/ribavirin as a front line therapy, it generally means that there’s nothing really good available. No, unless we get really lucky none of these ideas are going to slow the disease down much.

There are a few other repurposed-protease-inhibitor ideas out there, such as this one. (Edit: I had seen this paper but couldn’t track it down, so thanks to those who sent it along). This paper suggests that the TMPRSS2 protease is important for viral entry on the human-cell side of the process, a pathway that has been noted for other coronaviruses. And it points out that there is an approved inhibitor (in Japan) for this enzyme (camostat), so that would definitely seem to be worth a trial, probably in combination with remdesivir.

That’s about it for the existing small molecules, from what I can see. What about new ones? Don’t hold your breath, is all I can say. A drug discovery program from scratch against a new pathogen is, as many readers here well know, not a trivial exercise. As this Bloomberg article details, many such efforts in the past (small molecules and vaccines alike) have come to grief because by the time they had anything to deliver the epidemic itself had passed. Indeed, Gilead’s remdesivir had already been dropped as a potential Ebola therapy.

You will either need to have a target in mind up front or go phenotypic. For the former, what you’d see are better characterizations of the viral protease and more extensive screens against it. Two other big target areas are viral entry (which involves the “spike” proteins on the virus surface and the ACE2 protein on human cells) and viral replication. To the former, it’s worth quickly noting that ACE2 is so much unlike the more familiar ACE protein that none of the cardiovascular ACE inhibitors do anything to it at all. And targeting the latter mechanisms is how remdesivir was developed as a possible Ebola agent, but as you can see, that took time, too. Phenotypic screens are perfectly reasonable against viral pathogens as well, but you’ll need to put time and effort into that assay up front, just as with any phenotypic effort, because as anyone who does that sort of work will tell you, a bad phenotypic screen is a complete waste of everyone’s time.

One of the key steps for either route is identifying an animal model. While animal models of infectious disease can be extremely well translated to human therapy, that doesn’t happen by accident: you need to choose the right animal. Viruses in general (and coronaviruses are no exception) vary widely in their effects in different species, and not just across the gaps of bird/reptile/human and the like. No, you’ll run into things where even the usual set of small mammals are acting differently from each other, with some of them not even getting sick at all. This current virus may well have gone through a couple of other mammalian species before landing on us, but you’ll note that dogs (to pick one) don’t seem to have any problem with it.

All this means that any new-target new-chemical-matter effort against Covid-19 (or any new pathogen) is going to take years, and there is just no way around that. Update: see here for just such an effort to start finding fragment hits for the viral protease. This puts small molecules in a very bimodal distribution: you have the existing drugs that might be repurposed, and are presumably available right now. Nothing else is! At the other end, for completely new therapies you have the usual prospects of drug discovery: years from now, lots of money, low success rate, good luck to all of us. The gap between these two could in theory be filled by vaccines and antibody therapies (if everything goes really, really well) but those are very much their own area and will be dealt with in a separate post.

Either way, the odds are that we (and I mean “we as a species” here) are going to be fighting this epidemic without any particularly amazing pharmacological weapons. Eventually we’ll have some, but I would advise people, pundits, and politicians not to get all excited about the prospects for some new therapies to come riding up over the hill to help us out. The odds of that happening in time to do anything about the current outbreak are very small. We will be going for months, years, with the therapeutic options we have right now. Look around you: what we have today is what we have to work with.

Other related articles published in this Open Access Online Scientific Journal include the following:

 

Group of Researchers @ University of California, Riverside, the University of Chicago, the U.S. Department of Energy’s Argonne National Laboratory, and Northwestern University solve COVID-19 Structure and Map Potential Therapeutics

Reporters: Stephen J Williams, PhD and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2020/03/06/group-of-researchers-solve-covid-19-structure-and-map-potential-therapeutic/

Predicting the Protein Structure of Coronavirus: Inhibition of Nsp15 can slow viral replication and Cryo-EM – Spike protein structure (experimentally verified) vs AI-predicted protein structures (not experimentally verified) of DeepMind (Parent: Google) aka AlphaFold

Curators: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2020/03/08/predicting-the-protein-structure-of-coronavirus-inhibition-of-nsp15-can-slow-viral-replication-and-cryo-em-spike-protein-structure-experimentally-verified-vs-ai-predicted-protein-structures-not/

 

Coronavirus facility opens at Rambam Hospital using new Israeli tech

https://www.jpost.com/Israel-News/Coronavirus-facility-opens-at-Rambam-Hospital-using-new-Israeli-tech-619681

 

Read Full Post »

Free Bio-IT World Webinar: Machine Learning to Detect Cancer Variants

Reporter: Stephen J. Williams, PhD


SomaticSeq: An Ensemble Approach with Machine Learning to Detect Cancer Variants

June 16 at 1pm EDT | Register for this Webinar | View All Webinars

Accurate detection of somatic mutations has proven to be challenging in cancer NGS analysis, due to tumor heterogeneity and cross-contamination between tumor and matched normal samples. Oftentimes, a somatic caller that performs well for one tumor may not for another.

In this webinar we will introduce SomaticSeq, a tool within the Bina Genomic Management Solution (Bina GMS) designed to boost the accuracy of somatic mutation detection with a machine learning approach. You will learn:

  • Benchmarking of leading somatic callers, namely MuTect, SomaticSniper, VarScan2, JointSNVMix2, and VarDict
  • Integration of these callers, and how accuracy is achieved using a machine learning classifier that incorporates over 70 features with SomaticSeq (a schematic sketch follows this list)
  • Accuracy validation including results from the ICGC-TCGA DREAM Somatic Mutation Calling Challenge, in which Bina placed 1st in indel calling and 2nd in SNV calling in stage 5
  • Creation of a new SomaticSeq classifier utilizing your own dataset
  • Review of the somatic workflow within the Bina Genomic Management Solution
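
To make the ensemble idea concrete, below is a minimal sketch of how per-candidate calls from several somatic callers, plus a few sequence-context features, can be combined into a single boosted classifier. This illustrates the general technique, not SomaticSeq’s actual code: the feature set, the simulated data, and the scikit-learn classifier choice are all assumptions made for the example.

```python
# Minimal sketch of an ensemble somatic-variant classifier in the spirit of
# SomaticSeq: combine binary verdicts from several callers with sequence-context
# features and train a boosted classifier on labeled candidates.
# All features and data here are simulated for illustration only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 5000
# One 0/1 column per caller (MuTect, SomaticSniper, VarScan2, JointSNVMix2,
# VarDict), plus hypothetical tumor VAF and mean mapping-quality features.
X = np.column_stack([
    rng.integers(0, 2, (n, 5)),      # five caller verdicts per candidate
    rng.uniform(0.01, 0.6, (n, 1)),  # tumor variant allele frequency
    rng.uniform(20, 60, (n, 1)),     # mean mapping quality
])
# Toy truth labels: candidates flagged by more callers are more often real.
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"precision={precision_score(y_te, pred):.3f}  recall={recall_score(y_te, pred):.3f}")
```

The real tool incorporates over 70 engineered features and trains on ground-truth datasets such as the DREAM challenge synthetic tumors; the sketch only shows why a learned combination can outperform any single caller or a simple majority vote.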

Speakers:

Li Tai Fang
Sr. Bioinformatics Scientist
Bina Technologies, Part of
Roche Sequencing

Anoop Grewal
Product Marketing Manager
Bina Technologies, Part of
Roche Sequencing

Read full speaker bios here

Cost: No cost!

Schedule conflict? Register now and you’ll receive a copy of the recording.

This webinar is compliments of: 

Bio-ITWorld.com/Bio-IT-Webinars

Read Full Post »

Invivoscribe, Thermo Fisher Ink Cancer Dx Development Deal, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Invivoscribe, Thermo Fisher Ink Cancer Dx Development Deal

Reporter: Stephen J. Williams, PhD

 

NEW YORK (GenomeWeb) – Invivoscribe Technologies announced today that it has formed a strategic partnership with Thermo Fisher Scientific to develop multiple next-generation sequencing-based in vitro cancer diagnostics.

Under the deal, Invivoscribe will develop and commercialize immuno-oncology molecular diagnostics that run on Thermo’s Ion PGM Dx system, as well as associated bioinformatics software for applications in liquid biopsies. The tests will be specifically designed for both the diagnosis and minimal residual disease (MRD) monitoring of various hematologic cancers.

Additional terms of the arrangement were not disclosed.

“We are … very excited to provide our optimized NGS tests with comprehensive bioinformatics software so our customers can perform the entire testing and reporting process, including MRD testing, within their laboratories,” Invivoscribe CEO Jeffrey Miller said in a statement.

Read Full Post »

Cambridge Healthtech Institute’s Third Annual

Clinical NGS Assays

Addressing Validation, Standards, and Clinical Relevance for Improved Outcomes

August 23-24, 2016 | Grand Hyatt Hotel | Washington, DC

Reporter: Stephen J. Williams, PhD


View Preliminary Agenda
 

Molecular diagnostics, particularly next-generation sequencing (NGS), have become an integral component of disease diagnosis. Still, there is work to be done to establish these tools as the standard of care. The Third Annual Clinical NGS Assays event will address NGS assay validation, establishing NGS standards, and determining clinical relevance. The pros and cons of various techniques such as gene panels, whole exome, and whole genome sequencing will also be debated with regard to depth of coverage, clinical utility, and reimbursement. Overall, this event will address the needs of both researchers and clinicians while exploring strategies to increase collaboration for improved patient outcomes.

Special Early Registration Savings Available
Register Now to Save up to $450

Preliminary Agenda

ASSAY VALIDATION AND ANALYSIS

Best Practices for Using Genome in a Bottle Reference Materials to Benchmark Variant Calls
Justin Zook, National Institute of Standards and Technology

NGS in Clinical Diagnosis: Aspects of Quality Management
Pinar Bayrak-Toydemir, M.D., Ph.D., FACMG, Associate Professor, Pathology, University of Utah; Medical Director, Molecular Genetics and Genomics, ARUP Laboratories

Thorough Validation and Implementation of Preimplantation Genetic Screening for Aneuploidy by NGS
Rebekah Zimmerman, Ph.D., Laboratory Director, Clinical Genetics, Foundation for Embryonic Competence

EXOME INTERPRETATION CHALLENGES

Are We There Yet? The Odyssey of Exome Analysis and Interpretation
Avni B. Santani, Ph.D., Director, Genomic Diagnostics, Pathology and Lab Medicine, The Children’s Hospital of Philadelphia

Challenges in Exome Interpretation: Intronic Variants
Rong Mao, M.D., Associate Professor, Pathology, University of Utah; Medical Director, Molecular Genetics and Genomics, ARUP Laboratories

Exome Sequencing: Case Studies of Diagnostic and Ethical Challenges
Lora J. H. Bean, Ph.D., Assistant Professor, Human Genetics, Emory University

ESTABLISHING STANDARDS

Implementing Analytical and Process Standards
Karl V. Voelkerding, M.D., Professor, Pathology, University of Utah; Medical Director for Genomics and Bioinformatics, ARUP Laboratories

Assuring the Quality of Next-Generation Sequencing in Clinical Laboratory Practice
Shashikant Kulkarni, M.S., Ph.D., Professor, Pathology and Immunology; Head of Clinical Genomics, Genomics and Pathology Services; Director, Cytogenomics and Molecular Pathology, Washington University at St. Louis

Sponsored Presentation to be Announced by Genection

PANEL DISCUSSION: GENE PANEL VS. WHOLE EXOME VS. WHOLE GENOME

Panelists:
John Chiang, Ph.D., Director, Casey Eye Institute, Oregon Health & Science University
Avni B. Santani, Ph.D., Director, Genomic Diagnostics, Pathology and Lab Medicine, The Children’s Hospital of Philadelphia
Additional Panelist to be Announced

DETERMINING CLINICAL SIGNIFICANCE AND RETURNING RESULTS

Utility of Implementing Clinical NGS Assays as Standard-of-Care in Oncology
Helen Fernandes, Ph.D., Pathology & Laboratory Medicine, Weill Cornell Medical College

An NGS Inter-Laboratory Study to Assess Performance and QC – Sponsored by Seracare
Andrea Ferreira-Gonzalez, Ph.D., Chair, Molecular Diagnostics Division, Pathology, Virginia Commonwealth University Medical School

This conference is part of the Eighth Annual Next-Generation Dx Summit.


Track Sponsor: SeraCare


For exhibit & sponsorship opportunities, please contact:

Joseph Vacca, M.Sc.
Associate Director, Business Development
Cambridge Healthtech Institute
T: (+1) 781-972-5431
E: jvacca@healthtech.com

Read Full Post »

Roche is developing a high-throughput low cost sequencer for NGS, How NGS Will Revolutionize Reproductive Diagnostics: November Meeting, Boston MA, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Roche is developing a high-throughput low cost sequencer for NGS

Reporter: Stephen J. Williams, PhD

 

Reported from Diagnostic World News

Long-Read Sequencing in the Age of Genomic Medicine

 

 

By Aaron Krol

December 16, 2015 | This September, Pacific Biosciences announced the creation of the Sequel, a DNA sequencer half the cost and seven times as powerful as its previous RS II instrument. PacBio, with its unique long-read sequencing technology, had already secured a place in high-end research labs, producing finished, highly accurate genomes and helping to explore the genetic “dark matter” that other next-generation sequencing (NGS) instruments miss. Now, in partnership with Roche Diagnostics, PacBio is repositioning itself as a company that can serve hospitals as well.

“Pseudogenes, large structural variants, validation, repeat disorders, polymorphic regions of the genome―all those are categories where you practically need PacBio,” says Bobby Sebra, Director of Technology Development at the Icahn School of Medicine at Mount Sinai. “Those are gaps in the system right now for short-read NGS.”

Mount Sinai’s genetic testing lab owns three RS II sequencers, running almost around the clock, and was the first lab to announce it had bought a Sequel just weeks after the new instruments were launched. (It arrived earlier this month and has been successfully tested.) Sebra’s group uses these sequencers to read parts of the genome that, thanks to their structural complexity, can only be assembled from long, continuous DNA reads.

There are a surprising number of these blind spots in the human genome. “HLA is a huge one,” Sebra says, referring to a highly variable region of the genome involved in the immune system. “It impacts everything from immune response, to pharmacogenomics, to transplant medicine. It’s a pretty important and really hard-to-genotype locus.”

Nonetheless, few clinical organizations are studying PacBio or other long-read technologies. PacBio’s instruments, even the Sequel, come with a relatively high price tag, and research on their value in treating patients is still tentative. Mount Sinai’s confidence in the technology is surely at least partly due to the influence of Sebra―an employee of PacBio for five years before coming to New York―and Genetics Department Chair Eric Schadt, at one time PacBio’s Chief Scientific Officer.

Even here, the sequencers typically can’t be used to help treat patients, as the instruments are sold for research use only. Mount Sinai is still working on a limited number of tests to submit as diagnostics to New York State regulators.

Physician Use

Roche Diagnostics, which invested $75 million in the development of the Sequel, wants to change that. The company is planning to release its own, modified version of the instrument in the second half of 2016, specifically for diagnostic use. Roche will initially promote the device for clinical studies, and eventually seek FDA clearance to sell it for routine diagnosis of patients.

In an email to Diagnostics World, Paul Schaffer, Lifecycle Leader for Roche’s sequencing platforms division, wrote that the new device will feature an integrated software pipeline to interpret test results, in support of assays that Roche will design and validate for clinical indications. The instrument will also have at least minor hardware modifications, like near field communication designed to track Roche-branded reagents used during sequencing.

This new version of the Sequel will probably not be the first instrument clinical labs turn to when they decide to start running NGS. Short-read sequencers are sure to outcompete the Roche machine on price, and can offer a pretty useful range of assays, from companion diagnostics in cancer to carrier testing for rare genetic diseases. But Roche can clear away some of the biggest barriers to entry for hospitals that want to pursue long-read sequencing.

Today, institutions like Mount Sinai that use PacBio typically have to write a lot of their own software to interpret the data that comes off the machines. Off-the-shelf analysis, with readable diagnostic reports for doctors, will make it easier for hospitals with less research focus to get on board. To this end, Roche acquired Bina, an NGS analysis company that handles structural variants and other PacBio specialties, in late 2014.

The next question will be whether Roche can design a suite of tests that clinical labs will want to run. Long-read sequencing is beloved by researchers because it can capture nearly complete genomes, finding the correct order and orientation of DNA reads. “The long-read technologies like PacBio’s are going to be, in the future, the showcase that ties it all together,” Sebra says. “You need those long reads as scaffolds to bring it together.”

But that envisions a future in which doctors will want to sequence their patients’ entire genomes. When it comes to specific medical tests, targeting just a small part of the genome connected to disease, Roche will have to content itself with some niche applications where PacBio stands out.

Early Applications

“At this time we are not releasing details regarding the specific assays under development,” Schaffer told Diagnostics World in his email. “However, virology and genetics are a key focus, as they align with other high-priority Roche Diagnostics products.”

Genetic disease is the obvious place to go with any sequencing technology. Rare hereditary disorders are much easier to understand on a genetic level than conditions like diabetes or heart disease; typically, the pathology can be traced back to a single mutation, making it easy to interpret test results.

Some of these mutations are simply intractable for short-read sequencers. A whole class of diseases, the PolyQ disorders and other repeat disorders, develop when a patient has too many copies of a single, repetitive sequence in a gene region. The gene Huntingtin, for example, contains a long stretch of the DNA code CAG; people born with 40 or more CAG repeats in a row will develop Huntington’s disease as they reach early adulthood.

These disorders would be a prime target for Roche’s sequencer. The Sequel’s long reads, spanning thousands of DNA letters at a stretch, can capture the entire repeat region of Huntingtin in a single pass, unlike short-read sequencers, which would tend to produce a garbled mess of CAG reads impossible to count or put in order.
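
As a toy illustration of the read-length point (illustrative code only, not any vendor’s pipeline), the sketch below counts the longest uninterrupted CAG tract in a read: a read spanning the whole HTT repeat region yields the repeat number directly, while a short read falling inside the tract can only give a lower bound. The sequences are made up.

```python
# Count the longest uninterrupted CAG run in a sequencing read.
# With a long read spanning the whole repeat tract, this count is the
# genotype; a short read truncated mid-tract only bounds it from below.
import re

def longest_cag_run(read: str) -> int:
    """Length, in repeat units, of the longest CAG tract in the read."""
    runs = re.findall(r"(?:CAG)+", read.upper())
    return max((len(r) // 3 for r in runs), default=0)

long_read = "TTCGA" + "CAG" * 42 + "GGCTA"   # spans a 42-unit tract
short_read = "CAG" * 33 + "C"                # 100 bp read truncated mid-tract

print(longest_cag_run(long_read))   # 42 -> at or above the ~40-repeat disease threshold
print(longest_cag_run(short_read))  # 33 -> only a lower bound on the true length
```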

Nonetheless, the length of reads is not the only obstacle to understanding these very obstinate diseases. “The entire category of PolyQ disorders, and Fragile X and Huntington’s, is really important,” says Sebra. “But to be frank, they’re the most challenging even with PacBio.” He suggests that, even without venturing into the darkest realms of the genome, a long-read sequencer might actually be useful for diagnosing many of the same genetic diseases routinely covered by other instruments.

That’s because, even when the gene region involved in a disease is well known, there’s rarely only one way for it to go awry. “An example of that is Gaucher’s disease, in a gene called GBA,” Sebra says. “In that gene, there are hundreds of known mutations, some of which you can absolutely genotype using short reads. But others, you would need to phase the entire block to really understand.” Long-read sequencing, which is better at distinguishing maternal from paternal DNA and highlighting complex rearrangements within a gene, can offer a more thorough look at diseases with many genetic permutations, especially when tracking inheritance through a family.
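
The phasing idea can be sketched in a few lines: when a single read spans two heterozygous sites, the alleles it carries must come from the same parental chromosome, so the allele pairs observed across reads reveal the two haplotypes. The gene, sites, and reads below are hypothetical, and real phasing tools handle errors and many more sites far more carefully.

```python
# Toy read-based phasing: each read reports its allele at two heterozygous
# sites; the two most common allele pairings are taken as the haplotypes.
from collections import Counter

# Hypothetical long reads spanning two het sites in a gene such as GBA.
reads = [("A", "T"), ("A", "T"), ("G", "C"), ("A", "T"), ("G", "C"), ("A", "C")]

pairings = Counter(reads)
haplotypes = [hap for hap, _ in pairings.most_common(2)]
noise = sum(n for hap, n in pairings.items() if hap not in haplotypes)

print("inferred haplotypes:", haplotypes)               # [('A', 'T'), ('G', 'C')]
print("reads discordant with both haplotypes:", noise)  # likely errors/chimeras
```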

“You can think of long-read sequencing as a really nice way to supplement some of the inherited panels or carrier screening panels,” Sebra says. “You can also use PacBio to verify variants that are called with short-read sequencing.”

Virology is, perhaps, a more surprising focus for Roche. Diagnosing a viral (or bacterial, or fungal) infection with NGS only requires finding a DNA read unique to a particular species or strain, something short-read sequencers are perfectly capable of.

But Mount Sinai, which has used PacBio in pathogen surveillance projects, has seen advantages to getting the full, completely assembled genomes of the organisms it’s tracking. With bacteria, for instance, key genes that confer resistance to antibiotics might be found either in the native genome, or inside plasmids, small packets of DNA that different species of bacteria freely pass between each other. If your sequencer can assemble these plasmids in one piece, it’s easier to tell when there’s a risk of antibiotic resistance spreading through the hospital, jumping from one infectious species to another.

Viruses don’t share their genetic material so freely, but a similar logic can still apply to viral infections, even in a single person. “A virus is really a mixture of different quasi-species,” says Sebra, so a patient with HIV or influenza likely has a whole constellation of subtly different viruses circulating in their body. A test that assembles whole viral genomes—which, given their tiny size, PacBio can often do in a single read—could give physicians a more comprehensive view of what they’re dealing with, and highlight any quasi-species that affect the course of treatment or how the virus is likely to spread.

The Broader View

These applications are well suited to the diagnostic instrument Roche is building. A test panel for rare genetic diseases can offer clear-cut answers, pointing physicians to any specific variants linked to a disorder, and offering follow-up information on the evidence that backs up that call.

That kind of report fits well into the workflows of smaller hospital labs, and is relatively painless to submit to the FDA for approval. It doesn’t require geneticists to puzzle over ambiguous results. As Schaffer says of his company’s overall NGS efforts, “In the past two years, Roche has been actively engaged in more than 25 partnerships, collaborations and acquisitions with the goal of enabling us to achieve our vision of sample in to results out.”

But some of the biggest ways medicine could benefit from long-read sequencing will continue to require the personal touch of labs like Mount Sinai’s.

Take cancer, for example, a field in which complex gene fusions and genetic rearrangements have been studied for decades. Tumors contain multitudes of cells with unique patchworks of mutations, and while long-read sequencing can pick up structural variants that may play a role in prognosis and treatment, many of these variants are rarely seen, little documented, and hard to boil down into a physician-friendly answer.

An ideal way to unravel a unique cancer case would be to sequence the RNA molecules produced in the tumor, creating an atlas of the “transcriptome” that shows which genes are hyperactive, which are being silenced, and which have been fused together. “When you run something like IsoSeq on PacBio and you can see truly the whole transcriptome, you’re going to figure out all possible fusions, all possible splicing events, and the true atlas of reads,” says Sebra. “Cancer is so diverse that it’s important to do that on an individual level.”

Occasionally, looking at the whole transcriptome, and seeing how a mutation in one gene affects an entire network of related genes, can reveal an unexpected treatment option―repurposing a drug usually reserved for other cancer types. But that takes a level of attention and expertise that is hard to condense into a mass-market assay.

And, Sebra suggests, there’s another reason for medical centers not to lean too heavily on off-the-shelf tests from vendors like Roche.

Devoted as he is to his onetime employer, Sebra is also a fan of other technologies now emerging to capture some of the same long-range, structural information on the genome. “You’ve now got 10X Genomics, BioNano, and Oxford Nanopore,” he says. “Often, any two or even three of those technologies, when you merge them together, can get you a much more comprehensive story, sometimes faster and sometimes cheaper.” At Mount Sinai, for example, combining BioNano and PacBio data has produced a whole human genome much more comprehensive than either platform can achieve on its own.

The same is almost certainly true of complex cases like cancer. Yet, while companies like Roche might succeed in bringing NGS diagnostics to a much larger number of patients, they have few incentives to make their assays work with competing technologies the way a research-heavy institute like Mount Sinai does.

“It actually drives the commercialization of software packages against the ability to integrate the data,” Sebra says.

Still, he’s hopeful that the Sequel can lead the industry to pay more attention to long-read sequencing in the clinic. “The RS II does a great job of long-read sequencing, but the throughput for the Sequel is so much higher that you can start to achieve large genomes faster,” he says. “It makes it more accessible for people who don’t own the RS II to get going.” And while the need for highly specialized genetics labs won’t be falling off anytime soon, most patients don’t have the luxury of being treated in a hospital with the resources of Mount Sinai. NGS companies increasingly see physicians as some of their most important customers, and as our doctors start checking into the health of our genomes, it would be a shame if ubiquitous short-read sequencing left them with blind spots.

Source: http://diagnosticsworldnews.com/2015/12/16/long-read-sequencing-age-genomic-medicine.aspx

 

 

Read Full Post »

How Will FDA’s new precisionFDA Science 2.0 Collaboration Platform Protect Data? Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

How Will FDA’s new precisionFDA Science 2.0 Collaboration Platform Protect Data?

Reporter: Stephen J. Williams, Ph.D.

As reported in MassDevice.com

FDA launches precisionFDA to harness the power of scientific collaboration

FDA Voice, by Taha A. Kass-Hout, M.D., M.S., and Elaine Johanson

Imagine a world where doctors have at their fingertips the information that allows them to individualize a diagnosis, treatment or even a cure for a person based on their genes. That’s what President Obama envisioned when he announced his Precision Medicine Initiative earlier this year. Today, with the launch of FDA’s precisionFDA web platform, we’re a step closer to achieving that vision.

PrecisionFDA is an online, cloud-based portal that will allow scientists from industry, academia, government and other partners to come together to foster innovation and develop the science behind a method of “reading” DNA known as next-generation sequencing (or NGS). Next-generation sequencing allows scientists to compile a vast amount of data on a person’s exact order, or sequence, of DNA. Recognizing that each person’s DNA is slightly different, scientists can look for meaningful differences in DNA that can be used to suggest a person’s risk of disease and possible response to treatment, and to assess their current state of health. Ultimately, what we learn about these differences could be used to design a treatment tailored to a specific individual.

The precisionFDA platform is a part of this larger effort, and through its use we want to help scientists work toward the most accurate and meaningful discoveries. precisionFDA users will have access to a number of important tools to help them do this. These tools include reference genomes, such as “Genome in a Bottle,” a reference sample of DNA for validating human genome sequences developed by the National Institute of Standards and Technology. Users will also be able to compare their results to previously validated reference results, as well as share their results with other users, track changes and obtain feedback.
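
As a rough sketch of what “comparing results to a validated reference” involves: score a user’s variant calls against a truth set such as Genome in a Bottle by counting true positives, false positives, and false negatives. Real benchmarking is haplotype-aware and normalization-sensitive (dedicated tools exist for it); this naive comparison of (chromosome, position, ref, alt) tuples, with invented calls, is only meant to illustrate the idea.

```python
# Naive benchmarking of variant calls against a truth set.
# Each call is a (chrom, pos, ref, alt) tuple; all values are invented.
truth_calls = {
    ("chr1", 101, "A", "G"),
    ("chr1", 205, "T", "C"),
    ("chr2", 330, "G", "GA"),   # an insertion
}
user_calls = {
    ("chr1", 101, "A", "G"),
    ("chr2", 330, "G", "GA"),
    ("chr3", 999, "C", "T"),    # not in the truth set: a false positive
}

tp = len(user_calls & truth_calls)   # correct calls
fp = len(user_calls - truth_calls)   # calls not in the truth set
fn = len(truth_calls - user_calls)   # truth variants that were missed

print(f"TP={tp} FP={fp} FN={fn}")
print(f"precision={tp / (tp + fp):.2f}  recall={tp / (tp + fn):.2f}")
```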

Over the coming months we will engage users in improving the usability, openness and transparency of precisionFDA. One way we’ll achieve that is by placing the code for the precisionFDA portal on the world’s largest open source software repository, GitHub, so the community can further enhance precisionFDA’s features. Through such collaboration we hope to improve the quality and accuracy of genomic tests – work that will ultimately benefit patients.

precisionFDA leverages our experience establishing openFDA, an online community that provides easy access to our public datasets. Since its launch in 2014, openFDA has already resulted in many novel ways to use, integrate and analyze FDA safety information. We’re confident that employing such a collaborative approach to DNA data will yield important advances in our understanding of this fast-growing scientific field, information that will ultimately be used to develop new diagnostics, treatments and even cures for patients.

Taha A. Kass-Hout, M.D., M.S., is FDA’s Chief Health Informatics Officer and Director of FDA’s Office of Health Informatics. Elaine Johanson is the precisionFDA Project Manager.

 

The opinions expressed in this blog post are the author’s only and do not necessarily reflect those of MassDevice.com or its employees.

So What Are the Other Successes With Such Open Science 2.0 Collaborative Networks?

In the following post there are highlighted examples of these Open Scientific Networks and, as long as

  • transparency
  • equal contributions (lack of hierarchy)

exist, these networks can flourish and add interesting discourse. Scientists are already relying on these networks to collaborate and share; however, resistance from certain members of an “elite” can still exist. Social media platforms are now democratizing this new Science 2.0 effort. In addition, the efforts of multiple biocurators (who mainly work for the love of science) have organized the plethora of data (genomic, proteomic, and literature) in order to provide ease of access and analysis.

Science and Curation: The New Practice of Web 2.0

Curation: an Essential Practice to Manage “Open Science”

Web 2.0 gave birth to new practices motivated by the desire for broader and faster cooperation in a freer and more transparent environment. We have entered the era of an “open” movement: “open data”, “open software”, etc. In science, expressions like “open access” (to scientific publications and research results) and “open science” are used more and more often.

Curation and Scientific and Technical Culture: Creating Hybrid Networks

Another area, where there are most likely fewer barriers, is scientific and technical culture. This broad term involves different actors such as associations, companies, universities’ communication departments, CCSTI (French centers for scientific, technical and industrial culture), journalists, etc. A number of these actors do not limit their work to popularizing scientific data; they also consider that they have an authentic mission of “culturing” science. Curation thus gives the information better organization and visibility. The sought-after benefits will be different from one actor to the next.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

  • Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

Given the aforementioned problems of:

  I. the complex and rapid deluge of scientific information,
  II. the need for a collaborative, open environment to produce transformative innovation, and
  III. the need for alternative ways to disseminate scientific findings,

CURATION MAY OFFER SOLUTIONS

  I. Curation exists beyond the review: curation decreases time for assessment of current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation.

  II. Curation provides insights from the WHOLE scientific community on multiple Web 2.0 platforms.

  III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in the examples below).

 

Therefore, a discussion is given below on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor, which has created a need for more context-driven scientific search and discourse.

However, another issue is individual bias, particularly if these networks are closed; protocols need to be devised to reduce bias from individual investigators and clinicians. This is where CONSENSUS built from OPEN ACCESS DISCOURSE would be beneficial, as discussed in the following post:

Risk of Bias in Translational Science

As per the article

Risk of bias in translational medicine may take one of three forms:

  1. a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),
  2. a systematic defect of design that leads to estimates of experimental and control groups, and of effect sizes that substantially deviate from true values (e.g., information bias), and
  3. a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).

This post highlights many important points related to bias, but in summary, methodologies and protocols can be devised to eliminate such bias. Risk of bias can seriously adulterate the internal and the external validity of a clinical study, and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

  • Information dissemination to all stakeholders is key to increase their health literacy in order to ensure their full participation
  • threats to internal and external validity represent specific aspects of systematic errors (i.e., bias) in design, methodology and analysis

So what about the safety and privacy of Data?

A while back I did a post and some interviews on how doctors in developing countries are using social networks to communicate with patients, either over established networks like Facebook or over more private in-house networks. In addition, these doctor-patient relationships in developing countries are often remote, relying on smartphones to reach rural patients who don’t have ready access to their physicians.

Located in the post Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

I discuss some of these problems in the following paragraph and associated posts below:

Mobile Health Applications on Rise in Developing World: Worldwide Opportunity

According to International Telecommunication Union (ITU) statistics, world-wide mobile phone use has expanded tremendously in the past 5 years, reaching almost 6 billion subscriptions. By the end of this year it is estimated that over 95% of the world’s population will have access to mobile phones/devices, including smartphones.

This presents a tremendous and cost-effective opportunity in developing countries, and especially rural areas, for physicians to reach patients using mHealth platforms.

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

In summary, although there are restrictions here in the US governing what information can be disseminated over social media networks, developing countries appear to have less well-defined regulations, as they are more dependent on these types of social networks given the difficulties in patient-physician access.

Therefore the question will be Who Will Protect The Data?

For some interesting discourse please see the following post

Atul Butte Talks on Big Data, Open Data and Clinical Trials

 

Read Full Post »

How NGS Will Revolutionize Reproductive Diagnostics: November Meeting, Boston MA, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)
Reproductive Genetic Dx | Nov. 18-19 | Boston, MA
Reporter: Stephen J. Williams, Ph.D.
Reproductive Genetic Diagnostics
Advances in Carrier Screening, Preimplantation Diagnostics, and POC Testing
November 18-19, 2015  |  Boston, MA
healthtech.com/reproductive-genetic-diagnostics

Mount Sinai Hospital’s Dr. Tanmoy Mukherjee to Present at Reproductive Genetic Diagnostics Conference

Podcast: Numerical Chromosomal Abnormalities after PGS and D&C
Tanmoy Mukherjee, M.D., Assistant Clinical Professor, Obstetrics, Gynecology and Reproductive Science, Mount Sinai Hospital
This review provides an analysis of the most commonly identified numerical chromosome abnormalities following PGS and first-trimester D&C samples in an infertile population utilizing ART. Although monosomies comprised >50% of all cytogenetic anomalies identified following PGS, very few were identified in the post-D&C samples. This suggests that while monosomies occur frequently in the IVF population, they commonly do not implant.

In a CHI podcast, Dr. Mukherjee discusses the current challenges facing reproductive specialists with regard to genetic diagnosis of recurrent pregnancy loss, as well as how NGS is affecting this type of testing > Listen to Podcast

Register  SAVE up to $200, Register by October 9

Learn More  |  Present a Poster  |  Sponsorship & Exhibit Information  |  View Brochure

CONFERENCE-AT-A-GLANCE

ADVANCES IN NGS AND OTHER TECHNOLOGIES

Keynote Presentation: Current and Expanding Indications for Preimplantation Genetic Diagnosis (PGD)
Joe Leigh Simpson, MD, President for Research and Global Programs, March of Dimes Foundation

Next-Generation Sequencing: Its Role in Reproductive Medicine
Brynn Levy, Professor of Pathology & Cell Biology, CUMC; Director, Clinical Cytogenetics Laboratory, Co-Director, Division of Personalized Genomic Medicine, College of Physicians and Surgeons, Columbia University Medical Center, and the New York Presbyterian Hospital

CCS without WGA
Nathan Treff, Director, Molecular Biology Research, Reproductive Medicine Associates of New Jersey, Associate Professor, Department of Obstetrics, Gynecology, and Reproductive Sciences, Rutgers-Robert Wood Johnson Medical School, Adjunct Faculty Member, Department of Genetics, Rutgers-The State University of New Jersey

Concurrent PGD for Single Gene Disorders and Aneuploidy on a Single Trophectoderm Biopsy
Rebekah S. Zimmerman, Ph.D., FACMG, Director, Clinical Genetics, Foundation for Embryonic Competence

Live Birth of Two Healthy Babies with Monogenic Diseases and Chromosome Abnormality Simultaneously Avoided by MALBAC-based Combined PGD and PGS
Xiaoliang Sunney Xie, Ph.D., Mallinckrodt Professor of Chemistry and Chemical Biology, Department of Chemistry and Chemical Biology, Harvard University

Analytical Validation of a Novel NGS-Based Preimplantation Genetic Screening Technology – Sponsored by Good Start Genetics
Mark Umbarger, Ph.D., Director, Research and Development, Good Start Genetics


CLINICAL APPLICATIONS FOR ADVANCED TESTING TECHNOLOGIES

Expanded Carrier Screening for Monogenic Disorders
Peter Benn, Professor, Department of Genetics and Genome Sciences, University of Connecticut Health Center

Oocyte Mitochondrial Function and Testing: Implications for Assisted Reproduction
Emre Seli, MD, Yale School of Medicine

Preventing the Transmission of Mitochondrial Diseases through Germline Genome Editing
Alejandro Ocampo, Ph.D., Research Associate, Gene Expression Laboratory – Belmonte, Salk Institute for Biological Studies

Recovery and Analysis of Single (Fetal) Cells: DEPArray-Based Strategy to Examine CPM and POC – Sponsored by Silicon Biosystems
Farideh Bischoff, Ph.D., Executive Director, Scientific Affairs, Silicon Biosystems, Inc.

> Sponsored Presentation (Opportunities Available)

Numerical Chromosomal Abnormalities after PGS and D&C
Tanmoy Mukherjee, M.D., Assistant Clinical Professor, Obstetrics, Gynecology and Reproductive Science, Mount Sinai Hospital

EMBRYO PREPARATION, ASSESSMENT, AND TREATMENT

Guidelines and Standards for Embryo Preparation: Embryo Culture, Growth and Biopsy Guidelines for Successful Genetic Diagnosis
Michael A. Lee, MS, TS, ELD (ABB), Director, Laboratories, Fertility Solutions

Current Status of Time-Lapse Imaging for Embryo Assessment and Selection in Clinical IVF
Catherine Racowsky, Professor, Department of Obstetrics, Gynecology & Reproductive Biology, Harvard Medical School; Director, IVF Laboratory, Brigham & Women’s Hospital

The Curious Case of Fresh versus Frozen Transfer
Denny Sakkas, Ph.D., Scientific Director, Boston IVF

Why Does IVF Fail? Finding a Single Euploid Embryo is Harder than You Think
Jamie Grifo, M.D., Ph.D., Program Director, New York University Fertility Center; Professor, New York University Langone Medical Center

BEST PRACTICES AND ETHICS

Genetic Counseling Bridges the Gap between Complex Genetic Information and Patient Care
MaryAnn W. Campion, Ed.D., MS, CGC; Director, Master’s Program in Genetic Counseling; Assistant Dean, Graduate Medical Sciences; Assistant Professor, Obstetrics and Gynecology, Boston University School of Medicine

Ethical Issues of Next-Generation Sequencing and Beyond
Eugene Pergament, M.D., Ph.D., FACMG, Professor, Obstetrics and Gynecology, Northwestern University Medical School; Attending, Northwestern Memorial Hospital

Closing Panel: The Future of Reproductive Genetic Diagnostics: Is Reproductive Technology Straining the Seams of Ethics?
Moderator:
Mache Seibel, M.D., Professor, OB/GYN, University of Massachusetts Medical School; Editor, My Menopause Magazine; Author, The Estrogen Window
Panelists:
Rebekah S. Zimmerman, Ph.D., FACMG, Director, Clinical Genetics, Foundation for Embryonic Competence
Denny Sakkas, Ph.D., Scientific Director, Boston IVF
Michael A. Lee, MS, TS, ELD (ABB), Director of Laboratories, Fertility Solutions
Nicholas Collins, MS, CGC, Manager, Reproductive Health Specialists, Counsyl

Arrive Early and Attend Advances in Prenatal Molecular Diagnostics – Register for Both Events and SAVE!

Prenatal Molecular Dx | Nov. 16-18 | Boston, MA

CHI, 250 First Avenue, Suite 300, Needham, MA, 02494, Tel: 781-972-5400 | Fax: 781-972-5425

 

 

Read Full Post »

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

Reporter/Curator: Stephen J. Williams, Ph.D.

 

UPDATED on 9/6/2017

On 9/6/2017, Aviva Lev-Ari, PhD, RN attended a talk by Paul Nioi, PhD, Amgen, at HMS, Harvard BioTechnology Club (GSAS).

Nioi discussed his 2016 paper in NEJM (2016; 374:2131-2141):

Variant ASGR1 Associated with a Reduced Risk of Coronary Artery Disease

Paul Nioi, Ph.D., Asgeir Sigurdsson, B.Sc., Gudmar Thorleifsson, Ph.D., Hannes Helgason, Ph.D., Arna B. Agustsdottir, B.Sc., Gudmundur L. Norddahl, Ph.D., Anna Helgadottir, M.D., Audur Magnusdottir, Ph.D., Aslaug Jonasdottir, M.Sc., Solveig Gretarsdottir, Ph.D., Ingileif Jonsdottir, Ph.D., Valgerdur Steinthorsdottir, Ph.D., Thorunn Rafnar, Ph.D., Dorine W. Swinkels, M.D., Ph.D., Tessel E. Galesloot, Ph.D., Niels Grarup, Ph.D., Torben Jørgensen, D.M.Sc., Henrik Vestergaard, D.M.Sc., Torben Hansen, Ph.D., Torsten Lauritzen, D.M.Sc., Allan Linneberg, Ph.D., Nele Friedrich, Ph.D., Nikolaj T. Krarup, Ph.D., Mogens Fenger, Ph.D., Ulrik Abildgaard, D.M.Sc., Peter R. Hansen, D.M.Sc., Anders M. Galløe, Ph.D., Peter S. Braund, Ph.D., Christopher P. Nelson, Ph.D., Alistair S. Hall, F.R.C.P., Michael J.A. Williams, M.D., Andre M. van Rij, M.D., Gregory T. Jones, Ph.D., Riyaz S. Patel, M.D., Allan I. Levey, M.D., Ph.D., Salim Hayek, M.D., Svati H. Shah, M.D., Muredach Reilly, M.B., B.Ch., Gudmundur I. Eyjolfsson, M.D., Olof Sigurdardottir, M.D., Ph.D., Isleifur Olafsson, M.D., Ph.D., Lambertus A. Kiemeney, Ph.D., Arshed A. Quyyumi, F.R.C.P., Daniel J. Rader, M.D., William E. Kraus, M.D., Nilesh J. Samani, F.R.C.P., Oluf Pedersen, D.M.Sc., Gudmundur Thorgeirsson, M.D., Ph.D., Gisli Masson, Ph.D., Hilma Holm, M.D., Daniel Gudbjartsson, Ph.D., Patrick Sulem, M.D., Unnur Thorsteinsdottir, Ph.D., and Kari Stefansson, M.D., Ph.D.

N Engl J Med 2016; 374:2131-2141. June 2, 2016. DOI: 10.1056/NEJMoa1508419

Abstract

BACKGROUND

Several sequence variants are known to have effects on serum levels of non–high-density lipoprotein (HDL) cholesterol that alter the risk of coronary artery disease.

METHODS

We sequenced the genomes of 2636 Icelanders and found variants that we then imputed into the genomes of approximately 398,000 Icelanders. We tested for association between these imputed variants and non-HDL cholesterol levels in 119,146 samples. We then performed replication testing in two populations of European descent. We assessed the effects of an implicated loss-of-function variant on the risk of coronary artery disease in 42,524 case patients and 249,414 controls from five European ancestry populations. An augmented set of genomes was screened for additional loss-of-function variants in a target gene. We evaluated the effect of an implicated variant on protein stability.

RESULTS

We found a rare noncoding 12-base-pair (bp) deletion (del12) in intron 4 of ASGR1, which encodes a subunit of the asialoglycoprotein receptor, a lectin that plays a role in the homeostasis of circulating glycoproteins. The del12 mutation activates a cryptic splice site, leading to a frameshift mutation and a premature stop codon that renders a truncated protein prone to degradation. Heterozygous carriers of the mutation (1 in 120 persons in our study population) had a lower level of non-HDL cholesterol than noncarriers, a difference of 15.3 mg per deciliter (0.40 mmol per liter) (P=1.0×10−16), and a lower risk of coronary artery disease (by 34%; 95% confidence interval, 21 to 45; P=4.0×10−6). In a larger set of sequenced samples from Icelanders, we found another loss-of-function ASGR1 variant (p.W158X, carried by 1 in 1850 persons) that was also associated with lower levels of non-HDL cholesterol (P=1.8×10−3).

CONCLUSIONS

ASGR1 haploinsufficiency was associated with reduced levels of non-HDL cholesterol and a reduced risk of coronary artery disease. (Funded by the National Institutes of Health and others.)
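
To see the shape of the case-control arithmetic behind a result like this, here is a sketch using counts simulated to roughly echo the reported figures (a carrier frequency of about 1 in 120 and roughly 34% lower risk in carriers). These are not the study’s data, and the actual analysis rested on imputation into roughly 398,000 genomes and substantially more sophisticated models.

```python
# Simulated case-control association test for a rare protective variant.
# Counts are invented to roughly mimic the reported ASGR1 effect size.
from scipy.stats import fisher_exact

cases, controls = 42_524, 249_414   # CAD cases and controls, as reported
carrier_freq = 1 / 120              # approximate carrier frequency
protective_or = 0.66                # ~34% lower odds of CAD in carriers

carrier_cases = round(cases * carrier_freq * protective_or)
carrier_controls = round(controls * carrier_freq)

table = [
    [carrier_cases, cases - carrier_cases],
    [carrier_controls, controls - carrier_controls],
]
odds_ratio, p_value = fisher_exact(table)
print(f"estimated OR = {odds_ratio:.2f}, P = {p_value:.1e}")  # OR near 0.66
```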

 

Amgen’s deCODE Genetics Publishes Largest Human Genome Population Study to Date

Mark Terry, BioSpace.com Breaking News Staff, reported on the results of one of the largest genome sequencing efforts to date: the sequencing of the genomes of 2,636 people from Iceland by deCODE Genetics, Inc., a division of Thousand Oaks, Calif.-based Amgen (AMGN).

Amgen had bought deCODE genetics Inc. in 2012, saving the company from bankruptcy.

There were a total of four studies, published on March 25, 2015 in the online version of Nature Genetics: “Large-scale whole-genome sequencing of the Icelandic population” [1], “Identification of a large set of rare complete human knockouts” [2], “The Y-chromosome point mutation rate in humans” [3] and “Loss-of-function variants in ABCA7 confer risk of Alzheimer’s disease” [4].

The project identified new genetic variants that increase the risk of Alzheimer’s disease and confirmed variants known to increase the risk of diabetes and atrial fibrillation. A more in-depth post will curate these findings, but one interesting observation was the discrete geographic distribution of certain rare variants around Iceland. The dataset offers a treasure trove of meaningful genetic information, not only about the Icelandic population, but also numerous new targets for breast and ovarian cancer as well as Alzheimer’s disease.

View Mark Terry’s article here on Biospace.com.

“This work is a demonstration of the unique power sequencing gives us for learning more about the history of our species,” said Kari Stefansson, founder and chief executive officer of deCode and one of the lead authors in a statement, “and for contributing to new means of diagnosing, treating and preventing disease.”

The scale and ambition of the study are impressive, but perhaps more important, the research identified a new genetic variant that increases the risk of Alzheimer’s disease and had already identified an APP variant that is associated with decreased risk of Alzheimer’s disease. It also confirmed variants that increase the risk of diabetes and a variant that results in atrial fibrillation.
The database of human genetic variation (dbSNP) contains over 50 million unique sequence variants, yet this represents only a small proportion of the single nucleotide variants thought to exist. These “private” or rare variants undoubtedly contribute to important phenotypes, such as disease susceptibility. Non-SNV variants, like indels and structural variants, are also under-represented in public databases. The only way to fully elucidate the genetic basis of a trait is to consider all of these types of variants, and the only way to find them is by large-scale sequencing.

Curation of Population Genomic Sequencing Programs/Corporate Partnerships

Click on “Curation of genomic studies” below for full Table

Curation of genomic studies
Icelandic Genome Project
Partners: deCODE/Amgen
Population: Icelandic
Enrolled: 2,636
Disease areas: Variants related to Alzheimer’s, cardiovascular disease, diabetes
Analysis: WES + EMR; blood samples

Genome Sequencing Study
Partners: Geisinger Health System/Regeneron
Population: Northeast PA, USA
Enrolled: 100,000
Disease areas: Variants related to hypercholesterolemia, autism, obesity, other diseases
Analysis: WES + EMR + MyCode; blood samples

The 100,000 Genomes Project
Partners: National Health Service/NHS Genome Centers and 10 companies forming the Gene Consortium, including AbbVie, Alexion, AstraZeneca, Biogen, Dimension, GSK, Helomics, Roche, Takeda, UCB
Population: Rare-disorders population, UK
Enrolled: Starting to recruit 100,000
Disease areas: Initially rare diseases, cancer, infectious diseases
Analysis: WES of blood, saliva and tissue samples (ref paper)

Saudi Human Genome Program
Partners: 7 centers across Saudi Arabia in conjunction with King Abdulaziz City for Science & Technology, King Faisal Hospital & Research Centre/Life Technologies
Population: General population, Saudi Arabia
Enrolled: 20,000 genomes over three years
Disease areas: First focus on rare, severe, early-onset diseases: diabetes, deafness, cardiovascular, skeletal deformation
Analysis: Whole genome sequencing of blood samples + EMR

Genome of the Netherlands (GoNL)
Partners: Consortium of UMCG, LUMC, Erasmus MC, VU University and UMCU; samples contributed by LifeLines, The Leiden Longevity Study, The Netherlands Twin Registry (NTR), The Rotterdam studies, and The Genetic Research in Isolated Populations program; all sequencing by BGI Hong Kong
Population: Families in the Netherlands
Enrolled: 769
Disease areas: Variants (SNVs, indels, deletions) from apparently healthy individuals and family trios
Analysis: Whole genome NGS of whole blood; no EMR (ref paper in Nat. Genetics; ref paper describing project)

Faroese FarGen project
Partners: Privately funded, Faroe Islands
Population: Faroese
Enrolled: 50,000
Disease areas: Small population allows for family analysis
Analysis: Combine NGS with EMR and genealogy reports

Personal Genome Project Canada
Partners: $4,000 fee from participants; collaboration with the University of Toronto and the SickKids organization; technical assistance from Harvard
Population: Canadian
Enrolled: Goal of 100,000; just started
Disease areas: No defined analysis goals yet
Analysis: Whole exome and medical records

Singapore Sequencing Malay Project (SSMP)
Partners: Singapore Genome Variation Project; Singapore Pharmacogenomics Project
Population: Malay
Enrolled: 100 healthy Malays from the Singapore Population Health Study
Disease areas: Variant analysis
Analysis: Deep whole genome sequencing

GenomeDenmark
Partners: Four Danish universities (KU, AU, DTU and AAU), two hospitals (Herlev and Vendsyssel) and two private firms (Bavarian Nordic and BGI-Europe)
Enrolled: 150 complete genomes; first 30 published in Nature Communications
Analysis: See link

Neuromics Consortium
Partners: University of Tübingen and 18 academic and industrial partners (see link for description)
Population: European and Australian
Enrolled: 1,100 patients with neurodegenerative and neuromuscular disease
Disease areas: Moved from SNP to whole-exome analysis
Analysis: Whole exome, RNA-Seq

References

  1. Gudbjartsson DF, Helgason H, Gudjonsson SA, Zink F, Oddson A, Gylfason A, Besenbacher S, Magnusson G, Halldorsson BV, Hjartarson E et al: Large-scale whole-genome sequencing of the Icelandic population. Nature genetics 2015, advance online publication.
  2. Sulem P, Helgason H, Oddson A, Stefansson H, Gudjonsson SA, Zink F, Hjartarson E, Sigurdsson GT, Jonasdottir A, Jonasdottir A et al: Identification of a large set of rare complete human knockouts. Nature genetics 2015, advance online publication.
  3. Helgason A, Einarsson AW, Gumundsdottir VB, Sigursson A, Gunnarsdottir ED, Jagadeesan A, Ebenesersdottir SS, Kong A, Stefansson K: The Y-chromosome point mutation rate in humans. Nature genetics 2015, advance online publication.
  4. Steinberg S, Stefansson H, Jonsson T, Johannsdottir H, Ingason A, Helgason H, Sulem P, Magnusson OT, Gudjonsson SA, Unnsteinsdottir U et al: Loss-of-function variants in ABCA7 confer risk of Alzheimer’s disease. Nature genetics 2015, advance online publication.

Other posts related to deCODE, population genomics, and NGS on this site include:

Illumina Says 228,000 Human Genomes Will Be Sequenced in 2014

CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics

CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics and Computational Genomics – Part IIB

Human genome: UK to become world number 1 in DNA testing

Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging

Genomic Promise for Neurodegenerative Diseases, Dementias, Autism Spectrum, Schizophrenia, and Serious Depression

Sequencing the exomes of 1,100 patients with neurodegenerative and neuromuscular diseases: A consortium of 18 European and Australian institutions

University of California Santa Cruz’s Genomics Institute will create a Map of Human Genetic Variations

Three Ancestral Populations Contributed to Modern-day Europeans: Ancient Genome Analysis

Impact of evolutionary selection on functional regions: The imprint of evolutionary selection on ENCODE regulatory elements is manifested between species and within human populations

Read Full Post »
