
Archive for the ‘Biological Networks, Gene Regulation and Evolution’ Category

Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriages and in vitro fertilization failures. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual variants in a mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates can vary dramatically among individuals of the same age.

Researchers described a technique combining genomic sequencing with machine learning to predict the likelihood that a woman will suffer a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists examined genetic samples from patients using whole exome sequencing, which allowed them to home in on the protein-coding sections of the vast human genome. They then created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following explicit instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.

As a result, the scientists were able to create a specific risk score based on a woman’s genome. They also identified three genes—MCM5, FGGY and DDX60L—that, when mutated, are highly associated with a risk of producing aneuploid eggs. The report thus demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies, and identifying genetic variants with more predictive power will provide women and their treating clinicians with better information.
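To make the approach concrete, here is a minimal sketch of this kind of risk-scoring pipeline in Python. Everything in it is illustrative: the per-gene variant-burden features, the simulated data, and the gradient-boosting model are stand-ins, not the published method.

```python
# Hypothetical sketch of an exome-based aneuploidy risk score.
# Feature encoding, data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy feature matrix: one row per patient, one column per candidate gene,
# encoding the count of rare damaging variants found by whole exome sequencing.
genes = ["MCM5", "FGGY", "DDX60L", "GENE4", "GENE5"]   # last two are placeholders
X = rng.poisson(0.3, size=(200, len(genes)))           # simulated variant burdens
# Simulated label: high egg-aneuploidy risk driven by burden in the first 3 genes.
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 200) > 1).astype(int)

model = GradientBoostingClassifier(random_state=0)
print("CV AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

model.fit(X, y)
risk_score = model.predict_proba(X[:1])[0, 1]          # per-patient risk score
print(f"Predicted aneuploidy risk for patient 0: {risk_score:.2f}")
```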

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/

Read Full Post »

The Human Genome Gets Fully Sequenced: A Simplistic Take on a Century-Long Effort

 

Curator: Stephen J. Williams, PhD

Ever since Rosalind Franklin’s painstaking work to deduce the structure of DNA and the complementary work of Francis Crick and James Watson, who modeled the basic building blocks of DNA, DNA has been considered the basic unit of heredity and life, with the “Central Dogma” (DNA to RNA to protein) at its core.  These mid-twentieth-century discoveries helped drive a transformational shift in biological experimentation: from protein isolation and characterization, to cloning protein-encoding genes, to characterizing how genes are expressed temporally, spatially, and contextually.

Rosalind Franklin, whose crystallographic data led to the determination of DNA structure. Shown on a 1953 Time cover as Time Person of the Year.

Dr. Francis Crick and Dr. James Watson in front of their model of the DNA structure

Up to this point (the 1970s to mid-1980s), genetic information was thought to be rather static, and the goal was still to understand and characterize protein structure and function; an understanding of the underlying genetic information mattered mainly for efforts like linkage analysis of genetic defects and as a tool for the rapidly developing field of molecular biology.  But the development of the aforementioned molecular biology tools—DNA cloning, sequencing, and synthesis—gave scientists the idea that a complete recording of the human genome might be possible and worth the effort.

How the Human Genome Project Expanded our View of Genes, Genetic Material and Biological Processes

From the Human Genome Project Information Archive

Source:  https://web.ornl.gov/sci/techresources/Human_Genome/project/hgp.shtml

History of the Human Genome Project

The Human Genome Project (HGP) refers to the international 13-year effort, formally begun in October 1990 and completed in 2003, to discover all the estimated 20,000-25,000 human genes and make them accessible for further biological study. Another project goal was to determine the complete sequence of the 3 billion DNA subunits (bases) in the human genome. As part of the HGP, parallel studies were carried out on selected model organisms such as the bacterium E. coli and the mouse to help develop the technology and interpret human gene function. The DOE Human Genome Program and the NIH National Human Genome Research Institute (NHGRI) together sponsored the U.S. Human Genome Project.

 

Please see the following for goals, timelines, and funding for this project

 

History of the Project

It is interesting to note that multiple pieces of government legislation are credited with funding this massive project, including:

Project Enabling Legislation

  • The Atomic Energy Act of 1946 (P.L. 79-585) provided the initial charter for a comprehensive program of research and development related to the utilization of fissionable and radioactive materials for medical, biological, and health purposes.
  • The Atomic Energy Act of 1954 (P.L. 83-706) further authorized the AEC “to conduct research on the biologic effects of ionizing radiation.”
  • The Energy Reorganization Act of 1974 (P.L. 93-438) provided that responsibilities of the Energy Research and Development Administration (ERDA) shall include “engaging in and supporting environmental, biomedical, physical, and safety research related to the development of energy resources and utilization technologies.”
  • The Federal Non-nuclear Energy Research and Development Act of 1974 (P.L. 93-577) authorized ERDA to conduct a comprehensive non-nuclear energy research, development, and demonstration program to include the environmental and social consequences of the various technologies.
  • The DOE Organization Act of 1977 (P.L. 95-91) mandated the Department “to assure incorporation of national environmental protection goals in the formulation and implementation of energy programs; and to advance the goal of restoring, protecting, and enhancing environmental quality, and assuring public health and safety,” and to conduct “a comprehensive program of research and development on the environmental effects of energy technology and program.”

It should also be emphasized that the project was funded not just through the NIH but also through the Department of Energy.

Project Sponsors

For a great read on Dr. Craig Venter, with interviews with the scientist, see Dr. Larry Bernstein’s excellent post The Human Genome Project

 

By 2003 we had gained much information about the structure of DNA, genes, exons, and introns, which gave us more insight into the diversity of genetic material, the underlying protein-coding genes, and many of the gene-expression regulatory elements.  However, much uninvestigated material dispersed between genes—then called “junk DNA”—remained, and up to 2003 little was known about the function of this ‘junk DNA’.  In addition there were two other problems:

  • The reference DNA did not represent a single individual: the public assembly was a mosaic from multiple donors, while the competing Celera sequence was largely from Craig Venter, who led that private effort
  • Multiple gaps in the DNA sequence existed, and needed to be filled in

It is important to note that a tremendous diversity of proteins has been revealed by both transcriptomic and proteomic studies.  Although only about 20,000 to 25,000 coding genes exist, the human proteome contains about 600,000 proteoforms (due to alternative splicing, posttranslational modifications, etc.).

This expansion of proteoforms—via alternative splicing into isoforms and gene duplication into paralogs—has been shown to have major effects on, for example, cellular signaling pathways (1).

However, it has just recently been reported that the FULL human genome has been sequenced, complete and verified.  This was the focus of a recent issue of the journal Science.

Source: https://www.science.org/doi/10.1126/science.abj6987

Abstract

Since its initial release in 2000, the human reference genome has covered only the euchromatic fraction of the genome, leaving important heterochromatic regions unfinished. Addressing the remaining 8% of the genome, the Telomere-to-Telomere (T2T) Consortium presents a complete 3.055 billion–base pair sequence of a human genome, T2T-CHM13, that includes gapless assemblies for all chromosomes except Y, corrects errors in the prior references, and introduces nearly 200 million base pairs of sequence containing 1956 gene predictions, 99 of which are predicted to be protein coding. The completed regions include all centromeric satellite arrays, recent segmental duplications, and the short arms of all five acrocentric chromosomes, unlocking these complex regions of the genome to variational and functional studies.

 

The current human reference genome was released by the Genome Reference Consortium (GRC) in 2013 and most recently patched in 2019 (GRCh38.p13) (1). This reference traces its origin to the publicly funded Human Genome Project (2) and has been continually improved over the past two decades. Unlike the competing Celera effort (3) and most modern sequencing projects based on “shotgun” sequence assembly (4), the GRC assembly was constructed from sequenced bacterial artificial chromosomes (BACs) that were ordered and oriented along the human genome by means of radiation hybrid, genetic linkage, and fingerprint maps. However, limitations of BAC cloning led to an underrepresentation of repetitive sequences, and the opportunistic assembly of BACs derived from multiple individuals resulted in a mosaic of haplotypes. As a result, several GRC assembly gaps are unsolvable because of incompatible structural polymorphisms on their flanks, and many other repetitive and polymorphic regions were left unfinished or incorrectly assembled (5).

 

Fig. 1. Summary of the complete T2T-CHM13 human genome assembly.
(A) Ideogram of T2T-CHM13v1.1 assembly features. For each chromosome (chr), the following information is provided from bottom to top: gaps and issues in GRCh38 fixed by CHM13 overlaid with the density of genes exclusive to CHM13 in red; segmental duplications (SDs) (42) and centromeric satellites (CenSat) (30); and CHM13 ancestry predictions (EUR, European; SAS, South Asian; EAS, East Asian; AMR, ad-mixed American). Bottom scale is measured in Mbp. (B and C) Additional (nonsyntenic) bases in the CHM13 assembly relative to GRCh38 per chromosome, with the acrocentrics highlighted in black (B) and by sequence type (C). (Note that the CenSat and SD annotations overlap.) RepMask, RepeatMasker. (D) Total nongap bases in UCSC reference genome releases dating back to September 2000 (hg4) and ending with T2T-CHM13 in 2021. Mt/Y/Ns, mitochondria, chrY, and gaps.

Note in Figure 1D the exponential growth in genetic information.
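As a concrete illustration of the statistic plotted in Figure 1D, the short sketch below counts non-gap (non-N) bases per sequence in a FASTA file. The inline FASTA is a toy stand-in; in practice one would point the function at a real assembly download.

```python
# Sketch: total non-gap (non-N) bases per sequence in an assembly FASTA,
# i.e., the quantity tracked across reference releases in Fig. 1D.
import io

def nongap_bases(handle):
    """Return {sequence_name: count of bases that are not N/n gap fill}."""
    totals, name = {}, None
    for line in handle:
        line = line.strip()
        if line.startswith(">"):
            name = line[1:].split()[0]     # sequence name from the header
            totals[name] = 0
        elif name is not None:
            totals[name] += sum(1 for b in line if b.upper() != "N")
    return totals

# Tiny inline demo; replace with open("<assembly>.fasta") for real data.
demo = io.StringIO(">chr1 demo\nACGTNNNNACGT\n>chr2 demo\nNNACGTACGTNN\n")
counts = nongap_bases(demo)
print(counts)                              # {'chr1': 8, 'chr2': 8}
print("total non-gap bases:", sum(counts.values()))
```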

Also very important is the ability to determine all the paralogs, isoforms, areas of potential epigenetic regulation, gene duplications, and transposable elements that exist within the human genome.

Analyses and resources

A number of companion studies were carried out to characterize the complete sequence of a human genome, including comprehensive analyses of centromeric satellites (30), segmental duplications (42), transcriptional (49) and epigenetic profiles (29), mobile elements (49), and variant calls (25). Up to 99% of the complete CHM13 genome can be confidently mapped with long-read sequencing, opening these regions of the genome to functional and variational analysis (23) (fig. S38 and table S14). We have produced a rich collection of annotations and omics datasets for CHM13—including RNA sequencing (RNA-seq) (30), Iso-seq (21), precision run-on sequencing (PRO-seq) (49), cleavage under targets and release using nuclease (CUT&RUN) (30), and ONT methylation (29) experiments—and have made these datasets available via a centralized University of California, Santa Cruz (UCSC), Assembly Hub genome browser (54).

 

To highlight the utility of these genetic and epigenetic resources mapped to a complete human genome, we provide the example of a segmentally duplicated region of the chromosome 4q subtelomere that is associated with facioscapulohumeral muscular dystrophy (FSHD) (55). This region includes FSHD region gene 1 (FRG1), FSHD region gene 2 (FRG2), and an intervening D4Z4 macrosatellite repeat containing the double homeobox 4 (DUX4) gene that has been implicated in the etiology of FSHD (56). Numerous duplications of this region throughout the genome have complicated past genetic analyses of FSHD.

The T2T-CHM13 assembly reveals 23 paralogs of FRG1 spread across all acrocentric chromosomes as well as chromosomes 9 and 20 (Fig. 5A). This gene appears to have undergone recent amplification in the great apes (57), and approximate locations of FRG1 paralogs were previously identified by FISH (58). However, only nine FRG1 paralogs are found in GRCh38, hampering sequence-based analysis.

Future of the human reference genome

The T2T-CHM13 assembly adds five full chromosome arms and more additional sequence than any genome reference release in the past 20 years (Fig. 1D). This 8% of the genome has not been overlooked because of a lack of importance but rather because of technological limitations. High-accuracy long-read sequencing has finally removed this technological barrier, enabling comprehensive studies of genomic variation across the entire human genome, which we expect to drive future discovery in human genomic health and disease. Such studies will necessarily require a complete and accurate human reference genome.

CHM13 lacks a Y chromosome, and homozygous Y-bearing CHMs are nonviable, so a different sample type will be required to complete this last remaining chromosome. However, given its haploid nature, it should be possible to assemble the Y chromosome from a male sample using the same methods described here and supplement the T2T-CHM13 reference assembly with a Y chromosome as needed.

Extending beyond the human reference genome, large-scale resequencing projects have revealed genomic variation across human populations. Our reanalyses of the 1KGP (25) and SGDP (42) datasets have already shown the advantages of T2T-CHM13, even for short-read analyses. However, these studies give only a glimpse of the extensive structural variation that lies within the most repetitive regions of the genome assembled here. Long-read resequencing studies are now needed to comprehensively survey polymorphic variation and reveal any phenotypic associations within these regions.

Although CHM13 represents a complete human haplotype, it does not capture the full diversity of human genetic variation. To address this bias, the Human Pangenome Reference Consortium (59) has joined with the T2T Consortium to build a collection of high-quality reference haplotypes from a diverse set of samples. Ideally, all genomes could be assembled at the quality achieved here, but automated T2T assembly of diploid genomes presents a difficult challenge that will require continued development. Until this goal is realized, and any human genome can be completely sequenced without error, the T2T-CHM13 assembly represents a more complete, representative, and accurate reference than GRCh38.

 

This paper was the focus of a Time article and the basis for naming the lead authors to Time’s list of the 100 most influential people of the year.

From TIME

The Human Genome Is Finally Fully Sequenced

Source: https://time.com/6163452/human-genome-fully-sequenced/

 

The first human genome was mapped in 2001 as part of the Human Genome Project, but researchers knew it was neither complete nor completely accurate. Now, scientists have produced the most completely sequenced human genome to date, filling in gaps and correcting mistakes in the previous version.

The sequence is the most complete reference genome for any mammal so far. The findings from six new papers describing the genome, which were published in Science, should lead to a deeper understanding of human evolution and potentially reveal new targets for addressing a host of diseases.

A more precise human genome

“The Human Genome Project relied on DNA obtained through blood draws; that was the technology at the time,” says Adam Phillippy, head of genome informatics at the National Institutes of Health’s National Human Genome Research Institute (NHGRI) and senior author of one of the new papers. “The techniques at the time introduced errors and gaps that have persisted all of these years. It’s nice now to fill in those gaps and correct those mistakes.”

“We always knew there were parts missing, but I don’t think any of us appreciated how extensive they were, or how interesting,” says Michael Schatz, professor of computer science and biology at Johns Hopkins University and another senior author of the same paper.

The work is the result of the Telomere to Telomere consortium, which is supported by NHGRI and involves genetic and computational biology experts from dozens of institutes around the world. The group focused on filling in the 8% of the human genome that remained a genetic black hole from the first draft sequence. Since then, geneticists have been trying to add those missing portions bit by bit. The latest group of studies identifies about an entire chromosome’s worth of new sequences, representing 200 million more base pairs (the letters making up the genome) and 1,956 new genes.

 

NOTE: In 2001 many scientists postulated that there were as many as 100,000 protein-coding human genes; however, we now understand there are only about 20,000 to 25,000.  This does not, however, take into account the additional diversity generated by alternative splicing, gene duplications, SNPs, and chromosomal rearrangements.

Scientists were also able to sequence the long stretches of DNA that contain repeated sequences, which genetic experts originally thought were similar to copying errors and dismissed as so-called “junk DNA”. These repeated sequences, however, may play roles in certain human diseases. “Just because a sequence is repetitive doesn’t mean it’s junk,” says Evan Eichler, another senior author. He points out that critical genes are embedded in these repeated regions—genes that contribute to the machinery that creates proteins, genes that dictate how cells divide and split their DNA evenly into their two daughter cells, and human-specific genes that might distinguish the human species from our closest evolutionary relatives, the primates. In one of the papers, for example, researchers found that primates have different numbers of copies of these repeated regions than humans, and that they appear in different parts of the genome.

“These are some of the most important functions that are essential to live, and for making us human,” says Eichler. “Clearly, if you get rid of these genes, you don’t live. That’s not junk to me.”

Deciphering what these repeated sections mean, if anything, and how the sequences of previously unsequenced regions like the centromeres will translate into new therapies or a better understanding of human disease is just starting, says Deanna Church, a vice president at the genome-engineering company Inscripta, who wrote a commentary accompanying the scientific articles. Having the full sequence of a human genome is different from decoding it; she notes that currently, among people with suspected genetic disorders whose genomes are sequenced, only about half can be traced to specific changes in their DNA. That means much of what the human genome does still remains a mystery.

The investigators who led the Telomere-to-Telomere Consortium were named to the Time 100 list of the most influential people of the year.

Michael Schatz, Karen Miga, Evan Eichler, and Adam Phillippy

Illustration by Brian Lutz for Time (Source Photos: Will Kirk—Johns Hopkins University; Nick Gonzales—UC Santa Cruz; Patrick Kehoe; National Human Genome Research Institute)

BY JENNIFER DOUDNA

MAY 23, 2022 6:08 AM EDT

Ever since the draft of the human genome became available in 2001, there has been a nagging question about the genome’s “dark matter”—the parts of the map that were missed the first time through, and what they contained. Now, thanks to Adam Phillippy, Karen Miga, Evan Eichler, Michael Schatz, and the entire Telomere-to-Telomere Consortium (T2T) of scientists that they led, we can see the full map of the human genomic landscape—and there’s much to explore.

In the scientific community, there wasn’t a consensus that mapping these missing parts was necessary. Some in the field felt there was already plenty to do using the data in hand. In addition, overcoming the technical challenges to getting the missing information wasn’t possible until recently. But the more we learn about the genome, the more we understand that every piece of the puzzle is meaningful.

I admire the T2T group’s willingness to grapple with the technical demands of this project and their persistence in expanding the genome map into uncharted territory. The complete human genome sequence is an invaluable resource that may provide new insights into the origin of diseases and how we can treat them. It also offers the most complete look yet at the genetic script underlying the very nature of who we are as human beings.

Doudna is a biochemist and winner of the 2020 Nobel Prize in Chemistry

Source: https://time.com/collection/100-most-influential-people-2022/6177818/evan-eichler-karen-miga-adam-phillippy-michael-schatz/

Other articles on the Human Genome Project and Junk DNA in this Open Access Scientific Journal Include:

 

International Award for Human Genome Project

 

Cracking the Genome – Inside the Race to Unlock Human DNA – quotes in newspapers

 

The Human Genome Project

 

Junk DNA and Breast Cancer

 

A Perspective on Personalized Medicine

Additional References

 

  1. P. Scalia, A. Giordano, C. Martini, S. J. Williams, Isoform- and Paralog-Switching in IR-Signaling: When Diabetes Opens the Gates to Cancer. Biomolecules 10, (Nov 30, 2020).

 

 

Read Full Post »

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility has been treated primarily as a female predicament, but around one-half of infertility cases can be traced to male factors. Clinically, male infertility is typically determined using measures of semen quality recommended by the World Health Organization (WHO). A major limitation, however, is that standard semen analyses are relatively poor predictors of reproductive capacity and success. Despite major advances in understanding the molecular and cellular functions of sperm over the last several decades, semen analysis remains the primary method to assess male fecundity and fertility.

Chronological age is a significant determinant of human fecundity and fertility. The disease burden of infertility is likely to continue to rise as parental age at the time of conception steadily increases. While the emphasis has been on the effects of advanced maternal age on adverse reproductive and offspring health, new evidence suggests that, irrespective of maternal age, higher paternal age contributes to longer time-to-conception, poor pregnancy outcomes, and adverse health of the offspring in later life. The effect of chronological age on the genomic landscape of DNA methylation is profound and likely occurs through the accumulation of maintenance errors of DNA methylation over the lifespan, a phenomenon originally described as epigenetic drift.

In recent years, the strong relation between age and DNA methylation profiles has enabled the development of statistical models to estimate biological age in most somatic tissue via different epigenetic ‘clock’ metrics, such as DNA methylation age and epigenetic age acceleration, which describe the degree to which predicted biological age deviates from chronological age. In turn, these epigenetic clock metrics have emerged as novel biomarkers of a host of phenotypes such as allergy and asthma in children, early menopause, increased incidence of cancer types and cardiovascular-related diseases, frailty and cognitive decline in adults. They also display good predictive ability for cancer, cardiovascular and all-cause mortality.
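As an illustration of how such clock metrics are computed, the sketch below trains a Horvath-style penalized regression on simulated methylation beta values and derives epigenetic age acceleration as the residual of predicted age on chronological age. The data, model settings, and CpG count are assumptions for demonstration only; published clocks use fixed CpG sets with published weights.

```python
# Hedged sketch of an epigenetic 'clock' and age-acceleration metric.
import numpy as np
from sklearn.linear_model import ElasticNet, LinearRegression

rng = np.random.default_rng(1)
n_samples, n_cpgs = 300, 500
betas = rng.uniform(0, 1, size=(n_samples, n_cpgs))    # methylation beta values
# Simulated chronological age, loosely coupled to the first 10 CpGs.
age = 20 + 40 * betas[:, :10].mean(axis=1) + rng.normal(0, 2, n_samples)

# 'Clock': penalized regression predicting age from CpG methylation.
clock = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(betas, age)
dnam_age = clock.predict(betas)                        # DNA methylation age

# Age acceleration: residual after regressing DNAm age on chronological age.
resid_model = LinearRegression().fit(age.reshape(-1, 1), dnam_age)
age_acceleration = dnam_age - resid_model.predict(age.reshape(-1, 1))
print("mean |age acceleration|:", np.abs(age_acceleration).mean().round(2))
```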

Epigenetic clock metrics are powerful tools to better understand the aging process in somatic tissue, as well as its associations with adverse disease outcomes and mortality. Only a few studies have constructed epigenetic clocks specific to male germ cells, and only one study reported that smokers trended toward an increased epigenetic age compared to non-smokers. These results indicate that sperm epigenetic clocks hold promise as novel biomarkers for reproductive health and/or environmental exposures. However, the relation between sperm epigenetic clocks and reproductive outcomes has not been examined.

There is a critical need for new measures of male fecundity for assessing overall reproductive success among couples in the general population. Data show that sperm epigenetic clocks may fulfill this need as a novel biomarker that predicts pregnancy success among couples not seeking fertility treatment. Such a summary measure of sperm biological age is of clinical importance, as it would allow couples in the general population to understand their probability of achieving pregnancy during natural intercourse, thereby informing and expediting potential infertility treatment decisions. With the ability to customize high-throughput DNA methylation arrays and capture sequencing approaches, the integration of epigenetic clocks into standard clinical care can enhance our understanding of idiopathic infertility and the paternal contribution to reproductive success and offspring health.

References:

https://academic.oup.com/humrep/advance-article/doi/10.1093/humrep/deac084/6583111?login=false

https://pubmed.ncbi.nlm.nih.gov/33317634/

https://clinicalepigeneticsjournal.biomedcentral.com/articles/10.1186/s13148-019-0656-7

https://pubmed.ncbi.nlm.nih.gov/19319879/

https://pubmed.ncbi.nlm.nih.gov/31901222/

https://pubmed.ncbi.nlm.nih.gov/25928123/

Read Full Post »

New studies link cell cycle proteins to immunosurveillance of premalignant cells

Curator: Stephen J. Williams, Ph.D.

The following is from a Perspectives article in the journal Science by Virinder Reen and Jesus Gil called “Clearing Stressed Cells: Cell cycle arrest produces a p21-dependent secretome that initiates immunosurveillance of premalignant cells”. It is a synopsis of the Sturmlechner et al. research article in the same issue (2).

Complex organisms repair stress-induced damage to limit the replication of faulty cells that could drive cancer. When repair is not possible, tissue homeostasis is maintained by the activation of stress response programs such as apoptosis, which eliminates the cells, or senescence, which arrests them (1). Cellular senescence causes the arrest of damaged cells through the induction of cyclin-dependent kinase inhibitors (CDKIs) such as p16 and p21 (2). Senescent cells also produce a bioactive secretome (the senescence-associated secretory phenotype, SASP) that places cells under immunosurveillance, which is key to avoiding the detrimental inflammatory effects caused by lingering senescent cells on surrounding tissues. On page 577 of this issue, Sturmlechner et al. (3) report that induction of p21 not only contributes to the arrest of senescent cells, but is also an early signal that primes stressed cells for immunosurveillance.

Senescence is a complex program that is tightly regulated at the epigenetic and transcriptional levels. For example, exit from the cell cycle is controlled by the induction of p16 and p21, which inhibit phosphorylation of the retinoblastoma protein (RB), a transcriptional regulator and tumor suppressor. Hypophosphorylated RB represses transcription of E2F target genes, which are necessary for cell cycle progression. Conversely, production of the SASP is regulated by a complex program that involves super-enhancer (SE) remodeling and activation of transcriptional regulators such as nuclear factor κB (NF-κB) or CCAAT enhancer binding protein–β (C/EBPβ) (4).

Sturmlechner et al. found that activation of p21 following stress rapidly halted cell cycle progression and triggered an internal biological timer (of ∼4 days in hepatocytes), allowing time to repair and resolve damage (see the figure). In parallel, C-X-C motif chemokine 14 (CXCL14), a component of the p21-activated secretory phenotype (PASP), attracted macrophages to surround and closely surveil these damaged cells. Stressed cells that recovered and normalized p21 expression suspended PASP production and circumvented immunosurveillance. However, if the p21-induced stress was unmanageable, the repair timer expired, and the immune cells transitioned from surveillance to clearance mode. Adjacent macrophages mounted a cytotoxic T lymphocyte response that destroyed damaged cells. Notably, the overexpression of p21 alone was sufficient to orchestrate immune killing of stressed cells, without the need of a senescence phenotype. Overexpression of other CDKIs, such as p16 and p27, did not trigger immunosurveillance, likely because they do not induce CXCL14 expression.

In the context of cancer, senescent cell clearance was first observed following reactivation of the tumor suppressor p53 in liver cancer cells. Restoring p53 signaling induced senescence and triggered the elimination of senescent cells by the innate immune system, prompting tumor regression (5). Subsequent work has revealed that the SASP alerts the immune system to target preneoplastic senescent cells. Hepatocytes expressing the oncogenic mutant NRASG12V (Gly12→Val) become senescent and secrete chemokines and cytokines that trigger CD4+ T cell–mediated clearance (6). Despite the relevance for tumor suppression, relatively little is known about how immunosurveillance of oncogene-induced senescent cells is initiated and controlled.

Source of image: Reen, V. and Gil, J. Clearing Stressed Cells. Science Perspectives 2021;Vol 374(6567) p 534-535.

References

2. Sturmlechner I, Zhang C, Sine CC, van Deursen EJ, Jeganathan KB, Hamada N, Grasic J, Friedman D, Stutchman JT, Can I, Hamada M, Lim DY, Lee JH, Ordog T, Laberge RM, Shapiro V, Baker DJ, Li H, van Deursen JM. p21 produces a bioactive secretome that places stressed cells under immunosurveillance. Science. 2021 Oct 29;374(6567):eabb3420. doi: 10.1126/science.abb3420. Epub 2021 Oct 29. PMID: 34709885.

More Articles on Cancer, Senescence and the Immune System in this Open Access Online Scientific Journal Include

Bispecific and Trispecific Engagers: NK-T Cells and Cancer Therapy

Natural Killer Cell Response: Treatment of Cancer

Issues Need to be Resolved With ImmunoModulatory Therapies: NK cells, mAbs, and adoptive T cells

New insights in cancer, cancer immunogenesis and circulating cancer cells

Insight on Cell Senescence

Immune System Stimulants: Articles of Note @pharmaceuticalintelligence.com

Read Full Post »

UK Biobank Makes 200,000 Whole Genomes Available for Open Access

Reporter: Stephen J. Williams, Ph.D.

The following is a summary of an article by Jocelyn Kaiser, published in the November 26, 2021 issue of the journal Science.

To see the full article please go to https://www.science.org/content/article/200-000-whole-genomes-made-available-biomedical-studies-uk-effort

The UK Biobank (UKBB) this week unveiled to scientists the entire genomes of 200,000 people who are part of a long-term British health study.

The trove of genomes, each linked to anonymized medical information, will allow biomedical scientists to scour the full 3 billion base pairs of human DNA for insights into the interplay of genes and health that could not be gleaned from partial sequences or scans of genome markers. “It is thrilling to see the release of this long-awaited resource,” says Stephen Glatt, a psychiatric geneticist at the State University of New York Upstate Medical University.

Other biobanks have also begun to compile vast numbers of whole genomes, 100,000 or more in some cases (see table, below). But UKBB stands out because it offers easy access to the genomic information, according to some of the more than 20,000 researchers in 90 countries who have signed up to use the data. “In terms of availability and data quality, [UKBB] surpasses all others,” says physician and statistician Omar Yaxmehen Bello-Chavolla of the National Institute for Geriatrics in Mexico City.

Enabling your vision to improve public health

Data drives discovery. We have curated a uniquely powerful biomedical database that can be accessed globally for public health research. Explore data from half a million UK Biobank participants to enable new discoveries to improve public health.

This UK Biobank resource represents genomes collected from 500,000 middle-aged and elderly participants between 2006 and 2010; the genomes are mostly of European descent. Other large-scale genome sequencing ventures have taken a different path: Iceland’s deCODE, which collected over 100,000 genomes, is now a subsidiary of Amgen, and its data sit mostly behind IP protection rather than the open access this database represents.

UK Biobank is a large-scale biomedical database and research resource, containing in-depth genetic and health information from half a million UK participants. The database is regularly augmented with additional data and is globally accessible to approved researchers undertaking vital research into the most common and life-threatening diseases. It is a major contributor to the advancement of modern medicine and treatment and has enabled several scientific discoveries that improve human health.

A summary of some large-scale genome sequencing projects is shown in the table below:

Biobank                               Completed Whole Genomes   Release Information
UK Biobank                            200,000                   300,000 more in early 2023
Trans-Omics for Precision Medicine    161,000                   NIH requires project-specific request
Million Veterans Program              125,000                   Non-Veterans Affairs researchers get first access
100,000 Genomes Project               120,000                   Researchers must join Genomics England collaboration
All of Us                             90,000                    NIH expects to release in 2022

Other Related Articles on Genome Biobank Projects in this Open Access Online Scientific Journal Include the Following:

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

Exome Aggregation Consortium (ExAC), generated the largest catalogue so far of variation in human protein-coding regions: Sequence data of 60,000 people, NOW is a publicly accessible database

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Diversity and Health Disparity Issues Need to be Addressed for GWAS and Precision Medicine Studies

Read Full Post »

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Curator: Stephen J. Williams, PhD

Marc W. Kirschner*

Department of Systems Biology
Harvard Medical School

Boston, Massachusetts 02115

With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different from their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.

As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.

That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.

Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.

High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.

Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.

You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.

Source: “Meaning of Systems Biology” Cell, Vol. 121, 503–504, May 20, 2005, DOI 10.1016/j.cell.2005.05.005

Old High-throughput Screening, Once the Gold Standard in Drug Development, Gets a Systems Biology Facelift

From Phenotypic Hit to Chemical Probe: Chemical Biology Approaches to Elucidate Small Molecule Action in Complex Biological Systems

Quentin T. L. Pasquer, Ioannis A. Tsakoumagkos and Sascha Hoogendoorn 

Molecules 2020, 25(23), 5702; https://doi.org/10.3390/molecules25235702

Abstract

Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.

5.1.5. Large-Scale Proteomics

While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced by both cytotoxic and non-cytotoxic compounds, and that these changes can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
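The core of such an abundance-profiling analysis can be sketched in a few lines: compare protein intensities between drug- and vehicle-treated samples and rank proteins by the evidence for a change. The data below are simulated, and the simple t-test is an illustrative stand-in for the more sophisticated statistics used in large-scale studies.

```python
# Illustrative sketch (not the published pipeline): rank proteins by abundance
# change between drug- and vehicle-treated proteomes to nominate targets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
proteins = [f"P{i:04d}" for i in range(1000)]          # placeholder protein IDs
dmso = rng.normal(20, 1, size=(1000, 3))               # log2 intensities, 3 reps
drug = dmso + rng.normal(0, 0.2, size=(1000, 3))       # mostly unchanged
drug[42] -= 2.0                                        # simulate a depleted target

log2fc = drug.mean(axis=1) - dmso.mean(axis=1)         # effect size per protein
pvals = stats.ttest_ind(drug, dmso, axis=1).pvalue     # per-protein significance

for i in np.argsort(pvals)[:3]:                        # top candidate targets
    print(proteins[i], f"log2FC={log2fc[i]:+.2f}", f"p={pvals[i]:.1e}")
```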

All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.

5.2. Genetic Approaches

Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has been long realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights in the compound’s mode of action.

Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.

5.2.1. Resistance Cloning

The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.

In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a following study, they optimized their pipeline “DrugTargetSeqR” by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
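The counter-screened resistance-cloning logic can be illustrated with a small sketch: genes mutated in multiple independent clones are nominated as targets, while known multidrug-resistance genes are filtered out. The clone contents here are invented for illustration; this is not the DrugTargetSeqR software itself.

```python
# Sketch of DrugTargetSeqR-style candidate nomination, assuming each resistant
# clone yields a set of mutated genes from deep sequencing.
from collections import Counter

clones = {
    "clone1": {"PLK1", "TP53"},
    "clone2": {"PLK1", "KRAS"},
    "clone3": {"ABCB1"},          # efflux-pump mediated, non-specific resistance
    "clone4": {"PLK1"},
}
multidrug_resistance = {"ABCB1"}  # counter-screened out of the candidate list

counts = Counter(g for genes in clones.values()
                 for g in genes if g not in multidrug_resistance)
candidates = [g for g, n in counts.items() if n >= 2]   # recurrently mutated
print("candidate target(s):", candidates)               # -> ['PLK1']
```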

While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutation in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational neoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274 (an anticancer agent with unknown target), and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].

When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].

While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.

5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens

When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ascertain that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (i.e., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
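A hedged sketch of how such interactions might be called from screen counts follows; the log2 fold-change cutoff and pseudocounts are arbitrary illustrative choices, and real analyses use replicate-aware statistical models rather than a single threshold.

```python
# Toy sketch: classify gene-drug interactions from a CRISPR screen by the
# log2 fold-change of sgRNA abundance in drug- vs. vehicle-treated cells.
import math

def interaction(drug_reads, ctrl_reads, cutoff=1.0):
    # Pseudocount of 1 avoids division by zero for dropped-out guides.
    lfc = math.log2((drug_reads + 1) / (ctrl_reads + 1))
    if lfc >= cutoff:
        return "suppressor (knockout confers resistance)"
    if lfc <= -cutoff:
        return "synergistic (knockout sensitizes)"
    return "no interaction"

# Hypothetical read counts for two genes at a sublethal drug dose.
print("GENE_A:", interaction(drug_reads=800, ctrl_reads=100))
print("GENE_B:", interaction(drug_reads=40, ctrl_reads=500))
```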

An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high complexity, genome-wide shRNA library, with STF-118804 (4 rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].

The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].

Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haplo-insufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast, performed in strains carrying heterozygous deletions that leave roughly 50% of normal protein levels [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, leaving residual protein levels. Indeed, NAMPT was found to be the target of the phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound's direct target [124]. This approach was confirmed in another study, thereby showing that direct target identification through CRISPR-knockout screens is indeed possible [126].

An alternative strategy was employed by the Weissman lab, which combined genome-wide CRISPR-interference and CRISPR-activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits with opposite behavior in the two screens (sensitizing in one but protective in the other), which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, reasoning that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the genome-wide rigosertib CRISPRi screen and compared the focused screen results across the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
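
The profile-matching logic can be illustrated with a minimal sketch: compounds sharing a target should show correlated drug-gene interaction scores across a focused sgRNA library. The profiles below are random stand-ins generated so that two "compounds" share a common signal; compound names are borrowed from the study purely as labels.

```python
# Toy clustering of chemical-genetic profiles by correlation distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
profile_len = 50
tubulin_like = rng.normal(size=profile_len)   # shared "tubulin" signature

profiles = {
    "rigosertib": tubulin_like + rng.normal(scale=0.3, size=profile_len),
    "ABT-751":    tubulin_like + rng.normal(scale=0.3, size=profile_len),
    "unrelated":  rng.normal(size=profile_len),
}

names = list(profiles)
X = np.array([profiles[n] for n in names])
# Correlation distance: 1 - Pearson r between interaction profiles.
dist = pdist(X, metric="correlation")
tree = linkage(dist, method="average")
dendrogram(tree, labels=names, no_plot=True)  # inspect `tree` or plot it
print(np.corrcoef(X))  # the two tubulin-like profiles correlate strongly
```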

From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.

SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY

Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence

Youngjun Park, Dominik Heider and Anne-Christin Hauschild. Cancers 2021, 13(13), 3148; https://doi.org/10.3390/cancers13133148

Abstract

The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.

1. Introduction

The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and a systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.

Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches have led to an accumulation of large-scale NGS datasets used to address various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research for more than a decade. Additionally, there are independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques have been applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)

Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).

Therefore, a large number of studies integrate NGS data with machine learning and propose novel data-driven methodologies in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied to large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16].

2. Systems Biology in Cancer Research

Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes have been investigated, and, based on these, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand the heterogeneous characteristics of tumors and cancer molecular subtyping. Owing to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].

In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.

2.1. Biological Network Analysis for Biomarker Validation

The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Therefore, systems biology offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked with an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the lack-of-coverage issue in pathway data needs to be considered. Because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a significant number of genes when incorporating pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans’. Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. At the bottom line, regardless of whether researchers consider gene-set or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
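
The core statistic behind such over-representation tests (the model underlying services like DAVID or g:Profiler) is the hypergeometric tail. A minimal sketch, with all counts chosen purely for illustration: given N background genes, K pathway members, and n candidate biomarkers of which k fall in the pathway, the enrichment p-value is the probability of observing k or more overlaps by chance.

```python
# Minimal gene-set over-representation test via the hypergeometric tail.
from scipy.stats import hypergeom

N = 20000   # background genes
K = 150     # genes annotated to the pathway
n = 300     # candidate genes from the NGS analysis
k = 12      # candidates that are pathway members

p_value = hypergeom.sf(k - 1, N, K, n)  # P(X >= k)
print(f"enrichment p-value: {p_value:.3e}")
```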

2.2. De Novo Construction of Biological Networks

While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico construction and mining of biological networks contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules with similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction that has led the development of the network biology field [78]; a minimal sketch of this idea follows below. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, the integration of these two types of data remains tricky. Ballouz et al. compared microarray- and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to find disease-specific co-expression gene modules. Thus, various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from the gene co-expression network when various data from different conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or of different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylations, and histone modifications affecting the gene expression system [82,83]. More recently, researchers were able to build networks based on particular experimental setups. For instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
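
The sketch below captures the weighted co-expression idea in the spirit of WGCNA, under simplifying assumptions: gene-gene Pearson correlations are computed across samples and their absolute values are raised to a soft-threshold power to emphasize strong links. Real WGCNA additionally selects the power for scale-free topology and detects modules; the data here are synthetic.

```python
# Sketch of a weighted gene co-expression network (WGCNA-like adjacency).
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes = 100, 8
expr = rng.normal(size=(n_samples, n_genes))
expr[:, 1] = expr[:, 0] + rng.normal(scale=0.2, size=n_samples)  # co-expressed pair

corr = np.corrcoef(expr.T)             # genes x genes Pearson correlation
beta = 6                               # soft-threshold power (assumed)
adjacency = np.abs(corr) ** beta       # weighted network edges in [0, 1]
np.fill_diagonal(adjacency, 0.0)       # remove self-edges

print(adjacency[0, 1])  # strong edge between the engineered gene pair
```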

2.3. Network-Based Machine Learning

A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks were suited to be integrated into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering the values of nodes and edges as well as the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed separately before integration with a network; subsequently, omics data and networks are merged and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of various omics data types, a multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when different machine learning approaches try to integrate two or more data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and integrate the heterogeneous data types there [25,88].

In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.

3. Network-Based Learning in Cancer Research

As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.

3.1. Molecular Characterization with Network Information

Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet. The algorithm uses a heat-diffusion model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms; a toy sketch of such propagation follows below. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by the two genes is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
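
A bare-bones version of network propagation can be written as a random walk with restart: seed scores (e.g., per-gene mutation frequencies) diffuse over a normalized adjacency matrix until convergence, so neighbors of altered genes accumulate elevated scores. The network, seeds, and restart probability below are toy assumptions, not the HotNet implementation.

```python
# Toy network propagation via random walk with restart.
import numpy as np

A = np.array([[0, 1, 1, 0],             # toy undirected gene network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)                    # column-normalized transition matrix

f0 = np.array([1.0, 0.0, 0.0, 0.0])      # seed: gene 0 carries the alteration
alpha = 0.5                              # restart probability (assumed)

f = f0.copy()
for _ in range(100):
    f_next = (1 - alpha) * W @ f + alpha * f0
    if np.abs(f_next - f).max() < 1e-8:  # stop once scores stabilize
        break
    f = f_next

print(f)  # neighbors of the seed gene receive elevated scores
```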

Furthermore, in transcriptome research, network information is used to measure pathway activity and applied to cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A recent benchmark study with pan-cancer data revealed that using network structure can yield better performance [57]. In conclusion, although some data are lost due to the incompleteness of biological networks, their integration improved performance and increased interpretability in many cases.

3.2. Tumor Heterogeneity Study with Network Information

Tumor heterogeneity can originate from two sources, clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto pathways, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]; a toy version of such a test is sketched below. Moreover, the relationship between genes can be essential for an organism. Therefore, models analyzing such alterations integrate network-based analysis [98].
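
A simple statistical reading of mutual exclusivity is a depletion of co-mutated samples relative to chance, which can be probed with a one-sided Fisher's exact test on the 2x2 co-occurrence table. The cohort below is simulated so that gene B is rarely mutated when gene A is; the methods cited above layer network context and more sophisticated statistics on top of this basic idea.

```python
# Toy mutual-exclusivity check between two genes across a patient cohort.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(2)
gene_a = rng.random(200) < 0.3
# Gene B is preferentially mutated when A is not: an exclusive pattern.
gene_b = np.where(gene_a, rng.random(200) < 0.02, rng.random(200) < 0.3)

table = [[np.sum(gene_a & gene_b),  np.sum(gene_a & ~gene_b)],
         [np.sum(~gene_a & gene_b), np.sum(~gene_a & ~gene_b)]]
# "less": test whether co-occurrence is depleted (odds ratio < 1).
odds, p = fisher_exact(table, alternative="less")
print(f"odds ratio={odds:.2f}, one-sided p={p:.3e}")
```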

In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, the detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor and immune reactions. For example, McGrail et al. identified a relationship between the DNA damage response protein and immune cell infiltration in cancer. The analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining gene subnetworks beyond immune cell infiltration by accounting for tumor purity [102].

3.3. Drug Target Identification with Network Information

In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomics and chemical information. The proposed approaches investigated such drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug target and drug response integrating networks with chemical and multi-omic datasets. In a recent survey study by Chen et al., the authors compared 13 computational methods for drug response prediction. It turned out that gene expression profiles are crucial information for drug response prediction [105].

Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models and aim at discovering potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and related drug datasets can be integrated into an effective combination of anticancer therapeutic strategies with non-cancer drug repurposing.

4. Deep Learning in Cancer Research

DNN models have developed rapidly and become increasingly sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models requiring large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for machine learning research in bioinformatics, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been incorporated into many different biological questions [111].

In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1). Therefore, an integration of GEO data with other data requires careful preprocessing. Overall, an increasing number of datasets facilitates the development of current deep learning in bioinformatics research [115].

4.1. Challenges for Deep Learning in Cancer Research

Many studies in biology and medicine used NGS and produced large amounts of data during the past few decades, moving the field into the big-data era. Nevertheless, researchers still face a lack of data, particularly when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116]. It has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, GAN algorithms have been receiving more attention in single-cell transcriptomics because they have been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches aim to cope with limited datasets in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but unrelated datasets and guide the model to solve a specific set of problems [124]. These approaches train models with data of similar characteristics and types but different from the problem set. After pre-training, the model can be fine-tuned with the dataset of interest [125,126]; a minimal sketch of this pre-train/fine-tune workflow follows below. Thus, researchers are trying to introduce few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network model [127] to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes from gene expression profiles [129].
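
The pre-train/fine-tune workflow mentioned above can be sketched compactly in PyTorch. This is an illustrative skeleton, not any of the cited models: a small classifier is pre-trained on a large synthetic "related" cohort, then the encoder is frozen and only the head is fine-tuned on a scarce target cohort. All shapes, data, and hyperparameters are placeholder assumptions.

```python
# Minimal transfer-learning sketch: pre-train, freeze encoder, fine-tune head.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_genes = 500

encoder = nn.Sequential(nn.Linear(n_genes, 64), nn.ReLU())
head = nn.Linear(64, 2)
model = nn.Sequential(encoder, head)

def train(model, params, x, y, epochs=50):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# 1) Pre-train everything on the large auxiliary cohort.
x_big, y_big = torch.randn(1000, n_genes), torch.randint(0, 2, (1000,))
train(model, model.parameters(), x_big, y_big)

# 2) Freeze the encoder; fine-tune only the head on the small cohort.
for p in encoder.parameters():
    p.requires_grad = False
x_small, y_small = torch.randn(40, n_genes), torch.randint(0, 2, (40,))
print("fine-tune loss:", train(model, head.parameters(), x_small, y_small))
```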

Figure 3. (a) In various studies, NGS data are transformed into different forms. The 2-D transformed form is used for the convolution layer. Omics data are transformed into pathway-level features, GO enrichment scores, or functional spectra. (b) DNN applications handle the lack of data in different ways: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and then fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples. A GCN is applied to aggregate neighbor information. (Created with BioRender.com).

4.2. Molecular Characterization with Network and DNN Models

DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier. They implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a); a sketch of this embedding trick follows below. Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer. A predefined number of neighboring gene mutation profiles is the input for the convolution layer, which aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding into a 2-D image, DeepCC transformed gene expression data into functional spectra. The resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
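
The 1-D-to-2-D embedding is simple to sketch: order genes by chromosomal position, reshape the expression vector into an image, and feed it to a convolutional stack. The sizes, layer choices, and 33-class output below are illustrative assumptions, not the published architectures.

```python
# Sketch of embedding a 1-D expression profile as a 2-D image for a CNN.
import torch
import torch.nn as nn

n_genes = 10000                      # assume genes already sorted by position
profile = torch.rand(n_genes)
image = profile[: 100 * 100].reshape(1, 1, 100, 100)  # batch x channel x H x W

conv = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # local "neighborhoods" of genes
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 33),               # e.g., one output per TCGA cancer type
)
print(conv(image).shape)            # torch.Size([1, 33])
```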

Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of a cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model achieved even better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].

4.3. Tumor Heterogeneity with Network and DNN Model

As described in Section 4.1, cancer heterogeneity, e.g., the tumor microenvironment, raises several issues. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ’Scaden’, a DNN model for the investigation of intratumor heterogeneity that deconvolves cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the researchers generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]. However, this information is typically not available. Recently, to tackle this problem, single-cell sequencing-based studies were conducted. Because of technical limitations, large amounts of missing data, noise, and batch effects must be handled in single-cell sequencing data [135]. Thus, various machine learning methods were developed to process single-cell sequencing data. They aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser. At the same time, they embed high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
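
The in silico bulk simulation behind Scaden-style deconvolution is easy to sketch: mix single-cell expression profiles at random proportions so that (pseudo-bulk, fractions) pairs can supervise a model. The signatures, cell types, and noise model below are synthetic stand-ins for real scRNA-seq data.

```python
# Sketch of simulating pseudo-bulk training data from single-cell profiles.
import numpy as np

rng = np.random.default_rng(3)
n_genes, cell_types = 1000, ["T_cell", "B_cell", "tumor"]
# Mean expression signature per cell type (stand-in for real scRNA-seq).
signatures = {ct: rng.gamma(2.0, 1.0, size=n_genes) for ct in cell_types}

def simulate_bulk():
    fractions = rng.dirichlet(np.ones(len(cell_types)))   # random composition
    bulk = sum(f * signatures[ct] for f, ct in zip(fractions, cell_types))
    bulk += rng.normal(scale=0.05, size=n_genes)          # measurement noise
    return bulk, fractions

X, y = zip(*(simulate_bulk() for _ in range(5)))
print(np.array(X).shape, np.round(y[0], 2))  # training pairs for a deconvolver
```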

4.4. Drug Target Identification with Networks and DNN Models

In addition to NGS datasets, large-scale anticancer drug assays enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurposing non-oncology drugs for cancer treatment. This drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders. It used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model validated with an independent drug-disease dataset [15].

The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets and other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can show higher prediction accuracy than single-omics methods. MOLI integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].

4.5. Graph Neural Network Model

In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks use biological networks as the base structure of the learning network itself. For instance, various pathways or interactome information can be integrated as the learning structure of a DNN and aggregated as heterogeneous information. In a GNN study, the convolution process can be performed on the provided network structure of the data. The convolution on a biological network therefore allows the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d). Consequently, this model can aggregate information from far-distant neighbors and thus can outperform other machine learning models [142].
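
A single graph-convolution step can be written out in a few lines of numpy, following the common Kipf-Welling normalization: each gene's representation becomes a degree-normalized average over itself and its network neighbors, followed by a learned linear map and a nonlinearity. The network, feature sizes, and weights below are toy values.

```python
# One graph-convolution step: H_next = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)            # toy gene-gene network
H = np.random.default_rng(4).normal(size=(3, 5))  # per-gene feature vectors
W = np.random.default_rng(5).normal(size=(5, 4))  # learnable weights

A_hat = A + np.eye(3)                      # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric degree normalization

H_next = np.maximum(0, A_norm @ H @ W)     # ReLU over aggregated neighbors
print(H_next.shape)                        # (3 genes, 4 hidden features)
```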

In the context of the gene expression inference problem, the main question is whether a gene's expression level can be explained by aggregating its neighboring genes. A single-gene inference study by Dutil et al. showed that the GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures with RNA-seq data. Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types; informative pathways were selected and used for subtype classification [147]. Furthermore, GNNs are also getting more attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c). Each of the proposed applications specializes in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. The third challenge is integrating heterogeneous network information: drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve this integrative analysis. Lastly, although GNNs use graphs, their stacked layers still make the models hard to interpret [148].

4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge

The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, deep learning is hardly a panacea for all research questions. In the following, we discuss potential limitations of DNN models. In general, DNN models with NGS data face two significant issues: (i) data requirements and (ii) interpretability. Usually, deep learning needs a large amount of training data for reasonable performance, which is more difficult to achieve for biomedical omics data than for, for instance, image data. Today, there are not many NGS datasets that are well curated and annotated for deep learning. This may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes. The highly stacked layers in a deep learning model make it hard to interpret its decision-making rationale. Although methodologies to understand and interpret deep learning models have improved, the ambiguity of DNN decision-making has hindered the transition of deep learning models into translational medicine [149,150].

As described before, biological networks are employed in various computational analyses for cancer research. The studies applying DNNs demonstrated many different approaches to using prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be shown. The LINCS program analyzed data of ’The Connectivity Map (CMap) project’ to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels can be inferred from only about 1000 genes, a list they called ’landmark genes’. Subsequently, Chen et al. started with these 978 landmark genes and tried to predict the expression levels of other genes with DNN models. Integrating public large-scale NGS data showed better performance than a linear regression model. The authors conclude that the performance advantage originates from the DNN’s ability to model non-linear relationships between genes [153].
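
The linear baseline that the DNN improves upon is straightforward to sketch: regress a non-landmark gene's expression on the 978 landmark genes. The data below are simulated with a sparse linear ground truth purely for illustration; in such a setting the linear model does well, and the DNN's advantage appears when the true relationships are non-linear.

```python
# Linear baseline for landmark-gene expression inference (toy data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_samples, n_landmarks = 2000, 978
X = rng.normal(size=(n_samples, n_landmarks))           # landmark expression
true_w = rng.normal(size=n_landmarks) * (rng.random(n_landmarks) < 0.01)
y = X @ true_w + rng.normal(scale=0.1, size=n_samples)  # one non-landmark gene

model = LinearRegression().fit(X[:1600], y[:1600])
print("held-out R^2:", round(model.score(X[1600:], y[1600:]), 3))
```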

Following this study, Beltin et al. extensively investigated various biological networks in the same context of inferring gene expression levels. They set up a simplified representation of gene expression status and tried to solve a binary classification task. To show the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in the study incorporating TCGA and GTEx datasets, the random network model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be seen as a panacea, and a careful evaluation is required for each data set and task. In particular, this result may not represent biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles because they consist of expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.

“However, although recently sophisticated applications of deep learning showed improved accuracy, it does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, a particular research question, the technology, and network data have to be chosen carefully.”

References

1. Janes, K.A.; Yaffe, M.B. Data-driven modelling of signal-transduction networks. Nat. Rev. Mol. Cell Biol. 2006, 7, 820–828.
2. Kreeger, P.K.; Lauffenburger, D.A. Cancer systems biology: A network modeling perspective. Carcinogenesis 2010, 31, 2–8.
3. Vucic, E.A.; Thu, K.L.; Robison, K.; Rybaczyk, L.A.; Chari, R.; Alvarez, C.E.; Lam, W.L. Translating cancer ‘omics’ to improved outcomes. Genome Res. 2012, 22, 188–195.
4. Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
5. Hutter, C.; Zenklusen, J.C. The Cancer Genome Atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
6. Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
7. Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
8. Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
9. Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
10. Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
11. Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
12. Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
13. Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
14. Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
15. Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
16. Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
17. Weinstein, J.N.; Collisson, E.A.; Mills, G.B.; Shaw, K.R.M.; Ozenberger, B.A.; Ellrott, K.; Shmulevich, I.; Sander, C.; Stuart, J.M.; Network, C.G.A.R.; et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat. Genet. 2013, 45, 1113.
18. The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
19. King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
20. Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
21. Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
22. Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
23. Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
24. Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
25. Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
26. Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.

Use of Systems Biology in Anti-Microbial Drug Development

Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965

In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.

Genome Sequences and Proteomic Structural Databases

In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has led to essential genes of the bacteria being revealed and to a better understanding of the genetic diversity in different strains that might lead to a selective advantage (Coll et al., 2018). This will help with our understanding of the mode of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have structures determined experimentally.

Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to a better understanding of the molecular mechanisms involved in drug resistance and to improved selection of potential drug targets.

There is a dearth of information on the structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of the physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the Protein Data Bank (PDB). However, the high sequence similarity in protein-coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models built by single-template modeling have been defined and deposited in the SWISS-MODEL repository (Bienert et al., 2017), in ModBase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and for building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability, and impacts of mutations.

We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae. We first modeled the structure of this hetero-hexameric complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a known drug to treat tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “computational saturation mutagenesis” to identify sites on the protein that are less impacted by mutations; a sketch of this procedure follows below. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.
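
The driver loop of such an analysis is conceptually simple: for every residue, score all 19 substitutions with a stability predictor and keep the worst case per position. The sketch below is illustrative only; `predict_ddg` is a hypothetical stand-in for a structure-based tool such as mCSM, and the sequence is a toy fragment.

```python
# Sketch of a computational saturation mutagenesis driver.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_ddg(position: int, wild_type: str, mutant: str) -> float:
    """Placeholder for a structure-based stability predictor (e.g., mCSM)."""
    # Deterministic pseudo-score in kcal/mol; negative = destabilizing.
    return -(abs(hash((position, wild_type, mutant))) % 300) / 100.0

sequence = "MSKLARS"  # toy stretch of the RNA polymerase beta-subunit
worst_per_position = {}
for pos, wt in enumerate(sequence, start=1):
    ddgs = [predict_ddg(pos, wt, mut) for mut in AMINO_ACIDS if mut != wt]
    worst_per_position[pos] = min(ddgs)  # most destabilizing of 19 mutations

print(worst_per_position)  # feeds a red-to-white color map like Figure 2A
```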


Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the ß-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which grades from red (highly destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the ß-subunit of RNA polymerase, the S437H substitution, resulted in the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and rifampin affinity to the ß-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen-bond interactions. Ring-ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).

Examples of Understanding and Combatting Resistance

The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.
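
A first-pass in silico screen of this kind amounts to looking up each called variant in a catalog of known resistance mutations and flagging the rest for follow-up. The sketch below is illustrative only: the catalog entries (two well-described M. tuberculosis resistance mutations) and the variant records are placeholders, not a validated lookup table.

```python
# Minimal sketch of screening sequencing calls against a resistance catalog.
known_resistance = {
    ("rpoB", "S450L"): "rifampicin",
    ("katG", "S315T"): "isoniazid",
}

# Variant calls as (gene, amino-acid change) tuples from an annotated VCF.
isolate_variants = [("rpoB", "S450L"), ("gyrA", "A90V"), ("katG", "R463L")]

for gene, change in isolate_variants:
    drug = known_resistance.get((gene, change))
    if drug:
        print(f"{gene} {change}: known {drug}-resistance mutation")
    else:
        print(f"{gene} {change}: not in catalog -> prioritize in silico analysis")
```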


Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).

Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:

20th Anniversary and the Evolution of Computational Biology – International Society for Computational Biology

Featuring Computational and Systems Biology Program at Memorial Sloan Kettering Cancer Center, Sloan Kettering Institute (SKI), The Dana Pe’er Lab

Quantum Biology And Computational Medicine

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

An NIH-funded adjuvant improves the efficacy of India’s COVID-19 vaccine.

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Anthony S. Fauci, Director of the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health (NIH), said:

“Ending a global pandemic demands a global response. I am thrilled that a novel vaccine adjuvant developed in the United States with NIAID support is now included in an effective COVID-19 vaccine that is available to individuals in India.”

Adjuvants are components formulated as part of a vaccine to improve immune responses and increase the vaccine’s efficacy. COVAXIN was developed and is manufactured in India, which is currently experiencing a terrible health catastrophe as a result of COVID-19. An adjuvant designed with NIH funding has contributed to the success of the highly effective COVAXIN COVID-19 vaccine, which has been administered to about 25 million individuals in India and internationally.

Alhydroxiquim-II, the adjuvant utilized in COVAXIN, was discovered and validated in the laboratory by the biotech company ViroVax LLC of Lawrence, Kansas, with funding provided solely by the NIAID Adjuvant Development Program. The adjuvant consists of a small molecule uniquely bonded to Alhydrogel, commonly known as alum, the most widely used adjuvant in human vaccines. Alhydroxiquim-II travels to lymph nodes, where it detaches from alum and triggers two cellular receptors, TLR7 and TLR8, which are essential in the immune response to viruses. Alhydroxiquim-II is the first adjuvant to activate TLR7 and TLR8 in an approved vaccine against an infectious disease. Additionally, the alum in Alhydroxiquim-II stimulates the immune system to search for an invading pathogen.

Although molecules that activate TLR receptors strongly stimulate the immune system, the adverse effects of Alhydroxiquim-II are modest. This is because, after COVAXIN is injected, the adjuvant travels directly to nearby lymph nodes, which contain white blood cells crucial for recognizing pathogens and fighting infections. As a result, only a small amount of Alhydroxiquim-II is needed in each vaccine dose, and the adjuvant does not circulate throughout the body, thereby avoiding widespread inflammation and unwanted side effects.

This scanning electron microscope image shows SARS-CoV-2 (round gold particles) emerging from the surface of a cell cultured in the lab. SARS-CoV-2, also known as 2019-nCoV, is the virus that causes COVID-19. Image Source: NIAID

COVAXIN consists of an inactivated form of SARS-CoV-2 that cannot replicate but still stimulates the immune system to produce antibodies against the virus. The NIH stated that COVAXIN is “safe and well tolerated,” citing the results of a phase 2 clinical trial. COVAXIN safety results from a phase 3 trial with 25,800 participants in India will be released later this year. Meanwhile, unpublished interim data from the phase 3 trial indicate that the vaccine is 78% effective against symptomatic disease, 100% effective against severe COVID-19, including hospitalization, and 70% effective against asymptomatic infection with SARS-CoV-2, the virus that causes COVID-19. Two analyses of blood serum from people who had received COVAXIN suggest that the vaccine elicits antibodies that efficiently neutralize the SARS-CoV-2 B.1.1.7 (Alpha) and B.1.617 (Delta) variants (1, 2), first identified in the United Kingdom and India, respectively.

Since 2009, the NIAID Adjuvant Development Program has supported the research of ViroVax's founder and CEO, Sunil David, M.D., Ph.D., whose work has focused on novel compounds that activate innate immune receptors and their application as vaccine adjuvants.

Dr. David's engagement with Bharat Biotech International Ltd. of Hyderabad, which manufactures COVAXIN, began at a 2019 meeting in India organized by the NIAID Office of Global Research under the auspices of NIAID's Indo-US Vaccine Action Program. Five NIAID-funded adjuvant investigators, including Dr. David, two representatives of the NIAID Division of Allergy, Immunology, and Transplantation, and the NIAID India representative visited four leading biotechnology companies to learn about their work and discuss future collaborations. The delegation also attended a consultation in New Delhi, co-organized by NIAID and India's Department of Biotechnology and hosted by the National Institute of Immunology.

Among the scientific collaborations spawned by these efforts was a licensing agreement allowing Bharat Biotech to use Dr. David's Alhydroxiquim-II in its candidate vaccines. During the COVID-19 outbreak, the license was expanded to cover COVAXIN, which has Emergency Use Authorization in India and more than a dozen other countries. COVAXIN was developed by Bharat Biotech in partnership with the Indian Council of Medical Research's National Institute of Virology. The company conducted thorough safety studies of Alhydroxiquim-II and undertook the demanding process of scaling up production of the adjuvant in accordance with Good Manufacturing Practice standards. Bharat Biotech aims to produce 700 million doses of COVAXIN by the end of 2021.

NIAID conducts and supports research at the National Institutes of Health, across the United States, and around the world to better understand the causes of infectious and immune-mediated diseases and to develop better methods of preventing, detecting, and treating these illnesses. The NIAID website contains news releases, fact sheets, and other NIAID-related materials.

Main Source:

https://www.miragenews.com/adjuvant-developed-with-nih-funding-enhances-587090/

References

  1. https://academic.oup.com/cid/advance-article-abstract/doi/10.1093/cid/ciab411/6271524?redirectedFrom=fulltext
  2. https://academic.oup.com/jtm/article/28/4/taab051/6193609

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Comparing COVID-19 Vaccine Schedule Combinations, or “Com-COV” – First-of-its-Kind Study will explore the Impact of using eight different Combinations of Doses and Dosing Intervals for Different COVID-19 Vaccines

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/08/comparing-covid-19-vaccine-schedule-combinations-or-com-cov-first-of-its-kind-study-will-explore-the-impact-of-using-eight-different-combinations-of-doses-and-dosing-intervals-for-diffe/

Thriving Vaccines and Research: Weizmann Institute Coronavirus Research Development

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/04/thriving-vaccines-and-research-weizmann-coronavirus-research-development/

National Public Radio interview with Dr. Anthony Fauci on his optimism on a COVID-19 vaccine by early 2021

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2020/07/19/national-public-radio-interview-with-dr-anthony-fauci-on-his-optimism-on-a-covid-19-vaccine-by-early-2021/

Cryo-EM disclosed how the D614G mutation changes SARS-CoV-2 spike protein structure

Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/04/10/cryo-em-disclosed-how-the-d614g-mutation-changes-sars-cov-2-spike-protein-structure/

Updates on the Oxford, AstraZeneca COVID-19 Vaccine

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2020/06/16/updates-on-the-oxford-astrazeneca-covid-19-vaccine/

Read Full Post »

C.D.C. Reviewing Cases of Heart Problems in Youngsters After Getting Vaccinated and AHA Reassures that Benefits Overwhelm the Risks of Vaccination

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent article in The New York Times by Apoorva Mandavilli reports that the C.D.C. is investigating a small number of cases of teenagers and young adults who may have developed myocarditis after vaccination. The agency has not confirmed whether the condition was caused by the vaccine.

According to the vaccine safety group of the Centers for Disease Control and Prevention, reports of heart problems in youngsters are relatively few, and the group noted that these cases may be unrelated to vaccination. Myocarditis, an inflammation of the heart muscle, can occur after certain infections.

Moreover, the agency has yet to find evidence that the vaccines caused the heart problems. The C.D.C. has posted updated guidance for doctors and clinicians on its website, urging them to be alert to unusual heart-related symptoms among teenage vaccine recipients.

Dr. Celine Gounder, an infectious disease specialist at Bellevue Hospital Center in New York, stated: “It may simply be a coincidence that some people are developing myocarditis after vaccination. It’s more likely for something like that to happen by chance, because so many people are getting vaccinated right now.”

The article reported that the cases appeared mainly in young adults about four days after their second shot of the mRNA vaccines made by Moderna and Pfizer-BioNTech, and were more prevalent in males than in females.

The vaccine safety group stated, “Most cases appear to be mild, and follow-up of cases is ongoing.” The C.D.C. continues to strongly recommend that Americans aged 12 and older get vaccinated against COVID-19.

Dr. Yvonne Maldonado, chair of the American Academy of Pediatrics’ Committee on Infectious Diseases, stated: “We look forward to seeing more data about these cases, so we can better understand if they are related to the vaccine or if they are coincidental. Meanwhile, it’s important for pediatricians and other clinicians to report any health concerns that arise after vaccination.”

Experts affirmed that the rare potential side effect of myocarditis is insignificant compared with the risks of SARS-CoV-2 infection, including the persistent syndrome known as “long Covid.” The article notes that acute Covid itself can cause myocarditis.

According to data collected by the A.A.P., as of the second week of May about 16,000 children had been hospitalized and more than 3.9 million had been infected with the coronavirus. In the United States, about 300 children have died of SARS-CoV-2 infection, making it one of the top 10 causes of death in children since the start of the pandemic.

Dr. Jeremy Faust, an emergency medicine physician at Brigham and Women’s Hospital in Boston, stated, “And that’s in the context of all the mitigation measures taken.”

According to researchers, about 10 to 20 of every 100,000 people in the general population develop myocarditis each year, with symptoms ranging from fatigue and chest pain to arrhythmias and cardiac arrest; some cases are mild and remain undiagnosed.

According to the C.D.C., the number of myocarditis reports after vaccination is currently no higher than would normally be expected in young adults. The article reported that members of the vaccine safety group nevertheless felt the information about emerging myocarditis cases should be communicated to providers.
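As a rough illustration of how such an observed-versus-expected comparison works (all numbers are illustrative, using the 10 to 20 cases per 100,000 person-years background rate quoted above), the expected number of background myocarditis cases in a vaccinated cohort over a short follow-up window can be estimated as follows:

```python
# Back-of-the-envelope estimate of the background (non-vaccine) myocarditis
# cases expected by chance in a vaccinated cohort over a short window.
rate_low, rate_high = 10 / 100_000, 20 / 100_000  # cases per person-year
cohort = 4_500_000   # illustrative: vaccinated 12- to 18-year-olds
window_days = 7      # follow-up window after a dose

person_years = cohort * window_days / 365
print(f"expected background cases in {window_days} days: "
      f"{rate_low * person_years:.0f}-{rate_high * person_years:.0f}")
# -> roughly 9-17 cases would occur by chance alone; observed reports
#    are judged against a baseline like this.
```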

The C.D.C. has not yet specified the ages of the patients in these reports. The Pfizer-BioNTech vaccine has been authorized for people aged 16 and older since December 2020, and the Food and Drug Administration extended the authorization to children aged 12 to 15 at the beginning of this month.

On May 14, the C.D.C. alerted clinicians to a possible link between myocarditis and vaccination. Within three days, the team began reviewing data on myocarditis, including reports filed with the Vaccine Adverse Event Reporting System and others from the Department of Defense.

A report on seven cases has been submitted to the journal Pediatrics for review, and state health departments in Washington, Oregon, and California have notified emergency providers and cardiologists about the potential problem.

In an interview, Dr. Liam Yore, past president of the Washington State chapter of the American College of Emergency Physicians, described the case of a teenager with myocarditis after vaccination. The patient was treated for mild inflammation of the inner lining of the heart and discharged, but later returned for care because of a decrease in the heart’s output. Dr. Yore noted that he had seen worse cases in youngsters with Covid, including a 9-year-old who arrived at the hospital after a cardiac arrest last winter.

He stated, “The relative risk is a lot in favor of getting the vaccine, especially considering how many coronavirus vaccines have been administered.”

In the United States, more than 161 million people have received at least one vaccine dose, of whom about 4.5 million were between 12 and 18 years old.

Benefits Overwhelm Risks of COVID Vaccination, AHA Reassures

In a May 23 statement, the American Heart Association (AHA) and American Stroke Association (ASA) reiterated that the benefits of COVID-19 vaccination enormously outweigh the rare risk of myocarditis. The statement followed the C.D.C. report that the agency is monitoring the Vaccine Adverse Event Reporting System (VAERS) and the Vaccine Safety Datalink (VSD) for myocarditis cases associated with the mRNA vaccines against the coronavirus.

Myocarditis cases in young adults have been observed more often after the second vaccine dose than the first, and more often in males than in females. The CDC’s COVID-19 Vaccine Safety Technical Work Group (VaST) observed that these heart complications occurred within four days of vaccination.

CDC reported that “Within CDC safety monitoring systems, rates of myocarditis reports in the window following COVID-19 vaccination have not differed from expected baseline rates.”

The CDC team stated that “the evidence continues to indicate that the COVID-19 vaccines are nearly 100% effective at preventing death and hospitalization due to COVID-19 infection,” and strongly urged all young adults and children 12 years and older to get vaccinated as soon as possible.

Although analysis of the myocarditis reports related to the coronavirus vaccines is ongoing, the AHA/ASA noted that “myocarditis is typically the result of an actual viral infection, and it is yet to be determined if these cases have any correlation to receiving a COVID-19 vaccine.”

Richard Besser, MD, president and CEO of the Robert Wood Johnson Foundation (RWJF) and former acting director of the CDC, stated on ABC’s Good Morning America, “We’ve lost hundreds of children and there have been thousands who have been hospitalized, thousands who developed an inflammatory syndrome, and one of the pieces of that can be myocarditis.” He added, “Still, from my perspective, the risk of COVID is so much greater than any theoretical risk from the vaccine.”

Common symptoms after COVID-19 vaccination include tiredness, muscle pain, headache, chills, nausea, and fever; the AHA/ASA noted that these “typically appear within 24 to 48 hours and usually pass within 36-48 hours after receiving the vaccine.”

Healthcare providers are advised to be aware of rare adverse events such as myocarditis, low platelet counts, blood clots, and severe inflammation. The agency stated, “Healthcare professionals should strongly consider inquiring about the timing of any recent COVID vaccination among patients presenting with these conditions, as needed, in order to provide appropriate treatment quickly.”

The science leaders of the AHA/ASA, President Mitchell S.V. Elkind, M.D., M.S., FAHA, FAAN; Immediate Past President Robert A. Harrington, M.D., FAHA; President-Elect Donald M. Lloyd-Jones, M.D., Sc.M., FAHA; Chief Science and Medical Officer Mariell Jessup, M.D., FAHA; and Chief Medical Officer for Prevention Eduardo Sanchez, M.D., M.P.H., FAAFP, expressed their views in the following statements:

We strongly urge all adults and children ages 12 and older in the U.S. to receive a COVID vaccine as soon as they can receive it, as recently approved by the U.S. Food and Drug Administration and the CDC. The evidence continues to indicate that the COVID-19 vaccines are nearly 100% effective at preventing death and hospitalization due to COVID-19 infection. According to the CDC as of May 22, 2021, over 283 million doses of COVID-19 vaccines have been administered in the U.S. since December 14, 2020, and more than 129 million Americans are fully vaccinated (i.e., they have received either two doses of the Pfizer-BioNTech or Moderna COVID-19 vaccine, or the single-dose Johnson & Johnson/Janssen COVID-19 vaccine).

We remain confident that the benefits of vaccination far exceed the very small, rare risks. The risks of vaccination are also far smaller than the risks of COVID-19 infection itself, including its potentially fatal consequences and the potential long-term health effects that are still revealing themselves, including myocarditis. The recommendation for vaccination specifically includes people with cardiovascular risk factors such as high blood pressure, obesity and type 2 diabetes, those with heart disease, and heart attack and stroke survivors, because they are at much greater risk of an adverse outcome from the COVID-19 virus than they are from the vaccine.

We commend the CDC’s continual monitoring for adverse events related to the COVID-19 vaccines through VAERS and VSD, and the consistent meetings of ACIP’s VaST Work Group, demonstrating transparent and robust attention to any and all health events possibly related to a COVID-19 vaccine. The few cases of myocarditis that have been reported after COVID-19 vaccination are being investigated. However, myocarditis is usually the result of a viral infection, and it is yet to be determined if these cases have any correlation to receiving a COVID-19 vaccine, especially since the COVID-19 vaccines authorized in the U.S. do not contain any live virus.

We also encourage everyone to keep in touch with their primary care professionals and seek care immediately if they have any of these symptoms in the weeks after receiving the COVID-19 vaccine: chest pain including sudden, sharp, stabbing pains; difficulty breathing/shortness of breath; abnormal heartbeat; severe headache; blurry vision; fainting or loss of consciousness; weakness or sensory changes; confusion or trouble speaking; seizures; unexplained abdominal pain; or new leg pain or swelling.

We will stay up to date with the CDC’s recommendations regarding all potential complications related to COVID-19 vaccines, including myocarditis, pericarditis, cerebral venous sinus thrombosis (CVST) and other blood clotting events, thrombosis with thrombocytopenia syndrome (TTS), and vaccine-induced immune thrombotic thrombocytopenia (VITT).

The American Heart Association recommends all health care professionals be aware of these very rare adverse events that may be related to a COVID-19 vaccine, including myocarditis, blood clots, low platelets, or symptoms of severe inflammation. Health care professionals should strongly consider inquiring about the timing of any recent COVID vaccination among patients presenting with these conditions, as needed, in order to provide appropriate treatment quickly. As detailed in last month’s AHA/ASA statement, all suspected CVST or blood clots associated with the COVID-19 vaccine should be treated initially using non-heparin anticoagulants. Heparin products should not be administered in any dose if TTS/VITT is suspected, until appropriate testing can be done to exclude heparin-induced antibodies. In addition, health care professionals are required to report suspected vaccine-related adverse events to the Vaccine Adverse Event Reporting System, in accordance with federal regulations.

Individuals should refer to their local and state health departments for specific information about when and where they can get vaccinated. We implore everyone ages 12 and older to get vaccinated so we can return to being together, in person – enjoying life with little to no risk of severe COVID-19 infection, hospitalization or death.

We also support the CDC recommendations last week that loosen restrictions on mask wearing and social distancing for people who are fully vaccinated. For those who are unable to be vaccinated, we reiterate the importance of handwashing, social distancing and wearing masks, particularly for people at high risk of infection and/or severe COVID-19. These simple precautions remain crucial to protecting people who are not vaccinated from the virus that causes COVID-19.

Source:

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Thriving Vaccines and Research: Weizmann Institute Coronavirus Research Development

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/04/thriving-vaccines-and-research-weizmann-coronavirus-research-development/

Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/04/19/identification-of-novel-genes-in-human-that-fight-covid-19-infection/

Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur, B.Sc., M.Sc. 

https://pharmaceuticalintelligence.com/2021/04/13/fighting-chaos-with-care/

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Read Full Post »

Thriving Vaccines and Research: Weizmann Institute Coronavirus Research Development

Reporter: Amandeep Kaur, B.Sc., M.Sc.

In early February, Prof. Eran Segal wrote in one of his tweets: “We say with caution, the magic has started.”

The article reported that Prof. Segal’s statement reflected declining COVID-19 cases, severe infections, and hospitalizations as rapid vaccination proceeded throughout Israel. In another tweet, Prof. Segal urged the country to remain cautious, noting that there is still a long way to go and that the search for scientific solutions must continue.

A daylong webinar entitled “COVID-19: The epidemic that rattles the world,” organized by Prof. Gideon Schreiber and Dr. Ron Diskin with support from the Weizmann Coronavirus Response Fund and the Israel Society for Biochemistry and Molecular Biology, was an initiative by the Weizmann Institute to share scientific knowledge about the infection among Israeli institutions and scientists. Invited speakers from the Hebrew University of Jerusalem, Tel Aviv University, the Israel Institute for Biological Research (IIBR), and Kaplan Medical Center addressed the molecular structure and infection biology of the virus, treatments and medications for COVID-19, and the positive and negative effects of the pandemic.

The article reported that, with the emergence of the pandemic, scientists at Weizmann launched more than 60 projects to explore the virus from a wide range of perspectives. Funds raised by communities worldwide for the Weizmann Coronavirus Response Fund supported scientists and investigators in elucidating the chemistry, physics, and biology behind SARS-CoV-2 infection.

Prof. Avi Levy, the coordinator of the Weizmann Institute’s coronavirus research efforts, mentioned “The vaccines are here, and they will drastically reduce infection rates. But the coronavirus can mutate, and there are many similar infectious diseases out there to be dealt with. All of this research is critical to understanding all sorts of viruses and to preempting any future pandemics.”

The following are a few important projects with recent updates reported in the article.

Mapping a hijacker’s methods

Dr. Noam Stern-Ginossar studied how the virus invades healthy cells and hijacks the cell’s systems in order to replicate. The article reported that viruses take over the host’s translation machinery, chiefly the ribosomes, to produce viral proteins. Dr. Stern-Ginossar used an approach known as “ribosome profiling” to create a map of the translational events taking place along the viral genome, which in turn maps the full repertoire of viral proteins produced inside the host.

She and her team joined forces with Weizmann’s de Botton Institute for Protein Profiling and researchers at IIBR to decipher the virus’s hijacking instructions and to develop tools for treatment and therapies. The scientists generated a high-resolution map of the coding regions in the SARS-CoV-2 genome using ribosome-profiling techniques, which allowed them to quantify the expression of key regions along the viral genome that regulate the translation of viral proteins. The study, published in Nature in January, describes the hijacking process and reports that the virus produces more translation instructions, in the form of viral mRNA, than the host and thus dominates the host cell’s translation capacity. The researchers also corrected a misconception: the virus does not force the host cell to translate its viral mRNA more efficiently than the host’s own transcripts; rather, the sheer abundance of viral translation instructions drives the takeover. This study provides valuable insights for the development of effective vaccines and drugs against COVID-19 infection.
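To make the idea of quantifying translation from ribosome-profiling data concrete, here is a simplified sketch, not the study’s actual analysis: it sums ribosome-footprint counts over annotated open reading frames and normalizes by length to compare their relative translation. The footprint counts are invented, and the ORF coordinates are only loosely based on the SARS-CoV-2 genome.

```python
# Toy ribosome footprint counts: genome position -> number of footprint
# 5' ends mapped there (invented numbers).
footprints = {266: 120, 270: 95, 21563: 480, 21570: 510}

# Annotated ORFs as (start, end) genome coordinates, loosely based on
# SARS-CoV-2 ORF1a and spike (S); treat as illustrative only.
orfs = {"ORF1a": (266, 13483), "S": (21563, 25384)}

def translation_density(name):
    """Length-normalized footprint density over an ORF, a simple proxy
    for how heavily the region is being translated."""
    start, end = orfs[name]
    total = sum(n for pos, n in footprints.items() if start <= pos <= end)
    return total / (end - start + 1)  # footprints per nucleotide

for name in orfs:
    print(name, f"{translation_density(name):.5f}")
```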

Like chutzpah, some things don’t translate

Prof. Igor Ulitsky and his team worked on the untranslated regions of the viral genome. Not all parts of the viral transcript are translated into protein; some instead play roles in protein production and infection that are not yet understood, and these regions may affect the molecular environment of the translated zones. The Ulitsky group set out to characterize how the genetic sequences of regions that are not translated into proteins directly or indirectly affect the stability and efficiency of the translated sequences.

The scientists first created a library of about 6,000 untranslated sequence regions to study their functions. In collaboration with Dr. Stern-Ginossar’s lab, Ulitsky’s team focused on the Nsp1 protein and on the mechanism by which these regions affect Nsp1 production, which in turn enhances virulence. After resolving some technical difficulties, the researchers developed an alternative, more robust protocol that includes infecting cells with variants from the initial library. Within a few months, they expect to obtain a more detailed map of how specific untranslated sequences affect the stability of Nsp1 production.

The landscape of elimination

The article reported that the body’s immune surveillance relies on two main players: HLA (human leukocyte antigen) molecules and T cells. HLA molecules are proteins on the cell surface that present peptide fragments from inside the infected cell; T cells recognize these fragments and destroy the infected cell. Samuels’ group asked how the body’s surveillance system recognizes the appropriate virus-derived peptides and destroys the cells displaying them. They isolated and analyzed the “HLA peptidome”, the complete set of peptides bound to HLA proteins, from SARS-CoV-2-infected cells.

Analysis of the infected cells identified 26 class I and 36 class II HLA peptides, presented by HLA types found in 99% of the population worldwide. Two of the class I peptides were abundant on the cell surface, and two others were derived from rare coronavirus proteins, meaning that these specific coronavirus peptides are marked for easy detection. Among the identified peptides, two were novel discoveries and seven others had previously been shown to induce an immune response. These results will help in developing vaccines against new coronavirus variants.

Gearing up ‘chain terminators’ to battle the coronavirus

Prof. Rotem Sorek and his lab discovered a family of bacterial enzymes that produce novel antiviral molecules. These small molecules act as “chain terminators” against viruses invading the bacteria. The study, published in Nature in January, reported that the molecules trigger a chemical reaction that halts viral replication: they are modified nucleotide derivatives that are incorporated during replication of the viral genetic material and block further chain extension.

Prof. Sorek and his group hypothesize that these molecules could serve as antiviral drugs, since chain termination is the mechanism of several antiviral drugs already in clinical use. Yeda Research and Development has licensed the novel molecules to a company to test their antiviral activity against SARS-CoV-2. Such discoveries provide evidence that the bacterial immune system is a rich repository of natural antiviral compounds.

Resolving borderline diagnoses

Real-time polymerase chain reaction (RT-PCR) is currently the standard test, used worldwide, for diagnosing COVID-19. Despite its benefits, RT-PCR has drawbacks: false-negative and false-positive results, and limited ability to detect new mutations in the virus and variants emerging in the population. Prof. Eran Elinav’s lab and Prof. Ido Amit’s lab are collaborating to develop a massively parallel, next-generation sequencing technique that tests more effectively and precisely than RT-PCR. The technique can characterize emerging SARS-CoV-2 mutations; co-occurring viral, bacterial, and fungal infections; and host response patterns.

The scientists identified viral variants and distinctive host signatures that help differentiate infected from uninfected individuals, and patients with mild symptoms from those with severe disease.

At the Hadassah-Hebrew University Medical Center, Profs. Elinav and Amit are running trials of the pipeline to test its accuracy in borderline cases, where RT-PCR gives ambiguous or incorrect results. For proper diagnosis and patient stratification, the researchers have calibrated a severity-prediction matrix. Collectively, they aim to develop a reliable system that resolves borderline RT-PCR cases, identifies virus variants carrying known and new mutations, and uses host data to distinguish patients who need close observation and intensive treatment from those with mild disease who can be managed conservatively.

Moon shot consortium refining drug options

The “Moon shot” consortium was launched almost a year ago with the goal of developing a novel antiviral drug against SARS-CoV-2. It is led by Dr. Nir London of the Department of Chemical and Structural Biology at Weizmann, Prof. Frank von Delft of Oxford University, and the UK’s Diamond Light Source synchrotron facility.

Within a year, the scientists advanced a series of novel molecules from conception to evidence of antiviral activity, gathering support, guidance, expertise, and resources from researchers around the world. The article reported that the researchers built an alternative, fully transparent template for drug discovery that avoids the hindrances of intellectual property and red tape.

The new molecules inhibit a protease, a SARS-CoV-2 protein that plays an important role in viral replication. The team collaborated with the Israel Institute for Biological Research and several other labs across the globe to demonstrate the molecules’ efficacy not only in vitro but also against live virus.

Further research is underway, including assays of the safety and efficacy of these potential drugs in living models. The first mouse trial began in March. In addition, further compounds are being optimized and nominated as candidate drugs for preclinical testing.

Source: https://www.weizmann.ac.il/WeizmannCompass/sections/features/the-vaccines-are-here-and-research-abounds

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc. (ept. 5/2021)

https://pharmaceuticalintelligence.com/2021/04/19/identification-of-novel-genes-in-human-that-fight-covid-19-infection/

Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur, B.Sc., M.Sc. (ept. 5/2021)

https://pharmaceuticalintelligence.com/2021/04/13/fighting-chaos-with-care/

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Mechanistic link between SARS-CoV-2 infection and increased risk of stroke using 3D printed models and human endothelial cells

Reporter: Adina Hazan, PhD

https://pharmaceuticalintelligence.com/2020/12/28/mechanistic-link-between-sars-cov-2-infection-and-increased-risk-of-stroke-using-3d-printed-models-and-human-endothelial-cells/

Read Full Post »

Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc. (ept. 5/2021)

Scientists have identified human genes that fight SARS-CoV-2 infection. Knowing which genes are involved, and how they function, can help control infection and aids understanding of the crucial factors that cause severe disease. These genes are related to interferons, the frontline fighters in the body’s defense system, and they point to options for therapeutic strategies.

The research was published in the journal Molecular Cell.

Sumit K. Chanda, Ph.D., professor and director of the Immunity and Pathogenesis Program at Sanford Burnham Prebys and lead author of the study, reported in the article that the team focused on better understanding the cellular response to SARS-CoV-2, including the downstream mechanisms in cells and the factors that determine a strong or weak response to infection. He explained that the study yielded new insights into how the invading virus exploits human cells, and that the team is still working to find weak points in the virus that could be targeted by new antivirals against SARS-CoV-2.

Early in the pandemic, researchers found that the interferon response to SARS-CoV-2 infection is weak in severe cases of COVID-19. This finding led Chanda and collaborators to search for interferon-stimulated genes (ISGs): human genes that are triggered by interferons and play an important role in limiting COVID-19 infection by controlling viral replication in the host.

The investigators designed laboratory experiments to identify ISGs, building on knowledge gathered during the 2002-2004 outbreak of SARS-CoV-1, a virus similar to the SARS-CoV-2 virus behind the COVID-19 pandemic.

The article reports that Chanda said, “We found that 65 ISGs controlled SARS-CoV-2 infection, including some that inhibited the virus’ ability to enter cells, some that suppressed manufacture of the RNA that is the virus’s lifeblood, and a cluster of genes that inhibited assembly of the virus.” The researchers also found that some of these ISGs control unrelated viruses, such as HIV, West Nile virus, and seasonal influenza.

Laura Martin-Sancho, Ph.D., a senior postdoctoral associate in the Chanda lab and first author of the study, reported that the team identified eight ISGs that blocked replication of both SARS-CoV-1 and SARS-CoV-2 in the subcellular compartments responsible for protein packaging, suggesting that these vulnerable sites could be exploited to restrict infection. The researchers are now investigating whether genetic variability within these ISGs is associated with COVID-19 severity.

The researchers’ next step will be to investigate the biology of evolving SARS-CoV-2 variants that may affect vaccine efficacy. Martin-Sancho mentioned that the lab has already started gathering all available variants for further investigation.

“It’s vitally important that we don’t take our foot off the pedal of basic research efforts now that vaccines are helping control the pandemic,” Chanda stated in the article.

“We’ve come so far so fast because of investment in fundamental research at Sanford Burnham Prebys and elsewhere, and our continued efforts will be especially important when, not if, another viral outbreak occurs,” concluded Chanda.

Source: https://medicalxpress.com/news/2021-04-covid-scientists-human-genes-infection.html

Reference: Laura Martin-Sancho et al. Functional Landscape of SARS-CoV-2 Cellular Restriction, Molecular Cell (2021). DOI: 10.1016/j.molcel.2021.04.008

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur

https://pharmaceuticalintelligence.com/2021/04/13/fighting-chaos-with-care/

Mechanism of Thrombosis with AstraZeneca and J & J Vaccines: Expert Opinion by Kate Chander Chiang & Ajay Gupta, MD

Reporter & Curator: Dr. Ajay Gupta, MD

https://pharmaceuticalintelligence.com/2021/04/14/mechanism-of-thrombosis-with-astrazeneca-and-j-j-vaccines-expert-opinion-by-kate-chander-chiang-ajay-gupta-md/

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Mechanistic link between SARS-CoV-2 infection and increased risk of stroke using 3D printed models and human endothelial cells

Reporter: Adina Hazan, PhD

https://pharmaceuticalintelligence.com/2020/12/28/mechanistic-link-between-sars-cov-2-infection-and-increased-risk-of-stroke-using-3d-printed-models-and-human-endothelial-cells/

Read Full Post »
