Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Bacterial multidrug resistance problem solved by a broad-spectrum synthetic antibiotic
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
There is an increasing demand for new antibiotics that effectively treat patients with refractory bacteremia, do not evoke bacterial resistance, and can be readily modified to address current and anticipated patient needs. Recently, scientists described a promising compound of the conjugated oligoelectrolyte (COE) family, COE2-2hexyl, which exhibited broad-spectrum antibacterial activity. COE2-2hexyl effectively treated mice infected with bacteria derived from sepsis patients with refractory bacteremia, including a carbapenem-resistant Enterobacteriaceae (CRE) K. pneumoniae strain resistant to nearly all clinical antibiotics tested. Notably, this lead compound did not evoke drug resistance in several pathogens tested. COE2-2hexyl has specific effects on multiple membrane-associated functions (e.g., septation, motility, ATP synthesis, respiration, and membrane permeability to small molecules) that may act together to abrogate bacterial cell viability and the evolution of drug resistance. Impeding these bacterial properties may occur through alteration of vital protein–protein or protein–lipid membrane interfaces, a mechanism of action distinct from that of many membrane-disrupting antimicrobials or detergents, which destabilize membranes to induce bacterial cell lysis. The diversity and ease of COE design and chemical synthesis have the potential to establish a new standard for drug design and personalized antibiotic treatment.
Recent studies have shown that small molecules can preferentially target bacterial membranes because of significant differences in lipid composition, the presence of a cell wall, and the absence of cholesterol. The inner membranes of Gram-negative bacteria are generally more negatively charged at their surface because their outer leaflet contains more anionic lipids, such as cardiolipin and phosphatidylglycerol, than mammalian membranes do. In contrast, mammalian cell membranes are largely composed of more neutral phospholipids and sphingomyelins, as well as cholesterol, which affords membrane rigidity and the ability to withstand mechanical stress, and may stabilize the membrane against structural damage from membrane-disrupting agents such as COEs. Consistent with these studies, COE2-2hexyl was well tolerated in mice, suggesting that COEs are not intrinsically toxic in vivo, which is often a primary concern with membrane-targeting antibiotics. The COE refinement workflow can accelerate lead-compound optimization by enabling more rapid screening of novel compounds in the iterative directed-design process. It also reduces the time and cost of subsequent biophysical characterization, medicinal chemistry, and bioassays, ultimately facilitating the discovery of novel compounds with improved pharmacological properties.
Additionally, COEs provide an approach to gain new insights into microbial physiology, including membrane structure/function and mechanisms of drug action and resistance, while also generating a suite of tools that enable the modulation of bacterial and mammalian membranes for scientific or manufacturing uses. Notably, further COE safety and efficacy studies must be conducted on a larger scale to adequately establish clinical benefit and risk before COEs can be added to the therapeutic armamentarium. Despite these limitations, the ease of molecular design and the modular nature of COEs offer many advantages over conventional antimicrobials, making synthesis simple, scalable, and affordable. This enables the construction of a spectrum of compounds with the potential for development as a new, versatile therapy against the emergence and rapid global spread of pathogens that are resistant to all, or nearly all, existing antimicrobial medicines.
The following paper in Cells describes the discovery of protein interactors of endoglin, which is recruited to membranes at the TGF-β receptor complex upon TGF-β signaling. Interestingly, a carbohydrate-binding protein, galectin-3, and an E3 ubiquitin ligase, TRIM21, were found to be unique interactors within this complex.
Gallardo-Vara E, Ruiz-Llorente L, Casado-Vela J, Ruiz-Rodríguez MJ, López-Andrés N, Pattnaik AK, Quintanilla M, Bernabeu C. Endoglin Protein Interactome Profiling Identifies TRIM21 and Galectin-3 as New Binding Partners. Cells. 2019 Sep 13;8(9):1082. doi: 10.3390/cells8091082. PMID: 31540324; PMCID: PMC6769930.
Abstract
Endoglin is a 180-kDa glycoprotein receptor primarily expressed by the vascular endothelium and involved in cardiovascular disease and cancer. Heterozygous mutations in the endoglin gene (ENG) cause hereditary hemorrhagic telangiectasia type 1, a vascular disease that presents with nasal and gastrointestinal bleeding, skin and mucosa telangiectases, and arteriovenous malformations in internal organs. A circulating form of endoglin (alias soluble endoglin, sEng), proteolytically released from the membrane-bound protein, has been observed in several inflammation-related pathological conditions and appears to contribute to endothelial dysfunction and cancer development through unknown mechanisms. Membrane-bound endoglin is an auxiliary component of the TGF-β receptor complex and the extracellular region of endoglin has been shown to interact with types I and II TGF-β receptors, as well as with BMP9 and BMP10 ligands, both members of the TGF-β family. To search for novel protein interactors, we screened a microarray containing over 9000 unique human proteins using recombinant sEng as bait. We find that sEng binds with high affinity, at least, to 22 new proteins. Among these, we validated the interaction of endoglin with galectin-3, a secreted member of the lectin family with capacity to bind membrane glycoproteins, and with tripartite motif-containing protein 21 (TRIM21), an E3 ubiquitin-protein ligase. Using human endothelial cells and Chinese hamster ovary cells, we showed that endoglin co-immunoprecipitates and co-localizes with galectin-3 or TRIM21. These results open new research avenues on endoglin function and regulation.
Endoglin is an auxiliary TGF-β co-receptor predominantly expressed in endothelial cells, which is involved in vascular development, repair, homeostasis, and disease [1,2,3,4]. Heterozygous mutations in the human ENDOGLIN gene (ENG) cause hereditary hemorrhagic telangiectasia (HHT) type 1, a vascular disease associated with nasal and gastrointestinal bleeds, telangiectases on skin and mucosa and arteriovenous malformations in the lung, liver, and brain [4,5,6]. The key role of endoglin in the vasculature is also illustrated by the fact that endoglin-KO mice die in utero due to defects in the vascular system [7]. Endoglin expression is markedly upregulated in proliferating endothelial cells involved in active angiogenesis, including the solid tumor neovasculature [8,9]. For this reason, endoglin has become a promising target for the antiangiogenic treatment of cancer [10,11,12]. Endoglin is also expressed in cancer cells where it can behave as both a tumor suppressor in prostate, breast, esophageal, and skin carcinomas [13,14,15,16] and a promoter of malignancy in melanoma and Ewing’s sarcoma [17]. Ectodomain shedding of membrane-bound endoglin may lead to a circulating form of the protein, also known as soluble endoglin (sEng) [18,19,20]. Increased levels of sEng have been found in several vascular-related pathologies, including preeclampsia, a disease of high prevalence in pregnant women which, if left untreated, can lead to serious and even fatal complications for both mother and baby [2,18,19,21]. Interestingly, several lines of evidence support a pathogenic role of sEng in the vascular system, including endothelial dysfunction, antiangiogenic activity, increased vascular permeability, inflammation-associated leukocyte adhesion and transmigration, and hypertension [18,22,23,24,25,26,27]. 
Because of its key role in vascular pathology, a large number of studies have addressed the structure and function of endoglin at the molecular level, in order to better understand its mechanism of action.
Galectin-3 Interacts with Endoglin in Cells
Galectin-3 is a secreted member of the lectin family with the capacity to bind membrane glycoproteins like endoglin and is involved in the pathogenesis of many human diseases [52]. We confirmed the protein screen data for galectin-3, as evidenced by two-way co-immunoprecipitation of endoglin and galectin-3 upon co-transfection in CHO-K1 cells. As shown in Figure 1A, galectin-3 and endoglin were efficiently transfected, as demonstrated by Western blot analysis in total cell extracts. No background levels of endoglin were observed in control cells transfected with the empty vector (Ø). By contrast, galectin-3 could be detected in all samples but, as expected, showed an increased signal in cells transfected with the galectin-3 expression vector. Co-immunoprecipitation studies of these cell lysates showed that galectin-3 was present in endoglin immunoprecipitates (Figure 1B). Conversely, endoglin was also detected in galectin-3 immunoprecipitates (Figure 1C).
Figure 1. Protein–protein association between galectin-3 and endoglin. (A–C) Co-immunoprecipitation of galectin-3 and endoglin. CHO-K1 cells were transiently transfected with pcEXV-Ø (Ø), pcEXV–HA–EngFL (Eng) and pcDNA3.1–Gal-3 (Gal3) expression vectors. (A) Total cell lysates (TCL) were analyzed by SDS-PAGE under reducing conditions, followed by Western blot (WB) analysis using specific antibodies to endoglin, galectin-3 and β-actin (loading control). Cell lysates were subjected to immunoprecipitation (IP) with anti-endoglin (B) or anti-galectin-3 (C) antibodies, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin or anti-galectin-3 antibodies, as indicated. Negative controls with an IgG2b (B) and IgG1 (C) were included. (D) Protein–protein interactions between galectin-3 and endoglin using bio-layer interferometry (BLItz). Ni–NTA biosensor tips were loaded with 7.3 µM recombinant human galectin-3 (LGALS3), 6xHis-tagged at the C-terminus, and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single-channel ForteBio BLItz™ instrument.
Figure 2. Galectin-3 and endoglin co-localize in human endothelial cells. Human umbilical vein-derived endothelial cell (HUVEC) monolayers were fixed with paraformaldehyde, permeabilized with Triton X-100, incubated with the mouse mAb P4A4 anti-endoglin, washed, and incubated with a rabbit polyclonal anti-galectin-3 antibody (PA5-34819). Galectin-3 and endoglin were detected by immunofluorescence upon incubation with Alexa 647 goat anti-rabbit IgG (red staining) and Alexa 488 goat anti-mouse IgG (green staining) secondary antibodies, respectively. (A) Single staining of galectin-3 (red) and endoglin (green) at the indicated magnifications. (B) Merged images plus DAPI (nuclear staining in blue) show co-localization of galectin-3 and endoglin (yellow color). Representative images of five different experiments are shown.
Endoglin associates with the cullin-type E3 ligase TRIM21
Figure 3. Protein–protein association between TRIM21 and endoglin. (A–E) Co-immunoprecipitation of TRIM21 and endoglin. (A,B) HUVEC monolayers were lysed and total cell lysates (TCL) were subjected to SDS-PAGE under reducing (for TRIM21 detection) or nonreducing (for endoglin detection) conditions, followed by Western blot (WB) analysis using antibodies to endoglin, TRIM21 or β-actin (A). HUVEC lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or negative control antibodies, followed by WB analysis with anti-endoglin (B). (C,D) CHO-K1 cells were transiently transfected with pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (E) or pcDNA3.1–HA–hTRIM21 (T) expression vectors, as indicated. Total cell lysates (TCL) were subjected to SDS-PAGE under nonreducing conditions and WB analysis using specific antibodies to endoglin, TRIM21, and β-actin (C). Cell lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or anti-endoglin antibodies, followed by SDS-PAGE under reducing (upper panel) or nonreducing (lower panel) conditions and WB analysis with anti-TRIM21 or anti-endoglin antibodies. Negative controls of appropriate IgG were included (D). (E) CHO-K1 cells were transiently transfected with pcDNA3.1–HA–hTRIM21 and pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (FL; full-length), pDisplay–HA–EngEC (EC; cytoplasmic-less) or pDisplay–HA–EngTMEC (TMEC; cytoplasmic-less) expression vectors, as indicated. Cell lysates were subjected to immunoprecipitation with anti-TRIM21, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin antibodies, as indicated. The asterisk indicates the presence of a nonspecific band. Mr, molecular reference; Eng, endoglin; TRIM, TRIM21. (F) Protein–protein interactions between TRIM21 and endoglin using bio-layer interferometry (BLItz).
Ni–NTA biosensor tips were loaded with 5.4 µM recombinant human TRIM21 (RO52), 6xHis-tagged at the N-terminus, and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single-channel ForteBio BLItz™ instrument.
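The BLItz sensorgrams in Figures 1D and 3F are typically interpreted with a 1:1 Langmuir binding model, in which the observed association rate depends linearly on analyte concentration. The sketch below illustrates that analysis on simulated data; it is not the authors' actual pipeline, and all rate constants and signal amplitudes are hypothetical (only the 4.1 µM sEng concentration comes from the legend).

```python
import numpy as np
from scipy.optimize import curve_fit

# 1:1 Langmuir binding model for a BLI association phase:
#   R(t) = R_eq * (1 - exp(-k_obs * t)),  where k_obs = k_on * C + k_off.
# All rate constants and signal values below are hypothetical.
C = 4.1e-6                               # analyte (sEng) concentration, M
k_on_true, k_off_true = 1.0e4, 1.0e-3    # assumed on/off rates (1/M/s, 1/s)

t = np.linspace(0.0, 300.0, 150)         # time, s
k_obs_true = k_on_true * C + k_off_true
rng = np.random.default_rng(0)
signal = 1.2 * (1.0 - np.exp(-k_obs_true * t))   # clean sensorgram, nm shift
signal += rng.normal(0.0, 0.01, t.size)          # instrument noise

def assoc(t, r_eq, k_obs):
    """Single-exponential association phase of a 1:1 binding model."""
    return r_eq * (1.0 - np.exp(-k_obs * t))

(r_eq_fit, k_obs_fit), _ = curve_fit(assoc, t, signal, p0=[1.0, 0.01])

# k_off is normally estimated from the dissociation phase; reusing the
# assumed value here, k_on and K_D follow from the fitted k_obs:
k_on_fit = (k_obs_fit - k_off_true) / C
print(f"k_obs = {k_obs_fit:.4f} 1/s, K_D = {k_off_true / k_on_fit:.2e} M")
```

Fitting several analyte concentrations and regressing k_obs against C would give k_on and k_off directly, which is the usual way instrument software reports kinetic constants.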
Table 1. Human protein-array analysis of endoglin interactors1.
1 Microarrays containing over 9000 unique human proteins were screened using recombinant sEng as a probe. Protein interactors showing the highest scores (Z-score ≥ 2.0) are listed. GenBank (https://www.ncbi.nlm.nih.gov/genbank/) and UniProtKB (https://www.uniprot.org/help/uniprotkb) accession numbers are indicated with a yellow or green background, respectively. The cellular compartment of each protein was obtained from the UniProtKB webpage. Proteins selected for further studies (TRIM21 and galectin-3) are indicated in bold type with a blue background.
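The Z-score ≥ 2.0 cutoff described above amounts to standardizing each spot's signal against the whole-array distribution and keeping the outliers. A minimal sketch of that thresholding follows; the spot names and fluorescence values are invented for illustration.

```python
import statistics

# Hypothetical protein-array signals (arbitrary fluorescence units):
# 50 non-binder spots plus two strong hits. All numbers are invented.
signals = {f"spot_{i}": float(280 + i) for i in range(50)}
signals["TRIM21"] = 980.0
signals["LGALS3"] = 875.0

mean = statistics.mean(signals.values())
sd = statistics.stdev(signals.values())

# Z-score each spot against the whole-array distribution and keep
# interactors at or above the 2.0 cutoff used for Table 1.
z = {name: (s - mean) / sd for name, s in signals.items()}
hits = sorted(name for name, zs in z.items() if zs >= 2.0)
print(hits)  # → ['LGALS3', 'TRIM21']
```

In practice array analyses often use robust statistics (median and MAD) or per-subarray normalization before thresholding, since a few very bright spots inflate the standard deviation.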
Note: the following are from NCBI Genbank and Genecards on TRIM21
Official Symbol: TRIM21 (provided by HGNC). Official Full Name: tripartite motif containing 21 (provided by HGNC). Primary source: HGNC:11312. See related: Ensembl:ENSG00000132109; MIM:109092; AllianceGenome:HGNC:11312. Gene type: protein coding. RefSeq status: REVIEWED. Organism: Homo sapiens. Lineage: Eukaryota; Metazoa; Chordata; Craniata; Vertebrata; Euteleostomi; Mammalia; Eutheria; Euarchontoglires; Primates; Haplorrhini; Catarrhini; Hominidae; Homo. Also known as: SSA; RO52; SSA1; RNF81; Ro/SSA. Summary: This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. The encoded protein is part of the RoSSA ribonucleoprotein, which includes a single polypeptide and one of four small RNA molecules. The RoSSA particle localizes to both the cytoplasm and the nucleus. RoSSA interacts with autoantigens in patients with Sjogren syndrome and systemic lupus erythematosus. Alternatively spliced transcript variants for this gene have been described, but the full-length nature of only one has been determined. [provided by RefSeq, Jul 2008] Expression: Ubiquitous; highest in spleen (RPKM 15.5) and appendix (RPKM 13.2), detected in 24 other tissues.
E3 ubiquitin-protein ligase whose activity is dependent on the E2 enzymes UBE2D1, UBE2D2, UBE2E1 and UBE2E2. Forms a ubiquitin ligase complex in cooperation with the E2 UBE2D2 that is used not only for the ubiquitination of USP4 and IKBKB but also for its self-ubiquitination. Component of cullin-RING-based SCF (SKP1-CUL1-F-box protein) E3 ubiquitin-protein ligase complexes such as SCF(SKP2)-like complexes. A TRIM21-containing SCF(SKP2)-like complex has been shown to mediate ubiquitination of CDKN1B (the ‘Thr-187’ phosphorylated form), thereby promoting its degradation by the proteasome. Monoubiquitinates IKBKB, thereby negatively regulating Tax-induced NF-kappa-B signaling. Negatively regulates IFN-beta production post-pathogen recognition by polyubiquitin-mediated degradation of IRF3. Mediates the ubiquitin-mediated proteasomal degradation of IgG1 heavy chain, which is linked to the VCP-mediated ER-associated degradation (ERAD) pathway. Promotes IRF8 ubiquitination, which enhances the ability of IRF8 to stimulate cytokine gene transcription in macrophages. Plays a role in the regulation of cell cycle progression. Enhances the decapping activity of DCP2. Exists as a ribonucleoprotein particle present in all mammalian cells studied and composed of a single polypeptide and one of four small RNA molecules. At least two isoforms are present in nucleated and red blood cells, and tissue-specific differences in Ro/SSA proteins have been identified. The common feature of these proteins is their ability to bind hY RNAs. Involved in the regulation of innate immunity and the inflammatory response in response to IFNG/IFN-gamma. Organizes autophagic machinery by serving as a platform for the assembly of ULK1, Beclin 1/BECN1 and ATG8 family members and recognizes specific autophagy targets, thus coordinating target recognition with assembly of the autophagic apparatus and initiation of autophagy.
Acts as an autophagy receptor for the degradation of IRF3, hence attenuating type I interferon (IFN)-dependent immune responses (PubMed:26347139, 16297862, 16316627, 16472766, 16880511, 18022694, 18361920, 18641315, 18845142, 19675099). Represses the innate antiviral response by facilitating the formation of the NMI-IFI35 complex through ‘Lys-63’-linked ubiquitination of NMI (PubMed:26342464). (RO52_HUMAN, P19474)
Endoglin Protein Interactome Profiling Identifies TRIM21 and Galectin-3 as New Binding Partners
Gallardo-Vara E, Ruiz-Llorente L, Casado-Vela J, Ruiz-Rodríguez MJ, López-Andrés N, Pattnaik AK, Quintanilla M, Bernabeu C. Endoglin Protein Interactome Profiling Identifies TRIM21 and Galectin-3 as New Binding Partners. Cells. 2019 Sep 13;8(9):1082. doi: 10.3390/cells8091082. PMID: 31540324; PMCID: PMC6769930.
Abstract
Endoglin is a 180-kDa glycoprotein receptor primarily expressed by the vascular endothelium and involved in cardiovascular disease and cancer. Heterozygous mutations in the endoglin gene (ENG) cause hereditary hemorrhagic telangiectasia type 1, a vascular disease that presents with nasal and gastrointestinal bleeding, skin and mucosa telangiectases, and arteriovenous malformations in internal organs. A circulating form of endoglin (alias soluble endoglin, sEng), proteolytically released from the membrane-bound protein, has been observed in several inflammation-related pathological conditions and appears to contribute to endothelial dysfunction and cancer development through unknown mechanisms. Membrane-bound endoglin is an auxiliary component of the TGF-β receptor complex and the extracellular region of endoglin has been shown to interact with types I and II TGF-β receptors, as well as with BMP9 and BMP10 ligands, both members of the TGF-β family. To search for novel protein interactors, we screened a microarray containing over 9000 unique human proteins using recombinant sEng as bait. We find that sEng binds with high affinity, at least, to 22 new proteins. Among these, we validated the interaction of endoglin with galectin-3, a secreted member of the lectin family with capacity to bind membrane glycoproteins, and with tripartite motif-containing protein 21 (TRIM21), an E3 ubiquitin-protein ligase. Using human endothelial cells and Chinese hamster ovary cells, we showed that endoglin co-immunoprecipitates and co-localizes with galectin-3 or TRIM21. These results open new research avenues on endoglin function and regulation.
Endoglin is an auxiliary TGF-β co-receptor predominantly expressed in endothelial cells, which is involved in vascular development, repair, homeostasis, and disease [1,2,3,4]. Heterozygous mutations in the human ENDOGLIN gene (ENG) cause hereditary hemorrhagic telangiectasia (HHT) type 1, a vascular disease associated with nasal and gastrointestinal bleeds, telangiectases on skin and mucosa and arteriovenous malformations in the lung, liver, and brain [4,5,6]. The key role of endoglin in the vasculature is also illustrated by the fact that endoglin-KO mice die in utero due to defects in the vascular system [7]. Endoglin expression is markedly upregulated in proliferating endothelial cells involved in active angiogenesis, including the solid tumor neovasculature [8,9]. For this reason, endoglin has become a promising target for the antiangiogenic treatment of cancer [10,11,12]. Endoglin is also expressed in cancer cells where it can behave as both a tumor suppressor in prostate, breast, esophageal, and skin carcinomas [13,14,15,16] and a promoter of malignancy in melanoma and Ewing’s sarcoma [17]. Ectodomain shedding of membrane-bound endoglin may lead to a circulating form of the protein, also known as soluble endoglin (sEng) [18,19,20]. Increased levels of sEng have been found in several vascular-related pathologies, including preeclampsia, a disease of high prevalence in pregnant women which, if left untreated, can lead to serious and even fatal complications for both mother and baby [2,18,19,21]. Interestingly, several lines of evidence support a pathogenic role of sEng in the vascular system, including endothelial dysfunction, antiangiogenic activity, increased vascular permeability, inflammation-associated leukocyte adhesion and transmigration, and hypertension [18,22,23,24,25,26,27]. 
Because of its key role in vascular pathology, a large number of studies have addressed the structure and function of endoglin at the molecular level, in order to better understand its mechanism of action.
Galectin-3 Interacts with Endoglin in Cells
Galectin-3 is a secreted member of the lectin family with the capacity to bind membrane glycoproteins like endoglin and is involved in the pathogenesis of many human diseases [52]. We confirmed the protein screen data for galectin-3, as evidenced by two-way co-immunoprecipitation of endoglin and galectin-3 upon co-transfection in CHO-K1 cells. As shown in Figure 1A, galectin-3 and endoglin were efficiently transfected, as demonstrated by Western blot analysis in total cell extracts. No background levels of endoglin were observed in control cells transfected with the empty vector (Ø). By contrast, galectin-3 could be detected in all samples but, as expected, showed an increased signal in cells transfected with the galectin-3 expression vector. Co-immunoprecipitation studies of these cell lysates showed that galectin-3 was present in endoglin immunoprecipitates (Figure 1B). Conversely, endoglin was also detected in galectin-3 immunoprecipitates (Figure 1C).
Figure 1. Protein–protein association between galectin-3 and endoglin. (A–C). Co-immunoprecipitation of galectin-3 and endoglin. CHO-K1 cells were transiently transfected with pcEXV-Ø (Ø), pcEXV–HA–EngFL (Eng) and pcDNA3.1–Gal-3 (Gal3) expression vectors. (A) Total cell lysates (TCL) were analyzed by SDS-PAGE under reducing conditions, followed by Western blot (WB) analysis using specific antibodies to endoglin, galectin-3 and β-actin (loading control). Cell lysates were subjected to immunoprecipitation (IP) with anti-endoglin (B) or anti-galectin-3 (C) antibodies, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin or anti-galectin-3 antibodies, as indicated. Negative controls with an IgG2b (B) and IgG1 (C) were included. (D) Protein-protein interactions between galectin-3 and endoglin using Bio-layer interferometry (BLItz). The Ni–NTA biosensors tips were loaded with 7.3 µM recombinant human galectin-3/6xHis at the C-terminus (LGALS3), and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single channel ForteBioBLItzTM instrument.
Figure 2.Galectin-3 and endoglin co-localize in human endothelial cells. Human umbilical vein-derived endothelial cell (HUVEC) monolayers were fixed with paraformaldehyde, permeabilized with Triton X-100, incubated with the mouse mAb P4A4 anti-endoglin, washed, and incubated with a rabbit polyclonal anti-galectin-3 antibody (PA5-34819). Galectin-3 and endoglin were detected by immunofluorescence upon incubation with Alexa 647 goat anti-rabbit IgG (red staining) and Alexa 488 goat anti-mouse IgG (green staining) secondary antibodies, respectively. (A) Single staining of galectin-3 (red) and endoglin (green) at the indicated magnifications. (B) Merge images plus DAPI (nuclear staining in blue) show co-localization of galectin-3 and endoglin (yellow color). Representative images of five different experiments are shown.
Endoglin associates with the cullin-type E3 ligase TRIM21
Figure 3.Protein–protein association between TRIM21 and endoglin. (A–E) Co-immunoprecipitation of TRIM21 and endoglin. A,B. HUVEC monolayers were lysed and total cell lysates (TCL) were subjected to SDS-PAGE under reducing (for TRIM21 detection) or nonreducing (for endoglin detection) conditions, followed by Western blot (WB) analysis using antibodies to endoglin, TRIM21 or β-actin (A). HUVECs lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or negative control antibodies, followed by WB analysis with anti-endoglin (B). C,D. CHO-K1 cells were transiently transfected with pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (E) or pcDNA3.1–HA–hTRIM21 (T) expression vectors, as indicated. Total cell lysates (TCL) were subjected to SDS-PAGE under nonreducing conditions and WB analysis using specific antibodies to endoglin, TRIM21, and β-actin (C). Cell lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or anti-endoglin antibodies, followed by SDS-PAGE under reducing (upper panel) or nonreducing (lower panel) conditions and WB analysis with anti-TRIM21 or anti-endoglin antibodies. Negative controls of appropriate IgG were included (D). E. CHO-K1 cells were transiently transfected with pcDNA3.1–HA–hTRIM21 and pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (FL; full-length), pDisplay–HA–EngEC (EC; cytoplasmic-less) or pDisplay–HA–EngTMEC (TMEC; cytoplasmic-less) expression vectors, as indicated. Cell lysates were subjected to immunoprecipitation with anti-TRIM21, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin antibodies, as indicated. The asterisk indicates the presence of a nonspecific band. Mr, molecular reference; Eng, endoglin; TRIM, TRIM21. (F) Protein–protein interactions between TRIM21 and endoglin using Bio-layer interferometry (BLItz). 
The Ni–NTA biosensors tips were loaded with 5.4 µM recombinant human TRIM21/6xHis at the N-terminus (R052), and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single channel ForteBioBLItzTM instrument.
Table 1. Human protein-array analysis of endoglin interactors1.
1 Microarrays containing over 9000 unique human proteins were screened using recombinant sEng as a probe. Protein interactors showing the highest scores (Z-score ≥2.0) are listed. GeneBank (https://www.ncbi.nlm.nih.gov/genbank/) and UniProtKB (https://www.uniprot.org/help/uniprotkb) accession numbers are indicated with a yellow or green background, respectively. The cellular compartment of each protein was obtained from the UniProtKB webpage. Proteins selected for further studies (TRIM21 and galectin-3) are indicated in bold type with blue background.
Note: the following are from NCBI Genbank and Genecards on TRIM21
This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. The encoded protein is part of the RoSSA ribonucleoprotein, which includes a single polypeptide and one of four small RNA molecules. The RoSSA particle localizes to both the cytoplasm and the nucleus. RoSSA interacts with autoantigens in patients with Sjogren syndrome and systemic lupus erythematosus. Alternatively spliced transcript variants for this gene have been described but the full-length nature of only one has been determined. [provided by RefSeq, Jul 2008]
Expression
Ubiquitous expression in spleen (RPKM 15.5), appendix (RPKM 13.2) and 24 other tissues See more
This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. The encoded protein is part of the RoSSA ribonucleoprotein, which includes a single polypeptide and one of four small RNA molecules. The RoSSA particle localizes to both the cytoplasm and the nucleus. RoSSA interacts with autoantigens in patients with Sjogren syndrome and systemic lupus erythematosus. Alternatively spliced transcript variants for this gene have been described but the full-length nature of only one has been determined. [provided by RefSeq, Jul 2008]
E3 ubiquitin-protein ligase whose activity is dependent on E2 enzymes, UBE2D1, UBE2D2, UBE2E1 and UBE2E2. Forms a ubiquitin ligase complex in cooperation with the E2 UBE2D2 that is used not only for the ubiquitination of USP4 and IKBKB but also for its self-ubiquitination. Component of cullin-RING-based SCF (SKP1-CUL1-F-box protein) E3 ubiquitin-protein ligase complexes such as SCF(SKP2)-like complexes. A TRIM21-containing SCF(SKP2)-like complex is shown to mediate ubiquitination of CDKN1B (‘Thr-187’ phosphorylated-form), thereby promoting its degradation by the proteasome. Monoubiquitinates IKBKB that will negatively regulates Tax-induced NF-kappa-B signaling. Negatively regulates IFN-beta production post-pathogen recognition by polyubiquitin-mediated degradation of IRF3. Mediates the ubiquitin-mediated proteasomal degradation of IgG1 heavy chain, which is linked to the VCP-mediated ER-associated degradation (ERAD) pathway. Promotes IRF8 ubiquitination, which enhanced the ability of IRF8 to stimulate cytokine genes transcription in macrophages. Plays a role in the regulation of the cell cycle progression. Enhances the decapping activity of DCP2. Exists as a ribonucleoprotein particle present in all mammalian cells studied and composed of a single polypeptide and one of four small RNA molecules. At least two isoforms are present in nucleated and red blood cells, and tissue specific differences in RO/SSA proteins have been identified. The common feature of these proteins is their ability to bind HY RNAs.2. Involved in the regulation of innate immunity and the inflammatory response in response to IFNG/IFN-gamma. Organizes autophagic machinery by serving as a platform for the assembly of ULK1, Beclin 1/BECN1 and ATG8 family members and recognizes specific autophagy targets, thus coordinating target recognition with assembly of the autophagic apparatus and initiation of autophagy. 
Acts as an autophagy receptor for the degradation of IRF3, hence attenuating type I interferon (IFN)-dependent immune responses (PubMed:26347139, 16297862, 16316627, 16472766, 16880511, 18022694, 18361920, 18641315, 18845142, 19675099). Represses the innate antiviral response by facilitating the formation of the NMI-IFI35 complex through ‘Lys-63’-linked ubiquitination of NMI (PubMed:26342464). (RO52_HUMAN, P19474)
Other Articles in this Open Access Scientific Journal on Galectins and Proteosome Include
Recent genetic studies have identified variants associated with bipolar disorder (BD), but it remains unclear how brain gene expression is altered in BD and how genetic risk for BD may contribute to these alterations. Here, we obtained transcriptomes from subgenual anterior cingulate cortex and amygdala samples from post-mortem brains of individuals with BD and neurotypical controls, including 511 total samples from 295 unique donors. We examined differential gene expression between cases and controls and the transcriptional effects of BD-associated genetic variants. We found two coexpressed modules that were associated with transcriptional changes in BD: one enriched for immune and inflammatory genes and the other with genes related to the postsynaptic membrane. Over 50% of BD genome-wide significant loci contained significant expression quantitative trait loci (eQTL), and these data converged on several individual genes, including SCN2A and GRIN2A. Thus, these data implicate specific genes and pathways that may contribute to the pathology of BD.
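As a purely illustrative sketch of the locus/eQTL convergence described above (the loci, gene assignments, and eQTL calls below are invented, not the study's data), the overlap computation reduces to checking, for each genome-wide significant locus, whether any gene it contains has a significant eQTL:

```python
def loci_with_eqtl(gwas_loci, eqtl_genes):
    """Fraction of genome-wide significant loci that contain at least
    one gene with a significant eQTL, plus the list of such loci."""
    hits = [locus for locus, genes in gwas_loci.items()
            if any(g in eqtl_genes for g in genes)]
    return len(hits) / len(gwas_loci), sorted(hits)

# Hypothetical BD risk loci and the genes they contain
gwas_loci = {
    "locus_1": {"SCN2A"},
    "locus_2": {"GRIN2A"},
    "locus_3": {"GENE_X"},   # placeholder gene with no eQTL
    "locus_4": {"GENE_Y"},
}
eqtl_genes = {"SCN2A", "GRIN2A"}  # genes with a significant brain eQTL

frac, hits = loci_with_eqtl(gwas_loci, eqtl_genes)
print(frac, hits)  # 0.5 ['locus_1', 'locus_2']
```

In the actual study this fraction exceeded 50%; the sketch only shows the shape of the calculation, not the statistical colocalization methods used in practice.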
Gene Expression Markers for Bipolar Disorder Pinpointed
The work was led by researchers at Johns Hopkins’ Lieber Institute for Brain Development. The findings, published this week in Nature Neuroscience, represent the first time that researchers have been able to apply large-scale genetic research to brain samples from hundreds of patients with bipolar disorder (BD). They used 511 total samples from 295 unique donors.
“This is the first deep dive into the molecular biology of the brain in people who died with bipolar disorder—studying actual genes, not urine, blood or skin samples,” said Thomas Hyde of the Lieber Institute and a lead author of the paper. “If we can figure out the mechanisms behind BD, if we can figure out what’s wrong in the brain, then we can begin to develop new targeted treatments of what has long been a mysterious condition.”
Bipolar disorder is characterized by extreme mood swings, with episodes of mania alternating with episodes of depression. It usually emerges in people in their 20s and 30s and remains with them for life. This condition affects approximately 2.8% of the adult American population, or about 7 million people. Patients face higher rates of suicide, poorer quality of life, and lower productivity than the general population. Some estimates put the annual cost of the condition in the U.S. alone at $219.1 billion.
While drugs can be useful in treating BD, many patients find they have bothersome side effects, and for some patients, current medications don’t work at all.
In this study, researchers measured levels of messenger RNA in the brain samples. They observed almost eight times more differentially expressed gene features in the subgenual anterior cingulate cortex (sACC) than in the amygdala, suggesting that the sACC may play an especially prominent role, both in mood regulation generally and in BD specifically.
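A hedged sketch of the per-gene case-versus-control comparison that underlies such differential expression analyses; the expression values below are invented for illustration, and real pipelines use moderated statistics, covariate adjustment, and multiple-testing correction:

```python
import math
from statistics import mean, variance

def welch_t(case, control):
    """Welch's t statistic: mean difference scaled by the standard
    error, allowing unequal variances between the two groups."""
    se = math.sqrt(variance(case) / len(case) + variance(control) / len(control))
    return (mean(case) - mean(control)) / se

# Hypothetical log2 expression values for one gene in sACC tissue
bd_cases = [5.1, 4.8, 5.3, 5.0, 4.9]
controls = [5.6, 5.9, 5.7, 5.8, 5.5]

t = welch_t(bd_cases, controls)
print(round(t, 2))  # negative: lower expression in BD cases
```

A genome-wide analysis would repeat this test for every gene feature and correct the resulting p-values for multiple testing.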
In patients who died with BD, the researchers found abnormalities in two families of genes: one containing genes related to the synapse and the second related to immune and inflammatory function.
“There finally is a study using modern technology and our current understanding of genetics to uncover how the brain is doing,” Hyde said. “We know that BD tends to run in families, and there is strong evidence that there are inherited genetic abnormalities that put an individual at risk for bipolar disorder. Unlike diseases such as sickle-cell anemia, bipolar disorder does not result from a single genetic abnormality. Rather, most patients have inherited a group of variants spread across a number of genes.”
“Bipolar disorder, also known as manic-depressive disorder, is a highly damaging and paradoxical condition,” said Daniel R. Weinberger, chief executive and director of the Lieber Institute and a co-author of the study. “It can make people very productive so they can lead countries and companies, but it can also hurl them into the meat grinder of dysfunction and depression. Patients with BD may live on two hours of sleep a night, saving the world with their abundance of energy, and then become so self-destructive that they spend their family’s fortune in a week and lose all friends as they spiral downward. Bipolar disorder also has some shared genetic links to other psychiatric disorders, such as schizophrenia, and is implicated in overuse of drugs and alcohol.”
The Vibrant Philly Biotech Scene: Proteovant Therapeutics Using Artificial Intelligence and Machine Learning to Develop PROTACs
Reporter: Stephen J. Williams, Ph.D.
It has been a while since I have added to this series, but there has been a plethora of exciting biotech startups in the Philadelphia area, many of them combining technology, biotech, and machine learning. One such exciting biotech is Proteovant Therapeutics, which is combining the new PROTAC (Proteolysis-Targeting Chimera) technology with its in-house ability to use machine learning and artificial intelligence to design these types of compounds against multiple intracellular targets.
PROTAC is actually a trademark of Arvinas Operations; such compounds are also referred to as protein degraders. PROTACs take advantage of the cell’s protein homeostatic mechanism of ubiquitin-mediated protein degradation, a highly specific, targeted process that regulates the protein levels of various transcription factors, proto-oncogenes, and receptors. In essence, this regulated proteolytic process is needed for normal cellular function, and alterations in this process may lead to oncogenesis, or to a proteotoxic crisis leading to mitophagy, autophagy and cellular death. The key to this technology is using chemical linkers to associate an E3 ligase with a protein target of interest. E3 ligases are the rate-limiting step in marking proteins bound for degradation by the proteasome with ubiquitin chains.
A review of this process as well as PROTACs can be found elsewhere in articles (and future articles) on this Open Access Journal.
Proteovant has made two important collaborations:
Oncopia Therapeutics: came out of the University of Michigan Innovation Hub and the lab of Shaomeng Wang, who developed a library of BET- and MDM2-based protein degraders. In 2020 it was acquired by Roivant Sciences.
Roivant Sciences: uses computer-aided design of protein degraders
Proteovant Company Description:
Proteovant is a newly launched development-stage biotech company focusing on discovery and development of disease-modifying therapies by harnessing natural protein homeostasis processes. We have recently acquired numerous assets at discovery and development stages from Oncopia, a protein degradation company. Our lead program is on track to enter IND in 2021. Proteovant is building a strong drug discovery engine by combining deep drugging expertise with innovative platforms including Roivant’s AI capabilities to accelerate discovery and development of protein degraders to address unmet needs across all therapeutic areas. The company has recently secured $200M funding from SK Holdings in addition to investment from Roivant Sciences. Our current therapeutic focus includes but is not limited to oncology, immunology and neurology. We remain agnostic to therapeutic area and will expand therapeutic focus based on opportunity. Proteovant is expanding its discovery and development teams and has multiple positions in biology, chemistry, biochemistry, DMPK, bioinformatics and CMC at many levels. Our R&D organization is located close to major pharmaceutical companies in Eastern Pennsylvania, with a second site close to biotech companies in the Boston area.
The ubiquitin proteasome system (UPS) is responsible for maintaining protein homeostasis. Targeted protein degradation by the UPS is a cellular process that involves marking proteins and guiding them to the proteasome for destruction. We leverage this physiological cellular machinery to target and destroy disease-causing proteins.
Unlike traditional small molecule inhibitors, our approach is not limited by the classic “active site” requirements. For example, we can target transcription factors and scaffold proteins that lack a catalytic pocket. These classes of proteins have historically been very difficult to drug. Further, we can selectively degrade target proteins while sparing isozymes or paralogous proteins with high homology. Because of the catalytic nature of the interactions, it is possible to achieve efficacy at lower doses with prolonged duration while decreasing dose-limiting toxicities.
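The catalytic advantage mentioned above can be caricatured in a toy turnover model (the rate constant and molecule counts are arbitrary illustrations, not measured values): a sub-stoichiometric amount of degrader can, over repeated cycles, eliminate far more target copies than it could ever occupy at once, which is impossible for a stoichiometric inhibitor.

```python
def remaining_target(target0, degrader, cycles, k=0.8):
    """Toy event-driven model: each degrader molecule is recycled after
    use and tags roughly k targets per cycle for proteasomal destruction."""
    target = float(target0)
    for _ in range(cycles):
        target = max(0.0, target - k * degrader)
    return target

# Sub-stoichiometric degrader (10 molecules) vs. 1000 target copies
print(remaining_target(1000, 10, cycles=50))   # catalytic turnover in progress
print(remaining_target(1000, 10, cycles=200))  # target pool driven to zero
```

A stoichiometric inhibitor at the same dose could never engage more than 10 of the 1000 copies at a time, which is the intuition behind efficacy at lower doses.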
Biological targets once deemed “undruggable” are now within reach.
Roivant develops transformative medicines faster by building technologies and developing talent in creative ways, leveraging the Roivant platform to launch “Vants” – nimble and focused biopharmaceutical and health technology companies. These Vants include Proteovant as well as Dermavant, Immunovant, and others.
Roivant’s drug discovery capabilities include the leading computational physics-based platform for in silico drug design and optimization as well as machine learning-based models for protein degradation.
The integration of our computational and experimental engines enables the rapid design of molecules with high precision and fidelity to address challenging targets for diseases with high unmet need.
Our current modalities include small molecules, heterobifunctionals and molecular glues.
Roivant Unveils Targeted Protein Degradation Platform
– First therapeutic candidate on track to enter clinical studies in 2021
– Computationally-designed degraders for six targets currently in preclinical development
– Acquisition of Oncopia Therapeutics and research collaboration with lab of Dr. Shaomeng Wang at the University of Michigan to add diverse pipeline of current and future compounds
– Clinical-stage degraders will provide foundation for multiple new Vants in distinct disease areas
– Platform supported by $200 million strategic investment from SK Holdings
Other articles in this Vibrant Philly Biotech Scene on this Online Open Access Journal include:
From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery
Curator: Stephen J. Williams, PhD
Marc W. Kirschner*
Department of Systems Biology Harvard Medical School
Boston, Massachusetts 02115
With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.
As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.
That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. 
The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.
Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.
High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.
Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.
You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.
Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.
5.1.5. Large-Scale Proteomics
While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced by both cytotoxic and non-cytotoxic compounds, and that these changes can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
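A minimal sketch of the aggregation idea, checking for each drug whether its annotated target changes abundance in the drug's proteomic profile; the drug names, targets, and log2 fold-changes below are hypothetical, and the real analysis involves replicate statistics across cell lines:

```python
def fraction_with_regulated_target(profiles, targets, lfc_cutoff=1.0):
    """Fraction of drugs whose annotated target changes abundance
    beyond the log2 fold-change cutoff in that drug's profile."""
    hits = sum(
        1 for drug, profile in profiles.items()
        if abs(profile.get(targets[drug], 0.0)) >= lfc_cutoff
    )
    return hits / len(profiles)

# Hypothetical drug -> {protein: log2 fold-change vs. DMSO} profiles
profiles = {
    "degrader_A": {"BRD4": -2.3, "GAPDH": 0.1},    # target strongly depleted
    "inhibitor_B": {"MAP2K1": 0.2, "GAPDH": -0.1}, # target abundance unchanged
}
targets = {"degrader_A": "BRD4", "inhibitor_B": "MAP2K1"}

print(fraction_with_regulated_target(profiles, targets))  # 0.5
```

Degraders are the clearest case for this kind of readout, since their mechanism directly reduces target abundance rather than merely inhibiting activity.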
All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.
5.2. Genetic Approaches
Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has been long realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights in the compound’s mode of action.
Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.
5.2.1. Resistance Cloning
The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.
In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a follow-up study, they optimized their pipeline, “DrugTargetSeqR”, by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
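The clustering logic, a gene recurrently mutated across independent resistant clone groups, can be sketched as follows (the mutated-gene calls are hypothetical, and real pipelines also filter variants by predicted functional impact):

```python
from collections import Counter

def recurrent_genes(clone_mutations, min_groups=2):
    """Genes mutated in at least `min_groups` independent resistant
    clone groups are candidate drug targets (cf. PLK1 for BI 2536)."""
    counts = Counter(g for genes in clone_mutations for g in set(genes))
    return sorted(g for g, n in counts.items() if n >= min_groups)

# Hypothetical mutated-gene calls from RNA-seq of resistant clone groups
clones = [
    {"PLK1", "TP53"},
    {"PLK1", "KRAS"},
    {"ABCB1"},          # efflux-pump clone, no target mutation
]
print(recurrent_genes(clones))  # ['PLK1']
```

Recurrence across independent clones is what separates a genuine target mutation from the background of passenger mutations each clone accumulates.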
While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutation in these normally mutationally silent cells, leading to the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274 (an anticancer agent with an unknown target), and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].
When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].
While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.
5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens
When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ensure that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (i.e., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
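A simplified sketch of how suppressor versus synergistic calls might be made from normalized sgRNA counts; the pseudocount, threshold, and read counts below are illustrative only, and production screens rely on dedicated statistical tools (e.g., MAGeCK) rather than a fixed fold-change cutoff:

```python
import math

def classify_interaction(drug_count, control_count, lfc_cutoff=1.0):
    """Classify a gene's perturbation by the log2 change in sgRNA
    abundance under sublethal drug treatment vs. untreated control."""
    lfc = math.log2((drug_count + 1) / (control_count + 1))  # +1 pseudocount
    if lfc >= lfc_cutoff:
        return "suppressor"   # knockout enriched: confers resistance
    if lfc <= -lfc_cutoff:
        return "synergistic"  # knockout depleted: sensitizes to drug
    return "neutral"

# Hypothetical normalized sgRNA read counts (drug-treated, control)
print(classify_interaction(800, 100))  # suppressor
print(classify_interaction(50, 400))   # synergistic
print(classify_interaction(210, 200))  # neutral
```

The sublethal dosing mentioned above matters precisely because both tails of this fold-change distribution must remain observable.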
An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high complexity, genome-wide shRNA library, with STF-118804 (4 rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].
The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].
Another NAMPT inhibitor was identified in a CRISPR/Cas9 "haplo-insufficiency (HIP)"-like approach [124]. Haploinsufficiency profiling is a well-established approach in yeast, in which heterozygous deletions create a ~50% protein background [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of the phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound's direct target [124]. This approach was confirmed in another study, showing that direct target identification through CRISPR-knockout screens is indeed possible [126].
An alternative strategy was employed by the Weissman lab, who combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits with opposite actions in the two screens (i.e., sensitizing in one but protective in the other), which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, rationalizing that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the rigosertib genome-wide CRISPRi screen and compared the focused screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
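The clustering step in such a study rests on comparing drug-gene interaction profiles, e.g., via correlation. A minimal sketch with entirely hypothetical phenotype scores (the numbers are illustrative, not data from the cited screens):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equally long score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical gene-level scores from a focused sgRNA library; compounds
# hitting the same target should yield highly correlated profiles.
profiles = {
    "rigosertib": [2.1, -1.8, 0.3, 1.5],
    "ABT-751":    [1.9, -1.6, 0.1, 1.7],
    "paclitaxel": [-0.4, 2.0, 1.1, -1.2],
}
similarity = {
    drug: pearson(profiles["rigosertib"], prof)
    for drug, prof in profiles.items() if drug != "rigosertib"
}
```

With these toy numbers the rigosertib profile correlates strongly with the ABT-751-like profile and anticorrelates with the stabilizer-like one, mirroring the clustering logic described above.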
From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.
SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY
Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence
The rapid improvement of next-generation sequencing (NGS) technologies and their application to large-scale cohorts in cancer research led to common big-data challenges and opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models that determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.
1. Introduction
The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and a systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5].
Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.
Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets to solve various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research. Additionally, there are independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques have been applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)
Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).
Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16].
2. Systems Biology in Cancer Research
Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes were investigated, and, based on that, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand heterogeneous characteristics of the tumor and cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].
In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.
2.1. Biological Network Analysis for Biomarker Validation
The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Systems biology, in contrast, offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked by an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68].
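The statistical core of gene set analysis is an overrepresentation test. A minimal sketch of the one-sided hypergeometric test that underlies many GSA tools (gene names and counts are illustrative only):

```python
from math import comb

def hypergeom_enrichment_p(hit_genes, gene_set, universe_size):
    """One-sided hypergeometric p-value that the overlap between a
    candidate gene list and an annotated gene set is at least as
    large as the observed overlap."""
    n_hits = len(hit_genes)        # e.g., differentially expressed genes
    n_set = len(gene_set)          # genes annotated to the set
    k = len(hit_genes & gene_set)  # observed overlap
    # P(X >= k) under sampling without replacement from the universe
    p = sum(
        comb(n_set, i) * comb(universe_size - n_set, n_hits - i)
        for i in range(k, min(n_hits, n_set) + 1)
    ) / comb(universe_size, n_hits)
    return p

# Toy example: 3 candidate genes, a 3-gene pathway, a 10-gene universe.
p_value = hypergeom_enrichment_p({"TP53", "KRAS", "MYC"},
                                 {"TP53", "KRAS", "EGFR"}, universe_size=10)
```

Production tools such as DAVID or g:Profiler add multiple-testing correction and curated annotation databases on top of this basic test.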
During the past few decades, the focus of cancer systems biology has extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a plain gene set. Such analysis is called pathway enrichment analysis (PEA) [69,70]. PEA incorporates the topology of biological networks; at the same time, however, the limited coverage of pathway data needs to be considered. Because pathway data do not yet cover all known genes, integrating omics data with pathways can cause a significant fraction of genes to drop out of the analysis. Genes that cannot be mapped to any pathway are called 'pathway orphans', and Rahmati et al. introduced a possible solution to overcome this 'pathway orphan' issue [71]. Ultimately, regardless of whether researchers choose gene-set or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
2.2. De Novo Construction of Biological Networks
While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between building and mining in silico biological networks contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules with similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction, which led the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, integration of these two types of data remains tricky: Ballouz et al. compared microarray- and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to find disease-specific co-expressional gene modules, and various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from the gene co-expression network when data from various conditions in the same organism are available.
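The weighted-correlation idea behind WGCNA can be sketched in a few lines: the absolute Pearson correlation between expression profiles is raised to a soft-threshold power, suppressing weak correlations without binarizing the network. This is a simplified illustration (toy gene names and expression values, fixed power), not the full WGCNA pipeline:

```python
import math

def coexpression_adjacency(expr, beta=6):
    """Build a WGCNA-style weighted adjacency matrix.

    expr maps gene -> list of expression values across samples; beta is
    the soft-threshold power applied to |Pearson correlation|.
    """
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / math.sqrt(va * vb)

    adj = {}
    for g1 in expr:
        for g2 in expr:
            if g1 != g2:
                adj[(g1, g2)] = abs(corr(expr[g1], expr[g2])) ** beta
    return adj

# Toy profiles: g1 and g2 are perfectly co-expressed, g3 is only weakly related.
expr = {"g1": [1, 2, 3, 4], "g2": [2, 4, 6, 8], "g3": [4, 1, 3, 2]}
adj = coexpression_adjacency(expr)
```

Gene modules are then typically found by hierarchical clustering of a topological-overlap transform of this adjacency matrix, a step omitted here.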
Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or include different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylations, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on a particular experimental setup: for instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
2.3. Network Based Machine Learning
A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks were suited to be integrated into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering both the values of nodes and edges and the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed separately before being merged with a network and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of various omics data types, a multi-omics integrative analysis is challenging; however, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when machine learning approaches integrate two or more data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level before integrating the heterogeneous data types [25,88].
In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.
3. Network-Based Learning in Cancer Research
As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.
3.1. Molecular Characterization with Network Information
Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet: the algorithm uses a thermodynamic model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show an exclusive pattern if they complement each other and the function carried by those two genes is essential to an organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic studies of model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
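Network propagation itself is simple to state: scores from altered genes diffuse along network edges with a restart term, so genes near many altered genes accumulate high scores. A minimal random-walk-with-restart sketch on a toy network (the graph, seed, and restart parameter are illustrative, not the HotNet implementation):

```python
def propagate(network, seed_scores, alpha=0.5, n_iter=50):
    """Iterative network propagation (random walk with restart).

    network maps gene -> list of neighbors (undirected, symmetric);
    seed_scores marks altered genes; alpha balances diffusion vs. restart.
    """
    scores = {g: seed_scores.get(g, 0.0) for g in network}
    for _ in range(n_iter):
        new = {}
        for g in network:
            # each neighbor distributes its score evenly over its own edges
            incoming = sum(scores[nb] / len(network[nb]) for nb in network[g])
            new[g] = alpha * incoming + (1 - alpha) * seed_scores.get(g, 0.0)
        scores = new
    return scores

# Toy network: A-B-C triangle with a pendant node D; A carries an alteration.
network = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
scores = propagate(network, {"A": 1.0})
```

After convergence, the seed gene scores highest and the topologically distant pendant node lowest, which is the ranking behavior driver-gene methods exploit.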
Furthermore, in transcriptome research, network information is used to measure pathway activity, with applications in cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50], and it is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to the gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A recent benchmark study with pan-cancer data revealed that using network structure can improve performance [57]. In conclusion, although the incompleteness of biological networks entails some loss of data, their integration improved performance and increased interpretability in many cases.
3.2. Tumor Heterogeneity Study with Network Information
Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. While de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto a pathway, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between such genes can be essential for an organism; therefore, models analyzing these alterations integrate network-based analysis [98].
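A minimal way to quantify the exclusive pattern between two genes is a coverage-versus-overlap score over per-patient mutation indicators. This is a simplified illustration of the idea (CoMEt and MEMo use rigorous statistical tests instead of this ad hoc score):

```python
def mutual_exclusivity(mut_a, mut_b):
    """Coverage-vs-overlap score for two genes' mutation profiles.

    mut_a / mut_b are per-patient 0/1 mutation indicators. A score near 1
    means the two genes are mutated in a mutually exclusive pattern across
    patients; co-occurring mutations push the score toward 0.
    """
    covered = sum(1 for a, b in zip(mut_a, mut_b) if a or b)
    overlap = sum(1 for a, b in zip(mut_a, mut_b) if a and b)
    if covered == 0:
        return 0.0
    return (covered - overlap) / covered

# Toy cohort of six patients: perfectly exclusive vs. heavily overlapping pairs.
exclusive = mutual_exclusivity([1, 1, 0, 0, 0, 1], [0, 0, 1, 1, 0, 0])
overlapping = mutual_exclusivity([1, 1, 0, 0, 0, 1], [1, 1, 1, 0, 0, 1])
```

In practice, a permutation test or an exact test is needed to decide whether an observed exclusivity score is larger than expected by chance given the genes' mutation frequencies.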
In contrast, tumor purity is dependent on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanism in tumor and immune reactions. For example, McGrail et al. identified a relationship between the DNA damage response protein and immune cell infiltration in cancer. The analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining subnetworks of genes other than immune cell infiltration by considering tumor purity [102].
3.3. Drug Target Identification with Network Information
In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomic and chemical information. The proposed approaches investigated such drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug targets and drug response by integrating networks with chemical and multi-omics datasets. In a recent survey study, Chen et al. compared 13 computational methods for drug response prediction and found that gene expression profiles are crucial information for drug response prediction [105].
Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models and aim at discovering potential new cancer drugs with a higher probability than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. This synthetic lethality and related-drug datasets can be integrated for an effective combination of anticancer therapeutic strategy with non-cancer drug repurposing.
4. Deep Learning in Cancer Research
DNN models have developed rapidly, become more sophisticated, and are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for DNN models, which require large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].
In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1); therefore, an integration of GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of current deep learning in bioinformatics research [115].
4.1. Challenges for Deep Learning in Cancer Research
Many studies in biology and medicine used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, particularly when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116] and has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data, using TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, GAN algorithms have been receiving more attention in single-cell transcriptomics because they have been recognized as a complementary technique to overcome the limitations of scRNA-seq [123].
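Before DNN- or GAN-based imputers, a classical baseline for missing-value imputation is k-nearest-neighbor averaging over similar samples. A minimal sketch with a toy samples-by-genes matrix (values and dimensions are illustrative; real omics imputation operates on thousands of genes and normalized data):

```python
import math

def knn_impute(matrix, k=2):
    """Impute missing values (None) in a samples x genes matrix by
    averaging the k most similar samples, where similarity is the
    Euclidean distance over the commonly observed genes."""
    imputed = [row[:] for row in matrix]
    for i, row in enumerate(matrix):
        for j, val in enumerate(row):
            if val is None:
                candidates = []
                for i2, other in enumerate(matrix):
                    if i2 == i or other[j] is None:
                        continue
                    shared = [(a, b) for a, b in zip(row, other)
                              if a is not None and b is not None]
                    if not shared:
                        continue
                    dist = math.sqrt(sum((a - b) ** 2 for a, b in shared))
                    candidates.append((dist, other[j]))
                candidates.sort(key=lambda t: t[0])
                neighbors = candidates[:k]
                imputed[i][j] = sum(v for _, v in neighbors) / len(neighbors)
    return imputed

# Sample 1 is missing gene 3; its nearest neighbors are samples 0 and 3.
matrix = [[1.0, 2.0, 3.0],
          [1.1, 2.1, None],
          [5.0, 6.0, 7.0],
          [1.0, 2.0, 3.1]]
filled = knn_impute(matrix)
```

Learned imputers such as TDimpute aim to beat this kind of baseline by exploiting cross-omics regulatory structure rather than sample-to-sample similarity alone.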
In contrast to data imputation and generation, other machine learning approaches aim to cope with limited datasets in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space using related datasets and guide the model to solve a specific set of problems [124]. These approaches pre-train models on data of similar characteristics and type, but different from the problem set; after pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are trying to introduce few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes with gene expression profiles [129].
Figure 3. (a) In various studies, NGS data are transformed into different forms: the 2-D transformed form serves as input for convolution layers, and omics data are transformed to the pathway level, GO enrichment scores, or functional spectra. (b) DNN approaches to handle the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning to pre-train the model with other datasets and fine-tune it. (c) Various types of information in biology. (d) Graph neural network examples: a GCN is applied to aggregate neighbor information. (Created with BioRender.com).
4.2. Molecular Characterization with Network and DNN Models
DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier; they implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring gene mutation profiles serves as the input for the convolution layer, which aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding to a 2-D image, DeepCC transformed gene expression data into functional spectra; the resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
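The 1-D-to-2-D embedding step is a simple reshape once the genes are ordered by chromosome position. A minimal sketch (the profile values, grid width, and zero padding are illustrative choices, not the settings of the cited models):

```python
def embed_expression_2d(expression, width=8, pad_value=0.0):
    """Reshape a 1-D gene expression profile (genes pre-sorted by
    chromosome position) into a 2-D grid, so a convolutional layer
    can slide over chromosomally adjacent genes."""
    rows = []
    for start in range(0, len(expression), width):
        row = expression[start:start + width]
        row += [pad_value] * (width - len(row))  # zero-pad the last row
        rows.append(row)
    return rows

# 10 toy expression values folded into a 3 x 4 grid (last row padded).
profile = [0.5, 1.2, 0.0, 3.1, 2.2, 0.7, 1.9, 0.3, 4.0, 0.1]
grid = embed_expression_2d(profile, width=4)
```

The choice of row width determines which genes become vertical neighbors in the grid, so it directly shapes what spatial patterns the convolution can detect.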
Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model obtained even better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].
4.3. Tumor Heterogeneity with Network and DNN Model
As described in Section 4.1, cancer heterogeneity, e.g., the tumor microenvironment, raises several issues, and so far there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ’Scaden’, a DNN model to deconvolve cell types in bulk-cell sequencing data for the investigation of intratumor heterogeneity. To overcome the lack of training datasets, the researchers generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. In principle, deconvolving cell types is possible if all expression profiles of the constituent cells are known [36]; however, this information is typically not available. Recently, single-cell sequencing-based studies were conducted to tackle this problem. Because of technical limitations, single-cell sequencing data contain considerable missing data, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data; they aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder trained on gene-expression levels from single-cell sequencing. During training, the encoder and decoder act as a denoiser while embedding high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based approach can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
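The in silico bulk simulation strategy used to train Scaden can be sketched as mixing cell-type expression profiles with random fractions; the resulting pseudo-bulk profiles become training inputs, and the fractions become labels for the deconvolution model. The profile matrix and the flat Dirichlet prior below are illustrative assumptions, not Scaden's exact procedure:

```python
import numpy as np

def simulate_bulk(celltype_profiles, n_samples, rng=None):
    """Mix cell-type expression profiles (types x genes) with random
    fractions drawn from a flat Dirichlet to create pseudo-bulk samples.
    Returns (bulk: samples x genes, fractions: samples x types)."""
    rng = np.random.default_rng(rng)
    n_types = celltype_profiles.shape[0]
    fractions = rng.dirichlet(np.ones(n_types), size=n_samples)
    bulk = fractions @ celltype_profiles   # weighted mixture per sample
    return bulk, fractions

# Toy example: 3 cell types, 5 genes, 100 simulated bulk samples.
profiles = np.abs(np.random.default_rng(0).normal(size=(3, 5)))
bulk, frac = simulate_bulk(profiles, 100, rng=0)
```

A deconvolution model trained on (bulk, frac) pairs can then be applied to real bulk samples whose fractions are unknown.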
4.4. Drug Target Identification with Networks and DNN Models
In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurposing non-oncology drugs for cancer treatment. Such drug repurposing is faster than de novo drug discovery, and combination therapy with a non-oncology drug can help overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders, using a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug-disease dataset [15].
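The random-walk representation step in deepDR can be sketched as sampling fixed-length walks from each node of a drug-related network; an embedding model (skip-gram, as in DeepWalk-style methods) would then turn the walks into feature vectors. The toy network, walk length, and uniform transition probabilities below are illustrative assumptions:

```python
import random

def random_walks(adjacency, walk_len=5, walks_per_node=2, seed=0):
    """Sample uniform random walks from every node of an adjacency dict.
    Returns a list of walks (each a list of node ids), which a skip-gram
    style model would consume to learn node embeddings."""
    rng = random.Random(seed)
    walks = []
    for start in adjacency:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adjacency[walk[-1]]
                if not nbrs:            # dead end: stop this walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy drug-gene network as an adjacency dict (hypothetical node names).
net = {"drugA": ["gene1"], "gene1": ["drugA", "gene2"], "gene2": ["gene1"]}
walks = random_walks(net, walk_len=4)
```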
The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets together with other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug-sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can achieve higher prediction accuracy than single-omics methods; MOLI integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].
4.5. Graph Neural Network Model
In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks (GNNs) use biological networks as the base structure of the learning network itself. For instance, pathway or interactome information can be integrated as the learning structure of a DNN and aggregated as heterogeneous information. In a GNN, the convolution process operates on the provided network structure of the data; convolution on a biological network therefore lets the GNN focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d). Consequently, such a model can aggregate information from distant neighbors and thus outperform other machine learning models [142].
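The graph convolution described here can be sketched as one propagation step over a degree-normalized adjacency matrix with self-loops, so each gene's new feature vector aggregates those of its network neighbors. This follows the common GCN propagation rule; the learned weight matrix and nonlinearity are omitted for brevity:

```python
import numpy as np

def gcn_propagate(adj, features):
    """One graph-convolution aggregation step:
    A_hat = D^{-1/2} (A + I) D^{-1/2};  new_features = A_hat @ features.
    Learned weights and activation are omitted in this sketch."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return norm @ features                         # aggregate neighbors

# Toy 3-gene chain 0-1-2: gene 1 aggregates features from both neighbors.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = np.array([[1.0], [0.0], [1.0]])
out = gcn_propagate(adj, feats)
```

Stacking such layers is what lets a GNN pull in information from progressively more distant neighbors in the biological network.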
In the context of gene-expression inference, the main question is whether a gene’s expression level can be explained by aggregating the expression of its neighboring genes. A single-gene inference study by Dutil et al. showed that a GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic-lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures from RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways for subtype classification [147]. Furthermore, GNNs are receiving increasing attention in drug-repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c), and each proposed application specializes in different drug-related tasks. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also pointed out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins.
The third challenge is integrating heterogeneous network information: drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve such analyses. Lastly, although GNNs operate on graphs, their stacked layers still make the models hard to interpret [148].
4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge
The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, DNNs are hardly a panacea for all research questions, and in the following we discuss their potential limitations. In general, DNN models with NGS data face two significant issues: (i) data requirements and (ii) interpretability. Deep learning usually needs a large amount of training data for reasonable performance, which is more difficult to achieve with biomedical omics data than with, for instance, image data. Today, few NGS datasets are well curated and annotated for deep learning, which may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes: the highly stacked layers obscure the model’s decision-making rationale. Although methodologies to understand and interpret deep learning models have improved, the ambiguity of DNN decision-making still hinders the transition from deep learning models to translational medicine [149,150].
As described before, biological networks are employed in various computational analyses for cancer research, and studies applying DNNs have demonstrated many different ways to use such prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be established. The LINCS program analyzed data from ’The Connectivity Map (CMap) project’ to understand regulatory mechanisms in gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels can be inferred from only about 1000 genes, a list they called ’landmark genes’. Subsequently, Chen et al. started with these 978 landmark genes and predicted the expression levels of the remaining genes with DNN models; integrating public large-scale NGS data, the DNN showed better performance than a linear regression model. The authors conclude that this performance advantage originates from the DNN’s ability to model non-linear relationships between genes [153].
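The linear-regression baseline that the DNN of Chen et al. was compared against can be sketched as a closed-form least-squares fit of target-gene expression on the landmark genes. The toy dimensions below are illustrative; the real setting uses 978 landmarks and thousands of target genes:

```python
import numpy as np

def fit_landmark_regression(landmark, target):
    """Least-squares fit: target genes ~ W @ landmark genes + bias.
    landmark: samples x n_landmark; target: samples x n_target."""
    X = np.hstack([landmark, np.ones((landmark.shape[0], 1))])  # bias column
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef  # shape: (n_landmark + 1) x n_target

def predict(coef, landmark):
    X = np.hstack([landmark, np.ones((landmark.shape[0], 1))])
    return X @ coef

# Toy data: 50 samples, 10 "landmark" genes, 3 target genes whose
# expression is an exact linear function of the landmarks.
rng = np.random.default_rng(1)
L = rng.normal(size=(50, 10))
true_w = rng.normal(size=(10, 3))
T = L @ true_w + 0.5
coef = fit_landmark_regression(L, T)
pred = predict(coef, L)
```

A DNN replaces the single linear map with stacked nonlinear layers, which is where the reported gain over this baseline comes from.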
Following this study, Beltin et al. extensively investigated various biological networks in the same context of inferring gene expression levels. They set up a simplified representation of gene expression status and solved a binary classification task. To assess the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in the study incorporating TCGA and GTEx datasets, the random-network model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be seen as a panacea, and careful evaluation is required for each dataset and task. In particular, this result may not capture the underlying biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles because they mix expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.
“However, although recently sophisticated applications of deep learning showed improved accuracy, it does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, a particular research question, the technology, and network data have to be chosen carefully.”
Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
Hutter, C.; Zenklusen, J.C. The cancer genome atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.
Use of Systems Biology in Anti-Microbial Drug Development
Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965
In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.
Genome Sequences and Proteomic Structural Databases
In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has revealed the essential genes of the bacteria and provided a better understanding of the genetic diversity across strains that might confer a selective advantage (Coll et al., 2018). This will aid our understanding of the modes of antibiotic resistance within these strains and support structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have experimentally determined structures.
Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.
There is a dearth of information on the structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of the physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the Protein Data Bank (PDB). However, the high sequence similarity in protein-coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single-template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and for building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability, and impacts of mutations.
We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need to understand the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae: we first modeled the structure of this hetero-hexamer complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a drug used to treat tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “Computational Saturation Mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.
FIGURE 2
Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the β-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which gradients from red (high destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the β-subunit of RNA polymerase, the S437H substitution, which resulted in the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and rifampin affinity to the β-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen bond interactions. Ring-ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky-blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).
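The per-residue weighting described in this caption is simple bookkeeping over a table of predicted stability changes: for each position, take the most destabilizing (most negative, in mCSM's sign convention) ΔΔG among the 19 possible substitutions. The ΔΔG values below, apart from the S437H value quoted in the caption, are invented for illustration; real values would come from mCSM predictions:

```python
def max_destabilization(ddg_table, wild_type):
    """For each residue position, return the minimum (most negative,
    i.e. most destabilizing) predicted ddG over the 19 substitutions.
    ddg_table: dict position -> dict mutant_aa -> ddG (kcal/mol).
    wild_type: dict position -> wild-type amino acid (excluded)."""
    out = {}
    for pos, muts in ddg_table.items():
        wt = wild_type[pos]
        out[pos] = min(v for aa, v in muts.items() if aa != wt)
    return out

# Illustrative (mostly made-up) ddG values for two positions; the
# S437H value (-1.701 kcal/mol) is the one quoted in the caption.
wild_type = {431: "F", 437: "S"}
table = {
    431: {"A": -0.9, "G": -1.2, "F": 0.0},
    437: {"H": -1.701, "A": -0.4, "S": 0.0},
}
worst = max_destabilization(table, wild_type)
```

The resulting per-position minima are exactly the weights that drive the red-to-white color map in panel (A).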
Examples of Understanding and Combatting Resistance
The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.
FIGURE 3
Figure 3.(A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).
Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:
Non-toxic antiviral nanoparticles with a broad spectrum of virus inhibition
Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc
Non-toxic antiviral nanoparticles with a broad spectrum of virus inhibition
Infectious diseases account for 20% of global deaths, with viruses responsible for over a third of these (1). Lower respiratory infections and human immunodeficiency virus (HIV) are among the top ten causes of death worldwide, and both contribute significantly to health-care costs (2). Every year, emerging viruses (such as Ebola) add to the death toll. Vaccination is the most effective method of avoiding viral infection, but vaccines exist for only a few viruses and are not available in all parts of the world (3). After infection, antiviral medications are the only option; unfortunately, only a limited number of antiviral drugs are approved. Broad-spectrum antiviral drugs that can act on a wide range of existing and emerging viruses are therefore critically needed.
The three types of treatments currently available are small molecules (such as nucleoside analogues and peptidomimetics), proteins that stimulate the immune system (such as interferon), and oligonucleotides (for example, fomivirsen). The primary targets include HIV, hepatitis B and C viruses, herpes simplex virus (HSV), human cytomegalovirus (HCMV), and influenza virus. These drugs act mainly on viral enzymes, which are necessary for viral replication but differ from host enzymes, ensuring selective function. The specificity of antivirals is nevertheless far from perfect because viruses rely on the biosynthesis machinery of infected cells for reproduction, which results in a widespread, inherent toxicity associated with such therapy. Moreover, most viruses mutate rapidly due to their error-prone replication mechanisms and so often develop resistance (4). Finally, since antiviral substances target viral proteins, it is challenging to build broad-spectrum antivirals that can act on phylogenetically and structurally diverse viruses.
Over the last decade, breakthroughs in nanotechnology have allowed scientists to develop highly specialized nanoparticles capable of traveling to specific cells within the human body. Guided by modern computer modeling, these nanoparticles can be designed not only to bind a broad spectrum of destructive viruses but also to destroy them.
An international team of researchers led by University of Illinois at Chicago chemistry professor Petr Kral developed novel antiviral nanoparticles that bind to a variety of viruses, including herpes simplex virus, human papillomavirus, respiratory syncytial virus, dengue virus, and lentiviruses. In contrast to conventional broad-spectrum antivirals, which merely prevent viruses from invading cells, the new nanoparticles destroy viruses. The team’s findings have been published in the journal Nature Materials.
The goal of this new study was to create an antiviral nanoparticle that could exploit the HSPG binding process to not only attach tightly to virus particles but also destroy them. The work involved researchers ranging from biochemists to computer-modeling experts, and the team eventually arrived at a nanoparticle design that could, in principle, accurately target and kill individual virus particles.
The first step in infection by many viruses is attachment to heparan sulfate proteoglycans (HSPGs) on cell surfaces. Some existing antiviral medications prevent infection by imitating HSPG’s interaction with the virus. An important constraint of these antivirals is that the interaction is weak and does not kill the virus.
Kral said
We knew how the nanoparticles should bind based on the overall composition of HSPG-binding viral domains and the structures of the nanoparticles, but we did not understand why the various nanoparticles act so differently in terms of both bond strength and viral entry into cells
Kral and colleagues helped resolve these challenges and guided the experimentalists in fine-tuning the nanoparticle design so that it performed better.
The researchers employed advanced computer-modeling techniques to build exact structures of several target viruses and nanoparticles, down to the position of each atom. A deep grasp of the interactions between individual atom groups in viruses and nanoparticles allowed the scientists to evaluate the strength and duration of prospective bonds between the two entities and to forecast how the bonds could change over time and eventually destroy the virus.
Atomistic MD simulations of an L1 pentamer of HPV capsid protein with the small NP (2.4 nm core, 100 MUP ligands). The NP and the protein are shown in van der Waals (vdW) and ribbon representations, respectively. In the protein, the HSPG-binding amino acids are displayed in vdW representation.
Kral added
We were able to provide the design team with the data needed to construct a prototype antiviral of high efficiency and safety, which may be utilized to save lives
Following the development of the prototype nanoparticle design, the team conducted several in vitro experiments, which demonstrated success in binding and eventually destroying a wide spectrum of viruses, including herpes simplex virus, human papillomaviruses, respiratory syncytial virus, dengue virus, and lentiviruses.
The research is still in its early phases, and further in vivo animal testing is needed to confirm the nanoparticles’ safety, but this is a promising new road toward efficient antiviral therapies that could save millions of people from devastating virus infections each year.
Cagno, V., Andreozzi, P., D’Alicarnasso, M., Silva, P. J., Mueller, M., Galloux, M., … & Stellacci, F. (2018). Broad-spectrum non-toxic antiviral nanoparticles with a virucidal inhibition mechanism. Nature materials, 17(2), 195-203. https://www.nature.com/articles/nmat5053
Other Related Articles published in this Open Access Online Scientific Journal include the following:
Rare earth-doped nanoparticles applications in biological imaging and tumor treatment
Thriving Vaccines and Research: Weizmann Institute Coronavirus Research Development
Reporter: Amandeep Kaur, B.Sc., M.Sc.
In early February, Prof. Eran Segal mentioned in one of his tweets that “We say with caution, the magic has started.”
The article reported that this statement by Prof. Segal reflected the decreasing numbers of COVID-19 cases, severe infections, and hospitalizations brought about by the rapid vaccination campaign throughout Israel. In another tweet, Prof. Segal urged the country to remain cautious, noting that there is still a long way to go in the search for scientific solutions.
A daylong webinar entitled “COVID-19: The epidemic that rattles the world” was a great initiative by the Weizmann Institute to share its scientific knowledge about the infection among Israeli institutions and scientists. Prof. Gideon Schreiber and Dr. Ron Diskin organized the event with the support of the Weizmann Coronavirus Response Fund and the Israel Society for Biochemistry and Molecular Biology. Speakers were invited from the Hebrew University of Jerusalem, Tel-Aviv University, the Israel Institute for Biological Research (IIBR), and Kaplan Medical Center, and addressed the molecular structure and infection biology of the virus, treatments and medications for COVID-19, and the positive and negative effects of the pandemic.
The article reported that with the emergence of the pandemic, scientists at Weizmann started more than 60 projects to explore the virus from a range of perspectives. Funds raised by communities worldwide for the Weizmann Coronavirus Response Fund supported scientists and investigators in elucidating the chemistry, physics, and biology behind SARS-CoV-2 infection.
Prof. Avi Levy, the coordinator of the Weizmann Institute’s coronavirus research efforts, mentioned “The vaccines are here, and they will drastically reduce infection rates. But the coronavirus can mutate, and there are many similar infectious diseases out there to be dealt with. All of this research is critical to understanding all sorts of viruses and to preempting any future pandemics.”
The following are few important projects with recent updates reported in the article.
Mapping a hijacker’s methods
Dr. Noam Stern-Ginossar studied the strategies by which the virus invades healthy cells and hijacks the cell’s systems to divide and reproduce. The article reported that viruses take over the genetic translation system, and mainly the ribosomes, to produce viral proteins. Dr. Stern-Ginossar used a novel approach known as ‘ribosome profiling’ to create a map of the translational events taking place along the viral genome, which in turn maps the full repertoire of viral proteins produced inside the host.
She and her team members teamed up with Weizmann’s de Botton Institute for Protein Profiling and researchers at IIBR to understand the hijacking instructions of the coronavirus and to develop tools for treatment and therapies. The scientists generated a high-resolution map of the coding regions in the SARS-CoV-2 genome using ribosome-profiling techniques, which allowed them to quantify the expression of vital zones along the virus genome that regulate the translation of viral proteins. The study, published in Nature in January, explains the hijacking process and reports that the virus produces more instructions, in the form of viral mRNA, than the host and thus dominates the host cell’s translation process. The researchers also clarified a misconception: the virus does not force the host cell to translate viral mRNA more efficiently than the host’s own; rather, the sheer quantity of viral translation instructions drives the hijacking. This study provides valuable insights for the development of effective vaccines and drugs against COVID-19 infection.
Like chutzpah, some things don’t translate
Prof. Igor Ulitsky and his team worked on the untranslated regions of the viral genome. The article reported that not all parts of the viral transcript are translated into protein; some instead play roles in protein production and infection that remain unknown. These regions may affect the molecular environment of the translated zones. The Ulitsky group sought to characterize how the genetic sequences of regions that are not translated into proteins directly or indirectly affect the stability and efficiency of the translated sequences.
Initially, the scientists created a library of about 6,000 untranslated sequence regions to study their functions. In collaboration with Dr. Noam Stern-Ginossar’s lab, the researchers of Ulitsky’s team worked on the Nsp1 protein and focused on how such regions affect Nsp1 protein production, which in turn enhances virulence. After solving some technical difficulties, the researchers developed a new, more reliable protocol that involved infecting cells with variants from the initial library. Within a few months, the researchers expect to obtain a more detailed map of how the stability of Nsp1 protein production is affected by specific sequences in the untranslated regions.
The landscape of elimination
The article reported that the body's immune defense against infection rests on two main components: HLA (human leukocyte antigen) molecules and T cells. HLA molecules are proteins on the cell surface that carry peptide fragments from inside the infected cell to its surface; these peptide fragments are recognized by the T cells of the immune system, which then destroy the infected cell. Samuels' group asked how the body's surveillance system recognizes the appropriate virus-derived peptides and targets them for destruction. They isolated and analyzed the 'HLA peptidome', the complete set of peptides bound to HLA proteins, from inside SARS-CoV-2-infected cells.
Analyzing the infected cells, they found 26 class I and 36 class II HLA peptides, presented by HLA types found in 99% of the population around the world. Two class I peptides were commonly present on the cell surface, and two other peptides were derived from rare coronavirus proteins, meaning these specific coronavirus peptides were marked for easy detection. Among the identified peptides, two were novel discoveries and seven others had previously been shown to induce an immune response. These results will help in developing new vaccines against new coronavirus mutation variants.
Gearing up ‘chain terminators’ to battle the coronavirus
Prof. Rotem Sorek and his lab discovered a family of enzymes within bacteria that produce novel antiviral molecules. These small molecules, manufactured by the bacteria, act as 'chain terminators' against viruses that invade them. The study, published in Nature in January, reported that these molecules trigger a chemical reaction that halts the virus's ability to replicate: they are modified nucleotide derivatives that are incorporated into the replicating virus at the molecular level and obstruct its machinery.
Prof. Sorek and his group propose that these new molecules could serve as potential antiviral drugs, based on the chain-termination mechanism already exploited by antiviral drugs in current clinical use. Yeda Research and Development has licensed these novel small molecules to a company to test their antiviral activity against SARS-CoV-2 infection. Such discoveries provide evidence that the bacterial immune system is a potential repository of many natural antiviral compounds.
Resolving borderline diagnoses
Currently, real-time polymerase chain reaction (RT-PCR) is the method of choice, used extensively for the diagnosis of COVID-19 patients around the globe. Despite its benefits, RT-PCR has problems: false-negative and false-positive results, and limitations in detecting new mutations in the virus and emerging variants in the population worldwide. Prof. Eran Elinav's lab and Prof. Ido Amit's lab are working collaboratively to develop a massively parallel, next-generation sequencing technique that tests more effectively and precisely than RT-PCR. This technique can characterize emerging SARS-CoV-2 mutations; co-occurring viral, bacterial, and fungal infections; and response patterns in humans.
The scientists identified viral variants and distinctive host signatures that help to differentiate infected from non-infected individuals, and patients with mild symptoms from those with severe symptoms.
At the Hadassah-Hebrew University Medical Center, Profs. Elinav and Amit are running trials of the pipeline to test its accuracy in borderline cases, where RT-PCR gives ambiguous or incorrect results. For proper diagnosis and patient stratification, the researchers calibrated their severity-prediction matrix. Collectively, the scientists aim to develop a reliable system that resolves borderline RT-PCR cases, identifies new virus variants with known and novel mutations, and uses data from the human host to distinguish patients who need close observation and intensive treatment from those with mild complications who can be managed conservatively.
Moon shot consortium refining drug options
The ‘Moon shot’ consortium was launched almost a year ago with the goal of developing a novel antiviral drug against SARS-CoV-2. It is led by Dr. Nir London of the Department of Chemical and Structural Biology at Weizmann, Prof. Frank von Delft of Oxford University, and the UK’s Diamond Light Source synchrotron facility.
To advance the series of novel molecules from conception to evidence of antiviral activity within a year, the scientists gathered support, guidance, expertise, and resources from researchers around the world. The article reported that the researchers built an alternative, fully transparent template for drug discovery, one that avoids the hindrances of intellectual property and red tape.
The new molecules discovered by the scientists inhibit a protease, a SARS-CoV-2 protein that plays an important role in viral replication. The team collaborated with the Israel Institute for Biological Research and several other labs across the globe to demonstrate the efficacy of the molecules not only in vitro but also against live virus.
Further research is under way, including safety and efficacy assays of these potential drugs in living models. The first trial in mice began in March. In addition, further drugs are being optimized and nominated for preclinical testing as drug candidates.
As part of the all-of-America approach to fighting the COVID-19 pandemic, the U.S. Food and Drug Administration has been working with partners across the U.S. government, academia and industry to expedite the development and availability of critical medical products to treat this novel virus. Today, we are providing an update on one potential treatment called convalescent plasma and encouraging those who have recovered from COVID-19 to donate plasma to help others fight this disease.
Convalescent plasma is an antibody-rich product made from blood donated by people who have recovered from the disease caused by the virus. Prior experience with respiratory viruses and limited data that have emerged from China suggest that convalescent plasma has the potential to lessen the severity or shorten the length of illness caused by COVID-19. It is important that we evaluate this potential therapy in the context of clinical trials, through expanded access, as well as facilitate emergency access for individual patients, as appropriate.
The response to the agency’s recently announced national efforts to facilitate the development of and access to convalescent plasma has been tremendous. More than 1,040 sites and 950 physician investigators nationwide have signed on to participate in the Mayo Clinic-led expanded access protocol. A number of clinical trials are also taking place to evaluate the safety and efficacy of convalescent plasma, and the FDA has granted numerous single patient emergency investigational new drug (eIND) applications as well.
FDA issues guidelines on clinical trials and obtaining emergency enrollment concerning convalescent plasma
FDA has issued guidance to provide recommendations to health care providers and investigators on the administration and study of investigational convalescent plasma collected from individuals who have recovered from COVID-19 (COVID-19 convalescent plasma) during the public health emergency.
The guidance provides recommendations on the following:
Because COVID-19 convalescent plasma has not yet been approved for use by FDA, it is regulated as an investigational product. A health care provider must participate in one of the pathways described below. FDA does not collect COVID-19 convalescent plasma or provide COVID-19 convalescent plasma. Health care providers or acute care facilities should instead obtain COVID-19 convalescent plasma from an FDA-registered blood establishment.
Excerpts from the guidance document are provided below.
Background
The Food and Drug Administration (FDA or Agency) plays a critical role in protecting the United States (U.S.) from threats including emerging infectious diseases, such as the Coronavirus Disease 2019 (COVID-19) pandemic. FDA is committed to providing timely guidance to support response efforts to this pandemic.
One investigational treatment being explored for COVID-19 is the use of convalescent plasma collected from individuals who have recovered from COVID-19. Convalescent plasma that contains antibodies to severe acute respiratory syndrome coronavirus 2 or SARS-CoV-2 (the virus that causes COVID-19) is being studied for administration to patients with COVID-19. Use of convalescent plasma has been studied in outbreaks of other respiratory infections, including the 2003 SARS-CoV-1 epidemic, the 2009-2010 H1N1 influenza virus pandemic, and the 2012 MERS-CoV epidemic.
Although promising, convalescent plasma has not yet been shown to be safe and effective as a treatment for COVID-19. Therefore, it is important to study the safety and efficacy of COVID-19 convalescent plasma in clinical trials.
Pathways for Use of Investigational COVID-19 Convalescent Plasma
The following pathways are available for administering or studying the use of COVID-19 convalescent plasma:
Clinical Trials
Investigators wishing to study the use of convalescent plasma in a clinical trial should submit requests to FDA for investigational use under the traditional IND regulatory pathway (21 CFR Part 312). CBER’s Office of Blood Research and Review is committed to engaging with sponsors and reviewing such requests expeditiously. During the COVID-19 pandemic, INDs may be submitted via email to CBERDCC_eMailSub@fda.hhs.gov.
Expanded Access
An IND application for expanded access is an alternative for use of COVID-19 convalescent plasma for patients with serious or immediately life-threatening COVID-19 disease who are not eligible or who are unable to participate in randomized clinical trials (21 CFR 312.305). FDA has worked with multiple federal partners and academia to open an expanded access protocol to facilitate access to COVID-19 convalescent plasma across the nation. Access to this investigational product may be available through participation of acute care facilities in an investigational expanded access protocol under an IND that is already in place.
Although participation in clinical trials or an expanded access program are ways for patients to obtain access to convalescent plasma, for various reasons these may not be readily available to all patients in potential need. Therefore, given the public health emergency that the COVID-19 pandemic presents, and while clinical trials are being conducted and a national expanded access protocol is available, FDA also is facilitating access to COVID-19 convalescent plasma for use in patients with serious or immediately life-threatening COVID-19 infections through the process of the patient’s physician requesting a single patient emergency IND (eIND) for the individual patient under 21 CFR 312.310. This process allows the use of an investigational drug for the treatment of an individual patient by a licensed physician upon FDA authorization, if the applicable regulatory criteria are met. Note, in such case, a licensed physician seeking to administer COVID-19 convalescent plasma to an individual patient must request the eIND (see 21 CFR 312.310(b)).
Today, the U.S. Food and Drug Administration issued an emergency use authorization (EUA) for investigational convalescent plasma for the treatment of COVID-19 in hospitalized patients as part of the agency’s ongoing efforts to fight COVID-19. Based on scientific evidence available, the FDA concluded, as outlined in its decision memorandum, this product may be effective in treating COVID-19 and that the known and potential benefits of the product outweigh the known and potential risks of the product.
Today’s action follows the FDA’s extensive review of the science and data generated over the past several months stemming from efforts to facilitate emergency access to convalescent plasma for patients as clinical trials to definitively demonstrate safety and efficacy remain ongoing.
The EUA authorizes the distribution of COVID-19 convalescent plasma in the U.S. and its administration by health care providers, as appropriate, to treat suspected or laboratory-confirmed COVID-19 in hospitalized patients with COVID-19.
Alex Azar, Health and Human Services Secretary:
“The FDA’s emergency authorization for convalescent plasma is a milestone achievement in President Trump’s efforts to save lives from COVID-19,” said Secretary Azar. “The Trump Administration recognized the potential of convalescent plasma early on. Months ago, the FDA, BARDA, and private partners began work on making this product available across the country while continuing to evaluate data through clinical trials. Our work on convalescent plasma has delivered broader access to the product than is available in any other country and reached more than 70,000 American patients so far. We are deeply grateful to Americans who have already donated and encourage individuals who have recovered from COVID-19 to consider donating convalescent plasma.”
Stephen M. Hahn, M.D., FDA Commissioner:
“I am committed to releasing safe and potentially helpful treatments for COVID-19 as quickly as possible in order to save lives. We’re encouraged by the early promising data that we’ve seen about convalescent plasma. The data from studies conducted this year shows that plasma from patients who’ve recovered from COVID-19 has the potential to help treat those who are suffering from the effects of getting this terrible virus,” said Dr. Hahn. “At the same time, we will continue to work with researchers to continue randomized clinical trials to study the safety and effectiveness of convalescent plasma in treating patients infected with the novel coronavirus.”
Scientific Evidence on Convalescent Plasma
Based on an evaluation of the EUA criteria and the totality of the available scientific evidence, the FDA’s Center for Biologics Evaluation and Research determined that the statutory criteria for issuing an EUA were met.
The FDA determined that it is reasonable to believe that COVID-19 convalescent plasma may be effective in lessening the severity or shortening the length of COVID-19 illness in some hospitalized patients. The agency also determined that the known and potential benefits of the product, when used to treat COVID-19, outweigh the known and potential risks of the product, and that there are no adequate, approved, and available alternative treatments.
CLINICAL MEMORANDUM

From: , OBRR/DBCD/CRS
To: , OBRR
Through: , OBRR/DBCD; , OBRR/DBCD; , OBRR/DBCD/CRS
Re: EUA 26382: Emergency Use Authorization (EUA) Request (original request 8/12/20; amended request 8/23/20)
Product: COVID-19 Convalescent Plasma
Items reviewed: EUA request; Fact Sheet for Health Care Providers; Fact Sheet for Recipients
Sponsor: Robert Kadlec, M.D., Assistant Secretary for Preparedness and Response (ASPR), Office of the Assistant Secretary for Preparedness and Response (ASPR), U.S. Department of Health and Human Services (HHS)

EXECUTIVE SUMMARY

COVID-19 Convalescent Plasma (CCP), an unapproved biological product, is proposed for use under an Emergency Use Authorization (EUA) under section 564 of the Federal Food, Drug, and Cosmetic Act (the Act) (21 USC 360bbb-3) as a passive immune therapy for the treatment of hospitalized patients with COVID-19, a serious or life-threatening disease. There currently is no adequate, approved, and available alternative to CCP for treating COVID-19. The sponsor has pointed to four lines of evidence to support that CCP may be effective in the treatment of hospitalized patients with COVID-19: 1) history of convalescent plasma for respiratory coronaviruses; 2) evidence of preclinical safety and efficacy in animal models; 3) published studies of the safety and efficacy of CCP; and 4) data on safety and efficacy from the National Expanded Access Treatment Protocol (EAP) sponsored by the Mayo Clinic. Considering the totality of the scientific evidence presented in the EUA, I conclude that current data for the use of CCP in adult hospitalized patients with COVID-19 support the conclusion that CCP meets the “may be effective” criterion for issuance of an EUA from section 564(c)(2)(A) of the Act. It is reasonable to conclude that the known and potential benefits of CCP outweigh the known and potential risks of CCP for the proposed EUA.
Current data suggest that the largest clinical benefit is associated with high-titer units of CCP administered early in the course of the disease.
A letter to Commissioner Hahn from Senator Warren and Senate Committee colleagues, asking for documentation of any communications between the FDA and the White House
August 25, 2020 Dr. Stephen M. Hahn, M.D. Commissioner of Food and Drugs U.S. Food and Drug Administration 10903 New Hampshire Avenue Silver Spring, MD 20993 Dear Commissioner Hahn: We write regarding the U.S. Food and Drug Administration’s (FDA) troubling decision earlier this week to issue an Emergency Use Authorization (EUA) for convalescent plasma as a treatment for coronavirus disease 2019 (COVID-19).1 Reports suggest that the FDA granted the EUA amid intense political pressure from President Trump and other Administration officials, despite limited evidence of convalescent plasma’s effectiveness as a COVID-19 treatment.2 To help us better understand whether the issuance of the blood plasma EUA was motivated by politics, we request copies of any and all communications between FDA and White House officials regarding the blood plasma EUA.
The authorization will allow health-care providers in the U.S. to use the plasma to treat hospitalized patients with Covid-19.
The FDA’s emergency use authorization came a day after President Trump accused the agency of delaying enrollment in clinical trials for vaccines or therapeutics.
The criticism from Trump and action from the FDA led some scientists to believe the authorization, which came on the eve of the GOP national convention, was politically motivated.
FDA Commissioner Dr. Stephen Hahn is walking back comments on the benefits of convalescent plasma, saying he could have done a better job of explaining the data on its effectiveness against the coronavirus after authorizing it for emergency use over the weekend.
In an interview with Bloomberg’s Drew Armstrong, FDA Commissioner Hahn reiterates that his decision was based on hard evidence and scientific fact, not political pressure. The whole interview is at the link below:
Dr. Hahn corrected his initial statement that about 35% of people would be cured by convalescent plasma. In the interview he stated:
I was trying to do what I do with patients, because patients often understand things in absolute terms versus relative terms. And I should’ve been more careful, there’s no question about it. What I was trying to get to is that if you look at a hundred patients who receive high titre, and a hundred patients who received low titre, the difference between those two particular subset of patients who had these specific criteria was a 35% reduction in mortality. So I frankly did not do a good job of explaining that.
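Hahn's correction turns on the distinction between relative and absolute risk reduction. The short sketch below works through that arithmetic; the mortality figure for the low-titre group is hypothetical, chosen only for illustration, and the 35% relative reduction is the single number taken from the quote.

```python
# Relative vs. absolute risk reduction, as in Dr. Hahn's clarification.
# The baseline mortality below is an assumed, illustrative value.
low_titre_mortality = 0.115                               # hypothetical baseline
high_titre_mortality = low_titre_mortality * (1 - 0.35)   # 35% relative reduction

absolute_reduction = low_titre_mortality - high_titre_mortality
relative_reduction = absolute_reduction / low_titre_mortality

# A 35% relative reduction corresponds here to only ~4 percentage points
# of absolute difference between the two groups.
print(f"high-titre mortality: {high_titre_mortality:.4f}")
print(f"absolute reduction:   {absolute_reduction:.4f}")
print(f"relative reduction:   {relative_reduction:.0%}")
```

The point of the correction: quoting "35%" without the comparator invites readers to hear it as an absolute cure rate, which is a much stronger claim than the data support.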
FDA colleagues had a frank discussion after the statement was made. Hahn is not asking others in HHS to retract their statements; he is concerned only that the FDA provides correct information to physicians and patients.
Hahn is worried that people will not enroll in clinical trials because of the chance they may be given a placebo.
He gave no opinion when asked whether the FDA should be an independent agency.
For more articles on COVID19 please go to our Coronavirus Portal at
(CNN) Every morning, Dr. David Fajgenbaum takes three life-saving pills. He wakes up his 21-month-old daughter Amelia to help feed her. He usually grabs some Greek yogurt to eat quickly before sitting down in his home office. Then he spends most of the next 14 hours leading dozens of fellow researchers and volunteers in a systematic review of all the drugs that physicians and researchers have used so far to treat Covid-19. His team has already pored over more than 8,000 papers on how to treat coronavirus patients.
The 35-year-old associate professor at the University of Pennsylvania Perelman School of Medicine leads the school’s Center for Cytokine Storm Treatment & Laboratory. For the last few years, he has dedicated his life to studying Castleman disease, a rare condition that nearly claimed his life. Against epic odds, he found a drug that saved his own life six years ago, by creating a collaborative method for organizing medical research that could be applicable to thousands of human diseases. But after seeing how the same types of flares of immune-signaling cells, called cytokine storms, kill both Castleman and Covid-19 patients alike, his lab has devoted nearly all of its resources to aiding doctors fighting the pandemic.
A global repository for Covid-19 treatment data
Researchers working with his lab have reviewed published data on more than 150 drugs that doctors around the world have used to treat nearly 50,000 patients diagnosed with Covid-19. They’ve made their analysis public in a database called the Covid-19 Registry of Off-label & New Agents (or CORONA for short).
It’s a central repository of all available data in scientific journals on all the therapies used so far to curb the pandemic. This information can help doctors treat patients and tell researchers how to build clinical trials. The team’s process resembles the coordination Fajgenbaum used as a medical student to discover that he could repurpose Sirolimus, an immunosuppressant drug approved for kidney transplant patients, to prevent his body from producing deadly flares of immune-signaling cells called cytokines. The 13 members of Fajgenbaum’s lab recruited dozens of other scientific colleagues to join their coronavirus effort. And what this group is finding has ramifications for scientists globally.
This effort by Dr. Fajgenbaum’s lab and the resultant collaborative effort shows the power and speed at which a coordinated open science effort can achieve goals. Below is the description of the phased efforts planned and completed from the CORONA website.
CORONA (COvid19 Registry of Off-label & New Agents)
Drug Repurposing for COVID-19
Our overarching vision: A world where data on all treatments that have been used against COVID19 are maintained in a central repository and analyzed so that physicians currently treating COVID19 patients know what treatments are most likely to help their patients and so that clinical trials can be appropriately prioritized.
Our team reviewed 2500+ papers & extracted data on over 9,000 COVID19 patients. We found 115 repurposed drugs that have been used to treat COVID19 patients and analyzed data on which ones seem most promising for clinical trials. This data is open source and can be used by physicians to treat patients and prioritize drugs for trials. The CDCN will keep this database updated as a resource for this global fight. Repurposed drugs give us the best chance to help COVID19 patients as quickly as possible! As disease hunters who have identified and repurposed drugs for Castleman disease, we’re applying our ChasingMyCure approach to COVID19.
From Fajgenbaum, D.C., Khor, J.S., Gorzewski, A. et al. Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review. Infect Dis Ther (2020). https://doi.org/10.1007/s40121-020-00303-8
The following is the abstract of, and link to, the metastudy, a systematic review of the literature with strict inclusion criteria. Data were curated from the published studies: a total of 9152 patients were evaluated for the treatment regimens used for COVID-19 complications, and clinical responses to those therapies were recorded. The main insights from the study were as follows:
Key Summary Points
Why carry out this study?
Data on drugs that have been used to treat COVID-19 worldwide are currently spread throughout disparate publications.
We performed a systematic review of the literature to identify drugs that have been tried in COVID-19 patients and to explore clinically meaningful response time.
What was learned from the study?
We identified 115 uniquely referenced treatments administered to COVID-19 patients. Antivirals were the most frequently administered class; combination lopinavir/ritonavir was the most frequently used treatment.
This study presents the latest status of off-label and experimental treatments for COVID-19. Studies such as this are important for all diseases, especially those that do not currently have definitive evidence from randomized controlled trials or approved therapies.
The emergence of SARS-CoV-2/2019 novel coronavirus (COVID-19) has created a global pandemic with no approved treatments or vaccines. Many treatments have already been administered to COVID-19 patients but have not been systematically evaluated. We performed a systematic literature review to identify all treatments reported to be administered to COVID-19 patients and to assess time to clinically meaningful response for treatments with sufficient data. We searched PubMed, BioRxiv, MedRxiv, and ChinaXiv for articles reporting treatments for COVID-19 patients published between 1 December 2019 and 27 March 2020. Data were analyzed descriptively. Of the 2706 articles identified, 155 studies met the inclusion criteria, comprising 9152 patients. The cohort was 45.4% female and 98.3% hospitalized, and mean (SD) age was 44.4 years (SD 21.0). The most frequently administered drug classes were antivirals, antibiotics, and corticosteroids, and of the 115 reported drugs, the most frequently administered was combination lopinavir/ritonavir, which was associated with a time to clinically meaningful response (complete symptom resolution or hospital discharge) of 11.7 (1.09) days. There were insufficient data to compare across treatments. Many treatments have been administered to the first 9152 reported cases of COVID-19. These data serve as the basis for an open-source registry of all reported treatments given to COVID-19 patients at www.CDCN.org/CORONA. Further work is needed to prioritize drugs for investigation in well-controlled clinical trials and treatment protocols.
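The review's core analysis is descriptive aggregation: group curated patient records by treatment, then summarize counts and time to clinically meaningful response. The sketch below illustrates that shape of computation on invented records; the drug names mirror ones mentioned in the abstract, but the numbers are not from the study.

```python
# Minimal sketch of the descriptive aggregation a systematic review like
# this performs. Records are invented for illustration only.
from statistics import mean, stdev

records = [  # (treatment, days to clinically meaningful response)
    ("lopinavir/ritonavir", 11), ("lopinavir/ritonavir", 13),
    ("lopinavir/ritonavir", 11), ("corticosteroid", 9),
    ("corticosteroid", 14), ("antibiotic", 10),
]

# Group response times by treatment.
summary = {}
for drug, days in records:
    summary.setdefault(drug, []).append(days)

# Report each treatment, most frequently administered first.
for drug, days in sorted(summary.items(), key=lambda kv: -len(kv[1])):
    sd = stdev(days) if len(days) > 1 else 0.0
    print(f"{drug}: n={len(days)}, mean response {mean(days):.1f} d (SD {sd:.2f})")
```

With real data, insufficient per-arm sample sizes are exactly what prevented the authors from comparing across treatments; the aggregation only ranks drugs for prioritization in proper trials.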
Our team continues to work diligently to maintain an updated listing of all treatments reported to be used in COVID19 patients from papers in PubMed. We are also re-analyzing publicly available COVID19 single cell transcriptomic data alongside our iMCD data to search for novel insights and therapeutic targets.
You can visit the following link to access a database viewer built and managed by Matt Chadsey, owner of Nonlinear Ventures.
If you are a physician treating COVID19 patients, please visit the FDA’s CURE ID app to report de-identified information about drugs you’ve used to treat COVID19 in just a couple minutes.
For more information on COVID19 on this Open Access Journal please see our Coronavirus Portal at
3.5.2.2 Disentangling molecular alterations from water-content changes in the aging human brain using quantitative MRI, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 3: AI in Medicine
Abstract
It is an open question whether aging-related changes throughout the brain are driven by a common factor or result from several distinct molecular mechanisms. Quantitative magnetic resonance imaging (qMRI) provides biophysical parametric measurements allowing for non-invasive mapping of the aging human brain. However, qMRI measurements change in response to both molecular composition and water content. Here, we present a tissue relaxivity approach that disentangles these two tissue components and decodes molecular information from the MRI signal. Our approach enables us to reveal the molecular composition of lipid samples and predict lipidomics measurements of the brain. It produces unique molecular signatures across the brain, which are correlated with specific gene-expression profiles. We uncover region-specific molecular changes associated with brain aging. These changes are independent of other MRI aging markers. Our approach opens the door to a quantitative characterization of the biological sources of aging that, until now, was possible only post mortem.
Introduction
The biology of the aging process is complex, and involves various physiological changes throughout cells and tissues1. One of the major changes is atrophy, which can be monitored by measuring macroscale brain volume reduction1,2. In some cases, atrophy can also be detected as localized microscale tissue loss reflected by increased water content3. This process is selective for specific brain regions and is thought to be correlated with cognitive decline in Alzheimer’s disease2,4,5. In addition to atrophy, there are molecular changes associated with the aging of both the normal and pathological brain5,6. Specifically, lipidome changes are observed with age, and are associated with several neurological diseases7,8,9,10,11.
It is an open question as to whether there are general principles that govern the aging process, or whether each system, tissue, or cell deteriorates with age for different reasons12,13. On one hand, the common-cause hypothesis proposes that different biological aging-related changes are the result of a single underlying factor14,15. This implies that various biomarkers of aging will be highly correlated16. On the other hand, the mosaic theory of aging suggests that there are several distinct aging mechanisms that have a heterogenous effect throughout the brain12,13. According to this latter view, combining different measurements of brain tissue is crucial in order to fully describe the state of the aging brain. To test these two competing hypotheses in the context of volumetric and molecular aging-related changes, it is essential to measure different biological aspects of brain tissue. Unfortunately, the molecular correlates of aging are not readily accessible by current in vivo imaging methods.
The main technique used for non-invasive mapping of the aging process in the human brain is magnetic resonance imaging (MRI)2,17,18,19. Advances in the field have led to the development of quantitative MRI (qMRI). This technique provides biophysical parametric measurements that are useful in the investigation and diagnosis of normal and abnormal aging20,21,22,23,24,25,26,27. qMRI parameters have been shown to be sensitive to the microenvironment of brain tissue and are therefore named in vivo histology28,29,30. Nevertheless, an important challenge in applying qMRI measurements is increasing their biological interpretability. It is common to assume that qMRI parameters are sensitive to the myelin fraction20,23,30,31,32,33, yet any brain tissue including myelin is a mixture of multiple lipids and proteins. Moreover, since water protons serve as the source of the MRI signal, the sensitivity of qMRI parameters to different molecular microenvironments may be confounded by their sensitivity to the water content of the tissue34,35. We hypothesized that the changes observed with aging in MRI measurements20,23,30,31,32,33,36 such as R1, R2, mean diffusivity (MD), and magnetization transfer saturation (MTsat)37, could be due to a combination of an increase in water content at the expense of tissue loss, and molecular alterations in the tissue.
Here, we present a qMRI analysis that separately addresses the contribution of changes in molecular composition and water content to brain aging. Disentangling these two factors goes beyond the widely accepted “myelin hypothesis” by increasing the biological specificity of qMRI measurements to the molecular composition of the brain. For this purpose, we generalize the concept of relaxivity, which is defined as the dependency of MR relaxation parameters on the concentration of a contrast agent38. Instead of a contrast agent, our approach exploits the qMRI measurement of the local non-water fraction39 to assess the relaxivity of the brain tissue itself. This approach allows us to decode the molecular composition from the MRI signal. In samples of known composition, our approach provides unique signatures for different brain lipids. In the live human brain, it produces unique molecular signatures for different brain regions. Moreover, these MRI signatures agree with post-mortem measurements of the brain lipid and macromolecular composition, as well as with specific gene-expression profiles. To further validate the sensitivity of the relaxivity signatures to molecular composition, we perform direct comparison of MRI and lipidomics on post-mortem brains. We exploit our approach for multidimensional characterization of aging-related changes that are associated with alterations in the molecular composition of the brain. Finally, we evaluate the spatial pattern of these changes throughout the brain, in order to compare the common-cause and the mosaic theories of aging in vivo.
Results
Different brain lipids have unique relaxivity signatures
The aging process in the brain is accompanied by changes in the chemophysical composition, as well as by regional alterations in water content. In order to examine the separate pattern of these changes, we developed a model system. This system was based on lipid samples comprising common brain lipids (phosphatidylcholine, sphingomyelin, phosphatidylserine, phosphatidylcholine-cholesterol, and phosphatidylinositol-phosphatidylcholine)7. Using the model system, we tested whether accounting for the effect of the water content on qMRI parameters provides sensitivity to fine molecular details such as the head groups that distinguish different membrane phospholipids. The non-water fraction of the lipid samples can be estimated by the qMRI measurement of lipid and macromolecular tissue volume (MTV, for full glossary of terms see Supplementary Table 1)39. By varying the concentration of the lipid samples, we could alter their MTV and then examine the effect of this manipulation on qMRI parameters. The parameters we estimated for the lipid samples were R1, R2, and MTsat. The potential ambiguity in the biological interpretation of qMRI parameters is demonstrated in Fig. 1a. On one hand, samples with similar lipid composition can present different R1 measurements (Fig. 1a, points 1 & 2). On the other hand, scanning samples with different lipid compositions may result in similar R1 measurements (Fig. 1a, points 2 & 3). This ambiguity stems from the confounding effect of the water content on the MR relaxation properties.
We evaluated the dependency of different qMRI parameters on the non-water fraction estimated by MTV. This analysis revealed strong linear dependencies (median R2 = 0.74, Fig. 1a, b and Supplementary Fig. 1a, b). These linear MTV dependencies change as a function of the lipid composition, reflecting the inherent relaxivity of the different lipids. We could therefore use the MTV derivatives of qMRI parameters (dqMRI/dMTV, i.e., the slope of the linear relationship between each qMRI parameter and MTV) as a measure that is sensitive to molecular composition. By accounting for the Multidimensional Dependency on MTV (“MDM”) of several qMRI parameters, a unique MRI relaxivity signature was revealed for each lipid (Fig. 1c). This implies that the water-related ambiguity demonstrated in the inset of Fig. 1a can be removed by measuring the MTV dependencies (Fig. 1c). Creating mixtures of several lipids provided supportive evidence for the generality of our framework. Figure 1d and Supplementary Fig. 1c show that the qMRI measurements of a mixture can be predicted by summing the MTV dependencies of pure lipids (for further details see Supplementary Note 1 and Supplementary Fig. 2). Furthermore, we used this biophysical model to predict the lipid composition of a mixture from its MDM measurements (Fig. 1e). This model provided a good estimation of the sphingomyelin (Spg) and phosphatidylserine (PS) content (R2 > 0.64) but failed to predict phosphatidylcholine (PtdCho) content (for further details see Supplementary Note 2). While lipids are considered to be a major source of the MRI signal in the brain40,41,42,43,44,45, our approach can be applied to other compounds to reveal differences in the MRI signal between different proteins, sugars, and ions (Supplementary Fig. 1d).
Hence, the relationships between qMRI parameters and MTV account for the effect of water on MRI measurements and could be of use in quantifying the biological and molecular contributions to the MRI signal of water protons.
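The slope-based computation described above can be sketched in a few lines. The example below is a minimal illustration, not the authors' code: for each qMRI parameter, a line is fit against MTV across a concentration series, and the vector of slopes forms the relaxivity signature. All numeric values are synthetic and purely illustrative.

```python
import numpy as np

def relaxivity_signature(mtv, qmri):
    """Slope of each qMRI parameter vs. MTV (dqMRI/dMTV)."""
    return {name: np.polyfit(mtv, values, 1)[0] for name, values in qmri.items()}

# Synthetic concentration series for one lipid sample (illustrative values only)
mtv = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # non-water volume fraction
qmri = {
    "R1":    0.35 + 2.0 * mtv,   # slope encodes the lipid's intrinsic relaxivity
    "R2":    8.00 + 30.0 * mtv,
    "MTsat": 0.10 + 5.0 * mtv,
}

signature = relaxivity_signature(mtv, qmri)
# signature["R1"] ~ 2.0, signature["R2"] ~ 30.0, signature["MTsat"] ~ 5.0
```

Two samples that happen to share an R1 value at one concentration (the ambiguity of Fig. 1a) would still be separated by their slope vectors, which is the point of measuring the MTV dependency rather than the raw parameter.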
The tissue relaxivity of the human brain is region-specific
In order to target age-related changes in molecular composition, we applied the same approach for the human brain (Fig. 2a).
We found that the linear dependency of qMRI parameters on MTV is not limited to in vitro samples; a similar relationship was also evident in the human brain (Fig. 2b and Supplementary Figs. 3–5). Importantly, different brain regions displayed a distinct dependency on MTV, indicating that the relaxivity of brain tissue is region-specific. Figure 2b provides an example of the regional linear trends of R1 and MTsat in a single subject. Remarkably, while the thalamus and the pallidum presented relatively similar R1 dependencies on MTV, their MTsat dependencies were different (p < 0.001, two-sample t-test). Compared to these two brain regions, frontal white-matter demonstrated different dependencies on MTV (p < 0.001, two-sample t-test). A better separation between brain regions can therefore be achieved by combining the MTV dependencies of several qMRI parameters (MTsat, MD, R1, and R2). The MTV derivatives of qMRI parameters are consistent across subjects (Fig. 2c and Supplementary Fig. 6), with good agreement between hemispheres (Supplementary Fig. 5). Moreover, they provide a novel pattern of differentiation between brain regions, which is not captured by conventional qMRI methods (Supplementary Fig. 7). In our lipid sample experiments, the MDM approach revealed unique relaxivity signatures of different lipids (Fig. 1c). Therefore, we attribute the observed diversity in the MTV derivatives of qMRI parameters across brain regions to the intrinsic heterogeneity in the chemophysical microenvironment of these regions. The multidimensional dependency of various qMRI parameters on MTV can be represented in the space of MTV derivatives to reveal a unique chemophysical MDM signature for different brain regions (Fig. 2d; see explanatory scheme of the MDM method in Supplementary Fig. 8).
Fig. 2: The MDM method provides region-specific signatures in the in vivo human brain. a Representative MTV, MTsat, and R1 maps. b Calculating the MDM signatures. The dependency of R1 (left) and MTsat (right) on MTV in three brain regions of a single subject. For each region, MTV values were pooled into bins (dots are the median of each bin; the shaded area is the median absolute deviation), and a linear fit was calculated (colored lines). The slopes of the linear fits represent the MTV derivatives of R1 and MTsat and vary across brain regions. c The reliability of the MDM method across subjects. Variation in the MTV derivatives of R1 (left) and MTsat (right) in young subjects (N = 23). Different colors represent 14 brain regions (see legend). The edges of each box represent the 25th and 75th percentiles, the median is in black, and the whiskers extend to the most extreme data points. Different brain regions show distinct MTV derivatives. d Unique MDM signatures for different brain regions (in different colors). Each axis is the MTV derivative (“MDM measurement”) of a different qMRI parameter (R1, MTsat, R2, and MD). The range of each axis is given in the legend. Colored traces extend between the MDM measurements; shaded areas represent the variation across subjects (N = 23). An overlay of all MDM signatures is marked with dashed lines.
The in vivo MDM approach captures ex vivo molecular profiles
To validate that the MDM signatures relate to the chemophysical composition of brain tissue, we compared them to a previous study that reported the phospholipid composition of the human brain7. First, we established the comparability between the in vivo MRI measurements and the reported post-mortem data. MTV measures the non-water fraction of the tissue, a quantity that is directly related to the total phospholipid content. Indeed, we found good agreement between the in vivo measurement of MTV and the total phospholipid content across brain regions (R2 = 0.95, Fig. 3a). Söderberg et al.7 identified a unique phospholipid composition for different brain regions, along with diverse ratios of phospholipids to proteins and cholesterol.
We compared this regional molecular variability to the regional variability in the MDM signatures. To capture the main axes of variation, we performed principal component analysis (PCA) on both the molecular composition of the different brain regions and their MDM signatures. For each of these two analyses, the first principal component (PC) explained >45% of the variance. The regional projection on the first PC of ex vivo molecular composition was highly correlated (R2 = 0.84, Fig. 3b) with the regional projection on the first PC of in vivo MDM signatures. This confirms that brain regions with a similar molecular composition have similar MDM signatures. Supplementary Fig. 9a provides the correlations of individual lipids with MDM. Importantly, neither MTV nor the first PC of standard qMRI parameters was as strongly correlated with the ex vivo molecular composition as the MDM (Supplementary Fig. 9b, c). We next used the MDM measurements as predictors of the molecular properties of different brain regions. Following our content predictions for lipid samples (Fig. 1e), we constructed a weighted linear model for the human data (for further details see Supplementary Note 3). To avoid overfitting, we reduced the number of fitted parameters by including only the MDM and molecular features that accounted for most of the regional variability. The MTV derivatives of R1 and MTsat accounted for most of the variance in MDM. Thus, we used these parameters as inputs to the linear model, while adjusting their weights through cross-validation. We tested the performance of this model in predicting the three molecular features that account for most of the variance in the ex vivo molecular composition. Remarkably, MRI-driven MDM measurements provided good predictions for the regional sphingomyelin composition (R2 = 0.56, p < 0.05 for the F-test, Fig. 3c) and the regional ratio of phospholipids to proteins (R2 = 0.56, p < 0.05 for the F-test, Fig. 3c).
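The PCA-based comparison above can be illustrated with a short sketch. The numbers below are entirely hypothetical (a made-up shared regional factor, not the study's data); the point is only the mechanics: project regions onto the first PC of each dataset, then correlate the two score vectors.

```python
import numpy as np

def first_pc_scores(X):
    """Project rows (brain regions) onto the first principal component."""
    Xc = X - X.mean(axis=0)            # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]                  # scores along the leading axis

rng = np.random.default_rng(7)
latent = rng.normal(size=10)           # a shared regional factor (10 regions)

# Hypothetical 10x4 MDM-slope matrix and 10x6 lipid-fraction matrix,
# both dominated by the same regional factor plus small noise
mdm = np.outer(latent, [1.0, -0.5, 2.0, 0.8]) + 0.05 * rng.normal(size=(10, 4))
lipids = np.outer(latent, [0.3, 1.2, -0.7, 0.9, 0.4, -1.1]) + 0.05 * rng.normal(size=(10, 6))

r = np.corrcoef(first_pc_scores(mdm), first_pc_scores(lipids))[0, 1]
r2 = r ** 2  # high when both PC1s track the same regional variation
```

When two datasets share a dominant axis of regional variation, as the MDM and ex vivo composition do here, their first-PC scores correlate strongly regardless of the differing feature sets.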
Last, we compared the cortical MDM signatures to a gene co-expression network based on a widespread survey of gene expression in the human brain46. Nineteen modules were derived from the gene network, each comprised of a group of genes that co-varies in space. Six out of the nineteen gene modules were significantly correlated with the first PC of MDM. Interestingly, the first PC of MDM across the cortex was correlated most strongly with the two gene modules associated with membranes and synapses (Fig. 4, for further details see Supplementary Note 4 and Supplementary Figs. 10 and 11).
Post-mortem validation for the lipidomic sensitivity of MDM
The aforementioned analyses demonstrate strong agreement between in vivo MDM measurements and ex vivo molecular composition based on a group-level comparison of two different datasets. Strikingly, we were able to replicate this result at the level of a single brain. To achieve this, we performed MRI scans (R1, MTsat, R2, MD, and MTV mapping) followed by histology of two fresh post-mortem porcine brains (Fig. 5a, b). First, we validated the qMRI estimation of MTV using dehydration techniques. MTV values estimated using MRI were in agreement with the non-water fraction found histologically (adjusted R2 = 0.64, p < 0.001 for the F-test, Fig. 5c).
Next, we estimated the lipid composition of different brain regions. Thin-layer chromatography (TLC) was employed to quantify seven neutral and polar lipids (Supplementary Table 2 and Supplementary Fig. 12a). In accordance with the analysis in Fig. 3, we performed PCA to capture the main axes of variation in lipidomics, standard qMRI parameters, and MDM. Figure 5d shows that MTV did not correlate with the molecular variability across the brain, estimated by the 1st PC of lipidomics. Likewise, the molecular variability did not agree with the 1st PC of standard qMRI parameters (Fig. 5e).
Last, we applied the MDM approach to the post-mortem porcine brain. Similar to the human brain, different porcine brain regions have unique MDM signatures (Fig. 5f, g and Supplementary Fig. 12b). Remarkably, we found that agreement between lipid composition and MRI measurements emerges at the level of the MDM signatures. The molecular variability across brain regions significantly correlated with the regional variability in the MDM signatures (adjusted R2 = 0.3, p < 0.01 for the F-test, Fig. 5h). Excluding from the linear regression five outlier brain regions, in which the histological lipidomics results were more than 1.5 standard deviations away from the center, yielded an even stronger correlation between MDM signatures and lipid composition (adjusted R2 = 0.55, p < 0.001 for the F-test, Supplementary Fig. 12c). This post-mortem analysis validates that the MDM approach allows us to capture molecular information using MRI at the level of the individual brain.
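A regression with this kind of outlier screening can be sketched as follows. The paper describes its criterion only verbally, so the implementation below, which drops points whose value lies more than 1.5 SD from the mean before refitting and reports an adjusted R2, is our interpretation, with invented data.

```python
import numpy as np

def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R^2 for a fit with the given number of predictors."""
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

def fit_excluding_outliers(x, y, n_sd=1.5):
    """Drop points more than n_sd standard deviations from the mean of y,
    then refit a simple linear regression on the remaining points."""
    keep = np.abs(y - y.mean()) <= n_sd * y.std()
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    y_hat = slope * x[keep] + intercept
    return adjusted_r2(y[keep], y_hat, n_predictors=1), keep

# Illustrative data: a clean linear trend with a single gross outlier
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0
y[5] += 50.0
r2_adj, keep = fit_excluding_outliers(x, y)
```

Removing the single corrupted point recovers a near-perfect fit, mirroring how the correlation in the paper strengthens once the five outlier regions are excluded.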
Disentangling water and molecular aging-related changes
After establishing the sensitivity of the MDM signatures to the molecular composition of the brain, we used them to evaluate the chemophysical changes of the aging process. To assess aging-related changes across the brain, we scanned younger and older subjects (18 older adults aged 67 ± 6 years and 23 younger adults aged 27 ± 2 years). First, we identified significant molecular aging-related changes in the MDM signatures of different brain regions (Figs. 6 and 7, right column; Supplementary Fig. 13). Next, we tested whether the changes in MRI measurements, observed with aging, result from a combination of changes in the molecular composition of the tissue and its water content. We found that although it is common to attribute age-related changes in R1 and MTsat to myelin28,30,36, these qMRI parameters combine several physiological aging aspects. For example, using R1 and MTsat we identified significant aging-related changes in the parietal cortex, the thalamus, the parietal white-matter and the temporal white-matter (Figs. 6 and 7, left column). However, the MDM approach revealed that these changes have different biological sources (Figs. 6 and 7, middle columns; see Supplementary Figs. 14–17 for more brain regions).
In agreement with the mosaic hypothesis, we identified distinct aging patterns for different brain regions. For example, in the hippocampus we found a change in R2* values related to a higher iron concentration with age, along with significant reduction in the total hippocampal volume (Fig. 8a). This age-related shrinkage was not accompanied by lower MTV values, indicating conserved tissue density (Fig. 7b). In addition, there was no significant difference in the hippocampal MDM signature with age (Fig. 7b). Cortical gray-matter areas also exhibited similar trends of volume reduction without major loss in tissue density (Fig. 8a). Unlike the gray matter, in the white matter we did not find volume reduction or large iron accumulation with age (Fig. 8a). However, we did find microscale changes with age in tissue composition, as captured by the MDM signature (Figs. 6a and 7c, and Supplementary Fig. 13), accompanied by a significant density-related decline in MTV (Fig. 8a). These findings are consistent with previous histological studies49,50,51 (see Discussion), and provide the ability to monitor in vivo the different components of the aging mosaic.
Last, to test whether the different biological aging trajectories presented in Fig. 8a share a common cause, we evaluated the correlations between them (Fig. 8b). Importantly, the chemophysical trajectory did not correlate significantly with the iron or volume aging patterns. The spatial distribution of water-related changes was found to correlate with iron content alterations (R2 = 0.27) and chemophysical alterations (R2 = 0.25). However, the strongest correlation between aging-related changes was found in volume and iron content (R2 = 0.77). As shown previously, this correlation may be explained to some extent by a systematic bias in automated tissue classification23. Additional analysis revealed that the different dimensions of the MDM signature capture distinct patterns of aging-related changes (Supplementary Fig. 30). Hence, complementary information regarding the various chemophysical mechanisms underlying brain aging could be gained by combining them.
Discussion
Normal brain aging involves multiple changes, at both the microscale and macroscale level. MRI is the main tool for in vivo evaluation of such age-related changes in the human brain. Here, we propose to improve the interpretation of MRI findings by accounting for the fundamental effect of the water content on the imaging parameters. This approach allows for non-invasive mapping of the molecular composition in the aging human brain.
Our work is part of a major paradigm shift in the field of MRI toward in vivo histology30,36,52. The MDM approach contributes to this important change by providing a rigorously developed, hypothesis-driven biophysical framework. We demonstrated the power of our framework, starting from simple pure lipid phantoms to more complicated lipid mixtures, and from there to the full complexity of the brain. In the brain, we show both in vivo and post-mortem validations for the molecular sensitivity of the MDM signatures. Early observations related different qMRI parameters to changes in the fraction of myelin20,23,30,31,32,33,36. The current approach enriches this view and provides better sensitivity to the molecular composition and fraction of myelin and other tissue components.
We developed a unique phantom system of lipid samples to validate our method. While the phantom system is clearly far from the complexity of brain tissue, its simplicity allowed us to verify the specificity of our method to the chemophysical environment. Remarkably, our approach revealed unique signatures for different lipids, and is therefore sensitive even to relatively subtle details that distinguish one lipid from another. We chose to validate our approach using membrane lipids based on previous experiments40,41,42,43,44,45. Nevertheless, we acknowledge the fact that brain tissue comprises many other compounds besides lipids, such as proteins, sugars, and ions. As we have shown, these other compounds also exhibit unique dependencies on MTV. The effect of such compounds, along with other factors such as microstructure and multi-compartment organization28, is probably captured when we apply the MDM approach to the in vivo human brain. Therefore, the phantoms were made to examine the MRI sensitivity to the chemophysical environment, and the human brain data were used to measure the true biological effects in a complex in vivo environment.
Our relaxivity approach captures the molecular signatures of the tissue, but is limited in its ability to describe the full complexity of the chemophysical environment of the human brain. For example, R1 and R2, which are used to generate the MDM signatures, are also sensitive to the iron content23,48,52. However, we found that most of our findings cannot be attributed to alterations in iron content as measured with R2* (for more details see Supplementary Note 5). While further isolating different molecular components is of great importance, we argue that accounting for the major effect of water on qMRI parameters (for R2 distributions see Supplementary Fig. 5) is a crucial step towards more specific qMRI interpretation.
We provide evidence from lipid samples and post-mortem data for the sensitivity of the MDM signatures to the molecular environment (Figs. 1e, 3b, and 5h). The variability of MDM values between human brain regions also correlated with specific gene-expression profiles (Fig. 4). While the comparison of in vivo human brain measurements to previously published ex vivo findings is based on two different datasets, these measurements are highly stable across normal subjects and the intersubject variability is much smaller than the regional variability. The agreement between the modalities provides strong evidence for the ability of our method to capture molecular information.
Remarkably, we were able to demonstrate the sensitivity of MDM signatures to lipid composition using direct comparison on post-mortem porcine brains. Even though there are many challenges in scanning post-mortem tissue, segmenting it, and comparing it to anatomically relevant histological results, we were able to replicate our in vivo findings. We provide histological validation for the MRI estimation of MTV. Moreover, we find that while standard qMRI parameters and MTV do not explain the lipidomic variability across the brain, the MDM signatures are in agreement with histological results. Lipids constitute the majority of the brain’s dry weight and are known to be important for maintaining neural conduction and chemical balance53,54. The brain lipidome was shown to have a great deal of structural and functional diversity and was found to vary according to age, gender, brain region, and cell type55. Disruptions of the brain lipid metabolism have been linked to different disorders, including Alzheimer’s disease, Parkinson’s disease, depression, and anxiety7,8,11,54,55,56,57. Our results indicate that the MDM approach enhances the consistency between MRI-driven measurements and lipidomics, compared with standard qMRI parameters.
The simplicity of our model, which is based on a first-order approximation of qMRI dependencies, has great advantages in the modeling of complex environments. Importantly, we used lipid samples to show that the contributions of different mixture components can be summed linearly (Fig. 1d). For contrast agents, the relaxivity is used to characterize the efficiency of different agents. Here, we treated the tissue itself, rather than a contrast material, as an agent to compute the relaxivity of the tissue. While relaxivity is usually calculated for R1 and R2, we extended this concept to other qMRI parameters. Our results showed that the tissue relaxivity changes as a function of the molecular composition. This suggests that the relaxivity of the tissue relates to the surface interaction between the water and the chemophysical environment. A theoretical formulation for the effect of the surface interaction on proton relaxation has been proposed before58,59. Specifically, a biophysical model for the linear relationship of R1 and R2 to the inverse of the water content (1/WC = 1/(1 − MTV)) was suggested by Fullerton et al.43. Interestingly, 1/WC varies almost linearly with MTV in the physiological range of MTV values. Applying our approach with 1/WC instead of MTV produces relatively similar results (Supplementary Fig. 28). However, using MTV as a measure of tissue relaxivity allowed us to generalize the linear model to multiple qMRI parameters, thus producing multidimensional MDM signatures.
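The near-linearity of 1/WC in MTV, which makes the two formulations comparable, is easy to check numerically. The sketch below assumes a physiological MTV range of roughly 0.1–0.35 (our assumption, chosen to bracket typical gray- and white-matter values):

```python
import numpy as np

# 1/WC = 1/(1 - MTV); over a plausible physiological MTV range this
# curve is well approximated by a straight line in MTV.
mtv = np.linspace(0.10, 0.35, 200)   # assumed physiological range
inv_wc = 1.0 / (1.0 - mtv)

slope, intercept = np.polyfit(mtv, inv_wc, 1)
line = slope * mtv + intercept
max_rel_err = np.max(np.abs(line - inv_wc) / inv_wc)
# the worst-case relative deviation stays within a few percent,
# so fits against MTV and against 1/WC behave similarly
```

This is why swapping MTV for 1/WC in the linear model yields "relatively similar results": within this range the two predictors are nearly affine transforms of each other.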
We show that the MDM signatures allow for a better understanding of the biological sources of the aging-related changes observed with MRI. Normal brain aging involves multiple changes, at both the microscale and macroscale levels. Measurements of macroscale brain volume have been widely used to characterize aging-associated atrophy. Our method of analysis can complement such findings and provide a deeper understanding of microscale processes co-occurring with atrophy. Moreover, it allows us to test whether these various microscale and macroscale processes are caused by a common factor or represent the aging mosaic. Notably, we discovered that different brain regions undergo different biological aging processes. Therefore, combining several measurements of brain tissue is crucial in order to fully describe the state of the aged brain. For example, the macroscale aging-related volume reduction in cortical gray areas was accompanied by conserved tissue density, as estimated by MTV, and region-specific chemophysical changes, as estimated by the MDM. In contrast, in white-matter areas both MDM and MTV changed with age. These microscale alterations were not accompanied by macroscale volume reduction. Our in vivo results are supported by previous histological studies, which reported that the cortex shrinks with age, while the neural density remains relatively constant49,50. In contrast, white matter was found to undergo significant loss of myelinated nerve fibers during aging51. In addition, we found that the shrinkage of the hippocampus with age is accompanied by conserved tissue density and chemophysical composition. This is in agreement with histological findings, which predict drastic changes in hippocampal tissue composition in neurological diseases such as Alzheimer's disease, but not in normal aging49,50,60,61. In contrast, hippocampal macroscale volume reduction was observed in both normal and pathological aging2.
It should be noted that most of the human subjects recruited for this study were from the academic community, and the different age groups were not matched for variables such as IQ and socioeconomic status. In addition, the sample size in our study was quite small. Therefore, the comparison we made between the two age groups may be affected by variables other than age. Our approach may benefit from validation based on larger quantitative MRI datasets27,62. Yet, we believe we have demonstrated the potential of our method to reveal molecular alterations in the brain. Moreover, the agreement of our findings with previous histological aging studies supports the association between the group differences we measured and brain aging. Our results suggest that the MDM approach may be very useful in differentiating the effects of normal aging from those of neurodegenerative diseases. There is also great potential for applications in other brain research fields besides aging. For example, our approach may be used to advance the study and diagnosis of brain cancer, in which the lipidomic environment undergoes considerable changes63,64,65.
To conclude, we have presented here a quantitative MRI approach that decodes the molecular composition of the aging brain. While common MRI measurements are primarily affected by the water content of the tissue, our method employed the tissue relaxivity to expose the sensitivity of MRI to the molecular microenvironment. We presented evidence from lipid samples, post-mortem porcine brains and in vivo human brains for the sensitivity of the tissue relaxivity to molecular composition. Results obtained by this method in vivo disentangled different biological processes occurring in the human brain during aging. We identified region-specific patterns of microscale aging-related changes that are associated with the molecular composition of the human brain. Moreover, we showed that, in agreement with the mosaic theory of aging, different biological age-related processes measured in vivo have unique spatial patterns throughout the brain. The ability to identify and localize different age-derived processes in vivo may further advance human brain research.
Methods
Phantom construction
The full protocol of lipid phantom preparation is described in Shtangel et al.66.
In short, we prepared liposomes from one of the following lipids: phosphatidylserine (PS), phosphatidylcholine (PtdCho), phosphatidylcholine-cholesterol (PtdCho-Chol), phosphatidylinositol-phosphatidylcholine (PI-PtdCho), or sphingomyelin (Spg). These phantoms were designed to model biological membranes and were prepared from lipids by the hydration–dehydration dry film technique67. The lipids were dissolved over a hot plate and vortexed. Next, the solvent was removed to create a dry film by vacuum-rotational evaporation. The samples were then stirred on a hot plate at 65 °C for 2.5 h to allow the lipids to achieve their final conformation as liposomes. Liposomes were diluted with Dulbecco's phosphate buffered saline (PBS), without calcium and magnesium (Biological Industries), to maintain physiological conditions in terms of osmolarity, ion concentrations, and pH. To change the MTV of the liposome samples, we varied the PBS-to-lipid volume ratios66. Samples were then transferred for scanning into 4 mL square polystyrene cuvettes glued to a polystyrene box, which was then filled with ~1% SeaKem agarose (Ornat Biochemical) and ~0.0005 M Gd (gadoterate meglumine; Dotarem, Guerbet) dissolved in double distilled water (ddw). The purpose of the agar with Gd (Agar-Gd) was to stabilize the cuvettes and to create a smooth area in the space surrounding the cuvettes that minimized air–cuvette interfaces. In some of our experiments we used lipid mixtures composed of several lipids. We prepared nine mixtures containing different combinations of two out of three lipids (PtdCho, Spg, and PS) in varying volume ratios (1:1, 1:2, 2:1). For each mixture, we prepared samples in which the ratio between the different lipid components remained constant while the water-to-lipid volume fraction varied.
For the bovine serum albumin (BSA) phantoms, samples were prepared by dissolving lyophilized BSA powder (Sigma-Aldrich) in PBS. To change the MTV of these phantoms, we changed the BSA concentration. For the BSA + iron phantoms, BSA was additionally mixed with a fixed concentration of 50 µg/mL ferrous sulfate heptahydrate (FeSO4·7H2O). Samples were prepared at their designated concentrations at room temperature. Prepared samples were allowed to sit overnight at 4 °C to ensure the BSA had fully dissolved, without the need for significant agitation, which is known to cause protein cross-linking. Samples were then transferred to the phantom box for scanning.
For the glucose and sucrose phantoms, different concentrations of D-(+)-sucrose (Bio-Lab) and D-(+)-glucose (Sigma) were dissolved in PBS at 40 °C. Samples were allowed to reach room temperature before the scan.
MRI acquisition for phantoms
Data was collected on a 3 T Siemens MAGNETOM Skyra scanner equipped with a 32-channel head receive-only coil at the ELSC neuroimaging unit at the Hebrew University.
For quantitative R1 and MTV mapping, three-dimensional (3D) spoiled gradient echo (SPGR) images were acquired with different flip angles (α = 4°, 8°, 16°, and 30°). The TE/TR was 3.91/18 ms. The scan resolution was 1.1 × 1.1 × 0.9 mm. The same sequence was repeated at a higher resolution of 0.6 × 0.6 × 0.5 mm; the TE/TR was 4.45/18 ms. For calibration, we acquired an additional spin-echo inversion recovery (SEIR) scan. This scan was done on a single slice, with an adiabatic inversion pulse and inversion times of TI = 2000, 1200, 800, 400, and 50 ms. The TE/TR was 73/2540 ms. The scan resolution was 1.2 mm isotropic.
For quantitative T2 mapping, images were acquired with a multi spin-echo sequence with 15 equally spaced spin echoes between 10.5 and 157.5 ms. The TR was 4.94 s. The scan resolution was 1.2 mm isotropic. For quantitative MTsat mapping, images were acquired with the FLASH Siemens WIP 805 sequence. The TR was 23 ms for all samples except PI-PtdCho, for which the TR was 72 ms. Six echoes were equally spaced between 1.93 and 14.58 ms. The on-resonance flip angle was 6°, the MT flip angle was 220°, and the RF offset was 700 Hz. We used 1.1-mm in-plane resolution with a slice thickness of 0.9 mm. For the sucrose and glucose samples, MTsat mapping was done similarly to the human subjects, based on a 3D spoiled gradient echo (SPGR) image with an additional MT pulse. The flip angle was 10°, and the TE/TR was 3.91/28 ms. The scan resolution was 1 mm isotropic.
Estimation of qMRI parameters for phantoms
MTV and R1 estimations for the lipid samples were computed using the mrQ39 (https://github.com/mezera/mrQ) and Vista Lab (https://github.com/vistalab/vistasoft/wiki) software. The mrQ software was modified to suit the phantom system66. The modification utilizes the fact that the Agar-Gd filling the box around the samples is homogeneous and can therefore be assumed to have a constant T1 value. We used this gold-standard T1 value, generated from the SEIR scan, to correct for the excite bias in the spoiled gradient echo scans. While the data was acquired at two different resolutions (see “MRI acquisition”), our analysis uses the median R1 and MTV of each lipid sample, and these are invariant to the acquisition resolution (Supplementary Fig. 1e). Thus, we were able to use scans with different resolutions without compromising our results. T2 maps were computed by implementing the echo-modulation curve (EMC) algorithm68.
For quantitative MTsat mapping see the “MTsat estimation” section for human subjects.
MDM computation for phantoms
We computed the dependency of each qMRI parameter (R1, MTsat, and R2) on MTV in different lipid samples. This process was implemented in MATLAB (MathWorks, Natick, MA, USA). To manipulate the MTV values, we scanned samples of the same lipid at varying concentrations. We computed the median MTV of each sample, along with the medians of the qMRI parameters. We used these data points to fit a linear model across all samples of the same lipid. The slope of this linear model represents the MTV derivative of the qMRI parameter. We used this derivative estimate of three qMRI parameters (R1, R2, and MTsat) to compute the MDM signatures. The same procedure was used for the MDM computation of lipid mixtures.
MDM modeling of lipid mixtures
We tested the ability of MDM to predict the composition of lipid mixtures. For this analysis we used nine mixture phantoms (see “Phantom construction”), along with the three phantoms of the pure lipid constituents of the mixtures (PS, Spg, and PtdCho).
In order to predict the qMRI parameters of a lipid mixture (Fig. 1d) we used Supplementary Eq. 1 (Supplementary Note 1). To further predict the composition of the mixtures (Fig. 1e) we used Supplementary Eq. 5 (Supplementary Note 2). We solved this equation using the QR factorization algorithm.
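The composition-prediction step can be sketched as follows. The matrix layout and all numbers below are hypothetical (Supplementary Eq. 5 itself is not reproduced here); the sketch only illustrates solving the resulting linear system with the QR factorization named above:

```python
import numpy as np

# Hypothetical design matrix: rows are measurements (e.g., MTV derivatives of
# R1, R2, and MTsat), columns are the pure lipid constituents (PS, Spg, PtdCho).
A = np.array([
    [0.42, 0.55, 0.31],
    [8.10, 9.40, 6.70],
    [0.95, 1.20, 0.80],
])
# Measured signature of the mixture (generated here from fractions 0.2/0.3/0.5)
b = np.array([0.404, 7.79, 0.95])

# Solve A @ w = b via QR factorization: A = QR, then solve R w = Q^T b
Q, R = np.linalg.qr(A)
w = np.linalg.solve(R, Q.T @ b)

fractions = w / w.sum()  # normalize to mixture fractions; ≈ [0.2, 0.3, 0.5]
```

For an overdetermined system (more measurements than constituents), the same QR step yields the least-squares solution.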
Ethics
Human experiments complied with all relevant ethical regulations. The Helsinki Ethics Committee of Hadassah Hospital, Jerusalem, Israel, approved the experimental procedure. Written informed consent was obtained from each participant prior to the procedure.
Human subjects
Human measurements were performed on 23 young adults (aged 27 ± 2 years, 11 females) and 18 older adults (aged 67 ± 6 years, 5 females). Healthy volunteers were recruited from the community surrounding the Hebrew University of Jerusalem.
MRI acquisition for human subjects
Data was collected on a 3 T Siemens MAGNETOM Skyra scanner equipped with a 32-channel head receive-only coil at the ELSC neuroimaging unit at the Hebrew University.
For quantitative R1, R2*, and MTV mapping, 3D spoiled gradient echo (SPGR) images were acquired with different flip angles (α = 4°, 10°, 20°, and 30°). Each image included five equally spaced echoes (TE = 3.34–14.02 ms) and the TR was 19 ms (except for six young subjects, for whom the scan included only one echo at TE = 3.34 ms). The scan resolution was 1 mm isotropic. For calibration, we acquired an additional spin-echo inversion recovery scan with an echo-planar imaging (EPI) read-out (SEIR-epi). This scan was done with a slab-inversion pulse and spatial-spectral fat suppression. For SEIR-epi, the TE/TR was 49/2920 ms. The TIs were 200, 400, 1200, and 2400 ms. We used 2-mm in-plane resolution with a slice thickness of 3 mm. The EPI read-out was performed using 2× acceleration.
For quantitative T2 mapping, multi‐SE images were acquired with ten equally spaced spin echoes between 12 ms and 120 ms. The TR was 4.21 s. The scan resolution was 2 mm isotropic. T2 scans of four subjects (one young, three old) were excluded from the analysis due to motion.
For quantitative MTsat mapping, 3D spoiled gradient echo (SPGR) images were acquired with an additional MT pulse. The flip angle was 10° and the TE/TR was 3.34/27 ms. The scan resolution was 1 mm isotropic.
Whole-brain DTI measurements were performed using a diffusion-weighted spin-echo EPI sequence with isotropic 1.5-mm resolution. Diffusion-weighting gradients were applied along 64 directions and the strength of the diffusion weighting was set to b = 2000 s/mm² (TE/TR = 95.80/6000 ms, G = 45 mT/m, δ = 32.25 ms, Δ = 52.02 ms). The data includes eight non-diffusion-weighted images (b = 0). In addition, we collected non-diffusion-weighted images with reversed phase-encode blips. For five subjects (four young, one old) we failed to acquire this correction data, and they were excluded from the diffusion analysis.
Anatomical images were acquired with 3D magnetization prepared rapid gradient echo (MP-RAGE) scans for 24 of the subjects (14 from the younger subjects, 10 from the older subjects). The scan resolution was 1 mm isotropic, the TE/TR was 2.98/2300 ms. Magnetization Prepared 2 Rapid Acquisition Gradient Echoes (MP2RAGE) scans were acquired for the rest of the subjects. The scan resolution was 1 mm isotropic, the TE/TR was 2.98/5000 ms.
Estimation of qMRI parameters for human subjects
Whole-brain MTV and R1 maps, together with bias-correction maps of B1+ and B1−, were computed using the mrQ software39,69 (https://github.com/mezera/mrQ). Voxels in which the B1+ inhomogeneities were extrapolated rather than interpolated were removed from the MTV and R1 maps. While we did not correct our MTV estimates for R2*, we showed that employing such a correction does not significantly change our results (see Supplementary Note 6, Supplementary Figs. 20–27). MTV maps of four subjects had a bias in the lower part of the brain, and these subjects were therefore excluded from the analysis presented in Fig. 3, which includes ROIs in the brainstem.
Whole-brain T2 maps were computed by implementing the echo‐modulation curve (EMC) algorithm68. To combine the MTV and T2 data, we co-registered the quantitative MTV map to the T2 map. We used the ANTs software package70 to calculate the transformation and to warp the MTV map and the segmentation. The registration was computed to match the T1 map to the T2 map. Next, we applied the calculated transformation to the MTV map (since MTV and T1 are in the same imaging space) and resampled the MTV map to match the resolution of the T2 map. The same transformation was also applied to the segmentation. R2 maps were calculated as 1/T2.
Whole-brain MTsat maps were computed as described in Helms et al.37. The MTsat measurement was extracted from Eq. (1):
MTsat = (M0 · B1 · α · R1 · TR) / S_MT − (B1 · α)² / 2 − R1 · TR    (1)
where S_MT is the signal of the SPGR scan with the additional MT pulse, α is the flip angle, and TR is the repetition time. M0 (the equilibrium magnetization parameter), B1 (the transmit inhomogeneity), and R1 were estimated from the non-MT-weighted SPGR scans during the pipeline described under “Estimation of qMRI parameters for human subjects”. Registration of the S_MT image to the imaging space of the MTV map was done using a rigid-body alignment (R1, B1, and M0 are all in the same space as MTV).
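A minimal voxel-wise implementation of Eq. (1) could look like the following sketch (the function name and example values are illustrative assumptions, not the authors' code):

```python
import numpy as np

def mtsat(s_mt, m0, b1, r1, alpha_deg, tr):
    """Sketch of Eq. (1): MT saturation from the MT-weighted SPGR signal.

    s_mt: signal with the MT pulse; m0: equilibrium magnetization;
    b1: relative transmit field (dimensionless); r1: relaxation rate (1/s);
    alpha_deg: flip angle in degrees; tr: repetition time in seconds.
    Inputs may be scalars or NumPy arrays (voxel-wise maps).
    """
    alpha = np.deg2rad(alpha_deg)
    return (m0 * b1 * alpha * r1 * tr) / s_mt - (b1 * alpha) ** 2 / 2 - r1 * tr

# Stronger MT saturation lowers s_mt, which raises the MTsat estimate
val = mtsat(s_mt=100.0, m0=1000.0, b1=1.0, r1=1.0, alpha_deg=10.0, tr=0.027)
```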
Diffusion analysis was done using the FDT toolbox in FSL71,72. Susceptibility- and eddy-current-induced distortions were corrected using the reverse phase-encode data with the eddy and topup commands73,74. MD maps were calculated using vistasoft (https://github.com/vistalab/vistasoft/wiki). We used a rigid-body alignment to register the corrected dMRI data to the imaging space of the MTV map (FLIRT, FSL). To calculate the MD-MTV derivatives, we resampled the MTV map and the segmentation to match the dMRI resolution.
We used the SPGR scans with multiple echoes to estimate R2*. Fitting was done through the MPM toolbox75. As we had four SPGR scans with variable flip angles, we averaged the R2* maps acquired from each of these scans for increased SNR.
Human brain segmentation
Whole-brain segmentation was computed automatically using the FreeSurfer segmentation algorithm76. For subjects who had an MP-RAGE scan, we used it as a reference. For the other subjects the MP2RAGE scan was used as a reference. These anatomical images were registered to the MTV space prior to the segmentation process, using a rigid-body alignment. Sub-cortical gray-matter structures were segmented with FSL’s FIRST tool77. To avoid partial volume effects, we removed the outer shell of each ROI and left only the core.
MDM computation in the human brain
We computed the dependency of each qMRI parameter (R1, MTsat, MD, and R2) on MTV in different brain areas. This process was implemented in MATLAB (MathWorks, Natick, MA, USA). For each ROI, we extracted the MTV values from all voxels and pooled them into 36 bins spaced equally between 0.05 and 0.40. This was done so that the linear fit would not be heavily affected by the density of the voxels at different MTV values. We removed any bins in which the number of voxels was smaller than 4% of the total voxel count in the ROI. The median MTV of each bin was computed, along with the median of the qMRI parameter. We used these data points to fit the linear model across bins using Eq. (2):
qMRI parameter = a · MTV + b    (2)
The slope of this linear model (“a”) represents the MTV derivative of the qMRI parameter. We used this derivative estimate to compute the MDM signatures.
For each subject, we excluded ROIs in which the total voxel count was smaller than a set threshold: 500 voxels for the MTsat and R1 maps, 150 voxels for the MD map, and 50 voxels for the R2 map.
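The binning-and-fitting procedure can be sketched in NumPy as follows (a simplified illustration with synthetic voxel data, not the authors' MATLAB implementation):

```python
import numpy as np

def mdm_derivative(mtv, qmri, n_bins=36, lo=0.05, hi=0.40, min_frac=0.04):
    """Slope of a qMRI parameter vs. MTV within one ROI (the Eq. 2 fit)."""
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.digitize(mtv, edges) - 1
    xs, ys = [], []
    for i in range(n_bins):
        in_bin = idx == i
        # Drop sparsely populated bins (below min_frac of the ROI's voxels)
        if in_bin.sum() < min_frac * mtv.size:
            continue
        xs.append(np.median(mtv[in_bin]))
        ys.append(np.median(qmri[in_bin]))
    a, b = np.polyfit(xs, ys, 1)  # linear fit across bin medians
    return a  # the MTV derivative ("a" in Eq. 2)

# Synthetic ROI: R1 rises linearly with MTV (slope 1.6) plus noise.
# min_frac is relaxed here because uniform data spreads voxels thinly over 36 bins.
rng = np.random.default_rng(0)
mtv = rng.uniform(0.05, 0.40, 5000)
r1 = 0.5 + 1.6 * mtv + rng.normal(0.0, 0.01, mtv.size)
slope = mdm_derivative(mtv, r1, min_frac=0.01)  # ≈ 1.6
```

Fitting across bin medians rather than raw voxels keeps dense MTV ranges from dominating the fit, as described above.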
Principal component analysis (PCA) in the human brain
To estimate the variability in the MDM signatures across the brain, we computed the first principal component (PC) of MDM. For each MDM dimension (MTV derivatives of R1, MTsat, MD, and R2), we took the median of each brain area across the young subjects. As each MDM dimension has different units, we then computed the z-score of each dimension across the different brain areas. Finally, we performed PCA. The variables in this analysis were the different MDM dimensions, and the observations were the different brain areas. From this analysis, we derived the first PC, which accounts for most of the variability in MDM signatures across the brain. To estimate the median absolute deviations (MAD) across subjects of each MDM measurement in the PC basis, we applied the z-score transformation to the original MAD values and then projected them onto the PC basis.
To compute the first PC of standard qMRI parameters we followed the same procedure, but used R1, MTsat, MD, and R2 instead of their MTV derivatives.
For the first PC of molecular composition, we followed the same procedure, but used the phospholipid composition and the ratios of phospholipids to proteins and to cholesterol as variables. The data was taken from eight post-mortem human brains7. The brains were obtained from individuals between 54 and 57 years of age, who were autopsied within 24 h after death.
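The z-score-then-PCA step can be sketched with NumPy's SVD; the per-area values below are invented placeholders for the real MDM medians:

```python
import numpy as np

# Invented data: 30 brain areas (rows) x 4 MDM dimensions (columns), with
# deliberately different scales per dimension to mimic differing units.
rng = np.random.default_rng(1)
mdm = rng.normal(size=(30, 4)) * np.array([1.5, 0.8, 2.0, 5.0])

# z-score each dimension across brain areas so units become comparable
z = (mdm - mdm.mean(axis=0)) / mdm.std(axis=0)

# PCA via SVD of the standardized matrix; rows of Vt are the PCs
U, s, Vt = np.linalg.svd(z, full_matrices=False)
pc1_scores = z @ Vt[0]               # projection of each area on the first PC
explained = s ** 2 / np.sum(s ** 2)  # fraction of variance per PC
```

The same machinery applies whether the columns hold MDM derivatives, standard qMRI parameters, or lipid-composition variables.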
Linear model for prediction of human molecular composition
We used MDM measurements to predict the molecular composition of different brain areas (Fig. 3c). For this analysis we used Supplementary Eq. 5 in Supplementary Note 2. We solved this equation using the QR factorization algorithm (for more details see Supplementary Note 3).
Gene-expression dataset
For the gene-expression analysis we followed the work of Ben-David and Shifman46. Microarray data was acquired from the Allen Brain Atlas (http://human.brain-map.org/well_data_files) and included a total of 1340 microarray profiles from donors H0351.2001 and H0351.2002, encompassing the different regions of the human brain. The donors were 24 and 39 years old, respectively, at the time of their death, with no known psychopathologies. We used the statistical analysis described by Ben-David and Shifman46. They constructed a gene network using a weighted gene co-expression network analysis. The gene network included 19 modules of varying sizes, from 38 to 7385 genes. The module eigengenes were derived by taking the first PC of the expression values in each module. In addition, we used the gene ontology enrichment analysis described by Ben-David and Shifman to define the name of each module. The colors of the different modules in Fig. 4 and Supplementary Fig. 10 are the same as in the original paper.
Next, we matched the gene-expression data to the MRI measurements. This analysis was done on 35 cortical regions extracted from the FreeSurfer cortical parcellation. We downloaded the T1-weighted images of the two donors provided by the Allen Brain Atlas (http://human.brain-map.org/mri_viewers/data) and used them as a reference for FreeSurfer segmentation. We then found the FreeSurfer label of each gene-expression sample using the sample's coordinates in brain space. We removed samples for which the FreeSurfer label and the label provided in the microarray dataset did not agree (72 such samples out of 697 cortical samples). For each gene module, we averaged the eigengenes of all samples from the same cortical area across the two donors.
Finally, we compared the cortical eigengene of each module to the projection of cortical areas on the first PC of MDM. In addition, we compared the modules' eigengenes to the MTV values of the cortical areas and to the projection of cortical areas on the first PC of standard qMRI parameters (Supplementary Fig. 10). These 57 correlations were corrected for multiple comparisons using the FDR method.
Brain region volume computation
To estimate the volume of different brain regions, we calculated the number of voxels in the FreeSurfer segmentation of each region (see “Human brain segmentation”).
R2* correction for MTV
To correct the MTV estimates for R2* we used Eq. (3):
MTV_C = 1 − (1 − MTV) · exp(TE · R2*)    (3)
where MTV_C is the corrected MTV.
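Eq. (3) translates directly into a voxel-wise helper (an illustrative sketch, with TE in seconds and R2* in 1/s; the example values are invented):

```python
import numpy as np

def correct_mtv_for_r2star(mtv, r2star, te):
    """Eq. (3): MTV_C = 1 - (1 - MTV) * exp(TE * R2*)."""
    return 1.0 - (1.0 - mtv) * np.exp(te * r2star)

# Hypothetical voxel: MTV = 0.25, R2* = 20 1/s, TE = 3.34 ms
mtv_c = correct_mtv_for_r2star(0.25, 20.0, 3.34e-3)
```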
Statistical analysis
The statistical significance of the differences between the age groups was computed using an independent-sample t-test (α = 0.05) and was corrected for multiple comparisons using the false-discovery rate (FDR) method. For this analysis, MRI measurements of both hemispheres of bilateral brain regions were pooled together. R2 values were adjusted for the number of data points. All statistical tests were two-sided.
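The FDR step can be illustrated with a pure-NumPy Benjamini-Hochberg procedure (a standard sketch; the authors' exact implementation is not specified, and the p-values below are invented):

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean reject mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= alpha * k / m; reject the k smallest
    below = ranked <= alpha * np.arange(1, m + 1) / m
    k = np.flatnonzero(below).max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

reject = fdr_bh([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])
# With alpha = 0.05, only the two smallest p-values survive correction here
```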
Post-mortem tissue acquisition
Two post-mortem porcine brains were purchased from BIOTECH FARM.
Post-mortem MRI acquisition
Brains were scanned fresh (without fixation) in water within 6 h after death. Data was collected on a 3 T Siemens MAGNETOM Skyra scanner equipped with a 32-channel head receive-only coil at the ELSC neuroimaging unit at the Hebrew University.
For quantitative R1, R2*, and MTV mapping, 3D spoiled gradient echo (SPGR) images were acquired with different flip angles (α = 4°, 10°, 20°, and 30°). Each image included five equally spaced echoes (TE = 4.01–16.51 ms) and the TR was 22 ms. The scan resolution was 0.8 mm isotropic. For calibration, we acquired an additional spin-echo inversion recovery scan with an echo-planar imaging (EPI) read-out (SEIR-epi). This scan was done with a slab-inversion pulse and spatial-spectral fat suppression. For SEIR-epi, the TE/TR was 49/2920 ms. The TIs were 50, 200, 400, and 1200 ms. The scan resolution was 2 mm isotropic. The EPI read-out was performed using 2× acceleration.
For quantitative T2 mapping, multi‐SE images were acquired with ten equally spaced spin echoes between 12 and 120 ms. The TR was 4.21 s. The scan resolution was 2 mm isotropic.
For quantitative MTsat mapping, 3D spoiled gradient echo (SPGR) images were acquired with an additional MT pulse. The flip angle was 10° and the TE/TR was 4.01/40 ms. The scan resolution was 0.8 mm isotropic.
Whole-brain DTI measurements were performed using a diffusion-weighted spin-echo EPI sequence with isotropic 1.5-mm resolution. Diffusion-weighting gradients were applied along 64 directions and the strength of the diffusion weighting was set to b = 2000 s/mm² (TE/TR = 95.80/6000 ms, G = 45 mT/m, δ = 32.25 ms, Δ = 52.02 ms). The data includes eight non-diffusion-weighted images (b = 0).
For anatomical images, 3D magnetization prepared rapid gradient echo (MP-RAGE) scans were acquired. The scan resolution was 1 mm isotropic, the TE/TR was 2.98/2300 ms.
Histological analysis
Following the MRI scans, the brains were dissected and a total of 42 brain regions were identified. Four samples were excluded because we were not able to properly separate the white matter from the gray matter. One sample was excluded because we could not properly identify its anatomical origin. An additional two samples were too small for TLC analysis.
The non-water fraction (MTV) was determined by desiccation, also known as the dry-wet method. A small fraction of each brain sample (~0.25 g) was weighed. To completely dehydrate the fresh tissue, samples were left for several days in a vacuum desiccator over silica gel at 4 °C. The experiment ended when no further weight loss occurred. The MTV of each brain sample was calculated from the difference between the wet (W_wet) and dry (W_dry) weights of the tissue (Eq. 4):
MTV = (W_wet − W_dry) / W_wet    (4)
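In code, Eq. (4) is a one-liner (weights in grams; the sample values are invented):

```python
def mtv_dry_wet(w_wet, w_dry):
    """Eq. (4): non-water tissue fraction from wet and dry sample weights."""
    return (w_wet - w_dry) / w_wet

# Hypothetical sample: 0.25 g fresh tissue, 0.055 g after desiccation
mtv = mtv_dry_wet(0.25, 0.055)
```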
For lipid extraction and lipidomics analysis78, brain samples were weighed and homogenized with saline in plastic tubes on ice at a concentration of 1 mg/12.5 µL. Two hundred and fifty microliters of each homogenate were used for lipid extraction and analysis with thin-layer chromatography (TLC). The lipid species distribution was analyzed by TLC using 150-µg aliquots. Samples were reconstituted in 10 µL of Folch mixture and spotted on Silica-G TLC plates. Standards for each fraction were purchased from Sigma Aldrich (Rehovot, Israel) and were spotted in separate TLC lanes, i.e., 50 µg of triacylglycerides (TG), cholesterol (Chol), cholesteryl esters (CE), free fatty acids (FFA), lysophospholipids (Lyso), sphingomyelin (Spg), phosphatidylcholine (PtdCho), phosphatidylinositol (PI), phosphatidylserine (PS), and phosphatidylethanolamine (PE). Plates were then placed in a 20 × 20 cm TLC chamber containing petroleum ether, ethyl ether, and acetic acid (80:20:1, v/v/v) for quantification of neutral lipids, or chloroform, methanol, acetic acid, and water (65:25:4:2, v/v/v/v) for quantification of polar lipids, and run for 45 min. TG, Chol, CE, FFA, phospholipid (PL), Lyso, Spg, PtdCho, PI, PS, and PE bands were visualized with iodine, scanned (Epson V700), and quantified with Optiquant. Lyso, CE, TG, and PI were excluded from further analysis, as their quantification was noisy and demonstrated high variability across TLC plates. This analysis was conducted under the guidance of Prof. Alicia Leikin-Frenkel at the Bert Strassburger Lipid Center, Sheba, Tel Hashomer.
Estimation of qMRI parameters in the post-mortem brain
qMRI parameters were estimated as described for the human subjects.
Brain segmentation of post-mortem brain
Brain segmentation was done manually. Five tissue samples were excluded because we could not identify their original location in the MRI scans.
MDM computation in the post-mortem brain
We computed the dependency of each qMRI parameter (R1, MTsat, MD, and R2) on MTV in different brain areas similarly to the analysis of the human subjects.
Principal component analysis (PCA) in the post-mortem brain
To estimate the variability in the MDM signatures across the brain, we computed the first principal component (PC) of MDM. PCA was performed with four variables corresponding to the MDM dimensions (MTV derivatives of R1, MTsat, MD, and R2) and 30 observations corresponding to the different brain regions. As each MDM dimension has different units, we first computed the z-score of each dimension across the different brain areas prior to the PCA. From this analysis we derived the first PC, which accounts for most of the variability in MDM signatures across the brain.
To compute the first PC of standard qMRI parameters we followed the same procedure, but used R1, MTsat, MD, and R2 instead of their MTV derivatives.
To estimate the variability in lipid composition across the brain, we computed the first principal component (PC) of the lipidomics data. PCA was performed with seven variables corresponding to the different polar and neutral lipids (Chol, FFA, PL, Spg, PtdCho, PS, PE) and 30 observations corresponding to the different brain regions. From this analysis, we derived the first PC, which accounts for most of the variability in lipid composition across the brain.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Code availability
A toolbox for computing MDM signatures is available at https://github.com/MezerLab/MDM_toolbox.
The code generating the figures in the paper is available at https://github.com/MezerLab/MDM_Gen_Figs.
References
1. Peters, R. Ageing and the brain. Postgrad. Med. J. 82, 84–88 (2006).
2. Lockhart, S. N. & DeCarli, C. Structural imaging measures of brain aging. Neuropsychol. Rev. 24, 271–289 (2014).
3. Wozniak, J. R. & Lim, K. O. Advances in white matter imaging: a review of in vivo magnetic resonance methodologies and their applicability to the study of development and aging. Neurosci. Biobehav. Rev. 30, 762–774 (2006).
4. Frisoni, G. B., Fox, N. C., Jack, C. R., Scheltens, P. & Thompson, P. M. The clinical use of structural MRI in Alzheimer disease. Nat. Rev. Neurol. 6, 67–77 (2010).
5. Mrak, R. E., Griffin, S. T. & Graham, D. I. Aging-associated changes in human brain. J. Neuropathol. Exp. Neurol. 56, 1269–1275 (1997).
6. Yankner, B. A., Lu, T. & Loerch, P. The aging brain. Annu. Rev. Pathol. 3, 41–66 (2008).
7. Söderberg, M., Edlund, C., Kristensson, K. & Dallner, G. Lipid compositions of different regions of the human brain during aging. J. Neurochem. 54, 415–423 (1990).
8. Lauwers, E. et al. Membrane lipids in presynaptic function and disease. Neuron 90, 11–25 (2016).
9. Li, Q. et al. Changes in lipidome composition during brain development in humans, chimpanzees, and macaque monkeys. Mol. Biol. Evol. 34, 1155–1166 (2017).
10. Müller, C. P. et al. Brain membrane lipids in major depression and anxiety disorders. Biochim. Biophys. Acta-Mol. Cell Biol. Lipids 1851, 1052–1065 (2015).
11. Naudí, A. et al. Lipidomics of human brain aging and Alzheimer’s disease pathology. Int. Rev. Neurobiol. 122, 133–189 (2015).
12. Walker, L. C. & Herndon, J. G. Mosaic aging. Med. Hypotheses 74, 1048–1051 (2010).
13. Cole, J. H., Marioni, R. E., Harris, S. E. & Deary, I. J. Brain age and other bodily ‘ages’: implications for neuropsychiatry. Mol. Psychiatry (2018). https://doi.org/10.1038/s41380-018-0098-1.
14. Hayflick, L. Biological aging is no longer an unsolved problem. Ann. N. Y. Acad. Sci. 1100, 1–13 (2007).
15. Christensen, H., Mackinnon, A. J., Korten, A. & Jorm, A. F. The ‘common cause hypothesis’ of cognitive aging: evidence for not only a common factor but also specific associations of age with vision and grip strength in a cross-sectional analysis. Psychol. Aging 16, 588–599 (2001).
16. Cole, J. H. et al. Brain age predicts mortality. Mol. Psychiatry 23, 1385–1392 (2018).
17. Sowell, E. R., Thompson, P. M. & Toga, A. W. Mapping changes in the human cortex throughout the span of life. Neuroscientist 10, 372–392 (2004).
18. Fjell, A. M. & Walhovd, K. B. Structural brain changes in aging: courses, causes and cognitive consequences. Rev. Neurosci. 21, 187–221 (2010).
19. Gunning-Dixon, F. M., Brickman, A. M., Cheng, J. C. & Alexopoulos, G. S. Aging of cerebral white matter: a review of MRI findings. Int. J. Geriatr. Psychiatry 24, 109–117 (2009).
20. Callaghan, M. F. et al. Widespread age-related differences in the human brain microstructure revealed by quantitative magnetic resonance imaging. Neurobiol. Aging 35, 1862–1872 (2014).
21. Yeatman, J. D., Wandell, B. A. & Mezer, A. A. Lifespan maturation and degeneration of human brain white matter. Nat. Commun. 5, 4932 (2014).
22. Cox, S. R. et al. Ageing and brain white matter structure in 3,513 UK Biobank participants. Nat. Commun. 7, 13629 (2016).
23. Lorio, S. et al. Disentangling in vivo the effects of iron content and atrophy on the ageing human brain. Neuroimage 103, 280–289 (2014).
24. Gracien, R.-M. et al. Evaluation of brain ageing: a quantitative longitudinal MRI study over 7 years. Eur. Radiol. 27, 1568–1576 (2017).
25. Draganski, B. et al. Regional specificity of MRI contrast parameter changes in normal ageing revealed by voxel-based quantification (VBQ). Neuroimage 55, 1423–1434 (2011).
26. Tardif, C. L. et al. Investigation of the confounding effects of vasculature and metabolism on computational anatomy studies. Neuroimage 149, 233–243 (2017).
27. Carey, D. et al. Quantitative MRI provides markers of intra-, inter-regional, and age-related differences in young adult cortical microstructure. Neuroimage 182, 429–440 (2017).
28. Cercignani, M., Dowell, N. G. & Tofts, P. S. Quantitative MRI of the Brain: Principles of Physical Measurement (CRC Press, United States, 2018).
29. Basser, P. J. & Pierpaoli, C. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. J. Magn. Reson. Ser. B 111, 209–219 (1996).
30. Weiskopf, N., Mohammadi, S., Lutti, A. & Callaghan, M. F. Advances in MRI-based computational neuroanatomy. Curr. Opin. Neurol. 28, 313–322 (2015).
31. Winklewski, P. J. et al. Understanding the physiopathology behind axial and radial diffusivity changes—what do we know? Front. Neurol. 9, 92 (2018).
32. Heath, F., Hurley, S. A., Johansen-Berg, H. & Sampaio-Baptista, C. Advances in noninvasive myelin imaging. Dev. Neurobiol. 78, 136–151 (2018).
33. Lutti, A., Dick, F., Sereno, M. I. & Weiskopf, N. Using high-resolution quantitative mapping of R1 as an index of cortical myelination. Neuroimage 93, 176–188 (2014).
34. Filo, S. & Mezer, A. A. in Quantitative MRI of the Brain: Principles of Physical Measurement (eds Cercignani, M., Dowell, N. G. & Tofts, P. S.) 55–72 (CRC Press, United States, 2018).
35. Fullerton, G. D., Cameron, I. L. & Ord, V. A. Frequency dependence of magnetic resonance spin-lattice relaxation of protons in biological materials. Radiology 151, 135–138 (1984).
36. Does, M. D. Inferring brain tissue composition and microstructure via MR relaxometry. Neuroimage 182, 136–148 (2018).
37. Helms, G., Dathe, H., Kallenberg, K. & Dechent, P. High-resolution maps of magnetization transfer with inherent correction for RF inhomogeneity and T1 relaxation obtained from 3D FLASH MRI. Magn. Reson. Med. 60, 1396–1407 (2008).
38. Rohrer, M., Bauer, H., Mintorovitch, J., Requardt, M. & Weinmann, H.-J. Comparison of magnetic properties of MRI contrast media solutions at different magnetic field strengths. Investig. Radiol. 40, 715–724 (2005).
39. Mezer, A. et al. Quantifying the local tissue volume and composition in individual brains with magnetic resonance imaging. Nat. Med. 19, 1667–1672 (2013).
40. Koenig, S. H. Cholesterol of myelin is the determinant of gray‐white contrast in MRI of brain. Magn. Reson. Med. 20, 285–291 (1991).
41. Koenig, S. H., Brown, R. D., Spiller, M. & Lundbom, N. Relaxometry of brain: why white matter appears bright in MRI. Magn. Reson. Med. 14, 482–495 (1990).
42. Kucharczyk, W., Macdonald, P. M., Stanisz, G. J. & Henkelman, R. M. Relaxivity and magnetization transfer of white matter lipids at MR imaging: importance of cerebrosides and pH. Radiology 192, 521–529 (1994).
43. Fullerton, G. D., Potter, J. L. & Dornbluth, N. C. NMR relaxation of protons in tissues and other macromolecular water solutions. Magn. Reson. Imaging 1, 209–226 (1982).
44. Morawski, M. et al. Developing 3D microscopy with CLARITY on human brain tissue: towards a tool for informing and validating MRI-based histology. Neuroimage 182, 417–428 (2018).
45. Leuze, C. et al. The separate effects of lipids and proteins on brain MRI contrast revealed through tissue clearing. Neuroimage 156, 412–422 (2017).
46. Ben-David, E. & Shifman, S. Networks of neuronal genes affected by common and rare variants in autism spectrum disorders. PLoS Genet. 8, e1002556 (2012).
47. Zecca, L., Youdim, M. B. H., Riederer, P., Connor, J. R. & Crichton, R. R. Iron, brain ageing and neurodegenerative disorders. Nat. Rev. Neurosci. 5, 863–873 (2004).
48. Langkammer, C. et al. Quantitative MR imaging of brain iron: a postmortem validation study. Radiology 257, 455–462 (2010).
49. Freeman, S. H. et al. Preservation of neuronal number despite age-related cortical brain atrophy in elderly subjects without Alzheimer disease. J. Neuropathol. Exp. Neurol. 67, 1205–1212 (2008).
50. Burke, S. N. & Barnes, C. A. Neural plasticity in the ageing brain. Nat. Rev. Neurosci. 7, 30–40 (2006).
51. Bowley, M. P., Cabral, H., Rosene, D. L. & Peters, A. Age changes in myelinated nerve fibers of the cingulate bundle and corpus callosum in the rhesus monkey. J. Comp. Neurol. 518, 3046–3064 (2010).
52. Callaghan, M. F., Helms, G., Lutti, A., Mohammadi, S. & Weiskopf, N. A general linear relaxometry model of R1 using imaging data. Magn. Reson. Med. 73, 1309–1314 (2015).
53. Piomelli, D., Astarita, G. & Rapaka, R. A neuroscientist’s guide to lipidomics. Nat. Rev. Neurosci. 8, 743–754 (2007).
54. Sethi, S., Hayashi, M. A., Sussulini, A., Tasic, L. & Brietzke, E. Analytical approaches for lipidomics and its potential applications in neuropsychiatric disorders. World J. Biol. Psychiatry 18, 506–520 (2017).
55. Fantini, J. & Yahi, N. Brain Lipids in Synaptic Function and Neurological Disease: Clues to Innovative Therapeutic Strategies for Brain Disorders (Academic Press, United States, 2015).
56. Shinitzky, M. Patterns of lipid changes in membranes of the aged brain. Gerontology 33, 149–154 (1987).
57. Martin, M., Dotti, C. G. & Ledesma, M. D. Brain cholesterol in normal and pathological aging. Biochim. Biophys. Acta-Mol. Cell Biol. Lipids 1801, 934–944 (2010).
58. Calucci, L. & Forte, C. Proton longitudinal relaxation coupling in dynamically heterogeneous soft systems. Prog. Nucl. Magn. Reson. Spectrosc. 55, 296–323 (2009).
59. Halle, B. Molecular theory of field-dependent proton spin-lattice relaxation in tissue. Magn. Reson. Med. 56, 60–72 (2006).
60. West, M. J., Coleman, P. D., Flood, D. G. & Troncoso, J. C. Differences in the pattern of hippocampal neuronal loss in normal ageing and Alzheimer’s disease. Lancet 344, 769–772 (1994).
61. West, M. J., Kawas, C. H., Stewart, W. F., Rudow, G. L. & Troncoso, J. C. Hippocampal neurons in pre-clinical Alzheimer’s disease. Neurobiol. Aging 25, 1205–1212 (2004).
62. Slater, D. A. et al. Evolution of white matter tract microstructure across the life span. Hum. Brain Mapp. 40, 2252–2268 (2019).
63. Jarmusch, A. K. et al. Lipid and metabolite profiles of human brain tumors by desorption electrospray ionization-MS. Proc. Natl Acad. Sci. U.S.A. 113, 1486–1491 (2016).
64. Wenk, M. R. The emerging field of lipidomics. Nat. Rev. Drug Discov. 4, 594–610 (2005).
65. Eberlin, L. S. et al. Classifying human brain tumors by lipid imaging with mass spectrometry. Cancer Res. 72, 645–654 (2012).
66. Shtangel, O. & Mezer, A. A phantom system designed to assess the effects of membrane lipids on water proton relaxation. bioRxiv 387845 (2018). https://doi.org/10.1101/387845.
67. Akbarzadeh, A. et al. Liposome: methods of preparation and applications. Liposome Technol. 6, 102 (2013).
68. Ben-Eliezer, N., Sodickson, D. K. & Block, K. T. Rapid and accurate T2 mapping from multi-spin-echo data using Bloch-simulation-based reconstruction. Magn. Reson. Med. 73, 809–817 (2015).
69. Mezer, A., Rokem, A., Berman, S., Hastie, T. & Wandell, B. A. Evaluating quantitative proton-density-mapping methods. Hum. Brain Mapp. 37, 3623–3635 (2016).
70. Avants, B. B., Tustison, N. & Song, G. Advanced normalization tools (ANTS). Insight J. (2009). http://hdl.handle.net/10380/3113.
72. Behrens, T. E. J. et al. Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magn. Reson. Med. (2003). https://doi.org/10.1002/mrm.10609.
73. Andersson, J. L. R., Skare, S. & Ashburner, J. How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage (2003). https://doi.org/10.1016/S1053-8119(03)00336-7.
74. Andersson, J. L. R. & Sotiropoulos, S. N. An integrated approach to correction for off-resonance effects and subject movement in diffusion MR imaging. Neuroimage (2016). https://doi.org/10.1016/j.neuroimage.2015.10.019.
75. Weiskopf, N. et al. Quantitative multi-parameter mapping of R1, PD*, MT, and R2* at 3T: a multi-center validation. Front. Neurosci. (2013). https://doi.org/10.3389/fnins.2013.00095.
76. Fischl, B. FreeSurfer. Neuroimage 62, 774–781 (2012).
77. Patenaude, B., Smith, S. M., Kennedy, D. N. & Jenkinson, M. A Bayesian model of shape and appearance for subcortical brain segmentation. Neuroimage (2011). https://doi.org/10.1016/j.neuroimage.2011.02.046.
78. Shomonov-Wagner, L., Raz, A. & Leikin-Frenkel, A. Alpha linolenic acid in maternal diet halts the lipid disarray due to saturated fatty acids in the liver of mice offspring at weaning. Lipids Health Dis. (2015). https://doi.org/10.1186/s12944-015-0012-7.
Download references
Acknowledgements
This work was supported by ISF grant 0399306, awarded to A.A.M. We acknowledge Ady Zelman for assistance in collecting the human MRI data. We thank Assaf Friedler for assigning research lab space and advising on the lipid sample experiments. We thank Inbal Goshen for assigning research lab space and advising on the protein and ion samples as well as the porcine brain experiments. We thank Magnus Soderberg for advising on histological data interpretation. We are grateful to Brian A. Wandell, Jason Yeatman, Hermona Soreq, Ami Citri, Mark Does, Yaniv Ziv, Ofer Yizhar, Shai Berman, Roey Schurr, Jonathan Bain, Asier Erramuzpe Aliaga, Menachem Gutman, and Esther Nachliel for their critical reading of the manuscript and their very useful comments. We thank Prof. Alicia Leikin-Frenkel for her guidance with the TLC analysis. We thank Rona Shaharabani for guidance and support in the post-mortem experiments.
Affiliations
The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, 9190401, Israel
Shir Filo, Oshrat Shtangel, Noga Salamon, Adi Kol, Batsheva Weisinger & Aviv A. Mezer
Department of Genetics, The Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, 9190401, Israel
Sagiv Shifman
Contributions
S.F., O.S., and A.A.M. conceived of the presented idea. S.F. and A.A.M. wrote the manuscript and designed the figures. S.F. collected the human and non-human brain datasets and analyzed them. O.S. performed the phantom experiments and analyzed them. B.W. performed the phantom experiments for non-lipid compounds. N.S. performed the gene-expression analysis. S.S. assisted and instructed with the gene-expression analysis. A.K. performed the porcine brain dissection.
Corresponding author
Correspondence to Aviv A. Mezer.
Competing interests
A.A.M., S.F., O.S., and the Hebrew University of Jerusalem have filed a patent application describing the technology used to measure MDM in this work. The other authors declare no competing interests.