The Use of ChatGPT in the World of BioInformatics and Cancer Research and the Development of BioGPT by Microsoft
Curator: Stephen J. Williams, Ph.D.
Chatbots are being used in multiple interdisciplinary areas in research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.
“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.”
The field of bioinformatics is a natural fit for incorporating this technology. The curated information below shows some examples of the use of this technology in bioinformatics related to cancer research.
How can ChatGPT be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.
For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”
It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.
Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”
Thus, at its simplest level, ChatGPT could be used to ask general questions like “What is the function of gene product X?” and give a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank; or even “What are potential interactors of gene X, validated by yeast two-hybrid?” without going to curated interactome databases or using expensive software like Genie.
Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”
It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.
To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.
Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:
First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.
Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.
Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.
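To make this concrete, here is a minimal sketch of the fine-tune-then-generate workflow using Hugging Face’s Transformers and Datasets libraries. Since ChatGPT’s weights are not publicly available, the sketch assumes GPT-2 as a stand-in base model, and variant_descriptions.txt is a hypothetical training file with one description per line:

```python
# Minimal sketch: fine-tune a GPT-style model on variant descriptions,
# then generate from a seed prompt. File name and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One plain-text description per line (hypothetical training file)
dataset = load_dataset("text", data_files={"train": "variant_descriptions.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="variant-gpt", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives the causal language-modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Generate a description from a seed prompt
prompt = "Generate a description of a genetic variant associated with increased risk of breast cancer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```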
It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.
To train a language model like ChatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:
Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.
Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset (a minimal sketch of this step follows the list).
Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as ChatGPT on your specific dataset, or you can train a new model from scratch.
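As an illustration of the preprocessing step above, here is a minimal sketch; the two “papers” are invented stand-ins, and Porter stemming is one simple way to reduce vocabulary size:

```python
# Minimal sketch of preprocessing: tokenize paper text and stem tokens
# to build a reduced vocabulary. The example texts are invented.
import re
from collections import Counter

from nltk.stem import PorterStemmer  # no corpus download required

papers = [
    "BRCA1 mutations were associated with increased breast cancer risk.",
    "Reduced TP53 expression was observed in the tumor samples studied.",
]

stemmer = PorterStemmer()
vocabulary = Counter()
for text in papers:
    tokens = re.findall(r"[A-Za-z0-9]+", text.lower())  # simple word tokenizer
    vocabulary.update(stemmer.stem(tok) for tok in tokens)

print(vocabulary.most_common(10))
```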
ChatGPT could also be useful for sequence analysis
A few examples of sequence analysis tasks for which ChatGPT could be useful include:
Analyzing protein structure
Identifying functional regions of a protein
Predicting protein-protein interactions
Identifying protein homologs
Generating protein alignments
All of this could be done without access to UNIX servers, proprietary software, or knowledge of GCG coding; a sketch of one such task follows.
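For instance, the kind of snippet ChatGPT might return if asked to generate a pairwise protein alignment in Python is a few lines of Biopython that run on an ordinary laptop; the sequences below are made up:

```python
# Illustrative only: a global pairwise protein alignment with Biopython.
from Bio import Align

aligner = Align.PairwiseAligner()  # Needleman-Wunsch-style global alignment
aligner.mode = "global"
seq1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
seq2 = "MKTAYIAKQRQISFVKSHFARQLEERLGLIEVQ"
alignment = aligner.align(seq1, seq2)[0]
print(alignment)                   # pretty-printed pairwise alignment
print("score:", alignment.score)
```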
ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:
Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly (see the sketch after this list).
Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.
Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.
Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.
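As a hedged sketch of the text-summarization idea, the Hugging Face pipeline API can run a summarizer in a few lines; the checkpoint here (a general-domain news summarizer, not a biomedical model) and the input text are illustrative stand-ins:

```python
# Sketch: abstractive summarization with a general-purpose checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Pre-trained language models have attracted increasing attention in the "
    "biomedical domain, inspired by their great success in the general natural "
    "language domain. While BERT-style models have been extensively studied, "
    "their lack of generation ability constrains their application scope."
)
print(summarizer(abstract, max_length=40, min_length=10)[0]["summary_text"])
```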
It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.
ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:
Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses (see the extraction sketch after this list).
Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.
Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.
Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.
Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.
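Below is a minimal sketch of the extraction idea using a token-classification (named-entity-recognition) pipeline. The checkpoint dslim/bert-base-NER is a general-purpose stand-in; in practice a biomedically fine-tuned checkpoint would be substituted, and the sentence is invented:

```python
# Sketch: pull named entities out of a sentence with a NER pipeline.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
sentence = ("Trials at Johns Hopkins combined bicalutamide with docetaxel "
            "in metastatic prostate cancer.")
for entity in ner(sentence):
    # Each result carries the entity type, surface form, and confidence
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 2))
```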
PLEASE WATCH VIDEO
In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.
He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:
Using ChatGPT as a search engine: He finds more useful and relevant search results than with a standard Google or Yahoo search. This saves time, as one does not have to pore through multiple pages to find information. However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page. This capability is probably why Microsoft invested heavily in OpenAI in order to incorporate ChatGPT into its Bing search engine, as well as Office Suite programs.
ChatGPT to help with coding projects: Bioinformaticians can spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes). One can use ChatGPT to produce such code and then assist in debugging it for any flaws (see the sketch after this list).
ChatGPT to document and add coding comments: When writing code it is useful to add comments periodically to help other users understand how the code works and how the program flow proceeds, including returned variables.
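As an example of the last two items, below is the kind of small, well-commented routine one might ask ChatGPT to write and document; it computes G/C content in pure Python, with no external tools:

```python
def gc_content(seq: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    # count() scans are O(n); fine for typical gene-sized inputs
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGCGCGTTA"))  # 0.5
```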
One of the comments was interesting and directed readers to use BioGPT instead of ChatGPT:
0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts
The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
The abstract from the paper is the following:
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
Tips:
BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.
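To make these tips concrete, here is a minimal generation sketch with the Hugging Face implementation of BioGPT (checkpoint microsoft/biogpt); the prompt is illustrative:

```python
import torch
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("Bicalutamide is", return_tensors="pt")
with torch.no_grad():
    # generate() caches past_key_values internally, so previously computed
    # key/value attention pairs are not recomputed at each decoding step
    outputs = model.generate(**inputs, max_new_tokens=40, num_beams=5,
                             early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```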
This model was contributed by kamalkraj. The original code can be found in the microsoft/BioGPT repository on GitHub.
BioGPT is MIT-licensed. The license applies to the pre-trained models as well.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
As of right now, however, this does not seem fully open access, as a sign-up is required!
We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.
Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661.
Here they show the advantage of BioGPT over the general-domain GPT-2 model in examples of text generation for biomedical terms.
Table 7. Examples of generated text (input term, model, generated text)

Input: Bicalutamide
GPT-2: “Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.”
BioGPT: “Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.”

Input: Janus kinase 3 (JAK-3)
GPT-2: “Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.”
BioGPT: “Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.”

Input: Apricitabine
GPT-2: “Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.”
BioGPT: “Apricitabine is an oral prodrug of 5-aza-2′-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.”

Input: Xylazine
GPT-2: “Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.”
BioGPT: “Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.”

Input: Psoralen
GPT-2: “Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.”
BioGPT: “Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US).”

Input: CP-673451
GPT-2: “CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-”
BioGPT: “CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.”

Input: BIIB-021
GPT-2: “BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.”
BioGPT: “BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).”
Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012.
The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.
Results: We evaluated the classification part on the ‘DDIs 2013’ dataset and ‘DTIs’ dataset, achieving F1 scores of 0.842 and 0.720, respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified the filtered sentences against the existing ground truth. The generated sentences that are not recorded in DrugBank and the DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.
Jin Q, Yang Y, Chen Q, Lu Z. GeneGPT: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv. 2023 May 16: arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.
While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.
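For context, below is a sketch of the kind of NCBI E-utilities Web API call that GeneGPT learns to emit; the esearch endpoint is real, while the specific gene query is an illustrative stand-in:

```python
# Sketch: query NCBI Gene through the E-utilities esearch endpoint.
import requests

base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {"db": "gene", "term": "BRCA1[sym] AND human[orgn]", "retmode": "json"}
resp = requests.get(base, params=params, timeout=30)
resp.raise_for_status()
print(resp.json()["esearchresult"]["idlist"])  # NCBI Gene IDs matching the query
```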
PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT
This one entitled
Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP
gives a good general description of this new Microsoft Research project and its usefulness in scanning 15 million articles on PubMed while returning ChatGPT-like answers.
Please note one of the comments, which is VERY IMPORTANT:
bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.
Much like Microsoft Azure, it seems BioGPT is meant for developers who have advanced programming skills. It seems odd, then, to be paying programmers large salaries when one or two Key Opinion Leaders from the medical field might suffice, but I am sure Microsoft will figure this out.
ALSO VIEW VIDEO
This is a talk from Microsoft on BioGPT
Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include
Mimicking vaginal cells and microbiome interactions on chip microfluidic culture
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
Scientists at Harvard University’s Wyss Institute for Biologically Inspired Engineering have developed the world’s first “vagina-on-a-chip,” which uses living cells and bacteria to mimic the microbial environment of the human vagina. It could help to test drugs against bacterial vaginosis, a common microbial imbalance that makes millions of people more susceptible to sexually transmitted diseases and puts them at risk of preterm delivery when pregnant. Vaginal health is difficult to study in a laboratory setting partly because laboratory animals have “totally different microbiomes” than humans. To address this, scientists have created a unique chip, which is an inch-long, rectangular polymer case containing live human vaginal tissue from a donor and a flow of estrogen-carrying material to simulate vaginal mucus.
The organs-on-a-chip mimic real bodily function, making it easier to study diseases and test drugs. Previous examples include models of the lungs and the intestines. In this case, the tissue acts like that of a real vagina in some important ways. It even responds to changes in estrogen by adjusting the expression of certain genes. And it can grow a humanlike microbiome dominated by “good” or “bad” bacteria. The researchers have demonstrated that Lactobacilli growing on the chip’s tissue help to maintain a low pH by producing lactic acid. Conversely, if the researchers introduce Gardnerella, the chip develops a higher pH, cell damage and increased inflammation: classic bacterial vaginosis signs. So, the chip can demonstrate how a healthy or unhealthy microbiome affects the vagina.
The next step is personalization, or subject-specific cultures from individuals. The chip is a real leap forward; it offers the prospect of testing how typical antibiotic treatments against bacterial vaginosis affect the different bacterial strains. Critics of organ-on-a-chip technology often raise the point that it models organs in isolation from the rest of the body. There are limitations: many researchers are interested in vaginal microbiome changes that occur during pregnancy because of the link between bacterial vaginosis and labor complications, and although the chip’s tissue responds to estrogen, it does not fully mimic pregnancy without feedback loops from other organs. The researchers are already working on connecting the vagina chip to a cervix chip, which could better represent the larger reproductive system.
All this information indicates that the human vagina chip offers a new model to study host-vaginal microbiome interactions in both optimal and non-optimal states, as well as providing a human-relevant preclinical model for development and testing of reproductive therapeutics, including live bio-therapeutic products for bacterial vaginosis. This microfluidic human vagina chip that enables flow through an open epithelial lumen also offers a unique advantage for studies on the effect of cervicovaginal mucus on vaginal health, as clinical mucus samples or commercially available mucins can be flowed through this channel. The role of resident and circulating immune cells in host-microbiome interactions can also be explored by incorporating these cells into the vagina chip in the future, as this has been successfully done in various other organ chip models.
The following paper in Cells describes the discovery of protein interactors of endoglin, which is recruited to membranes at the TGF-β receptor complex upon TGF-β signaling. Interestingly, a carbohydrate-binding protein, galectin-3, and an E3 ligase, TRIM21, were found to be unique interactors within this complex.
Gallardo-Vara E, Ruiz-Llorente L, Casado-Vela J, Ruiz-Rodríguez MJ, López-Andrés N, Pattnaik AK, Quintanilla M, Bernabeu C. Endoglin Protein Interactome Profiling Identifies TRIM21 and Galectin-3 as New Binding Partners. Cells. 2019 Sep 13;8(9):1082. doi: 10.3390/cells8091082. PMID: 31540324; PMCID: PMC6769930.
Abstract
Endoglin is a 180-kDa glycoprotein receptor primarily expressed by the vascular endothelium and involved in cardiovascular disease and cancer. Heterozygous mutations in the endoglin gene (ENG) cause hereditary hemorrhagic telangiectasia type 1, a vascular disease that presents with nasal and gastrointestinal bleeding, skin and mucosa telangiectases, and arteriovenous malformations in internal organs. A circulating form of endoglin (alias soluble endoglin, sEng), proteolytically released from the membrane-bound protein, has been observed in several inflammation-related pathological conditions and appears to contribute to endothelial dysfunction and cancer development through unknown mechanisms. Membrane-bound endoglin is an auxiliary component of the TGF-β receptor complex and the extracellular region of endoglin has been shown to interact with types I and II TGF-β receptors, as well as with BMP9 and BMP10 ligands, both members of the TGF-β family. To search for novel protein interactors, we screened a microarray containing over 9000 unique human proteins using recombinant sEng as bait. We find that sEng binds with high affinity, at least, to 22 new proteins. Among these, we validated the interaction of endoglin with galectin-3, a secreted member of the lectin family with capacity to bind membrane glycoproteins, and with tripartite motif-containing protein 21 (TRIM21), an E3 ubiquitin-protein ligase. Using human endothelial cells and Chinese hamster ovary cells, we showed that endoglin co-immunoprecipitates and co-localizes with galectin-3 or TRIM21. These results open new research avenues on endoglin function and regulation.
Endoglin is an auxiliary TGF-β co-receptor predominantly expressed in endothelial cells, which is involved in vascular development, repair, homeostasis, and disease [1,2,3,4]. Heterozygous mutations in the human ENDOGLIN gene (ENG) cause hereditary hemorrhagic telangiectasia (HHT) type 1, a vascular disease associated with nasal and gastrointestinal bleeds, telangiectases on skin and mucosa and arteriovenous malformations in the lung, liver, and brain [4,5,6]. The key role of endoglin in the vasculature is also illustrated by the fact that endoglin-KO mice die in utero due to defects in the vascular system [7]. Endoglin expression is markedly upregulated in proliferating endothelial cells involved in active angiogenesis, including the solid tumor neovasculature [8,9]. For this reason, endoglin has become a promising target for the antiangiogenic treatment of cancer [10,11,12]. Endoglin is also expressed in cancer cells where it can behave as both a tumor suppressor in prostate, breast, esophageal, and skin carcinomas [13,14,15,16] and a promoter of malignancy in melanoma and Ewing’s sarcoma [17]. Ectodomain shedding of membrane-bound endoglin may lead to a circulating form of the protein, also known as soluble endoglin (sEng) [18,19,20]. Increased levels of sEng have been found in several vascular-related pathologies, including preeclampsia, a disease of high prevalence in pregnant women which, if left untreated, can lead to serious and even fatal complications for both mother and baby [2,18,19,21]. Interestingly, several lines of evidence support a pathogenic role of sEng in the vascular system, including endothelial dysfunction, antiangiogenic activity, increased vascular permeability, inflammation-associated leukocyte adhesion and transmigration, and hypertension [18,22,23,24,25,26,27]. Because of its key role in vascular pathology, a large number of studies have addressed the structure and function of endoglin at the molecular level, in order to better understand its mechanism of action.
Galectin-3 Interacts with Endoglin in Cells
Galectin-3 is a secreted member of the lectin family with the capacity to bind membrane glycoproteins like endoglin and is involved in the pathogenesis of many human diseases [52]. We confirmed the protein screen data for galectin-3, as evidenced by two-way co-immunoprecipitation of endoglin and galectin-3 upon co-transfection in CHO-K1 cells. As shown in Figure 1A, galectin-3 and endoglin were efficiently transfected, as demonstrated by Western blot analysis in total cell extracts. No background levels of endoglin were observed in control cells transfected with the empty vector (Ø). By contrast, galectin-3 could be detected in all samples but, as expected, showed an increased signal in cells transfected with the galectin-3 expression vector. Co-immunoprecipitation studies of these cell lysates showed that galectin-3 was present in endoglin immunoprecipitates (Figure 1B). Conversely, endoglin was also detected in galectin-3 immunoprecipitates (Figure 1C).
Figure 1. Protein–protein association between galectin-3 and endoglin. (A–C) Co-immunoprecipitation of galectin-3 and endoglin. CHO-K1 cells were transiently transfected with pcEXV-Ø (Ø), pcEXV–HA–EngFL (Eng) and pcDNA3.1–Gal-3 (Gal3) expression vectors. (A) Total cell lysates (TCL) were analyzed by SDS-PAGE under reducing conditions, followed by Western blot (WB) analysis using specific antibodies to endoglin, galectin-3 and β-actin (loading control). Cell lysates were subjected to immunoprecipitation (IP) with anti-endoglin (B) or anti-galectin-3 (C) antibodies, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin or anti-galectin-3 antibodies, as indicated. Negative controls with an IgG2b (B) and IgG1 (C) were included. (D) Protein-protein interactions between galectin-3 and endoglin using Bio-layer interferometry (BLItz). The Ni–NTA biosensor tips were loaded with 7.3 µM recombinant human galectin-3/6xHis at the C-terminus (LGALS3), and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single channel ForteBio BLItz™ instrument.
Figure 2. Galectin-3 and endoglin co-localize in human endothelial cells. Human umbilical vein-derived endothelial cell (HUVEC) monolayers were fixed with paraformaldehyde, permeabilized with Triton X-100, incubated with the mouse mAb P4A4 anti-endoglin, washed, and incubated with a rabbit polyclonal anti-galectin-3 antibody (PA5-34819). Galectin-3 and endoglin were detected by immunofluorescence upon incubation with Alexa 647 goat anti-rabbit IgG (red staining) and Alexa 488 goat anti-mouse IgG (green staining) secondary antibodies, respectively. (A) Single staining of galectin-3 (red) and endoglin (green) at the indicated magnifications. (B) Merged images plus DAPI (nuclear staining in blue) show co-localization of galectin-3 and endoglin (yellow color). Representative images of five different experiments are shown.
Endoglin associates with the cullin-type E3 ligase TRIM21
Figure 3. Protein–protein association between TRIM21 and endoglin. (A–E) Co-immunoprecipitation of TRIM21 and endoglin. (A,B) HUVEC monolayers were lysed and total cell lysates (TCL) were subjected to SDS-PAGE under reducing (for TRIM21 detection) or nonreducing (for endoglin detection) conditions, followed by Western blot (WB) analysis using antibodies to endoglin, TRIM21 or β-actin (A). HUVEC lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or negative control antibodies, followed by WB analysis with anti-endoglin (B). (C,D) CHO-K1 cells were transiently transfected with pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (E) or pcDNA3.1–HA–hTRIM21 (T) expression vectors, as indicated. Total cell lysates (TCL) were subjected to SDS-PAGE under nonreducing conditions and WB analysis using specific antibodies to endoglin, TRIM21, and β-actin (C). Cell lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or anti-endoglin antibodies, followed by SDS-PAGE under reducing (upper panel) or nonreducing (lower panel) conditions and WB analysis with anti-TRIM21 or anti-endoglin antibodies. Negative controls of appropriate IgG were included (D). (E) CHO-K1 cells were transiently transfected with pcDNA3.1–HA–hTRIM21 and pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (FL; full-length), pDisplay–HA–EngEC (EC; cytoplasmic-less) or pDisplay–HA–EngTMEC (TMEC; cytoplasmic-less) expression vectors, as indicated. Cell lysates were subjected to immunoprecipitation with anti-TRIM21, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin antibodies, as indicated. The asterisk indicates the presence of a nonspecific band. Mr, molecular reference; Eng, endoglin; TRIM, TRIM21. (F) Protein–protein interactions between TRIM21 and endoglin using Bio-layer interferometry (BLItz). The Ni–NTA biosensor tips were loaded with 5.4 µM recombinant human TRIM21/6xHis at the N-terminus (R052), and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single channel ForteBio BLItz™ instrument.
Table 1. Human protein-array analysis of endoglin interactors¹.
¹ Microarrays containing over 9000 unique human proteins were screened using recombinant sEng as a probe. Protein interactors showing the highest scores (Z-score ≥ 2.0) are listed. GenBank (https://www.ncbi.nlm.nih.gov/genbank/) and UniProtKB (https://www.uniprot.org/help/uniprotkb) accession numbers are indicated with a yellow or green background, respectively. The cellular compartment of each protein was obtained from the UniProtKB webpage. Proteins selected for further studies (TRIM21 and galectin-3) are indicated in bold type with a blue background.
Note: the following is from the NCBI Gene and GeneCards entries on TRIM21.
Official Symbol: TRIM21 (provided by HGNC)
Official Full Name: tripartite motif containing 21 (provided by HGNC)
Primary source: HGNC:11312
See related: Ensembl:ENSG00000132109; MIM:109092; AllianceGenome:HGNC:11312
Gene type: protein coding
RefSeq status: REVIEWED
Organism: Homo sapiens
Lineage: Eukaryota; Metazoa; Chordata; Craniata; Vertebrata; Euteleostomi; Mammalia; Eutheria; Euarchontoglires; Primates; Haplorrhini; Catarrhini; Hominidae; Homo
Also known as: SSA; RO52; SSA1; RNF81; Ro/SSA
Summary: This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. The encoded protein is part of the RoSSA ribonucleoprotein, which includes a single polypeptide and one of four small RNA molecules. The RoSSA particle localizes to both the cytoplasm and the nucleus. RoSSA interacts with autoantigens in patients with Sjogren syndrome and systemic lupus erythematosus. Alternatively spliced transcript variants for this gene have been described but the full-length nature of only one has been determined. [provided by RefSeq, Jul 2008]
Expression: Ubiquitous expression in spleen (RPKM 15.5), appendix (RPKM 13.2) and 24 other tissues
Orthologs: mouse
E3 ubiquitin-protein ligase whose activity is dependent on E2 enzymes, UBE2D1, UBE2D2, UBE2E1 and UBE2E2. Forms a ubiquitin ligase complex in cooperation with the E2 UBE2D2 that is used not only for the ubiquitination of USP4 and IKBKB but also for its self-ubiquitination. Component of cullin-RING-based SCF (SKP1-CUL1-F-box protein) E3 ubiquitin-protein ligase complexes such as SCF(SKP2)-like complexes. A TRIM21-containing SCF(SKP2)-like complex is shown to mediate ubiquitination of CDKN1B (‘Thr-187’ phosphorylated form), thereby promoting its degradation by the proteasome. Monoubiquitinates IKBKB, which negatively regulates Tax-induced NF-kappa-B signaling. Negatively regulates IFN-beta production post-pathogen recognition by polyubiquitin-mediated degradation of IRF3. Mediates the ubiquitin-mediated proteasomal degradation of IgG1 heavy chain, which is linked to the VCP-mediated ER-associated degradation (ERAD) pathway. Promotes IRF8 ubiquitination, which enhances the ability of IRF8 to stimulate cytokine gene transcription in macrophages. Plays a role in the regulation of cell cycle progression. Enhances the decapping activity of DCP2. Exists as a ribonucleoprotein particle present in all mammalian cells studied and composed of a single polypeptide and one of four small RNA molecules. At least two isoforms are present in nucleated and red blood cells, and tissue-specific differences in RO/SSA proteins have been identified. The common feature of these proteins is their ability to bind hY RNAs. Involved in the regulation of innate immunity and the inflammatory response in response to IFNG/IFN-gamma. Organizes autophagic machinery by serving as a platform for the assembly of ULK1, Beclin 1/BECN1 and ATG8 family members and recognizes specific autophagy targets, thus coordinating target recognition with assembly of the autophagic apparatus and initiation of autophagy. Acts as an autophagy receptor for the degradation of IRF3, hence attenuating type I interferon (IFN)-dependent immune responses (PubMed:26347139, 16297862, 16316627, 16472766, 16880511, 18022694, 18361920, 18641315, 18845142, 19675099). Represses the innate antiviral response by facilitating the formation of the NMI-IFI35 complex through ‘Lys-63’-linked ubiquitination of NMI (PubMed:26342464). (RO52_HUMAN, P19474)
Endoglin Protein Interactome Profiling Identifies TRIM21 and Galectin-3 as New Binding Partners
Gallardo-Vara E, Ruiz-Llorente L, Casado-Vela J, Ruiz-Rodríguez MJ, López-Andrés N, Pattnaik AK, Quintanilla M, Bernabeu C. Endoglin Protein Interactome Profiling Identifies TRIM21 and Galectin-3 as New Binding Partners. Cells. 2019 Sep 13;8(9):1082. doi: 10.3390/cells8091082. PMID: 31540324; PMCID: PMC6769930.
Abstract
Endoglin is a 180-kDa glycoprotein receptor primarily expressed by the vascular endothelium and involved in cardiovascular disease and cancer. Heterozygous mutations in the endoglin gene (ENG) cause hereditary hemorrhagic telangiectasia type 1, a vascular disease that presents with nasal and gastrointestinal bleeding, skin and mucosa telangiectases, and arteriovenous malformations in internal organs. A circulating form of endoglin (alias soluble endoglin, sEng), proteolytically released from the membrane-bound protein, has been observed in several inflammation-related pathological conditions and appears to contribute to endothelial dysfunction and cancer development through unknown mechanisms. Membrane-bound endoglin is an auxiliary component of the TGF-β receptor complex and the extracellular region of endoglin has been shown to interact with types I and II TGF-β receptors, as well as with BMP9 and BMP10 ligands, both members of the TGF-β family. To search for novel protein interactors, we screened a microarray containing over 9000 unique human proteins using recombinant sEng as bait. We find that sEng binds with high affinity, at least, to 22 new proteins. Among these, we validated the interaction of endoglin with galectin-3, a secreted member of the lectin family with capacity to bind membrane glycoproteins, and with tripartite motif-containing protein 21 (TRIM21), an E3 ubiquitin-protein ligase. Using human endothelial cells and Chinese hamster ovary cells, we showed that endoglin co-immunoprecipitates and co-localizes with galectin-3 or TRIM21. These results open new research avenues on endoglin function and regulation.
Endoglin is an auxiliary TGF-β co-receptor predominantly expressed in endothelial cells, which is involved in vascular development, repair, homeostasis, and disease [1,2,3,4]. Heterozygous mutations in the human ENDOGLIN gene (ENG) cause hereditary hemorrhagic telangiectasia (HHT) type 1, a vascular disease associated with nasal and gastrointestinal bleeds, telangiectases on skin and mucosa and arteriovenous malformations in the lung, liver, and brain [4,5,6]. The key role of endoglin in the vasculature is also illustrated by the fact that endoglin-KO mice die in utero due to defects in the vascular system [7]. Endoglin expression is markedly upregulated in proliferating endothelial cells involved in active angiogenesis, including the solid tumor neovasculature [8,9]. For this reason, endoglin has become a promising target for the antiangiogenic treatment of cancer [10,11,12]. Endoglin is also expressed in cancer cells where it can behave as both a tumor suppressor in prostate, breast, esophageal, and skin carcinomas [13,14,15,16] and a promoter of malignancy in melanoma and Ewing’s sarcoma [17]. Ectodomain shedding of membrane-bound endoglin may lead to a circulating form of the protein, also known as soluble endoglin (sEng) [18,19,20]. Increased levels of sEng have been found in several vascular-related pathologies, including preeclampsia, a disease of high prevalence in pregnant women which, if left untreated, can lead to serious and even fatal complications for both mother and baby [2,18,19,21]. Interestingly, several lines of evidence support a pathogenic role of sEng in the vascular system, including endothelial dysfunction, antiangiogenic activity, increased vascular permeability, inflammation-associated leukocyte adhesion and transmigration, and hypertension [18,22,23,24,25,26,27]. Because of its key role in vascular pathology, a large number of studies have addressed the structure and function of endoglin at the molecular level, in order to better understand its mechanism of action.
Galectin-3 Interacts with Endoglin in Cells
Galectin-3 is a secreted member of the lectin family with the capacity to bind membrane glycoproteins like endoglin and is involved in the pathogenesis of many human diseases [52]. We confirmed the protein screen data for galectin-3, as evidenced by two-way co-immunoprecipitation of endoglin and galectin-3 upon co-transfection in CHO-K1 cells. As shown in Figure 1A, galectin-3 and endoglin were efficiently transfected, as demonstrated by Western blot analysis in total cell extracts. No background levels of endoglin were observed in control cells transfected with the empty vector (Ø). By contrast, galectin-3 could be detected in all samples but, as expected, showed an increased signal in cells transfected with the galectin-3 expression vector. Co-immunoprecipitation studies of these cell lysates showed that galectin-3 was present in endoglin immunoprecipitates (Figure 1B). Conversely, endoglin was also detected in galectin-3 immunoprecipitates (Figure 1C).
Figure 1. Protein–protein association between galectin-3 and endoglin. (A–C). Co-immunoprecipitation of galectin-3 and endoglin. CHO-K1 cells were transiently transfected with pcEXV-Ø (Ø), pcEXV–HA–EngFL (Eng) and pcDNA3.1–Gal-3 (Gal3) expression vectors. (A) Total cell lysates (TCL) were analyzed by SDS-PAGE under reducing conditions, followed by Western blot (WB) analysis using specific antibodies to endoglin, galectin-3 and β-actin (loading control). Cell lysates were subjected to immunoprecipitation (IP) with anti-endoglin (B) or anti-galectin-3 (C) antibodies, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin or anti-galectin-3 antibodies, as indicated. Negative controls with an IgG2b (B) and IgG1 (C) were included. (D) Protein-protein interactions between galectin-3 and endoglin using Bio-layer interferometry (BLItz). The Ni–NTA biosensors tips were loaded with 7.3 µM recombinant human galectin-3/6xHis at the C-terminus (LGALS3), and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single channel ForteBioBLItzTM instrument.
Figure 2.Galectin-3 and endoglin co-localize in human endothelial cells. Human umbilical vein-derived endothelial cell (HUVEC) monolayers were fixed with paraformaldehyde, permeabilized with Triton X-100, incubated with the mouse mAb P4A4 anti-endoglin, washed, and incubated with a rabbit polyclonal anti-galectin-3 antibody (PA5-34819). Galectin-3 and endoglin were detected by immunofluorescence upon incubation with Alexa 647 goat anti-rabbit IgG (red staining) and Alexa 488 goat anti-mouse IgG (green staining) secondary antibodies, respectively. (A) Single staining of galectin-3 (red) and endoglin (green) at the indicated magnifications. (B) Merge images plus DAPI (nuclear staining in blue) show co-localization of galectin-3 and endoglin (yellow color). Representative images of five different experiments are shown.
Endoglin associates with the cullin-type E3 ligase TRIM21
Figure 3.Protein–protein association between TRIM21 and endoglin. (A–E) Co-immunoprecipitation of TRIM21 and endoglin. A,B. HUVEC monolayers were lysed and total cell lysates (TCL) were subjected to SDS-PAGE under reducing (for TRIM21 detection) or nonreducing (for endoglin detection) conditions, followed by Western blot (WB) analysis using antibodies to endoglin, TRIM21 or β-actin (A). HUVECs lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or negative control antibodies, followed by WB analysis with anti-endoglin (B). C,D. CHO-K1 cells were transiently transfected with pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (E) or pcDNA3.1–HA–hTRIM21 (T) expression vectors, as indicated. Total cell lysates (TCL) were subjected to SDS-PAGE under nonreducing conditions and WB analysis using specific antibodies to endoglin, TRIM21, and β-actin (C). Cell lysates were subjected to immunoprecipitation (IP) with anti-TRIM21 or anti-endoglin antibodies, followed by SDS-PAGE under reducing (upper panel) or nonreducing (lower panel) conditions and WB analysis with anti-TRIM21 or anti-endoglin antibodies. Negative controls of appropriate IgG were included (D). E. CHO-K1 cells were transiently transfected with pcDNA3.1–HA–hTRIM21 and pDisplay–HA–Mock (Ø), pDisplay–HA–EngFL (FL; full-length), pDisplay–HA–EngEC (EC; cytoplasmic-less) or pDisplay–HA–EngTMEC (TMEC; cytoplasmic-less) expression vectors, as indicated. Cell lysates were subjected to immunoprecipitation with anti-TRIM21, followed by SDS-PAGE under reducing conditions and WB analysis with anti-endoglin antibodies, as indicated. The asterisk indicates the presence of a nonspecific band. Mr, molecular reference; Eng, endoglin; TRIM, TRIM21. (F) Protein–protein interactions between TRIM21 and endoglin using Bio-layer interferometry (BLItz). The Ni–NTA biosensors tips were loaded with 5.4 µM recombinant human TRIM21/6xHis at the N-terminus (R052), and protein binding was measured against 0.1% BSA in PBS (negative control) or 4.1 µM soluble endoglin (sEng). Kinetic sensorgrams were obtained using a single channel ForteBioBLItzTM instrument.
Table 1. Human protein-array analysis of endoglin interactors1.
1 Microarrays containing over 9000 unique human proteins were screened using recombinant sEng as a probe. Protein interactors showing the highest scores (Z-score ≥2.0) are listed. GeneBank (https://www.ncbi.nlm.nih.gov/genbank/) and UniProtKB (https://www.uniprot.org/help/uniprotkb) accession numbers are indicated with a yellow or green background, respectively. The cellular compartment of each protein was obtained from the UniProtKB webpage. Proteins selected for further studies (TRIM21 and galectin-3) are indicated in bold type with blue background.
Note: the following are from NCBI Genbank and Genecards on TRIM21
This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. The encoded protein is part of the RoSSA ribonucleoprotein, which includes a single polypeptide and one of four small RNA molecules. The RoSSA particle localizes to both the cytoplasm and the nucleus. RoSSA interacts with autoantigens in patients with Sjogren syndrome and systemic lupus erythematosus. Alternatively spliced transcript variants for this gene have been described but the full-length nature of only one has been determined. [provided by RefSeq, Jul 2008]
Expression
Ubiquitous expression in spleen (RPKM 15.5), appendix (RPKM 13.2) and 24 other tissues See more
This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. The encoded protein is part of the RoSSA ribonucleoprotein, which includes a single polypeptide and one of four small RNA molecules. The RoSSA particle localizes to both the cytoplasm and the nucleus. RoSSA interacts with autoantigens in patients with Sjogren syndrome and systemic lupus erythematosus. Alternatively spliced transcript variants for this gene have been described but the full-length nature of only one has been determined. [provided by RefSeq, Jul 2008]
E3 ubiquitin-protein ligase whose activity is dependent on E2 enzymes, UBE2D1, UBE2D2, UBE2E1 and UBE2E2. Forms a ubiquitin ligase complex in cooperation with the E2 UBE2D2 that is used not only for the ubiquitination of USP4 and IKBKB but also for its self-ubiquitination. Component of cullin-RING-based SCF (SKP1-CUL1-F-box protein) E3 ubiquitin-protein ligase complexes such as SCF(SKP2)-like complexes. A TRIM21-containing SCF(SKP2)-like complex is shown to mediate ubiquitination of CDKN1B (‘Thr-187’ phosphorylated-form), thereby promoting its degradation by the proteasome. Monoubiquitinates IKBKB that will negatively regulates Tax-induced NF-kappa-B signaling. Negatively regulates IFN-beta production post-pathogen recognition by polyubiquitin-mediated degradation of IRF3. Mediates the ubiquitin-mediated proteasomal degradation of IgG1 heavy chain, which is linked to the VCP-mediated ER-associated degradation (ERAD) pathway. Promotes IRF8 ubiquitination, which enhanced the ability of IRF8 to stimulate cytokine genes transcription in macrophages. Plays a role in the regulation of the cell cycle progression. Enhances the decapping activity of DCP2. Exists as a ribonucleoprotein particle present in all mammalian cells studied and composed of a single polypeptide and one of four small RNA molecules. At least two isoforms are present in nucleated and red blood cells, and tissue specific differences in RO/SSA proteins have been identified. The common feature of these proteins is their ability to bind HY RNAs.2. Involved in the regulation of innate immunity and the inflammatory response in response to IFNG/IFN-gamma. Organizes autophagic machinery by serving as a platform for the assembly of ULK1, Beclin 1/BECN1 and ATG8 family members and recognizes specific autophagy targets, thus coordinating target recognition with assembly of the autophagic apparatus and initiation of autophagy. Acts as an autophagy receptor for the degradation of IRF3, hence attenuating type I interferon (IFN)-dependent immune responses (PubMed:26347139, 16297862, 16316627, 16472766, 16880511, 18022694, 18361920, 18641315, 18845142, 19675099). Represses the innate antiviral response by facilitating the formation of the NMI-IFI35 complex through ‘Lys-63’-linked ubiquitination of NMI (PubMed:26342464). ( RO52_HUMAN,P19474 )
Molecular function for TRIM21 Gene according to UniProtKB/Swiss-Prot
Function:
E3 ubiquitin-protein ligase whose activity is dependent on the E2 enzymes UBE2D1, UBE2D2, UBE2E1 and UBE2E2. Forms a ubiquitin ligase complex in cooperation with the E2 enzyme UBE2D2 that is used not only for the ubiquitination of USP4 and IKBKB but also for its self-ubiquitination. Component of cullin-RING-based SCF (SKP1-CUL1-F-box protein) E3 ubiquitin-protein ligase complexes such as SCF(SKP2)-like complexes. A TRIM21-containing SCF(SKP2)-like complex has been shown to mediate ubiquitination of CDKN1B ('Thr-187' phosphorylated form), thereby promoting its degradation by the proteasome. Monoubiquitinates IKBKB, which negatively regulates Tax-induced NF-kappa-B signaling. Negatively regulates IFN-beta production post-pathogen recognition by polyubiquitin-mediated degradation of IRF3. Mediates the ubiquitin-mediated proteasomal degradation of IgG1 heavy chain, which is linked to the VCP-mediated ER-associated degradation (ERAD) pathway. Promotes IRF8 ubiquitination, which enhances the ability of IRF8 to stimulate cytokine gene transcription in macrophages. Plays a role in the regulation of cell cycle progression. Enhances the decapping activity of DCP2. Exists as a ribonucleoprotein particle present in all mammalian cells studied and composed of a single polypeptide and one of four small RNA molecules. At least two isoforms are present in nucleated and red blood cells, and tissue-specific differences in RO/SSA proteins have been identified. The common feature of these proteins is their ability to bind HY RNAs. Involved in the regulation of innate immunity and the inflammatory response in response to IFNG/IFN-gamma. Organizes autophagic machinery by serving as a platform for the assembly of ULK1, Beclin 1/BECN1 and ATG8 family members and recognizes specific autophagy targets, thus coordinating target recognition with assembly of the autophagic apparatus and initiation of autophagy. Acts as an autophagy receptor for the degradation of IRF3, hence attenuating type I interferon (IFN)-dependent immune responses (PubMed:26347139, 16297862, 16316627, 16472766, 16880511, 18022694, 18361920, 18641315, 18845142, 19675099). Represses the innate antiviral response by facilitating the formation of the NMI-IFI35 complex through 'Lys-63'-linked ubiquitination of NMI (PubMed:26342464). (RO52_HUMAN, P19474)
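Curated summaries like the one above can also be retrieved programmatically instead of copied by hand from GeneCards or UniProt. Below is a minimal sketch, assuming the public UniProt REST API and its current JSON field names (comments, commentType, texts), which are not guaranteed to be stable; treat it as an illustration rather than a supported client.

```python
import requests

def uniprot_function_text(accession: str) -> str:
    """Fetch the curated 'Function' comment for a UniProtKB accession."""
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.json"
    entry = requests.get(url, timeout=30).json()
    texts = []
    for comment in entry.get("comments", []):      # assumed JSON layout
        if comment.get("commentType") == "FUNCTION":
            texts.extend(t["value"] for t in comment.get("texts", []))
    return " ".join(texts)

# TRIM21 / RO52_HUMAN, accession P19474
print(uniprot_function_text("P19474"))
```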
Other Articles in this Open Access Scientific Journal on Galectins and the Proteasome Include
Accelerating PROTAC drug discovery: Establishing a relationship between ubiquitination and target protein degradation
Curator: Stephen J. Williams, Ph.D.
To successfully set up HTS screening with novel PROTACs without pre-existing knowledge, we recommend the following steps (a worked sketch of the step 4 analysis follows below):
1. Identify a model PROTAC that can potentially demonstrate activity, based on knowledge of PROTAC design or in vitro binding studies.
2. Perform a time course study with 2–3 doses of the model PROTAC, based on the affinities of the selected ligands.
3. Monitor ubiquitination and degradation profiles using the plate-based assay and identify the time point that demonstrates UbMax.
4. Perform a dose response at the selected time point with a library of PROTACs to establish rank order potency.
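As a companion to step 4, here is a minimal analysis sketch in Python. All dose and signal values below are invented placeholders, not data from the study; in practice they would be the relative CL intensities from the plate-based TUBE assay, and UbMax is simply the peak ubiquitination signal across the dose series.

```python
import numpy as np

# Hypothetical relative CL intensities (signal over DMSO control) for a
# dose series of each PROTAC, mimicking the plate-based TUBE readout.
doses_nM = np.array([1, 10, 100, 1000, 10000])
relative_cl = {
    "dBET1":   np.array([1.1, 1.8, 3.5, 2.9, 1.6]),
    "dBET6":   np.array([1.4, 4.2, 6.8, 5.1, 2.2]),
    "BETd246": np.array([1.2, 2.5, 5.0, 4.0, 1.9]),
    "dBET57":  np.array([1.0, 1.3, 2.1, 1.8, 1.2]),
}

# UbMax = peak ubiquitination signal across the dose response; comparing
# UbMax values (and the doses at which they occur) gives a rank order.
ubmax = {name: (sig.max(), doses_nM[sig.argmax()])
         for name, sig in relative_cl.items()}
for name, (peak, dose) in sorted(ubmax.items(), key=lambda kv: -kv[1][0]):
    print(f"{name}: UbMax = {peak:.1f}x at {dose} nM")
```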
INTRODUCTION
Ubiquitination is a major regulatory mechanism to maintain cellular protein homeostasis by marking proteins for proteasome-mediated degradation [1]. Given ubiquitin's role in a variety of pathologies, the idea of targeting the Ubiquitin Proteasome System (UPS) is at the forefront of drug discovery [2]. "Event-driven" protein degradation using the cell's own UPS is a promising technology for addressing the "undruggable" proteome [3]. Targeted protein degradation (TPD) has emerged as a new paradigm and promising therapeutic option to selectively attack previously intractable drug targets using PROteolysis TArgeting Chimeras (PROTACs) [4]. PROTACs are heterobifunctional molecules in which a ligand that targets a specific E3 ligase is tethered, via an optimized chemical linker, to another ligand specific for the target protein. A functional PROTAC induces a ternary E3-PROTAC-target complex, resulting in poly-ubiquitination and subsequent controlled protein degradation [5]. The ability to function at sub-stoichiometric levels for efficient degradation is a significant advantage over traditional small molecules.
PROTACs have been explored in multiple disease fields, with a focus on only a few ligases such as cereblon (CRBN), Von Hippel-Lindau (VHL), IAP and MDM2. Cancer targets like the androgen receptor, estrogen receptor, BTK, BCL2, CDK8 and c-MET [[6], [7], [8], [9], [10], [11]] have been successfully targeted using PROTACs. A variety of BET family (BRD2, BRD3, and BRD4) PROTACs were designed using multiple ligases: an MDM2-based BRD4 PROTAC [12], CRBN-based dBET1 [13] and BETd-24-6 [14] for triple-negative breast cancer, the membrane-permeability-enhanced dBET6 [15], and the dBET57 PROTAC [16]. PROTACs for hepatitis C virus (HCV) protease, IRAK4 and Tau [[17], [18], [19]] have been explored for viral, immune and neurodegenerative diseases, respectively. Currently, expansion of the PROTAC field to the vast undruggable proteome is hindered by this narrow focus on select E3 ligases, and the lack of reliable tools to rapidly evaluate PROTACs based on new ligases is slowing progress. Screening platforms must be physiologically relevant and represent true PROTAC cellular function, i.e., PROTAC-mediated target ubiquitination and degradation.
Cellular PROTAC screening is traditionally performed using cell lines harboring reporter genes and/or Western blotting. While Western blotting is easy to perform, it is low throughput, semi-quantitative, and lacks sensitivity. Reporter gene assays address some of these issues, but they are challenged by reporter tags that contain internal lysines, leading to artifacts. Currently, no approaches are available that can identify true PROTAC effects, such as target ubiquitination and proteasome-mediated degradation, simultaneously. High affinity ubiquitin capture reagents like TUBEs [20] (tandem ubiquitin binding entities) are engineered ubiquitin binding domains (UBDs) that allow detection of ultralow levels of polyubiquitinated proteins under native conditions, with affinities as low as 1 nM. The versatility and selectivity of TUBEs make them superior to antibodies, and they also offer chain selectivity (K48, K63, or linear) [21]. High throughput assays that report the efficacy of multiple PROTACs simultaneously by monitoring PROTAC-mediated ubiquitination can help establish rank order potency and guide chemists in developing meaningful structure-activity relationships (SAR) rapidly.
In the current study, we employ TUBEs as affinity capture reagents to monitor PROTAC-induced poly-ubiquitination and degradation as a measure of potency. We established and validated proof-of-concept cell-based assays in a 96-well format using PROTACs for three therapeutic target classes: BET family proteins, kinases, and KRAS. To our knowledge, the proposed PROTAC assays are the first of their kind that can simultaneously 1) detect ubiquitination of endogenous, native protein targets, 2) evaluate the potency of PROTACs, and 3) establish a link between the UPS and protein degradation. Using these TUBE assays, we established rank order potencies among four BET family PROTACs, dBET1, dBET6, BETd246 and dBET57, based on peak ubiquitination signals ("UbMax") of the target protein. The TUBE assay also demonstrated the efficiency of a promiscuous kinase PROTAC in degrading Aurora kinase A at sub-nanomolar concentrations within 1 h, and a comparative study identified changes in the ubiquitination and degradation profiles of KRAS G12C PROTACs recruiting two different E3 ligases (CRBN and VHL). All of the ubiquitination and degradation profiles obtained from TUBE-based assays correlate well with traditional low throughput immunoblotting. The significant correlation between DC50 values obtained from protein degradation by western blotting and UbMax values demonstrates that our proposed assays can aid high-throughput screening and drastically eliminate artifacts, overcoming bottlenecks in PROTAC drug discovery.
Fig. 1. Schematic representation of the TUBE assay to monitor PROTAC-mediated cellular ubiquitination of target proteins.
Fig. 2. TUBE-based assay screening of PROTACs: Jurkat cell lysates were treated with BRD3-specific PROTACs A) dBET1, B) dBET6, C) BETd24-6, and D) dBET57. Polyubiquitination profiles and UbMax of BRD3 for each PROTAC are represented as relative CL intensity. Relative CL intensities were calculated by dividing raw CL signals from a given PROTAC dose over DMSO-treated samples. Error bars represent standard deviations, n = 3.
Fig. 3. PROTAC-mediated degradation of bromodomain proteins analyzed by anti-BRD3 western blotting. Dose response of PROTACs dBET1, dBET6, BETd-24-6 and dBET57 at 45 min in Jurkat cells demonstrates degradation of BRD3; actin serves as loading control.
Fig. 4. PROTAC-mediated ubiquitination and degradation of AURKA in K562 cells. (A) Time course study to evaluate intracellular ubiquitination and degradation. (B) Western blot analysis of time course study: degradation kinetics. (C) A dose response study to evaluate the DC50 of the promiscuous kinase PROTAC in K562 cells. (D) Western blot analysis of dose response study to monitor degradation, with GAPDH as loading control. Error bars represent standard deviation, n = 3.
Sperm damage and fertility problems due to COVID-19
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
Many couples initially deferred attempts at pregnancy or delayed fertility care due to concerns about coronavirus disease 2019 (COVID-19). One significant fear during the COVID-19 pandemic was the possibility of sexual transmission. Many couples have since resumed fertility care while accepting the various uncertainties associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), including the evolving knowledge related to male reproductive health. Significant research has been conducted exploring viral shedding, tropism, sexual transmission, the impact on male reproductive hormones, and possible implications for semen quality. However, to date, limited definitive evidence exists regarding many of these aspects, creating a challenging landscape in which patients and physicians must obtain and provide the best clinical care.
A new study that looked at sperm quality in patients who had suffered symptomatic COVID-19 infections showed that the virus could impact fertility for weeks after recovery. The data showed that 60% of COVID-19-infected men had a reduction in sperm motility and 37% had a drop in sperm count; two months after recovery from COVID-19, these values came down to 28% and 6%, respectively. The researchers were also of the view that COVID-19 could not be sexually transmitted through semen after a person had recovered from the illness. Patients with mild and severe cases of COVID-19 showed similar rates of decline in sperm quality, but further work is required to establish whether or not COVID-19 could have a longer-term impact on fertility. The estimated recovery time is three months, but further follow-up studies are still required to confirm this and to determine whether permanent damage occurred in a minority of men.
Some viruses, like influenza, are already known to damage sperm, mainly by increasing body temperature. In the case of COVID-19, however, the researchers found no link between the presence or severity of fever and sperm quality. Tests showed that higher concentrations of specific COVID-19 antibodies in patients' blood serum were strongly correlated with reduced sperm function, so it was believed that the reduction in sperm quality could be linked to the body's immune response to the virus. While the study showed that there was no COVID-19 RNA present in the semen of patients who had recovered from the virus, the fact that antibodies were attacking sperm suggests the virus may cross the blood-testis barrier during the peak of an infection.
A previous report found that SARS-CoV-2 can be present in the semen of patients with COVID-19, and that SARS-CoV-2 may still be detected in the semen of recovering patients. Due to imperfect blood-testis/deferens/epididymis barriers, SARS-CoV-2 might be seeded into the male reproductive tract, especially in the presence of systemic local inflammation. Even if the virus cannot replicate in the male reproductive system, it may persist, possibly as a result of the privileged immunity of the testes.
If future studies prove that SARS-CoV-2 can be transmitted sexually, preventing sexual transmission might become a critical part of prevention efforts, especially considering that SARS-CoV-2 was detected in the semen of recovering patients. Abstinence or condom use might be considered as preventive measures for these patients. In addition, it is worth noting that there is a need for studies monitoring fetal development. Avoiding contact with a patient's saliva and blood may therefore not be enough, since the survival of SARS-CoV-2 in a recovering patient's semen maintains the likelihood of infecting others. Further studies are required on virus shedding, survival time, and concentration in semen.
Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriages and in vitro fertilization failures. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother's genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge, because aneuploidy rates within individuals of the same age can vary dramatically.
Researchers described a technique combining genomic sequencing with machine-learning methods to predict the possibility a woman will undergo a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists were able to examine genetic samples of patients using a technique called “whole exome sequencing,” which allowed researchers to home in on the protein coding sections of the vast human genome. Then they created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.
As a result, the scientists were able to create a specific risk score based on a woman's genome. The scientists also identified three genes—MCM5, FGGY and DDX60L—that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. The report thus demonstrated that sequencing data can be mined to predict patients' aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the study are promising targets for future aneuploidy studies, and identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
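As a rough illustration of this kind of pipeline (not the authors' actual implementation), the sketch below fits a regularized classifier to per-gene variant burdens and converts its predictions into a per-patient risk score. The cohort, features, and labels are random placeholders; a real analysis would use exome-derived variant features and clinically determined aneuploidy outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 500                      # hypothetical cohort size
X = rng.poisson(0.3, size=(n_patients, n_genes))    # variant counts per gene
y = rng.integers(0, 2, n_patients)                  # 1 = high egg aneuploidy

model = LogisticRegression(max_iter=1000, C=0.5)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
risk_score = model.predict_proba(X)[:, 1]           # per-patient risk score
top_genes = np.argsort(np.abs(model.coef_[0]))[::-1][:10]  # candidate genes
print("Top candidate gene indices:", top_genes)
```

On placeholder labels the cross-validated accuracy will hover near chance; the point is the shape of the workflow, not the numbers.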
New studies link cell cycle proteins to immunosurveillance of premalignant cells
Curator: Stephen J. Williams, Ph.D.
The following is from a Perspectives article in the journal Science by Virinder Reen and Jesus Gil called "Clearing Stressed Cells: Cell cycle arrest produces a p21-dependent secretome that initiates immunosurveillance of premalignant cells". This is a synopsis of the Sturmlechner et al. research article in the same issue (2).
Complex organisms repair stress-induced damage to limit the replication of faulty cells that could drive cancer. When repair is not possible, tissue homeostasis is maintained by the activation of stress response programs such as apoptosis, which eliminates the cells, or senescence, which arrests them (1). Cellular senescence causes the arrest of damaged cells through the induction of cyclin-dependent kinase inhibitors (CDKIs) such as p16 and p21 (2). Senescent cells also produce a bioactive secretome (the senescence-associated secretory phenotype, SASP) that places cells under immunosurveillance, which is key to avoiding the detrimental inflammatory effects caused by lingering senescent cells on surrounding tissues. On page 577 of this issue, Sturmlechner et al. (3) report that induction of p21 not only contributes to the arrest of senescent cells, but is also an early signal that primes stressed cells for immunosurveillance.
Senescence is a complex program that is tightly regulated at the epigenetic and transcriptional levels. For example, exit from the cell cycle is controlled by the induction of p16 and p21, which inhibit phosphorylation of the retinoblastoma protein (RB), a transcriptional regulator and tumor suppressor. Hypophosphorylated RB represses transcription of E2F target genes, which are necessary for cell cycle progression. Conversely, production of the SASP is regulated by a complex program that involves super-enhancer (SE) remodeling and activation of transcriptional regulators such as nuclear factor κB (NF-κB) or CCAAT enhancer binding protein–β (C/EBPβ) (4).
Sturmlechner et al. found that activation of p21 following stress rapidly halted cell cycle progression and triggered an internal biological timer (of ∼4 days in hepatocytes), allowing time to repair and resolve damage (see the figure). In parallel, C-X-C motif chemokine 14 (CXCL14), a component of the p21-activated secretory phenotype (PASP), attracted macrophages to surround and closely surveil these damaged cells. Stressed cells that recovered and normalized p21 expression suspended PASP production and circumvented immunosurveillance. However, if the p21-induced stress was unmanageable, the repair timer expired, and the immune cells transitioned from surveillance to clearance mode. Adjacent macrophages mounted a cytotoxic T lymphocyte response that destroyed damaged cells. Notably, the overexpression of p21 alone was sufficient to orchestrate immune killing of stressed cells, without the need for a senescence phenotype. Overexpression of other CDKIs, such as p16 and p27, did not trigger immunosurveillance, likely because they do not induce CXCL14 expression.

In the context of cancer, senescent cell clearance was first observed following reactivation of the tumor suppressor p53 in liver cancer cells. Restoring p53 signaling induced senescence and triggered the elimination of senescent cells by the innate immune system, prompting tumor regression (5). Subsequent work has revealed that the SASP alerts the immune system to target preneoplastic senescent cells. Hepatocytes expressing the oncogenic mutant NRASG12V (Gly12→Val) become senescent and secrete chemokines and cytokines that trigger CD4+ T cell–mediated clearance (6). Despite the relevance for tumor suppression, relatively little is known about how immunosurveillance of oncogene-induced senescent cells is initiated and controlled.
Source of image: Reen, V. and Gil, J. Clearing Stressed Cells. Science 2021;374(6567):534-535.
References
2. Sturmlechner I, Zhang C, Sine CC, van Deursen EJ, Jeganathan KB, Hamada N, Grasic J, Friedman D, Stutchman JT, Can I, Hamada M, Lim DY, Lee JH, Ordog T, Laberge RM, Shapiro V, Baker DJ, Li H, van Deursen JM. p21 produces a bioactive secretome that places stressed cells under immunosurveillance. Science. 2021 Oct 29;374(6567):eabb3420. doi: 10.1126/science.abb3420. Epub 2021 Oct 29. PMID: 34709885.
More Articles on Cancer, Senescence and the Immune System in this Open Access Online Scientific Journal Include
#TUBiol5227: Biomarkers & Biotargets: Genetic Testing and Bioethics
Curator: Stephen J. Williams, Ph.D.
The advent of direct to consumer (DTC) genetic testing, and the resultant rapid increase in its popularity as well as in the number of companies offering such services, has created some urgent and unique bioethical challenges surrounding this niche in the marketplace. At first, most DTC companies like 23andMe and Ancestry.com offered non-clinical or non-FDA-approved genetic testing as a way for consumers to draw casual inferences from their DNA sequence and the existence of known genes linked to disease risk, or to get a glimpse of their familial background. However, many issues arose, including legal, privacy, medical, and bioethical issues. Below are some articles which explain and discuss many of the problems associated with the DTC genetic testing market, as well as some alternatives which may exist.
As you can see, this market segment appears to want to expand into the nutritional consulting business as well as targeted biomarkers for specific diseases.
Rising incidence of genetic disorders across the globe will augment the market growth
Increasing prevalence of genetic disorders will propel the demand for direct-to-consumer genetic testing and will augment industry growth over the projected timeline. Increasing cases of genetic diseases such as breast cancer, achondroplasia, colorectal cancer and other diseases have elevated the need for cost-effective and efficient genetic testing avenues in the healthcare market.
For instance, according to the World Cancer Research Fund (WCRF), in 2018, over 2 million new cases of cancer were diagnosed across the globe, and breast cancer is stated as the second most commonly occurring cancer. Availability of superior quality and advanced direct-to-consumer genetic testing has drastically reduced mortality rates in people suffering from cancer by providing vigilant surveillance data even before the onset of the disease. Hence, the aforementioned factors will propel the direct-to-consumer genetic testing market over the forecast timeline.
Nutrigenomic Testing will provide robust market growth
The nutrigenomic testing segment was valued at over USD 220 million in 2019, and this segment will witness tremendous growth over 2020-2028. The growth is attributed to increasing research activities related to nutritional aspects. Moreover, obesity is another major factor that will boost demand in the direct-to-consumer genetic testing market.
Nutrigenomics testing enables professionals to recommend nutritional guidance and personalized diets to obese people and helps them keep their weight under control while maintaining a healthy lifestyle. Hence, the above-mentioned factors are anticipated to augment the demand for and adoption rate of direct-to-consumer genetic testing through 2028.
Browse key industry insights spread across 161 pages with 126 market data tables & 10 figures & charts from the report, “Direct-To-Consumer Genetic Testing Market Size By Test Type (Carrier Testing, Predictive Testing, Ancestry & Relationship Testing, Nutrigenomics Testing), By Distribution Channel (Online Platforms, Over-the-Counter), By Technology (Targeted Analysis, Single Nucleotide Polymorphism (SNP) Chips, Whole Genome Sequencing (WGS)), Industry Analysis Report, Regional Outlook, Application Potential, Price Trends, Competitive Market Share & Forecast, 2020 – 2028” in detail along with the table of contents: https://www.gminsights.com/industry-analysis/direct-to-consumer-dtc-genetic-testing-market
Targeted analysis techniques will drive the market growth over the foreseeable future
Based on technology, the DTC genetic testing market is segmented into whole genome sequencing (WGS), targeted analysis, and single nucleotide polymorphism (SNP) chips. The targeted analysis market segment is projected to witness around 12% CAGR over the forecast period. The segmental growth is attributed to recent advancements in genetic testing methods that have revolutionized the detection and characterization of genetic codes.
Targeted analysis is mainly utilized to determine defects in genes that are responsible for a disorder or a disease. Also, growing demand for personalized medicine among the population suffering from genetic diseases will boost demand for targeted analysis technology. As the technology is relatively cheap, it is a highly preferred method in direct-to-consumer genetic testing procedures. These advantages of targeted analysis are expected to enhance market growth over the foreseeable future.
Over-the-counter segment will experience a notable growth over the forecast period
The over-the-counter distribution channel is projected to witness around 11% CAGR through 2028. The segmental growth is attributed to the ease of purchasing a test kit for consumers living in rural areas of developing countries. Consumers prefer the over-the-counter distribution channel because the kits are directly examined by regulatory agencies, making them safer to use, thereby driving market growth over the forecast timeline.
Favorable regulations provide lucrative growth opportunities for direct-to-consumer genetic testing
The Europe direct-to-consumer genetic testing market held around a 26% share in 2019 and was valued at around USD 290 million. The regional growth is due to elevated government spending on healthcare to provide easy access to genetic testing avenues. Furthermore, European regulatory bodies are working on improving the regulations governing direct-to-consumer genetic testing methods. Hence, the above-mentioned factors will play a significant role in market growth.
Focus of market players on introducing innovative direct-to-consumer genetic testing devices will offer several growth opportunities
A few of the eminent players in the direct-to-consumer genetic testing market include Ancestry, Color Genomics, Living DNA, Mapmygenome, Easy DNA, FamilytreeDNA (Gene By Gene), Full Genome Corporation, Helix OpCo LLC, Identigene, Karmagenes, MyHeritage, Pathway Genomics, Genesis Healthcare, and 23andMe. These market players have undertaken various business strategies to enhance their financial stability and help them evolve as leading companies in the direct-to-consumer genetic testing industry.
For example, in November 2018, Helix launched a new genetic testing product, a DNA discovery kit, that allows customers to delve into their ancestry. This development expanded the firm's product portfolio, thereby propelling industry growth.
The following posts discuss bioethical issues related to genetic testing and personalized medicine from a clinician's and scientist's perspective.
Question: Each of these articles discusses certain bioethical issues, although each focuses on personalized medicine and treatment. Given your understanding of the robust process involved in validating clinical biomarkers and the current state of the DTC market, how could DTC testing results misinform patients and create mistrust in the physician-patient relationship?
Question: If you are developing a targeted treatment with a companion diagnostic, what bioethical concerns would you address during the drug development process to ensure fair, equitable and ethical treatment of all patients, in trials as well as post market?
Articles on Genetic Testing, Companion Diagnostics and Regulatory Mechanisms
Question: What type of regulatory concerns should one have during the drug development process with regard to the use of biomarker testing? From the last article on Protecting Your IP: how important is it, as a drug developer, to involve all payers during the drug development process?
Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?
Some in industry voice their concerns
Curator: Stephen J. Williams, PhD
Post on AI healthcare and explainable AI
In a Policy Forum article in Science, "Beware explanations from AI in health care", Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions. The FDA has already approved some AI/ML algorithms for analysis of medical images for diagnostic purposes. These have been discussed in prior posts on this site, as well as issues arising from multi-center trials. The authors of this perspective article argue that the choice of algorithm type (explainable versus interpretable) may have far-reaching consequences in health care.
Summary
Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.
Types of AI/ML Algorithms: Explainable and Interpretable algorithms
Interpretable AI: A typical AI/ML task requires constructing an algorithm from vector inputs that generates an output related to an outcome (like diagnosing a cardiac event from an image). Generally the algorithm has to be trained on past data with known parameters. When an algorithm is called interpretable, this means that it uses a transparent or "white box" function which is easily understandable. One such example might be a linear function for determining relationships, where the parameters are simple and not complex. Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
Explainable AI/ML: This type of algorithm depends upon multiple complex parameters; it takes a first round of predictions from a "black box" model and then uses a second algorithm, built from an interpretable function, to better approximate the outputs of the first model. The second algorithm is thus trained not on the original data but on the first model's predictions, resembling multiple iterations of computing. This method is more accurate, or deemed more reliable, in prediction; however, it is very complex and not easily understandable. Many medical devices that use an AI/ML algorithm use this type; examples include deep learning and neural networks.
The purpose of both these methodologies is to deal with the problem of opacity, namely the concern that AI predictions from a black box undermine trust in the AI.
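A minimal sketch of the post hoc pattern described above: a black-box model is trained first, and a transparent surrogate is then fitted to the black box's predictions to approximate its behavior. Everything here (features, the synthetic outcome label) is a placeholder for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                   # e.g., imaging-derived features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic outcome label

# First round of predictions comes from the opaque "black box" model.
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)

# The surrogate is trained on the black box's predictions, not on the true
# labels, so it explains what the model does rather than the underlying truth.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
```

The fidelity score quantifies how faithfully the interpretable surrogate mimics the black box, which is exactly the trade-off the authors caution about: the explanation approximates the model, it is not the model.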
For a deeper understanding of these two types of algorithms see here:
How interpretability is different from explainability
Why a model might need to be interpretable and/or explainable
Who is working to solve the black box problem—and how
What is interpretability?
Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.
All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.
People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.
Interpretability means that the cause and effect can be determined.
What is explainability?
ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.
To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs and output the percent chance that the given person will live to age 80.
Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significant each of the parameters, all the blue nodes, contribute to the final decision.
In this neural network, the hidden layers (the two columns of blue dots) would be the black box.
For example, we have these data inputs:
Age
BMI score
Number of years spent smoking
Career category
If this model had high explainability, we’d be able to say, for instance:
The career category is about 40% important
The number of years spent smoking weighs in at 35% important
The age is 15% important
The BMI score is 10% important
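To make percentages like these concrete, the sketch below estimates feature importances by permutation: shuffle one input at a time and measure how much performance drops. The dataset and the lives-to-80 label are fabricated for the example; real importances would come from the actual trained model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([
    rng.integers(20, 80, n),     # age
    rng.normal(26, 4, n),        # BMI score
    rng.integers(0, 40, n),      # number of years spent smoking
    rng.integers(0, 5, n),       # career category (encoded)
])
y = ((X[:, 0] < 60) & (X[:, 2] < 15)).astype(int)   # synthetic "lives to 80"

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "BMI", "smoking years", "career"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```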
Explainability: important, not always necessary
Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.
The benefit a deep neural net offers to engineers is that it creates a black box of parameters, like fake additional data points, on which a model can base its decisions. These fake data points go unknown to the engineer. The black box, or hidden layers, allows a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.
Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.
In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.
If that signal is high, that node is significant to the model’s overall performance.
If that signal is low, the node is insignificant.
With this understanding, we can define explainability as:
Knowledge of what one node represents and how important it is to the model’s performance.
So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?
The authors argue:
“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”
The authors suggest:
Enhanced, more involved clinical trials
Providing individuals added flexibility when interacting with a model, for example inputting their own test data
More interaction between users and model generators
Determining which situations call for interpretable AI versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)
Other articles on AI/ML in medicine and healthcare on this Open Access Journal include
From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery
Curator: Stephen J. Williams, PhD
Marc W. Kirschner*
Department of Systems Biology Harvard Medical School
Boston, Massachusetts 02115
With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.
As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the "where and when" questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.
That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.
Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.
High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.
Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.
You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.
Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.
5.1.5. Large-Scale Proteomics
While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced both by cytotoxic and non-cytotoxic compounds, which can be detected by mass spectrometry to give information on a compound's mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.
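The aggregation idea can be made concrete with a toy example: given a drug-by-protein matrix of log2 abundance changes from proteome-wide MS, ask whether each compound's annotated target ranks among its most-depleted proteins. The compounds, values, and target annotations below are hypothetical.

```python
import pandas as pd

# Toy (drug x protein) matrix of log2 abundance changes from MS profiling.
lfc = pd.DataFrame(
    {"BRD4": [-2.1, -0.1], "AURKA": [-0.2, -1.8], "GAPDH": [0.0, 0.1]},
    index=["degrader_A", "degrader_B"],        # hypothetical compounds
)
annotated_target = {"degrader_A": "BRD4", "degrader_B": "AURKA"}

for drug, target in annotated_target.items():
    rank = lfc.loc[drug].rank().loc[target]    # rank 1 = most depleted protein
    print(f"{drug}: target {target} depletion rank = {rank:.0f}")
```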
5.2. Genetic Approaches
Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has been long realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights in the compound’s mode of action.
Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.
5.2.1. Resistance Cloning
The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.
In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a following study, they optimized their pipeline "DrugTargetSeqR" by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
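The clone-clustering logic lends itself to a short sketch: genes mutated independently in multiple resistant clones are candidate targets, while known multidrug-resistance genes are filtered out, mirroring the DrugTargetSeqR counter-screen. The variant calls below are hypothetical.

```python
from collections import Counter

# Hypothetical mutated-gene sets called from sequencing of resistant clones.
clone_mutations = {
    "clone1": {"PLK1", "TP53"},
    "clone2": {"PLK1", "KRAS"},
    "clone3": {"ABCB1"},          # efflux-driven, non-specific resistance
    "clone4": {"PLK1"},
}
MDR_GENES = {"ABCB1", "ABCG2"}    # counter-screen: drop efflux transporters

counts = Counter(g for muts in clone_mutations.values() for g in muts - MDR_GENES)
candidates = [gene for gene, c in counts.items() if c >= 2]  # recurrent hits
print("Candidate target genes:", candidates)                 # -> ['PLK1']
```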
While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutation in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274 (an anticancer agent with unknown target), and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].
When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].
While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.
5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens
When genetic perturbations are combined with small-molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. A gene perturbation can render cells resistant to the compound (suppressor interaction) or, conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose to ensure that both types of interactions can be found in the final dataset, and it is often necessary to use a variety of compound doses (i.e., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
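To make this scoring concrete, the following minimal Python sketch computes per-gene interaction scores from hypothetical sgRNA count tables; all counts, gene names, and column names are invented for illustration, and real screens are analyzed with dedicated tools such as MAGeCK.

```python
# Minimal sketch: scoring gene-drug interactions from a pooled CRISPR screen,
# assuming hypothetical sgRNA counts after drug vs. vehicle treatment at a
# sublethal dose. All values below are illustrative only.
import numpy as np
import pandas as pd

counts = pd.DataFrame({
    "gene":    ["NAMPT", "NAMPT", "TP53", "TP53", "CTRL", "CTRL"],
    "drug":    [850, 910, 12, 20, 100, 110],    # reads after compound treatment
    "vehicle": [100, 120, 95, 110, 105, 98],    # reads after DMSO control
})

# Normalize to library size, add a pseudocount, and compute per-sgRNA log2 fold change.
for col in ("drug", "vehicle"):
    counts[col + "_cpm"] = counts[col] / counts[col].sum() * 1e6
counts["lfc"] = np.log2((counts["drug_cpm"] + 1) / (counts["vehicle_cpm"] + 1))

# Aggregate sgRNAs per gene: positive scores suggest suppressor (resistance)
# interactions, negative scores suggest synergistic (sensitizing) interactions.
gene_scores = counts.groupby("gene")["lfc"].mean().sort_values(ascending=False)
print(gene_scores)
```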
An early example of successfully coupling a phenotypic screen to downstream genetic screening for target identification is the study of Matheny et al., who identified STF-118804 as a compound with antileukemic properties. Treating MV4-11 cells stably transduced with a high-complexity, genome-wide shRNA library with STF-118804 (four rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyltransferase (NAMPT) [122].
The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9 knockout screens for the target elucidation of the antiviral drug GSK983. The data from the two screens were complementary: the shRNA screen produced hits pointing to the direct compound target, while the CRISPR screen gave information on the cellular mechanism of action of the compound. A likely reason for this lies in the level of protein depletion reached by each method: shRNAs lead to decreased, but not abolished, protein levels, which is advantageous when studying essential genes. However, knockdown may not produce a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].
Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haploinsufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established approach in yeast, where heterozygous deletions create a ~50% protein background [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, leaving residual protein levels. Indeed, NAMPT was found to be the target of the phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, showing that direct target identification through CRISPR-knockout screens is indeed possible [126].
An alternative strategy was employed by the Weissman lab, which combined genome-wide CRISPR-interference and CRISPR-activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits with opposite action in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. In the next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, reasoning that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the genome-wide rigosertib CRISPRi screen and compared the focused screen results for the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed binding of rigosertib to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
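As a hedged illustration of this profile-comparison logic, the sketch below correlates chemical-genetic profiles and clusters the compounds; the profiles are simulated, not the published screen data, and the compound names are used only as labels.

```python
# Sketch: compounds sharing a target should show correlated drug-gene
# interaction scores across a focused sgRNA library. Profiles are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
genes = 50
tubulin_like = rng.normal(0, 1, genes)
profiles = {
    "rigosertib": tubulin_like + rng.normal(0, 0.3, genes),
    "ABT-751":    tubulin_like + rng.normal(0, 0.3, genes),
    "unrelated":  rng.normal(0, 1, genes),
}
names = list(profiles)
X = np.vstack([profiles[n] for n in names])

# Pairwise correlation distance (1 - Pearson r) and average-linkage clustering.
corr = np.corrcoef(X)
dist = squareform(1 - corr, checks=False)
labels = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")
print(dict(zip(names, labels)))  # rigosertib and ABT-751 should co-cluster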
From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.
SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY
Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence
The rapid improvement of next-generation sequencing (NGS) technologies and their application to large-scale cohorts in cancer research led to the common challenges of big data and opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models for determining the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.
1. Introduction
The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast number of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, which are elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research for understanding the characteristics and heterogeneity of tumors and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore the characteristics of cancers from a systems biology and a systems medicine point of view [2].

Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher the underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and more sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful for extracting essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.
Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches have led to an accumulation of large-scale NGS datasets addressing various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a wide variety of cancer research for over a decade. Additionally, there are independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques have been applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)
Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).
Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16].
2. Systems Biology in Cancer Research
Genes and their functions have been classified into gene sets based on experimental data, and our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes have been investigated, and, based on these, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data has raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, these models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand the heterogeneous characteristics of tumors and for cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools for understanding complex diseases such as cancer [55,56,57].
In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.
2.1. Biological Network Analysis for Biomarker Validation
The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Systems biology therefore offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61], and other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighborhoods [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes; the goal is therefore to find abnormalities in a specific set of genes linked by edges in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes on their neighbors according to their connectivity or distance from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology has extended towards the analysis of cancer-related pathways, since pathways tend to carry more information than a plain gene set. Such analysis is called pathway enrichment analysis (PEA) [69,70] and incorporates the topology of biological networks. At the same time, however, the limited coverage of pathway data needs to be considered: because pathway databases do not yet cover all known genes, an integrative analysis of omics data can lose a substantial number of genes when restricted to pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans’, and Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. The bottom line is that, regardless of whether researchers choose gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods depend strongly on the quality of the external gene-set and pathway data [72].
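As a minimal illustration of the gene-set logic underlying such tools, the following sketch runs a hypergeometric over-representation test on an invented candidate list and pathway; the gene names and universe size are placeholders, not real annotation data.

```python
# Minimal sketch of a gene-set over-representation test of the kind underlying
# GSA tools: does a candidate gene list overlap a pathway gene set more than
# expected by chance? All gene names below are illustrative.
from scipy.stats import hypergeom

background = 20000                      # genes in the assayed universe (assumed)
pathway = {"TP53", "MDM2", "CDKN1A", "ATM", "CHEK2"}
candidates = {"TP53", "ATM", "CHEK2", "BRCA1", "EGFR", "KRAS"}

overlap = len(pathway & candidates)
# P(X >= overlap) when drawing len(candidates) genes from a universe that
# contains len(pathway) pathway members.
p_value = hypergeom.sf(overlap - 1, background, len(pathway), len(candidates))
print(f"overlap={overlap}, p={p_value:.2e}")
```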
2.2. De Novo Construction of Biological Networks
While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between building and mining in silico biological networks contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules with similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction and has led the development of the field of network biology [78]; a toy version of this construction step is sketched below. Due to NGS developments, the analysis of gene co-expression networks has subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, integration of these two types of data remains tricky: Ballouz et al. compared microarray- and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are well suited to finding disease-specific co-expression gene modules, and various studies based on TCGA cancer co-expression networks have discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from gene co-expression networks when data from various conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or of different regulatory node types, such as transcription factors, inhibitors, promoter interactions, DNA methylation, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on particular experimental setups; for instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
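The following toy sketch, with simulated expression data, illustrates the basic correlation-thresholding step behind co-expression network construction; WGCNA itself uses soft thresholding and module detection on many real samples, so this is a stand-in, not the package's algorithm.

```python
# Sketch: correlate genes across samples and keep strong edges. The expression
# matrix is simulated; a real analysis would use many RNA-seq samples.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples = 20, 100
expr = rng.normal(size=(n_genes, n_samples))
expr[1] = expr[0] * 0.9 + rng.normal(0, 0.3, n_samples)  # one co-expressed pair

corr = np.corrcoef(expr)
# A hard threshold is the simplest stand-in; WGCNA instead raises |r| to a
# power ("soft thresholding") to obtain a weighted network.
edges = [(i, j, round(corr[i, j], 2))
         for i, j in itertools.combinations(range(n_genes), 2)
         if abs(corr[i, j]) > 0.7]
print(edges)  # gene 0 and gene 1 should be connected
```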
2.3. Network-Based Machine Learning
A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks became suitable for integration into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Model-based approaches map the omics data onto a biological network, and appropriate algorithms traverse the network while considering the values of nodes and edges as well as the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed separately before integration with a network; subsequently, omics data and networks are merged and interpreted. Network-based models have advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of the various omics data types, multi-omics integrative analysis is challenging; however, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when machine learning approaches try to integrate two or more data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and integrate the heterogeneous datatypes there [25,88].
In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.
3. Network-Based Learning in Cancer Research
As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.
3.1. Molecular Characterization with Network Information
Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet: the algorithm uses a heat-diffusion model on a biological network to identify driver genes, or prognostic genes, in pan-cancer data [89]; a minimal propagation sketch is given below. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by the two genes is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
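The sketch below shows network propagation as a random walk with restart on a toy four-gene network with a single altered gene; HotNet's actual diffusion model and its statistical significance testing are more involved, so this is only the core idea.

```python
# Sketch: "heat" from altered genes diffuses over the network via a random
# walk with restart, so genes near many altered genes score highly.
import numpy as np

A = np.array([[0, 1, 1, 0],     # toy 4-gene interaction network (adjacency)
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)           # column-normalized transition matrix

heat0 = np.array([1.0, 0.0, 0.0, 0.0])   # gene 0 carries a genomic alteration
alpha = 0.4                                # restart probability

heat = heat0.copy()
for _ in range(100):                       # iterate to (approximate) convergence
    heat = alpha * heat0 + (1 - alpha) * W @ heat
print(heat)  # neighbors of the altered gene receive part of the heat
```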
Furthermore, in transcriptome research, network information is used to measure pathway activity and for cancer subtyping. For instance, when comparing data from two or more conditions, such as cancer types, GSEA, as introduced in Section 2, is a useful approach for getting an overview of systematic changes [50]; it is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to plain gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics into the analysis to infer a patient-specific status of pathways [94]. A recent benchmark study with pan-cancer data revealed that using network structure can improve performance [57]. In conclusion, while some data are lost due to the incompleteness of biological networks, their integration has improved performance and increased interpretability in many cases.
3.2. Tumor Heterogeneity Study with Network Information
Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. While de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto a pathway, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]; a minimal exclusivity test is sketched below. Moreover, the relationship between such genes can be essential for an organism; therefore, models analyzing such alterations integrate network-based analysis [98].
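The sketch below tests mutual exclusivity between two genes' alteration patterns on synthetic binary calls across patients; CoMEt and MEMo use more sophisticated statistics and network context, so this only illustrates the basic quantity being examined.

```python
# Sketch: Fisher's exact test for mutual exclusivity of two genes' alterations.
# The binary alteration calls below are simulated, not patient data.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(2)
gene_a = rng.random(200) < 0.3
gene_b = ~gene_a & (rng.random(200) < 0.3)   # altered mostly when A is not

table = [[np.sum(gene_a & gene_b),  np.sum(gene_a & ~gene_b)],
         [np.sum(~gene_a & gene_b), np.sum(~gene_a & ~gene_b)]]
# An odds ratio < 1 with a small one-sided p-value indicates the alterations
# co-occur less often than expected, i.e., mutual exclusivity.
odds, p = fisher_exact(table, alternative="less")
print(f"odds ratio={odds:.2f}, p={p:.2e}")
```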
In contrast, tumor purity is dependent on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanism in tumor and immune reactions. For example, McGrail et al. identified a relationship between the DNA damage response protein and immune cell infiltration in cancer. The analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining subnetworks of genes other than immune cell infiltration by considering tumor purity [102].
3.3. Drug Target Identification with Network Information
In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomics and chemical information. The proposed approaches investigated such drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug target and drug response integrating networks with chemical and multi-omic datasets. In a recent survey study by Chen et al., the authors compared 13 computational methods for drug response prediction. It turned out that gene expression profiles are crucial information for drug response prediction [105].
Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models, aiming to discover potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and related drug datasets can be integrated to effectively combine anticancer therapeutic strategies with non-cancer drug repurposing.
4. Deep Learning in Cancer Research
DNN models are developing rapidly and becoming more sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale image and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for DNN models, which require large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for machine learning research in bioinformatics, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].
In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large amounts of biological data produced under various experimental setups (Figure 1); therefore, integrating GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of deep learning in bioinformatics research [115].
4.1. Challenges for Deep Learning in Cancer Research
Many studies in biology and medicine have used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116] and has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]; a minimal generic imputation example is given below. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data; the authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because such an integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, the GAN algorithm has been receiving more attention in single-cell transcriptomics because it has been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches cope with limited datasets in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but unrelated datasets and to guide the model to solve a specific set of problems [124]. These approaches first train models on data of similar characteristics and types but from a different problem set; after pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are introducing few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes from gene expression profiles [129].
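As a generic, hedged example of imputation (deliberately simpler than the DNN-based TDimpute model described above), the following snippet fills missing omics values with scikit-learn's k-nearest-neighbor imputer; the matrix is a toy samples-by-genes table.

```python
# Sketch: k-nearest-neighbor imputation of missing omics values. This is a
# generic stand-in for illustration, not the DNN-based approaches in the text.
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[5.1, 2.0, np.nan],    # samples x genes, with missing entries
              [4.9, 1.8, 3.3],
              [5.0, np.nan, 3.1],
              [9.8, 7.5, 8.9]])

imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled)  # missing values replaced by averages of the 2 nearest samples
```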
Figure 3. (a) In various studies, NGS data are transformed into different forms: a 2-D transformed form for the convolution layer, or omics data transformed to the pathway level, GO enrichment scores, or functional spectra. (b) DNN applications for handling a lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and then fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples: a GCN is applied to aggregate neighbor information. (Created with BioRender.com).
4.2. Molecular Characterization with Network and DNN Models
DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. In an early study, Yuan et al. built DeepGene, a cancer-type classifier; they implemented data sparsity reduction methods and trained the DNN model on somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a); a minimal sketch of this embedding trick is given below. Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring genes’ mutation profiles serves as the input, and the convolution layer aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding into a 2-D image, DeepCC transformed gene expression data into functional spectra; the resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
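The sketch below illustrates the 1-D-to-2-D embedding idea, assuming genes are pre-sorted by chromosomal position; the shapes and the tiny CNN are illustrative, not the published architectures of Lyu et al. or DeepGx.

```python
# Sketch: reshape a chromosome-ordered expression vector into a 2-D "image"
# so that a standard convolutional layer can be applied.
import torch
import torch.nn as nn

n_genes = 1024                       # assume genes pre-sorted by chromosome order
profile = torch.rand(1, n_genes)     # one sample's expression vector (simulated)
image = profile.view(1, 1, 32, 32)   # reshape to a 32x32 single-channel image

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                 # e.g., two cancer subtypes
)
logits = model(image)
print(logits.shape)  # torch.Size([1, 2])
```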
Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of the cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model achieved better accuracy than a random forest model [133] and, even more importantly, better accuracy than human pathologists [12].
4.3. Tumor Heterogeneity with Network and DNN Model
As described in Section 4.1, there are several issues arising from cancer heterogeneity, e.g., the tumor microenvironment, and thus there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ‘Scaden’, a DNN model for investigating intratumor heterogeneity that deconvolves cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]; however, this information is typically not available. Recently, single-cell sequencing-based studies were conducted to tackle this problem. Because of technical limitations, single-cell sequencing data contain a great deal of missing data, noise, and batch effects [135]. Thus, various machine learning methods have been developed to process single-cell sequencing data, aiming at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser while embedding high-dimensional gene-expression profiles into lower-dimensional vectors [136]. Such autoencoder-based methods can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
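The following minimal PyTorch sketch captures the autoencoder idea behind such methods, with simulated data and illustrative dimensions; scDeepCluster itself additionally uses a count-based (ZINB) loss and a clustering objective, so this is only the embedding core.

```python
# Sketch: encode high-dimensional expression profiles into a low-dimensional
# latent space; the reconstruction objective acts as a denoiser.
import torch
import torch.nn as nn

n_cells, n_genes, latent = 128, 2000, 32
X = torch.rand(n_cells, n_genes)          # simulated expression matrix

encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, latent))
decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, n_genes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(50):
    z = encoder(X)                        # low-dimensional embedding
    loss = nn.functional.mse_loss(decoder(z), X)
    opt.zero_grad()
    loss.backward()
    opt.step()

embedding = encoder(X).detach()           # e.g., for downstream cell-type clustering
print(embedding.shape)  # torch.Size([128, 32])
```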
4.4. Drug Target Identification with Networks and DNN Models
In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurposing non-oncology drugs for cancer treatment. Such drug repurposing is faster than de novo drug discovery, and combination therapy with a non-oncology drug can help to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders; it used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug-disease dataset [15].
The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets together with other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can show higher prediction accuracy than single-omics methods; MOLI, for example, integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].
4.5. Graph Neural Network Model
In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks use biological networks as the base structure for the learning network itself. For instance, various pathway or interactome information can be integrated as the learning structure of a DNN and aggregated as heterogeneous information. In a GNN, the convolution is performed on the provided network structure of the data; convolution on a biological network therefore allows the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d); a single-layer sketch is given below. Consequently, such a model can aggregate information from distant neighbors and thus outperform other machine learning models [142].
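The following sketch implements a single graph convolution step in the style of Kipf and Welling's GCN, with a toy adjacency matrix and random features; real models stack several such layers and learn the weights by backpropagation.

```python
# Sketch: one graph convolution step. Each gene's feature vector is updated by
# a normalized average over its network neighbors (plus itself), which is how
# a GNN learns from network topology. All data below are toy values.
import torch

A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
A_hat = A + torch.eye(4)                     # add self-loops
deg = A_hat.sum(dim=1)
D_inv_sqrt = torch.diag(deg.pow(-0.5))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

X = torch.rand(4, 8)                          # per-gene feature vectors
W = torch.rand(8, 16)                         # (normally learnable) layer weights
H = torch.relu(A_norm @ X @ W)                # one graph convolution step
print(H.shape)  # torch.Size([4, 16])
```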
In the context of the gene expression inference problem, the main question is whether a gene’s expression level can be explained by aggregating the neighboring genes. A single-gene inference study by Dutil et al. showed that a GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs have also been able to classify cancer subtypes based on pathway activity measures with RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways and using them for subtype classification [147]. Furthermore, GNNs are receiving more attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both the chemical and genomic spaces (Figure 3d); chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c), with each of the proposed applications specializing in different drug-related tasks. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. The third challenge is integrating heterogeneous network information: drug discovery usually requires multi-modal integrative analysis with various networks, and GNNs can improve such integrative analyses. Lastly, although GNNs use graphs, their stacked layers still make the models hard to interpret [148].
4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge
The previous sections reviewed a variety of DNN-based approaches that perform well in numerous applications. However, deep learning is hardly a panacea for all research questions. In the following, we discuss potential limitations of DNN models. In general, DNN models applied to NGS data face two significant issues: (i) data requirements and (ii) interpretability. Deep learning usually needs a large amount of training data for reasonable performance, which is more difficult to achieve with biomedical omics data than with, for instance, image data. Today, there are not many NGS datasets that are well curated and annotated for deep learning, which may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes: the highly stacked layers make it difficult to trace the decision-making rationale. Although methodologies to understand and interpret deep learning models have improved, the ambiguity in DNN decision-making still hinders the translation of deep learning models into translational medicine [149,150].
As described before, biological networks are employed in various computational analyses for cancer research, and the studies applying DNNs have demonstrated many different ways of using prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be examined. The LINCS program analyzed data from ‘The Connectivity Map (CMap) project’ to understand regulatory mechanisms in gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that gene expression levels can be inferred from only about 1000 genes, which they called ‘landmark genes’. Subsequently, Chen et al. started from these 978 landmark genes and predicted other gene expression levels with DNN models; integrating public large-scale NGS data, the DNN showed better performance than a linear regression model. The authors conclude that this performance advantage originates from the DNN’s ability to model non-linear relationships between genes [153].
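A fully simulated toy comparison of a linear model versus a small neural network on a non-linear landmark-to-target relationship illustrates the argument; this is not the CMap data or Chen et al.'s architecture, and the simulated non-linearity guarantees the outcome by construction.

```python
# Sketch: predict a target gene's expression from "landmark" genes. With a
# non-linear ground truth, an MLP can outperform linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
landmarks = rng.normal(size=(2000, 10))             # samples x landmark genes
target = np.sin(landmarks[:, 0]) * landmarks[:, 1]  # non-linear dependency

lin = LinearRegression().fit(landmarks[:1500], target[:1500])
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(landmarks[:1500], target[:1500])

print("linear R^2:", lin.score(landmarks[1500:], target[1500:]))
print("MLP R^2:   ", mlp.score(landmarks[1500:], target[1500:]))
```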
Following this study, Beltin et al. extensively investigated various biological networks in the same context of inferring gene expression levels. They set up a simplified representation of gene expression status and solved a binary classification task. To test the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in this study, which incorporated TCGA and GTEx datasets, the random network model outperformed the model built on a known biological network such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this result shows that they cannot be seen as a panacea, and a careful evaluation is required for each data set and task. In particular, the result may not represent the full biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may simply not be suitable for inferring gene expression profiles, because they mix expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.
“However, although recently sophisticated applications of deep learning showed improved accuracy, it does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, a particular research question, the technology, and network data have to be chosen carefully.”
Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
Hutter, C.; Zenklusen, J.C. The cancer genome atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.
Use of Systems Biology in Anti-Microbial Drug Development
Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965
In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.
Genome Sequences and Proteomic Structural Databases
In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has revealed the essential genes of the bacterium and led to a better understanding of the genetic diversity in different strains that might confer a selective advantage (Coll et al., 2018). This will help with our understanding of the modes of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have experimentally determined structures.
Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.
There is a dearth of information on the structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of the physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the Protein Data Bank (PDB). However, the high sequence similarity in protein-coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single-template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in ModBase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and for building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability, and impacts of mutations.
We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need to understand the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae: we first modeled the structure of this hetero-hexameric complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a known drug for treating tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “computational saturation mutagenesis” to identify sites on the protein that are less impacted by mutations (a schematic sketch follows below). In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.
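A schematic Python sketch of the computational saturation mutagenesis loop is given below. `predict_ddg` is a hypothetical placeholder, since predictors such as mCSM are accessed as web services rather than Python functions; its toy formula returns arbitrary numbers purely so the sketch runs.

```python
# Schematic sketch: mutate every residue in silico to each of the 19
# alternative amino acids and record the most destabilizing predicted change,
# as used for the color map in Figure 2A below.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_ddg(sequence: str, position: int, new_aa: str) -> float:
    # Hypothetical placeholder: a real pipeline would query a stability
    # predictor such as mCSM. This arbitrary formula only makes the sketch run.
    return -((ord(sequence[position]) * ord(new_aa)) % 100) / 50.0

def saturation_mutagenesis(sequence: str) -> dict:
    # For each position, keep the most negative (most destabilizing) score
    # among the 19 possible substitutions.
    worst = {}
    for pos, wild_type in enumerate(sequence):
        scores = [predict_ddg(sequence, pos, aa)
                  for aa in AMINO_ACIDS if aa != wild_type]
        worst[pos] = min(scores)
    return worst

print(saturation_mutagenesis("MKTAYIAKQR"))  # toy 10-residue sequence
```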
Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the β-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which grades from red (highly destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the β-subunit of RNA polymerase, the S437H substitution, which resulted in the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and the affinity of rifampin for the β-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen-bond interactions; ring-ring and intergroup interactions are depicted in cyan; aromatic interactions are represented in sky blue; and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).
Examples of Understanding and Combatting Resistance
The availability of whole-genome sequences in the present era has greatly enhanced our understanding of the emergence of drug resistance in infectious diseases like tuberculosis. The data generated by whole-genome sequencing of clinical isolates can be screened for the presence of drug-resistance mutations, and a preliminary in silico analysis of these mutations can then be used to prioritize experimental work to identify their nature.
Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID: 1SJ2; Bertrand et al., 2004).
Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include: