
Archive for the ‘Artificial Intelligence in Health Care – Tools & Innovations’ Category

The Use of ChatGPT in the World of BioInformatics and Cancer Research and the Development of BioGPT by Microsoft

Curator: Stephen J. Williams, Ph.D.

Chatbots are being used in multiple interdisciplinary areas of research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.

“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast-expanding body of chatbot study is clearly interdisciplinary.”

The field of bioinformatics is a natural fit for incorporating this technology.  The curated information below shows some examples of how this technology is being used in bioinformatics related to cancer research.

The text in this reference source was actually generated by ChatGPT (https://chat.openai.com/chat).

Source: https://omicstutorials.com/using-chatgpt-in-bioinformatics-and-biomedical-research/

How can ChatGPT be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.

For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”

It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.

Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”

 

At its simplest level, ChatGPT could thereby be used to ask general questions like “What is the function of gene product X?” and receive a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank.  One could even ask “What are potential interactors of Gene X, validated by yeast two-hybrid?” without going to curated interactome databases or using expensive software like Genie.

Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”

It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.

To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.

Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:

First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.

Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.

Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.

It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.
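To make the steps above concrete, here is a minimal sketch using Hugging Face’s Transformers library, as suggested above. Because ChatGPT itself is not available for local fine-tuning, GPT-2 serves as an open stand-in, and the two training sentences are invented placeholders for a real curated corpus; treat this as an illustration of the workflow rather than a recipe for production-quality output.

```python
# Sketch of the fine-tuning workflow: GPT-2 stands in for ChatGPT, and the
# training texts are placeholders for a real corpus of variant descriptions.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

texts = [
    "BRCA1 c.68_69delAG is a pathogenic frameshift variant associated with "
    "hereditary breast and ovarian cancer.",
    "TP53 R175H is a hotspot missense variant that disrupts p53 DNA binding "
    "and is observed across many tumor types.",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="variant-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False gives the causal language-modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Generate a description from a prompt with the fine-tuned model.
prompt = ("Generate a description of a genetic variant associated with "
          "increased risk of breast cancer:")
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```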

 

To train a language model like ChatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:

Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.

Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.

Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as ChatGPT on your specific dataset, or you can train a new model from scratch.
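A minimal sketch of the preprocessing step (step 2), assuming NLTK as the NLP library; spaCy or scispaCy would work just as well for biomedical text. The abstract is an invented one-sentence example.

```python
# Tokenize, lemmatize, and build a vocabulary from paper text (step 2 above).
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)    # tokenizer models
nltk.download("wordnet", quiet=True)  # lemmatizer data

abstract = ("Mutations in the EGFR gene are associated with non-small cell "
            "lung cancer and predict response to tyrosine kinase inhibitors.")

lemmatizer = WordNetLemmatizer()
tokens = [lemmatizer.lemmatize(tok.lower())
          for tok in word_tokenize(abstract) if tok.isalpha()]
vocabulary = sorted(set(tokens))

print(tokens)
print(f"{len(vocabulary)} unique terms in the vocabulary")
```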

ChatGPT could also be useful for sequence analysis

A few examples of sequence analysis tasks for which ChatGPT could be useful include:

  1. Predicting protein structure
  2. Identifying functional regions of a protein
  3. Predicting protein-protein interactions
  4. Identifying protein homologs
  5. Generating protein alignments

All of this could be done without access to UNIX servers or proprietary software, and without knowing GCG coding, as in the sketch below.
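For illustration, here is how such a question could be posed programmatically through OpenAI’s Python library (the pre-1.0 interface current when this was written); the API key is a placeholder, and any answer should still be verified against curated sources.

```python
# Posing a sequence/interaction question to ChatGPT via the OpenAI API
# (openai-python pre-1.0 interface). The key below is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "What are potential interactors of TP53, "
                   "validated by yeast two-hybrid?",
    }],
)

# Always verify answers like this against curated databases.
print(response.choices[0].message.content)
```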

ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:

Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.

Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.

Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.

Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.

It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.
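As a deliberately crude illustration of the data-extraction idea above, the sketch below pulls candidate gene symbols out of abstracts with a regular expression and emits structured rows. A real pipeline would use a trained biomedical named-entity recognizer; the abstracts and PMIDs here are invented.

```python
# Rule-based stand-in for NLP-driven data extraction: find HGNC-style gene
# symbols in abstracts and build (PMID, gene) rows for a database.
import re

GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")  # crude gene-symbol shape
NOT_GENES = {"DNA", "RNA", "AND", "THE", "NSCLC"}     # filter obvious non-genes

abstracts = {
    "PMID:0000001": "Mutations in BRCA2 increase susceptibility to "
                    "pancreatic cancer.",
    "PMID:0000002": "EGFR amplification predicts response to gefitinib "
                    "in NSCLC.",
}

rows = [(pmid, gene)
        for pmid, text in abstracts.items()
        for gene in sorted(set(GENE_PATTERN.findall(text)) - NOT_GENES)]

for pmid, gene in rows:
    print(pmid, gene)  # e.g. PMID:0000001 BRCA2
```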

 

ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:

Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.

Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.

Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.

Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.

Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.
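A short sketch of the paper-summarization idea above, using the generic transformers summarization pipeline. The default checkpoint is a general-domain model, so a real biomedical system would substitute a domain-specific one; the input here is a condensed version of the BioGPT abstract quoted later in this post.

```python
# Summarizing a paper abstract with a generic summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a general-domain model

abstract = (
    "Pre-trained language models have attracted increasing attention in the "
    "biomedical domain, inspired by their great success in the general "
    "natural language domain. In this paper, we propose BioGPT, a "
    "domain-specific generative Transformer language model pre-trained on "
    "large-scale biomedical literature. We evaluate BioGPT on six biomedical "
    "natural language processing tasks and demonstrate that our model "
    "outperforms previous models on most tasks."
)

result = summarizer(abstract, max_length=50, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```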

 

PLEASE WATCH VIDEO

 

In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.

He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:

  1. Using ChatGPT as a search engine: He finds more useful and relevant search results than with a standard Google or Yahoo search.  This saves time, as one does not have to pore over multiple pages to find information.  A caveat, however, is that ChatGPT does NOT return sources, as highlighted in previous postings on this page.  This feature of ChatGPT is probably why Microsoft invested heavily in OpenAI, in order to incorporate ChatGPT into its Bing search engine as well as its Office Suite programs

 

  2. Using ChatGPT to help with coding projects: Bioinformaticians can spend many hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes). One can use ChatGPT to find such code and then to assist in debugging it (see the sketch after this list)

 

  3. Using ChatGPT to document code and add comments: When writing code, it is useful to add comments periodically to help other users understand how the code works and how the program flows, including which variables are returned
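As an example of item 2, here is the kind of small utility one might ask ChatGPT to draft and debug, written with the sort of comments item 3 recommends. It is a minimal, dependency-free sketch.

```python
def gc_content(sequence: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence.

    Raises ValueError for an empty sequence or non-ACGT characters.
    """
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    if set(seq) - set("ACGT"):
        raise ValueError("sequence contains non-ACGT characters")
    # Count G and C bases and normalize by sequence length.
    return (seq.count("G") + seq.count("C")) / len(seq)


print(f"GC content: {gc_content('ATGCGCGTATTCCGGA'):.2%}")  # 56.25%
```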

 

One of the comments was interesting, directing viewers to use BioGPT instead of ChatGPT:

 

@tzvi7989

1 month ago (edited)

0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is right now without double-checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recall. It is quite good for debugging though and automating boring awkward scripts

So what is BIOGPT?

BioGPT https://github.com/microsoft/BioGPT

 

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Tips:

  • BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
  • BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
  • The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by kamalkraj. The original code can be found here.
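As a quick illustration, here is a minimal sketch of loading the Hugging Face port and generating text. It assumes the "microsoft/biogpt" checkpoint and a transformers version that includes the BioGPT classes (v4.25 or later); the tokenizer additionally requires the sacremoses package.

```python
# Generate biomedical text with the Hugging Face port of BioGPT.
import torch
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("Bicalutamide is", return_tensors="pt")
with torch.no_grad():
    # Beam search gives deterministic, fluent completions of the kind
    # shown in the paper's case study (cf. Table 7 below).
    output = model.generate(**inputs, max_new_tokens=60,
                            num_beams=5, early_stopping=True)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```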

 

This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a GitHub project developed by Microsoft Research and released under the MIT license. It is implemented in Python.

License

BioGPT is MIT-licensed. The license applies to the pre-trained models as well.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

As of right now this does not seem fully open access, as a sign-up is required!

We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.

  • BioGPT: pre-trained BioGPT model checkpoint (URL and 🤗 Hub)
  • BioGPT-Large: pre-trained BioGPT-Large model checkpoint (URL and 🤗 Hub)
  • BioGPT-QA-PubMedQA-BioGPT: BioGPT fine-tuned for question answering on PubMedQA (URL)
  • BioGPT-QA-PubMedQA-BioGPT-Large: BioGPT-Large fine-tuned for question answering on PubMedQA (URL)
  • BioGPT-RE-BC5CDR: BioGPT fine-tuned for relation extraction on BC5CDR (URL)
  • BioGPT-RE-DDI: BioGPT fine-tuned for relation extraction on DDI (URL)
  • BioGPT-RE-DTI: BioGPT fine-tuned for relation extraction on KD-DTI (URL)
  • BioGPT-DC-HoC: BioGPT fine-tuned for document classification on HoC (URL)

 

As of this writing (6/12/2023), there are three papers referencing BioGPT on PubMed (abstracts and links below).

 

BioGPT: generative pre-trained transformer for biomedical text generation and mining.

Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661.


Here the authors show the advantage of BioGPT over the general-domain GPT-2 in examples of text generation:

Table 7. Example of generated text

Input: Bicalutamide

GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.

BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)

GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.

BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine

GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.

BioGPT: Apricitabine is an oral prodrug of 5-aza-2’-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine

GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.

BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen

GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.

BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451

GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31-32-33-34-35-36-37-38-39-40-41-42-43-44-45-46-47-48-49-50…

BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021

GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.

BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).

Note how BioGPT is more descriptive and accurate!

EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information.

Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012.

The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.

Results: We evaluated the classification part on the ‘DDIs 2013’ dataset and the ‘DTIs’ dataset, achieving F1 scores of 0.842 and 0.720, respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified them against the existing ground truth to confirm the filtered sentences. The generated sentences that are not recorded in DrugBank and the DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.

Availability: Source code is publicly available at https://github.com/Layne-Huang/EGFI.

 

GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.

Jin Q, Yang Y, Chen Q, Lu Z. ArXiv. 2023 May 16: arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.

While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.

PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT

This one entitled

Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP

 

gives a good general description of this MIT-licensed Microsoft project and its usefulness in scanning 15 million articles on PubMed while returning ChatGPT-like answers.

 

Please note one of the comments, which is VERY IMPORTANT:


@rufus9322

2 months ago

bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.

 

Much like Microsoft Azure, it seems BioGPT is meant for developers with advanced programming skills.  It seems odd, then, to pay programmers large salaries when one or two key opinion leaders from the medical field might suffice, but Microsoft will surely figure this out.

 

ALSO VIEW VIDEO

 

 

This is a talk from Microsoft on BioGPT

 

Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include

Medicine with GPT-4 & ChatGPT
Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

Proposal for New e-Book Architecture: Bi-Lingual eTOCs, English & Spanish with NLP and Deep Learning results of Medical Text Analysis – Phase 1: six volumes

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

 

20 articles in Natural Language Processing

142 articles in BioIT: BioInformatics

111 articles in BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics

 


Artificial Intelligence (AI) Used to Successfully Determine Most Likely Repurposed Antibiotic Against Deadly Superbug Acinetobacter baumannii

Reporter: Stephen J. Williams, Ph.D.

The World Health Organization has identified three superbugs, or infective microorganisms displaying resistance to common antibiotics and multidrug resistance, as threats to humanity:

Three bacteria were listed as critical:

  • Acinetobacter baumannii bacteria that are resistant to important antibiotics called carbapenems. Acinetobacter baumannii are highly-drug resistant bacteria that can cause a range of infections for hospitalized patients, including pneumonia, wound, or blood infections.
  • Pseudomonas aeruginosa, which are resistant to carbapenems. Pseudomonas aeruginosa can cause skin rashes and ear infections in healthy people but also severe blood infections and pneumonia when contracted by sick people in the hospital.
  • Enterobacteriaceae — a family of bacteria that live in the human gut — that are resistant to both carbapenems and another class of antibiotics, cephalosporins.

 

Development of antibiotics against these pathogens has been designated a critical need.  Now researchers at McMaster University, together with colleagues in the US, have used artificial intelligence (AI) to screen libraries of over 7,000 chemicals to find a drug that could be repurposed to kill off the pathogen.

Liu et al. (1) recently published the results of an AI screen to narrow down potential chemicals that could work against Acinetobacter baumannii in Nature Chemical Biology.

Abstract

Acinetobacter baumannii is a nosocomial Gram-negative pathogen that often displays multidrug resistance. Discovering new antibiotics against A. baumannii has proven challenging through conventional screening approaches. Fortunately, machine learning methods allow for the rapid exploration of chemical space, increasing the probability of discovering new antibacterial molecules. Here we screened ~7,500 molecules for those that inhibited the growth of A. baumannii in vitro. We trained a neural network with this growth inhibition dataset and performed in silico predictions for structurally new molecules with activity against A. baumannii. Through this approach, we discovered abaucin, an antibacterial compound with narrow-spectrum activity against A. baumannii. Further investigations revealed that abaucin perturbs lipoprotein trafficking through a mechanism involving LolE. Moreover, abaucin could control an A. baumannii infection in a mouse wound model. This work highlights the utility of machine learning in antibiotic discovery and describes a promising lead with targeted activity against a challenging Gram-negative pathogen.
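To make the workflow concrete, here is a conceptual sketch of the screen-then-predict idea: featurize molecules as Morgan fingerprints with RDKit, train a small neural-network classifier on growth-inhibition labels, and rank an unseen library by predicted activity. The molecules and labels below are toy placeholders and the classifier is a generic stand-in; the paper itself trained a directed message-passing neural network on the ~7,500 screened molecules.

```python
# Toy version of AI-guided antibiotic screening: fingerprint molecules,
# learn from screened labels, then prioritize unscreened compounds.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPClassifier

def featurize(smiles: str, n_bits: int = 1024) -> np.ndarray:
    """SMILES string -> Morgan fingerprint as a numpy bit array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy screening results: (SMILES, inhibited A. baumannii growth? 1/0).
# These labels are invented for illustration only.
screened = [
    ("CCO", 0),                               # ethanol
    ("CC(=O)Oc1ccccc1C(=O)O", 0),             # aspirin
    ("CN1C=NC2=C1C(=O)N(C)C(=O)N2C", 1),      # caffeine (toy label)
    ("c1ccccc1", 0),                          # benzene
    ("CC(C)Cc1ccc(cc1)C(C)C(=O)O", 1),        # ibuprofen (toy label)
]
X = np.array([featurize(s) for s, _ in screened])
y = np.array([label for _, label in screened])

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0).fit(X, y)

# Rank an "unscreened" library by predicted probability of activity.
library = ["CCN(CC)CC", "CC(=O)Nc1ccc(O)cc1"]  # triethylamine, acetaminophen
scores = model.predict_proba(np.array([featurize(s) for s in library]))[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smiles:25s} predicted activity: {score:.3f}")
```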

Schematic workflow for the incorporation of AI into antibiotic drug discovery for A. baumannii, from reference 1: Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

Figure source: https://www.nature.com/articles/s41589-023-01349-8

Article Source: https://www.nature.com/articles/s41589-023-01349-8

  1. Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

 

 

For the WHO list of the world’s most dangerous superbugs, see https://www.scientificamerican.com/article/who-releases-list-of-worlds-most-dangerous-superbugs/

The finding was first reported by the BBC.

Source: https://www.bbc.com/news/health-65709834

By James Gallagher

Health and science correspondent

Scientists have used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly species of superbug.

The AI helped narrow down thousands of potential chemicals to a handful that could be tested in the laboratory.

The result was a potent, experimental antibiotic called abaucin, which will need further tests before being used.

The researchers in Canada and the US say AI has the power to massively accelerate the discovery of new drugs.

It is the latest example of how the tools of artificial intelligence can be a revolutionary force in science and medicine.

Stopping the superbugs

Antibiotics kill bacteria. However, there has been a lack of new drugs for decades and bacteria are becoming harder to treat, as they evolve resistance to the ones we have.

More than a million people a year are estimated to die from infections that resist treatment with antibiotics. The researchers focused on one of the most problematic species of bacteria – Acinetobacter baumannii, which can infect wounds and cause pneumonia.

You may not have heard of it, but it is one of the three superbugs the World Health Organization has identified as a “critical” threat.

It is often able to shrug off multiple antibiotics and is a problem in hospitals and care homes, where it can survive on surfaces and medical equipment.

Dr Jonathan Stokes, from McMaster University, describes the bug as “public enemy number one” as it’s “really common” to find cases where it is “resistant to nearly every antibiotic”.

 

Artificial intelligence

To find a new antibiotic, the researchers first had to train the AI. They took thousands of drugs where the precise chemical structure was known, and manually tested them on Acinetobacter baumannii to see which could slow it down or kill it.

This information was fed into the AI so it could learn the chemical features of drugs that could attack the problematic bacterium.

The AI was then unleashed on a list of 6,680 compounds whose effectiveness was unknown. The results – published in Nature Chemical Biology – showed it took the AI an hour and a half to produce a shortlist.

The researchers tested 240 in the laboratory, and found nine potential antibiotics. One of them was the incredibly potent antibiotic abaucin.

Laboratory experiments showed it could treat infected wounds in mice and was able to kill A. baumannii samples from patients.

However, Dr Stokes told me: “This is when the work starts.”

The next step is to perfect the drug in the laboratory and then perform clinical trials. He expects the first AI-discovered antibiotics may not be available for prescription until 2030.

Curiously, this experimental antibiotic had no effect on other species of bacteria, and works only on A. baumannii.

Many antibiotics kill bacteria indiscriminately. The researchers believe the precision of abaucin will make it harder for drug-resistance to emerge, and could lead to fewer side-effects.

 

In principle, the AI could screen tens of millions of potential compounds – something that would be impractical to do manually.

“AI enhances the rate, and in a perfect world decreases the cost, with which we can discover these new classes of antibiotic that we desperately need,” Dr Stokes told me.

The researchers tested the principles of AI-aided antibiotic discovery in E. coli in 2020, but have now used that knowledge to focus on the big nasties. They plan to look at Staphylococcus aureus and Pseudomonas aeruginosa next.

“This finding further supports the premise that AI can significantly accelerate and expand our search for novel antibiotics,” said Prof James Collins, from the Massachusetts Institute of Technology.

He added: “I’m excited that this work shows that we can use AI to help combat problematic pathogens such as A. baumannii.”

Prof Dame Sally Davies, the former chief medical officer for England and government envoy on anti-microbial resistance, told Radio 4’s The World Tonight: “We’re onto a winner.”

She said the idea of using AI was “a big game-changer, I’m thrilled to see the work he (Dr Stokes) is doing, it will save lives”.

Other related articles and books published in this Online Scientific Journal include the following:

Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases, Reproductive Genomic Endocrinology

(3 book series: Volume 1, 2&3, 4)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks


  • The Immune System, Stress Signaling, Infectious Diseases and Therapeutic Implications:

 

  • Series D, VOLUME 2

Infectious Diseases and Therapeutics

and

  • Series D, VOLUME 3

The Immune System and Therapeutics

(Series D: BioMedicine & Immunology) Kindle Edition.

On Amazon.com since September 4, 2017

(English Edition) Kindle Edition – as one Book

https://www.amazon.com/dp/B075CXHY1B $115

 

Bacterial multidrug resistance problem solved by a broad-spectrum synthetic antibiotic

The Journey of Antibiotic Discovery

FDA cleared Clever Culture Systems’ artificial intelligence tech for automated imaging, analysis and interpretation of microbiology culture plates speeding up Diagnostics

Artificial Intelligence: Genomics & Cancer


Reporter and Curator: Dr. Sudipta Saha, Ph.D.

The female reproductive lifespan is regulated by the menstrual cycle. Defined as the interval between menarche and menopause, it is approximately 35 years in length on average. Based on current average human life expectancy figures, and excluding fertility issues, this means that the female body can bear children for almost half of its lifetime. Thus, within this time span many individuals may consider contraception at some point in their reproductive life. A wide variety of contraceptive methods are now available, which are broadly classified into hormonal and non-hormonal approaches. A normal menstrual cycle is controlled by a delicate interplay of hormones, including estrogen, progesterone, follicle-stimulating hormone (FSH) and luteinizing hormone (LH), among others. These molecules are produced by the various glands in the body that make up the endocrine system.

Hormonal contraceptives – including the contraceptive pill, some intrauterine devices (IUDs) and hormonal implants – utilize exogenous (or synthetic) hormones to block or suppress ovulation, the phase of the menstrual cycle where an egg is released from the ovary. Beyond their use as methods to prevent pregnancy, hormonal contraceptives are also being increasingly used to suppress ovulation as a method for treating premenstrual syndromes. Hormonal contraceptives composed of exogenous estrogen and/or progesterone are commonly administered artificial means of birth control. Despite many benefits, adverse side effects associated with high doses, such as thrombosis and myocardial infarction, cause hesitation over their use.

Scientists at the University of the Philippines and Roskilde University are exploring methods to optimize the dosage of exogenous hormones in such contraceptives. Their overall aim is to create patient-specific minimized dosing schemes, to prevent the adverse side effects that can be associated with hormonal contraceptive use and to empower individuals in their contraceptive journey. Their data showed that the doses of exogenous hormones in certain contraceptive methods could be reduced while still ensuring that ovulation is suppressed. Reducing the total exogenous hormone dose by 92% in estrogen-only contraceptives, or by 43% in progesterone-only contraceptives, still prevented ovulation according to the model. In contraceptives combining estrogen and progesterone, the doses could be reduced further.

References:

https://www.technologynetworks.com/drug-discovery/news/hormone-doses-in-contraceptives-could-be-reduced-by-as-much-as-92-372088?utm_campaign=NEWSLETTER_TN_Breaking%20Science%20News

https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010073

https://www.medicalnewstoday.com/articles/birth-control-with-up-to-92-lower-hormone-doses-could-still-be-effective

https://www.ncbi.nlm.nih.gov/books/NBK441576/

https://www.sciencedirect.com/science/article/pii/S0277953621005797


Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriages and in vitro fertilization failures. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates within individuals of the same age can vary dramatically.

Researchers described a technique combining genomic sequencing with machine-learning methods to predict the possibility that a woman will undergo a miscarriage because of egg aneuploidy, a term describing a human egg with an abnormal number of chromosomes. The scientists examined genetic samples from patients using a technique called whole exome sequencing, which allowed them to home in on the protein-coding sections of the vast human genome. They then created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.

As a result, the scientists were able to create a specific risk score based on a woman’s genome. They also identified three genes (MCM5, FGGY and DDX60L) that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. The report thus demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
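As a deliberately simplified, synthetic-data sketch of the modeling idea (not the authors’ actual pipeline), one can represent each patient by exome-derived variant features and fit a classifier whose predicted probability serves as the risk score:

```python
# Synthetic illustration: a per-patient aneuploidy-risk score learned from
# exome-derived variant features. All data here are randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_variants = 200, 50
X = rng.integers(0, 3, size=(n_patients, n_variants))  # 0/1/2 alt-allele counts
y = rng.integers(0, 2, size=n_patients)                # 1 = high egg-aneuploidy rate

model = LogisticRegression(max_iter=1000).fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]             # per-patient risk score

print("cross-validated accuracy:",
      cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
print("example patient risk score:", round(float(risk_scores[0]), 3))
```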

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/


AI enabled Drug Discovery and Development: The Challenges and the Promise

Reporter: Aviva Lev-Ari, PhD, RN

 

Early Development

Caroline Kovac (the first IBM GM of Life Sciences) started in silico development of drugs in 2000, using a big database of substances and computing power. She transformed an idea into a $2B business, with most of the money coming from big pharma. She would ask what new drugs they were planning to develop, and provided the four most probable combinations of substances based on the in silico work.

Carol Kovac

General Manager, Healthcare and Life Sciences, IBM

From a speaker profile at a 2005 conference:

Carol Kovac is General Manager of IBM Healthcare and Life Sciences responsible for the strategic direction of IBM’s global healthcare and life sciences business. Kovac leads her team in developing the latest information technology solutions and services, establishing partnerships and overseeing IBM investment within the healthcare, pharmaceutical and life sciences markets. Starting with only two employees as an emerging business unit in the year 2000, Kovac has successfully grown the life sciences business unit into a multi-billion dollar business and one of IBM’s most successful ventures to date with more than 1500 employees worldwide. Kovac’s prior positions include general manager of IBM Life Sciences, vice president of Technical Strategy and Division Operations, and vice president of Services and Solutions. In the latter role, she was instrumental in launching the Computational Biology Center at IBM Research. Kovac sits on the Board of Directors of Research!America and Africa Harvest. She was inducted into the Women in Technology International Hall of Fame in 2002, and in 2004, Fortune magazine named her one of the 50 most powerful women in business. Kovac earned her Ph.D. in chemistry at the University of Southern California.

SOURCE

https://www.milkeninstitute.org/events/conferences/global-conference/2005/speaker-detail/1536

 

In 2022

The use of artificial intelligence in drug discovery, when coupled with new genetic insights and the increase of patient medical data of the last decade, has the potential to bring novel medicines to patients more efficiently and more predictably.

WATCH VIDEO

https://www.youtube.com/watch?v=b7N3ijnv6lk

SOURCE

https://engineering.stanford.edu/magazine/promise-and-challenges-relying-ai-drug-development?utm_source=Stanford+ALL

Conversation among three experts:

Jack Fuchs, MBA ’91, an adjunct lecturer who teaches “Principled Entrepreneurial Decisions” at Stanford School of Engineering, moderated and explored how clearly articulated principles can guide the direction of technological advancements like AI-enabled drug discovery.

Kim Branson, Global head of AI and machine learning at GSK.

Russ Altman, the Kenneth Fong Professor of Bioengineering, of genetics, of medicine (general medical discipline), of biomedical data science and, by courtesy, of computer science.

 

Synthetic Biology Software applied to the development of Galectin Inhibitors at LPBI Group

 

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

Using Structural Computation Models to Predict Productive PROTAC Ternary Complexes

Ternary complex formation is necessary but not sufficient for target protein degradation. In this research, Bai et al. have addressed questions to better understand the rate-limiting steps between ternary complex formation and target protein degradation. They have developed a structure-based computational modeling approach to predict the efficiency and sites of target protein ubiquitination by CRBN-binding PROTACs. Such models will allow a more complete understanding of PROTAC-directed degradation and allow crafting of increasingly effective and specific PROTACs for therapeutic applications.

Another major feature of this research is that it is the result of a collaboration between research groups at Amgen, Inc. and Promega Corporation. In the past, commercial research laboratories have shied away from collaboration, but the last several years have found researchers more open to collaborative work. This increased collaboration allows scientists to bring their different expertise to a problem or question and to speed up discovery. According to Dr. Kristin Riching, Senior Research Scientist at Promega Corporation, “Targeted protein degraders have broken many of the rules that have guided traditional drug development, but it is exciting to see how the collective learnings we gain from their study can aid the advancement of this new class of molecules to the clinic as effective therapeutics.”

Literature Reviewed

Bai, N., Riching, K.M. et al. (2022) Modeling the CRL4A ligase complex to predict target protein ubiquitination induced by cereblon-recruiting PROTACs. J. Biol. Chem.

The researchers used NanoBRET assays as part of their model validation. Learn more about NanoBRET technology at the Promega.com website.

SOURCE

https://www.promegaconnections.com/protac-ternary-complex/?utm_campaign=ms-2022-pharma_tpd&utm_source=linkedin&utm_medium=Khoros&utm_term=sf254230485&utm_content=030822ct-blogsf254230485&sf254230485=1


 

Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

Reporters: Stephen J. Williams, PhD, Aviva Lev-Ari, PhD, RN, and Shraga Rottem, MD, DSc

The motivation for this post is threefold:

First, we present an application of AI, NLP, and DL to our own medical text in the genomics space. Here we present the first section of Part 1 of the following book. Part 1 has six subsections that yielded 12 plots; the entire book is represented by 38 x 2 = 76 plots.

Second, we bring to the e-Reader’s attention the list of 276 Medical Startups – Artificial Intelligence (AI) Startups in Healthcare, a hot universe of R&D activity in human health.

Third, we highlight one academic center with an AI focus.

Dear friends of the ETH AI Center,

We would like to provide you with some exciting updates from the ETH AI Center and its growing community. The ETH AI Center now comprises 110 research groups in the faculty, 20 corporate partners and has led to nine AI startups.

As the Covid-19 restrictions in Switzerland have recently been lifted, we would like to hear from you what kind of events you would like to see in 2022! Participate in the survey to suggest event formats and topics that you would enjoy being a part of. We are already excited to learn what we can achieve together this year.

We already have many interesting events coming up, we look forward to seeing you at our main and community events!

SOURCE

https://news.ethz.ch/html_mail.jsp?params=%2FUnFXUQJ%2FmiOP6akBq8eHxaXG%2BRdNmeoVa9gX5ArpTr6mX74xp5d78HhuIHTd9V6AHtAfRahyx%2BfRGrzVL1G8Jy5e3zykvr1WDtMoUC%2B7vILoHCGQ5p1rxaPzOsF94ID

 

 

LPBI Group is applying AI for Medical Text Analysis with Machine Learning and Natural Language Processing: Statistical and Deep Learning

Our Book 

Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology

Medical text analysis of this book shows the following results, obtained by Madison Davis by applying Wolfram NLP for biological languages to our own text. See an example below:

Part 1: Next Generation Sequencing (NGS)

 

1.1 The NGS Science

1.1.1 BioIT Aspect

 

Hypergraph Plot #1 and Tree Diagram Plot #1

for 1.1.1 based on 16 articles & on 12 keywords

protein, cancer, dna, genes, rna, survival, immune, tumor, patients, human, genome, expression



@MIT Artificial intelligence system rapidly predicts how two proteins will attach: The model, called EquiDock, focuses on rigid body docking, which occurs when two proteins attach by rotating or translating in 3D space without their shapes squeezing or bending

Reporter: Aviva Lev-Ari, PhD, RN

This paper introduces a novel SE(3) equivariant graph matching network, along with a keypoint discovery and alignment approach, for the problem of protein-protein docking, with a novel loss based on optimal transport. The overall consensus is that this is an impactful solution to an important problem, whereby competitive results are achieved without the need for templates, refinement, and are achieved with substantially faster run times.
28 Sept 2021 (modified: 18 Nov 2021). ICLR 2022 Spotlight.
 
Keywords: protein complexes, protein structure, rigid body docking, SE(3) equivariance, graph neural networks
Abstract: Protein complex formation is a central problem in biology, being involved in most of the cell’s processes, and essential for applications such as drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no three-dimensional flexibility during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right location and the right orientation relative to the second protein. We mathematically guarantee that the predicted complex is always identical regardless of the initial placements of the two structures, avoiding expensive data augmentation. Our model approximates the binding pocket and predicts the docking pose using keypoint matching and alignment through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements over existing protein docking software and predict qualitatively plausible protein complex structures despite not using heavy sampling, structure refinement, or templates.
One-sentence Summary: We perform rigid protein docking using a novel independent SE(3)-equivariant message passing mechanism that guarantees the same resulting protein complex independent of the initial placement of the two 3D structures.
 
SOURCE

https://openreview.net/forum?id=GQjaI9mLet
MIT researchers created a machine-learning model that can directly predict the complex that will form when two proteins bind together. Their technique is between 80 and 500 times faster than state-of-the-art software methods, and often predicts protein structures that are closer to actual structures that have been observed experimentally.

This technique could help scientists better understand some biological processes that involve protein interactions, like DNA replication and repair; it could also speed up the process of developing new medicines.

“Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them. This deep-learning model can learn these types of interactions from data,” says Octavian-Eugen Ganea, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

Ganea’s co-lead author is Xinyuan Huang, a graduate student at ETH Zurich. MIT co-authors include Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in CSAIL, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering in CSAIL and a member of the Institute for Data, Systems, and Society. The research will be presented at the International Conference on Learning Representations.

Significance of the Scientific Development by the @MIT Team

EquiDock wide applicability:

  • Our method can be integrated end-to-end to boost the quality of other models (see above discussion on runtime importance). Examples are predicting functions of protein complexes [3] or their binding affinity [5], de novo generation of proteins binding to specific targets (e.g., antibodies [6]), modeling back-bone and side-chain flexibility [4], or devising methods for non-binary multimers. See the updated discussion in the “Conclusion” section of our paper.

 

Advantages over previous methods:

  • Our method does not rely on templates or heavy candidate sampling [7], aiming at the ambitious goal of predicting the complex pose directly. This should be interpreted in terms of generalization (to unseen structures) and scalability capabilities of docking models, as well as their applicability to various other tasks (discussed above).

 

  • Our method obtains a competitive quality without explicitly using previous geometric (e.g., 3D Zernike descriptors [8]) or chemical (e.g., hydrophilic information) features [3]. Future EquiDock extensions would find creative ways to leverage these different signals and, thus, obtain more improvements.

   

Novelty of theory:

  • Our work is the first to formalize the notion of pairwise-independent SE(3)-equivariance. Previous work (e.g., [9,10]) has incorporated only single-object Euclidean equivariances into deep learning models. For tasks such as docking and binding of biological objects, it is crucial that models understand the concept of multiple independent Euclidean equivariances.

  • All propositions in Section 3 are our novel theoretical contributions.

  • We have rewritten the Contribution and Related Work sections to clarify this aspect.

   


Footnote [a]: We have fixed an important bug in the cross-attention code. A more extensive hyperparameter search showed that layer normalization is crucial in the layers used in Eqs. 5 and 9, but not on the h embeddings as originally shown in Eq. 10. We also saw benefits from training our models with longer patience in the early-stopping criterion (30 epochs for DIPS and 150 epochs for DB5). Increasing the learning rate to 2e-4 is important to speed up training, and using an intersection loss weight of 10 improves results compared to the default of 1.
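For readers reproducing such experiments, the fixes described in the footnote could be captured in a configuration block like the following. The key names are illustrative, not taken from the EquiDock codebase.

```python
# Hypothetical training configuration reflecting footnote [a];
# key names are illustrative, not EquiDock's actual config schema.
config = {
    "learning_rate": 2e-4,                                 # increased to speed up training
    "early_stopping_patience_epochs": {"DIPS": 30, "DB5": 150},
    "intersection_loss_weight": 10,                        # up from the default of 1
    "layer_norm_in_eqs_5_and_9": True,                     # crucial, per the search
    "layer_norm_on_h_embeddings_eq_10": False,             # harmful, per the bug fix
}
```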

 

Bibliography:

[1] Protein-ligand blind docking using QuickVina-W with inter-process spatio-temporal integration, Hassan et al., 2017

[2] GNINA 1.0: molecular docking with deep learning, McNutt et al., 2021

[3] Protein-protein and domain-domain interactions, Kangueane and Nilofer, 2018

[4] Side-chain Packing Using SE(3)-Transformer, Jindal et al., 2022

[5] Contacts-based prediction of binding affinity in protein–protein complexes, Vangone et al., 2015

[6] Iterative refinement graph neural network for antibody sequence-structure co-design, Jin et al., 2021

[7] Hierarchical, rotation-equivariant neural networks to select structural models of protein complexes, Eismann et al., 2020

[8] Protein-protein docking using region-based 3D Zernike descriptors, Venkatraman et al., 2009

[9] SE(3)-transformers: 3D roto-translation equivariant attention networks, Fuchs et al., 2020

[10] E(n) equivariant graph neural networks, Satorras et al., 2021

[11] Fast end-to-end learning on protein surfaces, Sverrisson et al., 2020

SOURCE

https://openreview.net/forum?id=GQjaI9mLet

Read Full Post »

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

UPDATED on 11/5/2021

Introducing Isomorphic Labs

I believe we are on the cusp of an incredible new era of biological and medical research. Last year DeepMind’s breakthrough AI system AlphaFold2 was recognised as a solution to the 50-year-old grand challenge of protein folding, capable of predicting the 3D structure of a protein directly from its amino acid sequence to atomic-level accuracy. This has been a watershed moment for computational and AI methods for biology.
Building on this advance, today, I’m thrilled to announce the creation of a new Alphabet company –  Isomorphic Labs – a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach and, ultimately, to model and understand some of the fundamental mechanisms of life.

For over a decade DeepMind has been in the vanguard of advancing the state-of-the-art in AI, often using games as a proving ground for developing general purpose learning systems, like AlphaGo, our program that beat the world champion at the complex game of Go. We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself. One of the most important applications of AI that I can think of is in the field of biological and medical research, and it is an area I have been passionate about addressing for many years. Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.

An AI-first approach to drug discovery and biology
The pandemic has brought to the fore the vital work that brilliant scientists and clinicians do every day to understand and combat disease. We believe that the foundational use of cutting edge computational and AI methods can help scientists take their work to the next level, and massively accelerate the drug discovery process. AI methods will increasingly be used not just for analysing data, but to also build powerful predictive and generative models of complex biological phenomena. AlphaFold2 is an important first proof point of this, but there is so much more to come. 
At its most fundamental level, I think biology can be thought of as an information processing system, albeit an extraordinarily complex and dynamic one. Taking this perspective implies there may be a common underlying structure between biology and information science – an isomorphic mapping between the two – hence the name of the company. Biology is likely far too complex and messy to ever be encapsulated as a simple set of neat mathematical equations. But just as mathematics turned out to be the right description language for physics, biology may turn out to be the perfect type of regime for the application of AI.

What’s next for Isomorphic Labs
This is just the beginning of what we hope will become a radical new approach to drug discovery, and I’m incredibly excited to get this ambitious new commercial venture off the ground and to partner with pharmaceutical and biomedical companies. I will serve as CEO for Isomorphic’s initial phase, while remaining as DeepMind CEO, partially to help facilitate collaboration between the two companies where relevant, and to set out the strategy, vision and culture of the new company. This will of course include the building of a world-class multidisciplinary team, with deep expertise in areas such as AI, biology, medicinal chemistry, biophysics, and engineering, brought together in a highly collaborative and innovative environment. (We are hiring!)
As pioneers in the emerging field of ‘digital biology’, we look forward to helping usher in an amazingly productive new age of biomedical breakthroughs. Isomorphic’s mission could not be a more important one: to use AI to accelerate drug discovery, and ultimately, find cures for some of humanity’s most devastating diseases.

SOURCE

https://www.isomorphiclabs.com/blog

DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

DeepMind plans to release hundreds of millions of protein structures for free

James Vincent July 22, 2021 11:00 am

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.
Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.
“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”



Two examples of protein structures predicted by AlphaFold (in blue) compared with experimental results (in green). 
Image: DeepMind


There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”
After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.
The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

The benefits of protein folding


Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.
New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.
Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.
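In practice, that per-prediction confidence is exposed per residue as pLDDT scores (0–100), which the AlphaFold database stores in the B-factor column of its PDB files. Below is a minimal sketch, assuming Biopython and a locally downloaded model file (the filename is illustrative), of screening a predicted structure by confidence before downstream use.

```python
# A hedged sketch of confidence screening for an AlphaFold model.
# AlphaFold DB PDB files store per-residue pLDDT (0-100) in the B-factor column.
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
# Hypothetical file path, following the AlphaFold DB naming convention.
structure = parser.get_structure("model", "AF-P12345-F1-model_v4.pdb")

# Take one pLDDT value per residue via the alpha-carbon atoms.
plddts = [atom.get_bfactor() for atom in structure.get_atoms()
          if atom.get_name() == "CA"]

mean_plddt = sum(plddts) / len(plddts)
low_conf_frac = sum(1 for p in plddts if p < 70) / len(plddts)

print(f"Mean pLDDT: {mean_plddt:.1f}")
print(f"Residues below pLDDT 70 (low confidence): {low_conf_frac:.0%}")
```

A screen like this is one way researchers decide which regions of a predicted structure are reliable enough to use and which, as the article notes, still need experimental verification.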

Example protein structures predicted by AlphaFold.
Image: DeepMind
Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.
Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.
Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 different varieties in the human body. Because any individual protein can be composed of hundreds of individual amino acids, each of which can fold and twist in different directions, a molecule’s final structure has an incredibly large number of possible configurations. One estimate is that the typical protein can be folded in 10^300 ways — that’s a 1 followed by 300 zeroes.


Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.
In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition. 
Image: DeepMind
Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis.


The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.


Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”
Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


SOURCE

https://www.theverge.com/2021/7/22/22586578/deepmind-alphafold-ai-protein-folding-human-proteome-released-for-free

Potential Use of Protein Folding Predictions for Drug Discovery

PROTAC Technology: Opportunities and Challenges

  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240. Publication Date: March 12, 2020. https://doi.org/10.1021/acsmedchemlett.9b00597. Copyright © 2020 American Chemical Society.

Abstract

PROTACs-induced targeted protein degradation has emerged as a novel therapeutic strategy in drug development and attracted the favor of academic institutions, large pharmaceutical enterprises (e.g., AstraZeneca, Bayer, Novartis, Amgen, Pfizer, GlaxoSmithKline, Merck, and Boehringer Ingelheim, etc.), and biotechnology companies. PROTACs opened a new chapter for novel drug development. However, any new technology will face many new problems and challenges. Perspectives on the potential opportunities and challenges of PROTACs will contribute to the research and development of new protein degradation drugs and degrader tools.

Although PROTAC technology has a bright future in drug development, it also has many challenges as follows:
(1) Until now, only one example of a PROTAC has been reported for an “undruggable” target; (18) more cases are needed to prove the advantages of PROTACs for “undruggable” targets in the future.

(2) “Molecular glue”, existing in nature, represents the mechanism of stabilizing protein–protein interactions through small-molecule modulators of E3 ligases. For instance, auxin, the plant hormone, binds to the ligase SCF-TIR1 to drive recruitment of Aux/IAA proteins and subsequently triggers their degradation. In addition, some small molecules that induce targeted protein degradation through a “molecular glue” mode of action have been reported. (21,22) Furthermore, it has recently been reported that some PROTACs may actually achieve target protein degradation via a mechanism that includes “molecular glue”, or via “molecular glue” alone. (23) How to distinguish between these two mechanisms, and how to combine them to work together, is one of the challenges for future research.

(3) Since PROTACs act in a catalytic mode, traditional methods cannot accurately evaluate their pharmacokinetic (PK) and pharmacodynamic (PD) properties. Thus, more studies are urgently needed to establish PK and PD evaluation systems for PROTACs.

(4) How to quickly and effectively screen for target protein ligands that can be used in PROTACs, especially those targeting protein–protein interactions, is another challenge.

(5) How to understand the degradation activity, selectivity, and possible off-target effects (across different targets, cell lines, and animal models), and how to rationally design PROTACs, are still unclear.

(6) The human genome encodes more than 600 E3 ubiquitin ligases, yet only a very few (VHL, CRBN, cIAPs, and MDM2) have been used in the design of PROTACs. How to expand the scope of usable E3 ubiquitin ligases is another challenge in this area.

PROTAC technology is rapidly developing, and with the joint efforts of the vast number of scientists in both academia and industry, these problems shall be solved in the near future.

PROTACs have opened a new chapter for the development of new drugs and novel chemical knockdown tools and brought unprecedented opportunities to the industry and academia, which are mainly reflected in the following aspects:
(1) Overcoming drug resistance in cancer. In addition to traditional chemotherapy, kinase inhibitors have developed rapidly over the past 20 years. (12) Although kinase inhibitors are very effective in cancer therapy, patients often develop drug resistance and, consequently, disease recurrence. PROTACs have shown greater advantages in drug-resistant cancers by degrading the whole target protein. For example, ARCC-4, targeting the androgen receptor, could overcome enzalutamide-resistant prostate cancer, (13) and L18I, targeting BTK, could overcome the C481S mutation. (14)

(2) Eliminating both the enzymatic and nonenzymatic functions of a kinase. Traditional small-molecule inhibitors usually inhibit only the enzymatic activity of the target, while PROTACs remove both enzymatic and nonenzymatic activity by degrading the entire protein. For example, FAK possesses kinase-dependent enzymatic functions and kinase-independent scaffold functions, so regulating the kinase activity alone does not inhibit all FAK functions. In 2018, a highly effective and selective FAK PROTAC reported by Craig M. Crews’ group showed far superior activity to a clinical candidate drug in cell migration and invasion. (15) PROTACs can therefore expand the druggable space of existing targets and regulate proteins that are difficult to control with traditional small-molecule inhibitors.

(3) Degrading “undruggable” protein targets. At present, only 20–25% of known protein targets (including kinases, G protein-coupled receptors (GPCRs), nuclear hormone receptors, and ion channels) can be addressed using conventional drug discovery technologies. (16,17) Proteins that lack catalytic activity and/or have catalysis-independent functions are still regarded as “undruggable” targets. The involvement of Signal Transducer and Activator of Transcription 3 (STAT3) in multiple signaling pathways makes it an attractive therapeutic target; however, the lack of an obviously druggable site on the surface of STAT3 has limited the development of STAT3 inhibitors, and there are still no effective drugs directly targeting STAT3 approved by the Food and Drug Administration (FDA). In November 2019, Shaomeng Wang’s group first reported a potent PROTAC targeting STAT3 with potent biological activity in vitro and in vivo. (18) This successful case confirms the potential of PROTAC technology, especially for “undruggable” targets such as K-Ras, a tricky tumor target activated by multiple mutations (G12A, G12C, G12D, G12S, G12V, G13C, and G13D) in the clinic. (19)

(4) Fast and reversible chemical knockdown in vivo. Traditional genetic protein knockout technologies such as zinc-finger nucleases (ZFN), transcription activator-like effector nucleases (TALEN), or CRISPR-Cas9 usually involve long timelines, an irreversible mode of action, and high cost, which is inconvenient for research, especially in nonhuman primates. In addition, genetic animal models sometimes produce misleading phenotypes due to gene compensation or gene mutation, and genetic methods cannot be used to study the function of embryonic-lethal genes in vivo. Unlike DNA-based knockout technology, PROTACs knock down target proteins directly, rather than acting at the genome level, and are suitable for studying embryonic-lethal proteins in adult organisms. PROTACs also provide exquisite temporal control, allowing knockdown of a target protein at specific time points and enabling recovery of the target protein after withdrawal of drug treatment. As a rapid and reversible chemical knockdown method, PROTACs are an effective complement to existing genetic tools. (20)

SOURCE

PROTAC Technology: Opportunities and Challenges
  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240

Goal in Drug Design: Eliminating both the enzymatic and nonenzymatic functions of kinase.

Work-in-Progress

Induction and Inhibition of Protein in Galectins Drug Design

Work-in-Progress

Screening Proteins in DeepMind’s AlphaFold DataBase

The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.

Work-in-Progress

Other related research published in this Open Access Online Journal include the following:

Synthetic Biology in Drug Discovery

Peroxisome proliferator-activated receptor (PPAR-gamma) Receptors Activation: PPARγ transrepression  for Angiogenesis in Cardiovascular Disease and PPARγ transactivation for Treatment of Diabetes

Read Full Post »

Patients with type 2 diabetes may soon receive artificial pancreas and a smartphone app assistance

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

In a brief, randomized crossover investigation, adults with type 2 diabetes and end-stage renal disease who needed dialysis benefited from an artificial pancreas. Tests conducted by the University of Cambridge and Inselspital, University Hospital of Bern, Switzerland, reveal that the device can now help patients safely and effectively monitor their blood sugar levels and reduce the risk of low blood sugar levels.

Diabetes is the most prevalent cause of kidney failure, accounting for just under one-third (30%) of all cases. As the number of people living with type 2 diabetes rises, so does the number of people who require dialysis or a kidney transplant. Kidney failure raises the risk of hypoglycemia and hyperglycemia, or unusually low or high blood sugar levels, which can lead to problems ranging from dizziness to falls and even coma.

Diabetes management in adults with renal failure is difficult for both patients and healthcare practitioners. Many components of their therapy, including blood sugar targets and medications, are poorly understood. Because most oral diabetes drugs are not indicated for these patients, insulin injections are the most commonly used diabetes therapy, yet establishing optimal insulin dosing regimens is difficult.

A team from the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust earlier developed an artificial pancreas with the goal of replacing insulin injections for type 1 diabetic patients. The team, collaborating with experts at Bern University Hospital and the University of Bern in Switzerland, demonstrated that the device may be used to help patients with type 2 diabetes and renal failure in a study published on 4 August 2021 in Nature Medicine.

The study’s lead author, Dr Charlotte Boughton of the Wellcome Trust-MRC Institute of Metabolic Science at the University of Cambridge, stated:

Patients living with type 2 diabetes and kidney failure are a particularly vulnerable group, and managing their condition – trying to prevent potentially dangerous highs or lows of blood sugar levels – can be a challenge. There’s a real unmet need for new approaches to help them manage their condition safely and effectively.

The Device

The artificial pancreas is a compact, portable medical device that uses digital technology to automate insulin delivery to perform the role of a healthy pancreas in managing blood glucose levels. The system is worn on the outside of the body and consists of three functional components:

  • a glucose sensor
  • a computer algorithm for calculating the insulin dose
  • an insulin pump

The artificial pancreas directed insulin delivery on a Dana Diabecare RS pump using a Dexcom G6 transmitter linked to the Cambridge adaptive model predictive control algorithm, automatically administering faster-acting insulin aspart (Fiasp). The CamDiab CamAPS HX closed-loop app on an unlocked Android phone was used to manage the closed loop system, with a goal glucose of 126 mg/dL. The program calculated an insulin infusion rate based on the data from the G6 sensor every 8 to 12 minutes, which was then wirelessly routed to the insulin pump, with data automatically uploaded to the Diasend/Glooko data management platform.
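The control loop described above can be sketched in miniature. The following is a toy proportional controller standing in for the Cambridge adaptive model predictive control algorithm, whose actual design is far more sophisticated; every constant, reading, and function name here is illustrative only, not taken from the study.

```python
# A minimal sketch of a closed-loop insulin controller; NOT the Cambridge
# adaptive MPC algorithm. All constants and readings are illustrative.
TARGET_MGDL = 126        # the study's target glucose
CYCLE_MINUTES = 10       # the algorithm ran roughly every 8-12 minutes

def compute_infusion_rate(glucose_mgdl, insulin_on_board_units):
    """Toy proportional controller: raise the rate above basal when glucose
    exceeds target, and back off when insulin is already on board."""
    basal_rate = 0.5                                   # hypothetical basal, U/h
    error = glucose_mgdl - TARGET_MGDL
    rate = basal_rate + 0.01 * error - 0.2 * insulin_on_board_units
    return max(rate, 0.0)                              # a pump cannot infuse negative insulin

# Simulate a few control cycles with made-up CGM readings (mg/dL).
readings = [180, 165, 150, 140, 131]
iob = 0.0
for glucose in readings:
    rate = compute_infusion_rate(glucose, iob)
    iob = 0.7 * iob + rate * (CYCLE_MINUTES / 60)      # crude insulin-on-board decay
    print(f"glucose={glucose} mg/dL -> infusion rate={rate:.2f} U/h")
```

The real system replaces the proportional rule with an adaptive model predictive controller that learns an individual's insulin requirements over time, which is what the study credits for the improvement between day one and day twenty.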

The Case Study

Between October 2019 and November 2020, the team recruited 26 dialysis patients. Thirteen patients were randomly assigned to receive the artificial pancreas first, and 13 received standard insulin therapy first. The researchers compared how long patients spent as outpatients in the target blood sugar range (5.6 to 10.0 mmol/L) over a 20-day period.

Patients who used the artificial pancreas spent 53% of the time in the target range on average, compared with 38% for the control treatment. This translated to approximately 3.5 more hours per day spent in the target range.

The artificial pancreas also resulted in lower mean blood sugar levels (10.1 vs. 11.6 mmol/L) and cut the amount of time patients spent with potentially dangerously low blood sugar levels, known as ‘hypos’.

The artificial pancreas’ efficacy improved considerably over the study period as the algorithm adapted: the time spent in the target blood sugar range climbed from 36% on day one to over 60% by the twentieth day. This result emphasizes the importance of an adaptive algorithm that can adjust to an individual’s changing insulin requirements over time.

When asked whether they would recommend the artificial pancreas to others, everyone who responded said they would. Nine out of ten (92%) reported spending less time managing their diabetes with the artificial pancreas than during the control period, and a comparable proportion (87%) said they were less worried about their blood sugar levels when using it.

Other advantages of the artificial pancreas mentioned by study participants included fewer finger-prick blood sugar tests, less time spent managing their diabetes, resulting in more personal time and independence, and increased peace of mind and reassurance. One disadvantage was the pain of wearing the insulin pump and carrying the smartphone.

Professor Roman Hovorka, a senior author from the Wellcome Trust-MRC Institute of Metabolic Science, mentioned:

Not only did the artificial pancreas increase the amount of time patients spent within the target range for the blood sugar levels, but it also gave the users peace of mind. They were able to spend less time having to focus on managing their condition and worrying about the blood sugar levels, and more time getting on with their lives.

The team is currently testing the artificial pancreas in outpatient settings in persons with type 2 diabetes who do not require dialysis, as well as in difficult medical scenarios such as perioperative care.

“The artificial pancreas has the potential to become a fundamental part of integrated personalized care for people with complicated medical needs,” said Dr Lia Bally, who co-led the study in Bern.

The authors stated that the study’s shortcomings included a small sample size due to “Brexit-related study funding concerns and the COVID-19 epidemic.”

Boughton concluded:

We would like other clinicians to be aware that automated insulin delivery systems may be a safe and effective treatment option for people with type 2 diabetes and kidney failure in the future.

Main Source:

Boughton, C. K., Tripyla, A., Hartnell, S., Daly, A., Herzig, D., Wilinska, M. E., & Hovorka, R. (2021). Fully automated closed-loop glucose control compared with standard insulin therapy in adults with type 2 diabetes requiring dialysis: an open-label, randomized crossover trial. Nature Medicine, 1-6.

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Artificial pancreas effectively controls type 1 diabetes in children age 6 and up

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/10/08/artificial-pancreas-effectively-controls-type-1-diabetes-in-children-age-6-and-up/

Google, Verily’s Uses AI to Screen for Diabetic Retinopathy

Reporter : Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/04/08/49900/

World’s first artificial pancreas

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/05/16/worlds-first-artificial-pancreas/

Artificial Pancreas – Medtronic Receives FDA Approval for World’s First Hybrid Closed Loop System for People with Type 1 Diabetes

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/09/30/artificial-pancreas-medtronic-receives-fda-approval-for-worlds-first-hybrid-closed-loop-system-for-people-with-type-1-diabetes/

Read Full Post »

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions. The FDA has already approved some AI/ML algorithms for the analysis of medical images for diagnostic purposes. These have been discussed in prior posts on this site, as have issues arising from multi-center trials. The authors of this perspective article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1. Interpretable AI: A typical AI/ML task requires constructing an algorithm that takes vector inputs and generates an output related to an outcome (like diagnosing a cardiac event from an image). Generally the algorithm must be trained on past data with known parameters. When an algorithm is called interpretable, it uses a transparent or “white box” function that is easily understandable; an example might be a simple linear function with a small number of parameters. Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML: This type of algorithm depends on many complex parameters. A first round of predictions comes from a “black box” model, and a second, interpretable algorithm is then trained to approximate the outputs of the first. Crucially, this second model is trained not on the original data but on the black-box model’s predictions, across multiple iterations of computing. The approach is deemed more accurate or reliable in prediction, but it is very complex and not easily understood. Many medical devices that use an AI/ML algorithm are of this type; deep learning and neural networks are examples. (A code sketch of this surrogate-model pattern follows below.)

The purpose of both these methodologies is to deal with problems of opacity, or that AI predictions based from a black box undermines trust in the AI.
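To make the explainable (surrogate) approach concrete, here is a minimal sketch assuming scikit-learn: a black-box model is fit to the data, and an interpretable decision tree is then trained on the black box's predictions rather than the original labels — exactly the second-model pattern described above. The dataset and models are illustrative stand-ins, not any medical device's actual pipeline.

```python
# Post-hoc "global surrogate" explanation: train an interpretable model
# to mimic a black-box model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The opaque, high-accuracy model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how closely the interpretable model tracks the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")
```

A surrogate with high fidelity offers a readable approximation of the black box's behavior, but, as the Science authors caution, it explains a model of the model, not the black box itself.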

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the blog post above by Jonathan Johnson:

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significantly each of the parameters, all the blue nodes, contributes to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
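Percentages like these can be estimated from a trained model. One common, model-agnostic way is permutation importance, sketched below with scikit-learn on synthetic data; the feature names mirror the illustrative list above, and the resulting numbers are not meaningful.

```python
# A hedged sketch of estimating feature-importance shares with permutation
# importance; data is synthetic and feature names are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPRegressor

feature_names = ["age", "bmi_score", "years_smoking", "career_category"]
X, y = make_regression(n_samples=500, n_features=4, noise=5.0, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=1)
model.fit(X, y)

# Shuffle each feature in turn and measure the drop in model score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
importances = np.clip(result.importances_mean, 0, None)  # guard against small negatives
shares = importances / importances.sum()
for name, share in zip(feature_names, shares):
    print(f"{name}: {share:.0%} of importance")
```

Note that such scores describe the model's behavior, not causal reality, which is precisely the gap between explainability and interpretability that the post goes on to discuss.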

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers to engineers is it creates a black box of parameters, like fake additional data points, that allow a model to base its decisions against. These fake data points go unknown to the engineer. The black box, or hidden layers, allow a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.

So how does the choice between these two types of algorithms make a difference with respect to health care and medical decision-making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Giving individuals added flexibility when interacting with a model, for example inputting their own test data
  • More interaction between users and model developers
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 

Read Full Post »

