
ChatGPT Searches and Advent of Meta Threads: What it Means for Social Media and Science 3.0

Curator: Stephen J. Williams, PhD

The following explains how popular ChatGPT has become and how the latest social media platforms, including Meta’s (Facebook) new platform Threads, are becoming as popular as or more popular than older social platforms.  In fact, in the short time since its launch (Threads launched 7/07/2023), Threads is threatening Twitter for dominance in that market.

The following is taken from an email from Charlie Dowling-Jones of journoresearch.org and https://www.digital-adoption.com/ :

U.S. searches for ChatGPT overtake TikTok, Pinterest, and Zoom

  • Google searches for ChatGPT have overtaken TikTok in the U.S., jumping to 7.1 million monthly searches compared to 5.1 million
  • The term ‘ChatGPT’ is now one of the top 100 search terms in the U.S., ranking 92nd, according to Ahrefs data
  • ChatGPT is now searched more than most major social networks, including LinkedIn, Pinterest, TikTok, and Reddit

Analysis of Google search data reveals that online searches for ChatGPT, the popular AI chatbot, have overtaken those of most popular social networks in the U.S. This comes as search interest in artificial intelligence is at its highest point in history.

 

The findings by Digital-adoption.com reveal that US-based searches for ChatGPT have exploded and overtaken those of popular social networks, such as LinkedIn, Pinterest, and TikTok, some by millions.

 

Ranking   Keyword     US Search Volume (Monthly)
1         Facebook                    70,920,000
2         YouTube                     69,260,000
3         Twitter                     15,440,000
4         Instagram                   12,240,000
5         ChatGPT                      7,130,000
6         LinkedIn                     6,990,000
7         Pinterest                    5,790,000
8         TikTok                       5,130,000
9         Reddit                       4,060,000
10        Snapchat                     1,280,000
11        WhatsApp                       936,000

 

Since its release in November 2022, searches for ChatGPT have overtaken those of most major social networks. According to the latest June search figures by search tool Ahrefs, searches for ‘ChatGPT’ and ‘Chat GPT’ are made 7,130,000 times monthly in the U.S.

That’s more than the monthly search volume for most of the top ten social networks, including LinkedIn, Pinterest, and TikTok. TikTok is one of the fastest-growing social media apps, having added 100 million users in just a year.


 

Searches for ChatGPT have eclipsed other major networks in the U.S., such as Reddit, by millions.

Everyday search terms such as ‘maps’ and ‘flights’ have seen their search volumes pale in comparison to the rising popularity of ChatGPT. ‘Maps’ is currently searched 440,000 times less than the chatbot each month, and ‘flights’ is now Googled 2.2 million times less.

2023 has been a breakout year for AI, as searches for the term have more than doubled from 17 million in January 2023 to 42 million in May. In comparison, there were 7.9 million searches in January 2022. There has been an 825% increase in searches for ‘AI’ in the US compared to the average over the last five years.

There is a correlation between the uptick and the public releases of accessible AI chatbots such as ChatGPT, released on November 30, 2022, and Bing AI and Google Bard, released in May 2023.

According to TikTok data, interest in artificial intelligence has soared tenfold since 2020, and virtual reality has more than tripled.

AI has been a big topic of conversation this year as accessible AI chatbots and new technologies were released and sparked rapid adoption, prompting tech leaders like Elon Musk to call for AI regulation.

A spokesperson from Digital-adoption.com commented on the findings: “There has been a massive surge in AI interest this year. Apple’s announcement of Vision Pro has captured audiences at the right time, when new AI technologies, like ChatGPT, have become accessible to almost anyone. The rapid adoption of ChatGPT is surprising, with it becoming one of the fastest-growing tools available”.

All data was gathered from Ahrefs and Google Trends.

If using this story, please include a link to https://www.digital-adoption.com/ who conducted this study. A linked credit allows us to keep supplying you with content that you may find useful in the future.

 

If you need anything else, please get in touch.

All the best,
Charlie Dowling-Jones

 

charlie.dowling-jones@journoresearch.org

 

Journo Research

Part of Search Intelligence Ltd. Company registered in England No. 09361526

Why LPBI Needs to Consider the New Meta Threads Platform

From Barrons

Threads Hits 100 Million Users Faster Than ChatGPT. Now It Needs Them to Stay.

 

By Adam Clark

Updated July 10, 2023 9:00 am ET / Original July 10, 2023 7:44 am ET

The launch of Meta Platforms’ Threads looks to have outpaced even the viral success of ChatGPT in terms of signing up users. The next challenge will be keeping them around.

Since its inception on Thursday 7/07/2023, Meta’s new Threads platform has been signing up new users at an astonishing rate.  On the rollout date 5 million signed up, then 30 million by the next morning, and now, as of today (7/10/2023), Threads has over 100 million signups.  Compare that to Twitter’s 436 million users, who are tweeting on average 25% less than a few years ago, and it is easy to see why many social media pundits are calling Threads the new Twitter-killer app.

 

Here are a few notes from the New York Times podcast The Daily

The Daily


Will Threads Kill Twitter?


Last week, Meta, the parent company of Facebook and Instagram, released Threads, a social media platform to compete with Twitter. In just 16 hours, Threads was downloaded more than 30 million times.

Mike Isaac, who covers tech companies and Silicon Valley for The Times, explains how Twitter became so vulnerable and discusses the challenges Meta faces to create a less toxic alternative.

Guest: Mike Isaac, a technology correspondent for The New York Times.


Here are a few notes from the podcast:

Mike Isaac lamented that Twitter has become user unfriendly for a host of reasons.  These include:

  • The instant “reply guys” – people who reply but don’t really follow you or your thread
  • Your followers, or those you follow, are not pushed to the top of the thread
  • The auto bots – the automated Twitter bots
  • Spam feeds
  • The changes in service and all the new fees – Twitter’s push to monetize everything, much like the airlines

Elon Musk wanted to transform Twitter, but his history is one of cutting: not just trimming the excess, he is known to eliminate entire departments simply because he either doesn’t want to pay or CAN’T pay.  At Twitter he gutted content moderation.

 

Twitter’s ad business is plummeting, but Musk wants to make Twitter a subscription business (the Blue check mark).

Twitter only gets a couple of million dollars per month from Twitter Blue, but Musk has to pay billions just to service the interest on the loans from the Twitter purchase.

It is known that Musk is not paying rent on some California offices (some suggest he has defaulted on leases), and Musk has been selling Tesla stock to pay for Twitter expenses (the consensus explanation for why TSLA stock has been falling).

Twitter is the largest compendium of natural-language conversations, and Musk wanted to limit bots from scraping Twitter data to do AI and NLP on Twitter threads.  This is also a grievance from other companies: these ‘scrapers’ are not paying enough for Twitter data.  However, as Mike asks, why should small Twitter users have to pay, either in fees or in cutbacks to service?  (The reason Elon is limiting views per day is to stop these bots from scraping Twitter for data.)

Another problem is that Twitter does not have its own servers, so it pays a great deal to Google and AWS for server space.  It appears Elon and Twitter are running out of money.

META and THREADS

Zuckerberg has spent billions on infrastructure and created a massive advertising ecosystem.  This is one of the thoughts behind his push into this space.  Zuckerberg actually wanted to buy Twitter a decade ago.

 

Usage and growth:  The launch of Threads was Thursday 7-07-23. There were 2 million initial signups, then 30 million by the next morning, and today, Monday 7-10-23, there are 100 million, rivaling Twitter’s 436 million accounts.  And as Musk keeps canceling Twitter accounts, angering users over fees and usage restrictions, people are looking for a good alternative.  Mastodon is too technical and is not seeing the adoption Meta Threads is.  Mike Isaac hopes Threads will not go the way of Google Hangouts or Google Plus, but Google’s strategy did not center on social media the way Facebook’s does.

Signup and issues:  Signup on Threads is easy, but you must go through Instagram.  Some people have concerns about having their Instagram feed appear in their Threads feed, but Mike has talked to people at Meta, and they are working to let users keep the feeds separate, mainly because Meta understands that the Instagram and Twitter social cultures are different and users may want to keep Threads more business-like.

Important issues for LPBI:  Twitter decided, by the end of May 2023, to end its relationship with WordPress’s Jetpack service, through which WordPress posts could automatically be posted to a Twitter account and feed.  Twitter is making users like WordPress pay for this API, and WordPress said it would be too expensive, as Twitter is charging a per-usage fee rather than a flat fee.  This is a major hindrance, even though the Twitter social-share button is still active on posts.

Initial conversations between META and WordPress have indicated META will keep this API service free for WordPress.

 

So, a little background on Meta Threads and its signup features, from the Meta (Facebook) website:

Takeaways

  • Threads is a new app, built by the Instagram team, for sharing text updates and joining public conversations.
  • You log in using your Instagram account and posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length.
  • We’re working to soon make Threads compatible with the open, interoperable social networks that we believe can shape the future of the internet.

Mark Zuckerberg just announced the initial version of Threads, an app built by the Instagram team for sharing with text. Whether you’re a creator or a casual poster, Threads offers a new, separate space for real-time updates and public conversations. We are working toward making  Threads compatible with the open, interoperable social networks that we believe can shape the future of the internet.

Instagram is where billions of people around the world connect over photos and videos. Our vision with Threads is to take what Instagram does best and expand that to text, creating a positive and creative space to express your ideas. Just like on Instagram, with Threads you can follow and connect with friends and creators who share your interests – including the people you follow on Instagram and beyond. And you can use our existing suite of safety and user controls.

Join the Conversation from Instagram

It’s easy to get started with Threads: simply use your Instagram account to log in. Your Instagram username and verification will carry over, with the option to customize your profile specifically for Threads.

Everyone who is under 16 (or under 18 in certain countries) will be defaulted into a private profile when they join Threads. You can choose to follow the same accounts you do on Instagram, and find more people who care about the same things you do. The core accessibility features available on Instagram today, such as screen reader support and AI-generated image descriptions, are also enabled on Threads.

Your feed on Threads includes threads posted by people you follow, and recommended content from new creators you haven’t discovered yet. Posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length. You can easily share a Threads post to your Instagram story, or share your post as a link on any other platform you choose.

Tune Out the Noise

We built Threads with tools to enable positive, productive conversations. You can control who can mention you or reply to you within Threads. Like on Instagram, you can add hidden words to filter out replies to your threads that contain specific words. You can unfollow, block, restrict or report a profile on Threads by tapping the three-dot menu, and any accounts you’ve blocked on Instagram will automatically be blocked on Threads.

As with all our products, we’re taking safety seriously, and we’ll enforce Instagram’s Community Guidelines on content and interactions in the app. Since 2016 we’ve invested more than $16 billion in building up the teams and technologies needed to protect our users, and we remain focused on advancing our industry-leading integrity efforts and investments to protect our community.

Compatible with Interoperable Networks

Soon, we are planning to make Threads compatible with ActivityPub, the open social networking protocol established by the World Wide Web Consortium (W3C), the body responsible for the open standards that power the modern web. This would make Threads interoperable with other apps that also support the ActivityPub protocol, such as Mastodon and WordPress – allowing new types of connections that are simply not possible on most social apps today. Other platforms including Tumblr have shared plans to support the ActivityPub protocol in the future.

We’re committed to giving you more control over your audience on Threads – our plan is to work  with ActivityPub to provide you the option to stop using Threads and transfer your content to another service. Our vision is that people using compatible apps will be able to follow and interact with people on Threads without having a Threads account, and vice versa, ushering in a new era of diverse and interconnected networks. If you have a public profile on Threads, this means your posts would be accessible from other apps, allowing you to reach new people with no added effort. If you have a private profile, you’d be able to approve users on Threads who want to follow you and interact with your content, similar to your experience on Instagram.

The benefits of open social networking protocols go well beyond the ways people can follow each other. Developers can build new types of features and user experiences that can easily plug into other open social networks, accelerating the pace of innovation and experimentation. Each compatible app can set its own community standards and content moderation policies, meaning people have the freedom to choose spaces that align with their values. We believe this decentralized approach, similar to the protocols governing email and the web itself, will play an important role in the future of online platforms.

Threads is Meta’s first app envisioned to be compatible with an open social networking protocol – we hope that by joining this fast-growing ecosystem of interoperable services, Threads will help people find their community, no matter what app they use.

What’s Next

We’re rolling out Threads today in more than 100 countries for iOS and Android, and people in those countries can download the app from the Apple App Store and Google Play Store.

In addition to working toward making Threads compatible with the ActivityPub protocol, soon we’ll be adding a number of new features to help you continue to discover threads and creators you’re interested in, including improved recommendations in feed and a more robust search function that makes it easier to follow topics and trends in real time.

 

Should Science Migrate over to Threads Instead of Twitter?

I have written multiple times about the impact of social media, Science and Web 2.0, and the new Science and Web 3.0, including:

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Science Has A Systemic Problem, Not an Innovation Problem

 

As of this writing, it does not appear crucial that scientific institutions migrate over to Threads yet, although the impetus is certainly there.  Many of the signups have, of course, come through Instagram (currently the only way to sign up), and a search of @Threads does not show that large scientific organizations have signed up so far.

 

A search for NIH, NCBI, AACR, and the Personalized Medicine Coalition (PMC), the big MGH organization on personalized medicine, appears to return nothing yet.  Pfizer and most of big pharma are on @Threads now, but that is because they maintain a marketing presence on Instagram.  How necessary @Threads is for communicating science over a Science 3.0 platform remains to be seen.  In addition, how will @Threads be used for real-time scientific conference coverage?  Will Meta be able to integrate it with virtual reality?

Other articles of Note on this Open Access Scientific Journal Include:

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Science Has A Systemic Problem, Not an Innovation Problem

Relevance of Twitter.com forthcoming Payment System for Scientific Content Promotion and Monetization

Is It Time for the Virtual Scientific Conference?: Coronavirus, Travel Restrictions, Conferences Cancelled

Part One: The Process of Real Time Coverage using Social Media



The Use of ChatGPT in the World of Bioinformatics and Cancer Research and the Development of BioGPT by Microsoft

Curator: Stephen J. Williams, Ph.D.

Chatbots are being used in multiple interdisciplinary areas in research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.

“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.”

The field of bioinformatics is a natural fit for incorporating this technology.  The curated information below shows some examples of the use of this technology in bioinformatics related to cancer research.

The text in this reference source was actually generated by ChatGPT (https://chat.openai.com/chat)

Source: https://omicstutorials.com/using-chatgpt-in-bioinformatics-and-biomedical-research/

How can ChatGPT be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.

For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”

It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.

Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”
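To make the kind of output quoted above concrete, here is a deliberately simple, non-LLM sketch that turns a small expression table into a sentence of that form. The gene names, tissue values, and thresholds are invented for illustration; a real workflow would hand the table to ChatGPT rather than a template.

```python
# Toy sketch: template-based text from an expression table (illustration only;
# the numbers and cutoffs below are invented, not from a real dataset).

def expression_level(value):
    """Bin a normalized expression value into a qualitative label."""
    if value >= 10:
        return "highly expressed"
    if value >= 3:
        return "moderately expressed"
    return "expressed at low levels"

def describe_gene(gene, tissue_values):
    """Build one natural-language sentence for a gene's tissue profile."""
    parts = [f"{expression_level(v)} in the {t}" for t, v in tissue_values.items()]
    return f"Gene {gene} is " + ", ".join(parts) + "."

# Hypothetical input data
profile = {"liver": 12.4, "kidney": 11.0, "brain": 4.2, "heart": 0.8}
print(describe_gene("X", profile))
```

The point of the contrast is that the template can only restate the numbers it is given, whereas a language model can vary phrasing and add connective prose, which is exactly where careful fact-checking becomes necessary.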

 

Thereby, at its simplest level, ChatGPT could be used to ask general questions like “What is the function of gene product X?” and get a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank.  Or even “What are potential interactors of Gene X, validated by yeast two-hybrid?” without going to the curated interactome databases or using expensive software like Genie.

Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”

It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.

To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.

Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:

First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.

Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.

Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.

It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.

 

To train a language model like chatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:

Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.

Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.

Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as chatGPT on your specific dataset, or you can train a new model from scratch.
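The preprocessing step above (tokenizing the text and building a vocabulary) can be sketched in plain Python. A real pipeline would use a subword tokenizer from a library such as Hugging Face’s tokenizers; the mini-corpus below is invented for illustration.

```python
# Sketch of step 2 above: tokenize raw paper text and build a vocabulary
# that maps tokens to the integer ids a language model trains on.
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into word tokens (a crude stand-in for a real NLP tokenizer)."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_vocab(documents, min_count=1):
    """Map each token appearing at least min_count times to an integer id; 0 is reserved for unknowns."""
    counts = Counter(tok for doc in documents for tok in tokenize(doc))
    vocab = {"<unk>": 0}
    for tok, n in counts.most_common():
        if n >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab):
    """Turn raw text into integer ids, mapping unseen tokens to <unk>."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokenize(text)]

# Invented mini-corpus of paper-like sentences
papers = ["BRCA1 variants increase breast cancer risk.",
          "Expression of BRCA1 is reduced in tumors."]
vocab = build_vocab(papers)
print(encode("BRCA1 risk", vocab))
```

Fine-tuning a pre-trained model (step 3) skips the from-scratch vocabulary step, since the pre-trained tokenizer and embeddings are reused as-is.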

ChatGPT could also be useful for sequence analysis

A few examples of sequence-analysis tasks for which ChatGPT could be useful include:

  1. Protein structure
  2. Identifying functional regions of a protein
  3. Predicting protein-protein interactions
  4. Identifying protein homologs
  5. Generating Protein alignments

All this could be done without having access to UNIX servers or proprietary software, or knowing GCG coding.

ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:

Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.

Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.

Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.

Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.

It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.

 

ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:

Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.

Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.

Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.

Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.

Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.
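As a concrete, deliberately crude illustration of the extraction idea above, the sketch below pulls gene-symbol-like tokens out of abstract text with a regular expression. A real system would use a trained NLP model rather than a pattern; the pattern, stopword list, and abstract text here are invented.

```python
# Toy entity extraction: find HUGO-style gene symbols in free text.
import re

# 2-8 characters, uppercase letters/digits, starting with a letter.
# This both over- and under-matches real symbols; illustration only.
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,7}\b")

# Common all-caps words that are not genes (a real system needs a curated list)
STOPWORDS = {"DNA", "RNA", "THE", "AND", "USA"}

def extract_gene_candidates(text):
    """Return unique gene-symbol-like tokens in order of first appearance."""
    seen, out = set(), []
    for match in GENE_PATTERN.findall(text):
        if match not in STOPWORDS and match not in seen:
            seen.add(match)
            out.append(match)
    return out

abstract = ("We show that TP53 and BRCA1 regulate DNA repair, "
            "and that EGFR signaling modulates TP53 activity.")
print(extract_gene_candidates(abstract))  # → ['TP53', 'BRCA1', 'EGFR']
```

The gap between this pattern and reliable extraction (ambiguous symbols, aliases, context) is precisely what an NLP model like ChatGPT or BioGPT is meant to close.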

 

PLEASE WATCH VIDEO

 

In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.

He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:

  1. Using ChatGPT as a search engine: He finds more useful and relevant search results than with a standard Google or Yahoo search.  This saves time, as one does not have to pore through multiple pages to find information.  However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page.  This capability is probably why Microsoft invested heavily in OpenAI, in order to incorporate ChatGPT into its Bing search engine as well as its Office suite programs.

 

  2. ChatGPT to help with coding projects: Bioinformaticians can spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes). One can use ChatGPT to find such code and then assist in debugging it.

 

  3. ChatGPT to document and add code comments: When writing code, it is useful to add comments periodically to help other users understand how the code works and how the program flows, including returned variables.
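The G/C-content task mentioned in point 2 is typical of the short scripts ChatGPT produces on request. A hand-written version looks something like this (a minimal sketch written for this post, not actual ChatGPT output; the function name is ours):

```python
def gc_content(sequence):
    """Return the fraction of G and C bases in a DNA sequence (N allowed as ambiguous)."""
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    unexpected = set(seq) - set("ACGTN")
    if unexpected:
        raise ValueError(f"unexpected characters: {unexpected}")
    gc = seq.count("G") + seq.count("C")
    return gc / len(seq)

print(f"{gc_content('ATGCGCGA') * 100:.1f}% GC")  # → 62.5% GC
```

Even for a script this small, the validation and edge cases (empty input, ambiguous bases) are exactly the parts worth double-checking when the code comes from a chatbot.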

 

One of the comments was interesting and directed readers to use BioGPT instead of ChatGPT:

 

@tzvi7989


0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts

So what is BioGPT?

BioGPT https://github.com/microsoft/BioGPT

 

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Tips:

  • BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
  • BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
  • The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.
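The past_key_values mechanism in the last tip can be illustrated with a toy incremental decoder: at each generation step only the new token's key/value pair is computed, while the pairs for the existing prefix are reused from the cache. This is a conceptual sketch, not the actual BioGptForCausalLM API:

```python
def make_kv(token):
    # stand-in for the key/value projection of one token
    return (hash(token) % 97, hash(token) % 89)

def decode_step(new_token, past_kv=None):
    """Return the updated cache; only the new token's K/V is computed."""
    cache = list(past_kv) if past_kv else []
    cache.append(make_kv(new_token))       # prefix entries are reused, not recomputed
    return cache

cache = None
for tok in ["Bicalutamide", "is", "an", "androgen"]:
    cache = decode_step(tok, cache)
print(len(cache))  # → 4 cached key/value pairs, one per generated token
```

Without the cache, step N would have to recompute all N-1 prefix projections, so generation cost would grow quadratically with sequence length instead of linearly.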

This model was contributed by kamalkraj. The original code can be found here.

 

This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a GitHub project developed by Microsoft Research and is implemented in Python.

License

BioGPT is MIT-licensed. The license applies to the pre-trained models as well.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

As of right now this does not appear to be fully open access; a sign-up is required!

We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.

  • BioGPT – pre-trained BioGPT model checkpoint (URL download and 🤗 Hub)
  • BioGPT-Large – pre-trained BioGPT-Large model checkpoint (URL download and 🤗 Hub)
  • BioGPT-QA-PubMedQA-BioGPT – BioGPT fine-tuned for question answering on PubMedQA (URL download)
  • BioGPT-QA-PubMedQA-BioGPT-Large – BioGPT-Large fine-tuned for question answering on PubMedQA (URL download)
  • BioGPT-RE-BC5CDR – BioGPT fine-tuned for relation extraction on BC5CDR (URL download)
  • BioGPT-RE-DDI – BioGPT fine-tuned for relation extraction on DDI (URL download)
  • BioGPT-RE-DTI – BioGPT fine-tuned for relation extraction on KD-DTI (URL download)
  • BioGPT-DC-HoC – BioGPT fine-tuned for document classification on HoC (URL download)

 

As of this writing (6/12/2023) there are three papers referencing BioGPT on PubMed (abstracts and links below)

 

BioGPT: generative pre-trained transformer for biomedical text generation and mining.

Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661

We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. …Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fl …

Here they show the advantage of BioGPT over GPT-2 in examples of biomedical text generation.

Table 7. Example of generated text

Input: Bicalutamide

GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.

BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)

GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.

BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine

GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.

BioGPT: Apricitabine is an oral prodrug of 5-aza-2 ’-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine

GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.

BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen

GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.

BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451

GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31-32-33-34-35-36-37-38-39-40-41-42-43-44-45-46-47-48-49-50…

BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021

GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.

BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).

Note how BioGPT is more descriptive and accurate!

EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information.

Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012

The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.

Results: We evaluated the classification part on ‘DDIs 2013’ dataset and ‘DTIs’ dataset, achieving the F1 scores of 0.842 and 0.720 respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified with the existing growth truth to confirm the filtered sentences. The generated sentences that are not recorded in DrugBank and DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.

Availability: Source code are publicly available at https://github.com/Layne-Huang/EGFI.

 

GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.

Jin Q, Yang Y, Chen Q, Lu Z. ArXiv. 2023 May 16: arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.

While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.
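GeneGPT's augmented decoding detects API calls in the generated text and executes them, splicing the results back into the answer. The detect-and-execute loop can be sketched schematically; the `[API:…]` call syntax and the stubbed executor below are made-up stand-ins for GeneGPT's actual format and for real NCBI Web API requests:

```python
import re

API_PATTERN = re.compile(r"\[API:(.*?)\]")

def stub_execute(url):
    # placeholder for a real NCBI Web API request (hypothetical URL and result)
    return {"https://eutils.ncbi.nlm.nih.gov/gene?term=BRCA1": "672"}.get(url, "?")

def resolve_api_calls(generated_text):
    """Replace each detected API call with its (stubbed) execution result."""
    return API_PATTERN.sub(lambda m: stub_execute(m.group(1)), generated_text)

out = resolve_api_calls(
    "The Gene ID of BRCA1 is [API:https://eutils.ncbi.nlm.nih.gov/gene?term=BRCA1].")
print(out)  # → The Gene ID of BRCA1 is 672.
```

Grounding each factual lookup in a database call, rather than in the model's parameters, is what lets this approach sidestep the hallucination problem the abstract describes.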

PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT

This one entitled

Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP

 

gives a good general description of this new Microsoft Research project and its usefulness in scanning 15 million articles on PubMed while returning ChatGPT-like answers.

 

Please note one of the comments which is VERY IMPORTANT


@rufus9322

2 months ago

bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.

 

Much like Microsoft Azure, BioGPT appears to be aimed at developers with advanced programming skills.  It seems odd to pay programmers large salaries when one or two key opinion leaders from the medical field might suffice, but I am sure Microsoft will figure this out.

 

ALSO VIEW VIDEO

 

 

This is a talk from Microsoft on BioGPT

 

Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include

Medicine with GPT-4 & ChatGPT
Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

Proposal for New e-Book Architecture: Bi-Lingual eTOCs, English & Spanish with NLP and Deep Learning results of Medical Text Analysis – Phase 1: six volumes

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

 

20 articles in Natural Language Processing

142 articles in BioIT: BioInformatics

111 articles in BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics

 

Read Full Post »

Artificial Intelligence (AI) Used to Successfully Determine Most Likely Repurposed Antibiotic Against Deadly Superbug Acinetobacter baumannii

Reporter: Stephen J. Williams, Ph.D.

The World Health Organization has identified three superbugs, infective microorganisms displaying multidrug resistance to common antibiotics, as threats to humanity:

Three bacteria were listed as critical:

  • Acinetobacter baumannii bacteria that are resistant to important antibiotics called carbapenems. Acinetobacter baumannii are highly-drug resistant bacteria that can cause a range of infections for hospitalized patients, including pneumonia, wound, or blood infections.
  • Pseudomonas aeruginosa, which are resistant to carbapenems. Pseudomonas aeruginosa can cause skin rashes and ear infections in healthy people but also severe blood infections and pneumonia when contracted by sick people in the hospital.
  • Enterobacteriaceae — a family of bacteria that live in the human gut — that are resistant to both carbapenems and another class of antibiotics, cephalosporins.

 

Development of antibiotics against these pathogens has been designated a critical need. Now researchers at McMaster University and collaborators in the US have used artificial intelligence (AI) to screen a library of over 7,000 chemicals for a drug that could be repurposed to kill off the pathogen.

Liu et al. (1) recently published in Nature Chemical Biology the results of an AI screen to narrow down potential chemicals that could work against Acinetobacter baumannii.

Abstract

Acinetobacter baumannii is a nosocomial Gram-negative pathogen that often displays multidrug resistance. Discovering new antibiotics against A. baumannii has proven challenging through conventional screening approaches. Fortunately, machine learning methods allow for the rapid exploration of chemical space, increasing the probability of discovering new antibacterial molecules. Here we screened ~7,500 molecules for those that inhibited the growth of A. baumannii in vitro. We trained a neural network with this growth inhibition dataset and performed in silico predictions for structurally new molecules with activity against A. baumannii. Through this approach, we discovered abaucin, an antibacterial compound with narrow-spectrum activity against A. baumannii. Further investigations revealed that abaucin perturbs lipoprotein trafficking through a mechanism involving LolE. Moreover, abaucin could control an A. baumannii infection in a mouse wound model. This work highlights the utility of machine learning in antibiotic discovery and describes a promising lead with targeted activity against a challenging Gram-negative pathogen.
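The screen-train-predict loop in the abstract can be sketched with a deliberately simple toy model: score each unscreened molecule by its fingerprint similarity to known growth inhibitors, then rank the library. The real study trained a neural network on the ~7,500-molecule screen; the binary fingerprints, compound names, and nearest-neighbour scoring below are illustrative assumptions only:

```python
def tanimoto(a, b):
    """Similarity between two binary fingerprints (held as sets of on-bits)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_activity(candidate, actives):
    # toy model: score = similarity to the most similar known growth inhibitor
    return max(tanimoto(candidate, act) for act in actives)

# fingerprints of hypothetical actives from the in vitro growth-inhibition screen
actives = [{1, 4, 7, 9}, {2, 4, 8}]
# hypothetical unscreened library to rank in silico
library = {"cmpd_A": {1, 4, 7}, "cmpd_B": {3, 5, 6}, "cmpd_C": {2, 4, 8, 9}}

ranked = sorted(library, key=lambda n: predict_activity(library[n], actives),
                reverse=True)
print(ranked)  # highest predicted-activity compounds first
```

The payoff of this pattern is the funnel: a cheap in silico score ranks thousands of molecules so that only a short list (240 compounds in this study) needs wet-lab testing.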

Schematic workflow for incorporation of AI for antibiotic drug discovery for A. baumannii from 1. Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

Figure source: https://www.nature.com/articles/s41589-023-01349-8

Article Source: https://www.nature.com/articles/s41589-023-01349-8

  1. Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

 

 

For reference to WHO and lists of most pathogenic superbugs see https://www.scientificamerican.com/article/who-releases-list-of-worlds-most-dangerous-superbugs/

The finding was first reported by the BBC.

Source: https://www.bbc.com/news/health-65709834

By James Gallagher

Health and science correspondent

Scientists have used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly species of superbug.

The AI helped narrow down thousands of potential chemicals to a handful that could be tested in the laboratory.

The result was a potent, experimental antibiotic called abaucin, which will need further tests before being used.

The researchers in Canada and the US say AI has the power to massively accelerate the discovery of new drugs.

It is the latest example of how the tools of artificial intelligence can be a revolutionary force in science and medicine.

Stopping the superbugs

Antibiotics kill bacteria. However, there has been a lack of new drugs for decades and bacteria are becoming harder to treat, as they evolve resistance to the ones we have.

More than a million people a year are estimated to die from infections that resist treatment with antibiotics.

The researchers focused on one of the most problematic species of bacteria – Acinetobacter baumannii, which can infect wounds and cause pneumonia.

You may not have heard of it, but it is one of the three superbugs the World Health Organization has identified as a “critical” threat.

It is often able to shrug off multiple antibiotics and is a problem in hospitals and care homes, where it can survive on surfaces and medical equipment.

Dr Jonathan Stokes, from McMaster University, describes the bug as “public enemy number one” as it’s “really common” to find cases where it is “resistant to nearly every antibiotic”.

 

Artificial intelligence

To find a new antibiotic, the researchers first had to train the AI. They took thousands of drugs where the precise chemical structure was known, and manually tested them on Acinetobacter baumannii to see which could slow it down or kill it.

This information was fed into the AI so it could learn the chemical features of drugs that could attack the problematic bacterium.

The AI was then unleashed on a list of 6,680 compounds whose effectiveness was unknown. The results – published in Nature Chemical Biology – showed it took the AI an hour and a half to produce a shortlist.

The researchers tested 240 in the laboratory, and found nine potential antibiotics. One of them was the incredibly potent antibiotic abaucin.

Laboratory experiments showed it could treat infected wounds in mice and was able to kill A. baumannii samples from patients.

However, Dr Stokes told me: “This is when the work starts.”

The next step is to perfect the drug in the laboratory and then perform clinical trials. He expects the first AI antibiotics could take until 2030 until they are available to be prescribed.

Curiously, this experimental antibiotic had no effect on other species of bacteria, and works only on A. baumannii.

Many antibiotics kill bacteria indiscriminately. The researchers believe the precision of abaucin will make it harder for drug-resistance to emerge, and could lead to fewer side-effects.

 

In principle, the AI could screen tens of millions of potential compounds – something that would be impractical to do manually.

“AI enhances the rate, and in a perfect world decreases the cost, with which we can discover these new classes of antibiotic that we desperately need,” Dr Stokes told me.

The researchers tested the principles of AI-aided antibiotic discovery in E. coli in 2020, but have now used that knowledge to focus on the big nasties. They plan to look at Staphylococcus aureus and Pseudomonas aeruginosa next.

“This finding further supports the premise that AI can significantly accelerate and expand our search for novel antibiotics,” said Prof James Collins, from the Massachusetts Institute of Technology.

He added: “I’m excited that this work shows that we can use AI to help combat problematic pathogens such as A. baumannii.”

Prof Dame Sally Davies, the former chief medical officer for England and government envoy on anti-microbial resistance, told Radio 4’s The World Tonight: “We’re onto a winner.”

She said the idea of using AI was “a big game-changer, I’m thrilled to see the work he (Dr Stokes) is doing, it will save lives”.

Other related articles and books published in this Online Scientific Journal include the following:

Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases, Reproductive Genomic Endocrinology

(3 book series: Volume 1, 2&3, 4)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

  • The Immune System, Stress Signaling, Infectious Diseases and Therapeutic Implications:

 

  • Series D, VOLUME 2

Infectious Diseases and Therapeutics

and

  • Series D, VOLUME 3

The Immune System and Therapeutics

(Series D: BioMedicine & Immunology) Kindle Edition.

On Amazon.com since September 4, 2017

(English Edition) Kindle Edition – as one Book

https://www.amazon.com/dp/B075CXHY1B $115

 

Bacterial multidrug resistance problem solved by a broad-spectrum synthetic antibiotic

The Journey of Antibiotic Discovery

FDA cleared Clever Culture Systems’ artificial intelligence tech for automated imaging, analysis and interpretation of microbiology culture plates speeding up Diagnostics

Artificial Intelligence: Genomics & Cancer

Read Full Post »

The Vibrant Philly Biotech Scene: Proteovant Therapeutics Using Artificial Intelligence and Machine Learning to Develop PROTACs

Reporter: Stephen J. Williams, Ph.D.

It has been a while since I have added to this series, but there has been a plethora of exciting biotech startups in the Philadelphia area, many of them combining technology, biotech, and machine learning. One such exciting biotech is Proteovant Therapeutics, which is combining the new PROTAC (Proteolysis-Targeting Chimera) technology with its in-house ability to use machine learning and artificial intelligence to design these types of compounds against multiple intracellular targets.

PROTAC is actually a trademark of Arvinas Operations; the modality is also referred to as protein degraders. PROTACs take advantage of the cell's protein homeostatic mechanism of ubiquitin-mediated protein degradation, a highly specific, targeted process that regulates the levels of various transcription factors, proto-oncogenes, and receptors. In essence, this regulated proteolytic process is needed for normal cellular function, and alterations in it may lead to oncogenesis, or to a proteotoxic crisis leading to mitophagy, autophagy and cellular death. The key to this technology is using chemical linkers to associate an E3 ligase with a protein target of interest. E3 ligases catalyze the rate-limiting step in marking proteins bound for degradation by the proteasome with ubiquitin chains.

Model of PROTAC Ternary Complex

A review of this process as well as PROTACs can be found elsewhere in articles (and future articles) on this Open Access Journal.

Proteovant has made two important collaborations:

  1. Oncopia Therapeutics: came out of the University of Michigan Innovation Hub and the lab of Shaomeng Wang, who developed a library of BET- and MDM2-based protein degraders. Oncopia was acquired by Roivant Sciences in 2020.
  2. Roivant Sciences: uses computer-aided design of protein degraders

Proteovant Company Description:

Proteovant is a newly launched development-stage biotech company focusing on discovery and development of disease-modifying therapies by harnessing natural protein homeostasis processes. We have recently acquired numerous assets at discovery and development stages from Oncopia, a protein degradation company. Our lead program is on track to enter IND in 2021. Proteovant is building a strong drug discovery engine by combining deep drugging expertise with innovative platforms including Roivant’s AI capabilities to accelerate discovery and development of protein degraders to address unmet needs across all therapeutic areas. The company has recently secured $200M funding from SK Holdings in addition to investment from Roivant Sciences. Our current therapeutic focus includes but is not limited to oncology, immunology and neurology. We remain agnostic to therapeutic area and will expand therapeutic focus based on opportunity. Proteovant is expanding its discovery and development teams and has multiple positions in biology, chemistry, biochemistry, DMPK, bioinformatics and CMC at many levels. Our R&D organization is located close to major pharmaceutical companies in Eastern Pennsylvania with a second site close to biotech companies in Boston area.

Protein degradation

Source: Proteovant

The ubiquitin proteasome system (UPS) is responsible for maintaining protein homeostasis. Targeted protein degradation by the UPS is a cellular process that involves marking proteins and guiding them to the proteasome for destruction. We leverage this physiological cellular machinery to target and destroy disease-causing proteins.

Unlike traditional small molecule inhibitors, our approach is not limited by the classic “active site” requirements. For example, we can target transcription factors and scaffold proteins that lack a catalytic pocket. These classes of proteins, historically, have been very difficult to drug. Further, we selectively degrade target proteins, rather than isozymes or paralogous proteins with high homology. Because of the catalytic nature of the interactions,  it is possible to achieve efficacy at lower doses with prolonged duration while decreasing dose-limiting toxicities.

Biological targets once deemed “undruggable” are now within reach.
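The catalytic advantage described above ("efficacy at lower doses with prolonged duration") can be illustrated with a toy comparison between occupancy-driven inhibition and event-driven degradation: an inhibitor molecule neutralizes at most one protein copy at a time, while a degrader cycles and can mark many copies for destruction. The turnover number and molecule counts below are made-up illustrative values:

```python
def inhibitor_effect(drug_molecules, protein_copies):
    # occupancy-driven: each drug molecule blocks at most one protein at a time
    return min(drug_molecules, protein_copies)

def degrader_effect(drug_molecules, protein_copies, turnovers=50):
    # event-driven: each degrader recycles after the proteasome destroys its
    # target, so one molecule can tag many protein copies (toy turnover number)
    return min(drug_molecules * turnovers, protein_copies)

print(inhibitor_effect(10, 1000))  # → 10 proteins neutralized
print(degrader_effect(10, 1000))   # → 500 proteins degraded at the same dose
```

This sub-stoichiometric behavior is why degraders can, in principle, reach efficacy at doses where a classical inhibitor would barely dent the target pool.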

About Roivant Sciences: from PRNewswire https://www.prnewswire.com/news-releases/roivant-unveils-targeted-protein-degradation-platform-301186928.html

Roivant develops transformative medicines faster by building technologies and developing talent in creative ways, leveraging the Roivant platform to launch “Vants” – nimble and focused biopharmaceutical and health technology companies. These Vants include Proteovant but also Dermavant and Immunovant, as well as others.

Roivant’s drug discovery capabilities include the leading computational physics-based platform for in silico drug design and optimization as well as machine learning-based models for protein degradation.

The integration of our computational and experimental engines enables the rapid design of molecules with high precision and fidelity to address challenging targets for diseases with high unmet need.

Our current modalities include small molecules, heterobifunctionals and molecular glues.

Roivant Unveils Targeted Protein Degradation Platform

– First therapeutic candidate on track to enter clinical studies in 2021

– Computationally-designed degraders for six targets currently in preclinical development

– Acquisition of Oncopia Therapeutics and research collaboration with lab of Dr. Shaomeng Wang at the University of Michigan to add diverse pipeline of current and future compounds

Clinical-stage degraders will provide foundation for multiple new Vants in distinct disease areas

– Platform supported by $200 million strategic investment from SK Holdings

Other articles in this Vibrant Philly Biotech Scene on this Online Open Access Journal include:

The Vibrant Philly Biotech Scene: PCCI Meeting Announcement, BioDetego Presents Colon Cancer Diagnostic Tool

The Vibrant Philly Biotech Scene: Focus on KannaLife Sciences and the Discipline and Potential of Pharmacognosy

The Vibrant Philly Biotech Scene: Focus on Vaccines and Philimmune, LLC

The Vibrant Philly Biotech Scene: Focus on Computer-Aided Drug Design and Gfree Bio, LLC

Philly Biotech Scene: Biobots and 3D BioPrinting (Now called Allevi)

Philly Biotech Scene: November 2015 PCCI Meeting Showcasing ViFant (Penn Center For Innovation)

Spark Therapeutics’ $4.8Billion deal Confirmed as Biggest VC-backed Exit in Philadelphia

Read Full Post »

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

   In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for analysis of medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as well as issues arising from multi-center trials.  The authors of this perspective article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1.  Interpretable AI: A typical AI/ML task constructs an algorithm that maps vector inputs to an output related to an outcome (such as diagnosing a cardiac event from an image).  Generally, the algorithm has to be trained on past data with known outcomes.  An algorithm is called interpretable when it uses a transparent or “white box” function that is easily understood, for example a linear function whose parameters are simple rather than complex.  Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML:  This type of algorithm depends on multiple complex parameters: a “black box” model produces a first round of predictions, and a second algorithm built from an interpretable function is then used to approximate the outputs of the first model.  The second algorithm is trained not on the original data but on the black-box model’s predictions, over multiple iterations of computing.  This method is therefore more accurate, or deemed more reliable, in prediction, but it is very complex and not easily understood.  Many medical devices that use an AI/ML algorithm use this type; deep learning and neural networks are examples.
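The two-step “explainable” setup described in item 2 can be sketched in a few lines. This is an illustrative sketch on synthetic data; the gradient-boosted “black box,” the logistic surrogate, and the feature names are assumptions for illustration, not the specific algorithms discussed by the authors:

```python
# Post-hoc "explainable AI" sketch: a black-box model makes the predictions,
# and a transparent surrogate model is then fitted to those predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # synthetic "clinical" inputs
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)   # synthetic outcome

# Step 1: train the black-box model on the original labels.
black_box = GradientBoostingClassifier().fit(X, y)

# Step 2: train an interpretable surrogate on the black box's *predictions*,
# not on the original labels -- the second-algorithm step described above.
surrogate = LogisticRegression(max_iter=1000).fit(X, black_box.predict(X))

# The surrogate's coefficients provide the human-readable (approximate)
# explanation of the black box's behavior.
print(dict(zip(["feat_a", "feat_b", "feat_c", "feat_d"],
               surrogate.coef_[0].round(2))))
```

How faithfully the surrogate mimics the black box (its “fidelity”) is itself only approximate, which is one reason such after-the-fact explanations can mislead.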

The purpose of both methodologies is to address the problem of opacity: predictions that come from a black box undermine trust in the AI.

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post of Jonathan Johnson

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.
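As a toy illustration of that definition, a one-variable linear model makes the cause-and-effect relationship directly readable from its fitted slope. The data and the effect size here are fabricated purely for illustration:

```python
# Interpretability sketch: a one-variable linear model whose fitted slope
# directly answers "does X relate to Y, and by how much?" (fabricated data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
noise_exposure_db = rng.uniform(60, 100, size=200)              # hypothetical input
hearing_loss = 0.3 * noise_exposure_db + rng.normal(0, 2, 200)  # built-in effect of 0.3

model = LinearRegression().fit(noise_exposure_db.reshape(-1, 1), hearing_loss)

# The slope *is* the interpretation: the cause-and-effect relationship
# can be read straight from the single fitted parameter.
print(round(float(model.coef_[0]), 2))
```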

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.
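That back-propagation step can be sketched for a single neuron. This toy example assumes a sigmoid activation and squared error, which are common choices but not tied to any particular model above:

```python
# Back-propagation sketch: one gradient update of a single neuron's weights,
# assuming a sigmoid activation and squared error (a toy, library-free example).
import numpy as np

x = np.array([0.5, 1.5, -1.0])   # inputs
w = np.array([0.1, -0.2, 0.3])   # the "empty parameters" to be assigned values
target = 1.0
lr = 0.1                         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

pred = sigmoid(w @ x)
error = pred - target

# Chain rule: d(error^2)/dw = 2 * error * sigmoid'(z) * x
grad = 2 * error * pred * (1 - pred) * x
w -= lr * grad                   # the weight update back-propagation performs

new_pred = sigmoid(w @ x)
print(abs(new_pred - target) < abs(pred - target))  # True: the error shrank
```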

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significantly each of the parameters, all the blue nodes, contributes to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
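A breakdown like the one above can be estimated in practice with permutation importance. The sketch below uses synthetic data and a random forest, so the resulting percentages are illustrative, not the numbers quoted:

```python
# Permutation-importance sketch: estimate each input's contribution and
# normalize to percentages like the hypothetical breakdown above.
# The data, model, and resulting numbers are synthetic illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 600
X = np.column_stack([
    rng.integers(20, 80, n),   # age
    rng.normal(26, 4, n),      # BMI score
    rng.integers(0, 40, n),    # years spent smoking
    rng.integers(0, 5, n),     # career category (integer-encoded)
])
# Synthetic "lives to 80" label, driven mainly by smoking and age.
y = ((0.06 * X[:, 2] + 0.02 * X[:, 0] + rng.normal(0, 1, n)) > 2.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

importances = np.clip(result.importances_mean, 0, None)
percentages = 100 * importances / importances.sum()
for name, pct in zip(["age", "BMI", "years smoking", "career"], percentages):
    print(f"{name}: {pct:.0f}%")
```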

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process, like wearing a seat belt while driving a car. The car performs without it, but it offers protection when things crash.

The benefit a deep neural net offers engineers is that it creates a black box of parameters, like additional synthetic data points, on which a model can base its decisions. These synthetic data points go unknown to the engineer. The black box, or hidden layers, allows a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see that it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespans of these individuals and puts a placeholder in the deep net to associate them. If we were to examine the individual nodes in the black box, we could note that this cluster interprets water-related careers as high-risk jobs.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.

So how does the choice between these two types of algorithms make a difference with respect to health care and medical decision-making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example by inputting their own test data
  • More interaction between users and model developers
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 

Read Full Post »

AI is on the way to lead critical ED decisions on CT

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Artificial intelligence (AI) has infiltrated many organizational processes, raising concerns that robotic systems will eventually replace many humans in decision-making. The advent of AI as a tool in health care offers new prospects to improve patient and clinical-team performance, reduce costs, and impact public health. Examples include, but are not limited to, automation; information synthesis for patients, “fRamily” (friends and family unpaid caregivers), and health care professionals; and suggestions and visualization of information for collaborative decision making.

In the emergency department (ED), patients with Crohn’s disease (CD) are routinely subjected to abdominopelvic computed tomography (APCT). It is necessary to diagnose clinically actionable findings (CAF), since they may require immediate, typically surgical, intervention. Repeated APCTs, on the other hand, result in higher ionizing radiation exposure. The majority of APCT performance guidance is clinical and empiric. Emergency surgeons struggle to identify which Crohn’s disease patients actually require a CT scan to determine the source of acute abdominal distress.

Image Courtesy: Jim Coote via Pixabay https://www.aiin.healthcare/media/49446

Aid seems to be on the way. Researchers employed machine learning to accurately distinguish these patients from Crohn’s patients who present with the same complaint but may safely avoid the repeated exposure to contrast material and ionizing radiation that CT would otherwise impose on them.

The study entitled “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department” was published on July 9 in Digestive and Liver Disease by gastroenterologists and radiologists at Tel Aviv University in Israel.

Jacob Ollech and his fellow researchers retrospectively analyzed 101 emergency visits of patients with Crohn’s disease who underwent abdominopelvic CT.

They were looking for examples where a scan revealed clinically actionable results. These were classified as intestinal blockage, perforation, intra-abdominal abscess, or complex fistula by the researchers.

On CT, 44 (43.5%) of the 101 cases reviewed had such findings.

To test an AI approach for making the call, Ollech and colleagues utilized a machine-learning technique to design a decision-support tool that requires only four basic clinical factors.

The approach successfully categorized patients into low- and high-risk groups, allowing the researchers to risk-stratify patients by the likelihood of clinically actionable findings on abdominopelvic CT.
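The published tool’s exact factors and weights are not given here, so the following is a hypothetical sketch of how a four-factor model can stratify ED patients into low- and high-risk groups; the features, data, and threshold are all assumptions:

```python
# Hypothetical risk-stratification sketch: a four-feature logistic model
# sorting ED patients into low- and high-risk groups for actionable CT
# findings. Features, data, and the 20% threshold are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.normal(50, 40, n),   # C-reactive protein (hypothetical factor)
    rng.normal(9, 3, n),     # white blood cell count
    rng.integers(0, 2, n),   # prior Crohn's-related surgery (0/1)
    rng.normal(5, 2, n),     # days of symptoms
])
# Synthetic "clinically actionable finding" labels.
y = ((0.02 * X[:, 0] + 0.2 * X[:, 1] + X[:, 2]
      + rng.normal(0, 1, n)) > 3.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Stratify: low-risk patients might safely skip CT; high-risk proceed to CT.
high_risk = risk >= 0.2
print(f"{high_risk.mean():.0%} of patients flagged for CT")
```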

Ollech and co-authors admit that their limited sample size, retrospective strategy, and lack of external validation are shortcomings.

Moreover, several patients fell into an intermediate-risk category, implying that in a real-world situation a standard workup would still have been required to guide CT decision-making.

Consequently, they draw the following conclusion:

We believe this study shows that a machine learning-based tool is a sound approach for better-selecting patients with Crohn’s disease admitted to the ED with acute gastrointestinal complaints about abdominopelvic CT: reducing the number of CTs performed while ensuring that patients with high risk for clinically actionable findings undergo abdominopelvic CT appropriately.

Main Source:

Konikoff, Tom, Idan Goren, Marianna Yalon, Shlomit Tamir, Irit Avni-Biron, Henit Yanai, Iris Dotan, and Jacob E. Ollech. “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department.” Digestive and Liver Disease (2021). https://www.sciencedirect.com/science/article/abs/pii/S1590865821003340

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI App for People with Digestive Disorders

Reporter: Irina Robu, Ph.D.

https://pharmaceuticalintelligence.com/2019/06/24/ai-app-for-people-with-digestive-disorders/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, Ph.D.

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Yet another Success Story: Machine Learning to predict immunotherapy response

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/07/06/yet-another-success-story-machine-learning-to-predict-immunotherapy-response/

Systemic Inflammatory Diseases as Crohn’s disease, Rheumatoid Arthritis and Longer Psoriasis Duration May Mean Higher CVD Risk

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2017/10/09/systemic-inflammatory-diseases-as-crohns-disease-rheumatoid-arthritis-and-longer-psoriasis-duration-may-mean-higher-cvd-risk/

Autoimmune Inflammatory Bowel Diseases: Crohn’s Disease & Ulcerative Colitis: Potential Roles for Modulation of Interleukins 17 and 23 Signaling for Therapeutics

Curators: Larry H Bernstein, MD FCAP and Aviva Lev-Ari, PhD, RN https://pharmaceuticalintelligence.com/2016/01/23/autoimmune-inflammtory-bowl-diseases-crohns-disease-ulcerative-colitis-potential-roles-for-modulation-of-interleukins-17-and-23-signaling-for-therapeutics/

Inflammatory Disorders: Inflammatory Bowel Diseases (IBD) – Crohn’s and Ulcerative Colitis (UC) and Others

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/gama-delta-epsilon-gde-is-a-global-holding-company-absorbing-lpbi/subsidiary-5-joint-ventures-for-ip-development-jvip/drug-discovery-with-3d-bioprinting/ibd-inflammatory-bowl-diseases-crohns-and-ulcerative-colitis/

Read Full Post »

This AI Just Evolved From Companion Robot To Home-Based Physician Helper

Reporter: Ethan Coomber, Research Assistant III, Data Science and Podcast Library Development 

Article Author: Gil Press Senior Contributor Enterprise & Cloud @Forbes 

Twitter: @GilPress I write about technology, entrepreneurs and innovation.

Intuition Robotics announced today that it is expanding its mission of improving the lives of older adults to include enhancing their interactions with their physicians. The Israeli startup has developed the AI-based, award-winning proactive social robot ElliQ which has spent over 30,000 days in older adults’ homes over the past two years. Now ElliQ will help increase patient engagement while offering primary care providers continuous actionable data and insights for early detection and intervention.

The very big challenge Intuition Robotics set out to solve was to “understand how to create a relationship between a human and a machine,” says co-founder and CEO Dor Skuler. Unlike a number of unsuccessful high-profile social robots (e.g., Pepper) that tried to perform multiple functions in multiple settings, ElliQ has focused exclusively on older adults living alone. Understanding empathy and how to grow a trusting relationship were the key objectives of Intuition Robotics’ research project, as well as how to continuously learn the specific (and changing) behavioral characteristics, habits, and preferences of the older adults participating in the experiment.

The results are impressive: 90% of users engage with ElliQ every day, without deterioration in engagement over time. When ElliQ proactively initiates deep conversational interactions with its users, there’s a 70% response rate. Most important, the participants share something personal with ElliQ almost every day. “She has picked up my attitude… she’s figured me out,” says Deanna Dezern, an ElliQ user who describes her robot companion as “my sister from another mother.”

Higher patient engagement leads to lower costs of delivering care and the quality of the physician-patient relationship is positively associated with improved functional health, studies have found. Typically, however, primary care physicians see their patients anywhere from once a month to once a year, even though about 85% of seniors in the U.S. have at least one chronic health condition. ElliQ, with the consent of its users, can provide data on the status of patients in between office visits and facilitate timely and consistent communications between physicians and their patients.

Supporting the notion of a home-based physician assistant robot is the transformation of healthcare delivery in the U.S. More and more primary care physicians are moving from a fee-for-service business model, where doctors are paid according to the procedures used to treat a patient, to “capitation,” where doctors are paid a set amount for each patient they see. This shift in how doctors are compensated is gaining momentum as a key solution for reducing the skyrocketing costs of healthcare: “…inadequate, unnecessary, uncoordinated, and inefficient care and suboptimal business processes eat up at least 35%—and maybe over 50%—of the more than $3 trillion that the country spends annually on health care. That suggests more than $1 trillion is being squandered,” states “The Case for Capitation,” a Harvard Business Review article.

Under this new business model, physicians have a strong incentive to reduce or eliminate visits to the ER and hospitalization, so ElliQ’s assistance in early intervention and support of proactive and preventative healthcare is highly valuable. ElliQ’s “new capabilities provide physicians with visibility into the patient’s condition at home while allowing seamless communication… can assist me and my team in early detection and mitigation of health issues, and it increases patients’ involvement in their care through more frequent engagement and communication,” says in a statement Dr. Peter Barker of Family Doctors, a Mass General Brigham-affiliated practice in Swampscott, MA, that is working with Intuition Robotics.

With the new stage in its evolution, ElliQ becomes “a conversational agent for self-reported data on how people are doing based on what the doctor is telling us to look for and, at the same time, a super-simple communication channel between the physician and the patient,” says Skuler. As only 20% of the individual’s health has to do with the administration of healthcare, Skuler says the balance is already taken care of by ElliQ—encouraging exercise, watching nutrition, keeping mentally active, connecting to the outside world, and promoting a sense of purpose.

A recent article in Communications of the ACM pointed out that “usability concerns have for too long overshadowed questions about the usefulness and acceptability of digital technologies for older adults.” Specifically, the authors challenge the long-held assumption that accessibility and aging research “fall under the same umbrella despite the fact that aging is neither an illness nor a disability.”

For Skuler, Intuition Robotics’ offering represents a “pyramid of value.” At the foundation is the physical product, easy to use and operate and doing what it is expected to do. Then there is the layer of “building relationships based on trust and empathy,” with a lot of humor and social interaction and activities for the users. On top are specific areas of value to older adults, the first of which is healthcare. There will be more in the future: anything that could help older adults live better lives, such as direct connections to the local community. “Healthcare is an interesting experiment and I’m very much looking forward to see what else the future holds for ElliQ,” says Skuler.

Original. Reposted with permission, 7/7/2021.

Other related articles published in this Open Access Online Scientific Journal include the Following:

The Future of Speech-Based Human-Computer Interaction
Reporter: Ethan Coomber
https://pharmaceuticalintelligence.com/2021/06/23/the-future-of-speech-based-human-computer-interaction/

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance by utilizing routinely collected medical data. The researchers described their AI model as notable and distinctive for its specific design, which performs assessments at the population level.

The first author Mathieu Ravaut, M.Sc. of the University of Toronto and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated in the same healthcare system in Ontario, Canada. Even though the patients belonged to the same area, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained with data from approximately 1.6 million patients, validated with data from about 243,000 patients, and tested with data from more than 236,000 patients. The data used to build the algorithm included each patient’s medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic (ROC) curve of 80.26%.
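For readers unfamiliar with the metric, an area under the ROC curve (AUC) like this is computed from predicted risks and observed outcomes. The data below are synthetic and tuned only to land near a similar value; they are not the study’s data or model:

```python
# AUC sketch: how an area under the ROC curve is computed from predicted
# risks and observed outcomes. The data are synthetic, tuned only to land
# near a similar value; this is not the study's data or model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
onset = rng.integers(0, 2, 1000)   # 1 = developed type 2 diabetes
# Hypothetical model scores: informative but noisy.
score = (onset * rng.normal(0.62, 0.2, 1000)
         + (1 - onset) * rng.normal(0.38, 0.2, 1000))

auc = roc_auc_score(onset, score)
print(f"AUC: {auc:.2%}")   # an AUC of 0.80 is often reported as "80%"
```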

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases, and it could be aimed at specific cohorts that may face adverse outcomes.

Research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Seeing “through” the cancer with the power of data analysis — possible with the help of artificial intelligence. Credit: MPI f. Molecular Genetics/ Ella Maru Studio
Image Source: https://medicalxpress.com/news/2021-04-sum-mutations-cancer-genes-machine.html

Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as they can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high- or low-risk groups has led many research teams, from the biomedical and the bioinformatics fields, to study the application of machine learning (ML) and Artificial Intelligence (AI) methods. These techniques are therefore being utilized to model the progression and treatment of cancerous conditions through new predictive algorithms.

In the majority of human cancers, heritable loss of gene function through cell division may be mediated as often by epigenetic as by genetic abnormalities. Epigenetic modification occurs through a process of interrelated changes in CpG island methylation and histone modifications. Candidate gene approaches of cell cycle, growth regulatory and apoptotic genes have shown epigenetic modification associated with loss of cognate proteins in sporadic pituitary tumors.

On 11th November 2020, researchers from the University of California, Irvine, advanced the understanding of epigenetic mechanisms in tumorigenesis and reported a previously undetected repertoire of cancer driver genes. The study was published in “Science Advances”.

Researchers were able to identify novel tumor suppressor genes (TSGs) and oncogenes (OGs), particularly those with rare mutations, by using a new prediction algorithm called DORGE (Discovery of Oncogenes and tumor suppressor genes using Genetic and Epigenetic features), which integrates the most comprehensive collection of genetic and epigenetic data.

The senior author Wei Li, Ph.D., the Grace B. Bell chair and professor of bioinformatics in the Department of Biological Chemistry at the UCI School of Medicine said

Existing bioinformatics algorithms do not sufficiently leverage epigenetic features to predict cancer driver genes, even though epigenetic alterations are known to be associated with cancer driver genes.

The Study

This study demonstrated that the cancer driver genes predicted by DORGE included both known cancer driver genes and novel driver genes not reported in the current literature. In addition, the researchers found that the novel dual-functional genes, which DORGE predicted as both TSGs and OGs, are highly enriched at hubs in protein-protein interaction (PPI) and drug/compound-gene networks.

Prof. Li explained that the DORGE algorithm successfully leveraged public data to discover the genetic and epigenetic alterations that play significant roles in cancer driver gene dysregulation, and that it could be instrumental in improving cancer prevention, diagnosis, and treatment efforts in the future.

Another new algorithmic approach to the identification of cancer genes by machine learning has been developed by a team of researchers at the Max Planck Institute for Molecular Genetics (MPIMG) in Berlin and the Institute of Computational Biology of Helmholtz Zentrum München, who combined a wide variety of data, analyzed it with artificial intelligence, and identified numerous cancer genes. They termed the algorithm EMOGI (Explainable Multi-Omics Graph Integration). EMOGI can predict which genes cause cancer even if their DNA sequence is not changed. This opens up new perspectives for targeted cancer therapy in personalized medicine and for the development of biomarkers. The research was published in Nature Machine Intelligence on 12th April 2021.

In cancer, cells get out of control. They proliferate and push their way into tissues, destroying organs and thereby impairing essential vital functions. This unrestricted growth is usually induced by an accumulation of DNA changes in cancer genes—i.e. mutations in these genes that govern the development of the cell. But some cancers have only very few mutated genes, which means that other causes lead to the disease in these cases.

The Study

Overlap of EMOGI’s positive predictions with known cancer genes (KCGs) and candidate cancer genes
Image Source: https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-021-00325-y/MediaObjects/42256_2021_325_MOESM1_ESM.pdf

The aim of the study has been represented in 4 main headings

  • Additional targets for personalized medicine
  • Better results by combination
  • In search of hints for further studies
  • Suitable for other types of diseases as well

The team, headed by Annalisa Marsico, used the algorithm to identify 165 previously unknown cancer genes. The sequences of these genes are not necessarily altered; apparently, dysregulation of these genes alone can lead to cancer. All of the newly identified genes interact closely with well-known cancer genes and were found to be essential for the survival of tumor cells in cell culture experiments. EMOGI can also explain the relationships in the cell’s machinery that make a gene a cancer gene. The software integrates tens of thousands of data sets generated from patient samples. These contain information about DNA methylation, the activity of individual genes, and the interactions of proteins within cellular pathways, in addition to sequence data with mutations. In these data, a deep-learning algorithm detects the patterns and molecular principles that lead to the development of cancer.
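The graph idea behind EMOGI, propagating each gene’s multi-omics features over a protein-protein interaction network, can be illustrated with a minimal NumPy sketch. The network, features, and normalization below are a toy example and an assumption for illustration, not EMOGI’s actual implementation:

```python
# Graph-propagation sketch: one GCN-style step over a toy protein-protein
# interaction (PPI) network, mixing each gene's multi-omics features with
# its neighbors'. A minimal illustration of EMOGI's idea, not its code.
import numpy as np

# Toy PPI network: 4 genes, symmetric adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Per-gene features: [mutation rate, promoter methylation, expression change]
X = np.array([[0.9, 0.1, 0.8],
              [0.1, 0.7, 0.2],
              [0.2, 0.6, 0.9],
              [0.0, 0.1, 0.1]])

# Symmetric normalization with self-loops, as in a graph convolutional layer:
# A_hat = D^(-1/2) (A + I) D^(-1/2)
A_hat = A + np.eye(4)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

H = A_norm @ X   # each gene's features now reflect its network neighborhood
print(H.round(2))
```

Stacking several such steps, each followed by a learned weight matrix and nonlinearity, is what lets a graph neural network relate a gene’s features to those of its network neighborhood.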

Marsico says

Ideally, we obtain a complete picture of all cancer genes at some point, which can have a different impact on cancer progression for different patients

Unlike traditional cancer treatments such as chemotherapy, personalized treatments are tailored to the exact type of tumor. “The goal is to choose the best treatment for each patient, the most effective treatment with the fewest side effects.” In addition, molecular properties can be used to identify cancers that are still in their early stages.

Roman Schulte-Sasse, a doctoral student on Marsico’s team and the first author of the publication says

To date, most studies have focused on pathogenic changes in the sequence, the cell’s blueprint. At the same time, it has recently become clear that epigenetic perturbation or dysregulation of gene activity can also lead to cancer.

For this reason, the researchers merged sequence data that reflect failures in the blueprint with information that represents events in cells. Initially, the scientists confirmed that mutations or amplifications of genomic segments were the leading causes of cancer. Then, in a second step, they identified gene candidates that are less directly related to the genes that cause cancer.

Clues for future directions

The researchers’ new program adds a considerable number of new entries to the list of suspected cancer genes, which has grown to between 700 and 1,000 in recent years. Only through a combination of bioinformatics analysis and the latest artificial intelligence (AI) methods were the researchers able to track down the hidden genes.

Schulte-Sasse says, “The interactions of proteins and genes can be mapped as a mathematical network, known as a graph.” He explained this with the example of a railroad network: each station corresponds to a protein or gene, and each interaction is a train connection. With the help of deep learning, the very algorithms that have helped artificial intelligence achieve breakthroughs in recent years, the researchers were able to discover even those train connections that had previously gone unnoticed. Schulte-Sasse had the computer analyze tens of thousands of network maps from 16 different cancer types, each containing between 12,000 and 19,000 data points.
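The railroad analogy maps directly onto a standard data structure. A minimal sketch, using a few well-known cancer-pathway genes purely as placeholder station names:

```python
# A protein-protein interaction network ("railroad map") stored as an
# adjacency structure: station -> set of directly connected stations.
from collections import defaultdict

interactions = [("TP53", "MDM2"), ("MDM2", "CDKN1A"),
                ("TP53", "CDKN1A"), ("EGFR", "GRB2")]

graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)   # interactions are undirected, like two-way track

print(sorted(graph["TP53"]))  # ['CDKN1A', 'MDM2']
```

The real networks in the study are the same structure at far larger scale, with 12,000 to 19,000 nodes per map, and the deep-learning model’s job is to infer which “connections” matter for cancer rather than to enumerate them by hand.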

Many more interesting details are hidden in the data. The researchers observed patterns that depend on the particular cancer and tissue, which they interpret as evidence that tumors are triggered by different molecular mechanisms in different organs.

Marsico explains

The EMOGI program is not limited to cancer, the researchers emphasize. In theory, it can be used to integrate diverse sets of biological data and find patterns there. It could be useful to apply our algorithm to similarly complex diseases for which multifaceted data are collected and where genes play an important role. An example might be complex metabolic diseases such as diabetes.

Main Source

New prediction algorithm identifies previously undetected cancer driver genes

https://advances.sciencemag.org/content/6/46/eaba6784  

Integration of multiomics data with graph convolutional networks to identify new cancer genes and their associated molecular mechanisms

https://www.nature.com/articles/s42256-021-00325-y#citeas

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis

Curator & Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/12/08/evolution-of-the-human-cell-genome-biology-field-of-gene-expression-gene-regulation-gene-regulatory-networks-and-application-of-machine-learning-algorithms-in-large-scale-biological-data-analysis/

Cancer detection and therapeutics

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/02/cancer-detection-and-therapeutics/

Free Bio-IT World Webinar: Machine Learning to Detect Cancer Variants

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2016/05/04/free-bio-it-world-webinar-machine-learning-to-detect-cancer-variants/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Premalata Pati, PhD, PostDoc in Biological Sciences, Medical Text Analysis with Machine Learning

https://pharmaceuticalintelligence.com/2021-medical-text-analysis-nlp/premalata-pati-phd-postdoc-in-pharmaceutical-sciences-medical-text-analysis-with-machine-learning/

Read Full Post »

Artificial intelligence predicts the immunogenic landscape of SARS-CoV-2

Reporter: Irina Robu, PhD

Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns. Scientists from NEC OncoImmunity used artificial intelligence to design universal vaccines for COVID-19 that contain a broad spectrum of T-cell epitopes capable of providing coverage and protection across the global population. To test their hypothesis, they profiled the entire SARS-CoV-2 proteome across the 100 most frequent HLA-A, HLA-B, and HLA-DR alleles in the human population, using the infected-host-cell surface-antigen and immunogenicity predictors from the NEC Immune Profiler suite of tools, and generated comprehensive epitope maps. They used the epitope maps as the starting point for a Monte Carlo simulation designed to identify the most significant epitope hotspots in the virus. They then analyzed the antigen-presentation and immunogenic landscape and recognized a trend whereby SARS-CoV-2 mutations tend to reduce the potential of viral peptides to be presented by host-infected cells and subsequently noticed by the host immune system. A sequence-conservation analysis then removed epitope hotspots that occurred in less conserved regions of the viral proteome.
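An epitope map of this kind can be thought of as a peptide-by-allele matrix of predicted immunogenic hits, from which candidate hotspots are ranked by their expected contribution across allele frequencies. The following is a toy sketch of that ranking step, with randomly generated hit probabilities and allele frequencies standing in for the real predictor outputs; none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_peptides, n_alleles = 50, 10            # toy scale; the study used ~100 alleles

# Epitope map: True where a peptide is predicted immunogenic for an allele.
binds = rng.random((n_peptides, n_alleles)) < 0.2
allele_freq = rng.dirichlet(np.ones(n_alleles))   # toy population frequencies

# Rank peptides by frequency-weighted number of alleles they cover.
scores = binds @ allele_freq
hotspots = np.argsort(scores)[::-1][:5]           # top 5 candidate hotspots
print(hotspots)
```

In the actual pipeline this scoring would be embedded in a Monte Carlo search and filtered by sequence conservation, but the underlying object, a binary epitope map weighted by allele frequencies, is the same.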

By merging the infected-host-cell surface-antigen and immunogenicity estimates of the NEC Immune Profiler with Monte Carlo and digital twin simulations, the researchers mapped the entire SARS-CoV-2 proteome and identified a subset of epitope hotspots that could be used in a vaccine formulation to provide wide-ranging coverage across the global population.

They then used a database of HLA haplotypes from approximately 22,000 individuals to build a “digital twin” simulation that models how effectively various combinations of hotspots would perform in a diverse human population.
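The digital-twin idea can be sketched as follows: simulate individuals as small sets of HLA alleles, then count what fraction of the population has at least one allele recognized by at least one hotspot in a candidate vaccine set. Everything below (hit rates, allele counts per person, the chosen hotspot indices) is invented for illustration; the real study drew haplotypes from the ~22,000-individual database rather than sampling alleles uniformly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_peptides, n_alleles = 50, 10
binds = rng.random((n_peptides, n_alleles)) < 0.2   # epitope map: hotspot x allele
chosen = [3, 7, 12, 20, 33]                         # a candidate vaccine hotspot set

# Each simulated individual carries a handful of HLA alleles
# (a crude stand-in for real two-haplotype HLA genotypes).
n_people = 10_000
population = rng.integers(0, n_alleles, size=(n_people, 6))

covered = 0
for alleles in population:
    # covered if any chosen hotspot binds any of this person's alleles
    if binds[np.ix_(chosen, np.unique(alleles))].any():
        covered += 1
print(f"coverage: {covered / n_people:.1%}")
```

Maximizing this coverage fraction over candidate hotspot sets is what turns per-allele epitope predictions into a population-level vaccine design criterion.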

SOURCE

https://www.nature.com/articles/s41598-020-78758-5?utm_content=buffer4ebb7

Read Full Post »
