
Archive for the ‘Machine Learning’ Category

Nobel Prize in Physics 2024 to J.J. Hopfield and to G.E. Hinton

Reporter: Aviva Lev-Ari, PhD, RN

 

UPDATED on 10/19/2024

Why the Nobel Prize in Physics Went to AI Research 

Nobel committee recognizes scientists for foundation research in neural networks

 

The Nobel Prize Committee for Physics caught the academic community off-guard by handing the 2024 award to John J. Hopfield and Geoffrey E. Hinton for their foundational work in neural networks.

The pair won the prize for their seminal papers, both published in the 1980s, that described rudimentary neural networks. Though much simpler than the networks used for modern generative AI like ChatGPT or Stable Diffusion, their ideas laid the foundations on which later research built.

Even Hopfield and Hinton didn’t believe they’d win, with the latter telling The Associated Press he was “flabbergasted.” After all, AI isn’t what comes to mind when most people think of physics. However, the committee took a broader view, in part because the researchers based their neural networks on “fundamental concepts and methods from physics.”

“Initially, I was surprised, given it’s the Nobel Prize in Physics, and their work was in AI and machine learning,” says Padhraic Smyth, a distinguished professor at the University of California, Irvine. “But thinking about it a bit more, it was clearer to me why [the Nobel Prize Committee] did this.” He added that physicists in statistical mechanics have “long thought” about systems that display emergent behavior.

Hopfield first explored these ideas in a 1982 paper on neural networks. He described a type of neural network, later called a Hopfield network, formed by a single layer of interconnected neurons. The paper, which was originally categorized under biophysics, said a neural network could retain “memories” from “any reasonably sized subpart.”

Hinton expanded on that work to conceptualize the Boltzmann machine, a more complex neural network described in a 1985 paper Hinton co-authored with David H. Ackley and Terrence J. Sejnowski. They introduced the concept of “hidden units,” additional layers of neurons which exist between the input and output layers of a neural network but don’t directly interact with either. This makes it possible to handle tasks that require a more generalized understanding, like classifying images.

So, what’s the connection to physics?

Hopfield’s paper references the concept of a “spin glass,” a material in which disordered magnetic particles lead to complex interactions. Hinton and his co-authors drew on statistical mechanics, a field of physics that uses statistics to describe the behavior of particles in a system. They even named their network in honor of Ludwig Boltzmann, the physicist whose work formed the foundation of statistical mechanics.

And the connection between neural networks and physics isn’t a one-way street. Machine learning was crucial to the discovery of the Higgs boson, where it sorted the data generated by billions of proton collisions. This year’s Nobel Prize for Chemistry further underscored machine learning’s importance in research, as the award went to a trio of scientists who built an AI model to predict the structures of proteins.

While Hopfield and Hinton authored influential papers, their contributions to machine learning were cemented by their continued work, and both won multiple awards before the Nobel Prize. Among others, Hopfield won the Boltzmann Medal in 2022; Hinton received the IEEE Frank Rosenblatt Award in 2014, the IEEE James Clerk Maxwell Medal in 2016, and the Turing Award in 2018 (that last one alongside Yann LeCun and Yoshua Bengio).

Smyth saw Hopfield’s efforts first-hand as a student at the California Institute of Technology. “Hopfield was able to bring together mathematicians, engineers, computer scientists, and physicists. He got them in the same room, got them excited about modeling the brain, doing pattern recognition and machine learning, unified by mathematical theories he brought in from physics.”

In 2012, Hinton co-founded a company called DNNResearch with two of his students: Ilya Sutskever, who later co-founded OpenAI, and Alex Krizhevsky. Together, the trio collaborated on AlexNet, a hugely influential neural network for computer vision. Hinton also taught at the University of Toronto, where he continued to champion machine learning.

Navdeep Jaitly, now a deep learning researcher at Apple, said Hinton inspired new generations of engineers and researchers. In Jaitly’s case, the influence was direct; Jaitly studied under Hinton at the University of Toronto.

“I came in with experience in statistical modeling,” says Jaitly, “but Hinton still managed to entirely change how I think about problem solving. In terms of his contributions to machine learning, his methods are central to almost everything we do.”

SOURCE

https://spectrum.ieee.org/nobel-prize-in-physics

 

UPDATED on 10/13/2024

Website: https://www.bbc.co.uk/newsnight

https://youtu.be/MGJpR591oaM?si=6DhusxGt_B8dUpT_

 

10/8/2024

John J. Hopfield
Princeton University, NJ, USA

Born 1933 in Chicago, IL, USA. PhD 1958 from Cornell University, Ithaca, NY, USA. Professor at Princeton University, NJ, USA.

Geoffrey E. Hinton
University of Toronto, Canada

Born 1947 in London, UK. PhD 1978 from The University of Edinburgh, UK. Professor at University of Toronto, Canada.

The Nobel Prize in Physics 2024 was announced on 10/8/2024 in Stockholm, Sweden,

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
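The stepwise energy descent described above can be sketched in a few lines of Python. This is a minimal, illustrative toy, not code from Hopfield's paper: the tiny binary "images", the helper names (`train`, `recall`), and all sizes are invented for the demo.

```python
import numpy as np

# Minimal Hopfield-network toy: store binary patterns with a Hebbian rule,
# then recover a corrupted pattern by flipping one node at a time so that
# the spin-glass-style energy of the network never increases.

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian learning: W[i, j] accumulates the correlation of nodes i and j."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def energy(W, s):
    """Saved patterns sit in low-energy minima of this function."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=5):
    """Asynchronous updates: each flip moves the state downhill in energy."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Two 32-"pixel" patterns; we corrupt one and then restore it.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1] * 4,
                   [1, 1, 1, 1, -1, -1, -1, -1] * 4])
W = train(stored)

noisy = stored[0].copy()
noisy[:5] *= -1                 # distort five pixels
restored = recall(W, noisy)
print(np.array_equal(restored, stored[0]))  # the saved image is recovered
```

Stepping through `recall`, `energy(W, s)` only ever decreases, which is exactly the "network's energy falls" behaviour the press release describes.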

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
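The Boltzmann machine's "learn from examples, then recognise or generate" behaviour can also be sketched as a toy. The sketch below uses a restricted Boltzmann machine trained with one-step contrastive divergence, a practical shortcut Hinton later introduced, not the 1985 algorithm itself; the layer sizes, learning rate, and data are invented for the illustration.

```python
import numpy as np

# Toy restricted Boltzmann machine: visible units see the data, hidden units
# learn to represent its characteristic elements. Trained with one-step
# contrastive divergence (CD-1), a practical approximation to the original
# Boltzmann-machine learning rule.

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases

# Two "characteristic elements" the machine should learn to recognise.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

def recon_error(v):
    """How well the machine reconstructs its inputs via the hidden layer."""
    h = sigmoid(v @ W + b)
    return np.mean((sigmoid(h @ W.T + a) - v) ** 2)

before = recon_error(data)
for _ in range(500):
    ph0 = sigmoid(data @ W + b)               # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)               # one Gibbs step back to visibles
    ph1 = sigmoid(pv1 @ W + b)
    W += 0.1 * (data.T @ ph0 - pv1.T @ ph1) / len(data)
    a += 0.1 * (data - pv1).mean(axis=0)
    b += 0.1 * (ph0 - ph1).mean(axis=0)
after = recon_error(data)
print(before > after)  # reconstruction improves with training
```

After training, the hidden units act as detectors for the two patterns, mirroring the press release's point that the machine "can learn to recognise characteristic elements in a given type of data".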

Reference

Popular science background: They used physics to find patterns in information (pdf)


Scientific background: “For foundational discoveries and inventions that enable machine learning with artificial neural networks” (pdf)

 

SOURCE

https://www.nobelprize.org/prizes/physics/2024/press-release/


Drug Development Process Has Been Revolutionized by Artificial Intelligence (AI) Technologies

Curators: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN

 

The Voice of Stephen J. Williams, PhD

LPBI Group, CSO

PENDING

The Voice of Aviva Lev-Ari, PhD, RN

1.0 LPBI Group and 2.0 LPBI Group, Founder

With the advent of AI in the last 5-7 years in our fields of Pharmaceuticals, Life Sciences and Medicine, LPBI Group has launched several initiatives to advance the frontier of knowledge, using our own content repositories of over 8 gigabytes to experiment with Machine Learning (ML) technologies for Medical Text Analysis.

These AI Technologies include

  • Natural Language Processing (NLP): Statistical ML and Deep Learning ML
  • ChatGPT and GPT-4
  • Generative AI
A quote by Brad Power, Co-founder and CEO of Cancer Patient Lab, made in January 2024:
LPBI Group is in the admirable position of sitting on a treasure trove of medical literature that would be useful input in the current environment of customized ChatGPTs looking for reliable medical content.

In the Drug Development (DD) field, AI technologies are being employed chiefly for these tasks:

(a) Generation of molecular information libraries

(b) Exploration and combinatorial experiments on protein structures, and

(c) Measurement of biochemical interactions

The A.I. learns from patterns in the data to suggest possible useful drug candidates, as if matching chemical keys to the right protein locks.

Because A.I. for drug development is powered by precise scientific data, toxic “hallucinations” are far less likely than with more broadly trained chatbots. And any potential drug must undergo extensive testing in labs and in clinical trials before it is approved for patients.

“Generative A.I. is transforming the field, but the drug-development process is messy and very human,” said David Baker, a biochemist and director of the Institute for Protein Design at the University of Washington.

As of December 2023,

  • 24 AI-discovered molecules had completed Phase I trials, with 21 of them being successful. This success rate of 80–90% is higher than the historical industry average of 40–65%. 
  • In Phase II trials, the success rate is around 40%, which is similar to the historical average. 

SOURCE

AI Overview

https://www.google.com/search?q=A.I.-developed+drugs+are+in+clinical+trials&oq=A.I.-developed+drugs+are+in+clinical+trials&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigATIHCAMQIRigAdIBCTIxNzZqMGoxNagCCLACAQ&sourceid=chrome&ie=UTF-8

 

Examples of drugs developed by AI technologies:

NCI definition of AI Drugs:

AI drug

A drug that blocks the activity of an enzyme called aromatase, which the body uses to make estrogen in the ovaries and other tissues. Blocking aromatase lowers the amount of estrogen made by the body, which may stop the growth of cancer cells that need estrogen to grow. AI drugs are used to treat some types of breast cancer or to keep it from coming back. They may also be used to help prevent breast cancer in some women who are at a high risk of developing it. Examples of AI drugs are anastrozole, letrozole, and exemestane. AI drugs are a type of hormone therapy. Also called aromatase inhibitor.
 

SOURCE

https://www.cancer.gov/publications/dictionaries/cancer-terms/def/ai-drug

 

More examples of drugs developed with AI technologies:

#1: 

INS018_055
Developed by Insilico Medicine, a Hong Kong-based biotech startup, to treat idiopathic pulmonary fibrosis (IPF). IPF is a chronic lung disease that causes scarring and can be fatal if left untreated. In January 2023, Insilico Medicine announced positive results from a Phase I safety trial of INS018_055. In February 2023, the FDA granted breakthrough status to a small molecule inhibitor identified by Insilico Medicine’s AI platforms for the drug. As of November 2023, INS018_055 was in mid-stage trials in the US and China, with some results expected in early 2025.
 
The first fully A.I.-generated drug has entered clinical trials in human patients. Insilico Medicine, a Hong Kong-based biotech startup with more than $400 million in funding, created the drug as a treatment for idiopathic pulmonary fibrosis, a chronic lung disease. (Jun 29, 2023)
 
  • Schrödinger’s AI-driven platform
    Uses predictive modeling to optimize the molecular structure of drugs. AI is expected to integrate more advanced simulation techniques, such as quantum computing, to more accurately predict molecular behavior.

Drug discovery software developer Schrodinger Inc. (NASDAQ: SDGR) stock has been trying to recover after plummeting over 80% off its all-time high of $117 in January 2021. Schrodinger’s artificial intelligence (AI) powered software technology platform utilises physics-based modeling and sophisticated machine learning algorithms to help clients identify suitable molecules to treat the desired ailments. Its programs can help predict the behavior of molecules and potential outcomes.

This entails finding suitable molecules that effectively target specific cells and proteins, pass through cell membranes, are absorbed and dissolved well without interfering with or producing adverse reactions to other drugs, and are scalable.

Big Name Pharma Customers of Schrödinger, Inc.

Its technology platform allows for the faster and cheaper discovery of novel molecules with a higher success rate than traditional methods. Its clients include the top 20 pharmaceutical companies in the world, among them Pfizer Inc. (NYSE: PFE), Merck & Co. Inc. (NYSE: MRK), Takeda, AstraZeneca PLC (NYSE: AZN), and GlaxoSmithKline plc (NYSE: GSK). It closed new agreements with Eli Lilly & Co. (NYSE: LLY) and Otsuka Pharmaceuticals of Tokyo, Japan.

SOURCE

Schrodinger is an AI-Powered Drug Discovery Developer to Watch

https://www.nasdaq.com/articles/schrodinger-is-an-ai-powered-drug-discovery-developer-to-watch

 

Schrodinger’s Pipelines include (the company’s pipeline chart tracks each program from Discovery through Phase 1):

  • SGR-1505 (MALT1) – Hematologic Malignancies
  • SGR-2921 (CDC7) – AML/MDS
  • SGR-3515 (Wee1/Myt1) – Solid Tumors
  • SOS1 – Oncology
  • PRMT5-MTA – Oncology
  • EGFRC797S – Oncology
  • NLRP3 – Immunology
  • LRRK2 – Neurology
  • Undisclosed Programs – Multiple Areas

SOURCE

https://www.schrodinger.com/pipeline/#Proprietary-Pipeline

 

In high-tech labs, workers are generating data to train A.I. algorithms to design better medicine, faster. But the transformation is just getting underway.

Terray Therapeutics campus in Monrovia, Calif., June 17, 2024

https://www.nytimes.com/2024/06/17/business/ai-drugs-development-terray.html?smid=nytcore-ios-share&referringSource=articleShare

 

Five AI drug discovery companies you should know about

  1. Atomwise 
  2. Cradle 
  3. Exscientia 
  4. Iktos 
  5. Insilico Medicine 

According to Grand View Research, the global AI in drug discovery market size was valued at $1.1 billion in 2022, and is expected to expand at a compound annual growth rate (CAGR) of 29.6% from 2023 to 2030. The report states that the growing demand for the discovery and development of novel drug therapies and increasing manufacturing capacities of the life science industry are driving the demand for AI-empowered solutions in the drug discovery processes. 

As this report suggests, AI for drug discovery is clearly a growing field within the biopharma industry. Inevitably, as it grows even larger, we will see more companies come to the forefront of the field, hoping to change the face of drug discovery – and also the biopharma industry as a whole – so that the entire drug development process can become faster, more consistent, more accurate, and more scalable.

SOURCE

https://www.labiotech.eu/best-biotech/ai-drug-discovery-companies/#:~:text=to%20%2433%20million.-,Exscientia,manner%20using%20its%20AI%20technology.

 

At LPBI Group, of note is our journal, PharmaceuticalIntelligence.com.

It represents our commitment to AI technologies in the following research categories, shown with the number of articles written in each topic:

  • A total of x articles have been categorized 511 times among the following Artificial Intelligence research categories

Artificial Intelligence – General: 113

An executive’s guide to AI: 9

Artificial Intelligence – Breakthroughs in Theories and Technologies: 94

Artificial Intelligence Applications in Health Care: 81

Artificial Intelligence in CANCER: 29

Artificial Intelligence in Health Care – Tools & Innovations: 55

Artificial Intelligence in Medicine – Application for Diagnosis: 44

Artificial intelligence applications for cardiology: 21

AI-assisted Cardiac MRI: 9

Artificial Intelligence in Psychiatry: 5

Artificial Intelligence in Medicine – Applications in Therapeutics: 50

 

LPBI Group’s involvement in Conceptual Drug Development covers the following two areas:

DrugDiscovery @LPBI Group, 2016 – 2018

Synthetic Biology in Drug Discovery, 2021 – Present

 

Applications of Artificial Intelligence to Medicine

Artificial Intelligence: Genomics & Cancer, 2021 – Present

Medicine with GPT-4 & ChatGPT, 2023 – Present

 

LPBI Group commitment to Medical Text Analysis using Machine Learning

2021-2025 Medical Text Analysis (NLP), 2020 – Present

ChatGPT + Wolfram PlugIn, 2023 – Present

 

LPBI Group Team members published two books on Drug Delivery Technologies

We have covered drug delivery technologies in two of our books. See all the Books:

https://www.amazon.com/s?k=Aviva+Lev-Ari&i=digital-text&rh=n%3A133140011&ref=nb_sb_noss

 

  • Series E, Volume Four 

Medical 3D BioPrinting – The Revolution in Medicine, Technologies for Patient-centered Medicine: From R&D in Biologics to New Medical Devices. 

https://www.amazon.com/dp/B078QVDV2W

and

  • Series C, Volume Two 

Cancer Therapies: Metabolic, Genomics, Interventional, Immunotherapy and Nanotechnology in Therapy Delivery (Series C Book 2).

http://www.amazon.com/dp/B071VQ6YYK

 

The Tables of Contents of these two books can be found in our Spanish-language editions as well:

  • Serie E, Volumen 4

Bioimpresión médica en 3D: la revolución de la medicina: Tecnologías para una medicina centrada en el paciente: de la I+D en agentes biológicos a los nuevos … en el paciente nº 4) (Spanish Edition) 2023

(Spanish Edition) Kindle Edition

https://www.amazon.com/dp/B0BRNVDB1P $56

 

  • Serie C, Volumen 2

Tratamientos contra el cáncer: Metabólicos, genómicos, intervencionistas, inmunoterapia y nanotecnología para la administración de tratamientos (Serie … y la oncología nº 2) 2022

(Spanish Edition) Kindle Edition

http://www.amazon.com/dp/B0BQTM44SM $75


The Health Care Dossier on Clarivate PLC: How Cortellis Is Changing the Life Sciences Industry

Curator: Stephen J. Williams, Ph.D.

Source: https://en.wikipedia.org/wiki/Clarivate 

Clarivate Plc is a British-American publicly traded analytics company that operates a collection of subscription-based services in the areas of bibliometrics and scientometrics; business/market intelligence and competitive profiling for pharmacy and biotech; patents and regulatory compliance; trademark protection; and domain and brand protection. In the academy and the scientific community, Clarivate is known for being the company that calculates the impact factor, using data from its Web of Science product family, which also includes services/applications such as Publons, EndNote, EndNote Click, and ScholarOne. Its other product families are Cortellis, DRG, CPA Global, Derwent, MarkMonitor, CompuMark, and Darts-ip, as well as the various ProQuest products and services.

Clarivate was formed in 2016, following the acquisition of Thomson Reuters’ Intellectual Property and Science business by Onex Corporation and Baring Private Equity Asia. Clarivate has acquired various companies since then, including, notably, ProQuest in 2021.

Further information: Thomson Scientific

Clarivate was formerly the Intellectual Property and Science division of Thomson Reuters. Before 2008, it was known as Thomson Scientific. In 2016, Thomson Reuters struck a $3.55 billion deal in which it spun off the division as an independent company and sold it to private-equity firms Onex Corporation and Baring Private Equity Asia.

In May 2019, Clarivate merged with the Churchill Capital Corp SPAC to obtain a public listing on the New York Stock Exchange (NYSE). It currently trades under the symbol NYSE:CLVT.

Acquisitions

  • June 1, 2017: Publons, a platform for researchers to share recognition for peer review.
  • April 10, 2018: Kopernio, AI-tech startup providing ability to search for full-text versions of selected scientific journal articles.
  • October 30, 2018: TrademarkVision, provider of Artificial Intelligence (AI) trademark research applications.
  • September 9, 2019: SequenceBase, provider of patent sequence information and search technology to the biotech, pharmaceutical and chemical industries.
  • December 2, 2019: Darts-ip, provider of case law data and analytics for intellectual property (IP) professionals.
  • January 17, 2020: Decision Resources Group (DRG), a leading healthcare research and consulting company, providing high-value healthcare industry analysis and insights.
  • June 22, 2020: CustomersFirst Now, in intellectual property (“IP”) software and tech-enabled services.
  • October 1, 2020: CPA Global, intellectual property (“IP”) software and tech-enabled services.
  • December 1, 2021: ProQuest, software, data and analytics provider to academic, research and national institutions. It was acquired for $5.3 billion from Cambridge Information Group in what was described as a “huge deal in the library and information publishing world”. The company said that the operational concept behind the acquisition was integrating ProQuest’s products and applications with Web of Science. Chairman of ProQuest Andy Snyder became the vice chairman of Clarivate. The Scholarly Publishing and Academic Resources Coalition, an advocacy group for open access to scholarship, voiced antitrust concerns. The acquisition had been delayed mid-year due to a Federal Trade Commission antitrust probe.


How Clarivate Has Changed Since 2019

2019 Strategy

From the 2019 Management Discussion in the Annual Report:

We are a leading global information services and analytics company serving the scientific research, intellectual property and life sciences end-markets. We provide structured information and analytics to facilitate the discovery, protection and commercialization of scientific research, innovations and brands. Our product portfolio includes well-established market-leading brands such as Web of Science, Derwent Innovation, Life Sciences, CompuMark and MarkMonitor (which they later divested). We believe that the strong value proposition of our content, user interfaces, visualization and analytical tools, combined with the integration of our products and services into customers’ daily workflows, leads to our substantial customer loyalty as evidenced by their willingness to renew subscriptions with us.

Our structure, enabling a sharp focus on cross-selling opportunities within markets, comprises two product groups:

  • Science Group: consists of Web of Science and Life Science Product Lines
  • Intellectual Property Group: consists of Derwent, CompuMark and MarkMonitor

Corporations, government agencies, universities, and law firms depend on our high-value curated content, analytics and services. Unstructured data has grown exponentially over the last decade. The trend has resulted in a critical need for unstructured data to be meaningfully filtered, analyzed and curated into relevant information that facilitates key operational and strategic decision making. Our highly curated, proprietary information, created through our sourcing, aggregation, verification, translation, and categorization (ONTOLOGY) of data, has resulted in our solutions being embedded in our customers’ workflow and decision-making processes.


Figure.  Overview of Clarivate PLC five year strategy in 2019. Note that in 2019 the Science Group accounted for 56.2% of revenue! This was driven by their product Cortellis!

Note also that nowhere in the M&A discussion in the years before 2023 was anything mentioned concerning AI or Large Language Models.

The Clarivate of Today:  Built for Life Sciences with Cortellis

Clarivate PLC has integrated multiple platforms into its Cortellis offering, which integrates AI and LLMs into structured knowledge bases (see more at https://clarivate.com/products/cortellis-family/):

“Life sciences organizations are tasked, now more than ever, to discover and develop treatments that challenge the status quo, increase ROI, and improve patient lives. However, it’s become increasingly difficult to find, integrate and analyze the key data your teams need to make critical decisions and get your products to patients faster.

The Cortellis solutions help research and development, portfolio strategy and business development, and regulatory and compliance professionals gather and assess the information you need to discover innovative drugs, differentiate your treatments, and increase chances of successful regulatory approval.

Some of Cortellis solutions include:

  1. Cortellis Competitive Intelligence: maximize ROI and improve patient outcomes
  2. Cortellis Deals Intelligence: Portfolio Strategy and Business Development (find best deal)
  3. Cortellis Clinical Intelligence: Clinical Trial Support and Regulatory
  4. Cortellis Digital Health Intelligence: understand digital health ecosystem
  5. Cortellis Drug Discovery: improve drug development speed and efficiency
  6. MetaBase and MetaCore: integrated omics knowledge bases for drug discovery
  7. Cortellis Regulatory: help with filings
  8. Cortellis HTA: health technology assessment intelligence
  9. CMC Intelligence: new drug marketing
  10. Generics Intelligence
  11. Drug Safety Intelligence: both preclinical safety and post-marketing pharmacovigilance

Watch Videos on Cortellis for Drug Discovery

Watch the video on the Qiagen site to see how Cortellis integrates with the Qiagen omics platform IPA and with Clarivate MetaCore to gain more insights into genomic and proteomic data:

https://digitalinsights.qiagen.com/products-overview/discovery-insights-portfolio/analysis-and-visualization/qiagen-ipa/?cmpid=QDI_GA_Comp&gad_source=2&gclid=EAIaIQobChMIwu6HtvHGhQMVnZ9aBR1iCgHTEAEYASAAEgJiWPD_BwE

From the Qiagen website on Ingenuity Pathway Analysis: https://digitalinsights.qiagen.com/products-overview/discovery-insights-portfolio/analysis-and-visualization/qiagen-ipa/ 

Understand complex ‘omics data to accelerate your research

Discover why QIAGEN Ingenuity Pathway Analysis (IPA) is the leading pathway analysis application among the life science research community and is cited in tens of thousands of articles for the analysis, integration and interpretation of data derived from ‘omics experiments. Such experiments include:

  • RNA-seq
  • Small RNA-seq
  • Metabolomics
  • Proteomics
  • Microarrays including miRNA and SNP
  • Small-scale experiments

With QIAGEN IPA you can predict downstream effects and identify new targets or candidate biomarkers. QIAGEN Ingenuity Pathway Analysis helps you perform insightful data analysis and interpretation to understand your experimental results within the context of various biological systems.

Articles Relevant to Drug Development, Natural Language Processing in Drug Development, and Clarivate on this Open Access Scientific Journal Include:

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

New York Academy of Sciences Symposium: The New Wave of AI in Healthcare 2024. May 1-2, 2024 New York City, NY

Clarivate Analytics – a Powerhouse in IP assets and in Pharmaceuticals Informercials


Live Notes from JP Morgan Healthcare Conference Virtual Endpoints Preview: January 8-9 2024

Reporter: Stephen J. Williams, Ph.D.

Endpoints at #JPM24 | Primed to unlock biopharma’s next dealmaking wave
Endpoints at JP Morgan Healthcare Conference
January 8-9 | 80 Mission St, San Francisco, CA

An oasis has emerged in the biopharma money desert as backers look to replenish capital — still, uncertainty remains on whether it’s a mirage or the dealmaking bump the industry needs. Yet spirits run high as JPM24 marks the triumphant return of inking strategic alliances and peering into the industry crystal ball — while keeping an eye out for some major M&A.

We’re back live from San Francisco for JPM Monday and Tuesday — our calendar of can’t-miss panels and fireside chats will feature prominent biopharma leaders to watch. The Endpoints Hub provides the ultimate coworking space with everything you need — 1:1 and group meeting spots plus guest pass capabilities and more. Join us in-person at the Endpoints Hub or watch online to stay plugged into all the action.

8 JAN
Welcome remarks
8:05 AM – 8:25 AM PST
Pfizer vet Mikael Dolsten has some thoughts on Big Pharma R&D

Endpoints News founding editor John Carroll will sit down with longtime Pfizer CSO Mikael Dolsten to talk about Pfizer’s pipeline, what he’s learned on the job about preclinical research and development and what’s ahead for the pharma giant in drug development and deals.

Mikael Dolsten

Chief Scientific Officer, President, Pfizer Research & Development

Pfizer

Pfizer’s Mikael Dolsten: Pfizer produced a series of AI-generated molecules with new properties. He sees rapid adoption of AI in the area of drug discovery and molecular design.

 
 
8:25 AM – 9:05 AM PST
What pharma wants: The industry’s dealmakers look ahead at 2024

The drug industry’s appetite for new assets hasn’t slowed down. Top business development execs will give their outlook on the year, what they’re looking for and how they see the market.

Glenn Hunzinger

Pharmaceutical & Life Sciences Consulting Solutions Leader

PwC US

Rachna Khosla

SVP, Head of Business Development

Amgen

James Sabry

Global Head of Pharma Partnering

Roche

Devang Bhuva

SVP, Corporate Development

Gilead Sciences, Inc.

Endpoints News

Dealmaking panel

Glenn Hunzinger: if you do not have a GLP-1, you will have a tough time getting a good market price for your company; capital markets are not where they want to be; he sees a tough dealmaking climate like last year. The problem with many biotech companies is that they are coming earlier to venture capital because of greater funding needs, so it is imperative that they articulate the potential of their company in scientific detail.

Rachna Khosla: Make sure your investors are not just CAPITAL PARTNERS but use their expertise and involve them in development issues you may have, especially ones that a young firm will face. The problem is most investments assume what the future looks like (for example, antibody drug conjugates, once a field left for dead, have been rejuvenated because of advances in chemistry).

James Sabry: noted that cardiac and metabolic drugs are now at the focus of many investors, especially with the new anti-obesity drugs on market

Devang Bhuva: Most deals we see start as collaborations or partnerships.  You want to involve an alliance management team early in the deal making process.  This process could take years.

 
9:05 AM – 9:20 AM PST
The IPO: How Apogee Therapeutics went public in the most challenging market in years

Not many biotechs went public in 2023. And of those that did, not many have had a great time of it. Apogee is the exception and our panel will offer a behind-the-scenes look at their decision to enter the market and what life is like as a young public company.

Michael Henderson

CEO

Apogee Therapeutics

Kyle LaHucik

MODERATOR

Senior Reporter

Endpoints News

Michael Henderson: Not many biotech IPO deals happened in 2023. Michael feels this is because too many biotechs focused on building platforms, which was a hard sell in 2023. Not many biotechs had clear milestones, and investors wanted a clear, validated primary target. He said many biotech startups are in a funding crunch, and most need at least $440M on their balance sheet to get to 2026.

9:50 AM – 10:10 AM PST
Top predictions for biotech in 2024

Catalent CEO Alessandro Maselli will be back at the big JPM healthcare confab to talk with Endpoints News founder John Carroll about their top predictions of what’s coming up for the biotech industry in 2024. The stakes couldn’t be higher as the industry grapples with headwinds and new opportunities in a gale of market forces. Two top observers share their thoughts on the year ahead.

Alessandro Maselli

President & CEO

Catalent

10:15 AM – 10:35 AM PST
Innovation at a crossroads: Keys to unlocking the value of science and technology

The industry has long discussed the promise of technology and the acceleration it provides in scientific advancement and across the industry value chain. However, the promise of its impact has yet to fully be realized. This discussion will outline the keys to unleashing this promise and the implications and actions to be taken by the biopharmaceutical companies across the industry.

Ray Pressburger

North America Life Sciences Industry Lead & Global Life Sciences Strategy Lead

Accenture

SPONSORED BY

10:35 AM – 11:05 AM PST
Activism and Investing: In conversation with Elliott Investment Management’s Marc Steinberg

Elliott has been behind many of 2023’s highest-profile healthcare investments, including multiple activist engagements and taking Syneos Health private. What has made large healthcare companies such interesting investment opportunities for firms like Elliott? What’s Elliott’s investing strategy in healthcare? And what should companies expect when an activist calls?

Marc Steinberg

Senior Portfolio Manager

Elliott Investment Management

Andrew Dunn

MODERATOR

Biopharma Correspondent

Endpoints News

11:05 AM – 11:35 AM PST
Creating ROI from AI

AI is predicted to transform the way drugs are made, from discovery to clinical trials to market. But beyond the initial hype and early adoption, where has AI made meaningful contributions to R&D? How does it help drug developers advance science? Endpoints publisher Arsalan Arif is convening a panel of leading experts to discuss the state of AI in the pharmaceutical landscape and the outlook for 2024. How does AI impact the drug pipeline, from the early steps of discovery to reducing trial failure rate?

Thomas Clozel

Co-Founder & CEO

Owkin

Venkat Sethuraman

SVP, Global Biometrics & Data Sciences

Bristol Myers Squibb

Frank O. Nestle

Global Head of Research & Chief Scientific Officer

Sanofi

Matthias Evers

Chief Business Officer

Evotec

Arsalan Arif

MODERATOR

Founder & Publisher

Endpoints News

SPONSORED BY

11:35 AM – 12:00 PM PST
Biopharma’s dealmaker: Behind the scenes with Centerview Partners co-president Eric Tokat

Almost every major biopharma deal in 2023 had Centerview’s name attached to it. And much of the time, Eric Tokat was the banker making those deals happen. Hear his outlook for 2024, how transactions are getting done and what’s placed his firm at the center of so much action.

E. Eric Tokat

Co-President, Investment Banking

Centerview Partners

Centerview Partners' Eric Tokat feels dealmaking will improve in 2024, given the recent flurry of deals at the end of last year and right before the main JPM Healthcare Conference. He says Centerview wants to help the biotechs they invest in on their strategic path. This may translate into buyers being more actively involved (more than startups want); buyers are now in the driver's seat on the timeline of deals and development.

Is the megamerger dead for this year? He says it is very hard to see two major companies merging, but there will be many smaller and mid-size biotech deals, and these deals will be more speculative in nature. The focus for large pharma is top-line growth. Most buyers already have infrastructure, and the value lies in buying an asset and dropping it into their business, so there is now a huge emphasis on due diligence on whether synergies exist.

 
12:00 PM – 12:30 PM PST
Founder, legend, leader: In conversation with Nobel laureate Carolyn Bertozzi

Carolyn Bertozzi’s discoveries around bioorthogonal chemistry won the Nobel Prize in Chemistry in 2022 and are at the heart of new therapies being tested in patients. Join us as we discuss what inspires her and where she sees the next big advances.

Carolyn Bertozzi

Prof. of Chemistry, Stanford University and Baker Family Director of Sarafan ChEM-H

Stanford University

Nicole DeFeudis

MODERATOR

Editor

Endpoints News

Bioorthogonal chemistry: a class of high-yielding chemical reactions that proceed rapidly and selectively in biological environments without side reactions toward endogenous functional groups. This is also a type of ‘click chemistry’ in biological systems, in which only the biomolecule of interest is specifically altered.

Orthogonal: two chemicals not interacting with each other

Dr. Bertozzi noted she has started a new antibody-drug conjugate (ADC) company, which uses bioorthogonal chemistry to design new functional molecules with varying properties.

She noted that hardly any biologists knew anything about glycobiology when she first started; now, however, she feels pharma and academia are working very well with each other.

Bioorthogonal and Click Chemistry
Curated by Prof. Carolyn R. Bertozzi, 2022 winner of the Nobel Prize in Chemistry

Source: https://pubs.acs.org/page/vi/bioorthogonal-click-chemistry

The 2022 Nobel Prize in Chemistry has been awarded jointly to ACS Central Science Editor-in-Chief, Carolyn R. Bertozzi of Stanford University, Morten Meldal of the University of Copenhagen, and K. Barry Sharpless of Scripps Research, for the development of click chemistry and bioorthogonal chemistry.

To celebrate this remarkable achievement, 2022 Nobel Prize winner Professor Carolyn R. Bertozzi has curated this Bioorthogonal and Click Chemistry Virtual Issue, highlighting papers published across ACS journals that have built upon the foundational work in this exciting area of chemistry.

From Mechanism to Mouse: A Tale of Two Bioorthogonal Reactions

Ellen M. Sletten and Carolyn R. Bertozzi* Acc. Chem. Res. 2011, 44, 9, 666-676 August 15, 2011

Abstract

Bioorthogonal reactions are chemical reactions that neither interact with nor interfere with a biological system. The participating functional groups must be inert to biological moieties, must react selectively with each other under biocompatible conditions, and, for in vivo applications, must be nontoxic to cells and organisms. Additionally, it is helpful if one reactive group is small and therefore minimally perturbing of a biomolecule into which it has been introduced either chemically or biosynthetically. Examples from the past decade suggest that a promising strategy for bioorthogonal reaction development begins with an analysis of functional group and reactivity space outside those defined by nature. Issues such as stability of reactants and products (particularly in water), kinetics, and unwanted side reactivity with biofunctionalities must be addressed, ideally guided by detailed mechanistic studies. Finally, the reaction must be tested in a variety of environments, escalating from aqueous media to biomolecule solutions to cultured cells and, for the most optimized transformations, to live organisms.

9 JAN

9:40 AM – 10:10 AM PST

Biotech downturn survival school

Our panelists have seen the worst, and made it through to the other side. Join us for downturn survival school as our panelists talk about what sets apart the ones who make it through tough times.

These panelists think this will be the year for specialist capital to shine, while generalist capital is still sitting on the sidelines.

JJ Kang

CEO

Appia Bio

“2023 was a tough year, while 2020 was a boom year to start a company. We will continue to see these cycles; many of these new CEOs have never seen a biotech downturn yet and may not know how to preserve capital for one.”

“Doing a partnership with Kite Pharmaceuticals early in our startup allowed us to get work done without risking a lot of capital, even if it means equity and asset dilution. That makes sense. However, even if you are small, insist on being an equal partner.”

“There are many investors we talk to who do not want to invest in cell therapy.  Too risky now”

Carl Gordon

Managing Partner

OrbiMed Advisors

There are many macroeconomic factors affecting investment and capital today that will carry on through 2024. Not raising money when you do not need money is a bad philosophy: always be raising capital. This is especially true when you have to rely on hedge funds. Partnerships, however, are sometimes the only way for small biotechs to leverage their strengths.

Joshua Boger

Executive Chair

Alkeus Pharmaceuticals, Inc.

Boger: Expect volatility in 2024. This environment feels very different from past downturns.

Even in downturns there is still a lot of capital; remember that human capital is better and easier to access in a downturn. However, it has become harder to get drug approvals.

The panelists agree that access to capital and funding will be as tricky in 2024 as in 2023. They did suggest that a new funding avenue, private credit, may be a source of capital. This is discussed below:

When thinking about a private alternative investment asset class, the first thing that springs to mind is private equity. But there’s one more asset class with the word private in its name that has recently gained much attention. We’re talking about private credit.

Indeed, this once little-known investment strategy is now growing rapidly in popularity, offering private investors worldwide an exciting opportunity to diversify their portfolio with, in theory, less risky investments that yield significant returns. 

  • Private credit investments refer to investors lending money to companies who then repay the loan at a given interest rate within the predetermined period.
  • The private credit market has grown significantly over the past years, rising from $875 billion in 2020 to $1.4 trillion at the beginning of 2023. 
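The mechanics in the first bullet can be sketched with a toy repayment calculation (the loan terms below are illustrative only, not figures from the report):

```python
def total_repayment(principal: float, annual_rate: float, years: float) -> float:
    """Total owed on a bullet loan with simple interest: the borrower repays
    the principal plus interest accrued at a fixed annual rate over the term."""
    return principal + principal * annual_rate * years

# Hypothetical deal: a lender extends $10M of private credit at 12% for 3 years.
owed = total_repayment(10_000_000, 0.12, 3)
print(f"${owed:,.0f}")  # → $13,600,000
```

Real private credit deals typically use floating rates and amortization schedules; simple interest is used here only to keep the arithmetic visible.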

Please watch the video by Goldman Sachs on private credit.


10:50 AM – 11:20 AM PST

The New Molecule: How breakthrough technologies are actually changing pharma R&D

Join us for a look at how AI, machine learning and generative technologies are actually being applied inside drugmakers’ labs. We’ll explore how new technologies are being used, their implications, how they intersect with regulatory and IP issues and how this fast-changing field is likely to evolve.

Kailash Swarna

Managing Director & Global Life Sciences Clinical Development Lead

Accenture

Artificial intelligence is making an impact on biology in a grand way, in three areas:

  1. Speeding up target validation: we can now get through 300 molecules a day
  2. Prediction, as AlphaFold does, and molecular simulations
  3. Document preparation, especially for regulatory and IND submissions

Pamela Carroll

COO

Isomorphic Labs (formerly of AlphaFold)

We did our first deal, with Novartis, at last year's JPM, when the company was one year old; partnering with them in that initial year was very important for sealing the deal.

They are now looking at neurologic diseases like ALS. She wondered whether ALS is actually multiple diseases and whether we need to stratify patients as we do in oncology trials. Their main competition is the whole tech world, including Amazon, Google, and other machine learning companies; being a tech player in the biotech world means competing not just with other biotechs but with large tech companies as well.

Jorge Conde

General Partner

Andreessen Horowitz

The need for drug discovery is still great; early adopters show AI tools can be used in big pharma. There are many applications of AI in managing care, including back-office applications such as patient triaging. He does not see big AI mergers with pharma companies; these will mainly be partnerships, not M&A deals.

Alicyn Campbell

Chief Scientific Officer

Evinova, a Healthtech Subsidiary of the AstraZeneca Group

There is a need to turn AI toward real-world applications. For example, AI tools were used in clinical trials to identify patient cohorts with pneumonitis. At Evinova they are determining how AI can help show clinical benefit with respect to efficacy and safety.

Joshua Boger at #JPM24 (Brian Benton Photography)

January 12, 2024 09:06 AM EST (updated 10:00 AM)

Vertex founder Joshua Boger on surviving downturns, ‘painful’ partnerships, and the importance of culture: #JPM24

Andrew Dunn

Biopharma Correspondent

Source: https://endpts.com/jpm24-vertex-founder-joshua-boger-on-surviving-downturns-painful-partnerships-and-the-importance-of-culture/

While the JP Morgan Healthcare Conference was full of voices of measured optimism, rooting for the market to bounce back in 2024, one longtime biotech leader warned against setting any firm expectations.

Instead of predicting when the downturn may end, Vertex Pharmaceuticals founder Joshua Boger said he advises biotech leaders to expect — and plan for — volatility. Speaking Tuesday on an Endpoints News panel alongside OrbiMed’s Carl Gordon and Appia Bio CEO JJ Kang, Boger shared lessons learned on surviving downturns, striking pharma deals, and the importance of keeping a company’s culture based on his two decades of founding and leading Vertex as CEO from 1989 to 2009. The 72-year-old is now serving as executive chairman of Alkeus Pharmaceuticals, a startup developing a rare disease drug.

“I never experienced a straight line up,” Boger said. “Everything had its cycles, and it was how you respond to the cycle, not by predicting when the end is going to be, but just by responding to the present situation.”

At Boger’s first appearance at the JP Morgan conference in 1991, he said the conference’s theme was the end of biotech financing. Just a few months later, Regeneron successfully went public, rapidly changing the outlook for the whole field.

“We had no idea we were ever going to take public money,” he said. “When Regeneron did their IPO, we went, ‘Whoa, there’s something happening here,’ and we pivoted quickly.”

Vertex went public later that year. Throughout his 20-year tenure, Boger said no pharma company ever made an acquisition offer for Vertex, which now commands a market value of $110 billion and recently won the first FDA approval for a CRISPR gene editing therapy.

“We had an uber corporate policy to always make ourselves more expensive than anyone would stomach,” Boger said.

However, Vertex did strike a range of partnerships with Big Pharmas, which Boger described as a painful but necessary part of running a biotech startup.

“It’s impossible for a partnership not to slow you down,” he said. “You can and should try as hard as you can not to do that, but just count on it. They’ll slow you down.”

Boger said startups should insist on being equal partners in pharma deals, at least making sure they have a seat at a partner’s development meetings.

“Realize they’re going to be painful, it’s going to be horrible, and you need to do it,” Boger said.

While Vertex suffered through layoffs, stock price plunges, and trial failures, Boger credited a focus on culture as key to its long-term success.

“It’s the most important ingredient for a successful company,” he said. “Technology is acquirable. Culture is not acquirable. There are 10 companies that will fail because of culture for every one that succeeds, and the successful companies in retrospect will almost always have special cultural aspects that kept them through those downtimes.”

JPM24 opens with ADCs the hottest ticket in San Francisco

By Annalee ArmstrongJan 8, 2024 6:30am

Source: https://www.fiercebiotech.com/biotech/jpm24-opens-adcs-hottest-ticket-san-francisco

The overall deal flow in biopharma tapered off in 2023 but the big companies sure know what they want (what they really, really want), according to a new report from J.P. Morgan.

And that’s antibody-drug conjugates, which drove a fourth-quarter spike in licensing deal proceeds and provided a glimmer of hope to an industry battered by outside forces and grim financing prospects.

J.P. Morgan’s annual 2023 Biopharma Licensing and Venture Report arrived on the eve of the firm’s famous conference, which is set to welcome thousands of attendees in San Francisco today—East Coast weather permitting.

2023 was tough, but clinical biotechs still had a lot of opportunities to wheel and deal, according to J.P. Morgan. While licensing deals, venture investments, M&A and IPOs were down overall in the fourth quarter, deal values stayed fairly high thanks to a flurry of late-stage tie ups.

Follow the Fierce team’s coverage of the 2024 J.P. Morgan Healthcare Conference here

Biopharma licensing partnerships accounted for $63 billion in total value during the fourth quarter from 108 deals. Just one deal—Merck’s ADC partnership with Daiichi Sankyo—accounted for $22 billion of that. Another huge one was another ADC bet, with Bristol Myers Squibb signing on to work with SystImmune for a total value of $8.4 billion. If you exclude the Merck deal, the total value of these partnerships is still higher than the previous quarter, which ended with $32.1 billion.

The total number of licensing deals compares to 149 in the same quarter a year earlier, 195 for Q4 2021 and 223 for Q4 2022.

As for venture investments, the year closed out with $17 billion total across 250 rounds, thanks to $3.5 billion earned through 79 rounds in the last quarter. Aiolos Bio snagged the title of largest venture round of the quarter with $245 million, which also proved to be the largest Series A.

There was just one IPO in all of the fourth quarter—Cargo Therapeutics making the plunge for $300 million—and 13 overall for the year. It’s a far cry from the heyday of 2021 and experts are still unsure what 2024 will hold. J.P. Morgan reported $2.5 billion raised from 12 completed biopharma IPOs for the year on Nasdaq and NYSE. Nine out of the 12 companies had clinical programs when they took the leap to the public markets. As of December 13, five of the companies were trading above their IPO price.

As for M&A, December saw a rush of Big Pharmas snapping up companies around Christmas. J.P. Morgan tallied the fourth quarter at $37.6 billion and $128.8 billion across 112 total acquisitions for all of 2023.

AbbVie was the top buyer of the quarter with the two largest acquisitions thanks to the $10 billion outlay for ImmunoGen and $8.7 billion buy of Cerevel Therapeutics.

All of this adds up to 270 total deals in the fourth quarter, lower than the third quarter, which exceeded 300.

J.P. Morgan sees some big potential for smaller biopharmas looking for licensing partners, as Big Pharmas have been handing out larger upfront payments for the deals they really want.

Cancer was once again the most in-demand therapeutic area, reaching a new height of $86.1 billion in 2023, followed by $21.1 billion for neurological disorders.

For More Articles on Real Time Conference Coverage in this Open Access Scientific Journal see:

Part One: The Process of Real Time Coverage using Social Media

Part Two: List of BioTech Conferences 2013 to Present

https://worldmedicalinnovation.org/

https://pharmaceuticalintelligence.com/2022/05/01/2022-world-medical-innovation-forum-gene-cell-therapy-may-2-4-2022-boston-in-person/

https://event.technologyreview.com/emtech-digital-2022/agenda-overview

Read Full Post »

The Continued Impact and Possibilities of AI in Medical and Pharmaceutical Industry Practices

Reporter: Adam P. Tubman, MSc Biotechnology, Research Associate 3, Computer Graphics and AI in Drug Discovery

 

Researchers have discovered many ways to incorporate AI into healthcare, both in medical practice and in pharmaceutical drug development. For example, when a doctor provides an inaccurate diagnosis because of an incomplete or inaccurate medical record, AI offers a solution with the potential to rapidly and correctly account for human error, predicting the correct diagnosis from patterns identified in other patients' medical histories and disease diagnoses. In the pharmaceutical industry, companies are changing and expanding their approaches to drug discovery and development given the possibilities AI offers. One such company, Reverie Labs of Cambridge, MA, applies machine learning and computational chemistry to discover possible new compounds for use in cancer treatments.

Today, AI has had many other applications in medicine, including managing healthcare data and performing robotic surgery, both of which transform the in-person patient and doctor experience. AI has even been used to change in-person cancer patient experiences. For example, Freenome, a company in San Francisco, CA, uses AI in the screenings, blood tests, and diagnostic tests performed when a patient is first tested for cancer. The hope is that this technology will aid in speeding up cancer diagnoses and lead to new treatment developments.

The future will continue to bring many possibilities for AI, provided its technologies maintain an acceptable level of accuracy and remain beneficial. If research continues to focus on diagnosing diseases faster despite the human errors that come with inaccurate or incomplete medical records, AI could improve the patient experience: quicker diagnosis and treatment, with less time spent treating the wrong underlying condition or not knowing what condition to treat. If this technology proves successful in practice, not just in theory, it could be beneficially applied to diagnoses and treatment plans across the world.

However, the pace of AI's evolution depends on how much human effort goes into its development, so the world won't see its full benefits until it is built and actively applied. Likewise, AI's impact on medical and pharmaceutical practice won't be known until scientists fully develop and apply the technologies. Many possibilities, including a drastic, across-the-board lowering of drug costs once drugs are more readily discovered and produced, could profoundly benefit patients who currently struggle to afford treatment. Unforeseen advances in medicine and pharmaceuticals driven by AI will, in turn, have unforeseen effects on the global economy and many other life-changing variables worldwide.

For more information on this topic, please check out the article below.

SOURCE

Daley, S. (2018). Artificial Intelligence in healthcare: 39 examples Improving the Future of Medicine. Built In. https://builtin.com/artificial-intelligence/artificial-intelligence-healthcare

Read Full Post »

ChatGPT Chemistry Assistant for Text Mining and the Prediction of MOF Synthesis

Reporter: Aviva Lev-Ari, PhD, RN

ChatGPT Chemistry Assistant for Text Mining and the Prediction of MOF Synthesis

  • Zhiling Zheng
  • Oufan Zhang
  • Christian Borgs
  • Jennifer T. Chayes
  • Omar M. Yaghi*

Cite this: J. Am. Chem. Soc. 2023, 145, 32, 18048–18062 Publication Date:August 7, 2023 https://doi.org/10.1021/jacs.3c05819 Copyright © 2022 American Chemical Society. This publication is licensed under these Terms of Use.https://pubs.acs.org/doi/10.1021/jacs.3c05819

 

Abstract

We use prompt engineering to guide ChatGPT in the automation of text mining of metal–organic framework (MOF) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT’s tendency to hallucinate information, an issue that previously made the use of large language models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different trade-offs among labor, speed, and accuracy. We deploy this system to extract 26 257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90–99%. Furthermore, with the data set built by text mining, we constructed a machine-learning model with over 87% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions about chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry subdisciplines.

This publication is licensed for personal use by The American Chemical Society.

Concluding Remarks


Our research has successfully demonstrated the potential of LLMs, particularly GPT models, in the domain of chemistry research. We presented a ChatGPT Chemistry Assistant that includes three different but connected approaches to text mining with ChemPrompt Engineering: Process 3 is capable of conducting search and filtration, Processes 2 and 3 classify synthesis paragraphs, and Processes 1, 2, and 3 are capable of summarizing synthesis conditions into structured data sets. Enhanced by three fundamental principles of prompt engineering specific to chemistry text processing, coupled with the interactive prompt refinement strategy, the ChatGPT-based assistant has substantially advanced the extraction and analysis of the MOF synthesis literature, with precision, recall, and F1 scores exceeding 90%.
We elucidated two crucial insights from the data set of synthesis conditions. First, the data can be employed to construct predictive models for reaction outcomes, which shed light on the key experimental factors that influence the MOF crystallization process. Second, it is possible to create an MOF chatbot that can provide accurate answers based on text mining, thereby improving access to the synthesis data set and achieving a data-to-dialogue transition. This investigation illustrates the potential for rapid advancement inherent in ChatGPT and other LLMs as a proof of concept.
On a fundamental level, this study provides guidance on interacting with LLMs to serve as AI assistants for chemists, accelerating research with minimal prerequisite coding expertise and thus bridging the gap between chemistry and the realms of computational and data science more effectively. Through interaction and chatting, the code and design of experiments can be modified, democratizing data mining and enhancing the landscape of scientific research. Our work sets a foundation for further exploration and application of LLMs across various scientific domains, paving the way for a new era of AI-assisted chemistry research.
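The precision, recall, and F1 scores cited above follow the standard classification-metric definitions; a minimal sketch with made-up counts (not the paper's data):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical extraction run: 95 fields extracted correctly, 5 spurious, 5 missed.
p, r, f1 = precision_recall_f1(tp=95, fp=5, fn=5)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")  # → precision=0.95 recall=0.95 F1=0.95
```

For text mining, a "true positive" here would be a synthesis field extracted with the correct value, a "false positive" a spurious or wrong extraction, and a "false negative" a field present in the paragraph but missed.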

SOURCE

https://pubs.acs.org/doi/10.1021/jacs.3c05819

ChatGPT accelerates chemistry discovery for climate response, study shows

Yaghi said. “AI has transformed many other sectors of our society – commerce, banking, travel. Why not transform science?”
These datasets on the synthesis of the highly porous materials known as metal-organic frameworks (MOFs) will inform predictive models. The models will accelerate chemists’ ability to create or optimize MOFs, including ones that alleviate water scarcity and capture air pollution. All chemists – not just coders – can build these databases due to the use of AI-fueled chatbots.

To help them teach ChatGPT to generate accurate and relevant information, they modified an approach called “prompt engineering” into “ChemPrompt Engineering.” They developed prompts that avoided asking ChatGPT for made-up or misleading content; laid out detailed directions that explained to the chatbot the context and format for the response; and provided the large language model with a template or instructions for extracting data.
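The three prompt principles described above (refuse made-up content, state the task context, and supply an output template) can be sketched as a prompt constructor. The `build_extraction_prompt` helper and the field names are hypothetical illustrations, not the paper's actual prompts, and the LLM call itself is omitted:

```python
def build_extraction_prompt(paragraph: str, fields: list[str]) -> str:
    """Assemble a text-mining prompt that (1) forbids fabricated answers,
    (2) states the task context, and (3) gives an explicit output template."""
    context = "You are extracting MOF synthesis conditions from a paper excerpt.\n"
    no_hallucination = "If a field is not stated in the text, answer 'N/A'; do not guess.\n"
    template = "Return one line per field, formatted exactly as 'field: value'.\n"
    field_list = "Fields: " + ", ".join(fields) + "\n"
    return context + no_hallucination + template + field_list + "Text:\n" + paragraph

prompt = build_extraction_prompt(
    "The mixture was heated at 120 C for 24 h in DMF.",
    ["metal source", "temperature", "time", "solvent"],
)
print(prompt)
```

In a workflow like the one described, a prompt of this shape would be sent to the model and the returned 'field: value' lines parsed back into rows of a structured table; that unified format is what makes the downstream data set and crystallization model possible.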

The chatbot’s literature review – and the experts’ approach – was successful. ChatGPT finished in a fraction of an hour what would have taken a student years to complete, said Borgs, BIDMaP’s director. It mined the synthetic conditions of MOFs with 95% accuracy, Yaghi said.

Read Full Post »

ChatGPT Searches and Advent of Meta Threads: What it Means for Social Media and Science 3.0

Curator: Stephen J. Williams, PhD

The following explains how popular ChatGPT has become and how the latest social media platforms, including Meta’s (Facebook’s) new platform Threads, are becoming as popular as or more popular than older social platforms. In fact, within a week of its launch (Threads launched 7/07/2023), Threads is threatening Twitter for dominance in that market.

The following is taken from an email from Charlie Downing Jones from journoreasearch.org and  https://www.digital-adoption.com/ :

U.S. searches for ChatGPT overtake TikTok, Pinterest, and Zoom

  • Google searches for ChatGPT have overtaken TikTok in the U.S., jumping to 7.1 million monthly searches compared to 5.1 million
  • The term ‘ChatGPT’ is now one of the top 100 search terms in the U.S., ranking 92nd, according to Ahrefs data
  • ChatGPT is now searched more than most major social networks, including LinkedIn, Pinterest, TikTok, and Reddit

Analysis of Google search data reveals that online searches for ChatGPT, the popular AI chatbot, have overtaken most popular social networks in the U.S. This comes when search interest in artificial intelligence is at its highest point in history.

 

The findings by Digital-adoption.com reveal that US-based searches for ChatGPT have exploded and overtaken popular social networks, such as LinkedIn, Pinterest, and TikTok, some by millions.

 

Ranking  Keyword    US Search Volume (Monthly)
1        Facebook   70,920,000
2        YouTube    69,260,000
3        Twitter    15,440,000
4        Instagram  12,240,000
5        ChatGPT    7,130,000
6        LinkedIn   6,990,000
7        Pinterest  5,790,000
8        TikTok     5,130,000
9        Reddit     4,060,000
10       Snapchat   1,280,000
11       WhatsApp   936,000
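The ranking above can be reproduced by sorting the monthly volumes (figures as given in the table):

```python
# Monthly U.S. search volumes from the table above.
volumes = {
    "Facebook": 70_920_000, "YouTube": 69_260_000, "Twitter": 15_440_000,
    "Instagram": 12_240_000, "ChatGPT": 7_130_000, "LinkedIn": 6_990_000,
    "Pinterest": 5_790_000, "TikTok": 5_130_000, "Reddit": 4_060_000,
    "Snapchat": 1_280_000, "WhatsApp": 936_000,
}
ranking = sorted(volumes, key=volumes.get, reverse=True)
print(ranking.index("ChatGPT") + 1)  # → 5
# ChatGPT out-searches LinkedIn, Pinterest, and TikTok, as the article notes.
assert all(volumes["ChatGPT"] > volumes[k] for k in ("LinkedIn", "Pinterest", "TikTok"))
```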

 

Since its release in November 2022, searches for ChatGPT have overtaken those of most major social networks. According to the latest June search figures by search tool Ahrefs, searches for ‘ChatGPT’ and ‘Chat GPT’ are made 7,130,000 times monthly in the U.S.

That’s more than the monthly search volume for most of the top ten social networks, including LinkedIn, Pinterest, and TikTok. TikTok is one of the largest growing social media apps, with 100 million users in just a year.

The term ‘ChatGPT’ is now one of the top 100 search terms in the U.S., ranking 92nd, according to Ahrefs data

 

Searches for ChatGPT have eclipsed other major networks in the U.S., such as Reddit, by millions.

Everyday search terms such as ‘maps’ and ‘flights’ now see their search volumes pale compared to the rising popularity of ChatGPT: ‘Maps’ is currently searched 440,000 fewer times than the chatbot each month, and ‘Flights’ is now Googled 2.2 million fewer times.

2023 has been a breakout year for AI, as searches for the term have more than doubled from 17 million in January 2023 to 42 million in May. In comparison, there were 7.9 million searches in January 2022. There has been an 825% increase in searches for ‘AI’ in the US compared to the average over the last five years.
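The growth claims above can be sanity-checked with a simple percent-change calculation (the 825% figure cannot be reproduced here, since Ahrefs’ five-year baseline is not given):

```python
def percent_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Searches for 'AI': 17 million in January 2023 to 42 million in May 2023.
jan_to_may = percent_change(17_000_000, 42_000_000)
print(f"{jan_to_may:.0f}%")  # 147% — i.e., searches more than doubled

# Versus January 2022 (7.9 million), May 2023 is more than a fivefold rise.
year_over = percent_change(7_900_000, 42_000_000)
print(f"{year_over:.0f}%")  # 432%
```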

The uptick correlates with the public releases of accessible AI chatbots: ChatGPT on November 30, 2022, followed by Bing AI and Google Bard in the first half of 2023.

According to TikTok data, interest in artificial intelligence has soared tenfold since 2020, and virtual reality has more than tripled.

AI has been a big topic of conversation this year as accessible AI chatbots and new technologies were released and sparked rapid adoption, prompting tech leaders like Elon Musk to call for AI regulation.

A spokesperson from Digital-adoption.com commented on the findings: “There has been a massive surge in AI interest this year. Apple’s announcement of Vision Pro has captured audiences at the right time, when new AI technologies, like ChatGPT, have become accessible to almost anyone. The rapid adoption of ChatGPT is surprising, with it becoming one of the fastest-growing tools available”.

All data was gathered from Ahrefs and Google Trends.

If using this story, please include a link to https://www.digital-adoption.com/ who conducted this study. A linked credit allows us to keep supplying you with content that you may find useful in the future.

 


Why LPBI Needs to Consider the New Meta Threads Platform

From Barrons

Threads Hits 100 Million Users Faster Than ChatGPT. Now It Needs Them to Stay.

 

By

Adam Clark

Updated July 10, 2023 9:00 am ET / Original July 10, 2023 7:44 am ET

The launch of Meta Platforms’ Threads looks to have outpaced even the viral success of ChatGPT in terms of signing up users. The next challenge will be keeping them around.

Since its launch on Thursday, July 6, 2023, Meta’s new Threads platform has been signing up new users at an astonishing rate. On rollout day 5 million signed up, then 30 million by the next morning, and as of today (7/10/2023) Threads has over 100 million signups. Compare that to Twitter’s 436 million users, who on average are tweeting 25% less than a few years ago, and it is easy to see why many social media pundits are calling Threads the new Twitter-killer app.

 

Here are a few notes from the New York Times podcast The Daily

The Daily


Will Threads Kill Twitter?

33 min

Last week, Meta, the parent company of Facebook and Instagram, released Threads, a social media platform to compete with Twitter. In just 16 hours, Threads was downloaded more than 30 million times.

Mike Isaac, who covers tech companies and Silicon Valley for The Times, explains how Twitter became so vulnerable and discusses the challenges Meta faces to create a less toxic alternative.

Guest: Mike Isaac, a technology correspondent for The New York Times.


Here are a few notes from the podcast:

Mike Isaac lamented that Twitter has become user-unfriendly for a host of reasons. These include:

  • The instant ‘reply guys’ – people who reply but don’t really follow you or your thread
  • Your followers, and those you follow, are not pushed to the top of a thread
  • The auto bots – the automated Twitter bots
  • Spam feeds
  • The changes in service and all the new fees: Twitter’s push to monetize everything, much like the airlines

Elon Musk wanted to transform Twitter, but his history is one of deep cuts: not just trimming the excess, he is known to eliminate whole departments simply because he either doesn’t want to pay or can’t pay. At Twitter he gutted content moderation.

 

Twitter’s ad business is plummeting, so Musk wants to make Twitter a subscription business (the Blue checkmark).

Twitter only earns a couple of million dollars per month from Twitter Blue, but Musk owes billions in interest on the loans taken out to purchase Twitter.

It is known that Musk is not paying rent on some California offices (some suggest he has defaulted on leases), and he has been selling Tesla stock to cover Twitter expenses (the consensus explanation for why TSLA stock has been falling).

Twitter is the largest compendium of natural-language conversations, and Musk wanted to stop bots from scraping Twitter data to do AI and NLP on Twitter threads. This is also a grievance of other companies: that these ‘scrapers’ are not paying enough for Twitter data. However, as Mike asks, why do small Twitter users have to pay, in either fees or cutbacks in service? (The per-day viewing limits Musk imposed are intended to stop these bots from scraping Twitter for data.)
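Mechanically, the per-day viewing cap described above is just per-account rate limiting; a toy sketch (not Twitter’s actual implementation):

```python
from collections import defaultdict

class DailyViewLimiter:
    """Toy per-account daily view cap, like the limits described above."""
    def __init__(self, max_views_per_day: int):
        self.max_views = max_views_per_day
        self.views = defaultdict(int)   # account -> views counted today

    def allow(self, account: str) -> bool:
        if self.views[account] >= self.max_views:
            return False                # cap reached: request blocked
        self.views[account] += 1
        return True

limiter = DailyViewLimiter(max_views_per_day=3)
results = [limiter.allow("scraper-bot") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A real service would also reset the counters daily and distinguish logged-in users from anonymous traffic, but the blocking logic is the same.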

Another problem is that Twitter does not own its own servers, so it pays a lot to Google and AWS for server space. It appears Elon and Twitter are running out of money.

META and THREADS

Zuckerberg has spent billions on infrastructure and built a massive advertising ecosystem, which is part of the rationale behind his push into this space. Zuckerberg actually wanted to buy Twitter a decade ago.

 

Usage and growth: Threads launched on Thursday, July 6, 2023. There were 2 million initial signups, growing to 30 million overnight by the next morning. Today, Monday 7-10-23, there are 100 million, rivaling Twitter’s 436 million accounts. And as Musk keeps canceling Twitter accounts and angering users over fees or usage restrictions, people are looking for a good alternative. Mastodon is too technical and is not seeing the adoption that Meta Threads is. Mike Isaac hopes Threads will not go the way of Google Hangouts or Google+, though Google’s strategy never involved social media the way Facebook’s does.

Signup and issues: Signing up for Threads is easy, but you need to go through Instagram. Some people have concerns about their Instagram feed being merged into their Threads feed, but Mike has talked to people at Meta, who are working to let users keep the feeds separate, mainly because Meta understands that the Instagram and Twitter social cultures are different and users may want to keep Threads more business-like.

Important issues for LPBI: By the end of May 2023, Twitter had decided to end its relationship with the WordPress Jetpack service, through which WordPress posts could automatically be shared to your Twitter account and feed. Twitter is making users like WordPress pay for this API, and WordPress said it would be too expensive, as Twitter is charging a per-usage fee rather than a flat fee. This is a major hindrance, even though the Twitter social-share button is still active on posts.

Initial conversations between META and WordPress have indicated META will keep this API service free for WordPress.

 

So a little background on Meta Threads and signup features from Meta (Facebook) website:

Takeaways

  • Threads is a new app, built by the Instagram team, for sharing text updates and joining public conversations.
  • You log in using your Instagram account and posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length.
  • We’re working to soon make Threads compatible with the open, interoperable social networks that we believe can shape the future of the internet.

Mark Zuckerberg just announced the initial version of Threads, an app built by the Instagram team for sharing with text. Whether you’re a creator or a casual poster, Threads offers a new, separate space for real-time updates and public conversations. We are working toward making  Threads compatible with the open, interoperable social networks that we believe can shape the future of the internet.

Instagram is where billions of people around the world connect over photos and videos. Our vision with Threads is to take what Instagram does best and expand that to text, creating a positive and creative space to express your ideas. Just like on Instagram, with Threads you can follow and connect with friends and creators who share your interests – including the people you follow on Instagram and beyond. And you can use our existing suite of safety and user controls.

Join the Conversation from Instagram

It’s easy to get started with Threads: simply use your Instagram account to log in. Your Instagram username and verification will carry over, with the option to customize your profile specifically for Threads.

Everyone who is under 16 (or under 18 in certain countries) will be defaulted into a private profile when they join Threads. You can choose to follow the same accounts you do on Instagram, and find more people who care about the same things you do. The core accessibility features available on Instagram today, such as screen reader support and AI-generated image descriptions, are also enabled on Threads.

Your feed on Threads includes threads posted by people you follow, and recommended content from new creators you haven’t discovered yet. Posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length. You can easily share a Threads post to your Instagram story, or share your post as a link on any other platform you choose.

Tune Out the Noise

We built Threads with tools to enable positive, productive conversations. You can control who can mention you or reply to you within Threads. Like on Instagram, you can add hidden words to filter out replies to your threads that contain specific words. You can unfollow, block, restrict or report a profile on Threads by tapping the three-dot menu, and any accounts you’ve blocked on Instagram will automatically be blocked on Threads.
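Mechanically, a hidden-words filter of the kind described above is token matching against a user-defined block list; a toy sketch (not Meta’s implementation, which also handles variants, phrases, and emoji):

```python
import re

def filter_replies(replies: list[str], hidden_words: set[str]) -> list[str]:
    """Keep only replies that contain none of the hidden words."""
    hidden = {w.lower() for w in hidden_words}
    visible = []
    for reply in replies:
        tokens = set(re.findall(r"[a-z']+", reply.lower()))
        if not tokens & hidden:          # no overlap with the block list
            visible.append(reply)
    return visible

replies = ["Great thread!", "Buy followers now", "Totally agree"]
print(filter_replies(replies, {"followers", "crypto"}))
# ['Great thread!', 'Totally agree']
```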

As with all our products, we’re taking safety seriously, and we’ll enforce Instagram’s Community Guidelines on content and interactions in the app. Since 2016 we’ve invested more than $16 billion in building up the teams and technologies needed to protect our users, and we remain focused on advancing our industry-leading integrity efforts and investments to protect our community.

Compatible with Interoperable Networks

Soon, we are planning to make Threads compatible with ActivityPub, the open social networking protocol established by the World Wide Web Consortium (W3C), the body responsible for the open standards that power the modern web. This would make Threads interoperable with other apps that also support the ActivityPub protocol, such as Mastodon and WordPress – allowing new types of connections that are simply not possible on most social apps today. Other platforms including Tumblr have shared plans to support the ActivityPub protocol in the future.

We’re committed to giving you more control over your audience on Threads – our plan is to work  with ActivityPub to provide you the option to stop using Threads and transfer your content to another service. Our vision is that people using compatible apps will be able to follow and interact with people on Threads without having a Threads account, and vice versa, ushering in a new era of diverse and interconnected networks. If you have a public profile on Threads, this means your posts would be accessible from other apps, allowing you to reach new people with no added effort. If you have a private profile, you’d be able to approve users on Threads who want to follow you and interact with your content, similar to your experience on Instagram.

The benefits of open social networking protocols go well beyond the ways people can follow each other. Developers can build new types of features and user experiences that can easily plug into other open social networks, accelerating the pace of innovation and experimentation. Each compatible app can set its own community standards and content moderation policies, meaning people have the freedom to choose spaces that align with their values. We believe this decentralized approach, similar to the protocols governing email and the web itself, will play an important role in the future of online platforms.
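To make “interoperable via ActivityPub” concrete: a post is just a JSON-LD object in the W3C ActivityStreams vocabulary, which any compatible server can parse. A minimal sketch, with made-up actor and URLs:

```python
import json

# A minimal ActivityStreams "Note", the object type most microblog posts
# (e.g., Mastodon toots) map onto. The actor URL is a made-up placeholder.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.social/users/alice",
    "content": "Hello from an interoperable network!",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

# Wrapping the Note in a "Create" activity is how a server announces it
# to followers on other ActivityPub servers.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": note["attributedTo"],
    "object": note,
}

print(json.dumps(activity, indent=2))
```

Because the payload is plain JSON against a shared vocabulary, a Mastodon server, a WordPress plugin, or (in principle) Threads can all consume the same object.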

Threads is Meta’s first app envisioned to be compatible with an open social networking protocol – we hope that by joining this fast-growing ecosystem of interoperable services, Threads will help people find their community, no matter what app they use.

What’s Next

We’re rolling out Threads today in more than 100 countries for iOS and Android, and people in those countries can download the app from the Apple App Store and Google Play Store.

In addition to working toward making Threads compatible with the ActivityPub protocol, soon we’ll be adding a number of new features to help you continue to discover threads and creators you’re interested in, including improved recommendations in feed and a more robust search function that makes it easier to follow topics and trends in real time.

 

Should Science Migrate over to Threads Instead of Twitter?

I have written multiple time of the impact of social media, Science and Web 2.0 and the new Science and Web 3.0 including

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Science Has A Systemic Problem, Not an Innovation Problem

 

As of this writing, it does not appear crucial for scientific institutions to migrate over to Threads yet, although the impetus is certainly there. Many of the signups have of course come through Instagram (currently the only way to sign up), and a search of @Threads does not show that large scientific organizations have signed up so far.

 

A search for NIH, NCBI, AACR, and the Personalized Medicine Coalition (PMC), the big organization on personalized medicine, appears to return nothing yet. Pfizer and most of big pharma are on @Threads now, but that is because they maintain a marketing presence on Instagram. How necessary @Threads will be for communicating science over a Science 3.0 platform remains to be seen. In addition, how will @Threads be used for real-time scientific conference coverage? Will Meta be able to integrate it with virtual reality?

Other articles of Note on this Open Access Scientific Journal Include:

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Science Has A Systemic Problem, Not an Innovation Problem

Relevance of Twitter.com forthcoming Payment System for Scientific Content Promotion and Monetization

Is It Time for the Virtual Scientific Conference?: Coronavirus, Travel Restrictions, Conferences Cancelled

Part One: The Process of Real Time Coverage using Social Media

Read Full Post »

OpenAI and ChatGPT face unique legal challenges over CopyRight Laws

Reporter: Stephen J. Williams, PhD

In previous weeks, on this page and on the sister page ChatGPT applied to Cancer & Oncology, a comparison between ChatGPT, OpenAI, and Google’s large-language-model-based search revealed a major difference between the algorithms with respect to citation and author credit. In essence, while Google returns a hyperlink to the information used to form an answer, ChatGPT and OpenAI are agnostic about crediting or citing the sources of information used to generate answers to queries. With ChatGPT, the source data (more specifically, the training set used for the AI algorithm) is never properly cited in the query results.

This, as outlined below, is creating a major problem when it comes to copyright law and intellectual property. Last week, a major lawsuit was filed over improper citation, referencing, and attribution of ownership of intellectual property.

 

As Miles Klee reports in The Rolling Stone

“OpenAI faces allegations of privacy invasion and violating authors’ copyright — but this may be just the tip of the iceberg”

 

The burgeoning AI industry has just crossed another major milestone, with two new class-action lawsuits calling into question whether this technology violates privacy rights, scrapes intellectual property without consent and negatively affects the public at large. Experts believe they’re likely to be the first in a wave of legal challenges to companies working on such products. Both suits were filed on Wednesday and target OpenAI, a research lab consisting of both a nonprofit arm and a corporation, over ChatGPT software, a “large language model” capable of generating human-like responses to text input. One, filed by Clarkson, a public interest law firm, is wide-ranging and invokes the potentially “existential” threat of AI itself. The other, filed by the Joseph Saveri Law Firm and attorney Matthew Butterick, is focused on two established authors, Paul Tremblay and Mona Awad, who claim that their books were among those ChatGPT was trained on — a violation of copyright, according to the complaint. (Saveri and Butterick are separately pursuing legal action against OpenAI, GitHub and Microsoft over GitHub Copilot, an AI-based coding product that they argue “appears to profit from the work of open-source programmers by violating the conditions of their open-source licenses.”)

Saveri and Butterick’s latest suit goes after OpenAI for direct copyright infringement as well as violations of the Digital Millennium Copyright Act (DMCA). Tremblay (who wrote the novel The Cabin at the End of the World) and Awad (author of 13 Ways of Looking at a Fat Girl and Bunny) are the representatives of a proposed class of plaintiffs who would seek damages as well as injunctive relief in the form of changes to ChatGPT. The filing includes ChatGPT’s detailed responses to user questions about the plots of Tremblay’s and Awad’s books — evidence, the attorneys argue, that OpenAI is unduly profiting off of infringed materials, which were scraped by the chat bot. While the suits venture into uncharted legal territory, they were more or less inevitable, according to those who research AI tech and privacy or practice law around those issues.

 

“[AI companies] should have and likely did expect these types of challenges,” says Ben Winters, senior counsel at the Electronic Privacy Information Center and head of the organization’s AI and Human Rights Project. He points out that OpenAI CEO Sam Altman mentioned a few prior “frivolous” suits against the company during his congressional testimony on artificial intelligence in May. “Whenever you create a tool that implicates so much personal data and can be used so widely for such harmful and otherwise personal purposes, I would be shocked there is not anticipated legal fire,” Winters says. “Particularly since they allow this sort of unfettered access for third parties to integrate their systems, they end up getting more personal information and more live information that is less publicly available, like keystrokes and browser activity, in ways the consumer could not at all anticipate.”

Source: https://www.rollingstone.com/culture/culture-features/chatgtp-openai-lawsuits-copyright-artificial-intelligence-1234780855/

At the heart of the matter is ChatGPT and OpenAI’s use of ‘shadow libraries’ for AI training datasets, which the lawsuit claims is illegal.

 

An article by Anne Bucher in topclassactions.com explains this:

Source: https://topclassactions.com/lawsuit-settlements/class-action-news/class-action-lawsuit-claims-chatgpt-uses-copyrighted-books-without-authors-consent/

They say that OpenAI defendants “profit richly” from the use of their copyrighted materials and yet the authors never consented to the use of their copyrighted materials without credit or compensation.

ChatGPT lawsuit says OpenAI has previously utilized illegal ‘shadow libraries’ for AI training datasets

Although many types of material are used to train large language models, “books offer the best examples of high-quality longform writing,” according to the ChatGPT lawsuit.

OpenAI has previously utilized books for its AI training datasets, including unpublished novels (the majority of which were under copyright) available on a website that provides the materials for free. The plaintiffs suggest that OpenAI may have utilized copyrighted materials from “flagrantly illegal shadow libraries.”

Tremblay and Awad note that OpenAI’s March 2023 paper introducing GPT-4 failed to include any information about the training dataset. However, they say that ChatGPT was able to generate highly accurate summaries of their books when prompted, suggesting that their copyrighted material was used in the training dataset without their consent.

They filed the ChatGPT class action lawsuit on behalf of themselves and a proposed class of U.S. residents and entities that own a U.S. copyright for any work used as training data for the OpenAI language models during the class period.

Earlier this year, a tech policy group urged federal regulators to block OpenAI’s GPT-4 AI product because it does not meet federal standards.

 

What is the general consensus among legal experts on generative AI and copyright?

 

From Bloomberg Law: https://www.bloomberglaw.com/external/document/XDDQ1PNK000000/copyrights-professional-perspective-copyright-chaos-legal-implic

Copyright Chaos: Legal Implications of Generative AI

Contributed by Shawn Helms and Jason Krieser, McDermott Will & Emery

Copyright Law Implications – The Ins and Outs

Given the hype around ChatGPT and the speculation that it could be widely used, it is important to understand the legal implications of the technology. First, do copyright owners of the text used to train ChatGPT have a copyright infringement claim against OpenAI? Second, can the output of ChatGPT be protected by copyright and, if so, who owns that copyright?

To answer these questions, we need to understand the application of US copyright law.

Copyright Law Basics

Based on rights in Article I, Section 8 of the Constitution, Congress passed the first copyright law in 1790. It has been amended several times. Today, US copyright law is governed by the Copyright Act of 1976. This law grants authors of original works exclusive rights to reproduce, distribute, and display their work. Copyright protection applies from the moment of creation, and, for most works, the copyright term is the life of the author plus 70 years after the author’s death. Under copyright law, the copyright holder has the exclusive right to make copies of the work, distribute it, display it publicly, and create derivative works based on it. Others who want to use the work must obtain permission from the copyright holder or use one of the exceptions to copyright law, such as fair use.
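The term rule just described is simple arithmetic; a toy helper (ignoring the special cases for works made for hire, anonymous works, and pre-1978 works):

```python
def copyright_expiry_year(author_death_year: int, term_after_death: int = 70) -> int:
    """U.S. term for most post-1978 works: life of the author plus 70 years.
    Protection runs through the end of that calendar year."""
    return author_death_year + term_after_death

# Example: works by an author who died in 1950 enter the public domain
# after the end of 2020.
print(copyright_expiry_year(1950))  # 2020
```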

The purpose of copyright law is to incentivize authors to create novel and creative works. It does this by granting authors exclusive rights to control the use of their work, thus allowing them to financially benefit from their works. Copyright law also encourages the dissemination of knowledge by allowing others to use copyrighted works under certain conditions, such as through the fair use doctrine, which allows for limited use of copyrighted material for the purposes of criticism, commentary, news reporting, teaching, scholarship, or research. By protecting the rights of authors and creators while also allowing for the use of copyrighted works for the public benefit, copyright law aims to strike a balance between the interests of authors and the public.

Inputs – Training ChatGPT with Copyrighted Material

ChatGPT was trained on a large training dataset sourced from the internet, including a vast amount of text from websites, articles, books, social media posts, and academic papers. Importantly, ChatGPT was not aware of the source of the data; it only saw the text and learned the patterns and relationships between the words, phrases, and sentences. The vast majority of the text used to train ChatGPT was certainly subject to copyright protection—except for text that is in the public domain, like facts or discoveries, or works whose term of copyright protection has ended.

Did the ChatGPT training process violate the copyright of the original authors? To answer that question, we must determine: (1) whether, during the training process, OpenAI copied the underlying works, because copyright infringement requires an act of copying works that are protected by copyright; (2) whether the output of ChatGPT is a “derivative work” under copyright law; and (3) whether such copying or production of a derivative work falls under the “fair use” exception of copyright law.

Copying During Training

We are not aware of publicly available information regarding whether, during the training process, the training text for ChatGPT was copied or simply referenced. Therefore, we asked ChatGPT and received this answer:

Prompt: During the ChatGPT training process, was the underlying training text copied into a OpenAI database?

Answer: During the training process, text from various sources is used to train the ChatGPT model. It is likely that the text was copied into an internal database at OpenAI for the purpose of training the model. However, the training text is typically preprocessed to remove any sensitive information such as personal identifiers, copyrighted text, etc. In addition, the text is also used to train the model and not stored in a database. The model uses the text to learn patterns and structure of the language and once the training is done, it doesn’t require the text anymore.

So, it seems that during the training process, text was copied. It is also interesting that ChatGPT stated that the training text was “preprocessed” to remove any copyrighted text. That seems highly unlikely since otherwise nearly all text would have been removed.

Is ChatGPT Output a Derivative Work?

Under US copyright law, the owner of a copyright has the exclusive right “to prepare derivative works based upon the copyrighted work.” A “derivative work” is “a work based upon one or more preexisting works.” ChatGPT is trained on preexisting works and generates output based on that training.

As Daniel Gervais, a professor at Vanderbilt Law School who specializes in intellectual property law, says, the definition of a derivative work under copyright law “could loosely be used as a definition of machine learning when applied to the creation of literary and artistic productions because AI machines can produce literary and artistic content (output) that is almost necessarily ‘based upon’ a dataset consisting of preexisting works.” Under this view, it seems that all ChatGPT output is a derivative work under copyright law.

On a related point, it is worth noting that in producing its output, ChatGPT is not “copying” anything. ChatGPT generates text based on the context of the input and the words and phrase patterns it was trained on. ChatGPT is not “copying” and then changing text.

What About Fair Use?

Let’s assume that the underlying text was copied in some way during the ChatGPT training process. Let’s further assume that outputs from ChatGPT are, at least sometimes, derivative works under copyright law. If that is the case, do copyright owners of the original works have a copyright infringement claim against OpenAI? Not if the copying and the output generation are covered by the doctrine of “fair use.” If a use qualifies as fair use, then actions that would otherwise be prohibited would not be deemed an infringement of copyright.

In determining whether the use made of a work in any particular case is a fair use, the factors include:

  •  The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.
  •  The nature of the copyrighted work.
  •  The amount and substantiality of the portion used in relation to the copyrighted work as a whole.
  •  The effect of the use upon the potential market for or value of the copyrighted work.

In this case, assuming OpenAI copied copyrighted text as part of the ChatGPT training process, such copying was not for a commercial purpose and had no economic impact on the copyright owner. Daniel Gervais says “it is much more likely than not” that training systems on copyrighted data will be covered by fair use.

In determining if a commercial use will be considered “fair use,” the courts will primarily look at the scope and purpose of the use and the economic impact of such use. Does the use in question change the nature of the underlying copyright material in some material way (described as a “transformative” use) and does it economically impact the original copyright holder?

Without a specific example, it is difficult to determine exactly if a resulting output from ChatGPT would be fair use. Because ChatGPT does not copy and has been trained on millions of underlying works, it seems likely most output would be fair use, without using significant portions of any one protected work. In addition, because of the vast corpus of text used to train ChatGPT, it seems unlikely that ChatGPT output will have a negative economic impact on any one copyright holder. But, given the capabilities of ChatGPT, that might not always be the case.

Imagine if you asked ChatGPT to “Write a long-form, coming of age, story in the style of J.K. Rowling, using the characters from Harry Potter and the Chamber of Secrets.” In that case, it would seem that the argument for fair use would be weak. This story could be sold to the public and could conceivably have a negative economic impact on J.K. Rowling. A person that wants to read a story about Harry Potter might buy this story instead of buying a book by J. K. Rowling.

Finally, it is worth noting that OpenAI is a non-profit entity that is an “AI research and deployment company.” It seems that OpenAI is the type of research company, and ChatGPT is the type of research project, that would have a strong argument for fair use. This practice has been criticized as “AI Data Laundering,” shielding commercial entities from liability by using a non-profit research institution to create the data set and train AI engines that might later be used in commercial applications.

Outputs – Can the Output of ChatGPT be Protected by Copyright

Is the output of ChatGPT protected by copyright law and, if so, who is the owner? As an initial matter, does the ChatGPT textual output fit within the definition of what is covered under copyright law: “original works of authorship fixed in any tangible medium of expression.”

The text generated by ChatGPT is the type of subject matter that, if created by a human, would be covered by copyright. However, most scholars have opined, and the US Copyright Office has ruled that the output of generative AI systems, like ChatGPT, are not protectable under US copyright law because the work must be an original, creative work of a human author.

In 2022, the US Copyright Office, ruling on whether a picture generated completely autonomously by AI could be registered as a valid copyright, stated “[b]ecause copyright law as codified in the 1976 Act requires human authorship, the [AI Generated] Work cannot be registered.” The U.S. Copyright Office has issued several similar statements, informing creators that it will not register copyright for works produced by a machine or computer program. The human authorship requirement of the US Copyright Office is set forth as follows:

The Human Authorship Requirement – The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Trade-Mark Cases, 100 U.S. 82, 94 (1879).

While such policies are not binding on the courts, the stance by the US Copyright Office seems to be in line with the purpose of copyright law flowing from the Constitution: to incentivize humans to produce creative works by giving them a monopoly over their creations for a limited period of time. Machines, of course, need and have no such motivation. In fact, copyright law expressly allows a corporation or other legal entity to be the owner of a copyright under the “work made for hire” doctrine. However, to qualify as a work made for hire, the work must be either work prepared by an employee within the scope of his or her employment, or be prepared by a party who “expressly agrees in a written instrument signed by them that the work shall be considered a work made for hire.” Only humans can be employees and only humans or corporations can enter a legally binding contract—machines cannot.

Other articles of note in this Open Access Scientific Journal on ChatGPT and Open AI Include:

Medicine with GPT-4 & ChatGPT

ChatGPT applied to Cancer & Oncology

ChatGPT applied to Medical Imaging & Radiology

ChatGPT applied to Cardiovascular diseases: Diagnosis and Management

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT
Read Full Post »

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT

Curator: Stephen J. Williams, Ph.D.

Chatbots are being used in multiple interdisciplinary areas in research and medicine, and so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.

“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.”

The field of bioinformatics is a natural fit for incorporating this technology.  The curated information below shows some use examples of this technology in bioinformatics related to cancer research.

The text in this reference source was actually generated by ChatGPT (https://chat.openai.com/chat)

Source: https://omicstutorials.com/using-chatgpt-in-bioinformatics-and-biomedical-research/

How can ChatGPT be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.

For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”

It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.

Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”
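Descriptions of this templated kind can of course also be produced without any AI at all. The sketch below is a purely illustrative, dependency-free Python version; the thresholds, gene names, and wording are invented for this example and are not part of any real pipeline.

```python
# Illustrative only: turn a toy expression table into a natural-language
# summary of the kind ChatGPT might be asked to produce. Thresholds and
# gene names are assumptions made up for this sketch.

def describe_expression(gene, levels, high=10.0, low=2.0):
    """levels: dict mapping tissue name -> expression value (arbitrary units)."""
    buckets = {"highly expressed": [], "moderately expressed": [], "lowly expressed": []}
    for tissue, value in levels.items():
        if value >= high:
            buckets["highly expressed"].append(tissue)
        elif value <= low:
            buckets["lowly expressed"].append(tissue)
        else:
            buckets["moderately expressed"].append(tissue)
    parts = [f"{label} in {', '.join(tissues)}"
             for label, tissues in buckets.items() if tissues]
    return f"Gene {gene} is " + "; ".join(parts) + "."

print(describe_expression("X", {"liver": 12.5, "kidney": 11.0,
                                "brain": 5.0, "heart": 1.1}))
# → Gene X is highly expressed in liver, kidney; moderately expressed in brain; lowly expressed in heart.
```

The point of a language model over such a template is fluency and flexibility; the template shows what the model must at minimum reproduce faithfully.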

 

Thereby ChatGPT, at its simplest level, could be used to ask general questions like “What is the function of gene product X?” and give a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank. One could even ask “What are potential interactors of Gene X, validated by yeast two-hybrid?” without going to the curated interactome databases or using expensive software like Genie.

Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”

It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.

To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.

Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:

First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.

Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.

Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.

It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.

 

To train a language model like ChatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:

Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.

Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.

Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as ChatGPT on your specific dataset, or you can train a new model from scratch.
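The preprocessing step above (tokenizing the text and building a vocabulary) can be sketched with only the Python standard library. Real pipelines would typically use a subword tokenizer from an NLP library instead; the two “papers” below are invented examples for illustration.

```python
# Minimal sketch of tokenization + vocabulary building, dependency-free.
# The corpus is two made-up sentences, not real paper text.
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphanumeric runs; crude but has no dependencies.
    return re.findall(r"[a-z0-9]+", text.lower())

papers = [
    "BRCA1 variants are associated with increased breast cancer risk.",
    "Expression of BRCA1 is reduced in several tumor types.",
]

counts = Counter(tok for paper in papers for tok in tokenize(paper))

# Most frequent tokens get the smallest ids; index 0 is reserved for padding.
vocab = {"<pad>": 0}
for token, _ in counts.most_common():
    vocab[token] = len(vocab)

print(vocab["brca1"], len(vocab))  # → 1 18
```

Lemmatizing or stemming, as mentioned above, would be applied inside `tokenize` before counting, shrinking the vocabulary further.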

ChatGPT could also be useful for sequence analysis

A few examples of sequence analysis where ChatGPT could be useful include:

  1. Predicting protein structure
  2. Identifying functional regions of a protein
  3. Predicting protein-protein interactions
  4. Identifying protein homologs
  5. Generating protein alignments

All this could be done without having access to UNIX servers or proprietary software, or knowing GCG coding.
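As a flavor of the small utility scripts such a chat assistant can draft, here is a dependency-free example related to alignments: scoring the percent identity of two pre-aligned protein sequences. The sequences are invented, and this is a sketch rather than a replacement for a real alignment tool.

```python
# Toy alignment scoring: '-' marks a gap; sequences are made up.

def percent_identity(aln1, aln2):
    """Percent of aligned (non-gap) columns where the residues match."""
    if len(aln1) != len(aln2):
        raise ValueError("aligned sequences must be the same length")
    columns = [(a, b) for a, b in zip(aln1, aln2) if a != "-" and b != "-"]
    matches = sum(1 for a, b in columns if a == b)
    return 100.0 * matches / len(columns)

print(percent_identity("MKTSAYIAK", "MKTQAYLAK"))  # 7 of 9 columns match
```

A real analysis would first produce the alignment itself (e.g. with an alignment program), which is exactly the kind of step the chatbot cannot replace.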

ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:

Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.

Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.

Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.

Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.

It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.
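The data-extraction idea above can even be prototyped without a language model: a few lines of regular expressions pull candidate gene symbols out of free text. The pattern and sentences below are illustrative only and far cruder than what a trained model can do, but they show the shape of the task.

```python
# Crude rule-based gene-symbol extraction; the sentences are invented and
# the regex is only a rough heuristic (2-6 uppercase letters/digits).
import re
from collections import Counter

SENTENCES = [
    "TP53 mutations co-occur with EGFR amplification in some tumors.",
    "Loss of TP53 function is among the most common events in cancer.",
]

GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")

hits = Counter(m for s in SENTENCES for m in GENE_PATTERN.findall(s))
print(hits.most_common())  # → [('TP53', 2), ('EGFR', 1)]
```

A model-based extractor earns its keep where such patterns fail: ambiguous symbols, synonyms, and mentions that depend on sentence context.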

 

ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:

Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.

Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.

Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.

Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.

Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.

 

PLEASE WATCH VIDEO

 

In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.

He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:

  1. Using ChatGPT as a search engine: He finds more useful and relevant search results than a standard Google or Yahoo search.  This saves time, as one does not have to pore through multiple pages to find information.  However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page.  This feature of ChatGPT is probably why Microsoft invested heavily in OpenAI in order to incorporate ChatGPT into its Bing search engine, as well as Office Suite programs

 

  2. Using ChatGPT to help with coding projects: Bioinformaticians can spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although many UNIX-based tools have already been established for these purposes). One can use ChatGPT to find such code and then assist in debugging it for any flaws

 

  3. Using ChatGPT to document and add code comments: When writing code, it is useful to add comments periodically to assist other users in determining how the code works, as well as how the program flows, including returned variables.
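The G/C-content task mentioned above is exactly the kind of boilerplate a chatbot is typically asked to produce. A minimal dependency-free version might look like this (the sequence is illustrative):

```python
# Classic throwaway bioinformatics script: fraction of G and C bases
# in a DNA sequence, case-insensitive.

def gc_content(seq):
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return gc / len(seq)

print(gc_content("ATGCGCGTTA"))  # → 0.5
```

Asking a chatbot for such a function, with documentation comments included, is the workflow the video describes.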

 

One of the comments was interesting and directed readers to use BioGPT instead of ChatGPT:

 

@tzvi7989

1 month ago (edited)

0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts

So what is BIOGPT?

BioGPT https://github.com/microsoft/BioGPT

 

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Tips:

  • BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
  • BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
  • The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by kamalkraj. The original code can be found here.

 

This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a GitHub project developed by Microsoft Research, released under the MIT license. It is written in Python.

License

BioGPT is MIT-licensed. The license applies to the pre-trained models as well.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Note that while the code is open source, a sign-up may be required to download the model checkpoints!

We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.

Model checkpoints (each available via direct URL download and/or the Hugging Face 🤗 Hub):

  • BioGPT: pre-trained BioGPT model checkpoint (URL, 🤗 Hub)
  • BioGPT-Large: pre-trained BioGPT-Large model checkpoint (URL, 🤗 Hub)
  • BioGPT-QA-PubMedQA-BioGPT: fine-tuned BioGPT for the question answering task on PubMedQA (URL)
  • BioGPT-QA-PubMedQA-BioGPT-Large: fine-tuned BioGPT-Large for the question answering task on PubMedQA (URL)
  • BioGPT-RE-BC5CDR: fine-tuned BioGPT for the relation extraction task on BC5CDR (URL)
  • BioGPT-RE-DDI: fine-tuned BioGPT for the relation extraction task on DDI (URL)
  • BioGPT-RE-DTI: fine-tuned BioGPT for the relation extraction task on KD-DTI (URL)
  • BioGPT-DC-HoC: fine-tuned BioGPT for the document classification task on HoC (URL)

 

As of this date (6/12/2023), there are three papers referencing BioGPT on PubMed (abstracts and links below).

 

BioGPT: generative pre-trained transformer for biomedical text generation and mining.

Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY.Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409.PMID: 36156661

We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. …Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fl …

Here they show the advantages of BioGPT over GPT-2 in examples of generated text:

Table 7. Example of generated text

Input: Bicalutamide
GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.
BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)
GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.
BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine
GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.
BioGPT: Apricitabine is an oral prodrug of 5-aza-2’-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine
GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.
BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen
GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.
BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451
GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31-32-33-34-35-36-37-38-39-40-41-42-43-44-45-46-47-48-49-50…
BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021
GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.
BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).

Note how BioGPT is more descriptive and accurate!

EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information.

Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC.Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451.PMID: 34791012

The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.

Results: We evaluated the classification part on ‘DDIs 2013’ dataset and ‘DTIs’ dataset, achieving the F1 scores of 0.842 and 0.720 respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified with the existing ground truth to confirm the filtered sentences. The generated sentences that are not recorded in DrugBank and DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.

Availability: Source code are publicly available at https://github.com/Layne-Huang/EGFI.

 

GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.

Jin Q, Yang Y, Chen Q, Lu Z.ArXiv. 2023 May 16:arXiv:2304.09667v3. Preprint.PMID: 37131884 Free PMC article.

While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.

PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT

This one entitled

Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP

 

gives a good general description of this new Microsoft Research project and its usefulness in scanning 15 million articles on PubMed while returning ChatGPT-like answers.

 

Please note one of the comments which is VERY IMPORTANT


@rufus9322

2 months ago

bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.

 

Much like Microsoft Azure, it seems BioGPT is meant for developers who have advanced programming skills.  It seems odd, then, to be paying programmers large salaries when one or two Key Opinion Leaders from the medical field might suffice, but I am sure Microsoft will figure this out.

 

ALSO VIEW VIDEO

 

 

This is a talk from Microsoft on BioGPT

 

Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include

Medicine with GPT-4 & ChatGPT
Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

Proposal for New e-Book Architecture: Bi-Lingual eTOCs, English & Spanish with NLP and Deep Learning results of Medical Text Analysis – Phase 1: six volumes

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

 

20 articles in Natural Language Processing

142 articles in BioIT: BioInformatics

111 articles in BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics

 

Read Full Post »

Reporter: Frason Francis Kalapurakal, Research Assistant II

Researchers from MIT and Technion have made a significant contribution to the field of machine learning by developing an adaptive algorithm that addresses the challenge of determining when a machine should follow a teacher’s instructions or explore on its own. The algorithm autonomously decides whether to use imitation learning, which involves mimicking the behavior of a skilled teacher, or reinforcement learning, which relies on trial and error to learn from the environment.

The researchers’ key innovation lies in the algorithm’s adaptability and ability to determine the most effective learning method throughout the training process. To achieve this, they trained two “students” with different learning approaches: one using a combination of reinforcement and imitation learning, and the other relying solely on reinforcement learning. The algorithm continuously compared the performance of these two students, adjusting the emphasis on imitation or reinforcement learning based on which student achieved better results.

The algorithm’s efficacy was tested through simulated training scenarios, such as navigating mazes or reorienting objects with touch sensors. In all cases, the algorithm demonstrated superior performance compared to non-adaptive methods, achieving nearly perfect success rates and significantly outperforming other methods in terms of both accuracy and speed. This adaptability could enhance the training of machines in real-world situations where uncertainty is prevalent, such as robots navigating unfamiliar buildings or performing complex tasks involving object manipulation and locomotion.

Furthermore, the algorithm’s potential applications extend beyond robotics to various domains where imitation or reinforcement learning is employed. For example, large language models like GPT-4 could be used as teachers to train smaller models to excel in specific tasks. The researchers also suggest that analyzing the similarities and differences between machines and humans learning from their respective teachers could provide valuable insights for improving the learning experience.

The MIT and Technion researchers’ algorithm stands out due to its principled approach, efficiency, and versatility across different domains. Unlike existing methods that require brute-force trial-and-error or manual tuning of parameters, their algorithm dynamically adjusts the balance between imitation and trial-and-error learning based on performance comparisons. Its robustness, adaptability, and promising results make it a noteworthy advancement in the field of machine learning.
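To make the adaptive idea concrete, here is a heavily simplified toy sketch, not the authors’ TGRL algorithm: two learners are compared each round, and a weight on imitating the teacher is nudged toward whichever performed better. The environment and reward numbers are synthetic, chosen only to illustrate the teacher helping early and pure trial-and-error overtaking it later.

```python
# Toy sketch of adaptive imitation-vs-reinforcement weighting.
# Everything here (rewards, step size, switch point) is invented.
import random

random.seed(0)

weight = 0.5   # how strongly the combined student imitates the teacher
step = 0.05    # how much the weight shifts after each comparison

def episode_return(follows_teacher, t):
    # Synthetic environment: teacher advice pays off early (t < 50),
    # pure reinforcement learning pays off later (t >= 50).
    good = (follows_teacher and t < 50) or (not follows_teacher and t >= 50)
    return (0.8 if good else 0.5) + random.uniform(-0.1, 0.1)

history = []
for t in range(100):
    combined = (episode_return(True, t) * weight
                + episode_return(False, t) * (1 - weight))
    rl_only = episode_return(False, t)
    # Shift emphasis toward whichever learner did better this round.
    if combined > rl_only:
        weight = min(1.0, weight + step)
    else:
        weight = max(0.0, weight - step)
    history.append(weight)

print(history[-1])  # weight typically drifts down once RL overtakes the teacher
```

The real algorithm is far more principled about how it compares the two students and updates the balance; this sketch only shows the feedback loop described in the text.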

References:

“TGRL: TEACHER GUIDED REINFORCEMENT LEARNING ALGORITHM FOR POMDPS” Reincarnating Reinforcement Learning Workshop at ICLR 2023 https://openreview.net/pdf?id=kTqjkIvjj7

https://arxiv.org/abs/2301.01219

Reinforcement Learning: A Survey by L. P. Kaelbling, M. L. Littman, A. W. Moore https://doi.org/10.48550/arXiv.cs/9605103

Concrete Problems in AI Safety by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané https://arxiv.org/abs/1606.06565

Other related articles published in this Open Access Online Scientific Journal include the following:

92 articles in the Category:

‘Artificial Intelligence – Breakthroughs in Theories and Technologies’ 

https://pharmaceuticalintelligence.com/category/artificial-intelligence-general/artificial-intelligence-breakthroughs-in-theories-and-technologies/

Read Full Post »

« Newer Posts - Older Posts »