
Archive for the ‘Deep Learning’ Category

AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view

Curator: Aviva Lev-Ari, PhD, RN

We had researched the topic of AI Initiatives in Big Pharma in the following article:

  • Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma

https://pharmaceuticalintelligence.com/2025/11/15/authentic-relevance-of-lpbi-groups-portfolio-of-ip-as-proprietary-training-data-corpus-for-ai-initiatives-at-big-pharma/

 

We are publishing a Series of Five articles that demonstrate the Authentic Relevance of Five of the Ten Digital IP Asset Classes in LPBI Group’s Portfolio of IP for AI Initiatives at Big Pharma.

  • For the Ten IP Asset Classes in LPBI Group’s Portfolio, See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

The following Five Digital IP Asset Classes are positioned as Proprietary Training Data and Inference for Foundation Models in Healthcare.
This Corpus comprises a Live Repository of Domain Knowledge Expert-Written Clinical Interpretations of Scientific Findings, codified in the following five Digital IP Asset Classes:
 IP Asset Class I: Journal: PharmaceuticalIntelligence.com
6,250 scientific articles (70% curations and creative expert opinions; 30% scientific reports).
2.4MM views, equivalent to $50MM if each article download were paid at the market rate of $30.

https://pharmaceuticalintelligence.com/vision/pharmaceuticalintelligence-com-journal-projecting-the-annual-rate-of-article-views/

 

 

• IP Asset Class II: 48 e-Books: English Edition & Spanish Edition.
152,000 pages downloaded under pay-per-view. The largest number of downloads for any single e-Publisher (LPBI).
• IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025

• IP Asset Class V: 7,500 Biological Images in our Digital Art Media Gallery, as prior art. The Media Gallery resides in WordPress.com Cloud of LPBI Group’s Web site

• IP Asset Class X: +300 Audio Podcasts: Interviews with Scientific Leaders
BECAUSE THE ABOVE ASSETS ARE DIGITAL ASSETS, they are ready for use as Proprietary TRAINING DATA and INFERENCE for AI Foundation Models in Healthcare.
Expert-curated healthcare corpus mapped to a living ontology, already packaged for immediate model ingestion and suitable for safe pre-training, evals, fine-tuning and inference. If healthcare domain data is on your roadmap, this is a rare, defensible asset.
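As an illustration only, the packaging step described above can be sketched in a few lines: each article record carries its ontology category as metadata and is written out as JSONL, a common ingestion format for model training pipelines. The records, field names, and file name below are hypothetical, not LPBI's actual schema.

```python
import json

# Hypothetical sample records; a real export would carry the Journal's
# own ontology categories and full article text.
articles = [
    {"title": "CRISPR off-target effects", "category": "Genomics",
     "text": "Curation of recent findings on guide-RNA specificity..."},
    {"title": "ATTR-CM early detection", "category": "Cardiovascular",
     "text": "Expert interpretation of transthyretin amyloidosis imaging..."},
]

def package_for_ingestion(records, out_path):
    """Write one JSON object per line (JSONL), keeping the ontology
    category as metadata alongside the article text."""
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps({
                "text": rec["text"],
                "meta": {"title": rec["title"], "ontology": rec["category"]},
            }) + "\n")
    return out_path

path = package_for_ingestion(articles, "corpus.jsonl")
print(sum(1 for _ in open(path, encoding="utf-8")))  # one line per article
```

The ontology tag in `meta` is what would let a downstream loader filter or weight the corpus by research category before pre-training or fine-tuning.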
The article titles for each of the five Digital IP Asset Classes matched to AI Initiatives in Big Pharma, one article per IP Asset Class, are:
  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-data-training-and-inference-by-lpbi-groups-ip-asset-class-i-pharmaceuticalintelligence-com-journal-2-5mm-views-6250-scientific-article/

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-ii-48-e-books-english-edition-spanish-edition-152000/

Article Conclusions by @grok:

Conclusions and Implications
LPBI’s e-books are “ready-to-ingest” for Big Pharma AI, enabling everything from efficiency gains to diagnostic breakthroughs. The absence of prior comprehensive ML attempts highlights untapped value [by Big Pharma. However, we conducted in-house ML on two of the e-Books]; bilingual editions support global/equitable applications. Technical Implications: Powers multilingual small models for precision medicine. Business Implications: Fuels ROI on investments like Pfizer’s $500M push; licensing potential for partnerships. Unique Insight: In AI’s scale race, these assets provide a “rare moat” via a curated human opus: unlike raw data, they embed clinical foresight for transformative inference. The article is promotional yet substantive, with dense Amazon links and calls to resources (e.g., BioMed e-Series page, IP portfolio). It builds on the prior Class I piece by shifting to long-form, creative text for deeper AI personalization.

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-iii-100-e-proceedings-and-50-tweet-collections-of-top-biotech/

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class V: 7,500 Biological Images in LPBI Group’s Digital Art Media Gallery, as prior art

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-v-7500-biological-images-in-lpbi-groups-digital-art/

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class X: +300 Audio Podcasts Library: Interviews with Scientific Leaders

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-x-300-audio-podcasts-library-interviews-with-scientific-leaders/

 

In the series of five articles, as above, we are presenting the key AI Initiatives in Big Pharma as they were generated by our prompt to @Grok on 11/18/2025:

  • What are PFIZER’s AI INITIATIVES?

@Grok Response:

x.com/i/grok/share/0ol5VOJsEYs11baXq4xkzNb0h

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.
Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.
Clinical Trials and Regulatory Efficiency AI:

- Predictive Regulatory Tools

- Decentralized Trials

- Inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker /Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Partnerships and Education

–       Collaborations: IMI Big Picture, a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view

The Left Column was written by @Grok

The Right Column was written by Aviva Lev-Ari, PhD, RN

Left column: AI Initiative at Big Pharma, i.e., Pfizer.

Right column: e-Books, Domain-aware Editorials and Curations.

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.

The electronic Table of Contents of every e-Book is a CONCEPTUAL MASTERPIECE, one unique occurrence in Nature, generated by the Editor or Editors who:

–       Commissioned articles for the e-Book

–       Selected articles from collections of Categories of Research created by domain knowledge experts

–       Reviewed the TOTALITY of the Journal’s Ontology and found new concepts to cover in the e-Book not originally planned

–       Brought the vision of the Editor-in-Chief of the BioMed e-Series, reflecting the BIG PICTURE of Patient care delivery:

–       UC, Berkeley PhD’83

–       Knowledge student and Knowledge worker, 10/1970 to Present

–       Conceptual pioneer of 26 algorithms in Decision Science of Operations Management decision support tools

–       2005 to Present in the Healthcare field.

–       2005-2012: Clinical Nurse Manager in Post-acute SNF settings and Long-term Acute Care Hospital Supervisor; developed a unique view on Diagnosis, Therapeutics and Patient care delivery

–       The BioMed e-Series is the EPITOME of human CREATIVITY in Healthcare, an OPUS MAGNUM created by the collaboration of top Scientists, Physicians and MD/PhDs

–       The 48 e-Books published by LPBI Group make it the only Publisher on Amazon.com with 151,000+ pages downloaded since the 1st e-Book was published and Amazon.com launched Pay-per-View in 2016.

Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.

Two volumes of the BioMed e-Series were subjected to Medical Text Analysis with ML and Natural Language Processing (NLP):

–       Cancer, Volume 1 (In English part of the Spanish Edition, Series C)

–       Genomics, Volume 2 (In English part of the Spanish Edition, Series B)

–       GPT capabilities warrant an attempt to subject to ML every book among the MUTUALLY EXCLUSIVE 48 URLs provided by Amazon.com to LPBI Group, the Publisher.

–       5 URLs for 5 Bundles in The English Edition:

–       Series A, B, C, D, E – English Edition

–       All books in each series: 5 Corpora for domain-aware Small Language Models in English

–       All books in each series: 5 Corpora for domain-aware Small Language Models in Spanish

–       5 URLs for 5 Bundles in The Spanish Edition:

–       Series A, B, C, D, E – Spanish Edition
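The bundle structure above (five series, two language editions) can be sketched as a grouping step that yields one corpus per (series, language) pair. This is a minimal sketch under stated assumptions: the catalog entries and titles below are illustrative placeholders, not the actual contents of the Amazon bundles.

```python
from collections import defaultdict

# Hypothetical catalog entries; the real bundle contents come from the
# 48 URLs held by the Publisher.
catalog = [
    {"series": "A", "lang": "en", "title": "Cardiovascular Diseases, Vol. 1"},
    {"series": "A", "lang": "es", "title": "Enfermedades Cardiovasculares, Vol. 1"},
    {"series": "B", "lang": "en", "title": "Genomics, Vol. 2"},
    {"series": "C", "lang": "en", "title": "Cancer, Vol. 1"},
]

def build_corpora(entries):
    """Group volumes into one corpus per (series, language) pair,
    mirroring the 5 English + 5 Spanish bundles described above."""
    corpora = defaultdict(list)
    for e in entries:
        corpora[(e["series"], e["lang"])].append(e["title"])
    return dict(corpora)

corpora = build_corpora(catalog)
print(corpora[("A", "en")])  # the English Series A corpus
```

Each resulting corpus would then be the training set for one domain-aware small language model in the corresponding language.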

 

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralize Trials

-inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

–       No one had attempted ML on every book; only two books were analyzed by ML.

–       No one had attempted ML on all the Volumes in any of the 5 Series.

–       No one had attempted ML on all the 48 books

–       WHEN that is done, a REVOLUTION in Disease Detection and Diagnostics will be seen for the first time.

 

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Add the content of all the e-Books to the Charlie Platform.
Partnerships and Education

 

Collaborations: IMI Big Picture, a 3M-sample disease database

 

AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

 

Webinars on AI for biomedical data integration

Webinar on AI in Manufacturing

e-Books are the SOURCE for Education

–       Offer the books as Partnership sustenance

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

URLs for the English-language Edition by e-Series:

 

Series A: Cardiovascular Diseases ($515)

https://www.amazon.com/gp/product/B07P981RCS?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series B: Frontiers in Genomics ($200)

https://www.amazon.com/gp/product/B0BSDPG2RX?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series C: Cancer & Oncology ($175)

https://www.amazon.com/gp/product/B0BSDWVB3H?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series D: Immunology ($325)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series E: Patient-Centered Medicine ($274)

https://www.amazon.com/gp/product/B0BSDW2K6C?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

 

 

CONCLUSIONS: The Voice of Dr. Stephen J. Williams PhD

Article Summary by @grok of the Article titled:
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view
Publication Date: November 22, 2025
Author/Curator: Aviva Lev-Ari, PhD, RN
(Posted by 2012pharmaceutical)
@Grok SOURCE:


Overview: This is the second installment in a five-article series on LPBI Group’s digital IP assets for AI in pharma. It focuses on IP Asset Class II—48 e-books (bilingual English/Spanish editions)—as a proprietary, expert-curated textual corpus for training and inference in healthcare AI models. Drawing from a November 18, 2025, Grok prompt on Pfizer’s AI efforts, the article maps e-book content to pharma applications, highlighting untapped ML/NLP potential for small language models. Unlike Class I (journal articles), this emphasizes long-form editorial creativity and bilingual scalability, positioning the assets as a “defensible moat” for Big Pharma’s AI acceleration.
Main Thesis and Key Arguments

  • Core Idea: LPBI’s e-books, with 152,000 pay-per-view downloads (largest for any single Amazon e-publisher since 2016), offer domain-specific, human-curated content (e.g., conceptual tables of contents as “masterpieces” reflecting patient care visions) that outperforms generic data in AI training. This enables precise inference for drug discovery, diagnostics, and efficiency, fostering human-AI synergy.
  • Value Proposition: The BioMed e-Series (5 series: A-E, each bundled as a corpus) totals 48 volumes from collaborations with top scientists/MD/PhDs. Editor-in-Chief’s expertise (UC Berkeley PhD ’83, decision science algorithms, clinical nursing) infuses “big-picture” insights. Valued for multilingual models; only two volumes (Cancer Vol. 1, Genomics Vol. 2) have seen ML analysis—full application could “revolutionize” disease detection.
  • Broader Context: Part of LPBI’s 10 IP classes; five (I, II, III, V, X) are AI-ready via living ontology. Contrasts with open-source data by emphasizing ethical, diverse, creative inputs for foundation models.

AI Initiatives in Big Pharma (Focus on Pfizer)

Reuses the Grok prompt on Pfizer’s AI, with key highlights (verbatim from the article’s table):

Initiative Category: Description

Generative AI Tools: Saves up to 16,000 hours annually in literature searches/data analysis.

Drug Discovery Acceleration: AI, supercomputing, ML to streamline R&D timelines.

Clinical Trials & Regulatory Efficiency: Predictive tools, decentralized trials, inventory management.

Disease Detection & Diagnostics: ATTR-CM Initiative, rare diseases focus.

Generative AI & Operational Tools: Charlie Platform; AWS-powered Scientific Data Cloud; SageMaker/Bedrock for manufacturing; Pfizer Foundation’s AI Learning Lab for equitable care.

Partnerships & Education: IMI Big Picture (3M sample disease database); AIPM Symposium (drug discovery/precision medicine); Webinars on AI for biomedical integration and manufacturing.

Strategic Focus: $500M R&D reinvestment by 2026 for AI productivity; part of $7.7B cost savings; ethical AI with diverse DBs; global advances (e.g., China’s CRISPR AI).

Mapping to LPBI’s Proprietary Data

A core table aligns Pfizer initiatives with e-book alignments, showcasing ingestion for AI enhancement:

Pfizer AI Initiative: e-Books Alignment

Generative AI Tools (16,000 hours saved): Electronic TOCs as conceptual masterpieces; Editor commissions/selections/ontology reviews reflect big-picture patient care (UC Berkeley PhD ’83, decision science pioneer, clinical experience); BioMed e-Series as opus magnum of human creativity; 48 e-books with 152,000+ downloads since 2016.

Drug Discovery Acceleration: ML/NLP applied to Cancer Vol. 1 (Series C) and Genomics Vol. 2 (Series B); extend GPT to all 48 books via 5 English bundles (Series A-E) and 5 Spanish bundles as corpora for domain-aware small language models.

Clinical Trials & Regulatory Efficiency: (No specific mapping provided.)

Disease Detection & Diagnostics (ATTR-CM, rare diseases): Untapped; no prior ML on all books/volumes/series; full analysis promises a revolution in detection/diagnostics.

Generative AI & Operational Tools (Charlie, AWS, etc.): Ingest all book content into the Charlie Platform for centralized ML.

Partnerships & Education (IMI, AIPM, webinars): e-Books as an education source; offer for partnership sustenance.

Strategic Focus ($500M reinvestment, ethics): Bundles enable diverse, ethical DBs; URLs for the English Series: A (Cardiovascular, $515): [link]; B (Genomics, $200): [link]; C (Cancer, $175): [link]; D (Immunology, $325): [link]; E (Patient-Centered, $274): [link]. Spanish equivalents implied.

Methodologies and Frameworks

  • AI Training Pipeline: Package e-books (textual corpus) for pre-training (ontology-mapped ingestion), fine-tuning (e.g., ML on series bundles), and inference (e.g., diagnostics queries). Bilingual support for global models.
  • ML/NLP Demo: Applied to two volumes; scale to 48 books via GPT for small language models (5 corpuses/language).
  • Productivity Model: Amplifies Pfizer’s savings (16,000 hours, $7.7B total) by injecting expert creativity; ethical focus on diverse, compliant data for equity (e.g., AI Learning Lab).
  • Insights: Quotes editor’s vision as “epitome of human creativity”; predicts full ML rollout as first-of-its-kind revolution, differing from Class I’s article-based ontology by focusing on editorial depth.
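The training pipeline described in the bullets above (pre-training on the ontology-mapped corpus, fine-tuning on a series bundle, then inference) can be sketched schematically. The functions below are stubs standing in for real training steps, and every name is illustrative; this is a shape-of-the-pipeline sketch, not an implementation.

```python
# Schematic pipeline: stub stages stand in for real pre-training,
# fine-tuning, and inference. All names are hypothetical.
def pretrain(corpus):
    # real work: ontology-mapped ingestion of the corpus into a base model
    return {"base_model": True, "docs_seen": len(corpus)}

def fine_tune(model, bundle):
    # real work: adapt the base model on one series bundle
    return dict(model, tuned_on=bundle)

def infer(model, query):
    # real work: a diagnostics-style query against the tuned model
    return f"answer to {query!r} from model tuned on {model['tuned_on']}"

corpus = ["article 1", "article 2", "article 3"]
model = fine_tune(pretrain(corpus), bundle="Series C (Cancer)")
print(infer(model, "early markers of ATTR-CM"))
```

The point of the staging is that the same base model, pre-trained once on the whole corpus, can be fine-tuned separately per series bundle to yield the five domain-aware small models per language described earlier.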

Conclusions and Implications

LPBI’s e-books are “ready-to-ingest” for Big Pharma AI, enabling everything from efficiency gains to diagnostic breakthroughs. The absence of prior comprehensive ML attempts highlights untapped value [by Big Pharma. However, we conducted in-house ML on two of the e-Books]; bilingual editions support global/equitable applications. Technical Implications: Powers multilingual small models for precision medicine. Business Implications: Fuels ROI on investments like Pfizer’s $500M push; licensing potential for partnerships. Unique Insight: In AI’s scale race, these assets provide a “rare moat” via a curated human opus: unlike raw data, they embed clinical foresight for transformative inference. The article is promotional yet substantive, with dense Amazon links and calls to resources (e.g., BioMed e-Series page, IP portfolio). It builds on the prior Class I piece by shifting to long-form, creative text for deeper AI personalization.
If you’d like comparisons across the series, deeper dives into specific e-books, or analysis of bilingual AI potential, just say the word!



AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology

Curator: Aviva Lev-Ari, PhD, RN

We had researched the topic of AI Initiatives in Big Pharma in the following article:

  • Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma

https://pharmaceuticalintelligence.com/2025/11/15/authentic-relevance-of-lpbi-groups-portfolio-of-ip-as-proprietary-training-data-corpus-for-ai-initiatives-at-big-pharma/

 

We are publishing a Series of Five articles that demonstrate the Authentic Relevance of Five of the Ten Digital IP Asset Classes in LPBI Group’s Portfolio of IP for AI Initiatives at Big Pharma.

  • For the Ten IP Asset Classes in LPBI Group’s Portfolio, See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

The following Five Digital IP Asset Classes are positioned as Proprietary Training Data and Inference for Foundation Models in Healthcare.
This Corpus comprises a Live Repository of Domain Knowledge Expert-Written Clinical Interpretations of Scientific Findings, codified in the following five Digital IP Asset Classes:
 IP Asset Class I: Journal: PharmaceuticalIntelligence.com
6,250 scientific articles (70% curations and creative expert opinions; 30% scientific reports).
2.4MM views, equivalent to $50MM if each article download were paid at the market rate of $30.

https://pharmaceuticalintelligence.com/vision/pharmaceuticalintelligence-com-journal-projecting-the-annual-rate-of-article-views/

 

• IP Asset Class II: 48 e-Books: English Edition & Spanish Edition.
152,000 pages downloaded under pay-per-view. The largest number of downloads for any single e-Publisher (LPBI).
• IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025

• IP Asset Class V: 7,500 Biological Images in our Digital Art Media Gallery, as prior art. The Media Gallery resides in WordPress.com Cloud of LPBI Group’s Web site

• IP Asset Class X: +300 Audio Podcasts: Interviews with Scientific Leaders
BECAUSE THE ABOVE ASSETS ARE DIGITAL ASSETS, they are ready for use as Proprietary TRAINING DATA and INFERENCE for AI Foundation Models in Healthcare.
Expert-curated healthcare corpus mapped to a living ontology, already packaged for immediate model ingestion and suitable for safe pre-training, evals, fine-tuning and inference. If healthcare domain data is on your roadmap, this is a rare, defensible asset.
The article titles for each of the five Digital IP Asset Classes matched to AI Initiatives in Big Pharma, one article per IP Asset Class, are:
  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-data-training-and-inference-by-lpbi-groups-ip-asset-class-i-pharmaceuticalintelligence-com-journal-2-5mm-views-6250-scientific-article/

Article conclusions by @Grok

Conclusions and Implications
The article concludes that LPBI’s assets are primed for Big Pharma adoption, enabling AI to evolve from tools to “knowledge amplifiers.” It hints at licensing opportunities without explicit calls to action. Technical Implications: Improves AI precision in complex pharma tasks like rare disease modeling. Business Implications: Accelerates ROI on AI investments (e.g., Pfizer’s $500M push) while ensuring defensible IP. Unique Insight: In an era of open-source AI, curated expert data like this is a “rare moat” for competitive edge. Overall, the piece is insightful for AI-pharma intersections, blending real-world examples with promotional asset positioning. It’s dense with links to LPBI resources (e.g., e-books, podcasts) for deeper dives.
  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-ii-48-e-books-english-edition-spanish-edition-152000/

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-iii-100-e-proceedings-and-50-tweet-collections-of-top-biotech/

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class V: 7,500 Biological Images in LPBI Group’s Digital Art Media Gallery, as Prior Art

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-v-7500-biological-images-in-lpbi-groups-digital-art/

 

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class X: +300 Audio Podcasts Library: Interviews with Scientific Leaders

https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-training-data-and-inference-by-lpbi-groups-ip-asset-class-x-300-audio-podcasts-library-interviews-with-scientific-leaders/

 

In the series of five articles, as above, we are presenting the key AI Initiatives in Big Pharma as they were generated by our prompt to @Grok on 11/18/2025:

  • What are PFIZER’s AI INITIATIVES?

@Grok Response:

x.com/i/grok/share/0ol5VOJsEYs11baXq4xkzNb0h

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.
Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.
Clinical Trials and Regulatory Efficiency AI:

- Predictive Regulatory Tools

- Decentralized Trials

- Inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker /Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Partnerships and Education

–       Collaborations: IMI Big Picture, a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

  • AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology

The Left Column was written by @Grok

The Right Column was written by Aviva Lev-Ari, PhD, RN

Left column: AI Initiative at Big Pharma, i.e., Pfizer.

Right column: Journal articles, N = 6,250.

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.

Generative AI tools can search LPBI’s Proprietary data in addition to Public Domain data sources.

The Journal’s ONTOLOGY is used to optimize the context classification selected for search.

Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.

–       Run prompts by category of research on three dyads

–       Run ML across categories of research for these three dyads

-Gene-disease

-Disease-drug

-Gene-drug
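As a toy illustration of the dyad idea above, the sketch below counts in how many articles a (gene, disease), (disease, drug), or (gene, drug) pair co-occurs. All entity lists and article snippets are made up; a real run would use named-entity recognition over the Journal corpus rather than substring matching.

```python
from itertools import product

# Hypothetical entity lists and article snippets, for illustration only.
genes = {"TTR", "BRCA1"}
diseases = {"amyloidosis", "breast cancer"}
drugs = {"tafamidis", "olaparib"}

articles = [
    "TTR stabilization by tafamidis slows amyloidosis progression",
    "BRCA1 carriers with breast cancer respond to olaparib",
]

def dyad_counts(texts, left, right):
    """Count the articles in which a (left, right) entity pair co-occurs."""
    counts = {}
    for a, b in product(left, right):
        n = sum(1 for t in texts
                if a.lower() in t.lower() and b.lower() in t.lower())
        if n:
            counts[(a, b)] = n
    return counts

print(dyad_counts(articles, genes, diseases))   # gene-disease dyad
print(dyad_counts(articles, diseases, drugs))   # disease-drug dyad
print(dyad_counts(articles, genes, drugs))      # gene-drug dyad
```

Aggregated over thousands of curated articles, such dyad tables are the kind of structured signal that category-scoped prompts or ML runs could surface.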

 

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralize Trials

-inventory management

Content creation across the drug lifecycle, from lab data to regulatory docs.
Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Heart Failure Diagnosis by Early detection of transthyretin amyloid cardiomyopathy

–       Journal published ~30 curations by Dr. Larry on this subject

–       Run NLP on this Corpus

Rare disease:

Journal published 560 articles on Rare diseases

–       Run ML on this Corpus

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker /Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Content generated in the Journal can become “generated compliant content” if run through the Charlie Platform.

–       For REUSE of content in context

 

The entire Corpus of 9 gigabytes can be ingested into Pfizer Foundation’s AI Learning Lab

–       Run prompts against it

–       Journal’s Content to be used for Internal staff expertise development

–       Journal’s Content for Leadership development

Partnerships and Education

–       Collaborations: IMI Big Picture, a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

The Journal had published 547 articles in Precision Medicine

 

The Journal had published 1,114 articles in Drug Discovery

 

The Journal had published 701 articles in Drug Delivery

 

The Journal had published 3,615 articles on subject matter “Disease”

 

The Journal had published 738 articles on Biomedical topic

 

The Journal had published 425 articles in Artificial Intelligence

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

The Journal had published 432 articles on CRISPR

 

Productivity derived from Journal’s content:

–       Because ~70% of the Journal’s articles are curations written by Domain Knowledge Experts subjectively expressing their clinical interpretations of basic and primary research, the productivity of the knowledge workers at any Big Pharma would increase vastly.

–       If Grok and Claude were run on LPBI Group’s Corpus, a scientific revolution would emerge.

–       It is not combinatorics applied to molecules with 98% futile results!!! It is the IQ of Gifted HUMANS, of domain knowledge experts generating content using individual CREATIVITY, not Quantum or Super Intelligence, which is not yet in existence.

–       Foundation Models in Healthcare depend on the OUTPUT of the human creative mind. AI takes keywords (classic search) and concepts (semantic search), runs frequency of occurrence, and predicts the next word, one after another.
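The next-word mechanism mentioned above can be illustrated with a toy bigram model: count which word follows each word in a training text, then predict the most frequent successor. This is purely pedagogical; real foundation models use neural networks over learned representations, not raw counts, and the training text here is invented.

```python
from collections import Counter, defaultdict

# Toy training text; each adjacent word pair (bigram) is counted.
text = ("expert curation of clinical findings guides drug discovery "
        "expert curation of genomic findings guides diagnosis")
tokens = text.split()

follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the successor seen most frequently in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("curation"))  # "of" (seen twice after "curation")
print(predict_next("guides"))    # "drug" or "diagnosis" (a tie in this toy text)
```

Even this crude model captures the frequency-driven nature of next-word prediction; expert-curated text shifts those frequencies toward clinically meaningful continuations.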

CONCLUSIONS: The Voice of Dr. Stephen J. Williams PhD

PENDING

@grok Summary of this article:

AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology https://pharmaceuticalintelligence.com/2025/11/22/ai-initiatives-in-big-pharma-grog-prompt-proprietary-data-training-and-inference-by-lpbi-groups-ip-asset-class-i-pharmaceuticalintelligence-com-journal-2-5mm-views-6250-scientific-article/

Summary of the Article titled:

AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology

Publication Date: November 22, 2025

Author/Curator: Aviva Lev-Ari, PhD, RN

Overview: This article, part of a series on LPBI Group’s digital IP assets, explores how the group’s proprietary content—particularly IP Asset Class I (PharmaceuticalIntelligence.com Journal)—can serve as high-value training data for AI models in the pharmaceutical industry. It uses a Grok AI prompt to detail Pfizer’s AI efforts and maps them to the journal’s expert-curated content, emphasizing the shift from generic AI to domain-specific, human-enhanced models for drug discovery, clinical trials, and beyond. The piece positions LPBI’s assets as “defensible” resources for pre-training, fine-tuning, and inference in healthcare foundation models, contrasting them with public-domain data.

Main Thesis and Key Arguments

  • Core Idea: Traditional AI in pharma often relies on combinatorial predictions from vast but uncurated data, leading to inefficiencies. LPBI’s corpus, with 70% expert interpretations (curations and opinions), injects human creativity and clinical insights, enabling more productive AI outputs. This could spark a “scientific revolution” by powering models like Grok or Claude, boosting knowledge worker efficiency and reducing R&D costs.
  • Value Proposition: The journal’s 6,250 articles (30% scientific reports, 70% expert analyses) form a 9GB digital repository with a live ontology for semantic mapping. Valued at ~$50MM (based on $30/article download equivalent), it supports AI applications across gene-disease-drug dyads, NLP for rare diseases, and regulatory content generation.
  • Broader Context: Part of a five-article series showcasing LPBI’s ten IP classes (e.g., e-books, podcasts, images) as ready-to-ingest assets for Big Pharma’s AI strategies.

AI Initiatives in Big Pharma (Focus on Pfizer)

The article leverages a November 18, 2025, Grok prompt (“What are PFIZER’s AI INITIATIVES?”) to outline Pfizer’s efforts, sourced from real-time AI generation. Key highlights include:

  • Generative AI Tools: Automates literature searches, data analysis, and report writing. Metrics: saves 16,000 hours annually; integrates with Pfizer’s Charlie Platform for compliant content.
  • Drug Discovery Acceleration: Uses ML, supercomputing, and AI to identify targets and optimize molecules. Metrics: part of $500M R&D reinvestment by 2026; contributes to $7.7B in cost savings.
  • Clinical Trials & Regulatory Efficiency: Predictive modeling for patient recruitment, decentralized trials, and inventory management. Metrics: partnerships like IMI Big Picture; AI for faster FDA submissions.
  • Disease Detection & Diagnostics: Focus on rare diseases (e.g., ATTR-CM, transthyretin amyloid cardiomyopathy). Metrics: NLP and ML for early detection; equitable care via AI Learning Lab.
  • Operational & Strategic Tools: Scientific Data Cloud with AWS ML/SageMaker; ethical AI with diverse datasets. Metrics: webinars for education; global expansions (e.g., China’s CRISPR AI).

No other pharma companies are deeply covered, but the framework implies applicability to peers like those in prior LPBI articles.

Mapping to LPBI’s Proprietary Data

A central table links Pfizer’s initiatives to the journal’s content, demonstrating ingestion potential:

  • Generative AI for searches: Ontology-optimized semantic searches across 6,250 articles. Productivity boost: enhances accuracy in 425 AI-related articles.
  • Drug Discovery ML: 1,114 articles on discovery; dyad mapping (e.g., gene-drug). Productivity boost: supports ML training on precision medicine (547 articles).
  • Clinical Trials Efficiency: 3,615 “Disease” articles; NLP for trial simulations. Productivity boost: reduces time via expert-curated protocols.
  • Rare Disease Diagnostics: Content on ATTR-CM and similar (e.g., 432 CRISPR articles). Productivity boost: fine-tuning for targeted inference.
  • Operational Platforms: Full corpus ingestion for Charlie-like labs. Productivity boost: 2.5MM views indicate high-engagement data.

This shows how the live ontology enables context classification, outperforming frequency-based AI.

Methodologies and Frameworks

  • AI Training Pipeline: Ingest corpus for pre-training (broad knowledge), fine-tuning (domain-specific), and inference (real-time queries). Use prompts by research categories; apply ML across dyads; generate lifecycle content.
  • Productivity Model: 70% expert curations amplify human-AI synergy, avoiding “futile” combinatorial results. Ethical focus: Diverse, compliant data for global equity.
  • Insights: Quotes Dr. Stephen J. Williams: Foundation models thrive on “gifted humans’ creativity,” not just scale. Predicts vast productivity gains (e.g., beyond Pfizer’s 16,000 hours saved).
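
The pre-training, fine-tuning, and inference stages named above can be illustrated with a minimal, purely illustrative Python sketch. All function names are invented for demonstration, and the frequency-based scoring is a deliberate simplification; a real foundation-model pipeline trains neural weights rather than word counts:

```python
from collections import Counter

def pretrain(corpus):
    """Broad stage: accumulate word-frequency statistics over the whole corpus."""
    stats = Counter()
    for doc in corpus:
        stats.update(doc.lower().split())
    return stats

def fine_tune(stats, domain_docs, boost=3):
    """Domain stage: up-weight terms that appear in expert-curated documents."""
    tuned = Counter(stats)
    for doc in domain_docs:
        for word in doc.lower().split():
            tuned[word] += boost
    return tuned

def infer(tuned, query):
    """Inference stage: score a query by its accumulated term weights."""
    return sum(tuned.get(w, 0) for w in query.lower().split())

corpus = ["gene expression alters drug response", "market trends in retail"]
domain = ["gene variants predict drug response in rare disease"]
model = fine_tune(pretrain(corpus), domain)
score = infer(model, "gene drug response")  # → 12: domain terms dominate
```

After fine-tuning on the curated domain documents, biomedical queries outscore off-domain ones, which is the intuition behind using an expert corpus for the domain-specific stage.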

Conclusions and Implications

The article concludes that LPBI’s assets are primed for Big Pharma adoption, enabling AI to evolve from tools to “knowledge amplifiers.” It hints at licensing opportunities without explicit calls to action. Technical Implications: Improves AI precision in complex pharma tasks like rare disease modeling. Business Implications: Accelerates ROI on AI investments (e.g., Pfizer’s $500M push) while ensuring defensible IP. Unique Insight: In an era of open-source AI, curated expert data like this is a “rare moat” for competitive edge.

Overall, the piece is insightful for AI-pharma intersections, blending real-world examples with promotional asset positioning. It is dense with links to LPBI resources (e.g., e-books, podcasts) for deeper dives.
If you’d like a full review (e.g., strengths/weaknesses, comparisons to industry trends), or analysis of specific sections, let me know!

Read Full Post »

Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma

Curators: Aviva Lev-Ari, PhD, RN and Stephen J. Williams, PhD

Updated on 11/29/2025

 

The Voice of Stephen J. Williams, PhD

Since the formation of the LPBI Group in 2012, we have set as a goal to digitally curate and ontologize the biomedical literature. These aspects of our mission, although not understandable to most at the time of our inception, were at the forefront of many academic efforts that became the springboard and initial conceptualization of today’s large language models. At select universities, such as the University of Illinois, the University of Delaware, and the Technion, and at companies such as Microsoft and Google, academics and researchers were laying the framework for a semantic web, or Web 3.0. Another company at the forefront of this idea was Wolfram Research, the company behind Mathematica. This idea of a semantic web would entail the processes of curation, digitization, and ontology creation. Their belief, like ours, was that this work was a precursor and a much-needed key to the puzzle of moving language from machine learning algorithms to the generative transformers used by artificial intelligence.

However, as most efforts in this genre focused on general language, our efforts at LPBI Group were focused on the biomedical field. We understood, from 2012, that the biomedical literature was unique, and so a unique strategy had to be developed to semantically understand biomedical text, even though in 2012 GPTs were not even a concept. The potential for biomedical text analysis was there, however, and LPBI Group responded by developing a methodology of scientific curation that involved a multimodal strategy to curate, digitize, and ontologize biomedical findings and text.

It was around 2012 that other groups, mainly focused on drug-development applications (for example, at Indiana University), recognized that the new computational power of machine learning algorithms could be useful in analyzing complex biological questions. Please see the Synthetic Biology in Drug Discovery section of our Journal for more information. For instance, an early adopter of this strategy, Data2Discovery, one of the earliest AI-for-drug-discovery startups, stated:

We are able to improve drug discovery now as well as demonstrating new fast-cycle AI-driven processes that will have a revolutionary impact on drug discovery if fully implemented. We have had some dramatic successes, but we are just starting to discover the impact that data, knowledge graphs, AI and machine learning can together have on drug discovery.

We need all the expertise of academics, consortia, AI companies and pharma to make this happen, and it’s going to require some serious investment, and a big change of thinking. But the opportunity to get drug discovery out of the death spiral and framed for data-driven success is too important to pass up.

The LPBI Group was cognizant of these changes occurring and moved into the developing natural language processing arena, as well as into ideas around the emerging Blockchain technology. This was more of a natural progression for the LPBI Group than a pivot (please read here).

This would be our Vision 2.0: to make biomedical text amenable to Natural Language Processing. We utilized a few strategies in this regard, partnering with a company that was developing NLP for biomedical text analysis, and developing in-house machine learning and NLP methods using the Wolfram Language environment. Our focus on structuring biomedical text (versus the highly structured genomics and omics data found in many omics-related databanks) was prescient for the time. As NLP and machine learning efforts came to realize, biomedical text needs a structure much like the one genes, proteins, and other molecular databases had been given. Structured data was therefore imperative for efficient NLP analysis, a crux for the new GPTs then being developed (and, in our view, still a crux for current GPT and LLM models when it comes to biomedical text analysis).

Our scientific curation methodology (described below in links from our founder, Dr. Aviva Lev-Ari) proved highly efficient and amenable to NLP analysis, as a pilot with an NLP company showed. Most of the data that company was using was unstructured, and their first step involved annotating and structuring the text, as we had already been doing for years. This was critical: our text yielded more concepts and relationships, in less time, than NLP run on sources such as the publicly available PubMed text. We had also developed our own in-house algorithms for NLP on our material, which is shown in some of our book offerings and individual articles.

With the advent of GPT, it was thought that all this was unnecessary. However, the idea that our strategy was outdated or irrelevant in the era of GPT was wholly incorrect, whatever the advocates of a GPT-only strategy for analyzing biomedical text and data believed. It is now understood that structure is needed, as some biomedical-centric GPT projects, such as BioGPT, would find out. We have many articles that attest to the lack of accuracy and efficiency of these GPT architectures (seen here), including failure rates in many areas of healthcare and biomedicine caused by sole reliance on GPT.

It was realized by many in the biomedical arena, especially those involved in NLP efforts, that there was much value in the semantic Web 3.0 idea, and this was readily picked up by those spearheading efforts to incorporate knowledge graphs into the new generative AI (GPT) technology. We have shown a clear example that our scientific methodology of curation with ontology yields better inference when combined with knowledge graphs and GPT than reliance on GPT alone.

please read this article

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

at https://pharmaceuticalintelligence.com/2014/09/05/multiple-lung-cancer-genomic-projects-suggest-new-targets-research-directions-for-non-small-cell-lung-cancer/

As shown here in this article

This update was performed by the following methods:
A. GPT 5 Text analysis and Reasoning
B. Insertion of Knowledge Graph on topic Curation of Genomic Analysis from Non Small Cell Lung Cancer Studies  from Nodus Labs using InfraNodus software
C. Domain Knowledge Expert evaluation of the Update outcomes
This article has the following Structure:
Part A: Introduction to LLM, Knowledge Graph software InfraNodus, ChatGPT5 and Background Information on curated material for Test Case
Part B: InfraNodus Analysis of manual curation and Knowledge Graph Creation
Part C: Chat GPT 5 Analysis of Manually Curated Material
Part D: Curation entitled Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer originally published on 09/05/2014
Results of Article Update with GPT-5
1. GPT-5 alone was not able to understand the goal of the article, namely to determine knowledge gaps in a particular research area involving five genomic studies on lung cancer patients.
2. GPT-5 alone was not able to group concepts or commonalities between biological pathways unless supplied with a manually curated list of KEGG pathways derived from a list of mutated genes. This, however, precluded any effect that fusion proteins had on the analysis, so GPT-5 would concentrate only on mutated genes commonly found in the literature.
3. GPT-5 was not able to access some of the open-access databases, such as the NCBI Gene Ontology database.
Results of Article Update with Knowledge Graph presentation to GPT-5
4. Because the knowledge graph captured the importance of fusion proteins and transversions, it augmented the GPT analysis, enriching the known pathways and correctly identifying the less represented pathways in the knowledge graph.
5. This led to the identification of many novel signaling pathways not identified in the original analysis, and the task was performed with ease and speed.

6. GPT with InfraNodus Analysis was able to propose pertinent questions for future research (the goal of the original curation) such as:

  • How does the interaction between [[EGFR]] mutations and sex-specific gene alterations, including [[RBM10]], influence treatment outcomes in lung adenocarcinoma?
  • How does the intersection of mutational patterns from smoking influence pathway activation in NSCLC, and can identifying these interactions improve targeted therapy development?
Novelty in comparison to Original article published on 09/05/2014
7. It appears that manual curation is necessary to assist in the building of relevant knowledge graphs in the biomedical fields to augment generative AI analysis.
8. By itself, generative AI is not optimized for inference of higher concepts from biomedical text and therefore, at this point, requires input from human curators developing domain-specific knowledge graphs.
9. The combination of ChatGPT-5 and knowledge graphs built from this manually curated biomedical text surfaced a further layer of knowledge gaps not seen in the original curations, including the need to study noncanonical signaling pathways like WNT and Hedgehog in smoker versus nonsmoker cohorts of lung cancer patients.
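
The knowledge-graph-augmented approach described above can be sketched schematically. The triples and function names below are invented for illustration (this is not the InfraNodus API); the point is that curated facts are prepended to the model’s prompt so it reasons over structure rather than word frequency alone:

```python
# Toy knowledge graph distilled from a manual curation (illustrative triples only)
KG = [
    ("EGFR", "mutated_in", "lung adenocarcinoma"),
    ("RBM10", "altered_sex_specifically_in", "lung adenocarcinoma"),
    ("fusion proteins", "activate", "ALK signaling"),
]

def kg_facts(entity):
    """Collect triples that mention the entity, rendered as plain sentences."""
    return [f"{s} {r.replace('_', ' ')} {o}" for s, r, o in KG if entity in (s, o)]

def build_prompt(question, entities):
    """Ground the generative model by prepending curated facts to the question."""
    facts = [f for e in entities for f in kg_facts(e)]
    return "Known facts:\n- " + "\n- ".join(facts) + f"\n\nQuestion: {question}"

prompt = build_prompt(
    "Which noncanonical pathways merit study in NSCLC?",
    ["EGFR", "fusion proteins"],
)
```

The resulting prompt carries the fusion-protein relationships that, per the results above, GPT-5 missed when run without the knowledge graph.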

The Voice of Aviva Lev-Ari, PhD, RN

LPBI Group’s Portfolio of Digital IP Assets as Proprietary Training Data Corpus for AI in Medicine, in Life Sciences, in Pharmaceutical and in Health Care Applications

The Portfolio of Digital IP Assets by Class is a rare, defensible asset, privately-held debt-free by LPBI Group’s founder. The content, aka a Data Corpus is best designed for the Training and Pre-Training of Foundation Multimodal Models in Health Care. 

#HealthcareAI

#FoundationModels

#ProprietaryTrainingData

LPBI Group is offering transfer of ownership, in full, of a privately held, multimodal healthcare training corpus: a proprietary, unique data set curated by domain experts and mapped to a living ontology for GenAI, creating defensibility.

The Portfolio of IP spans:

  • 6,250+ articles (~2.5MM views),
  • 48 e‑books (EN/ES) (+152,000 page downloads),
  • 100+ e‑Proceedings with +50 Tweet collections,
  • 7,500+ biological images with expert context, and
  • 300+ Audio podcasts on Life Sciences breakthroughs.

Each asset (Use Case: Scientific Article) has timestamps, author/role labels, crosslinks, and view histories.

  • Metadata export exists; full text and media transfer via WordPress/Amazon account control for immediate ingestion.
  • Rights are centrally assigned, with explicit model-training permissions; the corpus is domain-aware and ready for implementation in Small Language Models or Large Language Models.
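
The per-asset metadata described above (timestamps, author/role labels, crosslinks, view histories) could be represented, for ingestion purposes, by a record along these lines. The field names are hypothetical, not LPBI’s actual export schema:

```python
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    """One journal article or e-book chapter, as an ingestion-ready record."""
    title: str
    timestamp: str                                  # publication date, ISO 8601
    author: str
    role: str                                       # e.g. "curator" vs. "reporter"
    crosslinks: list = field(default_factory=list)  # URLs of related assets
    views: int = 0                                  # cumulative view history

# placeholder values for illustration only
rec = AssetRecord(
    title="Example curation",
    timestamp="2014-09-05",
    author="Domain Expert",
    role="curator",
)
```

A record like this lets a training pipeline filter by role (curation vs. report), weight by view history, and follow crosslinks when building training batches.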

Strategic acquirers in Big Pharma of Vertical AI startups (i.e., LPBI Group) with data‑moat strategies

Pharma strategics: Acquire LPBI’s end‑to‑end, rights‑clean healthcare knowledge base to accelerate R&D, medical affairs, and safety. Emphasize compliant internal copilots and evidence synthesis enabled by expert curation and a living ontology. Close with rapid onboarding under NDA: metadata export plus full text/media transfer. Full acquisition only.

Subject: Buy the moat: full acquisition of expert healthcare corpus with clean rights

We’re selling the entire asset: a privately held, multimodal healthcare corpus with centralized training rights and an exportable ontology, validated on gene–disease–drug extraction. It’s ingest‑ready and transfers cleanly via account control plus a metadata export. If owning differentiated data is critical for your agent or workflow, we can provide a diligence preview under NDA.


Five Examples of Domain-aware for model implementation for Small Language Models – English Edition & Spanish Edition

Series A: Cardiovascular Diseases ($515) – Six Volumes

https://www.amazon.com/gp/product/B07P981RCS?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Six Examples of Domain-aware in the Specialty of Cardiovascular Diseases

  • Series A, Volume One

Perspectives on Nitric Oxide in Disease Mechanisms, 2013

http://www.amazon.com/dp/B00DINFFYC $75

  • Series A, Volume Two 

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation, 2015

http://www.amazon.com/dp/B018Q5MCN8 $75

  • Series A, Volume Three

Etiologies of Cardiovascular Diseases – Epigenetics, Genetics and Genomics, 2015

http://www.amazon.com/dp/B018PNHJ84 $75

  • Series A, Volume Four

Therapeutic Promise: Cardiovascular Diseases, Regenerative & Translational Medicine, 2015

http://www.amazon.com/dp/B019UM909A $75

  • Series A, Volume Five

Pharmacological Agents in Treatment of Cardiovascular Diseases, 2018

https://www.amazon.com/dp/B07MGSFDWR  $115

  • Series A, Volume Six:

Interventional Cardiology for Disease Diagnosis and Cardiac Surgery for Condition Treatment, 2018

https://www.amazon.com/dp/B07MKHDBHF $100

 

Series B: Frontiers in Genomics ($200) – Two Volumes

https://www.amazon.com/gp/product/B0BSDPG2RX?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series C: Cancer & Oncology ($175) – Two Volumes

https://www.amazon.com/gp/product/B0BSDWVB3H?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series D: Immunology ($325) – Four Volumes

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series E: Patient-Centered Medicine ($274) – Four Volumes

https://www.amazon.com/gp/product/B0BSDW2K6C?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

One Example of Domain-aware for model implementation for Large Language Models

Eighteen volumes in the English Edition and 19 volumes in the Spanish Edition, including 2,728 articles by biomedical professionals, are available.

https://www.amazon.com/s?k=Aviva+Lev-Ari&i=digital-text&rh=n%3A133140011&ref=nb_sb_noss

The electronic books are collections of curated articles in biomedical science. The electronic Tables of Contents (eTOCs) of each volume were designed by a senior editor with expertise in the subjects covered in that volume. The curations use as sources published research findings in peer-reviewed scientific journals, together with expert-added interpretations.

The e-books are designed to make the latest research in the Five Bilingual BioMed e-Series – 37 volumes accessible to practicing health care professionals. These five e-Series cover the following medical specialties:

  • Cardiovascular diseases and therapies,
  • Genomics,
  • Cancer etiology and oncological therapies,
  • Immunology, and
  • Patient-centered precision medicine.

The material in these volumes can greatly enhance medical education and provide a resource for continued updating and education for health care professionals. In addition to the 37 e-books, LPBI has published more than 6,000 articles in its online scientific journal “PharmaceuticalIntelligence.com”, which has received 2.5 million views since its launch in April 2012; top articles have had more than 18,000 views.

The Portfolio is:

  • rights‑clean,
  • expert‑curated healthcare corpus
  • mapped to a living Ontology,
  • already packaged for immediate model ingestion and
  • suitable for safe pre-training, evals, and fine‑tuning.

If healthcare domain data is on your roadmap, this is a rare, defensible asset worth a preview.


Leaders in Pharmaceutical Business Intelligence Group, LLC, Doing Business As LPBI Group, Newton, MA

Full acquisition only: LPBI Group’s Healthcare Training Data Corpus

  • Scientific articles
  • e‑Books in Medicine
  • e‑Proceedings,
  • Biological images
  • Podcasts

#HealthcareAI #FoundationModels #TrainingData

Contact Founder: avivalev-ari@alum.berkeley.edu

PharmaceuticalIntelligence.com

About the Founder

  • LPBI Group’s Legacy and Biography of Aviva Lev-Ari, PhD, RN, Founder & Director, INTERACTIVE CHAT with Grok, created by xAI

https://pharmaceuticalintelligence.com/2025/04/30/interactive-chat-with-grok-created-by-xai-lpbi-groups-legacy-and-biography-of-aviva-lev-ari-phd-rn-founder-director/ 

For Recognitions and a DETAILED BIO by Perplexity.ai and Gemini 2.5 Pro (experimental), see

  • “Show Thinking” by AI Deep Research: The AI-generated Biography Description of Aviva Lev-Ari, PhD, RN, Founder of PharmaceuticalIntelligence.com by Perplexity.ai and by Google’s Gemini 2.5 Pro (experimental)

https://pharmaceuticalintelligence.com/2025/04/29/show-thinking-by-ai-deep-research-the-ai-biography-description-of-aviva-lev-ari-phd-rn-founder-of-pharmaceuticalintelligence-com-by-perplexity-ai-and-by-googles-gemini-2/

Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma

On 11/14/2025, Dr. Lev-Ari asked @Grok about AI Initiatives at a Big Pharma, i.e., Pfizer.

@Grok’s response is presented below.

URL SOURCE:

https://x.com/i/grok/share/0ol5VOJsEYs11baXq4xkzNb0h

Pfizer’s AI Initiatives Overview

Pfizer, a global pharmaceutical leader, has integrated artificial intelligence (AI) and machine learning (ML) across its operations to accelerate drug discovery, enhance clinical trials, improve manufacturing, and drive patient-centric innovations. These efforts aim to reduce development timelines, boost efficiency, and address challenges like rare disease detection. As of late 2025, Pfizer’s AI strategy emphasizes internal expertise, cloud-based tools (primarily with AWS), and cross-industry collaborations, with a focus on multimodal data analysis and generative AI. Below is a breakdown of key initiatives, drawn from recent developments.

1. Pfizer AI Cloud Transformation (PACT) Program

Launched in 2021 with AWS, PACT is a prototyping accelerator for AI/ML applications in lab, clinical manufacturing, and supply chain efforts. It has pursued 14 projects, with five now in production. Key outcomes include:

  • Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.
  • 55% reduction in infrastructure costs through cloud optimization.
  • Exploration of virtual/augmented reality for training, with rapid prototyping to test feasibility. This program fosters “fail-fast” innovation, enabling quick iteration on ideas like AI-driven predictive modeling.

2. Drug Discovery and Development Acceleration

Pfizer uses AI, supercomputing, and ML to streamline R&D, cutting years off traditional timelines (e.g., from 8–10 years to months for certain candidates).

  • COVID-19 Initiatives: AI optimized molecular searches for PAXLOVID (oral antiviral), enabling intravenous-to-oral pivots. During vaccine development, the Safety Data Quality (SDQ) tool—a custom AI for clinical trial data management—saved a full month of analysis via the Breakthrough Change Accelerator sandbox.
  • Precision Medicine and Subpopulation Identification: Advanced ML (classical and deep learning) analyzes molecular datasets from trials to predict patient responses, informing trial design, first-in-human studies, and oncology/inflammatory disease targeting. This has doubled clinical success rates to ~20% (vs. industry 7–10%).
  • ML Research Hub: Pfizer’s Cambridge, MA-based Biomedical Data Science group builds internal talent for AI in genome sciences, predicting disease outcomes and treatment efficacy using techniques from tech giants like Google.

3. Clinical Trials and Regulatory Efficiency

AI automates content creation across the drug lifecycle, from lab data to regulatory docs and patient booklets, ensuring quality and speed.

  • Predictive Regulatory Tools: ML models forecast regulator queries, proactively addressing safety concerns to build trust and shorten approvals.
  • Decentralized Trials: The “Clinical Trial Anywhere” model uses AI to integrate remote data (e.g., home health visits), improving recruitment/retention and generating real-world evidence.
  • Inventory and Supply Chain: AI predicts vaccine storage issues, pioneered during COVID distribution.

4. Disease Detection and Diagnostics

  • ATTR-CM Initiative: Through the ASPIRE collaboration (with Amyloidosis Research Consortium), AI analyzes EHRs, ECGs, and echocardiograms to detect transthyretin amyloid cardiomyopathy earlier, targeting underdiagnosed heart failure cases.
  • Rare Diseases: ML models identify symptoms in large datasets, mapping the immune system for better predictions.

5. Generative AI and Operational Tools

  • Charlie Platform: Rolled out in 2024 for marketing, this gen AI tool (named after co-founder Charles Pfizer) generates compliant content like emails, presentations, and medical article drafts. Trained on approved product data, it streamlines physician targeting and customer insights.
  • Scientific Data Cloud: AWS-powered centralization of data/ML capabilities, enabling faster research and treating 1.3 billion patients in 2022. It supports gen AI via Amazon SageMaker/Bedrock for manufacturing efficiency.
  • Global Health Grants: Through the Pfizer Foundation’s AI Learning Lab, funding supports AI for equitable healthcare access, e.g., biometric tools for community care.

6. Partnerships and Education

  • Collaborations: With AWS for cloud AI; IMI Big Picture for a 3M-sample disease database; IIT Madras for the 2025 AIPM Symposium on AI in pharma (focusing on drug discovery and precision medicine).
  • Talent and Events: Hosting webinars on AI in manufacturing (e.g., predictive maintenance, regulatory submissions) and engaging leaders like Dr. Subha Madhavan on AI for biomedical data integration.
  • Strategic Focus: $500M R&D reinvestment by 2026 targets AI for productivity, part of $7.7B cost savings. Emphasis on ethical AI, diverse datasets, and competing with global biotech advances (e.g., China’s AI in CRISPR).

Pfizer’s approach balances operational efficiency with transformative R&D, positioning AI as a core enabler rather than a standalone focus. Ongoing investments, like cloud migrations saving $47M annually and reducing CO2 emissions, underscore sustainability. For the latest, monitor Pfizer’s investor updates or events like AWS re:Invent.

Responsiveness to Big Pharma AI Initiatives by LPBI Group’s Digital IP Portfolio:

  • Scientific articles
  • e‑Books in Medicine
  • e‑Proceedings,
  • Biological images
  • Podcasts
The mapping below pairs each AI Initiative at Big Pharma (i.e., Pfizer), as composed by @Grok, with the responsive LPBI Group Digital IP Asset: Journal articles (N = 6,250), composed by Aviva Lev-Ari.

Generative AI tools

that save scientists up to 16,000 hours annually in literature searches and data analysis.

Generative AI tools searching LPBI’s Proprietary data in addition to Public Domain data sources

Journal ONTOLOGY is used to optimize context classification for search
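
A minimal sketch of ontology-optimized search, assuming a toy ontology slice (the categories and terms below are invented for illustration, not the Journal’s actual ontology): the ontology expands a query with related concepts before retrieval, instead of matching raw keywords only.

```python
# hypothetical slice of a curated ontology: category -> related terms
ONTOLOGY = {
    "cardiology": ["heart failure", "attr-cm", "amyloid cardiomyopathy"],
    "genomics": ["crispr", "gene editing", "sequencing"],
}

def expand_query(query):
    """Add ontology terms from any category the query touches."""
    q = query.lower()
    terms = [q]
    for category, related in ONTOLOGY.items():
        if category in q or any(t in q for t in related):
            terms.extend(related)
    return terms

expanded = expand_query("ATTR-CM screening")
```

A search for “ATTR-CM screening” thus also retrieves articles tagged “heart failure” and “amyloid cardiomyopathy”, which plain keyword matching would miss.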

Drug Discovery and Development Acceleration

Pfizer uses AI, supercomputing, and ML to streamline R&D timelines

–       Run prompts by category of research on the following three dyads

–       Run ML across categories of research for these three dyads

-Gene-disease

-Disease-drug

-Gene-drug
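
The two steps above, prompting and ML runs over the gene-disease, disease-drug, and gene-drug dyads, can be sketched as follows (the prompt wording and function name are illustrative assumptions, not a prescribed workflow):

```python
# the three dyads named above
DYADS = [("gene", "disease"), ("disease", "drug"), ("gene", "drug")]

def dyad_prompts(category):
    """Generate one extraction prompt per dyad, scoped to a research category."""
    return [
        f"Within the research category '{category}', list documented "
        f"{a}-{b} relationships with supporting citations."
        for a, b in DYADS
    ]

for prompt in dyad_prompts("Precision Medicine"):
    print(prompt)
```

Running this per category of research yields a prompt matrix: categories on one axis, dyads on the other, each cell a targeted extraction pass over the corpus.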

 

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralized Trials

-Inventory management

Curation competences:

  • content creation across the drug lifecycle, from lab data to regulatory docs
Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Heart Failure Diagnosis by Early detection of transthyretin amyloid cardiomyopathy

–       Journal published ~30 curations by Dr. Larry on the subject of ATTR-CM

–       Run NLP on this Corpus

Rare diseases:

Journal published 560 articles on Rare diseases

–       Run ML on this Corpus

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker /Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Content generated in the Journal can become “generated compliant content” if run on the Charlie Platform.

–       For REUSE of content in context

 

Entire Corpus of 9 Giga bytes can be ingested to Pfizer Foundation’s AI Learning Lab

–       Run prompts against it

–       Journal’s Content to be used for Internal staff expertise development

–       Journal’s Content for Leadership development

Partnerships and Education

–       Collaborations: IMI Big Picture for 3M – sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars of AI for biomedical data integration

–       Webinar on AI in Manufacturing

The Journal had published 547 articles in Precision Medicine

 

The Journal had published 1,114 articles in Drug Discovery

 

The Journal had published 701  articles in Drug Delivery

 

The Journal had published 3,615 articles on subject matter “Disease”

 

The Journal had published 738 articles on Biomedical topics

 

The Journal had published 425 articles on Artificial Intelligence (AI)

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

The Journal had published 432 articles on CRISPR

 

Productivity derived from Journal’s content:

–       Because ~70% of the Journal’s articles are curations written by Domain Knowledge Experts subjectively expressing their clinical interpretations of basic and primary research, the productivity of the knowledge workers at any Big Pharma would increase vastly.

–       If Grok and Claude were to run on LPBI Group’s Digital IP Corpus, a scientific revolution would emerge.

–       It is not combinatorics applied to molecules with 98% futile results!!!

It is the IQ of Gifted HUMANS, domain knowledge experts generating content with individual CREATIVITY, not Quantum or Super Intelligence, which does not exist, YET.

–       Foundation Models in Healthcare depend on the OUTPUT of the human creative mind. AI takes keywords (classic search) and concepts (semantic search) and runs frequency-of-occurrence statistics to predict the next word, one word after another.
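
The frequency-of-occurrence, next-word mechanism described above can be reduced to a toy bigram model. This is a deliberate simplification for illustration: real foundation models learn neural representations rather than raw counts, but the predict-the-next-word framing is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of a word, or None if unseen."""
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

model = train_bigrams("the drug binds the target and the drug clears")
prediction = predict_next(model, "the")  # → "drug" (most frequent follower)
```

The model can only replay statistical regularities of its training text; it cannot originate a new clinical interpretation, which is the contrast with expert-written curations drawn above.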

@@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

e-Books

Domain-aware Editorials and Curations

Generative AI tools save scientists up to 16,000 hours annually in literature searches and data analysis.

The electronic Table of Contents of every e-Book is a CONCEPTUAL MASTERPIECE of one unique occurrence in Nature, generated by the Editor or the Editors who had

–       Commissioned articles for the e-Book

–       Had selected articles from collections of Categories of Research created by domain knowledge experts

–       Had reviewed the TOTALITY of the Journal’s Ontology and found new concepts to cover in the e-Book not originally planned

–       Had incorporated Highlights of Lectures given at 100 Conferences that LPBI Group’s Dr. Lev-Ari and Dr. Williams had covered in Real Time, by invitation only, as PRESS.

–       The vision of the Editor-in-Chief of the BioMed e-Series reflects the BIG PICTURE of Patient care delivery.

–       UC, Berkeley PhD’83

–       Knowledge student and Knowledge worker, 10/1970 to Present

–       Conceptual pioneer of 26 algorithms in Decision Science of Operations Management decision support systems

–       2005 to Present in the Healthcare field.

–       2005-2012: Clinical Nurse Manager in Post-acute SNF settings and Long-term Acute care Hospital Supervisor – had developed a unique view on Diagnosis, Therapeutics and Patient care delivery

–       The BioMed e-Series is the EPITOME of human CREATIVITY in Healthcare, a MAGNUM OPUS created by collaboration of top Scientists, Physicians, and MD/PhDs

–       The 48 e-Books Published by LPBI Group represent the ONLY e-Publisher on Amazon.com with +151,000 pages downloaded since the 1st e-Book was published in 6/2013 and since Pay-per-View was launched by Amazon.com in 2016.

Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.

Two volumes of the BioMed e-Series were subjected to Medical Text Analysis with AI, ML, and Natural Language Processing (NLP):

–       Cancer, Volume 1 (In English, part of the Spanish Edition, Series C)

–       Genomics, Volume 2 (In English, part of the Spanish Edition, Series B)

–       GPT capabilities warrant an attempt to subject every book to ML Analytics via the MUTUALLY EXCLUSIVE 48 URLs provided by Amazon.com to LPBI Group, the Publisher.

–       5 URLs for 5 Bundles in The English Edition: Series A,B,C,D,E – English Edition

–       All books in each series – 5 Corpuses for domain-aware Small Language Model in English

–       All books in each series – 5 Corpuses for domain-aware Small Language Model in Spanish

–       5 URLs for 5 Bundles in The Spanish Edition: Series A,B,C,D,E –Spanish Edition
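As a sketch of how the five per-series Corpuses could be assembled for Small Language Model training, one could concatenate the e-Book texts series by series. The file layout below (one plain-text file per e-Book, named `<series>_<volume>.txt`) is a hypothetical assumption for illustration only:

```python
from pathlib import Path

def build_series_corpora(book_dir, out_dir, series=("A", "B", "C", "D", "E")):
    """Concatenate all e-Book texts of each series into one corpus file per series.

    Assumes a hypothetical layout: one plain-text file per e-Book, named
    "<series>_<volume>.txt", e.g. "A_1.txt" for Series A, Volume 1.
    """
    book_dir, out_dir = Path(book_dir), Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for s in series:
        # Gather every volume of this series in sorted (volume) order
        texts = [p.read_text(encoding="utf-8")
                 for p in sorted(book_dir.glob(f"{s}_*.txt"))]
        # One training corpus file per series, documents separated by blank lines
        (out_dir / f"series_{s}_corpus.txt").write_text("\n\n".join(texts),
                                                        encoding="utf-8")
```

The same routine run over the Spanish-edition files would yield the five Spanish-language corpora.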

 

Clinical Trials and Regulatory Efficiency AI:

–       Predictive Regulatory Tools

–       Decentralized Trials

–       Inventory Management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

–       No one had attempted ML on every book; only two books were analyzed by ML.

–       No one had attempted ML on all the Volumes in any of the 5 Series.

–       No one had attempted ML on all the 48 books

–       WHEN that is done, a REVOLUTION in Disease Detection and Diagnostics will be seen for the first time, because the totality of these 48 books represents the Brains of Human Experts

 

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Add the content of all the Books to the Charlie Platform

Partnerships and Education

 

Collaborations: IMI Big Picture for 3M – sample disease database

 

AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

 

Webinars on AI for biomedical data integration

 

Webinar on AI in Manufacturing

e-Books are the SOURCE for Education

–       Offer the books to sustain Partnerships

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

URLs for the English-language Edition by e-Series:

Series A: Cardiovascular Diseases ($515)

https://www.amazon.com/gp/product/B07P981RCS?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series B: Frontiers in Genomics ($200)

https://www.amazon.com/gp/product/B0BSDPG2RX?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series C: Cancer & Oncology ($175)

https://www.amazon.com/gp/product/B0BSDWVB3H?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series D: Immunology ($325)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series E: Patient-Centered Medicine ($274)

https://www.amazon.com/gp/product/B0BSDW2K6C?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

 

@@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

e-Proceedings: N = +100, and

Tweet Collections: N = +50

Generative AI tools save scientists up to 16,000 hours annually in literature searches and data analysis.

Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.

List of all e-Proceedings of +100 TOP Conferences in Biotech, in Medicine, in Genomics, in Precision Medicine:

https://pharmaceuticalintelligence.com/press-coverage/part-two-list-of-biotech-conferences-2013-to-present/

In these conferences the Frontier of Science was presented; often, findings were revealed BEFORE publication. These Proceedings are the ONLY written record of the events. They are privately held, now for the first time available for Transfer of Ownership.

The Tweet Collections are QUOTES of speakers on record, NOT available ELSEWHERE by name of speaker and affiliation.

Clinical Trials and Regulatory Efficiency AI:

–       Predictive Regulatory Tools

–       Decentralized Trials

–       Inventory Management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Ingest into the Charlie Platform ALL e-Proceedings of ALL Conferences

 

Apply GPT:

Training Data:

–       One conference at a time

–       All Conferences on ONE subject matter, i.e., Immunotherapy, Oncolytic Virus Immunotherapy, Immune Oncology
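The two training-data groupings above (one conference at a time, or all conferences on one subject) can be sketched with a hypothetical proceedings index. The conference names, tags, and texts below are illustrative placeholders, not the actual e-Proceedings catalog:

```python
# Hypothetical proceedings index: each record names a conference, its subject
# tags, and the proceeding text. All values here are illustrative placeholders.
proceedings = [
    {"conference": "Immuno-Oncology Summit 2019",
     "tags": {"immunotherapy"}, "text": "placeholder text"},
    {"conference": "Oncolytic Virus Forum 2021",
     "tags": {"immunotherapy", "oncolytic virus"}, "text": "placeholder text"},
    {"conference": "Gene Editing Congress 2022",
     "tags": {"crispr"}, "text": "placeholder text"},
]

def corpus_for_subject(records, subject):
    """Select all proceedings tagged with one subject (e.g., 'immunotherapy')."""
    return [r for r in records if subject in r["tags"]]

def corpus_for_conference(records, name):
    """Select a single conference's proceeding (one conference at a time)."""
    return [r for r in records if r["conference"] == name]

print(len(corpus_for_subject(proceedings, "immunotherapy")))  # → 2
```

Either selection yields a focused fine-tuning set: per-conference for event-specific models, per-subject for domain-wide ones.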

Partnerships and Education

–       Collaborations: IMI Big Picture for 3M – sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

Use Past Conference Agendas to build Future Conference Agendas

Use Speakers Lists to invite speakers/consultants to your events

Use topics covered in Conferences for Employee training and in-house Leadership development

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

Having access to +100 e-Proceedings vs. not having access to this resource is make-or-break in fine-tuning Corporate Branding: All your competitors attended and had sent Speakers.

  • LPBI Group’s e-Proceedings are the only record, in one URL

@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

Biological Images selected by Experts embedded in original Text (Prior Art)

Generative AI tools save scientists up to 16,000 hours annually in literature searches and data analysis.

Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.

The Gallery of ~8,000 Biological images and captions is a Treasure TROVE for scientific article writing and Presentation preparation. This Media Gallery is an Art collection of top Scholars in Medicine and Biology.
Clinical Trials and Regulatory Efficiency AI:

–       Predictive Regulatory Tools

–       Decentralized Trials

–       Inventory Management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Gallery of ~8,000 Biological images and captions is a Treasure TROVE for Disease Detection and Diagnostics
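One way such a gallery could feed multimodal AI for diagnostics is by pairing each image with its expert-written caption as supervised image-text pairs. The sketch below assumes a hypothetical file layout in which each `img_NNNN.png` sits beside a matching `img_NNNN.txt` caption file; the layout and names are assumptions for illustration:

```python
from pathlib import Path

def load_image_caption_pairs(gallery_dir):
    """Yield (image_path, caption) pairs for multimodal image-text training.

    Assumes a hypothetical layout: "img_0001.png" with its caption in
    "img_0001.txt" in the same directory. Images without captions are skipped.
    """
    gallery_dir = Path(gallery_dir)
    for img in sorted(gallery_dir.glob("img_*.png")):
        cap = img.with_suffix(".txt")
        if cap.exists():
            yield img, cap.read_text(encoding="utf-8").strip()
```

Each yielded pair is one training example for an image-text model, with the expert caption serving as the supervising label.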

 

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

  • Ingest into the Charlie Platform the Media Gallery for generation of Medical article drafts

Partnerships and Education

–       Collaborations: IMI Big Picture for 3M – sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

Library of Audio and Video Podcasts

N = +300

Generative AI tools save scientists up to 16,000 hours annually in literature searches and data analysis.

Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.

Review ALL SCIENTIFIC BREAKTHROUGHS

  • Two classification criteria were used by Prof. Marcus W. Feldman and by Dr. Stephen J. Williams to generate the two classifications

https://pharmaceuticalintelligence.com/biomed-audio-podcast-library-lpbi-group/

Clinical Trials and Regulatory Efficiency AI:

–       Predictive Regulatory Tools

–       Decentralized Trials

–       Inventory Management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Aviva Lev-Ari, PhD, RN, Stephen J. Williams, PhD and Prof. Marcus W. Feldman Health Care Policy Analysis derived from the Farewell remarks from AMA President Jack Resneck Jr., MD | AMA 2023 Annual Meeting

LISTEN to Audio Podcast

Future of Medicine

https://pharmaceuticalintelligence.com/2023/06/10/health-care-policy-analysis-derived-from-the-farewell-remarks-from-ama-president-jack-resneck-jr-md-ama-2023-annual-meeting/

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

  • Ingest into the Charlie Platform all +300 Podcasts for the Foundation’s AI Learning Lab

Partnerships and Education

–       Collaborations: IMI Big Picture for 3M – sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

  • Use Podcast for Education
  • Use Podcast as Hybrid: Start the presentation with a Podcast, continue with a live interview

Examples of Audio Podcasts with Scientists on the LPBI Group Team in the past:

  • Podcast with Dr. Larry H. Bernstein, MD, FCAP, Interview by Gail S. Thornton, PhDc, Narrator’s Voice: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/audio-podcasts/podcast-with-dr-larry-h-bernstein-md-fcap-interview-by-gail-s-thornton-phdc-narrators-voice-stephen-j-williams-phd/

  • Podcast with Prof. Marcus W. Feldman, PhD, Biology Department, Stanford University, Interview by Gail S. Thornton, PhDc

https://pharmaceuticalintelligence.com/audio-podcasts/podcast-with-prof-marcus-w-feldman-phd-biology-department-stanford-university-interview-by-gail-s-thornton-phdc/

  • Podcast with Dr. Sudipta Saha, PhD, Interview by Gail S. Thornton, PhDc, Narrator’s Voice: Gail S. Thornton, PhDc

https://pharmaceuticalintelligence.com/audio-podcasts/podcast-with-dr-sudipta-saha-phd-interview-by-gail-s-thornton-phdc-narrators-voice-gail-s-thornton-phdc/

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

Dr. Lev-Ari Prompted @Grok on 11/18/2025:

AI-Driven Cost Savings in Big Pharma   

@Grok responded:

AI-Driven Cost Savings in Big Pharma

Artificial intelligence (AI) applications across the pharmaceutical value chain—particularly in drug discovery, clinical trials, and operations—have the potential to generate substantial savings for major pharmaceutical companies (often referred to as “Big Pharma”). Based on recent industry reports and analyses from 2024-2025, these savings vary by application but could collectively reach tens of billions annually by the late 2020s. Projections indicate AI could add up to 11% in value relative to revenue for biopharma firms over the next five years through productivity gains and cost reductions. Below, I break down the estimated savings levels by key areas, drawing from credible sources like McKinsey, Deloitte, IQVIA, and others.

  1. Clinical Trials

  • Annual Industry-Wide Savings: By 2030, AI integration into 60-70% of clinical trials could save the pharmaceutical industry $20-30 billion per year through faster timelines, better patient matching, and reduced administrative costs.
  • Per-Trial Cost Reduction: AI can cut trial costs by up to 70% and shorten durations by 80%, enabling quicker regulatory submissions (40% faster) with 50% improved cost efficiency. For example, AstraZeneca reported up to 25% shorter trial durations in cancer drug studies using AI for dose optimization and biomarker identification.
  • Specific Efficiency Gains: Feasibility surveys can see 90% time reductions using AI/ML tools.
  2. Drug Discovery and Development

  • R&D Cost Drop: AI could reduce drug discovery costs by up to 40%, with overall development timelines shrinking from 5 years to 12-18 months (an 80% cut in some cases). From discovery to preclinical candidate stage, timelines can decrease by up to 50%.
  • Total Clinical Development Savings: Projected at $25 billion across the sector, driven by AI-accelerated pipelines.
  3. Operations and Broader Impacts

  • Value Relative to Revenue: Biopharma AI investments may yield up to 11% additional value (e.g., via cost efficiencies and revenue growth) over five years. For related medtech segments, savings could hit 12% of total revenue within 2-3 years.
  • Operating Profits Boost: By 2030, AI could add $254 billion in annual operating profits industry-wide through productivity jumps (e.g., 50-100% in quality control) and lead time reductions (60-70%).
  • Time-to-Market: Up to 10% faster overall, with production KPIs improving revenue by 25%.

These estimates assume scaled adoption, with global AI spending in pharma rising from ~$3 billion in 2025 to $22-25 billion by 2027-2030. Challenges like high initial costs and data infrastructure may temper short-term gains, but early adopters (e.g., Pfizer, Novartis) are already seeing ROI. For context, the global pharma market is projected at $1.6 trillion by 2025, so even modest percentages translate to billions in savings. If you need details on a specific AI use case or company, let me know!

SOURCE

https://x.com/i/grok?conversation=1990851703051104519

 

Dr. Lev-Ari Prompted @Grok on 11/17/2025:

Pfizer, its competitors by size, market share, product lines and Investment in AI technologies

Grok responded

https://x.com/i/grok?conversation=1990458795936354506

Overview of Pfizer and Key Competitors

Pfizer is one of the world’s largest pharmaceutical companies, with a strong focus on innovative medicines and vaccines. Its primary competitors include Merck & Co., Johnson & Johnson (J&J), Roche, Novartis, and now AstraZeneca, based on revenue leadership and market overlap in areas like oncology, immunology, and vaccines. These companies dominate the global pharmaceutical industry, which is projected to reach $1.21 trillion in revenue by 2025. Below, I compare them across size (measured by 2024 revenue and employees), market share (approximate global pharma share based on 2024 revenues), product lines (key therapeutic areas and top products), and investments in AI technologies. Data reflects the latest available figures as of late 2025.
Here are the same comparisons with more concise tables (2024 data, latest available as of Nov 2025):
Size (2024 Revenue & Employees)

Company | Revenue (USD B) | Employees
Merck & Co. | 64.2 | 70,000
Pfizer | 63.6 | 88,000
Roche | ~58.3 | 103,000
J&J (pharma only) | 57.8 | 138,000
AstraZeneca | 54.1 | 94,300
Novartis | ~50.0 | 76,000
Approx. Global Pharma Market Share

Company | Share %
Merck & Co. | 4.3
Pfizer | 4.2
J&J / Roche | 3.9
AstraZeneca | 3.6
Novartis | 3.3
Key Product Lines & Top Sellers (2024 revenue, USD B)

Company | Main Areas | Top Products (revenue)
Pfizer | Vaccines, cardiology, oncology | Eliquis (7.3), Prevnar (6.4), Vyndaqel (5.4)
Merck & Co. | Oncology, vaccines | Keytruda (25.0), Gardasil (8.6)
J&J | Immunology, oncology | Stelara (10.9), Darzalex (9.7)
AstraZeneca | Oncology, CVRM, respiratory | Farxiga (7.7), Tagrisso (6.6), Enhertu (4.0)
Roche | Oncology, neurology, hemophilia | Ocrevus (5.7), Hemlibra (4.0)
Novartis | Cardiology, immunology | Entresto (6.6), Cosentyx (5.2)

AI Investment Snapshot (2025)

Company | AI Highlights (2025) | Scale/Impact
Pfizer | AI cuts discovery from years → 30 days; heavy partnerships | Leader in adoption
AstraZeneca | Evinova AI trials unit; $2.5B AI Beijing hub; 12k+ staff upskilled | Very aggressive (>85% productivity gain)
Merck & Co. | Gen-AI for protocols; 20+ AI “blockbusters” in pipeline | >$1.2B spend
J&J | 900+ AI projects → $500M savings; surgical AI training | Broad internal rollout
Roche | AI diagnostics + drug discovery; $55M Manifold Bio deal | Strong in diagnostics
Novartis | AI pipeline acceleration; data-science alliances | Integrated into $10B R&D
Dr. Lev-Ari’s Prompt:
  • AI in biotech startups

AI in Biotech Startups: 2025 Landscape

Artificial Intelligence (AI) is revolutionizing biotech startups by accelerating drug discovery, optimizing clinical trials, and enabling personalized medicine. In 2025, AI applications like generative models for molecule design and predictive analytics for protein structures are slashing R&D timelines from years to months, reducing costs by up to 30%, and boosting success rates. The sector blends biology’s complexity with AI’s data-crunching power, drawing massive VC interest amid ethical debates on gene editing and biosecurity.

Key Trends

  • Explosive Funding Growth: U.S. healthcare VC hit $23B in 2024, with ~30% ($5.6B) targeting AI-biotech—nearly triple 2023 levels. Biopharma startups raised $24.2B across 568 deals, up from $10.1B in 2023. Q3 2025 saw $97B global VC (up 38% YoY), half to AI firms. Mega-rounds ($100M+) surged, focusing on drug discovery and rare diseases.
  • Market Expansion: Global AI-biotech market: $4.6B in 2025, projected to $11.4B by 2030 (20% CAGR). Biotech overall to $5.85T by 2034 (13.6% CAGR), driven by AI in gene editing, microbiome therapeutics, and automation.
  • Compute Demands: Biotech AI requires massive GPUs; Nvidia’s AI sales hit $41.1B in one 2025 quarter (56% YoY). Startups lean on cloud partnerships (e.g., AWS-Recursion) or in-house clusters, with hyperscalers eyeing $3-4T infrastructure by 2030.
  • Regional Hotspots: North America leads (42% share) via FDA support and hubs like Boston/SF; Europe grows via funding/regulatory advances; Asia (e.g., China) via collaborations.
  • Challenges & Ethics: Selective funding favors clinically validated firms; rising M&A (big pharma acquiring AI startups); ethical scrutiny on AI gene editing (e.g., CRISPR enhancements).

 

Notable Startups & Recent Funding

Here’s a snapshot of standout AI-biotech startups, highlighting innovations and 2024-2025 funding:

Startup | Focus Area | Key Innovation | Recent Funding (2025)
Insilico Medicine | Drug Discovery | Generative AI for novel molecules | $100M+ Series D (ongoing)
Recursion Pharmaceuticals | Phenomics & Drug Repurposing | AI mapping diseases via cell images | $50M partnership w/ Nvidia
Absci Corp. | Protein Design | AI-generated antibodies/proteins | $20M from AMD (Jan 2025)
TandemAI | Drug Discovery | ML for molecular simulation | $22M Series A extension
Red Queen Bio | Biosecurity | AI for pathogen detection/design | $15M Seed led by OpenAI
Antiverse | Antibody Design | AI predicting antibody structures | Deal w/ Nxera Pharma (2024)
Deep Genomics | Genomics | AI extracting insights from DNA data | VC round undisclosed
Aizen Therapeutics | Molecule Generation | Proprietary AI engines for trials | Early-stage VC

These firms exemplify AI’s shift from hype to pipeline impact—e.g., Insilico’s AI-discovered drug entered Phase II trials in record time.

Emerging Applications

AI’s top uses in biotech startups:

  1. Drug Discovery: ML predicts drug-target interactions (e.g., AlphaFold-inspired tools cut screening costs 70%).
  2. Genomics & Diagnostics: AI analyzes vast datasets for rare variants, powering precision oncology.
  3. Clinical Trials: Predictive models optimize patient matching, reducing failures by 20-30%.
  4. Synthetic Biology: AI designs custom microbes for therapeutics or biofuels.
  5. Automation Labs: AI-robotics handle experiments autonomously, boosting throughput 10x.

Future Outlook

2025 signals a “quality over quantity” pivot: Investors prioritize AI startups with real-world data and partnerships (e.g., Y Combinator, Google for Startups). M&A will rise as big pharma (Pfizer, Merck) snaps up talent. Yet, compute shortages and regs (e.g., FDA AI guidelines) pose hurdles. For founders, blending AI with wet-lab validation is key to unlocking the next wave of breakthroughs—potentially curing diseases once deemed untreatable. If you’re building or investing, focus on
  • rare diseases or
  • oncology

for the hottest bets.

Additional Competitors by Product Line

@Grok Response

https://x.com/i/grok/share/Ple3W4IDlwhyjUwBUU4JAg2OD

Key Products and Focus Areas

Pfizer’s portfolio spans immunology, oncology, cardiology, endocrinology, neurology, and vaccines. In 2024, its top revenue generators included:

  • Eliquis (apixaban, blood thinner): $7.3 billion (11% of total revenue).
  • Prevnar family (pneumococcal vaccines): $6.4 billion (10%).
  • Paxlovid (COVID-19 antiviral): $5.7 billion (9%).
  • Vyndaqel (tafamidis, for rare heart disease): $5.4 billion (8%).
  • Comirnaty (COVID-19 vaccine): $5.3 billion (8%).
  • Ibrance (palbociclib, breast cancer drug): $4.3 billion (6%).

The company is investing heavily in AI-driven drug discovery (e.g., 2024 partnerships with NVIDIA and Saama) and next-gen therapies like RSV vaccines, gene therapies for muscular dystrophy, and oral GLP-1 drugs for obesity/diabetes. Challenges include patent expirations (e.g., Eliquis in 2026-2028) and biosimilar competition, but Pfizer’s diversified pipeline positions it for mid-single-digit growth in 2025.

Pfizer’s Main Competitors

Pfizer operates in a highly competitive $1.5 trillion global pharma market, where rivals vie for dominance in oncology, vaccines, immunology, and emerging areas like obesity and gene therapy. Its primary competitors are fellow “Big Pharma” giants, often overlapping in therapeutic areas. Based on revenue, market share, and direct product rivalries, here are the top ones (ranked roughly by 2024 revenue for context):

Competitor | Headquarters | 2024 Revenue (Pharma Segment) | Key Strengths & Overlaps with Pfizer | Notable Products
Johnson & Johnson (J&J) | New Brunswick, NJ, USA | $88.8 billion (total; pharma ~$57.8 billion) | Diversified (pharma, devices, consumer); strong in oncology, immunology, vaccines. Competes directly in blood thinners and cancer therapies. | Darzalex (multiple myeloma), Stelara (psoriasis), Imbruvica (blood cancer).
Merck & Co. | Rahway, NJ, USA | $64.2 billion | Oncology leader; vaccines and antivirals. Keytruda outsells Pfizer’s Ibrance; rivals in HPV/pneumococcal vaccines. | Keytruda (immunotherapy, $29.5 billion), Gardasil (HPV vaccine).
Roche Holding AG | Basel, Switzerland | ~$68 billion (group; pharma ~$50 billion) | Diagnostics + pharma powerhouse; dominates oncology and neurology. Competes in antibody-drug conjugates post-Pfizer’s Seagen buy. | Ocrevus (MS), Hemlibra (hemophilia), Avastin (cancer).
AbbVie Inc. | North Chicago, IL, USA | $56.3 billion | Immunology and neuroscience focus; biosimilars erode shared markets like Humira vs. Pfizer’s Enbrel alternatives. | Humira (rheumatoid arthritis), Skyrizi (psoriasis), Rinvoq (arthritis).
Novo Nordisk A/S | Bagsværd, Denmark | ~$40 billion (est.) | Diabetes/obesity specialist; exploding GLP-1 demand challenges Pfizer’s emerging obesity pipeline. | Ozempic/Wegovy (semaglutide for diabetes/weight loss).
Eli Lilly and Co. | Indianapolis, IN, USA | ~$45 billion (est.; projecting $58-61 billion in 2025) | Diabetes/obesity and neurology; Mounjaro rivals Pfizer’s danuglipron in GLP-1 space. | Mounjaro/Zepbound (tirzepatide), Verzenio (breast cancer).
AstraZeneca PLC | Cambridge, UK | $50.95 billion | Oncology, respiratory, and cardiovascular; Tagrisso competes with Pfizer’s Lorbrena in lung cancer. | Tagrisso (lung cancer), Farxiga (diabetes/heart), Imfinzi (immunotherapy).
Novartis AG | Basel, Switzerland | ~$48 billion | Gene therapy and immunology; rivals in rare diseases and oncology. | Cosentyx (psoriasis), Entresto (heart failure), Kisqali (breast cancer).
Bristol-Myers Squibb (BMS) | New York, NY, USA | $47.8 billion | Oncology and immunology; Opdivo directly competes with Pfizer’s Bavencio in checkpoint inhibitors. | Opdivo (immunotherapy), Eliquis (co-developed with Pfizer, but BMS handles ex-US).
Sanofi S.A. | Paris, France | ~$46 billion | Vaccines and rare diseases; Dupixent challenges in immunology; rivals Prevnar in pneumococcal vaccines. | Dupixent (atopic dermatitis), Lantus (diabetes).

These competitors collectively hold over 50% of the global pharma market share. Emerging threats include biotech upstarts like Moderna (mRNA vaccines, directly rivaling Comirnaty) and generics firms like Teva (eroding post-patent revenues). In 2025, competition intensifies in obesity (Novo/Lilly lead) and oncology (Merck/Roche dominate), with tariffs and pricing pressures (e.g., US executive orders) affecting all.

Pfizer’s edge comes from its scale and partnerships (e.g., BioNTech for mRNA), but it must navigate revenue dips from COVID fade-out and biosimilars. For 2025 projections, analysts expect 6-9% growth across the sector, with Pfizer targeting modest gains through oncology expansions. If you’d like deeper dives on specific rivals, financials, or products, let me know!

AI Acquisitions by Big Pharma


AI in Pharma: From Tool to Value Creator 🚀

Over the past 5 years, pharma’s AI footprint has expanded rapidly and the investment trends tell a clear story.

💡 ~60% of collaborations and acquisitions now focus on Target Discovery and Validation — pharma’s main frontier to shorten early-stage R&D timelines and boost success rates through Precision Biology.
⚗️ The second-largest area (~25%) centers on AI-driven Drug Design and Molecule Optimization, where algorithms accelerate Candidate Selection, refine molecular properties, and enable New Modalities such as Biologics and Cell and Gene Therapies.
🏆 Among top players, Eli Lilly and Company stands out as the most active investor — making AI a true Strategic Capability, not just another research tool.

The takeaway?
AI isn’t just supporting pharma anymore, it’s reshaping it.

Source: Bita Sehat, PhD, MBA


Article was summarized by @Grok on 11/29/2025
Summary of the Article:
“Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma”
This November 15, 2025, article on PharmaceuticalIntelligence.com (curated by LPBI Group) makes a compelling case for leveraging LPBI’s decade-plus of expert-curated digital IP as a high-fidelity, rights-clean training corpus to supercharge AI-driven innovation in Big Pharma. It contrasts the limitations of generic LLMs (like GPT) with the need for domain-specific, ontologized biomedical data, positioning LPBI’s assets as a “defensible moat” for accelerating R&D in drug discovery, precision medicine, and beyond. The piece blends market analysis of pharma giants with a deep dive into LPBI’s portfolio, emphasizing its multimodal, metadata-rich structure for NLP, knowledge graphs, and fine-tuning SLMs/LLMs.

Introduction

The thesis: In an era where AI investments in pharma (e.g., Eli Lilly’s aggressive funding) focus on slashing R&D timelines by 60%+ in target discovery and 25% in drug design, LPBI’s proprietary corpus—built since 2012 via human curation, digitization, and ontology mapping—offers unmatched authenticity over web-scraped data. It addresses GPT’s biomedical shortcomings (e.g., failing to infer fusion proteins or noncanonical pathways without curated inputs) and highlights LPBI’s foresight in text analysis for generative AI augmentation. A key quote from Data2Discovery: “We are able to improve drug discovery now as well as demonstrating new fast-cycle AI-driven processes that will have a revolutionary impact on drug discovery if fully implemented.”

Portfolio Overview

LPBI’s ~9 GB, debt-free, multimodal corpus is privately held, expert-curated (e.g., by Prof. Marcus W. Feldman and Dr. Stephen J. Williams), and ingest-ready for AI pre-training/evaluations. It spans five key asset classes, each with metadata exports, timestamps, crosslinks, and centralized rights for model training:

 

Asset Class | Description & Size | Unique Value Proposition
I: Scientific Articles | 6,250+ articles on PharmaceuticalIntelligence.com (~2.5M views); covers genomics, oncology, immunology, etc. | Live ontology, author/role labels, view histories; enables temporal NLP for trend analysis.
II: e-Books | 48 bilingual (English/Spanish) volumes in 5 BioMed e-Series (e.g., Series A: Cardiovascular, 6 vols., $515 total; Series E: Patient-Centered, 4 vols., $274); 151,000+ page downloads; 2,728 articles. | Peer-reviewed, senior-editor TOCs; pay-per-view model proves demand; ideal for entity-relationship extraction.
III: e-Proceedings | 100+ from biotech/genomics conferences (2013–2025); +50 tweet collections as speaker quotes with affiliations. | Real-time event curation; captures emerging insights for knowledge graph augmentation.
V: Biological Images | 7,500+ images in Digital Art Media Gallery; embedded as prior art in texts. | Expert-contextualized visuals; supports multimodal AI for image-text pairing in diagnostics.
X: Audio Podcasts | 300+ interviews with scientific leaders (e.g., Nobel laureates like Jennifer Doudna); classified by themes like CRISPR, mRNA vaccines. | Transcripts + NLP WordClouds; adds auditory/verbal depth for voice-enabled AI copilots.

The portfolio’s “living ontology” allows seamless integration into tools like InfraNodus for concept mapping.

AI Training Relevance

Unlike PubMed’s unstructured dumps, LPBI’s assets are pre-annotated for concept extraction (e.g., gene-disease-drug dyads), reducing hallucinations and bias in LLMs. A case study integrates curation with ChatGPT-5: Manual ontology + knowledge graphs uncovered novel WNT/Hedgehog interactions in lung cancer, generating research questions like: “How does the interaction between [[EGFR]] mutations and sex-specific gene alterations, including [[RBM10]], influence treatment outcomes in lung adenocarcinoma?” This hybrid approach outperforms solo GPT, proving the corpus’s role in trustworthy biomedical inference.

Applications

  • Drug Discovery: ML prompts on dyads to mimic Pfizer’s AI (e.g., reducing discovery from years to months); supports biologics/cell therapies.
  • Precision Medicine: 547 articles for patient subtyping; 3,615 disease-focused pieces for predictive modeling.
  • Diagnostics & Rare Diseases: 560 rare disease articles + 30 on ATTR-CM for early detection (e.g., via Pfizer’s ASPIRE AI).
  • Clinical/Regulatory: Streamlines trials, evidence synthesis, and content generation (e.g., via Pfizer’s “Charlie” copilot, saving hours on lit reviews).
  • Generative Tools: Fine-tune for compliant, domain-aware outputs in medical affairs/safety.
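The gene-disease-drug dyad annotation described above can be sketched as a minimal knowledge-graph build. The triples and the `drugs_for` helper here are hypothetical illustrations of the idea, not LPBI's actual schema:

```python
# Minimal sketch of a gene-disease-drug knowledge graph built from
# expert-annotated triples (hypothetical examples, not LPBI's schema).
from collections import defaultdict

# Each annotation links a gene to a disease and a candidate drug.
triples = [
    ("EGFR", "lung adenocarcinoma", "osimertinib"),
    ("EGFR", "lung adenocarcinoma", "erlotinib"),
    ("BRAF", "melanoma", "vemurafenib"),
]

# Index the graph by gene so downstream prompts can retrieve
# all disease/drug relations for a given entity.
graph = defaultdict(list)
for gene, disease, drug in triples:
    graph[gene].append({"disease": disease, "drug": drug})

def drugs_for(gene):
    """Return the distinct drugs linked to a gene via any disease."""
    return sorted({rel["drug"] for rel in graph[gene]})

print(drugs_for("EGFR"))  # both EGFR-targeted relations
```

A corpus that is already annotated at this triple level can feed a knowledge graph directly, which is the pre-annotation advantage claimed above.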

Challenges & Solutions

  • Data Quality/Bias: Unstructured text leads to GPT errors; solved by expert curation + ontology for accurate, diverse inputs.
  • Regulatory Compliance: Rights fragmentation; addressed via LPBI’s centralized assignment for NDA-secure licensing.
  • Scalability: Generic data overload; LPBI’s focused, high-signal corpus enables fast-cycle AI without noise.

Recommendations

We urge Big Pharma (e.g., Pfizer, Merck; the table highlights Merck’s $64.2B revenue and Keytruda focus) to acquire the full portfolio for end-to-end knowledge bases, or to license it under NDA for SLM/LLM training (English/Spanish).
  • The recommendation emphasizes “buy the moat” for a competitive edge in 2025’s 6–9% sector growth (obesity/oncology hotspots).
Contact: avivalev-ari@alum.berkeley.edu.
Conclusion
LPBI’s corpus isn’t just data—it’s a “scientific revolution enabler,” blending human creativity with AI for transformative pharma impact. Validated on extraction tasks, it’s primed for partnerships, outshining combinatorial predictions with authentic, curated depth. This article builds on prior LPBI mappings (e.g., our podcast TOC summary), reinforcing the portfolio’s strategic fit for AI.

Read Full Post »

Real Time Conference Coverage: Advancing Precision Medicine Conference, Philadelphia PA, November 1–2, 2024: Deliverables

Curator: Stephen J. Williams, Ph.D.

Below are deliverables in the form of real-time conference coverage from the Advancing Precision Medicine Conference held this year in Philadelphia, PA.  The meeting brought together scientists and clinicians to discuss the challenges faced in implementing genomics and proteomics into the precision medicine decision-making workflow.  As summarized in a forthcoming release at the 2025 ASCO, there are many issues and hindrances to incorporating data obtained from sequencing into a personalized medicine strategy.  The meeting focused on two main disease states, oncology and cardiovascular disease; however, most of the live meeting notes are from the oncology track.  In general, it was discussed that three areas need to be addressed to correctly and more frequently incorporate precision medicine and genomic panel testing into the clinical decision-making workflow:

  1. Access to testing panels and testing methodology for both doctors and patients
  2. Expert interpretation of results, including the algorithms needed to analyze the data
  3. More education in molecular biology, omics data, and methodology in medical school to address knowledge gaps between clinicians and scientists

The issues can be summarized by a JCO report to ASCO in 2022:

Helen Sadik, PhD; Daryl Pritchard, PhD (https://orcid.org/0000-0003-2675-0371, dpritchard@personalizedmedicinecoalition.org); Derry-Mae Keeling, BSc; Frank Policht, PhD; Peter Riccelli, PhD; Gretta Stone, BS; Kira Finkel, MSPH; Jeff Schreier, MBA; and Susanne Munksted, MS. Impact of Clinical Practice Gaps on the Implementation of Personalized Medicine in Advanced Non–Small-Cell Lung Cancer. JCO Precision Oncology 2022; Volume 6. https://doi.org/10.1200/PO.22.00246

Personalized medicine presents new opportunities for patients with cancer. However, many patients do not receive the most effective personalized treatments because of challenges associated with integrating predictive biomarker testing into clinical care. Patients are lost at various steps along the precision oncology pathway because of operational inefficiencies, limited understanding of biomarker strategies, inappropriate testing result usage, and access barriers. We examine the impact of various clinical practice gaps associated with diagnostic testing-informed personalized medicine strategies on the treatment of advanced non–small-cell lung cancer (aNSCLC).

The authors used the Diaceutics Data Repository, a multisource database including commercial and Medicare claims and laboratory data from over 500,000 patients with non–small-cell lung cancer in the United States. They analyzed the number of patients with newly diagnosed aNSCLC who could have, but did not, benefit from a personalized treatment. The analysis focused on identifying at which steps of care gaps existed that precipitated either the lack of precision medicine testing or the incorrect interpretation of results.

Their conclusions were alarming:

Most patients with aNSCLC eligible for precision oncology treatments do not benefit from them because of clinical practice gaps. This finding is likely reflective of similar gaps in other cancer types. An increased understanding of the impact of each practice gap can inform strategies to improve the delivery of precision oncology, helping to fully realize the promise of personalized medicine.

The links to the live meeting notes are given below, and a collection of tweets follows (please note this meeting did not have a Twitter hashtag).

Real Time Coverage Advancing Precision Medicine Annual Conference, Philadelphia PA November 1,2 2024

https://pharmaceuticalintelligence.com/2024/11/01/real-time-coverage-advancing-precision-medicine-annual-conference-philadelphia-pa-november-12-2024/

Real Time Coverage Morning Session on Precision Oncology: Advancing Precision Medicine Annual Conference, Philadelphia PA November 1 2024

https://pharmaceuticalintelligence.com/2024/11/01/real-time-coverage-morning-session-on-precision-oncology-advancing-precision-medicine-annual-conference-philadelphia-pa-november-1-2024/

Real Time Coverage Afternoon Session on Precision Oncology: Advancing Precision Medicine Annual Conference, Philadelphia PA November 1 2024

https://pharmaceuticalintelligence.com/2024/11/01/real-time-coverage-afternoon-session-on-precision-oncology-advancing-precision-medicine-annual-conference-philadelphia-pa-november-1-2024/ 

Real Time Coverage Morning Session on Precision Oncology: Advancing Precision Medicine Annual Conference, Philadelphia PA November 2 2024

https://pharmaceuticalintelligence.com/2024/11/04/real-time-coverage-morning-session-on-precision-oncology-advancing-precision-medicine-annual-conference-philadelphia-pa-november-2-2024/ 

Tweet Collection

Tweet Collection Advancing Precision Medicine Conference November 1,2 2024 Philadelphia PA

 

Read Full Post »

DeepSeek-V3 and Reasoning Model R1: Four Views (a) Explanations (b) The Chinese Perspective (c) DeepSeek Impact on Demand for Inference Chips & Training Chips, and (d) LPBI Group: Expert Content for ML Models in Healthcare, Pharmaceutical, Medical and Life Sciences

Curator: Aviva Lev-Ari, PhD, RN

With the announcement of DeepSeek on January 27, 2025, it became compelling to cover several aspects of this hot Artificial Intelligence Technology.

This curation has four Parts: 

Part A: Explanations 

Part B: The Chinese Perspective

Part C: DeepSeek potential Impact on Demand for Inference Chips & Training Chips, and

Part D: LPBI Group: Expert Content for ML Models in Healthcare, Pharmaceutical, Medical and Life Sciences

 

Part A: Explanations by Morgan Brown

@morganb

Jan 27   Read on X

🧵 Finally had a chance to dig into DeepSeek’s r1…

Let me break down why DeepSeek’s  AI innovations are blowing people’s minds (and possibly threatening Nvidia’s $2T market cap) in simple terms… 

0/ first off, shout out to @doodlestein who wrote the must-read on this here:

The Short Case for Nvidia Stock
All the reasons why Nvidia will have a very hard time living up to the currently lofty expectations of the market.
https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda

1/ First, some context: Right now, training top AI models is INSANELY expensive. OpenAI, Anthropic, etc. spend $100M+ just on compute. They need massive data centers with thousands of $40K GPUs. It’s like needing a whole power plant to run a factory. 

2/ DeepSeek just showed up and said “LOL what if we did this for $5M instead?” And they didn’t just talk – they actually DID it. Their models match or beat GPT-4 and Claude on many tasks. The AI world is (as my teenagers say) shook. 

3/ How? They rethought everything from the ground up. Traditional AI is like writing every number with 32 decimal places. DeepSeek was like “what if we just used 8? It’s still accurate enough!” Boom – 75% less memory needed. 
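The precision idea in point 3 can be made concrete with NumPy: casting weights from 32-bit to 16-bit floats halves memory at a small rounding cost (stock NumPy has no 8-bit float type, so this is an analogy for the reduced-precision approach the thread describes, not DeepSeek's actual implementation):

```python
import numpy as np

# A toy weight matrix stored at full 32-bit precision.
weights32 = np.array([[0.12345678, -1.5],
                      [3.14159265, 0.001]], dtype=np.float32)

# Cast to half precision: half the memory per parameter.
weights16 = weights32.astype(np.float16)

print(weights32.nbytes, weights16.nbytes)  # 16 bytes -> 8 bytes

# The rounding error introduced is small relative to the stored values.
max_err = float(np.max(np.abs(weights32 - weights16.astype(np.float32))))
print(max_err)
```

The same trade applies at model scale: fewer bits per parameter means less memory traffic and more parameters per GPU, at the cost of bounded rounding error.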

4/ Then there’s their “multi-token” system. Normal AI reads like a first-grader: “The… cat… sat…” DeepSeek reads in whole phrases at once. 2x faster, 90% as accurate. When you’re processing billions of words, this MATTERS. 

5/ But here’s the really clever bit: They built an “expert system.” Instead of one massive AI trying to know everything (like having one person be a doctor, lawyer, AND engineer), they have specialized experts that only wake up when needed.

[color added by curator, See Part D, below]

6/ Traditional models? All 1.8 trillion parameters active ALL THE TIME. DeepSeek? 671B total but only 37B active at once. It’s like having a huge team but only calling in the experts you actually need for each task. 
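The sparse-activation idea in points 5-6 can be sketched as top-k routing: a gate scores every expert, but only the best two actually run for a given input. The dimensions below are toy values chosen for illustration, not DeepSeek's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d, k = 8, 4, 2  # 8 experts, 4-dim features, top-2 routing

# Each expert is a small weight matrix; the gate scores experts per input.
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate = rng.standard_normal((d, n_experts))

x = rng.standard_normal(d)

scores = x @ gate                # one gate score per expert
top_k = np.argsort(scores)[-k:]  # indices of the k highest-scoring experts
weights = np.exp(scores[top_k])
weights /= weights.sum()         # softmax over the selected experts only

# Only the chosen experts compute; the other six stay idle.
y = sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

active_fraction = k / n_experts
print(active_fraction)  # 0.25 of expert parameters touched per input
```

Scaled up, this is how a model can hold 671B parameters while activating only 37B per token: total capacity grows with the expert count, but per-token compute grows only with k.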

7/ The results are mind-blowing:
– Training cost: $100M → $5M
– GPUs needed: 100,000 → 2,000
– API costs: 95% cheaper
– Can run on gaming GPUs instead of data center hardware 

8/ “But wait,” you might say, “there must be a catch!” That’s the wild part – it’s all open source. Anyone can check their work. The code is public. The technical papers explain everything. It’s not magic, just incredibly clever engineering. 

9/ Why does this matter? Because it breaks the model of “only huge tech companies can play in AI.” You don’t need a billion-dollar data center anymore. A few good GPUs might do it. 

10/ For Nvidia, this is scary. Their entire business model is built on selling super expensive GPUs with 90% margins. If everyone can suddenly do AI with regular gaming GPUs… well, you see the problem. 

11/ And here’s the kicker: DeepSeek did this with a team of <200 people. Meanwhile, Meta has teams where the compensation alone exceeds DeepSeek’s entire training budget… and their models aren’t as good. 

12/ This is a classic disruption story: Incumbents optimize existing processes, while disruptors rethink the fundamental approach. DeepSeek asked “what if we just did this smarter instead of throwing more hardware at it?” 

13/ The implications are huge:
– AI development becomes more accessible
– Competition increases dramatically
– The “moats” of big tech companies look more like puddles
– Hardware requirements (and costs) plummet 

14/ Of course, giants like OpenAI and Anthropic won’t stand still. They’re probably already implementing these innovations. But the efficiency genie is out of the bottle – there’s no going back to the “just throw more GPUs at it” approach. 

15/ Final thought: This feels like one of those moments we’ll look back on as an inflection point. Like when PCs made mainframes less relevant, or when cloud computing changed everything.

 AI is about to become a lot more accessible, and a lot less expensive. The question isn’t if this will disrupt the current players, but how fast.

/end 

P.S. And yes, all this is available open source. You can literally try their models right now. We’re living in wild times! 🚀 

Momma, I’m going viral! No substack or gofundme to share but a few things to add/clarify:

1/ The DeepSeek app is not the same thing as the model. Apps are owned and operated by a Chinese corporation, the model itself is open source.

2/ Jevons paradox is the counter argument. Thanks papa @satyanadella. Could be a mix shift in chip type, compute type, etc. but we’re constrained by power and compute right now, not demand constrained.

3/ The techniques used are not ground breaking. It’s the combination of them w/the relative model performance that is so exciting. These are common eng techniques that combined really fly in the face of more compute is the only answer for model performance. Compute is no longer a moat.

4/ Thanks to all for pointing out my NVIDIA market cap numbers miss and other nuances – will do better next time, coach. 🫡 

SOURCE

https://threadreaderapp.com/thread/1883686162709295541.html

 

Part B: The Chinese Perspective

© 2025 Jordan Schneider

DeepSeek: The View from China

China’s takes are better than yours

 

SOURCE

From: ChinaTalk <chinatalk@substack.com> on behalf of ChinaTalk <chinatalk@substack.com>
Reply-To: ChinaTalk <reply+2ktto4&8t4ds&&6fa8442469b96573268378f7538ff49c28c45589f5811b2a55b30e89ee8ff94d@mg1.substack.com>
Date: Tuesday, January 28, 2025 at 9:54 AM
To: Aviva Lev-Ari <avivalev-ari@alum.berkeley.edu>
Subject: DeepSeek: The View from China

And

https://www.chinatalk.media/p/deepseek-the-view-from-china

https://open.substack.com/pub/chinatalk/p/deepseek-the-view-from-china?r=8t4ds&utm_campaign=post&utm_medium=email

The Mystical DeepSeek. ‘The most important thing about DeepSeek is pushing intelligence’

  1. Founder and CEO Liang Wenfeng is the core person of DeepSeek. He is not the same type of person as Sam Altman. He is very knowledgeable about technology.
  2. DeepSeek has a good reputation because it was the first to release the reproducible MoE, o1, etc. It succeeded in acting early, but whether or not it did the absolute best remains to be seen. Moving forward, the biggest challenges are that resources are limited and can only be invested in the most high-potential areas. DeepSeek’s research and culture are still strong, and if given 100,000 or 200,000 chips, they might be able to do better.
  3. From its preview to its official release, DeepSeek’s model’s long-context capabilities have improved rapidly. DeepSeek’s long-context 20K can be achieved with very conventional methods.
  4. The CEO of Scale.ai said that DeepSeek has 50,000 chips, but that is definitely not reality. According to public information, DeepSeek had 10,000 old A100 chips and possibly 3,000 H800 cards before the ban. DeepSeek pays great attention to compliance and has not purchased any non-compliant GPUs, so it should have few chips. The way the United States uses GPUs is too extravagant.
  5. DeepSeek focused all its efforts on a single goal and subsequently gave up many things, such as multimodality. DeepSeek is not just serving people, but seeking intelligence itself, which may have been a key factor in its success.
  6. In some ways, quant trading can be said to be the business model of DeepSeek. Huanfang (another quantitative investment company founded by Liang Wenfeng) is the product of the last round of machine learning. DeepSeek’s highest priority is to push intelligence. Money and commercialization are not high priorities. China needs several leading AI labs to explore things that can beat OpenAI. Intelligence takes a long time to develop, and has begun to differentiate again this year, so new innovations are bound to result.
  7. From a technical perspective, DeepSeek has been instrumental as a training ground for talent.
  8. The business model of AI labs in the United States is not good either. AI does not have a good business model today and will require viable solutions in the future. Liang Wenfeng is ambitious; DeepSeek does not care about the model and is just heading towards AGI.
  9. Many of the insights from DeepSeek’s paper involve saving hardware costs. On a couple of big dimensions of scaling, DeepSeek’s techniques are able to reduce costs.
  10. In the short-term, everyone will be driven to think about how to make AI more efficient. In the long-run, questions about computing power will remain. Demand for compute remains strong and no company has enough.
  11. Discussing DeepSeek’s organization:
    1. When investing, we always choose the most advanced talent. But we see from DeepSeek’s model (the team is mostly smart young people who graduated from domestic universities) that a group that coheres well may also gradually advance their skills together. It has yet to be seen whether poaching one person might break DeepSeek’s advantage, but for now this seems unlikely.
    2. While there’s a lot of money in the market, DeepSeek’s core advantage is its culture. The research culture of DeepSeek and ByteDance are similar, and both are critical for determining the availability of funding and long-term viability. Only with an important business model can there be a sustainable culture. Both DeepSeek and ByteDance have very good business models.
  12. Why did DeepSeek catch up so fast?
    1. Reasoning models require high-quality data and training. For LLMs or multimodal AI, it’s difficult to catch up with a closed source model from scratch. The architecture of pure reasoning models hasn’t changed much, so it’s easier to catch up in reasoning.
    2. One reason R1 caught up quickly was that the task was not particularly difficult. Reinforcement learning only made the model choices more accurate. R1 did not break through the efficiency of Consensus 32, spending 32 times the efficiency, which is equivalent to moving from deep processing to parallelization, which is not pushing the boundaries of intelligence, just making it easier.

Pioneers vs. Chasers: ‘AI Progress Resembles a Step Function – Chasers Require 1/10th the Compute’

Points 13 – 17

[Points 18–48 were a long technical discussion we’ve machine-translated below]

Why didn’t the other companies take the DeepSeek approach: ‘Models from the big labs need to maintain a low profile’

Points 49, 50

The Divergence and Bets of 2025 Technology: ‘Can We Find Architectures Beyond Transformer?’

Points 51 – 56

Have developers moved from closed-source models to DeepSeek? ‘Not yet’

Points 57 – 62

OpenAI Stargate’s $500B Narrative and Changes in Computing Power Demand

  1. The emergence of DeepSeek has led people to question the latest $500B narrative from Nvidia and OpenAI. There’s no verdict yet on compute — and OpenAI’s $500B narrative is their attempt to throw themselves a lifeline.
  2. Regarding the doubts about OpenAI’s $500B infrastructure investment: because OpenAI is a commercial company, it could be risky if debt is involved.
  3. $500B is an extreme number — likely to be executed over 4 or 5 years. SoftBank and OpenAI are the leading players (the former providing capital, the latter technology) — but SoftBank’s current funds can’t support $500B; rather SoftBank is using its assets as collateral. OpenAI, meanwhile, isn’t very cash-rich either, and other AI companies are more technical participants than they are funding providers. So it will be a struggle to fully realize the $500B vision.
  4. OpenAI’s $500B computing power makes sense: during the exploration phase, the cost of trial and error is high, with both human and investment costs being substantial. But although the path isn’t clear and getting from o1 to R1 won’t be easy, at least we can see what the finish line looks like: we can track the intermediate markers, and from day one, aim for others’ proven end states; this gives us a better bearing on our progress. Being at the frontier exploring the next generation is most resource-intensive. The followers don’t bear exploration costs — they’re always just following. If Google/Anthropic succeed in their exploration areas, they might become the frontier company.
  5. In the future, Anthropic might replace all their inference with TPU or AWS chips.
  6. Domestic Chinese companies were previously constrained by computing power, but now it’s proven that the potential technical space is vast. For more efficient models, we might not need especially large cards — we can provide relatively customized chips that can be adapted for compatibility with AMD and ASIC. From an investment perspective, Nvidia’s moat is very high, but ASIC will have yet greater opportunities.
  7. The DeepSeek situation isn’t really about compute — it’s about America realizing China’s capabilities and efficiency. DeepSeek isn’t Nvidia’s vulnerability; Nvidia will grow as long as AI grows. Nvidia’s strength is its ecosystem, which has been built up over a long time. Indeed, when technology develops rapidly, the ecosystem is crucial. The real crisis comes, though, when technology matures like electricity: it becomes commoditized; then, everyone will focus on products, and many ASIC chips will emerge for specific scenario optimization.

 

Impact on the Secondary Market: ‘Short-term sentiment is under pressure, but the long-term narrative continues’

Points 70 – 74

Open-Source vs Closed Source: ‘If capabilities are similar, closed source will struggle.’

Points 75 – 78

The Impact of DeepSeek’s Breakthrough: ‘Vision Trumps Technology’

  1. DeepSeek’s breakthrough made the outside world realize China’s AI strength. Previously, outsiders thought China’s AI progress lagged America by two years, but DeepSeek shows the gap is actually 3 to 9 months, and in some areas, even shorter.
  2. When it comes to technologies and sectors that America has historically blocked China from accessing, if China can break through nonetheless, those sectors ultimately become highly competitive. AI might follow this pattern — and DeepSeek’s success may well prove this.
  3. DeepSeek didn’t suddenly explode. R1’s impressive results reverberated throughout America’s entire AI establishment.
  4. DeepSeek stands on the shoulders of giants — but exploring the frontier still requires much more time and human capital cost. R1 doesn’t mean that future training costs will decrease.
  5. AI explorers definitely need more computing power; China, as a follower, can leverage its engineering advantages. How Chinese large-model teams use less computing power to produce results, thereby having some definite resilience — or even doing better — might end up being how the US-China AI landscape plays out in the future.
  6. China is still replicating technical solutions; reasoning was proposed by OpenAI in o1, so the next gap between various AI labs will be about who can propose the next reasoning. Infinite-length reasoning might be one vision.
  7. The core difference between different AI labs’ models lies not in technology, but in what each lab’s next vision is.
  8. After all, vision matters more than technology.

Technical Discussion

There was a deep technical discussion in the article that we’ve machine-translated below.

Technical Detail 1: Supervised Fine-Tuning (SFT). ‘No need for SFT on the reasoning level’

Points 18 – 27

Technical Detail 2: Data. ‘DeepSeek values data annotation’

Points 28 – 30

Technical Detail 3: Distillation. ‘The limit of distillation is that model diversity drops’

Points 31 – 43

Technical Detail 4: Process Reward. ‘The upper limit of process reward is human, but the upper limit of outcome supervision is the model itself.’

Points 44 – 48

 

SOURCE of the Chinese Perspective

https://www.chinatalk.media/p/deepseek-the-view-from-china?utm_source=substack&publication_id=4220&post_id=155916148&utm_medium=email&utm_content=share&utm_campaign=email-share&triggerShare=true&isFreemail=true&r=8t4ds&triedRedirect=true

 

Part C: DeepSeek Impact on Demand for “Inference Chips” and “Training Chips”

 

Watch Full Interviews with Ark’s Cathie Wood

 

  • Ark’s Wood on DeepSeek, AI, Crypto, Trump | Cathie Wood Full Interview

https://youtu.be/EKELCEW8lNo?si=Zri9QqcMHsESgO8N

 

  • Cathie Wood Talks DeepSeek Lessons, Musk, Driverless Cars & UK

https://youtu.be/aThejSuMX-I?si=e9uM7TpoQ1Neb-cT

 

“Inference Chips” and “Training Chips”: Technology explained

 

AI Chips Explained: Training vs. Inference Processors Unveiled

https://www.friendsofthemetaverse.com/blog/ai-chips-explained-training-vs-inference-processors-unveiled

 

Inference chips and training chips are both types of AI chips that serve different purposes. Training chips are used to develop AI models, while inference chips are used to deploy those models in real-world applications. 

An “inference chip” is designed to efficiently execute a trained AI model on new data to make predictions in real-time, prioritizing low latency and power consumption, while a “training chip” is optimized for the computationally intensive process of initially training a machine learning model, requiring high processing power and memory bandwidth, often at the cost of power efficiency; essentially, inference chips are for “applying” the learned model, while training chips are for “learning” the model itself. 
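The division of labor described here can be shown with a toy model: training iterates gradient updates over the whole dataset (the compute-heavy phase a training chip accelerates), while inference is a single cheap evaluation of the frozen parameters (what an inference chip optimizes for). This is an illustrative sketch, not a claim about any particular chip:

```python
import numpy as np

# Toy dataset: y = 2x + 1 with no noise.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# --- Training phase: thousands of gradient-descent passes (compute-heavy) ---
w, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    pred = w * x + b
    grad_w = 2.0 * np.mean((pred - y) * x)  # gradient of mean squared error
    grad_b = 2.0 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

# --- Inference phase: one cheap evaluation of the learned model ---
def predict(x_new):
    return w * x_new + b

print(round(w, 3), round(b, 3))  # close to the true slope 2 and intercept 1
print(round(predict(10.0), 2))   # close to 21
```

The asymmetry is the whole point: training touched every data point 5,000 times, while serving one prediction is a single multiply-add, which is why the two phases reward different hardware.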

 

SOURCE

https://www.google.com/search?q=training+chips+vs+inference+chips

 

Training vs. Inference (But, Really: Training Then Inference)

To recap: the AI training stage is when you feed data into your learning algorithm to produce a model, and the AI inference stage is when your  algorithm uses that training to make inferences from data. Here’s a chart for quick reference: 

Training | Inference
Feed training data into a learning algorithm | Apply the model to the inference data
Produces a model comprising code and data | Produces output data
One time-ish (requirement to retain training data in case of re-training) | Often continuous

The difference may seem inconsequential at first glance, but defining these two stages helps to show implications for AI adoption, particularly for businesses. That is, because inference is much less resource-intensive (and therefore less expensive), it is likely to be much easier for businesses to integrate already-trained AI algorithms with their existing systems.

And, as always, we’re big believers in demystifying terminology for discussion purposes. Let us know what you think in the comments, and feel free to let us know what you’re interested in learning about next.

SOURCE

AI 101: Training vs. Inference

November 9, 2023 by Stephanie Doyle

https://www.backblaze.com/blog/ai-101-training-vs-inference/

 

r/AMD_Stock

“AI is really two markets, training and inference. Inference is going to be 100 times bigger than training. Nvidia is really good at training but very miscast at inference.” – Chamath Palihapitiya

Let’s discuss.

Below I layout AMD investor relevant time stamps:

7:35 – Meta AI business strategy

10:00 – Open source impact on LLM marketplace

12:10 – Telecom analogy (capex discussion)

16:35 – Closed source model economic viability

19:50 – Meta overspend on training (Nvidia)

SOURCE

https://www.reddit.com/r/AMD_Stock/comments/1cf765y/ai_is_really_two_markets_training_and_inference/

 

Part D: LPBI Group: Expert Content for ML Models in Healthcare, Pharmaceutical, Medical and Life Sciences

 

LPBI Group’s Journal http://pharmaceuticalintelligence.com has a fully developed ontology for the Healthcare, Pharmaceutical, Medical and Life Sciences domains of knowledge.

The ontology comprises 750+ categories of research. Each category consists of multiple scientific articles that were curated by domain knowledge experts in the fields of Healthcare, Pharmaceutical, Medical and Life Sciences.

  • Each article is a token, a Non-Fungible Token (NFT): a mutually exclusive, scientifically written piece that constitutes a Prior Art artifact from the perspective of intellectual property law and copyright law.
  • Each category of research is “an expert system knowledge base.”
  • Examples: The last column in this table represents the number of articles in the category of research.
  • Each curation is written by an expert in its domain, and
  • Each one of the 469 articles in Example #1 had been assigned this category of research by an EXPERT in the domain.
  • The universe of 469 articles represents an “Expert System Knowledge Base” in the domain of biological networks, gene regulation and evolution.
  • Example #1 comprises 469 NFTs
  • Example #2 comprises 1,022 NFTs
  • Example #3 comprises 681 NFTs
  • An ML model can be trained on the content of a Master file that includes the content of all 469 article files mentioned in Example #1; that process is performed on Training Chips.
  • The outcomes of the model involve the phase of Inference; that process is performed on Inference Chips.
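The master-file step above can be sketched in a few lines. The article texts and filenames here are hypothetical stand-ins for the 469 curated files, and the tagging scheme is illustrative, not LPBI's actual pipeline:

```python
# Sketch of assembling one category's articles into a master training corpus.
# The article texts below are hypothetical placeholders, not LPBI content.
articles = {
    "article_001.txt": "Curated analysis of gene regulatory networks ...",
    "article_002.txt": "Expert commentary on evolutionary genomics ...",
}

def build_master_corpus(docs, category):
    """Concatenate per-article files into one training corpus,
    tagging each document with its expert-assigned category."""
    parts = []
    for name, text in sorted(docs.items()):
        parts.append(f"<doc source={name} category={category!r}>\n{text}\n</doc>")
    return "\n".join(parts)

corpus = build_master_corpus(
    articles, "Biological Networks, Gene Regulation and Evolution")
print(corpus.count("<doc"))  # 2 documents in the master file
```

Because every document carries its expert-assigned category label, the resulting master file is labeled training data rather than an unstructured dump.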

 

Example #1: 469 articles in Biological Networks, Gene Regulation and Evolution

Expert, Author, Writer (EAW): Dr. Larry Bernstein
Degree: BS, MS, MD
Specialty: Clinical Pathology
e-Mail: larry.bernstein@gmail

N = 469

Biological Networks, Gene Regulation and Evolution

 

Points (a) to (f) are applicable as well to Examples #2 and #3, below, or to any other category of research, from the universe of 750+ categories, that consists of 50+ articles.

 

Example #2: 1,022 articles in CANCER BIOLOGY & Innovations in Cancer Therapy

Contributor EAW: Prabodh kumar Kandala, PhD Specialty: Preclinical Oncology, Prabodh.kandala@gmail.com

Contributor EAW: Ritu Saxena, PhD
ritu.uab@gmail.com

Contributor EAW: Dr. Larry Bernstein
Degree: BS, MS, MD
Specialty: Clinical Pathology
e-Mail: larry.bernstein@gmail.com

Contributor EAW: Stephen J. Williams
Degree: Ph.D. Pharmacology
Specialty: cancer pharmacology, ovarian specialty
e-Mail: sjwilliamspa@comcast.net
Phone: 215-487-0259

Contributor EAW: Tilda Barliya
Degree: PhD
Specialty: Cancer biology, cell biology, nanotechnology and drug delivery
e-Mail: tildabarliya@gmail.com
Phone: +972-50-8622289

N = 1,022

CANCER BIOLOGY & Innovations in Cancer Therapy

 

Example #3: 681 articles in Frontiers in Cardiology and Cardiovascular Disorders

EAW: Aviva Lev-Ari, PhD, RN

EAW: Justin D. Pearlman
Degree: MD ME PhD MA FACC
Specialty: Internal Medicine, Cardiology, Cardiovascular Radiology, Image Processing, Computer Science, Electronic Records
jdpmdphd@gmail.com
Phone:617-894-6888

N = 681

Frontiers in Cardiology and Cardiovascular Disorders

 

Respectively, the categories of research are:

  • “Expert systems domain knowledge bases”
  • They are ready for ML model development in each domain where a category comprises more than 50 articles.
  • Total number of categories of research in the Journal’s Ontology: N = 757 as of 1/28/2025

Read Full Post »

Nobel Prize in Physics 2024 to J.J. Hopfield and to G.E. Hinton

Reporter: Aviva Lev-Ari, PhD, RN

 

UPDATED on 10/19/2024

Why the Nobel Prize in Physics Went to AI Research 

Nobel committee recognizes scientists for foundation research in neural networks

 

The Nobel Prize Committee for Physics caught the academic community off-guard by handing the 2024 award to John J. Hopfield and Geoffrey E. Hinton for their foundational work in neural networks.

The pair won the prize for their seminal papers, both published in the 1980s, that described rudimentary neural networks. Though much simpler than the networks used for modern generative AI like ChatGPT or Stable Diffusion, their ideas laid the foundations on which later research built.

Even Hopfield and Hinton didn’t believe they’d win, with the latter telling The Associated Press he was “flabbergasted.” After all, AI isn’t what comes to mind when most people think of physics. However, the committee took a broader view, in part because the researchers based their neural networks on “fundamental concepts and methods from physics.”

“Initially, I was surprised, given it’s the Nobel Prize in Physics, and their work was in AI and machine learning,” says Padhraic Smyth, a distinguished professor at the University of California, Irvine. “But thinking about it a bit more, it was clearer to me why [the Nobel Prize Committee] did this.” He added that physicists in statistical mechanics have “long thought” about systems that display emergent behavior.

Hopfield first explored these ideas in a 1982 paper on neural networks. He described a type of neural network, later called a Hopfield network, formed by a single layer of interconnected neurons. The paper, which was originally categorized under biophysics, said a neural network could retain “memories” from “any reasonably sized subpart.”

Hinton expanded on that work to conceptualize the Boltzmann machine, a more complex neural network described in a 1985 paper Hinton co-authored with David H. Ackley and Terrence J. Sejnowski. They introduced the concept of “hidden units,” additional layers of neurons which exist between the input and output layers of a neural network but don’t directly interact with either. This makes it possible to handle tasks that require a more generalized understanding, like classifying images.

So, what’s the connection to physics?

Hopfield’s paper references the concept of a “spin glass,” a material in which disordered magnetic particles lead to complex interactions. Hinton and his co-authors drew on statistical mechanics, a field of physics that uses statistics to describe the behavior of particles in a system. They even named their network in honor of Ludwig Boltzmann, the physicist whose work formed the foundation of statistical mechanics.

And the connection between neural networks and physics isn’t a one-way street. Machine learning was crucial to the discovery of the Higgs boson, where it sorted the data generated by billions of proton collisions. This year’s Nobel Prize for Chemistry further underscored machine learning’s importance in research, as the award went to a trio of scientists who built an AI model to predict the structures of proteins.

While Hopfield and Hinton authored influential papers, their contributions to machine learning were cemented by their continued work, and both won multiple awards before the Nobel Prize. Among others, Hopfield won the Boltzmann Medal in 2022; Hinton received the IEEE Frank Rosenblatt Award in 2014, the IEEE James Clerk Maxwell Medal in 2016, and the Turing Award in 2018 (that last one alongside Yann LeCun and Yoshua Bengio).

Smyth saw Hopfield’s efforts first-hand as a student at the California Institute of Technology. “Hopfield was able to bring together mathematicians, engineers, computer scientists, and physicists. He got them in the same room, got them excited about modeling the brain, doing pattern recognition and machine learning, unified by mathematical theories he brought in from physics.”

In 2012, Hinton co-founded a company called DNNResearch with two of his students: Ilya Sutskever, who later co-founded OpenAI, and Alex Krizhevsky. Together, the trio collaborated on AlexNet, a hugely influential neural network for computer vision. Hinton also taught at the University of Toronto, where he continued to champion machine learning.

Navdeep Jaitly, now a deep learning researcher at Apple, said Hinton inspired new generations of engineers and researchers. In Jaitly’s case, the influence was direct; Jaitly studied under Hinton at the University of Toronto.

“I came in with experience in statistical modeling,” says Jaitly, “but Hinton still managed to entirely change how I think about problem solving. In terms of his contributions to machine learning, his methods are central to almost everything we do.”

SOURCE

https://spectrum.ieee.org/nobel-prize-in-physics

 

UPDATED on 10/13/2024

Website: https://www.bbc.co.uk/newsnight

https://youtu.be/MGJpR591oaM?si=6DhusxGt_B8dUpT_

 

10/8/2024

John J. Hopfield
Princeton University, NJ, USA

Born 1933 in Chicago, IL, USA. PhD 1958 from Cornell University, Ithaca, NY, USA. Professor at Princeton University, NJ, USA.

Geoffrey E. Hinton
University of Toronto, Canada

Born 1947 in London, UK. PhD 1978 from The University of Edinburgh, UK. Professor at University of Toronto, Canada.

The prize was announced on 10/8/2024 in Stockholm, Sweden,

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
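The stepwise energy-lowering recall described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the 1982 formulation: it stores one bipolar pattern with the Hebbian rule and then recovers it from a corrupted cue by repeated threshold updates; the pattern and network size are toy assumptions.

```python
# Minimal Hopfield network sketch: Hebbian storage plus iterative recall.

def train(patterns):
    n = len(patterns[0])
    # Hebbian weights: W[i][j] = sum over patterns of x_i * x_j, zero diagonal.
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=10):
    # Repeatedly update each node toward the sign of its local field,
    # which never raises the network energy.
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            field = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [1, 1, 1, -1, -1, -1]
W = train([stored])
noisy = [1, -1, 1, -1, -1, -1]   # one bit flipped
print(recall(W, noisy))          # → [1, 1, 1, -1, -1, -1]
```

Feeding the network the distorted pattern and iterating the update rule returns the stored pattern, exactly as the description above states.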

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
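The training idea above can also be sketched in miniature. This is a toy restricted Boltzmann machine trained with one step of contrastive divergence (CD-1); bias terms are omitted for brevity, and the layer sizes, learning rate, and patterns are illustrative assumptions, not values from the laureates' papers.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy restricted Boltzmann machine: 4 visible units, 2 hidden units.
n_vis, n_hid = 4, 2
W = [[random.uniform(-0.1, 0.1) for _ in range(n_hid)] for _ in range(n_vis)]

def sample_hidden(v):
    # P(h_j = 1 | v) = sigmoid(sum_i v_i * W[i][j])
    probs = [sigmoid(sum(v[i] * W[i][j] for i in range(n_vis))) for j in range(n_hid)]
    return [1 if random.random() < p else 0 for p in probs], probs

def sample_visible(h):
    probs = [sigmoid(sum(h[j] * W[i][j] for j in range(n_hid))) for i in range(n_vis)]
    return [1 if random.random() < p else 0 for p in probs], probs

def cd1(v0, lr=0.1):
    # One step of contrastive divergence: compare the data-driven
    # statistics with those after one round of Gibbs sampling.
    h0, p_h0 = sample_hidden(v0)
    v1, _ = sample_visible(h0)
    _, p_h1 = sample_hidden(v1)
    for i in range(n_vis):
        for j in range(n_hid):
            W[i][j] += lr * (v0[i] * p_h0[j] - v1[i] * p_h1[j])

data = [[1, 1, 0, 0], [0, 0, 1, 1]]  # two toy training patterns
for epoch in range(200):
    for v in data:
        cd1(v)
```

After training, `sample_visible` can be used to generate new examples of the kind of pattern the machine was trained on, which is the generative use described above.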

Reference

Popular science background: They used physics to find patterns in information (pdf)


Scientific background: “For foundational discoveries and inventions that enable machine learning with artificial neural networks” (pdf)

 

SOURCE

https://www.nobelprize.org/prizes/physics/2024/press-release/

Read Full Post »

Drug Development Process been Revolutionized by Artificial Intelligence (AI) Technologies

Curators: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN

 

The Voice of Stephen J. Williams, PhD

LPBI Group, CSO

PENDING

The Voice of Aviva Lev-Ari, PhD, RN

1.0 LPBI Group and 2.0 LPBI Group, Founder

With the advent of AI in the last 5-7 years in our fields: Pharmaceutical, Life Sciences and Medicine, LPBI Group has launched several initiatives to advance the frontier of knowledge by using our own content repositories of over 8 gigabytes for experimenting with Machine Learning (ML) technologies for Medical Text Analysis.

These AI Technologies include

  • Natural Language Processing (NLP): Statistical ML and Deep Learning ML
  • ChatGPT and GPT-4
  • Generative AI
A quote by Brad Power, Co-founder and CEO, Cancer Patient Lab, made in January 2024:

“LPBI Group is in the admirable position of sitting on a treasure trove of medical literature that would be useful input in the current environment of customized ChatGPTs looking for reliable medical content.”

In the Drug Development (DD) field, AI technologies are being employed chiefly for these tasks:

(a) Generation of molecular information libraries

(b) Exploration and combinatorial experiments on protein structures, and

(c) Measurement of biochemical interactions

The A.I. learns from patterns in the data to suggest possible useful drug candidates, as if matching chemical keys to the right protein locks.

Because A.I. for drug development is powered by precise scientific data, toxic “hallucinations” are far less likely than with more broadly trained chatbots. And any potential drug must undergo extensive testing in labs and in clinical trials before it is approved for patients.

“Generative A.I. is transforming the field, but the drug-development process is messy and very human,” said David Baker, a biochemist and director of the Institute for Protein Design at the University of Washington.

As of December 2023,

  • 24 AI-discovered molecules had completed Phase I trials, with 21 of them being successful. This success rate of 80–90% is higher than the historical industry average of 40–65%. 
  • In Phase II trials, the success rate is around 40%, which is similar to the historical average. 

SOURCE

AI Overview

https://www.google.com/search?q=A.I.-developed+drugs+are+in+clinical+trials&oq=A.I.-developed+drugs+are+in+clinical+trials&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigATIHCAMQIRigAdIBCTIxNzZqMGoxNagCCLACAQ&sourceid=chrome&ie=UTF-8

 

Examples of drugs developed by AI technologies:

NCI definition of “AI drug” (note: in the NCI dictionary, this abbreviation stands for aromatase inhibitor, not artificial intelligence):

AI drug

A drug that blocks the activity of an enzyme called aromatase, which the body uses to make estrogen in the ovaries and other tissues. Blocking aromatase lowers the amount of estrogen made by the body, which may stop the growth of cancer cells that need estrogen to grow. AI drugs are used to treat some types of breast cancer or to keep it from coming back. They may also be used to help prevent breast cancer in some women who are at a high risk of developing it. Examples of AI drugs are anastrozole, letrozole, and exemestane. AI drugs are a type of hormone therapy. Also called aromatase inhibitor.
 

SOURCE

https://www.cancer.gov/publications/dictionaries/cancer-terms/def/ai-drug

 

More examples of drugs developed with AI technologies

#1: 

INS018_055
Developed by Insilico Medicine, a Hong Kong-based biotech startup, to treat idiopathic pulmonary fibrosis (IPF). IPF is a chronic lung disease that causes scarring and can be fatal if left untreated. In January 2023, Insilico Medicine announced positive results from a Phase I safety trial of INS018_055. In February 2023, the FDA granted breakthrough status to a small molecule inhibitor identified by Insilico Medicine’s AI platforms for the drug. As of November 2023, INS018_055 was in mid-stage trials in the US and China, with some results expected in early 2025.
 
The first fully A.I.-generated drug entered clinical trials in human patients. Insilico Medicine, a Hong Kong-based biotech startup with more than $400 million in funding, created the drug as a treatment for idiopathic pulmonary fibrosis, a chronic lung disease. (Jun 29, 2023)
 
  • Schrödinger’s AI-driven platform
    Uses predictive modeling to optimize the molecular structure of drugs. AI is expected to integrate more advanced simulation techniques, such as quantum computing, to more accurately predict molecular behavior.

Drug discovery software developer Schrodinger Inc. (NASDAQ: SDGR) stock has been trying to recover after plummeting over 80% off its all-time high of $117 in January 2021. Schrodinger’s artificial intelligence (AI) powered software technology platform utilizes physics-based modeling and sophisticated machine learning algorithms to help clients identify the suitable molecules to treat the desired ailments. Its programs can help predict the behavior of molecules and potential outcomes.

This entails finding suitable molecules that effectively target specific cells and proteins, penetrate cell walls, are absorbed and dissolved well without interfering with other drugs or producing adverse reactions, and are scalable.

Big Name Pharma Customers of Schrödinger, Inc.

Its technology platform allows for the faster and cheaper discovery of novel molecules with a higher success rate than traditional methods. Its clients include the top 20 pharmaceutical companies in the world, including Pfizer Inc. (NYSE: PFE), Merck & Co. Inc. (NYSE: MRK), Takeda, AstraZeneca PLC (NYSE: AZN), and GlaxoSmithKline plc (NYSE: GSK). It closed new agreements with Eli Lilly & Co. (NYSE: LLY) and Otsuka Pharmaceuticals of Tokyo, Japan.

SOURCE

Schrodinger is an AI-Powered Drug Discovery Developer to Watch

https://www.nasdaq.com/articles/schrodinger-is-an-ai-powered-drug-discovery-developer-to-watch

 

Schrodinger’s Pipelines include (stage track for each program: Discovery → Preclinical → Phase 1):

  • SGR-1505 (MALT1) – Hematologic Malignancies
  • SGR-2921 (CDC7) – AML/MDS
  • SGR-3515 (Wee1/Myt1) – Solid Tumors
  • SOS1 – Oncology
  • PRMT5-MTA – Oncology
  • EGFRC797S – Oncology
  • NLRP3 – Immunology
  • LRRK2 – Neurology
  • Undisclosed Programs – Multiple Areas

SOURCE

https://www.schrodinger.com/pipeline/#Proprietary-Pipeline

 

In high-tech labs, workers are generating data to train A.I. algorithms to design better medicine, faster. But the transformation is just getting underway.

Terray Therapeutics campus in Monrovia, Calif., June 17, 2024

https://www.nytimes.com/2024/06/17/business/ai-drugs-development-terray.html?smid=nytcore-ios-share&referringSource=articleShare

 

Five AI drug discovery companies you should know about

  1. Atomwise 
  2. Cradle 
  3. Exscientia 
  4. Iktos 
  5. Insilico Medicine 

According to Grand View Research, the global AI in drug discovery market size was valued at $1.1 billion in 2022, and is expected to expand at a compound annual growth rate (CAGR) of 29.6% from 2023 to 2030. The report states that the growing demand for the discovery and development of novel drug therapies and increasing manufacturing capacities of the life science industry are driving the demand for AI-empowered solutions in the drug discovery processes. 

As this report suggests, AI for drug discovery is clearly a growing field within the biopharma industry. Inevitably, as it grows even larger, we will see more companies come to the forefront of the field, hoping to change the face of drug discovery – and also the biopharma industry as a whole – so that the entire drug development process can become faster, more consistent, more accurate, and more scalable.

SOURCE

https://www.labiotech.eu/best-biotech/ai-drug-discovery-companies/#:~:text=to%20%2433%20million.-,Exscientia,manner%20using%20its%20AI%20technology.

 

At LPBI Group, of note is our Journal PharmaceuticalIntelligence.com.

It represents our commitment to AI technologies in the following research categories, listed with the number of articles written in each topic:

  • A total of x articles have been categorized 511 times among the following Artificial Intelligence research categories

  • Artificial Intelligence – General: 113
  • An executive’s guide to AI: 9
  • Artificial Intelligence – Breakthroughs in Theories and Technologies: 94
  • Artificial Intelligence Applications in Health Care: 81
  • Artificial Intelligence in CANCER: 29
  • Artificial Intelligence in Health Care – Tools & Innovations: 55
  • Artificial Intelligence in Medicine – Application for Diagnosis: 44
  • Artificial intelligence applications for cardiology: 21
  • AI-assisted Cardiac MRI: 9
  • Artificial Intelligence in Psychiatry: 5
  • Artificial Intelligence in Medicine – Applications in Therapeutics: 50

 

LPBI Group’s involvement in Conceptual Drug Development covers the following two areas:

DrugDiscovery @LPBI Group, 2016 – 2018

Synthetic Biology in Drug Discovery, 2021 – Present

 

Applications of Artificial Intelligence to Medicine

Artificial Intelligence: Genomics & Cancer, 2021 – Present

Medicine with GPT-4 & ChatGPT, 2023 – Present

 

LPBI Group commitment to Medical Text Analysis using Machine Learning

2021-2025 Medical Text Analysis (NLP), 2020 – Present

ChatGPT + Wolfram PlugIn, 2023 – Present

 

LPBI Group Team members published two books on Drug Delivery Technologies

We had covered drug delivery technologies in two of our books. See all the Books: 

https://www.amazon.com/s?k=Aviva+Lev-Ari&i=digital-text&rh=n%3A133140011&ref=nb_sb_noss

 

  • Series E, Volume Four 

Medical 3D BioPrinting – The Revolution in Medicine, Technologies for Patient-centered Medicine: From R&D in Biologics to New Medical Devices. 

https://www.amazon.com/dp/B078QVDV2W

and

  • Series C, Volume Two 

Cancer Therapies: Metabolic, Genomics, Interventional, Immunotherapy and Nanotechnology in Therapy Delivery (Series C Book 2).

http://www.amazon.com/dp/B071VQ6YYK

 

The Tables of Contents of these two books can be found in our Spanish-language Editions, as well:

  • Serie E, Volumen 4

Bioimpresión médica en 3D: la revolución de la medicina: Tecnologías para una medicina centrada en el paciente: de la I+D en agentes biológicos a los nuevos … en el paciente nº 4) (Spanish Edition) 2023

(Spanish Edition) Kindle Edition

https://www.amazon.com/dp/B0BRNVDB1P $56

 

  • Serie C, Volumen 2

Tratamientos contra el cáncer: Metabólicos, genómicos, intervencionistas, inmunoterapia y nanotecnología para la administración de tratamientos (Serie … y la oncología nº 2) 2022

(Spanish Edition) Kindle Edition

http://www.amazon.com/dp/B0BQTM44SM $75

Read Full Post »

ChatGPT Chemistry Assistant for Text Mining and the Prediction of MOF Synthesis

Reporter: Aviva Lev-Ari, PhD, RN

ChatGPT Chemistry Assistant for Text Mining and the Prediction of MOF Synthesis

  • Zhiling Zheng
  • Oufan Zhang
  • Christian Borgs
  • Jennifer T. Chayes
  • Omar M. Yaghi*

Cite this: J. Am. Chem. Soc. 2023, 145, 32, 18048–18062. Publication Date: August 7, 2023. https://doi.org/10.1021/jacs.3c05819. Copyright © 2022 American Chemical Society. This publication is licensed under these Terms of Use. https://pubs.acs.org/doi/10.1021/jacs.3c05819

 

Abstract

We use prompt engineering to guide ChatGPT in the automation of text mining of metal–organic framework (MOF) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT’s tendency to hallucinate information, an issue that previously made the use of large language models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different trade-offs among labor, speed, and accuracy. We deploy this system to extract 26 257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90–99%. Furthermore, with the data set built by text mining, we constructed a machine-learning model with over 87% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions about chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry subdisciplines.

This publication is licensed for personal use by The American Chemical Society.

Concluding Remarks


Our research has successfully demonstrated the potential of LLMs, particularly GPT models, in the domain of chemistry research. We presented a ChatGPT Chemistry Assistant that includes three different but connected approaches to text mining with ChemPrompt Engineering: Process 3 is capable of conducting search and filtration, Processes 2 and 3 classify synthesis paragraphs, and Processes 1, 2, and 3 are capable of summarizing synthesis conditions into structured data sets. Enhanced by three fundamental principles of prompt engineering specific to chemistry text processing, coupled with the interactive prompt refinement strategy, the ChatGPT-based assistant has substantially advanced the extraction and analysis of the MOF synthesis literature, with precision, recall, and F1 scores exceeding 90%.
We elucidated two crucial insights from the data set of synthesis conditions. First, the data can be employed to construct predictive models for reaction outcomes, which shed light on the key experimental factors that influence the MOF crystallization process. Second, it is possible to create an MOF chatbot that can provide accurate answers based on text mining, thereby improving access to the synthesis data set and achieving a data-to-dialogue transition. This investigation illustrates the potential for rapid advancement inherent in ChatGPT and other LLMs as a proof of concept.
On a fundamental level, this study provides guidance on interacting with LLMs to serve as AI assistants for chemists, accelerating research with minimal prerequisite coding expertise and thus bridging the gap between chemistry and the realms of computational and data science more effectively. Through interaction and chatting, the code and design of experiments can be modified, democratizing data mining and enhancing the landscape of scientific research. Our work sets a foundation for further exploration and application of LLMs across various scientific domains, paving the way for a new era of AI-assisted chemistry research.

SOURCE

https://pubs.acs.org/doi/10.1021/jacs.3c05819

ChatGPT accelerates chemistry discovery for climate response, study shows

Yaghi said. “AI has transformed many other sectors of our society – commerce, banking, travel. Why not transform science?”
These datasets on the synergy of the highly-porous materials known as metal-organic frameworks (MOFs) will inform predictive models. The models will accelerate chemists’ ability to create or optimize MOFs, including ones that alleviate water scarcity and capture air pollution. All chemists – not just coders – can build these databases due to the use of AI-fueled chatbots.

To help them teach ChatGPT to generate accurate and relevant information, they modified an approach called “prompt engineering” into “ChemPrompt Engineering.” They developed prompts that avoided asking ChatGPT for made up or misleading content; laid out detailed directions that explained to the chatbot the context and format for the response; and provided the large language model a template or instructions for extracting data.
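A hypothetical sketch of that three-part prompt design follows. The actual prompts and field list used by the authors are not reproduced here; the field names, wording, and parser below are illustrative assumptions showing an anti-hallucination instruction, format directions, and unification of the model's output into a structured record.

```python
# Illustrative "ChemPrompt"-style template and output parser (assumptions,
# not the authors' actual prompts).

EXTRACTION_FIELDS = ["compound name", "metal source", "linker", "solvent",
                     "temperature", "reaction time"]

def build_prompt(paragraph: str) -> str:
    # 1. Anti-hallucination instruction: never guess missing values.
    header = ("You are a chemistry assistant. Extract MOF synthesis conditions "
              "from the paragraph below. If a field is not stated, answer "
              "'N/A' -- do not guess or invent values.\n")
    # 2. Context and format directions.
    template = "Return one line per field, formatted as 'field: value'.\n"
    # 3. Extraction template: the fields to fill in.
    fields = "Fields: " + ", ".join(EXTRACTION_FIELDS) + "\n"
    return header + template + fields + "\nParagraph:\n" + paragraph

def parse_response(text: str) -> dict:
    # Unify the model's 'field: value' lines into a structured record.
    out = {}
    for line in text.splitlines():
        if ":" in line:
            k, v = line.split(":", 1)
            out[k.strip().lower()] = v.strip()
    return out
```

The parsed records from many papers can then be unified into the kind of tabulated data set the study describes.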

The chatbot’s literature review – and the experts’ approach – was successful. ChatGPT finished in a fraction of an hour what would have taken a student years to complete, said Borgs, BIDMaP’s director. It mined the synthetic conditions of MOFs with 95% accuracy, Yaghi said.

Read Full Post »

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT

Curator: Stephen J. Williams, Ph.D.

Chatbots are being used in multiple interdisciplinary areas in research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.

“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.”

The field of bioinformatics is a natural fit for incorporating this technology. The curated information below shows some use examples of this technology in bioinformatics related to cancer research.

The text in this reference source was actually generated by ChatGPT (https://chat.openai.com/chat)

Source: https://omicstutorials.com/using-chatgpt-in-bioinformatics-and-biomedical-research/

How ChatGPT can be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.

For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”

It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.

Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”

 

Thereby ChatGPT, at its simplest level, could be used to ask general questions like “What is the function of gene product X?” and give a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank. Or even “What are potential interactors of Gene X, validated by yeast two-hybrid?” without going to the curated interactome databases or using expensive software like Genie.

Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”

It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.

To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.

Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:

First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.

Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.

Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.

It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.

 

To train a language model like chatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:

Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.

Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.

Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as chatGPT on your specific dataset, or you can train a new model from scratch.
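Step 2 above (tokenizing the text and building a vocabulary) can be sketched in plain Python without committing to any particular NLP library; the simple regex tokenizer and unknown-token convention below are illustrative choices, not a prescribed pipeline.

```python
import re
from collections import Counter

def tokenize(text: str):
    # Lowercase and split on runs of letters/digits -- a deliberately
    # simple tokenizer standing in for a real NLP library.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_vocab(docs, min_count=1):
    counts = Counter(tok for doc in docs for tok in tokenize(doc))
    # Reserve index 0 for unknown tokens.
    vocab = {"<unk>": 0}
    for tok, c in counts.most_common():
        if c >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab):
    # Map each token to its vocabulary index; unseen tokens map to 0.
    return [vocab.get(tok, 0) for tok in tokenize(text)]

docs = ["Gene X is highly expressed in the liver.",
        "Variant Y of gene X increases disease risk."]
vocab = build_vocab(docs)
ids = encode("gene x liver", vocab)
```

The resulting integer sequences are what a model (whether fine-tuned or trained from scratch) would actually consume in step 3.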

ChatGPT could also be useful for sequence analysis

A few examples of sequence analysis tasks for which ChatGPT could be useful include:

  1. Protein structure
  2. Identifying functional regions of a protein
  3. Predicting protein-protein interactions
  4. Identifying protein homologs
  5. Generating Protein alignments

All of this could be done without access to UNIX servers, proprietary software, or knowledge of GCG coding.
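As an illustration of the last item, a global sequence alignment needs only a few lines of standard dynamic programming. This is a minimal Needleman-Wunsch sketch with simplified scores (match +1, mismatch -1, gap -1) rather than a BLOSUM substitution matrix.

```python
# Minimal Needleman-Wunsch global alignment (toy scoring scheme).

def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # F[i][j] = best score aligning prefixes a[:i] and b[:j].
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    # Traceback to recover one optimal alignment.
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), F[n][m]

aligned_a, aligned_b, score = needleman_wunsch("ACGT", "ACT")
print(aligned_a, aligned_b, score)  # → ACGT AC-T 2
```

A chatbot can generate and explain exactly this kind of alignment code on request, which is the productivity gain the list above points to.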

ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:

Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.

Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.
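As a stand-in for such a trained extraction model, the idea can be illustrated with a naive pattern-matching sketch; the gene-symbol pattern and stop list below are toy assumptions, not a validated gene recognizer.

```python
import re

# Naive heuristic: capitalized 2-6 character symbols such as "TP53" or "BRCA1".
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")

def extract_gene_mentions(text: str):
    # Drop common all-caps words that are not gene symbols (toy stop list).
    stop = {"DNA", "RNA", "THE", "AND"}
    return [m for m in GENE_PATTERN.findall(text) if m not in stop]

note = "Expression of TP53 and BRCA1 was reduced; DNA damage increased."
genes = extract_gene_mentions(note)  # → ['TP53', 'BRCA1']
```

A trained NLP model would replace the regex with learned recognition, but the output (structured records mined from free text) is the same kind of database-ready data the paragraph describes.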

Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.

Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.

It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.

 

ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:

Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.

Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.

Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.

Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.

Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.
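The trend-spotting idea above can be approximated, even without a language model, by counting term frequencies per publication year. The tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

def term_trends(abstracts, terms):
    """Count occurrences of each term per year across (year, text) pairs."""
    trends = defaultdict(Counter)
    for year, text in abstracts:
        tokens = text.lower().split()
        for term in terms:
            count = tokens.count(term)
            if count:
                trends[term][year] += count
    return {t: dict(c) for t, c in trends.items()}

corpus = [
    (2021, "crispr screening of tumour cells"),
    (2022, "crispr base editing and crispr delivery"),
    (2022, "organoid models of disease"),
]
trend = term_trends(corpus, ["crispr", "organoid"])
```

A rising year-over-year count for a term is a crude signal of an emerging research area; an NLP model would add synonym handling and topic grouping on top of this.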

 

PLEASE WATCH VIDEO

 

In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.

He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:

  1. Using ChatGPT as a search engine: He finds more useful and relevant search results than a standard Google or Yahoo search. This saves time, as one does not have to pore over multiple pages to find information. However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page. This capability is probably why Microsoft invested heavily in OpenAI, in order to incorporate ChatGPT into its Bing search engine as well as its Office suite of programs.

 

  2. ChatGPT to help with coding projects: Bioinformaticians can spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes). One can use ChatGPT to find such code and then assist in debugging it.

 

  3. ChatGPT to document and add coding comments: When writing code, it is useful to add comments periodically to help other users understand how the code works and how the program flow proceeds, including returned variables.
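The G/C-content calculation mentioned in item 2 is the canonical example of such a small script; a minimal hand-written version looks like this:

```python
def gc_content(sequence):
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    return (seq.count("G") + seq.count("C")) / len(seq)

fraction = gc_content("ATGCGC")  # 4 of the 6 bases are G or C
```

Asking ChatGPT for this function, then asking it to add the comments and the empty-sequence check, mirrors uses 2 and 3 above.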

 

One of the comments was interesting: it directed viewers to use BioGPT instead of ChatGPT.

 

@tzvi7989

1 month ago (edited)

0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts

So what is BIOGPT?

BioGPT https://github.com/microsoft/BioGPT

 

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Tips:

  • BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
  • BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
  • The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by kamalkraj. The original code can be found here.
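Putting these tips together, BioGPT can be loaded through the Hugging Face transformers library with the standard causal-LM pattern. This is a minimal sketch assuming transformers and torch are installed (the import is deferred into the function so the file loads without them); the generation settings are illustrative defaults, not recommendations from the BioGPT authors:

```python
CHECKPOINT = "microsoft/biogpt"  # pre-trained BioGPT checkpoint on the Hugging Face Hub

def generate_biomedical_text(prompt, max_new_tokens=50):
    """Continue a biomedical prompt with BioGPT (requires `pip install transformers torch`)."""
    from transformers import BioGptForCausalLM, BioGptTokenizer  # deferred heavy import

    tokenizer = BioGptTokenizer.from_pretrained(CHECKPOINT)
    model = BioGptForCausalLM.from_pretrained(CHECKPOINT)
    inputs = tokenizer(prompt, return_tensors="pt")
    # CLM objective: the model repeatedly predicts the next token given the prompt.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling generate_biomedical_text("Bicalutamide is") should produce a fluent biomedical continuation of the kind shown in Table 7 of the paper.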

 

This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a GitHub project developed by Microsoft Research and released under the MIT license. It is based on Python.

License

BioGPT is MIT-licensed. The license applies to the pre-trained models as well.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

As of right now this does not seem fully open access; a sign-up is required!

We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.

Model                            Description                                                      URL   🤗 Hub
BioGPT                           Pre-trained BioGPT model checkpoint                              link  link
BioGPT-Large                     Pre-trained BioGPT-Large model checkpoint                        link  link
BioGPT-QA-PubMedQA-BioGPT        Fine-tuned BioGPT for question answering task on PubMedQA        link
BioGPT-QA-PubMedQA-BioGPT-Large  Fine-tuned BioGPT-Large for question answering task on PubMedQA  link
BioGPT-RE-BC5CDR                 Fine-tuned BioGPT for relation extraction task on BC5CDR         link
BioGPT-RE-DDI                    Fine-tuned BioGPT for relation extraction task on DDI            link
BioGPT-RE-DTI                    Fine-tuned BioGPT for relation extraction task on KD-DTI         link
BioGPT-DC-HoC                    Fine-tuned BioGPT for document classification task on HoC        link

 

As of this date (6/12/2023), there are three papers referencing BioGPT on PubMed (abstracts and links below).

 

BioGPT: generative pre-trained transformer for biomedical text generation and mining.

Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661

We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. …Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fl …

Here they show the advantages of BioGPT over GPT-2 in examples of biomedical text generation:

Table 7. Example of generated text

Input: Bicalutamide

GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.

BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)

GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.

BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine

GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.

BioGPT: Apricitabine is an oral prodrug of 5-aza-2 ’-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine

GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.

BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen

GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.

BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451

GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31-32-33-34-35-36-37-38-39-40-41-42-43-44-45-46-47-48-49-50…

BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021

GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.

BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).

Note how BioGPT is more descriptive and accurate!

EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information.

Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012

The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.

Results: We evaluated the classification part on ‘DDIs 2013’ dataset and ‘DTIs’ dataset, achieving the F1 scores of 0.842 and 0.720 respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified with the existing ground truth to confirm the filtered sentences. The generated sentences that are not recorded in DrugBank and DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.

Availability: Source code are publicly available at https://github.com/Layne-Huang/EGFI.

 

GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.

Jin Q, Yang Y, Chen Q, Lu Z. ArXiv. 2023 May 16:arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.

While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.

PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT

This one entitled

Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP

 

gives a good general description of this new Microsoft project and its usefulness in scanning 15 million PubMed abstracts while returning ChatGPT-like answers.

 

Please note one of the comments which is VERY IMPORTANT


@rufus9322

2 months ago

bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.

 

Much like Microsoft Azure, BioGPT appears to be meant for developers with advanced programming skills. It seems odd to pay programmers large salaries when one or two Key Opinion Leaders from the medical field might suffice, but Microsoft will surely figure this out.

 

ALSO VIEW VIDEO

 

 

This is a talk from Microsoft on BioGPT

 

Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include

Medicine with GPT-4 & ChatGPT
Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

Proposal for New e-Book Architecture: Bi-Lingual eTOCs, English & Spanish with NLP and Deep Learning results of Medical Text Analysis – Phase 1: six volumes

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

 

20 articles in Natural Language Processing

142 articles in BioIT: BioInformatics

111 articles in BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics

 

Read Full Post »

Reporter: Frason Francis Kalapurakal, Research Assistant II

Researchers from MIT and Technion have made a significant contribution to the field of machine learning by developing an adaptive algorithm that addresses the challenge of determining when a machine should follow a teacher’s instructions or explore on its own. The algorithm autonomously decides whether to use imitation learning, which involves mimicking the behavior of a skilled teacher, or reinforcement learning, which relies on trial and error to learn from the environment.

The researchers’ key innovation lies in the algorithm’s adaptability and ability to determine the most effective learning method throughout the training process. To achieve this, they trained two “students” with different learning approaches: one using a combination of reinforcement and imitation learning, and the other relying solely on reinforcement learning. The algorithm continuously compared the performance of these two students, adjusting the emphasis on imitation or reinforcement learning based on which student achieved better results.
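The comparison loop described above can be caricatured in a few lines: a mixing weight between imitation and reinforcement signals is nudged toward whichever student is currently performing better. This is a schematic illustration of the idea, not the authors' TGRL algorithm, and the per-round performance scores below are invented:

```python
def adapt_weight(perf_combined, perf_rl_only, weight=0.5, step=0.1):
    """Shift the imitation-learning weight toward the better-performing student.

    perf_combined / perf_rl_only: per-round scores of the two students.
    Returns the trajectory of imitation weights, clipped to [0, 1].
    """
    trajectory = [weight]
    for combined, rl_only in zip(perf_combined, perf_rl_only):
        # More weight on imitation while the teacher-guided student leads, less otherwise.
        weight += step if combined > rl_only else -step
        weight = min(1.0, max(0.0, weight))
        trajectory.append(round(weight, 2))
    return trajectory

# Invented scores: the teacher helps early, then pure RL catches up.
combined = [0.6, 0.7, 0.7, 0.7, 0.7]
rl_only  = [0.3, 0.4, 0.6, 0.8, 0.9]
weights = adapt_weight(combined, rl_only)
```

Note how the imitation weight rises while the teacher-guided student leads and falls once pure reinforcement learning overtakes it.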

The algorithm’s efficacy was tested through simulated training scenarios, such as navigating mazes or reorienting objects with touch sensors. In all cases, the algorithm demonstrated superior performance compared to non-adaptive methods, achieving nearly perfect success rates and significantly outperforming other methods in terms of both accuracy and speed. This adaptability could enhance the training of machines in real-world situations where uncertainty is prevalent, such as robots navigating unfamiliar buildings or performing complex tasks involving object manipulation and locomotion.

Furthermore, the algorithm’s potential applications extend beyond robotics to various domains where imitation or reinforcement learning is employed. For example, large language models like GPT-4 could be used as teachers to train smaller models to excel in specific tasks. The researchers also suggest that analyzing the similarities and differences between machines and humans learning from their respective teachers could provide valuable insights for improving the learning experience.

The MIT and Technion researchers’ algorithm stands out due to its principled approach, efficiency, and versatility across different domains. Unlike existing methods that require brute-force trial-and-error or manual tuning of parameters, their algorithm dynamically adjusts the balance between imitation and trial-and-error learning based on performance comparisons. Its robustness, adaptability, and promising results make it a noteworthy advancement in the field of machine learning.

References:

“TGRL: TEACHER GUIDED REINFORCEMENT LEARNING ALGORITHM FOR POMDPS” Reincarnating Reinforcement Learning Workshop at ICLR 2023 https://openreview.net/pdf?id=kTqjkIvjj7

https://arxiv.org/abs/2301.01219

Reinforcement Learning: A Survey by L. P. Kaelbling, M. L. Littman, A. W. Moore https://doi.org/10.48550/arXiv.cs/9605103

Concrete Problems in AI Safety by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané https://arxiv.org/abs/1606.06565

Other related articles published in this Open Access Online Scientific Journal include the following:

92 articles in the Category:

‘Artificial Intelligence – Breakthroughs in Theories and Technologies’ 

https://pharmaceuticalintelligence.com/category/artificial-intelligence-general/artificial-intelligence-breakthroughs-in-theories-and-technologies/

Read Full Post »
