Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma

Curators: Aviva Lev-Ari, PhD, RN and Stephen J. Williams, PhD

Updated on 11/29/2025

 

The Voice of Stephen J. Williams, PhD

Since the formation of the LPBI Group in 2012, we have set as a goal the digital curation and ontologization of the biomedical literature.  These aspects of our mission, although not understandable to most at the time of our inception, were at the forefront of many academic efforts that became the springboard and initial conceptualization of today’s large language models.  At select universities such as the University of Illinois, the University of Delaware, and the Technion, and among researchers at Microsoft and Google, the framework was being laid for a semantic web, or Web 3.0.  Another company at the forefront of this idea was Wolfram Research, the company behind Mathematica.  This idea for a semantic web would entail the processes of curation, digitization, and ontology creation.  Their belief, like ours, was that these processes were a precursor and much-needed key to the puzzle of moving language from machine learning algorithms to the generative transformers used by artificial intelligence.

However, as most efforts in this genre focused on general language, our efforts at LPBI Group focused on the biomedical field.  We understood, from 2012, that the biomedical literature was unique, and so a unique strategy had to be developed to semantically understand biomedical text, even though in 2012 GPTs were not even a concept.  The potential for biomedical text analysis was nevertheless there, and LPBI Group responded by developing a methodology of scientific curation that involved a multimodal strategy to curate, digitize, and ontologize biomedical findings and text.

It was around 2012 that other groups, mainly focused on drug development applications (for example, at Indiana University), recognized that the new computational power of machine learning algorithms could be useful in analyzing complex biological questions.  Please see the Synthetic Biology in Drug Discovery section of our Journal for more information.  For instance, an early adopter of this strategy, Data2Discovery, one of the earliest AI-for-drug-discovery startups, stated:

We are able to improve drug discovery now as well as demonstrating new fast-cycle AI-driven processes that will have a revolutionary impact on drug discovery if fully implemented. We have had some dramatic successes, but we are just starting to discover the impact that data, knowledge graphs, AI and machine learning can together have on drug discovery.

We need all the expertise of academics, consortia, AI companies and pharma to make this happen, and it’s going to require some serious investment, and a big change of thinking. But the opportunity to get drug discovery out of the death spiral and framed for data-driven success is too important to pass up.

The LPBI Group was cognizant of these changes occurring and pivoted to the developing natural language processing arena, as well as to ideas around the emerging blockchain technology.  This was more of a natural progression for the LPBI Group than a pivot (please read here).

This would be our Vision 2.0: to make biomedical text amenable to Natural Language Processing.  We utilized a few strategies in this regard, partnering with a company that was developing NLP for biomedical text analysis, and developing in-house machine learning and NLP methods using the Wolfram Language environment.  Our focus on structuring biomedical text (versus the highly structured genomics and omics data found in many omics-related databanks) was prescient for the time.  As NLP and machine learning efforts came to realize, biomedical text needs a structure much like the one around which gene, protein, and other molecular databases had been organized.  It was therefore realized that structured data was imperative for efficient NLP analysis, a crux for the new GPT that was being developed (and, in our view, still a crux for current GPT and LLM models when it comes to biomedical text analysis).

Our strategy using our scientific curation methodology (described below in links from our founder, Dr. Aviva Lev-Ari) proved highly efficient and amenable to NLP analysis, as a pilot with an NLP company demonstrated.  Most of the data they were using was unstructured, and their first step involved annotating and structuring the text, as we had already been doing for years.  This was critical, as our text surfaced more concepts and relationships, in less time, than NLP run on sources such as publicly available PubMed text.  We had also developed our own in-house algorithms for NLP on our material, which is shown in some of our book offerings and individual articles.

With the advent of GPT, it was thought that all this was unnecessary.  However, the idea that our strategy was outdated or irrelevant in the era of GPT proved wholly incorrect for the advocates of a sole-GPT strategy for analyzing biomedical text and data.  It is now understood that structure is needed, as some biomedical-centric GPT projects, such as BioGPT, would find out.  We have many articles attesting to the lack of accuracy and efficiency of these GPT architectures (seen here), including failure rates in many areas of healthcare and biomedicine caused by sole reliance on GPT.

It was realized by many in the biomedical arena, especially those involved in NLP efforts, that there was much value in the semantic Web 3.0 idea, and this was readily picked up by those spearheading efforts to incorporate knowledge graphs into the new generative AI, or GPT, technology.  We have shown a clear example in which our scientific methodology of curation with ontology yields better inference when combined with knowledge graphs and GPT than reliance on GPT alone.

Please read this article:

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

at https://pharmaceuticalintelligence.com/2014/09/05/multiple-lung-cancer-genomic-projects-suggest-new-targets-research-directions-for-non-small-cell-lung-cancer/

As shown in this article, this update was performed by the following methods:
A. GPT-5 text analysis and reasoning
B. Insertion of a Knowledge Graph on the topic “Curation of Genomic Analysis from Non-Small Cell Lung Cancer Studies,” built with InfraNodus software from Nodus Labs
C. Domain-knowledge expert evaluation of the update outcomes
This article has the following structure:
Part A: Introduction to LLMs, the Knowledge Graph software InfraNodus, ChatGPT-5, and background information on the curated material for the test case
Part B: InfraNodus analysis of the manual curation and Knowledge Graph creation
Part C: ChatGPT-5 analysis of the manually curated material
Part D: Curation entitled Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer, originally published on 09/05/2014
Results of the article update with GPT-5
1. GPT-5 alone was not able to understand the goal of the article, namely to determine knowledge gaps in a particular research area involving 5 genomic studies on lung cancer patients
2. GPT-5 alone was not able to group concepts or commonalities between biological pathways unless supplied with a manually curated list of KEGG pathways from a list of mutated genes.  This, however, precluded any effect that fusion proteins had on the analysis, so GPT-5 would only concentrate on mutated genes commonly found in the literature
3. GPT-5 was not able to access some open-access databases, such as the NCBI Gene Ontology database
Results of the article update with Knowledge Graph presentation to GPT-5
4. Because the Knowledge Graph captured the importance of fusion proteins and transversions, it augmented the GPT analysis, enriching the known pathways and correctly identifying the less represented pathways in the knowledge graph
5. This led to the identification of many novel signaling pathways not identified in the original analysis, and the combined approach performed this task with ease and speed

6. GPT with InfraNodus Analysis was able to propose pertinent questions for future research (the goal of the original curation) such as:

  • How does the interaction between [[EGFR]] mutations and sex-specific gene alterations, including [[RBM10]], influence treatment outcomes in lung adenocarcinoma?
  • How does the intersection of mutational patterns from smoking influence pathway activation in NSCLC, and can identifying these interactions improve targeted therapy development?
Novelty in comparison to the original article published on 09/05/2014
7. It appears that manual curation is necessary to assist in building relevant knowledge graphs in the biomedical fields to augment generative AI analysis
8. By itself, generative AI is not optimized for inference of higher concepts from biomedical text and therefore, at this point, requires input from human curators developing domain-specific knowledge graphs
9. The combination of ChatGPT-5 and knowledge graphs built from this manually curated biomedical text added a further layer of knowledge gaps not seen in the original curations, including the need to study noncanonical signaling pathways like WNT and Hedgehog in smoker versus nonsmoker cohorts of lung cancer patients (a minimal workflow sketch follows below)
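To make the hybrid workflow above concrete, here is a minimal sketch of the pattern: curated relationships are assembled into a knowledge graph, serialized, and prepended to the LLM prompt. The triples, the library choice (networkx), and the prompt wording are illustrative assumptions, not the actual InfraNodus/GPT-5 pipeline.

```python
# Minimal sketch: augmenting an LLM prompt with a curated knowledge graph,
# in the spirit of the InfraNodus + GPT-5 workflow described above.
# The triples below are illustrative placeholders, not the actual curation.
import networkx as nx

# Curated (subject, relation, object) triples from a manual curation
triples = [
    ("EML4-ALK fusion", "drives", "NSCLC in never-smokers"),
    ("EGFR mutation", "co-occurs-with", "RBM10 alteration"),
    ("Smoking", "enriches", "transversion mutations"),
]

g = nx.DiGraph()
for subj, rel, obj in triples:
    g.add_edge(subj, obj, relation=rel)

# Serialize the graph as plain-text context for the LLM prompt
context = "\n".join(
    f"{s} --{d['relation']}--> {o}" for s, o, d in g.edges(data=True)
)
prompt = (
    "Using ONLY the curated relationships below, identify knowledge gaps "
    "and propose research questions for NSCLC genomics.\n\n" + context
)
print(prompt)  # send to the LLM of your choice
```

The design point is that the model reasons over expert-curated edges rather than free text, which is what allowed the fusion-protein and transversion signals to survive into the GPT analysis.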

The Voice of Aviva Lev-Ari, PhD, RN

LPBI Group’s Portfolio of Digital IP Assets as Proprietary Training Data Corpus for AI in Medicine, in Life Sciences, in Pharmaceutical and in Health Care Applications

The Portfolio of Digital IP Assets by Class is a rare, defensible asset, privately held, debt-free, by LPBI Group’s founder. The content, i.e., a Data Corpus, is designed for the Training and Pre-Training of Foundation Multimodal Models in Health Care.

#HealthcareAI

#FoundationModels

#ProprietaryTrainingData

LPBI Group is offering transfer of ownership, in full, of a privately held, multimodal healthcare training corpus leveraging a proprietary, unique data set curated by domain experts and mapped to a living ontology for GenAI, creating defensibility.

The Portfolio of IP spans:

  • 6,250+ articles (~2.5MM views),
  • 48 e‑books (EN/ES) (+152,000 page downloads),
  • 100+ e‑Proceedings with +50 Tweet collections,
  • 7,500+ biological images with expert context, and
  • 300+ Audio podcasts on Life Sciences breakthroughs.

Each asset (Use Case: Scientific Article) has timestamps, author/role labels, crosslinks, and view histories.

  • Metadata export exists; full text and media transfer via WordPress/Amazon account control for immediate ingestion (a parsing sketch follows below).
  • Rights are centrally assigned, with explicit model-training permissions, enabling domain-aware model implementation for Small Language Models or Large Language Models.
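As one hedged illustration of what “immediate ingestion” could look like, the sketch below parses a standard WordPress WXR export into JSONL records carrying the metadata listed above (title, URL, timestamp, author, full text). The file name “export.xml” and the output field names are assumptions for illustration, not LPBI’s actual export layout.

```python
# Minimal sketch: converting a WordPress WXR export into ingestion-ready
# JSONL records with per-article metadata (timestamp, author, URL, text).
import json
import xml.etree.ElementTree as ET

# Standard WXR namespaces for author and full-text fields
NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

records = []
for item in ET.parse("export.xml").getroot().iter("item"):
    records.append({
        "title": item.findtext("title", default=""),
        "url": item.findtext("link", default=""),
        "published": item.findtext("pubDate", default=""),
        "author": item.findtext("dc:creator", default="", namespaces=NS),
        "text": item.findtext("content:encoded", default="", namespaces=NS),
    })

# One JSON record per line, ready for a model-ingestion pipeline
with open("corpus.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```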

Strategic acquirers in Big Pharma of Vertical AI startups (i.e., LPBI Group) with data‑moat strategies

Pharma strategics: Acquire LPBI’s end-to-end, rights-clean healthcare knowledge base to accelerate R&D, medical affairs, and safety, with an emphasis on compliant internal copilots and evidence synthesis enabled by expert curation and a living ontology. Closing involves rapid onboarding under NDA: metadata export plus full text/media transfer. Full acquisition only.

Subject: Buy the moat: full acquisition of expert healthcare corpus with clean rights

We’re selling the entire asset: a privately held, multimodal healthcare corpus with centralized training rights and an exportable ontology, validated on gene–disease–drug extraction. It’s ingest‑ready and transfers cleanly via account control plus a metadata export. If owning differentiated data is critical for your agent or workflow, we can provide a diligence preview under NDA.


Five Examples of Domain-Aware Corpora for Small Language Model Implementation – English Edition & Spanish Edition

Series A: Cardiovascular Diseases ($515) – Six Volumes

https://www.amazon.com/gp/product/B07P981RCS?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Six Examples of Domain-Aware Volumes in the Specialty of Cardiovascular Diseases

  • Series A, Volume One

Perspectives on Nitric Oxide in Disease Mechanisms, 2013

http://www.amazon.com/dp/B00DINFFYC $75

  • Series A, Volume Two 

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation, 2015

http://www.amazon.com/dp/B018Q5MCN8 $75

  • Series A, Volume Three

Etiologies of Cardiovascular Diseases – Epigenetics, Genetics and Genomics, 2015

http://www.amazon.com/dp/B018PNHJ84 $75

  • Series A, Volume Four

Therapeutic Promise: Cardiovascular Diseases, Regenerative & Translational Medicine, 2015

http://www.amazon.com/dp/B019UM909A $75

  • Series A, Volume Five

Pharmacological Agents in Treatment of Cardiovascular Diseases, 2018

https://www.amazon.com/dp/B07MGSFDWR  $115

  • Series A, Volume Six:

Interventional Cardiology for Disease Diagnosis and Cardiac Surgery for Condition Treatment, 2018

https://www.amazon.com/dp/B07MKHDBHF $100

 

Series B: Frontiers in Genomics ($200) – Two Volumes

https://www.amazon.com/gp/product/B0BSDPG2RX?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series C: Cancer & Oncology ($175) – Two Volumes

https://www.amazon.com/gp/product/B0BSDWVB3H?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series D: Immunology ($325) – Four Volumes

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series E: Patient-Centered Medicine ($274) – Four Volumes

https://www.amazon.com/gp/product/B0BSDW2K6C?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

One Example of a Domain-Aware Corpus for Large Language Model Implementation

Eighteen volumes in the English Edition and nineteen volumes in the Spanish Edition, including 2,728 articles by biomedical professionals, are available.

https://www.amazon.com/s?k=Aviva+Lev-Ari&i=digital-text&rh=n%3A133140011&ref=nb_sb_noss

The electronic books are collections of curated articles in biomedical science. The electronic Tables of Contents (eTOCs) of each volume were designed by a senior editor with expertise in the subjects covered in that volume. The curations use as sources published research findings in peer-reviewed scientific journals, together with expert-added interpretations.

The e-books are designed to make the latest research in the five bilingual BioMed e-Series (37 volumes) accessible to practicing health care professionals. These five e-Series cover the following medical specialties:

  • Cardiovascular diseases and therapies,
  • Genomics,
  • Cancer etiology and oncological therapies,
  • Immunology, and
  • Patient-centered precision medicine.

The material in these volumes can greatly enhance medical education and provide a resource for continued updating and education for health care professionals. In addition to the 37 e-books, LPBI has published more than 6,000 articles in its online scientific journal “PharmaceuticalIntelligence.com”, which has received 2.5 million views since its launch in April 2012; top articles have had more than 18,000 views.

The Portfolio is:

  • rights‑clean,
  • an expert‑curated healthcare corpus,
  • mapped to a living ontology,
  • already packaged for immediate model ingestion, and
  • suitable for safe pre-training, evals, and fine‑tuning.

If healthcare domain data is on your roadmap, this is a rare, defensible asset worth a preview.

LPBI Group is offering transfer of ownership, in full, of a privately held, multimodal healthcare training corpus leveraging a proprietary, unique data set curated by domain experts and mapped to a living ontology for GenAI, creating defensibility. It spans 6,250+ articles (~2.5MM views), 48 e‑books (EN/ES) (+151,000 page downloads), 100+ e‑proceedings with +50 tweet collections, 7,500+ biological images with expert context, and 300+ audio podcasts on Life Sciences breakthroughs. Each asset has timestamps, author/role labels, crosslinks, and view histories. Rights are centrally assigned, with explicit model-training permissions, enabling domain-aware model implementation for Small Language Models or Large Language Models. Metadata export exists; full text and media transfer via WordPress/Amazon account control for immediate ingestion.

Leaders in Pharmaceutical Business Intelligence Group, LLC, Doing Business As LPBI Group, Newton, MA

Full acquisition only: LPBI Group’s Healthcare Training Data Corpus

  • Scientific articles
  • e‑Books in Medicine
  • e‑Proceedings
  • Biological images
  • Podcasts

#HealthcareAI #FoundationModels #TrainingData

Contact Founder: avivalev-ari@alum.berkeley.edu

PharmaceuticalIntelligence.com

About the Founder

  • LPBI Group’s Legacy and Biography of Aviva Lev-Ari, PhD, RN, Founder & Director, INTERACTIVE CHAT with Grok, created by xAI

https://pharmaceuticalintelligence.com/2025/04/30/interactive-chat-with-grok-created-by-xai-lpbi-groups-legacy-and-biography-of-aviva-lev-ari-phd-rn-founder-director/ 

For Recognitions and a DETAILED BIO by Perplexity.ai and Gemini 2.5 Pro (experimental), see

  • “Show Thinking” by AI Deep Research: The AI-generated Biography Description of Aviva Lev-Ari, PhD, RN, Founder of PharmaceuticalIntelligence.com by Perplexity.ai and by Google’s Gemini 2.5 Pro (experimental)

https://pharmaceuticalintelligence.com/2025/04/29/show-thinking-by-ai-deep-research-the-ai-biography-description-of-aviva-lev-ari-phd-rn-founder-of-pharmaceuticalintelligence-com-by-perplexity-ai-and-by-googles-gemini-2/

Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma

On 11/14/2025, Dr. Lev-Ari asked @Grok about AI Initiatives at a Big Pharma, i.e., Pfizer.

@Grok’s response is presented below.

URL SOURCE:

https://x.com/i/grok/share/0ol5VOJsEYs11baXq4xkzNb0h

Pfizer’s AI Initiatives Overview

Pfizer, a global pharmaceutical leader, has integrated artificial intelligence (AI) and machine learning (ML) across its operations to accelerate drug discovery, enhance clinical trials, improve manufacturing, and drive patient-centric innovations. These efforts aim to reduce development timelines, boost efficiency, and address challenges like rare disease detection. As of late 2025, Pfizer’s AI strategy emphasizes internal expertise, cloud-based tools (primarily with AWS), and cross-industry collaborations, with a focus on multimodal data analysis and generative AI. Below is a breakdown of key initiatives, drawn from recent developments.

1. Pfizer AI Cloud Transformation (PACT) Program Launched in 2021 with AWS, PACT is a prototyping accelerator for AI/ML applications in lab, clinical manufacturing, and supply chain efforts. It has pursued 14 projects, with five now in production. Key outcomes include:

  • Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.
  • 55% reduction in infrastructure costs through cloud optimization.
  • Exploration of virtual/augmented reality for training, with rapid prototyping to test feasibility. This program fosters “fail-fast” innovation, enabling quick iteration on ideas like AI-driven predictive modeling.

2. Drug Discovery and Development Acceleration Pfizer uses AI, supercomputing, and ML to streamline R&D, cutting years off traditional timelines (e.g., from 8–10 years to months for certain candidates).

  • COVID-19 Initiatives: AI optimized molecular searches for PAXLOVID (oral antiviral), enabling intravenous-to-oral pivots. During vaccine development, the Safety Data Quality (SDQ) tool—a custom AI for clinical trial data management—saved a full month of analysis via the Breakthrough Change Accelerator sandbox.
  • Precision Medicine and Subpopulation Identification: Advanced ML (classical and deep learning) analyzes molecular datasets from trials to predict patient responses, informing trial design, first-in-human studies, and oncology/inflammatory disease targeting. This has doubled clinical success rates to ~20% (vs. industry 7–10%).
  • ML Research Hub: Pfizer’s Cambridge, MA-based Biomedical Data Science group builds internal talent for AI in genome sciences, predicting disease outcomes and treatment efficacy using techniques from tech giants like Google.

3. Clinical Trials and Regulatory Efficiency AI automates content creation across the drug lifecycle, from lab data to regulatory docs and patient booklets, ensuring quality and speed.

  • Predictive Regulatory Tools: ML models forecast regulator queries, proactively addressing safety concerns to build trust and shorten approvals.
  • Decentralized Trials: The “Clinical Trial Anywhere” model uses AI to integrate remote data (e.g., home health visits), improving recruitment/retention and generating real-world evidence.
  • Inventory and Supply Chain: AI predicts vaccine storage issues, pioneered during COVID distribution.

4. Disease Detection and Diagnostics

  • ATTR-CM Initiative: Through the ASPIRE collaboration (with Amyloidosis Research Consortium), AI analyzes EHRs, ECGs, and echocardiograms to detect transthyretin amyloid cardiomyopathy earlier, targeting underdiagnosed heart failure cases.
  • Rare Diseases: ML models identify symptoms in large datasets, mapping the immune system for better predictions.

5. Generative AI and Operational Tools

  • Charlie Platform: Rolled out in 2024 for marketing, this gen AI tool (named after co-founder Charles Pfizer) generates compliant content like emails, presentations, and medical article drafts. Trained on approved product data, it streamlines physician targeting and customer insights.
  • Scientific Data Cloud: AWS-powered centralization of data/ML capabilities, enabling faster research and treating 1.3 billion patients in 2022. It supports gen AI via Amazon SageMaker/Bedrock for manufacturing efficiency.
  • Global Health Grants: Through the Pfizer Foundation’s AI Learning Lab, funding supports AI for equitable healthcare access, e.g., biometric tools for community care.

6. Partnerships and Education

  • Collaborations: With AWS for cloud AI; IMI Big Picture for a 3M-sample disease database; IIT Madras for the 2025 AIPM Symposium on AI in pharma (focusing on drug discovery and precision medicine).
  • Talent and Events: Hosting webinars on AI in manufacturing (e.g., predictive maintenance, regulatory submissions) and engaging leaders like Dr. Subha Madhavan on AI for biomedical data integration.
  • Strategic Focus: $500M R&D reinvestment by 2026 targets AI for productivity, part of $7.7B cost savings. Emphasis on ethical AI, diverse datasets, and competing with global biotech advances (e.g., China’s AI in CRISPR).

Pfizer’s approach balances operational efficiency with transformative R&D, positioning AI as a core enabler rather than a standalone focus. Ongoing investments, like cloud migrations saving $47M annually and reducing CO2 emissions, underscore sustainability. For the latest, monitor Pfizer’s investor updates or events like AWS re:Invent.

Responsiveness to Big Pharma AI Initiatives by LPBI Group’s Digital IP Portfolio:

  • Scientific articles
  • e‑Books in Medicine
  • e‑Proceedings
  • Biological images
  • Podcasts

AI Initiative at Big Pharma

i.e., Pfizer

Composed by @Grok

LPBI Group’s Digital IP Asset:

Journal articles

N = 6,250

Composed by Aviva Lev-Ari

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.

Generative AI tools searching LPBI’s Proprietary data in addition to Public Domain data sources

Journal ONTOLOGY used to optimize context classification selected for search

Drug Discovery and Development Acceleration

Pfizer uses AI, supercomputing, and ML to streamline R&D timelines

–       Run prompts by category of research on the following three dyads (a sketch follows below)

–       Run ML across categories of research for these three dyads

– Gene–disease

– Disease–drug

– Gene–drug
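A minimal sketch of what running prompts over these three dyads might look like, assuming one extraction prompt per (category, dyad) pair; the category names, prompt wording, and placeholder article text are illustrative, not LPBI’s actual prompt library.

```python
# Minimal sketch: batch-running extraction prompts over the three dyads
# named above, per research category. All fields are illustrative.
from itertools import product

DYADS = ["gene-disease", "disease-drug", "gene-drug"]
CATEGORIES = ["Precision Medicine", "Drug Discovery", "Rare Diseases"]

def build_prompt(dyad: str, category: str, article_text: str) -> str:
    """Compose an extraction prompt for one dyad within one category."""
    return (
        f"From the curated article below (category: {category}), "
        f"extract all {dyad} relationships as "
        f"(entity_1, relation, entity_2) triples.\n\n{article_text}"
    )

# One prompt per (category, dyad) pair; article text would come from the corpus
for category, dyad in product(CATEGORIES, DYADS):
    prompt = build_prompt(dyad, category, "<curated article text>")
    print(prompt.splitlines()[0])  # preview; send `prompt` to the chosen LLM
```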

 

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralized Trials

-Inventory management

Curation competencies:

  • content creation across the drug lifecycle, from lab data to regulatory docs

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Heart Failure Diagnosis by early detection of transthyretin amyloid cardiomyopathy

–       The Journal published ~30 curations by Dr. Larry H. Bernstein on this subject, ATTR-CM

–       Run NLP on this Corpus

Rare diseases:

The Journal published 560 articles on Rare diseases

–       Run ML on this Corpus (a sketch follows below)
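As a hedged sketch of a first-pass NLP/ML run over such a corpus, the snippet below applies TF-IDF plus NMF topic modeling with scikit-learn; the three placeholder documents stand in for the actual ATTR-CM and rare-disease curations.

```python
# Minimal sketch: a first-pass NLP run over a curated corpus (e.g., the
# ~30 ATTR-CM curations or the 560 rare-disease articles). TF-IDF + NMF
# topic modeling stands in for a fuller pipeline; documents are placeholders.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Transthyretin amyloid cardiomyopathy presents as heart failure",
    "Tafamidis stabilizes transthyretin tetramers in ATTR-CM patients",
    "Rare disease registries improve early diagnosis of amyloidosis",
]

# Vectorize the corpus into TF-IDF features
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectorizer.fit_transform(docs)

# Factor the matrix into two interpretable topics
nmf = NMF(n_components=2, random_state=0)
nmf.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```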

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Content generated in the Journal can become “generated compliant content” if run on the Charlie Platform.

–       For REUSE of content in context

 

The entire corpus of 9 gigabytes can be ingested into Pfizer Foundation’s AI Learning Lab

–       Run prompts against it

–       Journal’s Content to be used for Internal staff expertise development

–       Journal’s Content for Leadership development

Partnerships and Education

–       Collaborations: IMI Big Picture for a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

The Journal had published:

  • 547 articles in Precision Medicine
  • 1,114 articles in Drug Discovery
  • 701 articles in Drug Delivery
  • 3,615 articles on the subject matter “Disease”
  • 738 articles on Biomedical topics
  • 425 articles on Artificial Intelligence (AI)

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

The Journal had published 432 articles on CRISPR

 

Productivity derived from the Journal’s content:

–       Because ~70% of the Journal’s articles are curations written by Domain Knowledge Experts subjectively expressing their clinical interpretations of basic and primary research, the productivity of the knowledge workers at any Big Pharma would increase vastly.

–       If Grok and Claude were run on LPBI Group’s Digital IP Corpus, a scientific revolution would emerge

–       It is not combinatorics applied to molecules with 98% futile results!!!

It is the IQ of Gifted HUMANS, of domain knowledge experts generating content using individual CREATIVITY, not Quantum or Super Intelligence, which is not in existence, YET.

–       Foundation Models in Healthcare depend on the OUTPUT of the human creative mind. AI takes keywords (classic search) and concepts (semantic search), runs frequency of occurrence, and predicts the next word, one word after the next (a toy illustration follows below).
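A toy illustration of the frequency-based, next-word view of AI described above: a bigram model that counts which word most often follows each word. Real LLMs learn transformer weights rather than raw counts; this sketch only makes the “one word after the next” idea concrete.

```python
# Toy illustration of frequency-based next-word prediction. Real LLMs learn
# transformer weights; this bigram model only demonstrates the concept.
from collections import Counter, defaultdict

corpus = "the gene encodes the protein and the protein binds the receptor".split()

# Count how often each word follows each preceding word (bigram counts)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "protein" (its most frequent follower)
```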

@@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

e-Books

Domain-aware Editorials and Curations

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis. The electronic Table of Contents of every e-book is a CONCEPTUAL MASTERPIECE, one unique occurrence in Nature, generated by the Editor or the Editors, who had:

–       Commissioned articles for the e-Book

–       Selected articles from collections of Categories of Research created by domain knowledge experts

–       Reviewed the TOTALITY of the Journal’s Ontology and found new concepts to cover in the e-Book not originally planned

–       Incorporated highlights of lectures given at 100 conferences that LPBI Group’s Dr. Lev-Ari and Dr. Williams had covered in real time, by invitation only, as PRESS.

–       The vision of the Editor-in-Chief of the BioMed e-Series reflects the BIG PICTURE of Patient care delivery.

–       UC Berkeley, PhD ’83

–       Knowledge student and Knowledge worker, 10/1970 to Present

–       Conceptual pioneer of 26 algorithms in Decision Science of Operations Management decision support systems

–       2005 to Present in the Healthcare field.

–       2005-2012: Clinical Nurse Manager in Post-acute SNF settings and Long-term Acute care Hospital Supervisor – had developed a unique view on Diagnosis, Therapeutics and Patient care delivery

–       The BioMed e-Series is the EPITOME of human CREATIVITY in Healthcare, a MAGNUM OPUS created by collaboration of top Scientists, Physicians, and MD/PhDs

–       The 48 e-Books published by LPBI Group make it the ONLY publisher on Amazon.com with 151,000+ pages downloaded since the 1st e-book was published in 6/2013 and since Pay-per-View was launched by Amazon.com in 2016.

Drug Discovery and Development Acceleration (Pfizer uses AI, supercomputing, and ML to streamline R&D timelines): Two volumes of the BioMed e-Series were subjected to Medical Text Analysis with AI, ML, and Natural Language Processing (NLP).

–       Cancer, Volume 1 (In English, part of the Spanish Edition, Series C)

–       Genomics, Volume 2 (In English, part of the Spanish Edition, Series B)

–       GPT capabilities warrant attempting ML analytics on every book among the MUTUALLY EXCLUSIVE 48 URLs provided by Amazon.com to LPBI Group, the Publisher.

–       5 URLs for 5 Bundles in the English Edition: Series A, B, C, D, E – English Edition

–       All books in each series – 5 corpora for a domain-aware Small Language Model in English

–       All books in each series – 5 corpora for a domain-aware Small Language Model in Spanish

–       5 URLs for 5 Bundles in the Spanish Edition: Series A, B, C, D, E – Spanish Edition

 

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralized Trials

-Inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

–       No one had attempted ML on every book; only two books were analyzed by ML.

–       No one had attempted ML on all the Volumes in any of the 5 Series.

–       No one had attempted ML on all 48 books.

–       WHEN that is done, a REVOLUTION in Disease Detection and Diagnostics will be seen for the first time, because the totality of these 48 books represents the Brains of Human Experts

 

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Add the content of all the Books to the Charlie Platform

Partnerships and Education

 

Collaborations: IMI Big Picture for a 3M-sample disease database

AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

Webinars on AI for biomedical data integration

Webinar on AI in Manufacturing

e-Books are the SOURCE for Education

–       Offer the books as Partnership sustenance

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

URLs for the English-language Edition by e-Series:

Series A: Cardiovascular Diseases ($515)

https://www.amazon.com/gp/product/B07P981RCS?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series B: Frontiers in Genomics ($200)

https://www.amazon.com/gp/product/B0BSDPG2RX?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series C: Cancer & Oncology ($175)

https://www.amazon.com/gp/product/B0BSDWVB3H?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

Series D: Immunology ($325)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

Series E: Patient-Centered Medicine ($274)

https://www.amazon.com/gp/product/B0BSDW2K6C?ref_=dbs_p_mng_rwt_ser_shvlr&storeType=ebooks

 

@@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

e-Proceedings: N = +100, and

Tweet Collections: N = +50

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.

Drug Discovery and Development Acceleration (Pfizer uses AI, supercomputing, and ML to streamline R&D timelines): List of all e-Proceedings of 100+ TOP Conferences in Biotech, Medicine, Genomics, and Precision Medicine

https://pharmaceuticalintelligence.com/press-coverage/part-two-list-of-biotech-conferences-2013-to-present/

In these conferences the Frontier of Science was presented; often, findings were revealed BEFORE publication. These Proceedings are the ONLY written record of the events. They are privately held, now for the first time available for Transfer of Ownership.

The Tweet Collections are QUOTES of speakers on record, NOT AVAILABLE ELSEWHERE by name of speaker and affiliation.

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralized Trials

-Inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

Ingest into the Charlie Platform ALL e-Proceedings of ALL Conferences

 

Apply GPT:

Training Data:

–       One conference at a time

–       All Conferences on ONE subject matter, e.g., Immunotherapy, Oncolytic Virus Immunotherapy, Immune Oncology (a dataset-assembly sketch follows below)
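A minimal sketch of assembling such training sets, one conference at a time or pooled by subject matter; the file layout, record fields, and chat-style JSONL format are assumptions for illustration.

```python
# Minimal sketch: assembling GPT fine-tuning sets from e-Proceedings, either
# one conference at a time or pooled by subject. Records are illustrative.
import json
from collections import defaultdict

# Each proceeding record: conference name, subject tag, and curated text
proceedings = [
    {"conference": "AACR 2019", "subject": "Immune Oncology", "text": "..."},
    {"conference": "SITC 2020", "subject": "Oncolytic Virus Immunotherapy", "text": "..."},
    {"conference": "AACR 2021", "subject": "Immune Oncology", "text": "..."},
]

by_subject = defaultdict(list)
for rec in proceedings:
    by_subject[rec["subject"]].append(rec)

def write_jsonl(path: str, records: list) -> None:
    """Write records in a chat-style JSONL format commonly used for fine-tuning."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps({
                "messages": [
                    {"role": "system", "content": f"Conference: {rec['conference']}"},
                    {"role": "user", "content": "Summarize the key findings."},
                    {"role": "assistant", "content": rec["text"]},
                ]
            }) + "\n")

# Option A: one training file per conference
for rec in proceedings:
    write_jsonl(f"{rec['conference'].replace(' ', '_')}.jsonl", [rec])

# Option B: all conferences on one subject pooled into a single file
for subject, recs in by_subject.items():
    write_jsonl(f"{subject.replace(' ', '_')}.jsonl", recs)
```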

Partnerships and Education

–       Collaborations: IMI Big Picture for a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

Use Past Conference Agendas to build Future Conference Agendas

Use Speakers Lists to invite speakers/consultants to your events

Use topics covered in Conferences for Employee training and in-house Leadership development

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

Having access to 100+ e-Proceedings versus not having access to this resource is make-or-break in fine-tuning Corporate Branding: all your competitors attended and sent Speakers

  • LPBI Group’s e-Proceedings are the only record, in one URL

@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

Biological Images selected by Experts embedded in original Text (Prior Art)

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.

Drug Discovery and Development Acceleration (Pfizer uses AI, supercomputing, and ML to streamline R&D timelines): The Gallery of ~8,000 Biological images and captions is a Treasure TROVE for scientific article writing and presentation preparation. This Media Gallery is an art collection of top Scholars in Medicine and Biology.

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralized Trials

-Inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Gallery of ~8,000 Biological images and captions is a Treasure TROVE for Disease Detection and Diagnostics

 

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

  • Ingest into the Charlie Platform the Media Gallery for generation of Medical article drafts

Partnerships and Education

–       Collaborations: IMI Big Picture for a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

@@@@@@

AI Initiative at Big Pharma

i.e., Pfizer

LPBI Group’s Digital IP Asset:

Library of Audio and Video Podcasts

N = +300

Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.

Drug Discovery and Development Acceleration (Pfizer uses AI, supercomputing, and ML to streamline R&D timelines): Review ALL SCIENTIFIC BREAKTHROUGHS

  • Two classification criteria were used by Prof. Marcus W. Feldman and by Dr. Stephen J. Williams to generate the two classifications

https://pharmaceuticalintelligence.com/biomed-audio-podcast-library-lpbi-group/

Clinical Trials and Regulatory Efficiency AI:

-Predictive Regulatory Tools

-Decentralized Trials

-Inventory management

Disease Detection and Diagnostics:

–       ATTR-CM Initiative

–       Rare diseases

Aviva Lev-Ari, PhD, RN, Stephen J. Williams, PhD, and Prof. Marcus W. Feldman: Health Care Policy Analysis derived from the Farewell remarks of AMA President Jack Resneck Jr., MD | AMA 2023 Annual Meeting

LISTEN to Audio Podcast

Future of Medicine

https://pharmaceuticalintelligence.com/2023/06/10/health-care-policy-analysis-derived-from-the-farewell-remarks-from-ama-president-jack-resneck-jr-md-ama-2023-annual-meeting/

Generative AI and Operational Tools:

–       Charlie Platform

–       Scientific Data Cloud AWS powered ML on centralized data

–       Amazon’s SageMaker/Bedrock for Manufacturing efficiency

–       Global Health Grants:

Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care

  • Ingest into the Charlie Platform all 300+ Podcasts for the Foundation’s AI Learning Lab

Partnerships and Education

–       Collaborations: IMI Big Picture for a 3M-sample disease database

–       AI in Pharma AIPM Symposium: Drug discovery and Precision Medicine

–       Webinars on AI for biomedical data integration

–       Webinar on AI in Manufacturing

  • Use Podcasts for Education
  • Use Podcasts as a Hybrid: start the presentation with a Podcast, continue with a live interview

Examples of Audio Podcasts of Scientists formerly on the LPBI Group Team:

  • Podcast with Dr. Larry H. Bernstein, MD, FCAP, Interview by Gail S. Thornton, PhDc, Narrator’s Voice: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/audio-podcasts/podcast-with-dr-larry-h-bernstein-md-fcap-interview-by-gail-s-thornton-phdc-narrators-voice-stephen-j-williams-phd/

  • Podcast with Prof. Marcus W. Feldman, PhD, Biology Department, Stanford University, Interview by Gail S. Thornton, PhDc

https://pharmaceuticalintelligence.com/audio-podcasts/podcast-with-prof-marcus-w-feldman-phd-biology-department-stanford-university-interview-by-gail-s-thornton-phdc/

  • Podcast with Dr. Sudipta Saha, PhD, Interview by Gail S. Thornton, PhDc, Narrator’s Voice: Gail S. Thornton, PhDc

https://pharmaceuticalintelligence.com/audio-podcasts/podcast-with-dr-sudipta-saha-phd-interview-by-gail-s-thornton-phdc-narrators-voice-gail-s-thornton-phdc/

Strategic Focus:

–       $500M R&D reinvestment by 2026 targets AI for Productivity

–       Part of $7.7B cost savings

–       Ethical AI, diverse DBs

–       Global biotech advances: China’s AI in CRISPR

Dr. Lev-Ari Prompted @Grok on 11/18/2025:

AI-Driven Cost Savings in Big Pharma   

@Grok responded:

AI-Driven Cost Savings in Big Pharma

Artificial intelligence (AI) applications across the pharmaceutical value chain—particularly in drug discovery, clinical trials, and operations—have the potential to generate substantial savings for major pharmaceutical companies (often referred to as “Big Pharma”). Based on recent industry reports and analyses from 2024-2025, these savings vary by application but could collectively reach tens of billions annually by the late 2020s. Projections indicate AI could add up to 11% in value relative to revenue for biopharma firms over the next five years through productivity gains and cost reductions. Below, I break down the estimated savings levels by key areas, drawing from credible sources like McKinsey, Deloitte, IQVIA, and others.

  1. Clinical Trials

  • Annual Industry-Wide Savings: By 2030, AI integration into 60-70% of clinical trials could save the pharmaceutical industry $20-30 billion per year through faster timelines, better patient matching, and reduced administrative costs.
  • Per-Trial Cost Reduction: AI can cut trial costs by up to 70% and shorten durations by 80%, enabling quicker regulatory submissions (40% faster) with 50% improved cost efficiency. For example, AstraZeneca reported up to 25% shorter trial durations in cancer drug studies using AI for dose optimization and biomarker identification.
  • Specific Efficiency Gains: Feasibility surveys can see 90% time reductions using AI/ML tools.
  2. Drug Discovery and Development

  • R&D Cost Drop: AI could reduce drug discovery costs by up to 40%, with overall development timelines shrinking from 5 years to 12-18 months (an 80% cut in some cases). From discovery to preclinical candidate stage, timelines can decrease by up to 50%.
  • Total Clinical Development Savings: Projected at $25 billion across the sector, driven by AI-accelerated pipelines.
  3. Operations and Broader Impacts

  • Value Relative to Revenue: Biopharma AI investments may yield up to 11% additional value (e.g., via cost efficiencies and revenue growth) over five years. For related medtech segments, savings could hit 12% of total revenue within 2-3 years.
  • Operating Profits Boost: By 2030, AI could add $254 billion in annual operating profits industry-wide through productivity jumps (e.g., 50-100% in quality control) and lead time reductions (60-70%).
  • Time-to-Market: Up to 10% faster overall, with production KPIs improving revenue by 25%.

These estimates assume scaled adoption, with global AI spending in pharma rising from ~$3 billion in 2025 to $22-25 billion by 2027-2030. Challenges like high initial costs and data infrastructure may temper short-term gains, but early adopters (e.g., Pfizer, Novartis) are already seeing ROI. For context, the global pharma market is projected at $1.6 trillion by 2025, so even modest percentages translate to billions in savings. If you need details on a specific AI use case or company, let me know!

SOURCE

https://x.com/i/grok?conversation=1990851703051104519

 

Dr. Lev-Ari Prompted @Grok on 11/17/2025:

Pfizer, its competitors by size, market share, product lines and Investment in AI technologies

Grok responded

https://x.com/i/grok?conversation=1990458795936354506

Overview of Pfizer and Key Competitors

Pfizer is one of the world’s largest pharmaceutical companies, with a strong focus on innovative medicines and vaccines. Its primary competitors include Merck & Co., Johnson & Johnson (J&J), Roche, Novartis, and now AstraZeneca, based on revenue leadership and market overlap in areas like oncology, immunology, and vaccines. These companies dominate the global pharmaceutical industry, which is projected to reach $1.21 trillion in revenue by 2025. Below, I compare them across size (measured by 2024 revenue and employees), market share (approximate global pharma share based on 2024 revenues), product lines (key therapeutic areas and top products), and investments in AI technologies. Data reflects the latest available figures as of late 2025.
Here are the same comparisons with more concise tables (2024 data, latest available as of Nov 2025):
Size (2024 Revenue & Employees)

Company | Revenue (USD B) | Employees
Merck & Co. | 64.2 | 70,000
Pfizer | 63.6 | 88,000
Roche | ~58.3 | 103,000
J&J (pharma only) | 57.8 | 138,000
AstraZeneca | 54.1 | 94,300
Novartis | ~50.0 | 76,000

Approx. Global Pharma Market Share

Company | Share %
Merck & Co. | 4.3
Pfizer | 4.2
J&J / Roche | 3.9
AstraZeneca | 3.6
Novartis | 3.3

Key Product Lines & Top Sellers (2024 revenue, USD B)

Company | Main Areas | Top Products (revenue)
Pfizer | Vaccines, cardiology, oncology | Eliquis (7.3), Prevnar (6.4), Vyndaqel (5.4)
Merck & Co. | Oncology, vaccines | Keytruda (25.0), Gardasil (8.6)
J&J | Immunology, oncology | Stelara (10.9), Darzalex (9.7)
AstraZeneca | Oncology, CVRM, respiratory | Farxiga (7.7), Tagrisso (6.6), Enhertu (4.0)
Roche | Oncology, neurology, hemophilia | Ocrevus (5.7), Hemlibra (4.0)
Novartis | Cardiology, immunology | Entresto (6.6), Cosentyx (5.2)

AI Investment Snapshot (2025)

Company | AI Highlights (2025) | Scale/Impact
Pfizer | AI cuts discovery from years → 30 days; heavy partnerships | Leader in adoption
AstraZeneca | Evinova AI trials unit; $2.5B AI Beijing hub; 12k+ staff upskilled | Very aggressive (>85% productivity gain)
Merck & Co. | Gen-AI for protocols; 20+ AI “blockbusters” in pipeline | >$1.2B spend
J&J | 900+ AI projects → $500M savings; surgical AI training | Broad internal rollout
Roche | AI diagnostics + drug discovery; $55M Manifold Bio deal | Strong in diagnostics
Novartis | AI pipeline acceleration; data-science alliances | Integrated into $10B R&D
Dr. Lev-Ari’s Prompt:
  • AI in biotech startups
Searched for “(“AI” OR “artificial intelligence”) (“biotech” OR “biotechnology”) startup since:2025-01-01″

AI in Biotech Startups: 2025 Landscape

Artificial Intelligence (AI) is revolutionizing biotech startups by accelerating drug discovery, optimizing clinical trials, and enabling personalized medicine. In 2025, AI applications like generative models for molecule design and predictive analytics for protein structures are slashing R&D timelines from years to months, reducing costs by up to 30%, and boosting success rates. The sector blends biology’s complexity with AI’s data-crunching power, drawing massive VC interest amid ethical debates on gene editing and biosecurity.

Key Trends

  • Explosive Funding Growth: U.S. healthcare VC hit $23B in 2024, with ~30% ($5.6B) targeting AI-biotech—nearly triple 2023 levels. Biopharma startups raised $24.2B across 568 deals, up from $10.1B in 2023. Q3 2025 saw $97B global VC (up 38% YoY), half to AI firms. Mega-rounds ($100M+) surged, focusing on drug discovery and rare diseases.
  • Market Expansion: Global AI-biotech market: $4.6B in 2025, projected to $11.4B by 2030 (20% CAGR). Biotech overall to $5.85T by 2034 (13.6% CAGR), driven by AI in gene editing, microbiome therapeutics, and automation.
  • Compute Demands: Biotech AI requires massive GPUs; Nvidia’s AI sales hit $41.1B in one 2025 quarter (56% YoY). Startups lean on cloud partnerships (e.g., AWS-Recursion) or in-house clusters, with hyperscalers eyeing $3-4T infrastructure by 2030.
  • Regional Hotspots: North America leads (42% share) via FDA support and hubs like Boston/SF; Europe grows via funding/regulatory advances; Asia (e.g., China) via collaborations.
  • Challenges & Ethics: Selective funding favors clinically validated firms; rising M&A (big pharma acquiring AI startups); ethical scrutiny on AI gene editing (e.g., CRISPR enhancements).

 

Notable Startups & Recent Funding

Here’s a snapshot of standout AI-biotech startups, highlighting innovations and 2024-2025 funding:

Startup | Focus Area | Key Innovation | Recent Funding (2025)
Insilico Medicine | Drug Discovery | Generative AI for novel molecules | $100M+ Series D (ongoing)
Recursion Pharmaceuticals | Phenomics & Drug Repurposing | AI mapping diseases via cell images | $50M partnership w/ Nvidia
Absci Corp. | Protein Design | AI-generated antibodies/proteins | $20M from AMD (Jan 2025)
TandemAI | Drug Discovery | ML for molecular simulation | $22M Series A extension
Red Queen Bio | Biosecurity | AI for pathogen detection/design | $15M Seed led by OpenAI
Antiverse | Antibody Design | AI predicting antibody structures | Deal w/ Nxera Pharma (2024)
Deep Genomics | Genomics | AI extracting insights from DNA data | VC round undisclosed
Aizen Therapeutics | Molecule Generation | Proprietary AI engines for trials | Early-stage VC

These firms exemplify AI’s shift from hype to pipeline impact—e.g., Insilico’s AI-discovered drug entered Phase II trials in record time.

Emerging Applications

AI’s top uses in biotech startups:

  1. Drug Discovery: ML predicts drug-target interactions (e.g., AlphaFold-inspired tools cut screening costs 70%).
  2. Genomics & Diagnostics: AI analyzes vast datasets for rare variants, powering precision oncology.
  3. Clinical Trials: Predictive models optimize patient matching, reducing failures by 20-30%.
  4. Synthetic Biology: AI designs custom microbes for therapeutics or biofuels.
  5. Automation Labs: AI-robotics handle experiments autonomously, boosting throughput 10x.

Future Outlook

2025 signals a “quality over quantity” pivot: Investors prioritize AI startups with real-world data and partnerships (e.g., Y Combinator, Google for Startups). M&A will rise as big pharma (Pfizer, Merck) snaps up talent. Yet, compute shortages and regs (e.g., FDA AI guidelines) pose hurdles. For founders, blending AI with wet-lab validation is key to unlocking the next wave of breakthroughs—potentially curing diseases once deemed untreatable. If you’re building or investing, focus on
  • rare diseases or
  • oncology

for the hottest bets.

Additional Competitors by Product Line

@Grok Response

https://x.com/i/grok/share/Ple3W4IDlwhyjUwBUU4JAg2OD

Key Products and Focus Areas

Pfizer’s portfolio spans immunology, oncology, cardiology, endocrinology, neurology, and vaccines. In 2024, its top revenue generators included:

  • Eliquis (apixaban, blood thinner): $7.3 billion (11% of total revenue).
  • Prevnar family (pneumococcal vaccines): $6.4 billion (10%).
  • Paxlovid (COVID-19 antiviral): $5.7 billion (9%).
  • Vyndaqel (tafamidis, for rare heart disease): $5.4 billion (8%).
  • Comirnaty (COVID-19 vaccine): $5.3 billion (8%).
  • Ibrance (palbociclib, breast cancer drug): $4.3 billion (6%).

The company is investing heavily in AI-driven drug discovery (e.g., 2024 partnerships with NVIDIA and Saama) and next-gen therapies like RSV vaccines, gene therapies for muscular dystrophy, and oral GLP-1 drugs for obesity/diabetes. Challenges include patent expirations (e.g., Eliquis in 2026-2028) and biosimilar competition, but Pfizer’s diversified pipeline positions it for mid-single-digit growth in 2025.

Pfizer’s Main Competitors

Pfizer operates in a highly competitive $1.5 trillion global pharma market, where rivals vie for dominance in oncology, vaccines, immunology, and emerging areas like obesity and gene therapy. Its primary competitors are fellow “Big Pharma” giants, often overlapping in therapeutic areas. Based on revenue, market share, and direct product rivalries, here are the top ones (ranked roughly by 2024 revenue for context):

Competitor | Headquarters | 2024 Revenue (Pharma Segment) | Key Strengths & Overlaps with Pfizer | Notable Products
Johnson & Johnson (J&J) | New Brunswick, NJ, USA | $88.8 billion (total; pharma ~$57.8 billion) | Diversified (pharma, devices, consumer); strong in oncology, immunology, vaccines. Competes directly in blood thinners and cancer therapies. | Darzalex (multiple myeloma), Stelara (psoriasis), Imbruvica (blood cancer)
Merck & Co. | Rahway, NJ, USA | $64.2 billion | Oncology leader; vaccines and antivirals. Keytruda outsells Pfizer’s Ibrance; rivals in HPV/pneumococcal vaccines. | Keytruda (immunotherapy, $29.5 billion), Gardasil (HPV vaccine)
Roche Holding AG | Basel, Switzerland | ~$68 billion (group; pharma ~$50 billion) | Diagnostics + pharma powerhouse; dominates oncology and neurology. Competes in antibody-drug conjugates post-Pfizer’s Seagen buy. | Ocrevus (MS), Hemlibra (hemophilia), Avastin (cancer)
AbbVie Inc. | North Chicago, IL, USA | $56.3 billion | Immunology and neuroscience focus; biosimilars erode shared markets like Humira vs. Pfizer’s Enbrel alternatives. | Humira (rheumatoid arthritis), Skyrizi (psoriasis), Rinvoq (arthritis)
Novo Nordisk A/S | Bagsværd, Denmark | ~$40 billion (est.) | Diabetes/obesity specialist; exploding GLP-1 demand challenges Pfizer’s emerging obesity pipeline. | Ozempic/Wegovy (semaglutide for diabetes/weight loss)
Eli Lilly and Co. | Indianapolis, IN, USA | ~$45 billion (est.; projecting $58-61 billion in 2025) | Diabetes/obesity and neurology; Mounjaro rivals Pfizer’s danuglipron in GLP-1 space. | Mounjaro/Zepbound (tirzepatide), Verzenio (breast cancer)
AstraZeneca PLC | Cambridge, UK | $50.95 billion | Oncology, respiratory, and cardiovascular; Tagrisso competes with Pfizer’s Lorbrena in lung cancer. | Tagrisso (lung cancer), Farxiga (diabetes/heart), Imfinzi (immunotherapy)
Novartis AG | Basel, Switzerland | ~$48 billion | Gene therapy and immunology; rivals in rare diseases and oncology. | Cosentyx (psoriasis), Entresto (heart failure), Kisqali (breast cancer)
Bristol-Myers Squibb (BMS) | New York, NY, USA | $47.8 billion | Oncology and immunology; Opdivo directly competes with Pfizer’s Bavencio in checkpoint inhibitors. | Opdivo (immunotherapy), Eliquis (co-developed with Pfizer, but BMS handles ex-US)
Sanofi S.A. | Paris, France | ~$46 billion | Vaccines and rare diseases; Dupixent challenges in immunology; rivals Prevnar in pneumococcal vaccines. | Dupixent (atopic dermatitis), Lantus (diabetes)

These competitors collectively hold over 50% of the global pharma market share. Emerging threats include biotech upstarts like Moderna (mRNA vaccines, directly rivaling Comirnaty) and generics firms like Teva (eroding post-patent revenues). In 2025, competition intensifies in obesity (Novo/Lilly lead) and oncology (Merck/Roche dominate), with tariffs and pricing pressures (e.g., US executive orders) affecting all.

Pfizer’s edge comes from its scale and partnerships (e.g., BioNTech for mRNA), but it must navigate revenue dips from COVID fade-out and biosimilars. For 2025 projections, analysts expect 6-9% growth across the sector, with Pfizer targeting modest gains through oncology expansions. If you’d like deeper dives on specific rivals, financials, or products, let me know!

AI Acquisitions by Big Pharma


AI in Pharma: From Tool to Value Creator 🚀

Over the past 5 years, pharma’s AI footprint has expanded rapidly and the investment trends tell a clear story.

💡 ~60% of collaborations and acquisitions now focus on Target Discovery and Validation — pharma’s main frontier to shorten early-stage R&D timelines and boost success rates through Precision Biology.
⚗️ The second-largest area (~25%) centers on AI-driven Drug Design and Molecule Optimization, where algorithms accelerate Candidate Selection, refine molecular properties, and enable New Modalities such as Biologics and Cell and Gene Therapies.
🏆 Among top players, Eli Lilly and Company stands out as the most active investor — making AI a true Strategic Capability, not just another research tool.

The takeaway?
AI isn’t just supporting pharma anymore, it’s reshaping it.

Source: Bita Sehat, PhD, MBA

#AI #lifesciences #pharmaceuticals #techbio #biotech


Article was summarized by @Grok on 11/29/2025
Summary of the Article:
“Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma”
This November 15, 2025, article on PharmaceuticalIntelligence.com (curated by LPBI Group) makes a compelling case for leveraging LPBI’s decade-plus of expert-curated digital IP as a high-fidelity, rights-clean training corpus to supercharge AI-driven innovation in Big Pharma. It contrasts the limitations of generic LLMs (like GPT) with the need for domain-specific, ontologized biomedical data, positioning LPBI’s assets as a “defensible moat” for accelerating R&D in drug discovery, precision medicine, and beyond. The piece blends market analysis of pharma giants with a deep dive into LPBI’s portfolio, emphasizing its multimodal, metadata-rich structure for NLP, knowledge graphs, and fine-tuning SLMs/LLMs.

Introduction

The thesis: In an era where AI investments in pharma (e.g., Eli Lilly’s aggressive funding) focus on slashing R&D timelines by 60%+ in target discovery and 25% in drug design, LPBI’s proprietary corpus—built since 2012 via human curation, digitization, and ontology mapping—offers unmatched authenticity over web-scraped data. It addresses GPT’s biomedical shortcomings (e.g., failing to infer fusion proteins or noncanonical pathways without curated inputs) and highlights LPBI’s foresight in text analysis for generative AI augmentation. A key quote from Data2Discovery: “We are able to improve drug discovery now as well as demonstrating new fast-cycle AI-driven processes that will have a revolutionary impact on drug discovery if fully implemented.”

Portfolio Overview

LPBI’s ~9 GB, debt-free, multimodal corpus is privately held, expert-curated (e.g., by Prof. Marcus W. Feldman and Dr. Stephen J. Williams), and ingest-ready for AI pre-training/evaluations. It spans five key asset classes, each with metadata exports, timestamps, crosslinks, and centralized rights for model training:

 

Asset Class I: Scientific Articles
Description & Size: 6,250+ articles on PharmaceuticalIntelligence.com (~2.5M views); covers genomics, oncology, immunology, etc.
Unique Value Proposition: Live ontology, author/role labels, view histories; enables temporal NLP for trend analysis.

Asset Class II: e-Books
Description & Size: 48 bilingual (English/Spanish) volumes in 5 BioMed e-Series (e.g., Series A: Cardiovascular, 6 vols., $515 total; Series E: Patient-Centered, 4 vols., $274); 151,000+ page downloads; 2,728 articles.
Unique Value Proposition: Peer-reviewed, senior-editor TOCs; pay-per-view model proves demand; ideal for entity-relationship extraction.

Asset Class III: e-Proceedings
Description & Size: 100+ from biotech/genomics conferences (2013–2025); +50 tweet collections as speaker quotes with affiliations.
Unique Value Proposition: Real-time event curation; captures emerging insights for knowledge graph augmentation.

Asset Class V: Biological Images
Description & Size: 7,500+ images in Digital Art Media Gallery; embedded as prior art in texts.
Unique Value Proposition: Expert-contextualized visuals; supports multimodal AI for image-text pairing in diagnostics.

Asset Class X: Audio Podcasts
Description & Size: 300+ interviews with scientific leaders (e.g., Nobel laureates like Jennifer Doudna); classified by themes like CRISPR, mRNA vaccines.
Unique Value Proposition: Transcripts + NLP WordClouds; adds auditory/verbal depth for voice-enabled AI copilots.

The portfolio’s “living ontology” allows seamless integration into tools like InfraNodus for concept mapping.

AI Training Relevance

Unlike PubMed’s unstructured dumps, LPBI’s assets are pre-annotated for concept extraction (e.g., gene-disease-drug dyads), reducing hallucinations and bias in LLMs; a minimal sketch of what such dyads look like in machine-readable form follows the Applications list below. A case study integrates curation with ChatGPT-5: Manual ontology + knowledge graphs uncovered novel WNT/Hedgehog interactions in lung cancer, generating research questions like: “How does the interaction between [[EGFR]] mutations and sex-specific gene alterations, including [[RBM10]], influence treatment outcomes in lung adenocarcinoma?” This hybrid approach outperforms solo GPT, proving the corpus’s role in trustworthy biomedical inference.

Applications

  • Drug Discovery: ML prompts on dyads to mimic Pfizer’s AI (e.g., reducing discovery from years to months); supports biologics/cell therapies.
  • Precision Medicine: 547 articles for patient subtyping; 3,615 disease-focused pieces for predictive modeling.
  • Diagnostics & Rare Diseases: 560 rare disease articles + 30 on ATTR-CM for early detection (e.g., via Pfizer’s ASPIRE AI).
  • Clinical/Regulatory: Streamlines trials, evidence synthesis, and content generation (e.g., via Pfizer’s “Charlie” copilot, saving hours on lit reviews).
  • Generative Tools: Fine-tune for compliant, domain-aware outputs in medical affairs/safety.
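To make those gene-disease-drug dyads concrete, here is a minimal, hypothetical sketch of loading such triples into a knowledge graph with Python’s networkx; the triples and entity names below are illustrative placeholders, not records drawn from the LPBI corpus.

```python
# Hedged sketch: load curated (subject, relation, object) dyads/triples into a
# directed knowledge graph and query it. Placeholder data, not LPBI records.
import networkx as nx

triples = [
    ("EGFR", "mutated_in", "lung adenocarcinoma"),
    ("RBM10", "altered_in", "lung adenocarcinoma"),
    ("osimertinib", "targets", "EGFR"),
]

kg = nx.DiGraph()
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

# Query: which entities point at the disease node, and via which relation?
for entity in kg.predecessors("lung adenocarcinoma"):
    print(entity, kg[entity]["lung adenocarcinoma"]["relation"])
```

Once triples are in this form, the same graph can feed concept-mapping tools or serve as grounding context for an LLM prompt.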

Challenges & Solutions

  • Data Quality/Bias: Unstructured text leads to GPT errors; solved by expert curation + ontology for accurate, diverse inputs.
  • Regulatory Compliance: Rights fragmentation; addressed via LPBI’s centralized assignment for NDA-secure licensing.
  • Scalability: Generic data overload; LPBI’s focused, high-signal corpus enables fast-cycle AI without noise.

Recommendations

Urge Big Pharma (e.g., Pfizer, Merck—table highlights Merck’s $64.2B revenue, Keytruda focus) to acquire the full portfolio for end-to-end knowledge bases, or license under NDA for SLM/LLM training (English/Spanish).
  • Emphasizes “buy the moat” for competitive edge in 2025’s 6–9% sector growth (obesity/oncology hotspots).
Contact: avivalev-ari@alum.berkeley.edu.
Conclusion
LPBI’s corpus isn’t just data—it’s a “scientific revolution enabler,” blending human creativity with AI for transformative pharma impact. Validated on extraction tasks, it’s primed for partnerships, outshining combinatorial predictions with authentic, curated depth. This article builds on prior LPBI mappings (e.g., our podcast TOC summary), reinforcing the portfolio’s strategic fit for AI.
For deeper analysis or related assets, let me know!

Read Full Post »

Real Time Conference Coverage: Advancing Precision Medicine Conference, Afternoon Session October 4 2025


Reporter: Stephen J. Williams, PhD

Leaders in Pharmaceutical Business Intelligence will be covering this conference LIVE over X.com at

@pharma_BI

@StephenJWillia2

@AVIVA1950

@AdvancingPM

using the following meeting hashtags

#AdvancingPM #precisionmedicine #WINSYMPO2025

1:40 – 2:30

AI in Precision Medicine

Dr. Ganhui Lan
Dr. Xiaoyan Wang
Dr. Ahmad P. Tafti
Jen Gilburg

Jen Gilburg (moderator), Deputy Secretary of Technology and Entrepreneurship, Dept. of Community and Economic Development, Commonwealth of Pennsylvania

  • AI will help reduce the time for drug development, especially in the early discovery phase, but will eventually help in all phases
  • Ganhui: drug regulators might be more amenable to AI in clinical trials; AI may be used differently by clinicians
  • a nonprofit in Philadelphia is using AI to repurpose drugs (this site has posted on this, and the article will be included here)
  • Ganhui: the top challenge of AI in pharma is its rapid evolution; you have to have a core understanding of your needs and dependencies and a realistic view of what can be done; AI has to have iterative learning; there is also a huge vertical challenge, meaning how to align the use of AI across the healthcare vertical chain of clinicians, payers, etc.
  • Ganhui sees a challenge for health companies in understanding how to apply AI from business to technology; the AI needs of AI companies differ from those of healthcare companies
  • 95% of AI projects are not successful because most projects are very discrete use cases

2:00-2:20

Building Precision Oncology Infrastructure in Low- and Middle-Income Countries

Razelle Kurzrock, MD

Sewanti Limaye, MD, Director, Medical & Precision Oncology; Director Clinical and Translational Oncology Research, Sir HN Reliance Foundation Hospital & Research Centre, Mumbai, India; Founder, Nova Precision AI; Co-Founder, Iylon Precision Oncology; Co-Chair, Asia Pacific Coalition Against Lung Cancer; Co-Chair,  Asia Pacific Immuno-Oncology; Member,  WIN Consortium

  • globally there are 60 precision medicine initiatives, but in reality fewer, because many are in small countries
  • three out of five individuals in India die of cancer
  • precision medicine is a must, and a hub-and-spoke model is needed in these places; Italy uses this hub-and-spoke approach; through the spokes you enable the small places and bring them into the network so they have the know-how and access to precision medicine
  • in low-income countries the challenge starts with biopsy; then diagnosis and biomarkers are an issue; then treatment decisions are a problem, as they may not have access to molecular tumor boards
  • prevention is always a difficult task in LMICs (low- and middle-income countries)
  • there are ten times more patients in India than in the US (triage can be insurmountable)
  • ICGA Foundation: Indian Cancer Genome Atlas
  • in India, mutational frequencies vary with geographical borders, e.g., EGFR mutations or KRAS mutations
  • the genomic landscape of ovarian cancer in India is totally different from that in TCGA data
  • even the pathways altered in ovarian cancer differ between North America and India
  • this MAY mean that biomarker panels need to be adjusted for the countries they are used in
  • the molecular data has to be curated for the Indian cases to be submitted to a tumor board
  • twenty diagnostic tests are on the Indian market, like TruCheck, which uses liquid biopsy
  • they are also tailoring diagnostics and treatment for India, getting FDA fast-track approvals

2:20-2:40

Co-targeting KIT/PDGFRA and Genomic Integrity in Gastrointestinal Stromal Tumors

Razelle Kurzrock, MD

Lori Rink, PhD, Associate Professor, Fox Chase Cancer Center

  • GISTs are the most common mesenchymal tumor in the GI tract
  • they used to be misdiagnosed; GIST was considered a leiomyosarcoma
  • largely asymptomatic tumors with a poor prognosis
  • very refractory to genotoxic therapies
  • driven by RTK KIT/PDGFRA gain-of-function mutations
  • imatinib (Gleevec) is used for unresectable GIST; however, the vast majority of even initial responders become resistant to therapy and the cancer returns
  • there is a mutation map of hotspot mutations and their sensitivity to imatinib
  • however, resistance has also emerged to ripretinib, in the ATP-binding pocket
  • over the course of treatment you get polyclonal resistance
  • they performed a kinome analysis; Wee1 looked like a potential target
  • mouse studies (80 day) showed good efficacy
  • avapritinib has some neurotoxicity and was used in a PDGFRA-mutant GIST model which is resistant to imatinib
  • but if a Wee1 inhibitor is used with the TKI, the dose of avapritinib can be lowered
  • co-targeting KIT/PDGFRA and WEE1 increases replicative stress
  • they are using PDX models to test these combinations
  • the combination creates genomic instability

 

Read Full Post »

Real Time Coverage: Advancing Precision Medicine Annual Conference, Philadelphia PA, November 1-2, 2024

Reporter: Stephen J. Williams, Ph.D.

Source: https://www.advancingprecisionmedicine.com/apm-annual-conference-and-exhibition-in-philadelphia/ 

This event will be covered by the LPBI Group on Twitter.  Follow on

@Pharma_BI

@StephenJWillia2

@Aviva1950

@AdvancingPM

using the following meeting hashtags

#AdvancingPM #precisionmedicine

The Advancing Precision Medicine (APM) Annual Conference 2024 will take place at the Pennsylvania Convention Center in Philadelphia,  November 1-2, 2024. Located in the heart of the biopharma ecosystem and with easy access to some of the most renowned academic and research institutions in the world, the APM Annual Conference 2024 will attract all segments of the precision medicine landscape.

The event will consist of two parallel tracks composed of keynote addresses, panel discussions and fireside chats which will encourage audience participation. Over the course of the two-day event leaders from industry, healthcare, regulatory bodies, academia and other pertinent stakeholders will share an intriguing and broad scope of content.

This event will consist of three immersive tracks, each crafted to explore the multifaceted dimensions of precision medicine. Delve into Precision Oncology, where groundbreaking advancements are reshaping the landscape of cancer diagnosis and treatment. Traverse the boundaries of Precision Medicine Outside of Oncology, as we probe into the intricate interplay of genetics, lifestyle, and environment across a spectrum of diseases and conditions including rare disease, cardiology, ophthalmology, and neurodegenerative disease. Immerse yourself in AI for Precision Medicine, where cutting-edge technologies are revolutionizing diagnostics, therapeutics, and patient care. Additionally, explore the emerging frontiers of Spatial Biology and Multi-Omics, where integrated approaches are unraveling the complexities of biological systems with unprecedented depth and precision.

Whether you are a seasoned researcher, a dedicated clinician, or a visionary industry professional, this conference serves as a vibrant hub of knowledge exchange, collaboration, and innovation. Elevate your expertise, expand your network, and chart the course of your career trajectory amidst a community of like-minded individuals.  Join us as we embark on this transformative journey, where the possibilities are as limitless as the potential of precision medicine itself.

Agenda – What’s on when

7:30 – 8:25

Registration and Check-in          Meeting Room 203          Philadelphia Convention Center

8:25 – 8:30

Welcome and Introduction

8:30 – 9:00

Opening Keynote

Advancing Precision Medicine in the Prevention and Treatment of Cardiometabolic Disease (CME Eligible)

Daniel Rader

Daniel Rader, Penn Medicine and Children’s Hospital of Philadelphia

9:00 – 10:20

9:00-10:20

Diagnosis to Treatment – A Case Study in Non Small Cell Lung Cancer

Jason Crites

Moderator: Jason Crites, Assurance Health Data

Miriam Bredella, NYU Langone Health

Robert Dumanois

Rob Dumanois, Thermo Fisher Scientific

Joe Lennerz

Joe Lennerz, BostonGene

10:20 – 10:50

Networking, Exhibits and Product Presentations

10:25-10:35  PRODUCT PRESENTATION  204C

The Genexus Integrated Sequencer System:
NGS Results in 24 hours for Oncology Genomic Profiling

Jeff Smith,  Thermo Fisher Scientific

10:35-10:45  PRODUCT PRESENTATION  204A

Shifting the Paradigm in Patient Management with MRD Testing: Why Evidence-Generated Performance and Experience is Key

Karen Lin, Natera

10:50 – 12:50

10:50-11:50

Who Needs Liquid Biopsy? Opportunities to Increase Access and Improve Outcomes

Nicole St. Jean, GSK

Phil Febbo,  Veracyte, Inc.

Andrea Ferreira-Gonzalez, Virginia Commonwealth University

Lauren Leiman, BloodPAC

Nicole Sheahan, Global Colon Cancer Association

11:50-12:50

Advancing Digital Pathology and Precision Medicine – Where Are We Now?

Shruti Mathur, Genentech

Luke Benko, Roche Diagnostics

Kimberly Gasuad, JK Life Sciences

Eric Walk, PathAI

10:50-11:10

Real World Data vs Multi Modal Omics Data for Therapeutic Discovery (CME Eligible)

Adam Resnick, CHOP

11:10-11:30

An Academic Perspective on Rare Disease Target Discovery to Commercial Treatment Development (CME Eligible)

Hakon Hakonarson

Hakon Hakonarson, CHOP

11:30-11:50

NCATS Perspective on Success and Failures of Drug Repurposing for Rare Disease (CME Eligible)

PJ Brooks, NIH

11:50-12:10

Pharma Perspective and Realities (CME Eligible)

Sundeep Dugar, Rarefy Therapeutics

12:10-12:50

A Panel Discussion: Scaling Precision Therapeutic Development for Rare Disease (CME Eligible)

Marni Falk

Marni Falk, CHOP

Stephen Ekker, University of Texas at Austin

Christine Nguyen, FDA

Frank Sasinowski, Hyman, Phelps & McNamara

Adam Resnick, CHOP

Hakon Hakonarson

Hakon Hakonarson, CHOP

Sundeep Dugar, Rarefy Therapeutics

PJ Brooks, NIH

12:50 – 1:50

Lunch & Product Presentations

1:10-1:25  PRODUCT PRESENTATION  204C

The Power of ctDNA Testing in Therapy Selection and Recurrence Monitoring

Taylor Jensen,  LabCorp

1:50 – 3:50

1:50-3:50

Unlocking the Next Quantum Leap in Precision Medicine – A Town Hall Discussion (CME Eligible)

Co-Chairs

Amanda Paulovich

Amanda Paulovich, Fred Hutchinson Cancer Center

Henry Rodriguez

Henry Rodriguez, NCI/NIH

Eric Schadt

Eric Schadt, Pathos

Participants

Ezra Cohen, Tempus

Jennifer Leib, Innovation Policy Solutions

Susan Monarez, ARPA-H

Nick Seddon, Optum Genomics 

Giselle Sholler, Penn State Hershey Children’s Hospital

Janet Woodcock

Janet Woodcock, Former FDA

1:50-2:50

Advancing Precision Medicine in Non-Oncology Therapeutic Areas

Moderator: Mike Montalto, Amgen

Scott Friedman, Mt. Sinai

Sana Syed, University of Virginia

Lei Zhao, Amgen

2:50-3:20

Towards a Precision Neuroimmunology Platform (CME Eligible)

Amit Bar-Or, Penn Medicine

3:20-3:50

3:50 – 4:20

Networking and Exhibits

4:20 – 6:15

4:20-4:45

Advancing Precision Medicine: Polygenic Risk Scores and Beyond (CME Eligible)

Dokyoon Kim, Penn Medicine

4:45-5:30

The Rocky Road to Clinical Trial Diversity (CME Eligible)

Ysabel Duron, The Latino Cancer Institute

Porscha Johnson, PJW Clinical Pharmacy Consulting

Victor LaGroon, Department of Veterans Affairs

5:30-6:15

In the Rising Age of Women’s Health, How Do We Build Diagnostics to Last?

Oriana Papin Zoghbi, AOADx

Sarah Huah, Johnson & Johnson

6:30 – 7:00

Evening Keynote

Reimagining Health Equity in the Era of Precision Medicine (CME Eligible)

Rick Kittles

Rick Kittles, Morehouse School of Medicine

7:00 – 7:45

Cocktail Networking Reception 

November 02, 2024

8:00-8:55

Registration and Check-in          Meeting Room 203          Philadelphia Convention Center

Read Full Post »

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT

Curator: Stephen J. Williams, Ph.D.

Chatbots are being used in multiple interdisciplinary areas in research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.

“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.”

The field of bioinformatics is a natural fit for incorporating this technology.  The curated information below shows some examples of the use of this technology in bioinformatics related to cancer research.

The text in this reference source was actually generated by ChatGPT (https://chat.openai.com/chat)

Source: https://omicstutorials.com/using-chatgpt-in-bioinformatics-and-biomedical-research/

How ChatGPT can be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.

For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”

It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.

Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”
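As a hedged illustration of this kind of prompt-driven description, the sketch below uses the official openai Python client; the model name, the toy expression values, and the prompt wording are assumptions made for the example, not details from the article.

```python
# Hedged sketch: ask a ChatGPT-style model to summarize a toy expression profile.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

expression = {"gene": "GENE_X", "liver": 9.1, "kidney": 8.7, "brain": 4.2, "heart": 3.9}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write concise summaries of gene expression data."},
        {"role": "user", "content": f"Describe this tissue expression profile in one sentence: {expression}"},
    ],
)
print(response.choices[0].message.content)
```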

 

Thereby ChatGPT, at its simplest level, could be used to ask general questions like “What is the function of gene product X?”, and ChatGPT could give a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank.  Or even “What are potential interactors of Gene X, validated by yeast two-hybrid?” without even going to the curated interactome databases or using expensive software like Genie.

Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”

It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.

To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.

Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:

First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.

Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.

Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.

It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.
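A minimal sketch of that fine-tuning loop is shown below using Hugging Face’s Transformers and Datasets libraries, with GPT-2 standing in for a ChatGPT-class model (the hosted ChatGPT itself is not fine-tuned this way); the training file name is a hypothetical placeholder for the variant-description corpus described above.

```python
# Hedged sketch: causal-LM fine-tuning on a corpus of variant descriptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "variant_descriptions.txt": one description per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "variant_descriptions.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="variant-gpt", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, the model is prompted with seed text exactly as described in the paragraph above.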

 

To train a language model like chatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:

Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.

Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.

Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as chatGPT on your specific dataset, or you can train a new model from scratch.
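For the preprocessing step, a minimal sketch with NLTK might look like the following; the sample sentence is an illustrative placeholder for real paper text.

```python
# Hedged sketch: tokenize, lowercase, and lemmatize text, then build a vocabulary.
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import wordpunct_tokenize

nltk.download("wordnet", quiet=True)   # lexicon used by the lemmatizer
nltk.download("omw-1.4", quiet=True)

text = "EGFR mutations were detected in the resistant tumors."
tokens = [tok for tok in wordpunct_tokenize(text.lower()) if tok.isalpha()]

lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(tok) for tok in tokens]  # e.g. "mutations" -> "mutation"

vocab = sorted(set(lemmas))
print(vocab)
```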

ChatGPT could also be useful for sequence analysis

A few examples of sequence analysis tasks for which ChatGPT could be useful include:

  1. Protein structure
  2. Identifying functional regions of a protein
  3. Predicting protein-protein interactions
  4. Identifying protein homologs
  5. Generating Protein alignments

All this could be done without having access to UNIX servers or proprietary software or knowing GCG coding

ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:

Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.

Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.

Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.

Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.

It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.

 

ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:

Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.

Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.

Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.

Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.

Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.
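As a simple, transparent counterpart to these model-driven uses, the sketch below shows dictionary-based entity extraction from an abstract with spaCy’s PhraseMatcher; the drug/gene lexicons and the sample abstract are illustrative placeholders, not an LPBI or ChatGPT workflow.

```python
# Hedged sketch: lexicon-based drug/gene mention extraction with spaCy.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # lightweight, tokenizer-only pipeline
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("DRUG", [nlp(term) for term in ["imatinib", "bicalutamide"]])
matcher.add("GENE", [nlp(term) for term in ["KIT", "PDGFRA", "EGFR"]])

abstract = "Resistance to imatinib in GIST is driven by secondary KIT and PDGFRA mutations."
doc = nlp(abstract)
for match_id, start, end in matcher(doc):
    print(nlp.vocab.strings[match_id], doc[start:end].text)
```

Real pipelines would swap the hand-made lexicons for curated vocabularies and add disambiguation, but the extracted (label, mention) pairs are exactly the structured data the text-mining uses above consume.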

 

PLEASE WATCH VIDEO

 

In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.

He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:

  1. Using ChatGPT as a search engine: He finds more useful and relevant search results than a standard Google or Yahoo search.  This saves time, as one does not have to pore through multiple pages to find information.  However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page.  This feature of ChatGPT is probably why Microsoft invested heavily in OpenAI in order to incorporate ChatGPT into its Bing search engine, as well as Office Suite programs

 

  2. ChatGPT to help with coding projects: Bioinformaticians will spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes). One can use ChatGPT to find such code and then have it assist in debugging that code for any flaws

 

  3. ChatGPT to document and add coding comments: When writing code it is useful to add comments periodically to assist other users in determining how the code works and how the program flow works, including returned variables.

 

One of the comments was interesting and directed one to use BIOGPT instead of ChatGPT

 

@tzvi7989


0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts

So what is BIOGPT?

BioGPT https://github.com/microsoft/BioGPT

 

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Tips:

  • BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
  • BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
  • The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by kamalkraj. The original code can be found here.
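Following those tips, a minimal generation sketch with the Hugging Face checkpoint might look like the following; the prompt and decoding settings are arbitrary illustrative choices.

```python
# Hedged sketch: beam-search generation with the microsoft/biogpt checkpoint
# via Hugging Face Transformers.
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("Bicalutamide is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```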

 

This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a GitHub project developed by Microsoft Research, and it is implemented in Python.

License

BioGPT is MIT-licensed. The license applies to the pre-trained models as well.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

As of right now this does not seem to be fully open access; a sign-up is required!

We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.

Model | Description | Download
BioGPT | Pre-trained BioGPT model checkpoint | URL link, 🤗 Hub link
BioGPT-Large | Pre-trained BioGPT-Large model checkpoint | URL link, 🤗 Hub link
BioGPT-QA-PubMedQA-BioGPT | Fine-tuned BioGPT for question answering task on PubMedQA | link
BioGPT-QA-PubMedQA-BioGPT-Large | Fine-tuned BioGPT-Large for question answering task on PubMedQA | link
BioGPT-RE-BC5CDR | Fine-tuned BioGPT for relation extraction task on BC5CDR | link
BioGPT-RE-DDI | Fine-tuned BioGPT for relation extraction task on DDI | link
BioGPT-RE-DTI | Fine-tuned BioGPT for relation extraction task on KD-DTI | link
BioGPT-DC-HoC | Fine-tuned BioGPT for document classification task on HoC | link

 

As of this date (6/12/2023), there are three papers referencing BioGPT on PubMed (abstracts and links below)

 

BioGPT: generative pre-trained transformer for biomedical text generation and mining.

Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661

We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. …Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fl …

Here they show the advantages of BioGPT over GPT-2 in examples of generated biomedical text.

Table 7. Example of generated text

Input: Bicalutamide

GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.

BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)

GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.

BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine

GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.

BioGPT: Apricitabine is an oral prodrug of 5-aza-2 ’-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine

GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.

BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen

GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.

BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451

GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31-32-33-34-35-36-37-38-39-40-41-42-43-44-45-46-47-48-49-50…

BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021

GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.

BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).

Note how BioGPT is more descriptive and accurate!

EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information.

Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012

The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.

Results: We evaluated the classification part on the ‘DDIs 2013’ dataset and ‘DTIs’ dataset, achieving F1 scores of 0.842 and 0.720, respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified them against the existing ground truth to confirm the filtered sentences. The generated sentences that are not recorded in the DrugBank and DDIs 2013 datasets demonstrated the potential of EGFI to identify novel drug relationships.

Availability: Source code are publicly available at https://github.com/Layne-Huang/EGFI.

 

GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.

Jin Q, Yang Y, Chen Q, Lu Z. ArXiv. 2023 May 16:arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.

While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.
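For context on what “using the Web APIs of NCBI” means in practice, below is a hand-written sketch of the kind of E-utilities call GeneGPT learns to emit; the endpoint is NCBI’s real esearch service, while the query term is an illustrative choice.

```python
# Hedged sketch: query NCBI Entrez E-utilities (esearch) for a gene symbol.
import requests

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "gene", "term": "BRCA1[sym] AND human[orgn]", "retmode": "json"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["esearchresult"]["idlist"])  # matching Entrez Gene IDs
```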

PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT

This one entitled

Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP

 

gives a good general description of this new Microsoft Research project and its usefulness in scanning 15 million PubMed abstracts while returning ChatGPT-like answers.

 

Please note one of the comments which is VERY IMPORTANT


@rufus9322


bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.

 

Much like Microsoft Azure, it seems BioGPT is meant for developers who have advanced programming skills.  It seems odd, then, to be paying programmers multi-K salaries when one or two Key Opinion Leaders from the medical field might suffice, but I am sure Microsoft will figure this out.

 

ALSO VIEW VIDEO

 

 

This is a talk from Microsoft on BioGPT

 

Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include

Medicine with GPT-4 & ChatGPT
Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

Proposal for New e-Book Architecture: Bi-Lingual eTOCs, English & Spanish with NLP and Deep Learning results of Medical Text Analysis – Phase 1: six volumes

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

 

20 articles in Natural Language Processing

142 articles in BioIT: BioInformatics

111 articles in BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics

 

Read Full Post »

Use of Systems Biology for Design of inhibitor of Galectins as Cancer Therapeutic – Strategy and Software

Curator: Stephen J. Williams, Ph.D.

Below is a slide representation of the overall mission 4 to produce a PROTAC to inhibit Galectins 1, 3, and 9.

 

Using A Priori Knowledge of Galectin Receptor Interaction to Create a BioModel of Galectin 3 Binding

Now, after collecting literature from PubMed on “galectin-3” AND “binding” to identify articles containing kinetic data, we generate a WordCloud on the articles.
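A hedged sketch of that literature-to-WordCloud workflow appears below, assuming Biopython’s Entrez module and the wordcloud package; the contact email and retmax are placeholders you would set yourself.

```python
# Hedged sketch: search PubMed, fetch abstracts, and render a WordCloud.
from Bio import Entrez
from wordcloud import WordCloud

Entrez.email = "you@example.com"  # NCBI requires a contact address (placeholder)

search = Entrez.read(
    Entrez.esearch(db="pubmed", term='"galectin-3" AND "binding"', retmax=100)
)
handle = Entrez.efetch(db="pubmed", id=search["IdList"], rettype="abstract", retmode="text")
abstracts = handle.read()

WordCloud(width=800, height=400).generate(abstracts).to_file("galectin3_wordcloud.png")
```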

The following file contains the articles needed for BioModels generation.

https://pharmaceuticalintelligence.com/wp-content/uploads/2022/12/Curating-Galectin-articles-for-Biomodels.docx

 

From the WordCloud we can see that this corpus of articles describes galectin binding to the CRD (carbohydrate recognition domain).  Interestingly, there are many articles which describe van der Waals interactions as well as electrostatic interactions.  Certain carbohydrate modifications like LacNAc and Gal 1,4 may be important.  Many articles describe the bonding as well as surface interactions.  Many studies have been performed with galectin inhibitors like TDGs (thio-digalactosides), e.g., TAZTDG (3-deoxy-3-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside).  This led to an interesting article

Dual thio-digalactoside-binding modes of human galectins as the structural basis for the design of potent and selective inhibitors

Sci Rep. 2016 Jul 15;6:29457. doi: 10.1038/srep29457. Free PMC article.

Abstract

Human galectins are promising targets for cancer immunotherapeutic and fibrotic disease-related drugs. We report herein the binding interactions of three thio-digalactosides (TDGs) including TDG itself, TD139 (3,3′-deoxy-3,3′-bis-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside, recently approved for the treatment of idiopathic pulmonary fibrosis), and TAZTDG (3-deoxy-3-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside) with human galectins-1, -3 and -7 as assessed by X-ray crystallography, isothermal titration calorimetry and NMR spectroscopy. Five binding subsites (A-E) make up the carbohydrate-recognition domains of these galectins. We identified novel interactions between an arginine within subsite E of the galectins and an arene group in the ligands. In addition to the interactions contributed by the galactosyl sugar residues bound at subsites C and D, the fluorophenyl group of TAZTDG preferentially bound to subsite B in galectin-3, whereas the same group favored binding at subsite E in galectins-1 and -7. The characterised dual binding modes demonstrate how binding potency, reported as decreased Kd values of the TDG inhibitors from μM to nM, is improved and also offer insights to development of selective inhibitors for individual galectins.
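To put the reported μM-to-nM Kd improvement in thermodynamic terms, here is a short worked example converting Kd to binding free energy via ΔG = RT ln Kd; the example Kd values are generic order-of-magnitude stand-ins, not the paper’s measured constants.

```python
# Hedged worked example: binding free energy from Kd (dG = R * T * ln Kd).
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # temperature, K

def dG_kJ_per_mol(kd_molar):
    return R * T * math.log(kd_molar) / 1000.0

for label, kd in [("weaker TDG-class ligand (~10 uM)", 10e-6),
                  ("optimized inhibitor (~10 nM)", 10e-9)]:
    print(f"{label}: dG = {dG_kJ_per_mol(kd):.1f} kJ/mol")

# A 1000-fold drop in Kd deepens binding by R*T*ln(1000), about 17 kJ/mol at 298 K.
```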

Figures 1–3: see the original open-access article.
 

 

Read Full Post »

Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriage and in vitro fertilization failure. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates within individuals of the same age can vary dramatically.

Researchers described a technique combining genomic sequencing with machine-learning methods to predict the possibility that a woman will undergo a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists were able to examine genetic samples of patients using a technique called “whole exome sequencing,” which allowed researchers to home in on the protein-coding sections of the vast human genome. Then they created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.

As a result, the scientists were able to create a specific risk score based on a woman’s genome. The scientists also identified three genes—MCM5, FGGY and DDX60L—that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. Thus, the report demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
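A minimal sketch of the general strategy (a classifier over exome-derived features that emits a per-patient risk score) is shown below with scikit-learn; the features, labels, and model choice are synthetic placeholders, not the authors’ actual pipeline.

```python
# Hedged sketch: per-patient aneuploidy risk score from variant-level features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: counts of damaging variants in candidate genes
# (e.g., MCM5, FGGY, DDX60L) derived from whole exome sequencing.
X = rng.integers(0, 3, size=(200, 3))
y = (X.sum(axis=1) + rng.normal(0, 1, 200) > 3).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]  # per-patient risk score in [0, 1]
print(risk_scores[:5])
```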

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/

Read Full Post »

The Human Genome Gets Fully Sequenced: A Simplistic Take on Century Long Effort

 

Curator: Stephen J. Williams, PhD

Article ID #295: The Human Genome Gets Fully Sequenced: A Simplistic Take on Century Long Effort. Published on 6/14/2022

WordCloud Image Produced by Adam Tubman

Ever since the hard work by Rosalind Franklin to deduce the structure of DNA and the contemporaneous work by Francis Crick and James Watson, who modeled the basic building blocks of DNA, DNA has been considered the basic unit of heredity and life, with the “Central Dogma” (DNA to RNA to Protein) at its core.  These discoveries of the mid-twentieth century helped drive the transformational shift of biological experimentation, from protein isolation and characterization, to cloning protein-encoding genes, to characterizing how the genes are expressed temporally, spatially, and contextually.

Rosalind Franklin, whose crystallographic data led to the determination of the DNA structure.

Dr Francis Crick and James Watson in front of their model structure of DNA

Up to this point (1970s to mid-80s), it was felt that genetic information was rather static, and the goal was still to understand and characterize protein structure and function, while an understanding of the underlying genetic information was more important for efforts like linkage analysis of genetic defects and tools for the rapidly developing field of molecular biology.  But the development of the aforementioned molecular biology tools, including DNA cloning, sequencing, and synthesis, gave scientists the idea that a whole recording of the human genome might be possible and worth the effort.

How the Human Genome Project  Expanded our View of Genes Genetic Material and Biological Processes

 

 

From the Human Genome Project Information Archive

Source:  https://web.ornl.gov/sci/techresources/Human_Genome/project/hgp.shtml

History of the Human Genome Project

The Human Genome Project (HGP) refers to the international 13-year effort, formally begun in October 1990 and completed in 2003, to discover all the estimated 20,000-25,000 human genes and make them accessible for further biological study. Another project goal was to determine the complete sequence of the 3 billion DNA subunits (bases in the human genome). As part of the HGP, parallel studies were carried out on selected model organisms such as the bacterium E. coli and the mouse to help develop the technology and interpret human gene function. The DOE Human Genome Program and the NIH National Human Genome Research Institute (NHGRI) together sponsored the U.S. Human Genome Project.

 

Please see the following for goals, timelines, and funding for this project

 

History of the Project

It is interesting to note that multiple government legislation is credited for the funding of such a massive project including

Project Enabling Legislation

  • The Atomic Energy Act of 1946 (P.L. 79-585) provided the initial charter for a comprehensive program of research and development related to the utilization of fissionable and radioactive materials for medical, biological, and health purposes.
  • The Atomic Energy Act of 1954 (P.L. 83-706) further authorized the AEC “to conduct research on the biologic effects of ionizing radiation.”
  • The Energy Reorganization Act of 1974 (P.L. 93-438) provided that responsibilities of the Energy Research and Development Administration (ERDA) shall include “engaging in and supporting environmental, biomedical, physical, and safety research related to the development of energy resources and utilization technologies.”
  • The Federal Non-nuclear Energy Research and Development Act of 1974 (P.L. 93-577) authorized ERDA to conduct a comprehensive non-nuclear energy research, development, and demonstration program to include the environmental and social consequences of the various technologies.
  • The DOE Organization Act of 1977 (P.L. 95-91) mandated the Department “to assure incorporation of national environmental protection goals in the formulation and implementation of energy programs; and to advance the goal of restoring, protecting, and enhancing environmental quality, and assuring public health and safety,” and to conduct “a comprehensive program of research and development on the environmental effects of energy technology and program.”

It should also be emphasized that the project was funded not JUST through the NIH but also through the Department of Energy.

Project Sponsors

For a great read on Dr. Craig Venter, with interviews with the scientist, see Dr. Larry Bernstein's excellent post The Human Genome Project.

 

By 2003 we had gained much information about the structure of DNA, genes, exons, and introns, and had gained more insight into the diversity of genetic material, the underlying protein-coding genes, and many of the gene-expression regulatory elements.  However, there remained much uninvestigated material dispersed between genes, the so-called "junk DNA", and up to 2003 not much was known about its function.  In addition there were two other problems:

  • The reference DNA used was actually from one person (Craig Venter, who was a lead initiator of the project)
  • Multiple gaps in the DNA sequence existed, and needed to be filled in

It is important to note that a tremendous diversity of proteins has been revealed by both transcriptomic and proteomic studies.  Although only about 20,000 to 25,000 coding genes exist, the human proteome contains about 600,000 proteoforms (due to alternative splicing, posttranslational modifications, etc.).

This expansion of proteoforms, via alternative splicing into isoforms and gene duplication into paralogs, has been shown to have major effects on, for example, cellular signaling pathways (1).

However, it has just recently been reported that the FULL human genome has been sequenced, complete and verified.  This was the focus of a recent issue of the journal Science.

Source: https://www.science.org/doi/10.1126/science.abj6987

Abstract

Since its initial release in 2000, the human reference genome has covered only the euchromatic fraction of the genome, leaving important heterochromatic regions unfinished. Addressing the remaining 8% of the genome, the Telomere-to-Telomere (T2T) Consortium presents a complete 3.055 billion–base pair sequence of a human genome, T2T-CHM13, that includes gapless assemblies for all chromosomes except Y, corrects errors in the prior references, and introduces nearly 200 million base pairs of sequence containing 1956 gene predictions, 99 of which are predicted to be protein coding. The completed regions include all centromeric satellite arrays, recent segmental duplications, and the short arms of all five acrocentric chromosomes, unlocking these complex regions of the genome to variational and functional studies.

 

The current human reference genome was released by the Genome Reference Consortium (GRC) in 2013 and most recently patched in 2019 (GRCh38.p13) (1). This reference traces its origin to the publicly funded Human Genome Project (2) and has been continually improved over the past two decades. Unlike the competing Celera effort (3) and most modern sequencing projects based on “shotgun” sequence assembly (4), the GRC assembly was constructed from sequenced bacterial artificial chromosomes (BACs) that were ordered and oriented along the human genome by means of radiation hybrid, genetic linkage, and fingerprint maps. However, limitations of BAC cloning led to an underrepresentation of repetitive sequences, and the opportunistic assembly of BACs derived from multiple individuals resulted in a mosaic of haplotypes. As a result, several GRC assembly gaps are unsolvable because of incompatible structural polymorphisms on their flanks, and many other repetitive and polymorphic regions were left unfinished or incorrectly assembled (5).

 

Fig. 1. Summary of the complete T2T-CHM13 human genome assembly.
(A) Ideogram of T2T-CHM13v1.1 assembly features. For each chromosome (chr), the following information is provided from bottom to top: gaps and issues in GRCh38 fixed by CHM13 overlaid with the density of genes exclusive to CHM13 in red; segmental duplications (SDs) (42) and centromeric satellites (CenSat) (30); and CHM13 ancestry predictions (EUR, European; SAS, South Asian; EAS, East Asian; AMR, ad-mixed American). Bottom scale is measured in Mbp. (B and C) Additional (nonsyntenic) bases in the CHM13 assembly relative to GRCh38 per chromosome, with the acrocentrics highlighted in black (B) and by sequence type (C). (Note that the CenSat and SD annotations overlap.) RepMask, RepeatMasker. (D) Total nongap bases in UCSC reference genome releases dating back to September 2000 (hg4) and ending with T2T-CHM13 in 2021. Mt/Y/Ns, mitochondria, chrY, and gaps.

Note in Figure 1D the exponential growth in genetic information.

Also very important is the ability to determine all the paralogs, isoforms, areas of potential epigenetic regulation, gene duplications, and transposable elements that exist within the human genome.

Analyses and resources

A number of companion studies were carried out to characterize the complete sequence of a human genome, including comprehensive analyses of centromeric satellites (30), segmental duplications (42), transcriptional (49) and epigenetic profiles (29), mobile elements (49), and variant calls (25). Up to 99% of the complete CHM13 genome can be confidently mapped with long-read sequencing, opening these regions of the genome to functional and variational analysis (23) (fig. S38 and table S14). We have produced a rich collection of annotations and omics datasets for CHM13—including RNA sequencing (RNA-seq) (30), Iso-seq (21), precision run-on sequencing (PRO-seq) (49), cleavage under targets and release using nuclease (CUT&RUN) (30), and ONT methylation (29) experiments—and have made these datasets available via a centralized University of California, Santa Cruz (UCSC), Assembly Hub genome browser (54).

 

To highlight the utility of these genetic and epigenetic resources mapped to a complete human genome, we provide the example of a segmentally duplicated region of the chromosome 4q subtelomere that is associated with facioscapulohumeral muscular dystrophy (FSHD) (55). This region includes FSHD region gene 1 (FRG1), FSHD region gene 2 (FRG2), and an intervening D4Z4 macrosatellite repeat containing the double homeobox 4 (DUX4) gene that has been implicated in the etiology of FSHD (56). Numerous duplications of this region throughout the genome have complicated past genetic analyses of FSHD.

The T2T-CHM13 assembly reveals 23 paralogs of FRG1 spread across all acrocentric chromosomes as well as chromosomes 9 and 20 (Fig. 5A). This gene appears to have undergone recent amplification in the great apes (57), and approximate locations of FRG1 paralogs were previously identified by FISH (58). However, only nine FRG1 paralogs are found in GRCh38, hampering sequence-based analysis.

Future of the human reference genome

The T2T-CHM13 assembly adds five full chromosome arms and more additional sequence than any genome reference release in the past 20 years (Fig. 1D). This 8% of the genome has not been overlooked because of a lack of importance but rather because of technological limitations. High-accuracy long-read sequencing has finally removed this technological barrier, enabling comprehensive studies of genomic variation across the entire human genome, which we expect to drive future discovery in human genomic health and disease. Such studies will necessarily require a complete and accurate human reference genome.

CHM13 lacks a Y chromosome, and homozygous Y-bearing CHMs are nonviable, so a different sample type will be required to complete this last remaining chromosome. However, given its haploid nature, it should be possible to assemble the Y chromosome from a male sample using the same methods described here and supplement the T2T-CHM13 reference assembly with a Y chromosome as needed.

Extending beyond the human reference genome, large-scale resequencing projects have revealed genomic variation across human populations. Our reanalyses of the 1KGP (25) and SGDP (42) datasets have already shown the advantages of T2T-CHM13, even for short-read analyses. However, these studies give only a glimpse of the extensive structural variation that lies within the most repetitive regions of the genome assembled here. Long-read resequencing studies are now needed to comprehensively survey polymorphic variation and reveal any phenotypic associations within these regions.

Although CHM13 represents a complete human haplotype, it does not capture the full diversity of human genetic variation. To address this bias, the Human Pangenome Reference Consortium (59) has joined with the T2T Consortium to build a collection of high-quality reference haplotypes from a diverse set of samples. Ideally, all genomes could be assembled at the quality achieved here, but automated T2T assembly of diploid genomes presents a difficult challenge that will require continued development. Until this goal is realized, and any human genome can be completely sequenced without error, the T2T-CHM13 assembly represents a more complete, representative, and accurate reference than GRCh38.

 

This paper was the focus of a Time article and the basis for naming the lead authors to Time's 100 most influential people of the year.

From TIME

The Human Genome Is Finally Fully Sequenced

Source: https://time.com/6163452/human-genome-fully-sequenced/

 

The first human genome was mapped in 2001 as part of the Human Genome Project, but researchers knew it was neither complete nor completely accurate. Now, scientists have produced the most completely sequenced human genome to date, filling in gaps and correcting mistakes in the previous version.

The sequence is the most complete reference genome for any mammal so far. The findings from six new papers describing the genome, which were published in Science, should lead to a deeper understanding of human evolution and potentially reveal new targets for addressing a host of diseases.

A more precise human genome

“The Human Genome Project relied on DNA obtained through blood draws; that was the technology at the time,” says Adam Phillippy, head of genome informatics at the National Institutes of Health’s National Human Genome Research Institute (NHGRI) and senior author of one of the new papers. “The techniques at the time introduced errors and gaps that have persisted all of these years. It’s nice now to fill in those gaps and correct those mistakes.”

“We always knew there were parts missing, but I don’t think any of us appreciated how extensive they were, or how interesting,” says Michael Schatz, professor of computer science and biology at Johns Hopkins University and another senior author of the same paper.

The work is the result of the Telomere to Telomere consortium, which is supported by NHGRI and involves genetic and computational biology experts from dozens of institutes around the world. The group focused on filling in the 8% of the human genome that remained a genetic black hole from the first draft sequence. Since then, geneticists have been trying to add those missing portions bit by bit. The latest group of studies identifies about an entire chromosome’s worth of new sequences, representing 200 million more base pairs (the letters making up the genome) and 1,956 new genes.

 

NOTE: In 2001 many scientists postulated there were as many as 100,000 coding human genes; however, we now understand there are about 20,000 to 25,000 human coding genes.  This does not, however, take into account the considerable diversity generated by alternative splicing, gene duplications, SNPs, and chromosomal rearrangements.

Scientists were also able to sequence the long stretches of DNA that contained repeated sequences, which genetic experts originally thought were similar to copying errors and dismissed as so-called “junk DNA”. These repeated sequences, however, may play roles in certain human diseases. “Just because a sequence is repetitive doesn’t mean it’s junk,” says Eichler. He points out that critical genes are embedded in these repeated regions—genes that contribute to machinery that creates proteins, genes that dictate how cells divide and split their DNA evenly into their two daughter cells, and human-specific genes that might distinguish the human species from our closest evolutionary relatives, the primates. In one of the papers, for example, researchers found that primates have different numbers of copies of these repeated regions than humans, and that they appear in different parts of the genome.

“These are some of the most important functions that are essential to live, and for making us human,” says Eichler. “Clearly, if you get rid of these genes, you don’t live. That’s not junk to me.”

Deciphering what these repeated sections mean, if anything, and how the sequences of previously unsequenced regions like the centromeres will translate to new therapies or better understanding of human disease, is just starting, says Deanna Church, a vice president at Inscripta, a genome engineering company who wrote a commentary accompanying the scientific articles. Having the full sequence of a human genome is different from decoding it; she notes that currently, of people with suspected genetic disorders whose genomes are sequenced, about half can be traced to specific changes in their DNA. That means much of what the human genome does still remains a mystery.

The investigators in the Telomere to Telomere Consortium were named to the Time 100 People of the Year.

Michael Schatz, Karen Miga, Evan Eichler, and Adam Phillippy

Illustration by Brian Lutz for Time (Source Photos: Will Kirk—Johns Hopkins University; Nick Gonzales—UC Santa Cruz; Patrick Kehoe; National Human Genome Research Institute)

BY JENNIFER DOUDNA

MAY 23, 2022 6:08 AM EDT

Ever since the draft of the human genome became available in 2001, there has been a nagging question about the genome’s “dark matter”—the parts of the map that were missed the first time through, and what they contained. Now, thanks to Adam Phillippy, Karen Miga, Evan Eichler, Michael Schatz, and the entire Telomere-to-Telomere Consortium (T2T) of scientists that they led, we can see the full map of the human genomic landscape—and there’s much to explore.

In the scientific community, there wasn’t a consensus that mapping these missing parts was necessary. Some in the field felt there was already plenty to do using the data in hand. In addition, overcoming the technical challenges to getting the missing information wasn’t possible until recently. But the more we learn about the genome, the more we understand that every piece of the puzzle is meaningful.

I admire the T2T group's willingness to grapple with the technical demands of this project and their persistence in expanding the genome map into uncharted territory. The complete human genome sequence is an invaluable resource that may provide new insights into the origin of diseases and how we can treat them. It also offers the most complete look yet at the genetic script underlying the very nature of who we are as human beings.

Doudna is a biochemist and winner of the 2020 Nobel Prize in Chemistry

Source: https://time.com/collection/100-most-influential-people-2022/6177818/evan-eichler-karen-miga-adam-phillippy-michael-schatz/

Other articles on the Human Genome Project and Junk DNA in this Open Access Scientific Journal include:

  • International Award for Human Genome Project
  • Cracking the Genome – Inside the Race to Unlock Human DNA – quotes in newspapers
  • The Human Genome Project
  • Junk DNA and Breast Cancer
  • A Perspective on Personalized Medicine

Additional References

 

  1. P. Scalia, A. Giordano, C. Martini, S. J. Williams, Isoform- and Paralog-Switching in IR-Signaling: When Diabetes Opens the Gates to Cancer. Biomolecules 10, (Nov 30, 2020).

 

 

Read Full Post »

Relevance of Twitter.com's forthcoming Payment System for Scientific Content Promotion and Monetization

Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN


Gian M. Volpicelli

SENIOR WRITER

Gian M. Volpicelli is a senior writer at WIRED, where he covers cryptocurrency, decentralization, politics, and technology regulation. He received a master’s degree in journalism from City University of London after studying politics and international relations in Rome. He lives in London.

SOURCE

https://www.wired.com/story/twitter-crypto-strategy/

BUSINESS

APR 5, 2022 7:00 AM

What Twitter Is Really Planning for Crypto

The duo behind Twitter Crypto say NFT profile pics and crypto tipping are just the beginning.

YOU MIGHT HAVE heard of crypto Twitter, the corner of the social network where accounts have Bored Apes as profile pictures, posts are rife with talk of tokens, blockchains, and buying the Bitcoin dip, and Elon Musk is venerated.

Then again, you might have heard of Twitter Crypto, the business unit devoted to developing the social network’s strategy for cryptocurrency, blockchains, and that grab-bag of decentralized technologies falling under the rubric of Web3. The team’s unveiling came in November 2021 via a tweet from the newly hired project lead, Tess Rinearson, a Berlin-based American computer scientist whose career includes stints at blockchain companies such as Tendermint and Interchain.

Rinearson joined Twitter at a crucial moment. Jack Dorsey, the vociferously pro-Bitcoin company CEO, would leave a few weeks later, to be replaced by CTO Parag Agrawal. Agrawal had played an instrumental role in Bluesky, a Twitter-backed project to create a protocol—possibly with blockchain components—to build decentralized social networks.

As crypto went mainstream globally and crypto Twitter burgeoned, the company tried to dominate the space. Under the stewardship of product manager Esther Crawford, in September 2021 Twitter introduced a “tipping” feature that helps creators on Twitter to receive Bitcoin contributions through Lightning—a network for fast Bitcoin payments. In January, Twitter allowed subscribers of Twitter’s premium service, Twitter Blue, to flaunt their NFTs as hexagonal profile pictures, through a partnership with NFT marketplace OpenSea.

Twitter Crypto is just getting started. While Rinearson works with people all across the company, her team is still under 10 people, although more hires are in the pipeline, judging from recent job postings. So it’s worth asking what is next. I caught up over a video call with Rinearson and Crawford to talk about where Twitter Crypto is headed. 

The conversation has been edited for clarity and brevity.

WIRED: Let’s start with the basics. Why does Twitter have a crypto unit?

Tess Rinearson: We really see crypto—and what we’re now calling Web3— as something that could be this incredibly powerful tool that would unlock a lot for our users. The whole crypto world is like an internet of money, an internet of value that our users can potentially tap into to create new ways of owning their content, monetizing their content, owning their own identity, and even relating to each other.

One of my goals is to build Twitter’s crypto unit in such a way that it caters to communities that go beyond just that core crypto community. I love the crypto Twitter space, obviously—I’m a very proud member of the crypto community. And at the same time, I recognize that people who are really deep in the crypto space may not relate to concepts, like for instance blockchain’s immutability, in the same way that someone who’s less intensely involved might feel about those things.

So a lot of what we try to think about is, what can we learn from this group of people who are super engaged and really, really, creative? And then, how can we translate some of that stuff into a format or a mechanism or a product that’s a little bit more accessible to people who don’t have that background?

How are you learning from crypto Twitter? Do you just follow a lot of accounts, do you actually talk to them? How does that learning experience play out?

Esther Crawford: It’s a combination. We have an amazing research team that sets up panel interviews and surveys. But we’re also embedded in the community itself and follow a bunch of accounts, sit on Twitter spaces, go to conferences and events, engage with customers in that way. That’s the way the research piece of it works. But we also encounter it as end users: Twitter is the discovery platform today for all things crypto.

One of the things we do differently at Twitter is we build out in the open. And so this means having dialog with customers in real time—designers will take something that is very early-stage and post it as a tweet and then get real-time feedback. They’ll hop into spaces with product managers and engineering managers, talk about it live with real customers, and then incorporate that feedback into the designs and what ultimately we end up launching.

Rinearson: One of the things I wanted to make sure of before I came to Twitter was to know that we would be able to build features in the open and solicit feedback and show rough drafts. And so this is something I asked Parag Agrawal, who’s now the CEO, and was the person who hired me. Pretty early in the job interview process, I said this was going to be really important, and he said, “If you think it’s important to the success of this work, great, do it—thumbs up.” He also shares that openness.

As you said, Tess, you come from crypto. When you were out there, what did you think Twitter was getting right? What did you think Twitter was getting wrong?

Rinearson: I had been a Twitter power user for a really long time. The thing that I saw was a lot of aesthetic alignment between how Twitter exists in the world and the way that crypto exists in the world. Twitter has decentralized user experiences in its DNA. And, this is a bit cheesy, but people use Twitter sometimes in ways that they use a public blockchain, as a public database where everything’s time stamped and people can agree on what happened.

And for most people it’s open, it is there for public conversation. And then obviously it was also the place—a place—where the crypto community really found its footing. I think it’s been a place where an enormous amount of discovery happens, and education and learning for the whole community. I joined when there were some murmurings about Twitter starting to do crypto stuff, mostly stuff Esther had led actually, and I was excited to see where it was going. And then Twitter’s investment in Bluesky also gave me a lot of confidence.

Let’s talk about the two main things you have delivered so far: The crypto tipping feature and NFT pictures. Can you give me just a potted history of how each came about and why?

Crawford: Those are our first set of early explorations, and the reason why we started there was we really wanted to make sure that what we built benefited creators, their audiences, and then all the conversations that are happening on Twitter. For creators in particular, we know that they rely on platforms like Twitter to monetize and earn a living, and not all people are able to use traditional currencies. Not everybody has a traditional banking account setup.

And so we wanted to provide an opportunity for a borderless payment solution, and that’s why we decided to go ahead and use Bitcoin Lightning as our first big integration. One of the reasons we chose Bitcoin Lightning was also because of the low transaction fees. And we have Bitcoin and Ethereum addresses that you can also put in there [on your Twitter “tipping jar”]. We noticed that people were actually adding information about their crypto wallet addresses in their profiles. And so we wanted to make a more seamless experience, so that people could just tip through the platform, so that it felt native.

With NFT profile pictures, the way that came about was, again, looking at user behavior. People were adding NFTs that they owned as avatars, but you didn’t really know whether they owned those NFTs or not. So we decided to go ahead and build out that feature so that one could actually prove ownership.

That’s similar to how other things developed on Twitter, right? The hashtag, or even the retweet, were initially just things users invented—by adding the # sign, or by pasting other users’ tweets—and then Twitter made that a feature.

Crawford: Yeah, exactly. Many of the best ideas come from watching user behavior on the platform, and then we just productize that.

Rinearson: Sometimes I’ve heard people call that the “help wanted signs,” and like, keeping an eye out for the “help wanted signs” across the platform. The NFT profile picture was a clear example of that.

How do all these things—these two things and possibly other crypto features coming further down the line—really help Twitter’s bottom line?

Crawford: With creator monetization our goal was to help creators get paid, not Twitter. But Twitter takes a really small cut of earnings. For more successful creators, we take a larger percentage. The way we think about this is, it is part of our revenue diversification.

Twitter today is a wholly ad-based business. In the future we imagine Twitter making money from a variety of different product areas. So Twitter Blue is one of those products—you can pay $2.99 a month and you get additional features, such as the NFT profile pictures. We really think that revenue diversification sits across a variety of areas, and creator monetization is one really small component of that.

As you said, these are just early experiments. Where is Twitter Crypto going next? What’s your vision for crypto technology’s role within Twitter?

Rinearson: The real trick here is to find the right parts of Twitter to decentralize, and to not try to decentralize everything at once—or, you know, make every user suddenly responsible for taking care of some private keys or something like that.

We have to find the right ways to open up some access to a decentralized economic layer, or give people ways that they can take their identity with them, without relying on a single centralized service.

We’re really early in these explorations, and even looking at things like Bitcoin tipping or the NFT profile pictures—we view those features as experiments themselves in a lot of ways and learning experiences. We’re learning things about how our users relate to these concepts, what they understand about them, what they find confusing, and what’s most useful to them. We really want to try to use this technology to bring utility to people and you know, not just like, sprinkle a little blockchain on it for the sake of it. So creator monetization is an area that I’m really excited about because I think there’s a really clear path forward. But again, we’re looking beyond that: We’re also looking at using crypto technology in fields like [digital] identity and [digital] ownership space and also figuring out how we can better serve crypto communities on the platform.

Are you going to put Twitter verified users’ blue ticks on a blockchain, then?

[Laughter]

No?

[More laughter]

OK, moving on. How does the kind of work you do dovetail with Bluesky’s plan to create a protocol for a decentralized social media platform? Is there any synergy there?

Rinearson: I have known Jay [Graber], the Bluesky lead, for a long time, and she and I are in pretty close contact. We check in with each other regularly and talk a lot about problems we might have in common that we’ll both need to solve. There’s an overlap looking at things in the identity area, but at the end of the day, it’s a separate project. She’s pretty focused on hiring her team, and they’re very focused on building a prototype of a protocol. That is different from what Esther and I are thinking about, which is like: There are all these blockchain protocols that exist, and we need to figure out how to make them useful and accessible for real people.

And when I say “real people,” I mean that in a sort of tongue-in-cheek contrast to hardcore crypto nerds like me. Jay is thinking much more about building for people who are creating decentralized networks. That is a very different focus area. Beyond that, I would just say it’s too early to say what Bluesky will mean for Twitter as a product. We are in touch, we have aligned values. But at the end of the day—separate teams.

Why is a centralized Silicon Valley company like Twitter the right place to start to bring more decentralization to internet users? Don’t we just have to start from scratch and build a new platform that is already decentralized?

Rinearson: I started in crypto in 2015, and I have a very vivid memory from those years of watching some of my coworkers—crypto engineers—trying to figure out how to secure some of their Bitcoin like before one of the Bitcoin forks [in which the Bitcoin blockchain split, creating new currencies], and they were panicking and freaking out. I thought there was no way that a normal person would be able to handle this in a way that would be safe. And so I was a little bit disillusioned with crypto, especially from a consumer perspective.

And then last year, I started seeing more interest from people whom I’ve known for a long time and weren’t crypto people. They were just starting to perk their heads up and take notice and start creating NFTs or start talking about DAOs. And I thought that that was interesting, that we were coming around a corner, and it might be time to start thinking about what this could mean for people beyond that hardcore crypto group.

And that was when Twitter reached out. You know, I don’t think that just any centralized platform would be able to bring crypto to the masses, so to speak. But I think Twitter has the right stuff. I think you have to meet people where they are with new technologies: find ways to onboard them and bring them along and show them what this might mean for them, and make things accessible. And it’s really, really hard to do that with just a protocol. You need to have some kind of community, you need to have some kind of user base, you need to have some kind of platform. And Twitter’s just right there.

I don’t think I would say that a centralized platform is definitely the way to “bring crypto to the masses.” I do think that Twitter is the way to do it.

But why do the masses need crypto right now?

Rinearson: I don’t know that anyone needs crypto, and our goal is not to get everyone into crypto. Let’s be clear about that. But I do think that crypto is a potentially very powerful tool for people. And so I think what we are trying to do is show people how powerful it is and unlock those possibilities. It’s also possible that we create some products and features, where people actually don’t even really know what’s happening under the hood.

Like maybe we’re using crypto as a payment rail or again as an identity layer—users don’t necessarily need to know all of those implementation details. And that’s actually something we come back to a lot: What level of abstraction are we talking about with users? What story are we telling them about what’s happening under the hood? But yeah, I would just like to reiterate that the goal is not to just shovel everyone into crypto. We want to provide value for people.

Do you think there is a case for Twitter to launch its own cryptocurrency— a Twittercoin?

Rinearson: I think there’s a case for a lot of things—honestly, there’s a case for a lot of things. We’re trying to think really, really broadly about it.

Crawford: We’re actively exploring a lot of things. It’s not something we would be making an announcement about.

Rinearson: I think it is really important to stress that when you say “Twittercoin” you probably have a slightly different idea of what it is than we do. And are we exploring those ideas? Yes, we want to think about all of them. Do we have road maps for them? No. But are we trying to think about things really creatively and be really, really open-minded? Yes. We have this new economic technology that we think could unlock a lot of things for people. And we want to go down a bunch of rabbit holes and see what we come up with.


Read Full Post »

Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 free Harvard courses: learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language, ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.
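
To give a flavor of what such an introductory course covers, here is a minimal R sketch of ours (not taken from the course) illustrating variables, data types, vector arithmetic, indexing, and a first dplyr verb; the gene names and lengths are placeholders chosen purely for illustration:

# Variables and basic data types
n_genes <- 20000                    # numeric
project <- "HGP"                    # character
complete <- TRUE                    # logical

# Vector arithmetic is element-wise
lengths_kb <- c(12.5, 3.2, 48.0, 7.7)
lengths_bp <- lengths_kb * 1000

# Indexing: by position and by logical condition
lengths_bp[1]                       # first element
lengths_bp[lengths_bp > 10000]      # elements longer than 10 kb

# A first taste of data wrangling with dplyr
library(dplyr)
df <- data.frame(gene = c("FRG1", "DUX4", "FRG2", "TP53"),
                 length_bp = lengths_bp)
df %>% filter(length_bp > 10000) %>% arrange(desc(length_bp))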

Step 2: Data Visualization

Visualization is one of the most powerful techniques with which you can translate your findings in data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.
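
As a small illustration of the grammar-of-graphics approach the course teaches, here is a minimal ggplot2 sketch of ours using the mpg dataset that ships with the library (the particular plot is our choice, not course material):

library(ggplot2)

# Scatter plot: engine displacement vs. highway mileage, colored by class
ggplot(mpg, aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  labs(title = "Fuel efficiency vs. engine size",
       x = "Engine displacement (litres)",
       y = "Highway miles per gallon")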

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
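
For instance, the Central Limit Theorem can be demonstrated in a few lines of base R; this Monte Carlo sketch (our own illustration, not course code) shows that means of repeated samples from a skewed exponential distribution are approximately normal:

set.seed(42)

# 10,000 sample means, each computed from 30 exponential draws
sample_means <- replicate(10000, mean(rexp(30, rate = 1)))

mean(sample_means)   # close to the expected value of 1
sd(sample_means)     # close to the standard error, 1/sqrt(30), about 0.18
hist(sample_means, breaks = 50, main = "Central Limit Theorem in action")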

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modelling.  This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
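
As a sketch of what such inference looks like in practice (the poll numbers below are invented for illustration), estimating a population proportion and its 95% margin of error takes only a few lines of R:

# Suppose 540 of 1,000 survey respondents favor a proposal
x <- 540; n <- 1000
p_hat <- x / n

# Standard error and 95% margin of error for a proportion
se  <- sqrt(p_hat * (1 - p_hat) / n)
moe <- qnorm(0.975) * se

c(estimate = p_hat, lower = p_hat - moe, upper = p_hat + moe)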

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, you will be taught to use Unix/Linux for file management, Git and GitHub for version control, and how to create reports in R.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
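
As a flavor of these wrangling tasks, here is a short tidyverse-style sketch of ours (the dataset is a made-up toy example) that parses dates and strips units from string data:

library(dplyr)
library(stringr)
library(lubridate)

# A messy toy dataset of the kind you might scrape from the web
raw <- data.frame(
  sample    = c("patient_01", "patient_02", "patient_03"),
  collected = c("2022-01-15", "2022-02-03", "2022-02-28"),
  reading   = c("12.3 mg/dL", "9.8 mg/dL", "15.1 mg/dL")
)

clean <- raw %>%
  mutate(collected = ymd(collected),                            # parse dates
         reading   = as.numeric(str_remove(reading, " mg/dL"))) # strip units

clean %>% filter(month(collected) == 2)   # keep February collections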

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust for the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
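
The confounding idea is easy to see in a simulation; in this base R sketch of ours, a variable z drives both x and y, so x looks predictive of y until z is added to the model:

set.seed(1)

# z is a confounder: it influences both x and y
z <- rnorm(200)
x <- 0.8 * z + rnorm(200)
y <- 0.5 * z + rnorm(200)   # y depends on z, not on x

coef(lm(y ~ x))       # naive fit: x appears associated with y
coef(lm(y ~ x + z))   # adjusting for z shrinks x's coefficient toward zero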

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
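
As a minimal sketch of the supervised learning workflow such a course formalizes (our own example, using the built-in iris data and the class package, not course code), a train/test split gives an honest estimate of generalization:

set.seed(7)

# Hold out 30% of the rows for testing
test_idx <- sample(nrow(iris), size = 0.3 * nrow(iris))
train <- iris[-test_idx, ]
test  <- iris[test_idx, ]

# k-nearest-neighbors classifier
library(class)
pred <- knn(train[, 1:4], test[, 1:4], cl = train$Species, k = 5)

mean(pred == test$Species)   # out-of-sample accuracy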

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn’t equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even elite universities had started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT OpenCourseWare offers a beginner-friendly Python program called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT Open Courseware offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will learn the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes theorem, covariance, central limit theorem, resampling, and linear regression.

This course will also walk you through statistical analysis using the R programming language, which is useful as it adds on to your tool stack as a data scientist.
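
To illustrate one of the listed concepts, here is a one-screen Bayes' theorem calculation in R (our own example, with invented numbers) for the probability of disease given a positive diagnostic test:

prevalence  <- 0.01   # P(disease)
sensitivity <- 0.95   # P(positive | disease)
specificity <- 0.90   # P(negative | no disease)

# Total probability of a positive test
p_positive <- sensitivity * prevalence +
              (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(disease | positive)
sensitivity * prevalence / p_positive   # about 0.088

Even with a fairly accurate test, most positives are false when prevalence is low, which is exactly the kind of counterintuitive result these courses train you to reason about.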

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a different perspective of how predictive models function, and the working behind the underlying algorithm.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.

All of the above courses are offered by MIT Open Courseware, and are paired with lecture notes, problem sets, exam questions, and solutions.
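
To see why linear algebra matters for data science, consider that ordinary least squares regression is just a matrix equation; this R sketch of ours solves it directly and checks the answer against R's built-in routine:

set.seed(3)

# Design matrix with an intercept column, and a simulated response
X <- cbind(1, rnorm(100))
y <- 2 + 3 * X[, 2] + rnorm(100)

# OLS "by hand": solve (X'X) beta = X'y
beta <- solve(t(X) %*% X, t(X) %*% y)
beta                  # close to the true coefficients (2, 3)

coef(lm(y ~ X[, 2]))  # the same fit via lm()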

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is on supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from just working with structured datasets, you will also learn to process image and sequential data.
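
The overfitting concept mentioned above can be demonstrated in a few lines; this R sketch of ours (the course itself works in Python) fits a modest and a very flexible polynomial and compares their errors on held-out data:

set.seed(11)
df <- data.frame(x = runif(60, 0, 3))
df$y <- sin(df$x) + rnorm(60, sd = 0.3)
train <- df[1:40, ]; test <- df[41:60, ]

rmse <- function(fit, newdata) {
  sqrt(mean((newdata$y - predict(fit, newdata))^2))
}

fit_lo <- lm(y ~ poly(x, 3),  data = train)
fit_hi <- lm(y ~ poly(x, 15), data = train)

rmse(fit_lo, train); rmse(fit_hi, train)  # degree 15 wins on training data
rmse(fit_lo, test);  rmse(fit_hi, test)   # but typically loses on test data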

MIT’s machine learning program cites three pre-requisites — Python, linear algebra, and calculus, which is why it is advisable to take the courses above before starting this one.

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through the subject from scratch. However, unlike many MOOCs out there, the pace builds up pretty quickly and the courses cover material in considerable depth.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html

Read Full Post »

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I put up two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was about the importance of scientific curation of findings within the realm of social media and Web 2.0; a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. Through this new medium, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system lay above a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive volume of research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way into this new media landscape was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad sides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

 

Even PeerJ, in their column celebrating a year's worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang forth in full force BEFORE anyone thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Address the High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com's Distributed Ledger series about the different blockchains and cryptocurrencies involved.

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin in 2022 and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities, from NFTs (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that, and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94, after Deso was listed at Coinbase Pro on Monday, before it fell to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application that was created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy saying that DeSo now supports more than 200 social-media applications including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”

 

But before I get to the “selling monkeys to morons” quote, I want to talk about

THE GOOD, THE BAD, AND THE UGLY

THE GOOD

My foray into Science 2.0, and my pondering of what the movement to a Science 3.0 might look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who has created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal club type format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, and he published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009.  As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm,” is on the Cornell preprint server (arXiv) at https://arxiv.org/abs/1301.2522.

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format where data, papers, and ideas could be easily shared among networks of scientists.

As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals including Science, Nature, and Scientific American endorsed the term and suggested that scientists move their discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that the number of scientists subscribed to many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that:
  1. Science 2.0 networking is good because it multiplies the efforts of many scientists, including experts, and adds to a scientific discourse unavailable in the 1.0 format
  2. online data sharing is good because it assists in the process of discovery (as is evident with preprint servers, bio-curated databases, and GitHub projects)
  3. open-access publishing is beneficial because it offers free access to professional articles, and open access may be the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access” format which is not truly open access)
  4. sharing of unfinished works, critiques, and opinions is good because it creates visibility for scientists, who can receive credit for their expert commentary

There are a few concerns about Science 3.0 that Dr. Teif articulates:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be an imperative in the dissemination of well-done, properly controlled scientific discovery. Just as Science 2.0 relies on an army of scientific volunteers, the peer-review process involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and that data are presented fairly and properly. It has been very evident, in this time of pandemic and with the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical. Many of the papers on such preprint servers were later either retracted or failed a stringent peer-review process.

Most journals of the 1.0 format do not generally reward their peer reviewers beyond the credit that researchers list on their curricula vitae. Some journals, like the MDPI journal family, do issue peer-reviewer credits which can be used to defray the high publication costs of open access (one area that many scientists lament about the open-access movement, where the burden of publication cost lies on the individual researcher).

Another issue highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.

B.  The Open Access Publishing Model leads to biases and inequalities in idea selection

The open-access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements.” However, NOTHING could be further from the truth. In advertising, the publishers claim, the companies, not the consumer, pay for the ads. In scientific open-access publishing, by contrast, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now placed on the individual researcher. Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers. And although many universities try to reimburse these publication fees, the expense still falls on the consumer and the individual researcher, limiting the cost savings to either.

This sets up a situation in which young researchers, who in general are not well funded, struggle with publication costs, creating a biased and inequitable system which rewards the well-funded older researchers and bigger academic labs.

C.  Post-publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used. In fact, the authors show that there is just one comment per 100 views of a journal article on these systems. In the traditional journals, editors are the referees of comments and have the ability to censor comments or discourse. The article laments that commenting on journal articles should be as easy as commenting on other social sites, yet scientists are still not offering their comments or opinions.

In a personal experience, a well-written commentary goes through editors, who usually reject a comment as if they were rejecting an original research article. Thus many scientists, I believe, after fashioning a well-researched and referenced reply, never see it given the light of day if it is not in the editor’s interest.

Therefore anonymity is greatly needed, and its absence may be the hindrance that keeps scientific discourse so limited on these types of Science 2.0 platforms. Platforms that have had success in this arena include anonymous platforms like Wikipedia and certain closed LinkedIn professional groups, while more open platforms like Google Knowledge have been a failure.

A great example on this platform was a very spirited conversation on LinkedIn on genomics, tumor heterogeneity, and personalized medicine, which we curated from the LinkedIn discussion (unfortunately LinkedIn has since closed many groups), seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

In this discussion, it was surprising that, over a single weekend, so many scientists from all over the world contributed to a great exchange on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized. However, researchers then do not get any credit for their opinions or commentaries.

A major problem is how to take these intangible contributions and turn them into tangible assets which would both promote the discourse and reward those who take their time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!
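One way to reconcile anonymity with credit, at least conceptually, is a cryptographic commitment: a scientist publishes a comment under a hash commitment to their identity, and can later reveal the underlying identity to claim credit for it. The minimal Python sketch below is purely illustrative; the function names, the identity format, and the hash-based design are our own assumptions, not the API of any existing platform or blockchain.

```python
import hashlib
import secrets

def make_commitment(identity: str) -> tuple[str, str]:
    """Create a pseudonymous commitment to an identity.

    Returns (commitment, nonce): the commitment is published alongside
    the anonymous comment; the nonce stays private with the commenter.
    """
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{identity}:{nonce}".encode()).hexdigest()
    return digest, nonce

def verify_claim(identity: str, nonce: str, commitment: str) -> bool:
    """Later, the commenter reveals (identity, nonce) to prove authorship."""
    digest = hashlib.sha256(f"{identity}:{nonce}".encode()).hexdigest()
    return digest == commitment

# Comment anonymously now, claim credit later (the identity is hypothetical):
commitment, nonce = make_commitment("orcid:0000-0002-XXXX")
assert verify_claim("orcid:0000-0002-XXXX", nonce, commitment)
```

A decentralized ledger or NFT could, in principle, record such commitments with timestamps, so that the eventual claim of credit is itself verifiable; that is exactly the kind of intangible-to-tangible conversion discussed above.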

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

 

UPDATED 5/09/2022

Below is an online Twitter Space discussion we had with some young scientists who are just starting out; they gave their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like in light of this new metaverse. However, we have to define each of these terms in light of science, and not treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair number of impressions (60) as well as interactions (50).

For the recording, on Twitter as well as in an audio format, please see below:

Set a reminder for my upcoming Space! https://t.co/7mOpScZfGN @Pharma_BI @PSMTempleU #science3_0 @science2_0
— Stephen J Williams (@StephenJWillia2), April 28, 2022: https://twitter.com/StephenJWillia2/status/1519776668176502792

 

 

To introduce this discussion, here first is some start-off material which will frame this discourse.

The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move from Science 2.0, where the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format will look like. What will it involve, and what paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on a massive amount of experimental and review literature. However, this literature was in analog format, and we have since moved to a more accessible, digital, open-access format for publications as well as raw data. Science 1.0 had an organizing structure, like the Dewey decimal system and indexing, while 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 this relied on an army, mostly of volunteers, who did not have much in the way of incentivization to co-curate and organize the findings and the massive literature.
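For contrast, the organizing structure emerging with Web 3.0 is often content addressing: a document is indexed by a hash of its own bytes rather than by a library catalog entry or a platform URL. The toy Python sketch below illustrates only this indexing idea, under our own simplifying assumptions; real decentralized systems such as IPFS add networking, replication, and richer identifier encodings.

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: documents are retrieved by the
    hash of their content, not by a location or a journal index."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # the content identifier
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        return self._blobs[cid]

store = ContentStore()
cid = store.put(b"Curated finding: ...")
assert store.get(cid) == b"Curated finding: ..."
# Any node holding the same bytes derives the same identifier, which is
# what lets a decentralized network deduplicate, verify, and cite content.
```

The design point is that identity follows content, so no central indexer is needed to agree on what is being cited.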

Each version of Science has its caveats: its benefits as well as its deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of its structure.

We had high hopes for Science 2.0, in particular for the smashing of data and knowledge silos. However, the digital age along with the 2.0 platforms seemed somehow to exacerbate the siloing. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0, or metaverse, so that similar issues do not get in the way: namely, we need to create an organizing structure (maybe as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!
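To make “organizing structure” and “incentivized co-curators” concrete, here is a minimal sketch of what one knowledgebase entry for a curated finding might look like. The schema, field names, and curator identifier are our illustrative assumptions, not an existing standard; the DOI shown points to the well-known Gerlinger et al. 2012 intratumor-heterogeneity study as an example of a primary source.

```python
from dataclasses import dataclass

@dataclass
class CuratedFinding:
    """One entry in a hypothetical curated-findings knowledgebase."""
    claim: str                       # the finding, stated as a single sentence
    source_doi: str                  # primary literature supporting the claim
    ontology_terms: list[str]        # e.g. MeSH or Gene Ontology identifiers
    curator: str                     # attribution hook for incentivizing co-curators
    confidence: str = "preliminary"  # curator's call: preliminary / replicated / contested
    commentary: str = ""             # the expert analysis attached to the entry

entry = CuratedFinding(
    claim="Intratumor heterogeneity limits single-biopsy genomic profiling.",
    source_doi="10.1056/NEJMoa1113205",            # Gerlinger et al., NEJM 2012
    ontology_terms=["MeSH:D009369", "GO:0008283"],
    curator="curator:example-id",                  # hypothetical identifier
    commentary="See the curated Oncology Pharma LinkedIn discussion above.",
)
print(entry.claim)
```

Structured entries like this are what make a curated literature machine-readable and searchable, and, through the curator field, attributable, which is the prerequisite for any incentive scheme, NFT-based or otherwise.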

Are these new technologies the cure, or are they just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations:

[Presentation slides appeared here in the original post.]

Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 
