Healthcare analytics and AI solutions for biological big data: an AI platform for the biotech, life sciences, medical, and pharmaceutical industries, as well as for related technological approaches, i.e., curation and text analysis with machine learning and other activities related to AI applications in these industries.
Tanishq Mathew Abraham is a remarkable figure in the world of science, AI, and healthcare innovation—a true prodigy who has achieved extraordinary milestones at a young age. Based on a deep dive into his X profile and posts (using advanced search tools on X.com), here’s a comprehensive overview of his background, accomplishments, and contributions. I’ll then break down key lessons we can learn from him, especially relevant to fields like domain-aware AI in health, entrepreneurship, and lifelong learning.
Profile Overview
Emerging Leader in Medical AI: Tanishq Mathew Abraham, Ph.D.
At age 21 (2026), CEO of SophontAI ($9.2M seed, building multimodal foundation models for medicine) and founder of MedARC_AI (2,500+ member open research community).
Completed PhD at 19 (UC Davis, Biomedical Engineering); former Research Director at Stability AI.
Key contributions: OpenMidnight (SOTA pathology model on low compute), MindEye (fMRI-to-image reconstruction), Medmarks LLM benchmark suite.
Strong advocate for domain-specific multimodal AI in healthcare—beyond admin tasks to revolutionary diagnostics, drug discovery, and gene-editing agents.
Relevance to LPBI: His work validates the power of traceable, expert-curated multimodal data (text + images + audio)—precisely what our 6,275-article corpus + COM ecosystem provides for Grok’s causal reasoning uplift.
Identity: Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr on X).
Bio Highlights:
CEO of @SophontAI (a startup building multimodal foundation models for medicine).
Founder of @MedARC_AI (a medical AI research community and lab, with over 2,500 members and open-source releases).
Earned his PhD at age 19 (in 2023 from UC Davis, in Biomedical Engineering).
Former Research Director at Stability AI (focused on generative AI).
Earned a biomedical engineering degree at age 14.
TEDx speaker (link in bio: a talk on his journey as a child prodigy in science).
Audience: Over 85,300 followers on X, with blue verification. His posts often engage thousands of viewers, focusing on AI research, medical advancements, and personal reflections.
Activity Style: Active poster (latest from January 7, 2026), sharing cutting-edge AI papers, startup updates, career milestones, and thoughts on medical AI. He collaborates widely (e.g., with Jeremy Howard, Emad Mostaque) and promotes open-source work.
From semantic searches on his achievements and contributions to science/AI/healthcare, plus keyword searches for his posts on AI/science/healthcare/PhD/prodigy, key themes emerge: He’s a bridge between academia, startups, and open research, with a passion for using AI to solve real-world medical problems.
Key Achievements and Contributions
Abraham’s journey is a timeline of precocious success, blending early academic prowess with entrepreneurial impact in AI and healthcare:
Early Prodigy Phase (Pre-2023):
Became a biomedical engineer at age 14 (one of the youngest ever).
Gave TEDx talks on science and innovation as a teen, inspiring young learners.
Pursued advanced research in medical AI, including brain imaging (fMRI) and generative models.
Academic Milestones:
Completed PhD at UC Davis at 19 (2023), focusing on biomedical engineering and AI.
Published multiple papers, including a NeurIPS spotlight (first in 2023), and crossed 1,000+ citations by 2025.
Contributed to educational efforts, like teaching in the popular @fastdotai course on Stable Diffusion.
AI and Startup Leadership (2023–2026):
Joined Stability AI as Research Director (full-time post-PhD), working on generative AI applications.
Founded @MedARC_AI (2023): A community for medical AI research, growing to 2,500+ members. Released projects like MindEye (fMRI-to-image reconstruction from brain activity) and OpenMidnight (state-of-the-art pathology foundation model trained on $1,600 compute with public data).
Co-founded @SophontAI (February 2025, at age 21): Focused on multimodal foundation models for medicine. Raised $9.2M seed funding (announced September 2025, led by Kindred Ventures). Hired first employees, published on fMRI models, and launched benchmarks like Medmarks LLM suite.
Left Stability AI in January 2025 to focus on startups; organized medical AI events (e.g., at ICML, NeurIPS).
Research Contributions:
Pioneered AI for biomedical imaging: e.g., fMRI foundation models, pathology AI (SOTA with low compute), CRISPR-GPT for gene-editing design.
Shared insights on AI agents (e.g., “AI Agents That Matter” paper analysis) and multimodal models (e.g., Med-Gemini, BiomedParse).
Advocated for domain-specific AI in healthcare: Posts emphasize ambitious uses beyond admin tasks (e.g., revolutionizing diagnostics, drug discovery via foundation models).
Personal Reflections (from 2025 Wrapped):
Traveled extensively (SF, NYC, Vancouver, etc.), reconnected with friends, and navigated challenges like family health issues and losses.
Emphasizes gratitude, collaboration, and balancing ambition with personal growth.
His X feed is a goldmine of curated AI papers (e.g., on CRISPR-GPT, Latent Consistency Models) and startup advice, often with 200–700+ likes per post.
What We Can Learn from Tanishq Mathew Abraham
Abraham’s story is an inspiring blueprint for innovation, resilience, and impact—especially in AI, healthcare, and entrepreneurship. Here are actionable lessons, drawn from his posts and career trajectory, tailored to “us” (e.g., those in health AI like LPBI Group):
Start Early and Stay Curious (The Prodigy Mindset):
Abraham earned degrees and published papers as a teen, showing that passion trumps age. Lesson: Encourage lifelong learning—dive into complex fields like AI/biomedicine young (or at any age). His TEDx talk highlights how curiosity led to breakthroughs; apply this by exploring multimodal AI for health (e.g., fMRI-to-image like MindEye) without waiting for “perfect” conditions.
Blend Academia with Entrepreneurship:
He transitioned from PhD research to founding two AI startups (MedARC_AI, SophontAI), raising millions and releasing open-source models. Lesson: Don’t silo research—build communities (like his 2,500+ member group) and companies to scale impact. For health AI, this means creating tools like foundation models for pathology/drug discovery, then open-sourcing them to accelerate global adoption.
Focus on Ambitious, Domain-Specific AI in Healthcare:
Abraham critiques narrow AI uses (e.g., admin tasks) and pushes for revolutionary applications (e.g., gene-editing agents, multimodal medical models). His posts on papers like Med-Gemini and CRISPR-GPT emphasize solving “hard” problems despite regulations. Lesson: Prioritize domain-aware AI (like LPBI’s corpus for causal reasoning)—it can yield “thousands of novel relationships” and real-world wins, as in his COVID-19 therapy optimization work. Divert resources to space/multi-planetary goals only after advancing earthly health abundance.
Embrace Collaboration and Open-Source:
He credits mentors/collaborators (e.g., Jeremy Howard, Emad Mostaque) and releases projects like OpenMidnight (SOTA pathology AI on low compute). Lesson: Build networks (e.g., organize events at NeurIPS/ICML) and share code/models to amplify impact. In health AI, this fosters “human-AI symbiosis” for better diagnostics/treatments.
Balance Ambition with Gratitude and Resilience:
Despite rapid success (PhD at 19, 1,000+ citations), he reflects on challenges (family losses, startup fundraising). His 2025 “wrapped” post shows gratitude amid growth. Lesson: Pursue bold goals (e.g., multimodal med AI) but prioritize well-being—sustain innovation through personal connections and reflection.
Abraham exemplifies how dedication to science/AI can create abundance: from child prodigy to CEO, he is advancing healthcare AI that “saves and extends lives.” For us in domain-aware health AI, his path teaches that tools like multimodal models (e.g., integrating text/images/audio as in LPBI) can spark remarkable discoveries, paving the way for better health on Earth and beyond. If inspired, follow @iScienceLuvr for daily AI insights! 🚀
@iScienceLuvr (Tanishq Mathew Abraham) posts at a very high technical level: deep dives into multimodal foundation models, fMRI decoding, pathology AI, LLM benchmarks, and advanced papers (e.g., Manifold Hyper-Connections, Med-R1 reasoning). His feed is dense with equations, research releases, and startup updates—perfect for experts, but time-intensive to parse fully.
Below is a curated summary of his recent activity (last ~30 days, January 2026, from his latest posts):
Recent Highlights (January 2026)
Personal/Reflective Posts:
Shared his 2025 Year Wrapped (major post with photos): Left Stability AI, co-founded @SophontAI (multimodal med models, $9.2M seed), relaunched @MedARC_AI community, released OpenMidnight pathology model (SOTA on low compute), published fMRI paper, launched Medmarks LLM benchmark, hit 1,000+ citations. Balanced with travel, friends, and family challenges (e.g., pet loss). Grateful tone—ends with excitement for 2026.
Posted from CES Las Vegas (photo at event).
Technical/Research Shares:
Praised a video breakdown of DeepSeek’s Manifold Hyper-Connections paper (step-by-step equations).
Commented on domain-specific models outperforming general ones (e.g., in cancer therapy prediction).
Questioned AI prescription tools (Doctronic in Utah) and clinician AI adoption.
Community/Industry Thoughts:
Asked: “How can we get more people interested in medical AI?” (sparked discussion on socio-technical challenges vs. genAI opportunities in pharma).
Noted LinkedIn’s value for research/jobs (similar engagement to X despite fewer followers).
Fun/light posts: Acronym ambiguity (mHC as AI vs. bio term), New Year’s vibes.
Key Themes from His Pinnacle-Level Posts
Domain-Specific Multimodal Models: Strong advocate for specialized foundation models in medicine (e.g., pathology, fMRI) over general LLMs—aligns perfectly with LPBI’s domain-aware corpus emphasis.
Open-Source & Community: Frequent releases via @MedARC_AI (2,500+ members)—e.g., full pipelines for reproducibility.
Startup Progress @SophontAI: Building “DeepSeek for medical AI”—focus on ambitious applications (diagnostics, drug discovery) beyond admin tasks.
Broader AI Trends: Shares/explains cutting-edge papers quickly, emphasizes impact (e.g., saving lives via AI).
Quick Tips to Stay Updated Without Full Reads
Prioritize Quoted/Thread Starters: His big announcements (e.g., releases, wrapped) get high engagement—skim those first.
Watch for @SophontAI / @MedARC_AI Tags: Core research/startup news.
LinkedIn Cross-Post: He mentioned posting more there—might have longer/summarized versions.
Set Notifications for His Posts Only: On X app, turn on bell for @iScienceLuvr to catch highlights.
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class V: 7,500 Biological Images in LPBI Group’s Digital Art Media Gallery, as Prior Art
Curator: Aviva Lev-Ari, PhD, RN
We had researched the topic of AI Initiatives in Big Pharma in the following article:
Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma
We are publishing a Series of Five articles that demonstrate the Authentic Relevance of Five of the Ten Digital IP Asset Classes in LPBI Group’s Portfolio of IP for AI Initiatives at Big Pharma.
For the Ten IP Asset Classes in LPBI Group’s Portfolio, See
This Corpus comprises a Live Repository of Domain Knowledge: Expert-Written Clinical Interpretations of Scientific Findings codified in the following five Digital IP Asset Classes:
• IP Asset Class V: 7,500 Biological Images in our Digital Art Media Gallery, as prior art. The Media Gallery resides in the WordPress.com Cloud of LPBI Group’s website.
BECAUSE THE ABOVE ASSETS ARE DIGITAL ASSETS, they are ready for use as Proprietary TRAINING DATA and for INFERENCE by AI Foundation Models in Healthcare.
Expert‑curated healthcare corpus mapped to a living ontology, already packaged for immediate model ingestion and suitable for safe pre-training, evals, fine‑tuning and inference. If healthcare domain data is on your roadmap, this is a rare, defensible asset.
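To make the packaging step concrete, here is a minimal sketch of how such a curated, ontology-mapped corpus could be serialized for model ingestion. The field names, file layout, and the sample record are illustrative assumptions, not LPBI’s actual schema or pipeline.

```python
# Minimal sketch (assumption: field names, file layout, and the sample record
# are illustrative, not LPBI's actual schema). It packages expert-curated
# articles plus their ontology tags into JSONL records, a format most
# fine-tuning and retrieval pipelines can ingest directly.
import json

def package_corpus(articles, out_path="lpbi_corpus.jsonl"):
    """articles: iterable of dicts with 'title', 'body', 'ontology_terms', 'url'."""
    with open(out_path, "w", encoding="utf-8") as f:
        for a in articles:
            record = {
                "text": f"{a['title']}\n\n{a['body']}",
                "metadata": {
                    "ontology_terms": a.get("ontology_terms", []),
                    "source_url": a.get("url"),
                    "asset_class": a.get("asset_class"),
                },
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Illustrative usage with a single hypothetical record:
package_corpus([{
    "title": "Expert interpretation of a CRISPR cardiology finding",
    "body": "Curator-written clinical interpretation of the original study ...",
    "ontology_terms": ["CRISPR", "Cardiology"],
    "url": "https://pharmaceuticalintelligence.com/example-article",
    "asset_class": "I",
}])
```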
The article titles of the five Digital IP Asset Classes matched to AI Initiatives in Big Pharma, one article per IP Asset Class, are:
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class V: 7,500 Biological Images in LPBI Group’s Digital Art Media Gallery, as Prior Art
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class X: +300 Audio Podcasts Library: Interviews with Scientific Leaders
In the series of five articles listed above, we present the key AI Initiatives in Big Pharma as they were generated by our prompt to @Grok on 11/18/2025:
Generative AI Tools: save scientists up to 16,000 hours annually in literature searches and data analysis.
Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.
Clinical Trials and Regulatory Efficiency AI:
– Predictive regulatory tools
– Decentralized trials
– Inventory management
Disease Detection and Diagnostics:
– ATTR-CM Initiative
– Rare diseases
Generative AI and Operational Tools:
– Charlie Platform
– Scientific Data Cloud: AWS-powered ML on centralized data
– Amazon SageMaker/Bedrock for manufacturing efficiency
– Global Health Grants: Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care
Partnerships and Education:
– Collaborations: IMI Big Picture, a 3M-sample disease database
– AI in Pharma (AIPM) Symposium: drug discovery and precision medicine
– Webinars on AI for biomedical data integration
– Webinar on AI in manufacturing
Strategic Focus:
– $500M R&D reinvestment by 2026 targeting AI for productivity
– Part of $7.7B cost savings
– Ethical AI, diverse databases
– Global biotech advances: China’s AI in CRISPR
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class V: 7,500 Biological Images in LPBI Group’s Digital Art Media Gallery, as Prior Art
Overview: Fifth in LPBI Group’s five-article series on AI-ready digital IP assets for pharma. This piece spotlights IP Asset Class V—7,500 expert-selected biological images in the Digital Art Media Gallery—as proprietary training data and “prior art” for multimodal AI foundation models in healthcare. Leveraging a November 18, 2025, Grok prompt on Pfizer’s AI efforts, it maps the gallery to pharma applications, emphasizing visual data’s role in enhancing generative AI for diagnostics, drug discovery, and article drafting. Unlike text-heavy prior classes, this focuses on image-caption pairs for ingestion into platforms like Charlie, positioning them as a “treasure trove” for ethical, diverse AI training.
Main Thesis and Key Arguments
Core Idea: LPBI’s 7,500 biological images (with captions) serve as defensible, expert-curated prior art and training data for Big Pharma AI, enabling multimodal inference that combines visuals with clinical insights—outpacing generic datasets by injecting human-selected domain knowledge.
Value Proposition: The ~8,000-image gallery (actual 7,500 noted) is a ready-to-ingest visual corpus for platforms like Pfizer’s Charlie, generating medical drafts and accelerating R&D. Valued within the series’ $50MM-equivalent portfolio; unique as embedded prior art in original texts, supporting ethical AI with diverse, ontology-mapped visuals.
Broader Context: Part of ten IP classes, with five (I-V, X) AI-primed; complements text assets (e.g., 6,250 articles, 48 e-books) by adding multimodal depth. Highlights live ontology for semantic integration, contrasting open-source data with proprietary, safe-for-healthcare inputs.
AI Initiatives in Big Pharma (Focus on Pfizer)
Reuses the Grok prompt highlights, presented in a verbatim table:
Initiative Category | Description
Generative AI Tools | Generative AI tools that save scientists up to 16,000 hours annually in literature searches and data analysis.
Drug Discovery Acceleration | Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.
Disease Detection & Diagnostics | ATTR-CM Initiative; rare diseases.
Generative AI & Operational Tools | Charlie Platform; Scientific Data Cloud (AWS-powered ML on centralized data); Amazon SageMaker/Bedrock for manufacturing efficiency; Global Health Grants: Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care.
Partnerships & Education | Collaborations: IMI Big Picture (3M-sample disease database); AI in Pharma (AIPM) Symposium on drug discovery and precision medicine; webinars on AI for biomedical data integration; webinar on AI in manufacturing.
Strategic Focus | $500M R&D reinvestment by 2026 targeting AI for productivity; part of $7.7B cost savings; ethical AI, diverse databases; global biotech advances: China’s AI in CRISPR.
Mapping to LPBI’s Proprietary Data
Core alignment table (verbatim extraction, linking Pfizer initiatives to Class V assets):
AI Initiative at Big Pharma (i.e., Pfizer) | Biological Images Selected by Experts, Embedded in Original Text (Prior Art)
Generative AI Tools (16,000 hours saved) | (No specific mapping provided.)
Drug Discovery Acceleration | Gallery of ~8,000 Biological images and captions is a Treasure TROVE.
Disease Detection & Diagnostics | Gallery of ~8,000 Biological images and captions is a Treasure TROVE.
Generative AI & Operational Tools (Charlie, AWS, etc.) | Ingest the Media Gallery into the Charlie Platform for generation of medical article drafts.
Partnerships & Education (IMI, AIPM, webinars) | (No specific mapping provided.)
Strategic Focus ($500M reinvestment, ethics) | (No specific mapping provided.)
Methodologies and Frameworks
AI Training Pipeline: Ingest image-caption pairs into Charlie/AWS platforms for pre-training (multimodal foundation models), fine-tuning (e.g., diagnostics visuals), and inference (e.g., draft generation); a minimal pairing sketch follows this list. Use the living ontology for semantic tagging; prior-art embedding ensures IP defensibility.
Productivity Model: Amplifies Pfizer’s 16,000-hour savings and $7.7B efficiencies by fueling generative tools with expert visuals; ethical emphasis on diverse DBs for global equity (e.g., AI Learning Lab).
Insights: References Grok prompt as real-time sourcing; quotes series-wide vision of assets as “codified digital treasures” for safe healthcare AI. Predicts revolution in visual inference for rare diseases/ATTR-CM.
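As referenced above, here is a minimal sketch of how gallery images and their captions might be paired into a manifest for multimodal fine-tuning (CLIP-style contrastive training, captioning, and similar jobs). The directory layout, sidecar-caption convention, and file names are illustrative assumptions, not a description of the actual gallery export or of any vendor platform.

```python
# Minimal sketch (assumption: each exported image has a same-named .txt caption
# file next to it; this layout is illustrative, not the real gallery export).
import csv
from pathlib import Path

def build_image_caption_manifest(gallery_dir, manifest_path="image_captions.csv"):
    """Pair each image with its sidecar caption and write a CSV manifest."""
    rows = []
    for img in sorted(Path(gallery_dir).glob("*.jpg")):
        caption_file = img.with_suffix(".txt")  # assumed caption export format
        if caption_file.exists():
            rows.append({"image": str(img),
                         "caption": caption_file.read_text(encoding="utf-8").strip()})
    with open(manifest_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["image", "caption"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

# Illustrative usage: build_image_caption_manifest("media_gallery_export/")
# returns the number of image-caption pairs written to the manifest.
```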
Conclusions and Implications
Digital IP Class V’s image gallery is a “treasure trove” ready for Big Pharma AI, establishing prior art while powering multimodal breakthroughs. Technical implications: enables visually enhanced models for disease detection and R&D acceleration. Business implications: supports $500M investments with ethical, diverse data for partnerships; licensing potential for grants/webinars. Unique insight: as embedded prior art, these visuals create a “moat” in multimodal AI, extending the series from text to imagery for holistic pharma inference. Promotional, with links to the gallery and IP portfolio. Caps the series by adding visual depth to textual assets.
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025
Curator: Aviva Lev-Ari, PhD, RN
We had researched the topic of AI Initiatives in Big Pharma in the following article:
Authentic Relevance of LPBI Group’s Portfolio of IP as Proprietary Training Data Corpus for AI Initiatives at Big Pharma
We are publishing a Series of Five articles that demonstrate the Authentic Relevance of Five of the Ten Digital IP Asset Classes in LPBI Group’s Portfolio of IP for AI Initiatives at Big Pharma.
For the Ten IP Asset Classes in LPBI Group’s Portfolio, See
This Corpus comprises a Live Repository of Domain Knowledge: Expert-Written Clinical Interpretations of Scientific Findings codified in the following five Digital IP Asset Classes:
• IP Asset Class V: 7,500 Biological Images in our Digital Art Media Gallery, as prior art. The Media Gallery resides in the WordPress.com Cloud of LPBI Group’s website.
BECAUSE THE ABOVE ASSETS ARE DIGITAL ASSETS, they are ready for use as Proprietary TRAINING DATA and for INFERENCE by AI Foundation Models in Healthcare.
Expert‑curated healthcare corpus mapped to a living ontology, already packaged for immediate model ingestion and suitable for safe pre-training, evals, fine‑tuning and inference. If healthcare domain data is on your roadmap, this is a rare, defensible asset.
The article titles of the five Digital IP Asset Classes matched to AI Initiatives in Big Pharma, one article per IP Asset Class, are:
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class I: PharmaceuticalIntelligence.com Journal, 2.5MM Views, 6,250 Scientific articles and Live Ontology
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class II: 48 e-Books: English Edition & Spanish Edition. 152,000 pages downloaded under pay-per-view
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class V: 7,500 Biological Images in LPBI Group’s Digital Art Media Gallery, as Prior Art
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class X: +300 Audio Podcasts Library: Interviews with Scientific Leaders
In the series of five articles listed above, we present the key AI Initiatives in Big Pharma as they were generated by our prompt to @Grok on 11/18/2025:
Generative AI Tools: save scientists up to 16,000 hours annually in literature searches and data analysis.
Drug Discovery and Development Acceleration: Pfizer uses AI, supercomputing, and ML to streamline R&D timelines.
Clinical Trials and Regulatory Efficiency AI:
– Predictive regulatory tools
– Decentralized trials
– Inventory management
Disease Detection and Diagnostics:
– ATTR-CM Initiative
– Rare diseases
Generative AI and Operational Tools:
– Charlie Platform
– Scientific Data Cloud: AWS-powered ML on centralized data
– Amazon SageMaker/Bedrock for manufacturing efficiency
– Global Health Grants: Pfizer Foundation’s AI Learning Lab for equitable access to care and tools for community care
Partnerships and Education:
– Collaborations: IMI Big Picture, a 3M-sample disease database
– AI in Pharma (AIPM) Symposium: drug discovery and precision medicine
– Webinars on AI for biomedical data integration
– Webinar on AI in manufacturing
Strategic Focus:
– $500M R&D reinvestment by 2026 targeting AI for productivity
– Part of $7.7B cost savings
– Ethical AI, diverse databases
– Global biotech advances: China’s AI in CRISPR
AI Initiatives in Big Pharma @Grok prompt & Proprietary Training Data and Inference by LPBI Group’s IP Asset Class III: 100 e-Proceedings and 50 Tweet Collections of Top Biotech and Medical Global Conferences, 2013-2025
Overview: Third in LPBI Group’s five-article series on AI-ready digital IP assets for Pharma companies. This installment highlights IP Asset Class III—100 e-proceedings and 50 tweet collections from top global biotech/medical conferences (2013-2025)—as a proprietary, expert-curated corpus of frontier science insights. Using a November 18, 2025, Grok prompt on Pfizer’s AI efforts, it maps these assets to pharma applications, stressing their role in training/inference for foundation models. Unlike prior classes (journal articles, e-books), this focuses on real-time event captures (e.g., speaker quotes, agendas) as unique, non-replicable data for efficiency, education, and branding in AI-driven R&D.
Main Thesis and Key Arguments
Core Idea: LPBI’s IP Asset Class III assets provide the “only written record” of +100 top conferences, with tweet collections as verbatim speaker quotes/affiliations—ideal for ingesting into AI platforms to amplify human expertise in combinatorial predictions. This supports Pfizer’s goals like 16,000-hour savings via generative AI, enabling subject-specific training (e.g., immunotherapy) and future agenda building.
Value Proposition: 150 total assets (100 e-proceedings + 50 tweet collections) form a live repository of domain knowledge, mapped to ontology for immediate AI use. Equivalent to $50MM value (aligned with series benchmarks); unique for branding (“make or break”) as no other source offers such curated event intel. Part of five AI-ready classes (I, II, III, V, X) for healthcare models.
Broader Context: Builds on series by emphasizing event-based data for partnerships/education; contrasts generic datasets with defensible, ethical expert interpretations for global equity (e.g., Pfizer’s AI Learning Lab).
AI Initiatives in Big Pharma (Focus on Pfizer)
Reuses the Grok prompt highlights, presented in a verbatim table:
Initiative Category | Description
Generative AI Tools | Save scientists up to 16,000 hours annually in literature searches and data analysis.
Drug Discovery Acceleration | Uses AI, supercomputing, and ML to streamline R&D timelines.
Generative AI & Operational Tools | Charlie Platform; Scientific Data Cloud (AWS-powered ML on centralized data); Amazon’s SageMaker/Bedrock for manufacturing efficiency; Pfizer Foundation’s AI Learning Lab for equitable access to care and community tools.
Partnerships & Education | IMI Big Picture (3M-sample disease database); AI in Pharma AIPM Symposium (drug discovery and precision medicine); webinars on AI for biomedical data integration; webinar on AI in manufacturing.
Strategic Focus | $500M R&D reinvestment by 2026 for AI productivity; part of $7.7B cost savings; ethical AI with diverse DBs; global biotech advances (e.g., China’s AI in CRISPR).
Mapping to LPBI’s Proprietary Data
Core alignment table (verbatim extraction, linking Pfizer initiatives to Class III assets):
Pfizer AI Initiative | Class III Alignment (100 e-Proceedings + 50 Tweet Collections)
Generative AI Tools (16,000 hours saved) | (No specific mapping.)
Drug Discovery Acceleration | e-Proceedings of +100 TOP Conferences in Biotech, Medicine, Genomics, Precision Medicine (2013-2025). Frontier of Science presented; ONLY written record of the events. Tweet Collections: Speaker QUOTES on record (not available elsewhere by name/affiliation).
Generative AI & Operational Tools (Charlie, AWS, etc.) | Ingest ALL e-Proceedings into the Charlie Platform. Apply GPT: Training Data—one conference at a time; OR all conferences on ONE subject (e.g., Immunotherapy, Oncolytic Virus Immunotherapy, Immune Oncology).
Partnerships & Education (IMI, AIPM, webinars) | Use past agendas/speaker lists/topics for: Employee Training & Leadership Development; Building Future Conference Agendas.
Strategic Focus ($500M reinvestment, ethics) | Access to +100 e-Proceedings vs. none = Make or Break in Branding.
Examples: Subject clusters like Immunotherapy; resources include conference lists (2013-present) and e-proceedings deliverables.
Methodologies and Frameworks
AI Training Pipeline: Ingest proceedings/tweets into Charlie/AWS (e.g., SageMaker); GPT processing per conference or per theme for pre-training/fine-tuning/inference (a theme-grouping sketch follows this list). Use the ontology for semantic mapping and the tweets for quote-based evals.
Productivity Model: Enhances Pfizer’s savings ($7.7B total) via event intel for education/partnerships; ethical diverse data for global grants (e.g., CRISPR AI).
Insights: Quote from Dr. Stephen J. Williams, PhD: Emphasizes strategic branding via access. Predicts revolution in AI education/leadership from historical agendas.
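As referenced above, here is a minimal sketch of the per-theme grouping step, in which e-proceedings are clustered by subject (e.g., immunotherapy) before subject-specific training or fine-tuning. The keyword lists and record fields are illustrative assumptions, not the actual ontology or pipeline.

```python
# Minimal sketch (assumption: keyword lists and record fields are illustrative,
# not the real ontology). Groups e-proceedings by subject theme, mirroring the
# idea of training on one conference at a time or all conferences on one subject.
from collections import defaultdict

THEME_KEYWORDS = {
    "immunotherapy": ["immunotherapy", "immune oncology", "oncolytic virus"],
    "precision_medicine": ["precision medicine", "genomics", "biomarker"],
}

def group_by_theme(proceedings):
    """proceedings: iterable of dicts with 'conference', 'year', and 'text' keys."""
    themed = defaultdict(list)
    for doc in proceedings:
        text = doc["text"].lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(k in text for k in keywords):
                themed[theme].append(doc)
    return themed

# Illustrative usage: themed = group_by_theme(all_proceedings)
# themed["immunotherapy"] would then be one subject-specific training slice.
```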
Conclusions and Implications
LPBI Group’s IP Asset Class III assets are “rare, defensible” for Big Pharma AI, powering everything from R&D acceleration to equitable care. Technical implications: enables theme-specific models (e.g., oncology conferences) for diagnostics/trials. Business implications: boosts ROI on $500M investments; licensing for symposia/webinars. Unique insight: as the sole record of speaker insights, these outpace public data for “frontier” inference, a key element of the series for holistic pharma AI moats. Promotional, with resource links (e.g., IP portfolio, biotech conference lists). Complements prior pieces by adding temporal/event depth.
My two X.com accounts had exceeded the tweeting volume capacity and were deactivated pending verification that I am a person and not a bot. Account authentication reported “Something went wrong, try later.”
On 9/23/2024, I contacted Customer Service at X.com to reactivate these two accounts.
For speaker quotes on 9/23/2024, from 4 PM EST to the end of the day,
see below on WordPress.com, by Date, Time, Session Name, and Speaker Name.
For speaker quotes on 9/24/2024, from 8 AM to 5:30 PM,
see below on WordPress.com, by Date, Time, Session Name, and Speaker Name.
For speaker quotes on 9/25/2024, from 8 AM to 12:35 PM,
see below on WordPress.com, by Date, Time, Session Name, and Speaker Name.
An UPDATE on the reactivation of the handles on X.com will be posted here.
Usage of X.com will resume after handle reactivation by X.com.
100+ Mass General Brigham Leading Experts Identify
Top Unmet Needs in Healthcare
Project from Harvard Medical School-affiliated clinicians and scientists in the Mass General Brigham healthcare system stimulates new consideration, urgency regarding
innovation in life sciences, healthcare
Top 10 List Announced at World Medical Innovation Forum
BOSTON, MA September 25, 2024 – Some of the most vexing challenges and transformational opportunities in healthcare are included in a new list, “Top Unmet Needs in Healthcare” released by leading experts at Mass General Brigham. Identified by more than 100 Harvard Medical School faculty at Mass General Brigham, the findings range from the need to expand and accelerate rare disease treatment, to the coming “gray tsunami” of aging patients and the implications for patient care, delivery, and technology. The project, revealed at the 10th annual World Medical Innovation Forum, is meant to stimulate new consideration and urgency regarding solving and advancing these issues for improved patient care.
Views from Leading Clinicians, Researchers, and Practitioners in Academic Medicine
The Top Unmet Needs emerge from structured one-on-one discussions with more than 100 Harvard faculty who practice medicine and conduct research at Mass General Brigham, the largest hospital system-based research enterprise in the U.S., with an annual research budget exceeding $2 billion, and five of the nation’s top hospitals according to US News & World Report.
Through one-on-one discussions with these key opinion leaders from diverse clinical and research fields, and subsequent analyses by internal teams of experts, Mass General Brigham has identified the following top 10 unmet clinical needs:
#1. Preparing for the ‘Gray Tsunami’
The need for better tools and therapies aimed at caring for geriatric populations and maintaining geriatric independence, with a particular focus on expanded hospital-at-home capabilities, and the need to better understand the pathways that lead to chronic and acute disease in geriatric patients to enable better and more proactive treatment.
#2. Defining and Maintaining Brain Health
The need for a model of brain health and neurological care that clearly defines not only what brain health is but also integrates our current understanding of the mechanisms and phases of neuroinflammatory and neurodegenerative diseases; enables better and earlier diagnoses and treatment; and propels the development of therapies that target these mechanisms and phases.
#3. A Paradigm Shift in Cancer Treatment
The need for a new framework for therapeutic development in cancer that is focused on improving curability as opposed to an exclusive focus on the development of drugs for metastatic disease. This
framework also requires effective tools for early-stage cancer detection across the board in all cancers, but especially in lung, ovarian, pancreatic, and GI cancers (esophagus, stomach and colon).
#4. Targeting Fibrosis, a Shared Culprit in Disease
The need for therapeutics that target fibrosis (tissue scarring), which is responsible for a significant percentage of deaths worldwide, representing diseases of the lung, liver, kidney, heart, and skin.
#5. New Approaches for Infectious Disease in a Changing World
The need for novel strategies for the rapid diagnoses, treatment, and even prevention of antibiotic-resistant infections, and the need for the next generation of globally deployable vaccines to enable pandemic preparedness.
#6. Striving for Equity in Healthcare
The need to radically rethink how, when, and where patients interact with healthcare services to optimize healthcare access and efficiency without diminishing its effectiveness, and to proactively meet the needs of currently underserved populations.
#7. Riding the Wave of Clinical Data
The need to expand the scope of available clinical data to include historically understudied populations (including women) and to model and implement a cohesive, dynamic data “stream,” which flows as patients do between the different phases of health and clinical care, enabling comparisons of patients to their previously healthy selves and the development of AI/ML approaches to harness these data to improve diagnosis, prognosis, and treatment.
#8. A Systems-Level View of Human Disease
The need to rethink how we understand and treat disease — not only from an organ-specific standpoint but from a whole-body, systems-level view — and to fully elucidate the roles that inflammation and immune pathways play in autoimmune and infectious diseases and their effects on chronic and acute diseases in diverse human systems, such as the cardiovascular/circulatory and nervous systems.
#9. A New Approach to Psychiatric Disease
The need for novel treatments for psychiatric disease, improved biomarkers and minimally invasive and ambulatory ways of measuring them, and more productive interactions with industry to advance new therapies to the clinic. This includes hybrid therapies (therapies that combine elements such as talk therapy, novel biomarkers, and pharmacological treatments) as well as new diagnostic and treatment modalities, such as psychedelic therapeutics and precision psychiatry.
#10. Charting a Course in Rare Disease Treatment
The need for viable treatments for the 7,000 identified rare diseases, especially the roughly 70% of such diseases that are genetic and the effects of which are first observed in early childhood.
The Unmet Needs list also includes the following honorable mentions, which rose to significant rankings in the analysis:
Driving Innovation in Chronic Disease: Improved Diagnosis, Treatment, and Prevention
A New Era of Obesity Medicine
A New Generation of Pain Treatments
Unlocking Novel Treatments for the Skin
Overarching Themes
Addressing unmet clinical needs involves solving a number of common challenges, including commercialization hurdles, regulatory considerations, and funding. The Mass General Brigham project identified overarching themes to help address these challenges and support innovation across multiple sectors. These include:
Taking a systems view of human disease and the practice of system-medicine
Developing a global view of infectious disease, including antimicrobial resistance
An expansion in high-quality, real-world data that closes gaps in current data (particularly for women and other underserved populations) and ensures that data sets are sufficiently enabling for AI/ML
Improving health and healthcare across key populations, including geriatrics and rare genetic disease
Addressing major diseases of the brain, including both neurodegenerative and neuropsychiatric conditions; these include Alzheimer’s disease, Parkinson’s disease, ALS, as well as psychiatric and mental health disorders
Opening an era of precision medicine across disease areas that includes early diagnosis, treating staged disease, and biomarker discovery and utilization
Panel co-chairs José Florez, Physician-in-Chief and Co-Chair of the MGB Department of Medicine and the Jackson Professor of Clinical Medicine at Harvard Medical School, and Bruce Levy, Physician-In-Chief and Co-Chair of the MGB Department of Medicine and the Parker B. Francis Professor of Medicine at Harvard Medical School, noted how the observations of a broad and representative set of faculty help illuminate the innovation landscape ahead.
“As a leader in patient care and healthcare innovation, our goal is to build on the legacy of research and discovery that has shaped the hospitals of the Mass General Brigham healthcare system for more than a hundred years, and continue to bring breakthroughs forward that can help solve pressing needs,” said Dr. Florez.
Dr. Levy added that “This is a roadmap for the future that can inform discussions happening throughout the healthcare and investment ecosystem regarding the future of medicine.”
More than 2000 decision-makers from healthcare, industry, finance and government attended the World Medical Innovation Forum this week in Boston. A premier global event, the Forum highlights leading innovations in medicine and transformative advancements in patient care.
###
About Mass General Brigham
Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.
Contact: Tracy Doyle Mass General Brigham Innovation
(262) 227-5514
Tdoyle5@mgb.org
SOURCE
From: “Doyle, Tracy” <tdoyle5@mgb.org> Date: Thursday, September 26, 2024 at 10:19 AM Cc: “Card, Matthew” <matthew.card@bofa.com> Subject: Unmet Needs in Healthcare — Press Release and link to panel
@@@@@@@
Invitation as MEDIA
From: “Doyle, Tracy” <tdoyle5@mgb.org> Date: Wednesday, August 14, 2024 at 4:04 PM Cc: “Doyle, Tracy” <tdoyle5@mgb.org>, “Card, Matthew” <matthew.card@bofa.com> Subject: Media Invite: World Medical Innovation Forum, Sept. 23-25, Boston — Hundreds of clinical experts, industry, investment leaders
Media Invite: World Medical Innovation Forum: Monday, Sept. 23—Wednesday, Sept. 25, Boston
At the intersection of innovation and investment in healthcare
Join Us!
Register Now: WMIF24 Media Registration
Mass General Brigham, one of the nation’s leading academic medical centers, is pleased to invite reporters to the 10th annual World Medical Innovation Forum (WMIF) Monday, Sept. 23–Wednesday, Sept. 25 at the Encore Boston Harbor in Boston. The event features expert discussions of scientific and investment trends for some of the hottest areas in healthcare, including
GLP-1s,
the cancer care revolution,
generative AI-enabled care paths,
xenotransplant,
community health,
hospital at home, and
therapeutic psychedelics, among many others.
The agenda includes nearly 175 executive speakers from healthcare, pharma, venture, start-ups, and the front lines of care, including many of Mass General Brigham’s Harvard Medical School-affiliated researchers and clinicians who this year will host 20+ focused sessions. Bank of America, presenting sponsor of the Forum, will provide additional expert insights on the investment landscape associated with healthcare innovation.
Forum highlights include:
1:1 and panel interviews with leading CEOs and government officials including:
Stéphane Bancel, CEO, Moderna
Albert Bourla, PhD, CEO, Pfizer
Marc Casper, CEO, Thermo Fisher
Deepak Chopra, MD, Founder, The Chopra Foundation
Scott Gottlieb, MD, PhD, Former Commissioner, FDA (2017-2019)
Maura Healey, Governor, Commonwealth of Massachusetts
David Hyman, MD, CMO, Eli Lilly
Haim Israel, Head of Global Thematic Investing Research, BofA Global Research
Reshma Kewalramani, MD, CEO, Vertex
Anne Klibanski, MD, President and CEO, Mass General Brigham
Peter Marks, MD, PhD, Director, Center for Biologics Evaluation and Research, FDA
Tadaaki Taniguchi, MD, PhD, Chief Medical Officer, Astellas Pharma
Christophe Weber, CEO, Takeda
Renee Wegrzyn, PhD, Director, ARPA-H
Expert panels including:
Oncology’s New Paradigm
Gene Therapies for Rare Diseases
Future of Metabolic Therapies
Digital Transformation
Biologic Revolution in Radiotherapies
Cell Therapies for Autoimmune Diseases
Hospital Venture Funds
Leading biotech and venture speakers from companies including:
Abata Therapeutics
Atlas Venture
Be Biopharma
Everly Health
Flagship Pioneering
Fractyl Health
MindMed
Mirador Therapeutics
Regor Therapeutics
RH Capital
Transcend Therapeutics
Exclusive programming:
First Look – 15 rapid-fire presentations on the latest research from leading Mass General Brigham scientists
Un-Met Clinical Needs – 100+ key opinion leaders in healthcare weigh in on the top un-met clinical needs in medicine today
Emerging Tech Zone – Hands-on exploration of some of the latest digital and AI-based healthcare technologies
Liz Everett Krisberg, Head of Bank of America Institute
Record attendance this year
Introduction to Haim
Panelist
Haim Israel
Head of Global Thematic Investing Research, BofA Global Research
Concept of the Future and for the Future: Short-term and long-term
Humanity’s achievements in ten years: Data, processing power, and the BRAIN – long-term becomes short-term – last 10 years: 2012, 2014 solar system, 2015 medicine, 2019 black hole, 2023 core of the sun – a star was created hotter than the sun’s core
2022, 2024 – galaxy picture of the universe
Volume of data created every month is in terabytes; every 18 months the data doubles.
Only 1% is used – imagine 2% or 3%
Processing power since Apollo 11 [one trillion] – getting cheaper – the cost per calculation went down 16,000-fold since 1995
AMOUNT of DATA goes up and cost of COMPUTATION goes down – price per gigabyte
Projections for the next 100 years
Negative for people and negative for companies that are concerned with quarterly financial data
Companies: Walmart, Alphabet, Home Depot – DATA larger than countries
Living in a defining moment: started by the iPhone revolution and, in 2023, by the AI revolution – Moore’s Law (6x) outpaced by GPT at 3,000x
18 months into AI revolution – GPT in use
The next 10 years:
Aging population
2024 – birth rates low in the US, Japan, China, S. Korea – the pension system will decline in size
2.2 million new materials were created by DeepMind at Alphabet via AI simulation of molecules
Microsoft, in 80 hours, identified 18 winning battery materials using AI from 32 million material candidates
AI weather calculations in minutes: 1,000x faster, cheaper, and more accurate
2025 – GPT-6: AI surpasses the human brain
China is a big player in AI
Cyber CRIME is the 3rd largest economy in the World. Hackers are using ChatGPT to create fake pictures leading to ZERO privacy
PRIVACY: Deepfakes up 62x, social media
2024 – Global Grid – needs much more energy because AI consumes so much energy
Metals shortages: Nickel, Copper,
Scarcity of water for 2/3 of the planet
Data centers consume more water than Japan
2025 – Genomics data sequencing bigger than X.com or YouTube
2027 – Peak oil demand: needs to be scalable, 25% cheaper
2028 – 5G networks reaches full capacity, 6G will be needed
2029 – 25x more satellites in Orbit than today
2029 – Personalized AI medicines and treatments will manipulate death and revive LONGEVITY – AI will generate drugs and all treatments
2030 – Generative AI: re-skill 1 Billion people
2035 – Fusion energy, a known technology since the atomic bomb; keeping it stable in the plasma state is not yet achieved; it is clean and cheap: to power the world – equivalent of 11 barrels of oil
Large cities: a cable 17 cm in diameter to power a large city
AI will change scarcity into abundance
2037 – Artificial SUPER Intelligence – AI to outsmart Life
Quantum computer – Consortium of NASA and other governmental agencies and Google on quantum computer design
David Brown, MD, President, Academic Medical Centers, Mass General Brigham; Mass General Trustees Professor of Emergency Medicine, Harvard Medical School
How do you balance private medicine with public, not-for-profit healthcare?
The healthcare delivery system can achieve only so much in human health
Resources for equity: housing and services – capacity and cost
Evolution of care close to home, catalyzed by the pandemic – how government thinks about the right patient for the right care level
MGB has 40-60 in-patients at home – the largest program in the state – the product needs to scale across all populations, though some do not have food security at home
Panelist
Kate Walsh, Secretary of Health and Human Services, State of Massachusetts
Steward bankruptcy – patient and provider involvement – structural challenges
Race and ethnicity – disparities, access and equity
Identify the challenge for Race and ethnicity
Focus to identify resources
Medicare & Medicaid – Human needs equity involve housing, food and home care – Public and Private sector cooperation
Pay for Performance
MA vs NYC – resources for welcoming new populations to the State of MA
Help finding Housing vs Shelter people
MA is the only state in the Union that is a shelter state
People who are in our country LEGALLY are in and out of shelters; new arrivals of skilled labor – temporary assistance to get jobs that we can’t find people to fill: CNA as an example
MA has a community of shelters and medical center in the communities
Services for people that are at risk due to past life in home countries
Support for kids that do not speak English
Care and location: Keep care at home or SNF at home or in the community
David Hyman, MD, Chief Medical Officer, Eli Lilly and Company
Cardio-metabolic – medicines redefining disease by their benefit to patients
Investment in manufacturing medicines for Obesity, demand continue to expand
Oral small molecules and scaling; focus on sleep apnea; half of the population has metabolic disease and heart failure
Extension program with sustained weight loss in pre-diabetes progressing into maintained weight loss
Invest in R&D in the cardio-metabolic
Listened to community feedback on how the AD drugs affected patients in the community – learning about challenges in delivering innovation in AD – irreversible neurodegenerative diseases – prevention, so as not to lose the patients entirely – brain function
Targeted therapies, genetic therapies
Past life Oncologist – delivered innovations into Cancer patients – genetic medicines
AD medicines are not accessible even to people of means; drug delivery using PET, spinal injections
Ten-year horizons at Eli Lilly are common
Obligation to provide scientific evidence from clinical trials
Inventory of patients’ qualifications to participate in clinical trials
Oncology: interactions in biologics, cell therapies, conjugate compounds
Renewal of Targeting antigens
In Oncology: a proportion of patients get long-term disease control from molecules developed in academic centers.
Eli Lilly acquired a BioPharma with manufacturing capabilities
Innovations are core vs. discounted cash flow; the strategy is to look at the science, given the capacity to develop innovations
Alec Stranahan, PhD, SMid-Cap Biotech Analyst, BofA Global Research
Caroline Apovian, MD, MGH, HMS
Last ten years, from metabolic lessons of Bariatric patients
Treat obesity before surgery
product composition
A multidisciplinary approach to obesity needs to be like in Oncology – multiple disciplines
Bariatric surgery and weight regain, like stent restenosis after surgery
Obesity dysfunction and inflammation, gut-brain axis: hormones from the gut do not reach the brain to curb hunger; satiety is not signaled in the brain, and eating continues to mitigate hunger
Insurance must cover
Obesity Medicine – training 25 new practitioners to treat obesity – standards of care, lifestyle change
Primary care providers do not have resources to treat the lifestyle component of obesity
Bariatric surgery reduces mortality by 20% – no reduction of mortality by stenting – THAT I DISAGREE with
Panelists
David Hyman, MD, Chief Medical Officer, Eli Lilly and Company
non-peptide agonist, bariatric level for obesity
peptide injecting device
hormones and peptides; activin inhibitor
hundreds of millions of people – scaling up
Adolescents with obesity will develop CVD, NASH
Epidemic of obesity – the medicines are combating the epidemic
Vials, differential pricing, orals vs injectables
Productivity of work force, coverage by employers health insurance vs Government to handle coverage
10 additional drugs
Xiayang Qiu, PhD, CEO, Regor Therapeutics
Six years ago: a great opportunity in peptides and biologics for the lifetime disease of obesity
Cardiovascular outcomes favorably affected by reduction in weight
Medicines that work: start early, at age 35
Harith Rajagopalan, MD, PhD, CEO & Co-Founder, Fractyl Health
Diet & Life Style
Eli Lilly and Novo Nordisk – have great drugs
Patients stop using them before they see the benefit
Durable long-term maintenance – staying on the drug long term
Past life as a coronary cardiologist: PCI vs. surgery, choice of care – angioplasty vs. open-heart surgery
Bariatric surgery vs. great medicines
Maybe an “angioplasty” for bariatric patients
Obesity is different from CVD
BC-BS coverage of obesity drugs because weight is gained back, vs. statins – continual use controls cholesterol
maintenance drugs in the field of Obesity are needed
cost of drugs will come down
more evidence on obesity drugs will affect Formulary
Jason Zemansky, PhD, SMid-Cap Biotech Analyst, BofA Global Research
Patrick Ellinor, MD, PhD, MGH, HMS
Panelists
Craig Basson, MD, PhD, Chief Medical Officer, Bitterroot Bio
17,000 obese patients, no DM
Prior CVD, followed through 3 years of treatment; 6% mortality during the trial
Death from CVD as endpoint
Weight at joining the trial, loss during the trial, benefit from the drug
Improve CVD, not weight loss
Mechanism of inflammation – the drug reduced atherosclerosis, reduced plaque, cytokines, and inflammation, improving CVD status
Combination of lifestyle and drugs; GI axis, systemic
Coronary artery disease: cholesterol; inhibit inflammatory signals; plaque builds on top of itself – approaches to remove debris/macrophages in the plaque, targeting the atherosclerosis mechanism as CVD risk
Joshua Cohen, Co-CEO, Amylyx Pharmaceuticals
Bariatric surgery lowers obesity
genetics, eating habits,
GLP-1 agonist developed
Punit Dhillon, CEO, Skye Bioscience
Phase II study combination therapy CVD and Obesity
optimize body composition – more productive on the body periphery
subtypes metabolic gains
Pharmacotherapy for obesity: complementary mechanisms; lifestyle change is a must-have for long-term benefits
weight loss as a start before obesity treatment
co-morbidities of obesity
Justin Klee, Co-CEO, Amylyx Pharmaceuticals
Parkinson’s CNS peripheral Brain access therapies
revolution in metabolic disease treatment options, more studies for pathways to target the right patients for the right treatment
GLP-1 is an energy regulator; hypoglycemia is very dangerous
Rohan Palekar, CEO, 89bio
applications to obesity – data support
bariatric surgery intervention is not enough, NASH will not be impacted only by the surgery
NASH is a disease taking 25 years to develop
Risk of fibrosis setting in, then cirrhosis, which is not curable
Liz Kwo, MD, Chief Commercial Officer, Everly Health
Infrastructure
AI used for
Panelists
Anna Åsberg, Vice President, AstraZeneca Pharmaceuticals
Massive databases to organize
AI to augment intelligence inside the data
Tyler Bryson, Corporate Vice President, US Health & Public Sector Industries, Microsoft Corporation
Do we have platforms to serve new problems?
Regulatory changes require visiting use cases
Pharma has the research data, providers have EMR – Microsoft builds new models using that data
Tumor imaging data was processed and new pattern recognition done on data of these tumors. New patterns are now a subject for research, just identified inside the data
Trust in Healthcare
NYC and Microsoft developed a System for small businesses to access city resources
Works with academic institutions: programs at Harvard and Princeton where Microsoft employees train students on Microsoft AI technologies, so that as they graduate there will be newly AI-trained employees
collaborations
Aditya Bhasin, BofA
AI in Banking: Bias, security
AI virtual system analytics to provide insight for scaling
Jane Moran, MGH
Network, Data structure needs updates
technology to help clinicians
care team to work with Generative AI to assist in e-mail reading and problem solving
Healthcare equity – avoid Bias
AI is not an answer to every problem
innovate at scale: using Epic and Microsoft
Clinical data structure for LLM, AI to renovate administrative processes inside MGH
John Bishai, PhD, Global Healthcare Investment Banking, BofA Securities
Umar Mahmood, MD, PhD, MGH, HMS
Panelists
Amos Hedt, Chief Business Strategy Officer, Perspective Therapeutics
Imaging used to model delivery of the therapeutics before the drug touches the patient, to calculate toxicity
PL-1 combined with radiotherapy: synergistic results
Immunogenic combination therapy: in the presence of these agents, an immune response reaction occurs in the immune cells
Matthew Roden, PhD, President & CEO, Aktis Oncology
Conjugates – direct delivery to tumors
Opportunity, two targets: (1) SSTA2 marker (2) xx
When the agent is inside the tumor: shrinkage and no emergence of nascent cells
optimization design
Treatment break for patients and families
Philip Kantoff, MD, Co-Founder & CEO, Convergent Therapeutics
Radiopharmaceuticals: 10-day half-life carrier, not a target for small molecules. Data on 120 patients – robust response, synergy of antibody and molecule
image alphas
durable responses
Matt Vincent, PhD, AdvanCell Isotopes
ROS species generated in the tumor
peptides, protein binders
paradigm shift in delivery of oncology therapeutics directly to tumors
Lena Janes, PhD, Abdera Therapeutics
isotope will deliver the payload without damaging the DNA and healthy tissue
target different types of tumors, different half-life
Radiation therapy using isotopes is one of two modalities: tumor-in and tumor-out approaches
screen patients for the translational therapy
Next generation of products will come, now it is the beginning of these agents
Michael Ryskin, Life Science Tools & Diagnostics Analyst, BofA Global Research
Precision Medicine – was it a paradigm shift?
Acquisition of manufacturing capabilities
research and manufacturing lines blurred
What excites you the most?
Panelist
Marc Casper, Chairman, President & CEO, Thermo Fisher Scientific
Enabling the life sciences and pharmaceutical industries; $1.5 billion internal investment annually
AI increasing knowledge
How is Precision Medicine applied? Sequencing in cancer accelerated the use of genomic information, with a 24-hour turnaround for the sequence – adopted around the world.
At MGH, lung cancers are treated guided by genomic sequencing
identification of a patient’s suitability for a targeted treatment
treatment during pregnancy at home vs. hospitalization
History of the company: tools first – mass spectrometry; one year for one sequence; protein identification carried to mass spectrometry
Interactions need understanding; acquiring electro-spectrometry allowed analytical chemistry on proteins
Broad range of products: clinical research to meet regulatory requirements; entry into reagent products.
Clinical Trials made effective by Thermo Scientific Products
Capabilities in registries, patient safety in psoriasis
Large role in experimental medicine drives efficiency in LABS
Size of customers: small biotech and large pharma
Manufacture medicines: work with partners; capabilities built by acquisitions; small molecules
100 engagements in research and supply chain, making medicines available at sites
Role for AI at Thermo Scientific:
Productivity – cost-effective processes in use by 120,000 employees
Customer interaction improved by interrogating internal manuals to provide answers quickly
Improvement of products
Excitement Points: Responsiveness to COVID pandemic
Tazeen Ahmad, SMid-Cap Biotech Analyst, BofA Global Research
Are you using AI?
Neuroinflammation
Cynthia Lemere, PhD, BWH, HMS
What systems are primarily impacted by the immune system?
Drug delivery for inflammation huge area
Getting antibodies to the Brain
Precision medicine, genetics, a specific person with a specific immune disease
Panelists
Jo Viney, PhD, Cofounder, President & CEO, Seismic Therapeutic
Pandemics highlighted the impact of the immune system
Targeting cytokines in specific locations – new approach
Modalities on hand: protein degradation mediated by bringing two cells together
AI is used for Patient stratification
AI to be used on pathways involved in the disease process to identify biologics, PROTACs
AI and ML for training models on interactions between proteins
ChatGPT to predict interactions among proteins
Immune disease and remission – boost the immune system to improve the quality of life of patients undergoing interventions
T-cell engagers – in refractory cases – a great approach for boosting the immune system: removal of antibodies, recycling antibodies
Two ends: Cell depletion vs Early detection
Therapy is every 6 months; depleted cells take 3 months to come back.
Target immune system in the periphery,
Immune system in neurodegenerative diseases: Parkinson’s – local modulation to penetrate the neurological system
Markers to cross the BBB or not cross in neurological diseases
Immune disease is POLYGENIC with multiple etiologies – mutations, genetics; which cell and which pathway to target with a therapeutic: biologics
Patient stratification is key for Precision Medicine at the cell level
T-cell-, B-cell-, cytokine-, and antibody-mediated disease
ADGs degradation
9:45 AM – 10:10 AM
Picasso Ballroom
H. Jeffrey Wilkins, MD, Abcuro
Inflammation plays a role in activating the immune system
In the days of medical school: inhibition of cytokines
Today: specificity to target cells for depletion
Specific biomarkers for response to therapies
cell types by mutations, physiology, and causality in the inflammation area: we know why they have inflammation; we need to learn interventions for inflammation
Asthma in the 40s as an inflammatory disease
assess treatment of inflammation
Neuro-inflammation – not well understood
What is the cause that drives the disease: understanding encephalitis?
Niranjana Nagarajan, PhD, MGB Ventures
Biology is the driver not AI
depletion of cells in a certain stage
Translation from one disease to other diseases in the case of cell therapy potential – an active area; companies are trying solutions
Daniel Kuritzkes, MD, Chief, Division of Infectious Diseases, Brigham and Women’s Hospital; Harriet Ryan Albee Professor of Medicine, Harvard Medical School
Pathways in vaccine design
How to educate the population on vaccines
approaches other than vaccines
Alec Stranahan, PhD, SMid-Cap Biotech Analyst, BofA Global Research
Vaccine approval
Next generation vaccines
Panelist
Stéphane Bancel, CEO, Moderna
Vaccine design: long-term vaccine protection weakens in the aged population
data on the role of AVV in Multiple Sclerosis
working in the US vs. France and the Netherlands in Europe – different approaches
Vaccine for HIV
Vaccine was approved last year for children; pharmacy shortages
Flu season: three times more vaccines in use
Employees run vaccine clinics on site
Vaccines not related to COVID
Misinformation from COVID vaccine
5% of those hospitalized with COVID had received the booster
Combination vaccines for high risk populations
Healthcare providers need to be involved in education; many do not have an interest in education on vaccines
Local stories from vaccine manufacturers and developers to be used in community education
Individual DNA cancer cell signature of the cancer – data over time for development of a cancer vaccine; many more tumor types are needed
Checkpoints in early disease
biopsies are too expensive
Side effect studies going on
monotherapy vs. immunotherapy – costs involved
Naive virus to get into the liver; two diseases – cassettes for dose management
Recombinant antibody technology from the 1970s
PD-1
COVID – was not in the development plan – designed in silico in two weeks – no change after this design
Large/SMid-Cap Biotech and Major Pharma Analyst, BofA Global Research
TCM
CAR-T
advantages of each cell type
Angela Shen, MGB Innovation
CAR-T
What would be a quick breakthrough?
Panelists
Jeff Bluestone, PhD, CEO & President, Sonoma Biotherapeutics
Cell therapy for cell depletion – elimination of B-cells, as in its role in Multiple Sclerosis
Working with regulatory T-cells
Population of cells to study: T-cells are master regulators in multiple ways – they produce metabolic factors and set the infection tone in the activation of other cells
Biology of cell: RNA, DNA
TCR – target antigens in the tissues they reside in, for immune suppression
Finding the right peptide that binds to a certain MHC
CAR-T – recognize the cells in the local milieu, as in patients with RA as an autoimmune disease
Clinical models ascertain the cell types involved, leading to clinical trial insights and then to therapies along a decision tree
recent data on CAR-T immune response in the allogeneic setting, for potential use in neurodegenerative diseases
patients and companies overreact to immune therapy: patients and science vs. hype
next generation: POC,
Gene therapy specificities vs Cell therapies – each approach will develop a different drug
FDA and NIH held a meeting in 11/2023 on regulation of cell therapy – on stability and their approach to immune disease, where there are already several drugs
approval challenges for companies
Price – cell therapy is too expensive a treatment
Chad Cowan, PhD, Executive Advisor, Century Therapeutics
use Natural Killer cells to elicit long-term immune response, T-cells,
active beta cells; regulatory monitoring use
DM – regulatory cells made from stem cells
mission: durable response
Clinical issues – no easy path for treatment with a cell line and bioreactors; modalities less similar to autologous cells
Lessons from CAR-T in oncology are now transferred to immune disease
Cell therapy requires technologies to mature: multiple modalities and multiple drugs, not one cell therapy for all immune diseases
Stability of the therapy vs rejection by immune system
FDA: making cells is not the same as making drugs – a higher level of scrutiny for cell therapy
SYNTHETIC BIOLOGY on B-cells for future breakthrough
Samantha Singer, President & CEO, Abata Therapeutics
Immune responses involve many cell types in many diseases
Oncology: the use of T-cells as tissue residents staying in tissue for a long time
Specific biology of the disease and regulatory cell receptors; optimizing TCR presentation in the pathology of tissue-resident phenotypes
activate in nervous system or in pancreas – intersection of cell biology with disease biology
Market feasibility – scaling, biology, pathology for reimbursement
where antibody therapy may be appropriate, cell therapy is only a novel option
Cell manufacturing requires optimization of process, companies commercializing across all cell types
comprehensive approach for systemic immune suppression
healthy tissue vs. diseased tissue with cell therapy – implanted cells as residents in tissue
clinical data on product performance and on the biology reactions
Jose Florez, MD, PhD, Physician-in-Chief and Chair, Department of Medicine, Massachusetts General Hospital; Professor, Harvard Medical School
40 minutes to deal with big needs collected from 100 faculty members at Harvard Medical School
The ten issues on one slide
How could we use compute to distill data?
Bruce Levy, MD, Physician-In-Chief and Co-Chair, Department of Medicine, Brigham and Women’s Hospital; Parker B. Francis Professor of Medicine, Harvard Medical School
Transformation from the Present to the Future
identifying the needs
Infectious diseases: Rapid diagnostics need
resistance to antibiotics and endogenous metabolic reactions
Global pandemics of diseases eradicated in the past: pox, polio
Improving health in geriatrics: it is not the overall population that is growing but the geriatric population. Beyond age 60, each citizen will use 1 or 2 physicians
7,000 diseases; genetic diseases require integration and innovations in therapy
Innovations in Home devices
Panelists
Rox Anderson, MD, Lancer Endowed Chair of Dermatology; Director, Wellman Center for Photomedicine, MGH; Professor of Dermatology, HMS
Access to data across institutions
Nicole Davis, PhD, Biomedical Communications
We asked 104 expert practitioners; the collected content was analyzed
early detection
keeping the Human brain healthy
geriatric medicine, aging and its compound effects on the health system, and health equity
Bias in Data
Jean-François Formela, MD, Partner, Atlas Venture
genetic information used in therapeutics design
Steven Greenberg, MD, Neurologist, Brigham and Women’s Hospital; Professor of Neurology, Harvard Medical School
Human genome completed in 1999; human genetic diseases were discovered – learn about the disease at the tissue level with genomics and a systems approach
Pathogenic drivers, system integration by therapeutic approaches to pathways; multiple cytokines in allergic reactions; Pfizer had two biomarkers and therapies for the systems biology of disease
Pediatrics has its own challenges
Imaging medicine
Living longer at a lower cost – HOW TO ACHIEVE THAT?
growth abnormality in children: body growth and skull shrinkage
John Lepore, MD, CEO, ProFound Therapeutics; CEO-Partner, Flagship Pioneering
Pathways: targeting therapy to patients in a systems biology approach
The systems biology database has missing components not included in the Human Genome Project – completion of the data
Definition of endpoints needs revisiting
Identifying specific populations vs getting quickly to market
Diseases of aging: muscle diseases – how to promote improvement in muscle mass
CONCLUSIONS
Gray Tsunami
Brain health
Cancer treatment paradigm shift
Fibrosis in many diseases
Infectious disease in a changing world
Equity in HC
Clinical Data is VAST
Systemic view of Human disease
New approaches to Psychiatry
Rare disease treatment needs a charter
In addition,
new generation of pain treatment
skin treatment new drugs
Chronic disease: improve treatment and prevention.
Tazeen Ahmad, SMid-Cap Biotech Analyst, BofA Global Research
FDA sets criteria – How is that done?
Autoimmune disease therapies – What is in the horizon?
Paul Anderson, MD, PhD, Chief Academic Officer, Mass General Brigham
drug development
drug pricing in Europe
New book
RA needs more medicines
UNCONTROLLED SPREAD
In Uncontrolled Spread, a New York Times Best Seller, Dr. Scott Gottlieb identifies the reasons why the US was caught unprepared for the pandemic and how the country can improve its strategic planning to prepare for future viral threats.
Panelist
Scott Gottlieb, MD, Physician; Former Commissioner, Food and Drug Administration (2017-2019)
FDA approved the 1st gene therapy during his tenure
Price of drugs: efficacious vs. time to develop
competitors in the marketplace are there for market share
New book: episodes at the FDA, the approval process at the FDA, the first-in-class gene therapy approved – a special moment. Back in the 1980s, that era translated to antibodies and to pioneering T-cell work.
The publisher worried it would not sell very well
FDA had concerns about manufacturing aspects
In 2024 we understand Biologics on novel platforms
Worries that Medicare will not reimburse and cover the new therapies: cell therapy
Statin approvals had a known, very large market vs. cell therapy, where it is not known which cancer patients will benefit
Black box involved in autoimmune disease; studies bring exciting results
In 2018 – the need arose for early approval of drugs in AD (amyloid plaque) – a change in thinking, and controversial
In early 2020, a change in the settings of clinical trials; placebo is no longer the only way for randomized trials
Approval for an AD drug vs. other indications – the process is different (DMD is a case to think about)
AI & NLP: Train on data of 10,000 lesions
FDA chose not to regulate AI; the physician is in the middle
Who is wrong: ChatGPT or the clinician?
A dataset on a gene may represent NEW biologies that physicians have not seen before
Data validation on medical devices and their approval after regulating them
Diagnostics tests: Validation Panels are involved
Regulate on input data vs. output data, and validate the input data
Platforms are needed for regulation of AI involvement in the drug discovery and the drug approval process
Investment in these platforms will be done by whom? It will come
Framework for AI at FDA: Regulatory gray data for applications and standards for output – not a novel regulatory concept
If AI will be applied widely, I/O accuracy is a must have
may be achievable soon?
FDA is an evolutionary organization in its decision process, NOT a REVOLUTIONARY organization. Simulation work started in 2003, with 40 people doing that then.
Recently, a new team in the Agency has been working on safety with tools and technologies that are common in science – approvals for drug labels and off-label uses that 20 years ago would not have happened
Tolerance for higher prices is to support the private sector that brings innovative drugs to market
Chief Medical & Digital Officer, UC San Diego Health
Kevin Mahoney
CEO, University of Pennsylvania Health System
Niall Martin, PhD
CEO, Artios Pharma
James Mawson
CEO, Global Corporate Venturing
Mark McKenna
Chairman & CEO, Mirador Therapeutics
Jane Moran
Chief Information and Digital Officer, Mass General Brigham
William Morris, MD
Chief Medical Information Officer, Google Cloud
Rohan Palekar
CEO, 89bio
Raju Prasad, PhD
Chief Financial Officer, CRISPR Therapeutics
Xiayang Qiu, PhD
CEO, Regor Therapeutics
Harith Rajagopalan, MD, PhD
CEO & Co-Founder, Fractyl Health
Shiv Rao, MD
CEO & Founder, Abridge
Kerry Ressler, MD, PhD
Chief Scientific Officer, McLean Hospital; Professor of Psychiatry, Harvard Medical School
Matthew Roden, PhD
President & CEO, Aktis Oncology
Sandi See Tai, MD
Chief Development Officer, Lexeo Therapeutics
Samantha Singer
President & CEO, Abata Therapeutics
Joanne Smith-Farrell, PhD
CEO & Director, Be Biopharma
Emma Somers-Roy
Chief Investment Officer, Mass General Brigham
Adam Steensberg, MD
President & CEO, Zealand Pharma
Tadaaki Taniguchi, MD, PhD
Chief Medical Officer, Astellas Pharma
Elsie Taveras, MD
Chief Community Health & Health Equity Officer, Mass General Brigham; Conrad Taff Endowed Chair and Professor of Pediatrics, Harvard Medical School
Jo Viney, PhD
Cofounder, President & CEO, Seismic Therapeutic
Ron Walls, MD
Chief Operating Officer, Mass General Brigham; Neskey Family Professor of Emergency Medicine, Harvard Medical School
Christophe Weber
President & CEO, Takeda
Fraser Wright, PhD
Chief Gene Therapy Officer, Kriya Therapeutics
Speakers
Anna Åsberg
Vice President, AstraZeneca Pharmaceuticals
Tazeen Ahmad
SMid-Cap Biotech Analyst, BofA Global Research
Jessica Allegretti, MD
Director, Crohn’s and Colitis Center, Brigham and Women’s Hospital; Associate Professor of Medicine, Harvard Medical School
Rox Anderson, MD
Lancer Endowed Chair of Dermatology; Director, Wellman Center for Photomedicine, MGH; Professor of Dermatology, HMS
Katherine Andriole, PhD
Director of Academic Research and Education, Mass General Brigham Data Science Office; Associate Professor, Harvard Medical School
Caroline Apovian, MD
Co-Director, Center for Weight Management and Wellness, Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School
Vanita Aroda, MD
Director, Diabetes Clinical Research, Brigham and Women’s Hospital; Associate Professor, Harvard Medical School
Natalie Artzi, PhD
Associate Professor of Medicine, Brigham and Women’s Hospital & Harvard Medical School
John Bishai, PhD
Global Healthcare Investment Banking, BofA Securities
David Blumenthal, MD
Professor of Practice of Public Health and Health Policy, Harvard TH Chan School of Public Health; Research Fellow, Harvard Kennedy School of Government; Samuel O. Thier Professor of Medicine, Emeritus, Harvard Medical School
Giles Boland, MD
President, Brigham and Women’s Hospital and Brigham and Women’s Physicians Organization; Philip H. Cook Distinguished Professor of Radiology, Harvard Medical School
Andrew Bressler
Washington Healthcare Policy Analyst, BofA Global Research
James Brink, MD
Enterprise Chief, Radiology, Mass General Brigham; Juan M. Taveras Professor of Radiology, Harvard Medical School
David Brown, MD
President, Academic Medical Centers, Mass General Brigham; Mass General Trustees Professor of Emergency Medicine, Harvard Medical School
Tyler Bryson
Corporate Vice President, US Health & Public Sector Industries, Microsoft Corporation
Jonathan Carlson, MD, PhD
Director of Chemistry, Center for Systems Biology, Massachusetts General Hospital; Assistant Professor of Medicine, Harvard Medical School
Miceal Chamberlain
President of Massachusetts, Bank of America
Moitreyee Chatterjee-Kishore, PhD
Head of Development, Immuno-Oncology and Cancer Cell Therapy, Astellas Pharma Inc.
Dong Feng Chen, MD, PhD
Associate Scientist, Massachusetts Eye and Ear; Associate Professor, Harvard Medical School
Jasmeer Chhatwal, MD, PhD
Associate Neurologist, Massachusetts General Hospital; Associate Professor of Neurology, Harvard Medical School
E. Antonio Chiocca, MD, PhD
Chair, Department of Neurosurgery, Brigham and Women’s Hospital; Harvey W. Cushing Professor of Neurosurgery, Harvard Medical School
Bryan Choi, MD, PhD
Associate Director, Center for Brain Tumor Immunology and Immunotherapy, Massachusetts General Hospital; Assistant Professor of Neurosurgery, Harvard Medical School
Deepak Chopra, MD
Founder, The Chopra Foundation
Yolonda Colson, MD, PhD
Chief, Division of Thoracic Surgery, Massachusetts General Hospital; Hermes C. Grillo Professor of Surgery, Harvard Medical School
Chad Cowan, PhD
Executive Advisor, Century Therapeutics
Cristina Cusin, MD
Director, MGH Ketamine Clinic and Psychiatrist, Depression Clinical and Research Program, Massachusetts General Hospital; Associate Professor in Psychiatry, Harvard Medical School
Nicole Davis, PhD
Biomedical Communications
Marcela del Carmen, MD
President, Massachusetts General Hospital and Massachusetts General Physicians Organization (MGPO); Executive Vice President, Mass General Brigham; Professor of Obstetrics, Gynecology and Reproductive Biology, Harvard Medical School
Gerard Doherty, MD
Surgeon-in-Chief, Mass General Brigham Cancer; Surgeon-in-Chief, Brigham and Women’s Hospital; Moseley Professor of Surgery, Harvard Medical School
Liz Everett Krisberg
Head of Bank of America Institute
Maurizio Fava, MD
Chair, Department of Psychiatry, Massachusetts General Hospital; Slater Family Professor of Psychiatry, Harvard Medical School
Keith Flaherty, MD
Director of Clinical Research, Mass General Cancer Center; Professor of Medicine, Harvard Medical School
Jose Florez, MD, PhD
Physician-in-Chief and Chair, Department of Medicine, Massachusetts General Hospital; Professor, Harvard Medical School
Jean-François Formela, MD
Partner, Atlas Venture
Fritz François, MD
Executive Vice President and Vice Dean, Chief of Hospital Operations, NYU Langone Health
Joanna Gajuk
Health Care Facilities and Managed Care Analyst, BofA Global Research
Jason Gerberry
Specialty Pharma and SMid-Cap Biotech Analyst, BofA Global Research
Gad Getz, PhD
Director of Bioinformatics, Krantz Center for Cancer Research and Department of Pathology; Paul C. Zamecnik Chair in Cancer Research, Mass General Cancer Center; Professor of Pathology, Harvard Medical School
Alexandra Golby, MD
Neurosurgeon; Director of Image-guided Neurosurgery, Brigham and Women’s Hospital; Professor of Neurosurgery, Professor of Radiology, Harvard Medical School
Allan Goldstein, MD
Chief of Pediatric Surgery, Massachusetts General Hospital; Surgeon-in-Chief, Mass General for Children; Marshall K. Bartlett Professor of Surgery, Harvard Medical School
Scott Gottlieb, MD
Physician; Former Commissioner, Food and Drug Administration (2017-2019)
David Grayzel, MD
Partner, Atlas Venture
Steven Greenberg, MD
Neurologist, Brigham and Women’s Hospital; Professor of Neurology, Harvard Medical School
Steven Grinspoon, MD
Chief, Metabolism Unit, Massachusetts General Hospital; Professor of Medicine, Harvard Medical School
Daphne Haas-Kogan, MD
Chief, Enterprise Radiation Oncology, Mass General Brigham; Professor, Harvard Medical School
Roger Hajjar, MD
Director, Gene & Cell Therapy Institute, Mass General Brigham
John Hanna, MD, PhD
Associate Professor, Brigham and Women’s Hospital & Harvard Medical School
Yvonne Hao
Secretary of Economic Development, Commonwealth of Massachusetts
Nobuhiko Hata, PhD
Director, Surgical Navigation and Robotics Laboratory, Brigham and Women’s Hospital; Professor of Radiology, Harvard Medical School
Maura Healey
Governor of the Commonwealth of Massachusetts
Elizabeth Henske, MD
Director, Center for LAM Research and Clinical Care, Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School
Leigh Hochberg, MD, PhD
Director of Neurotechnology and Neurorecovery, Massachusetts General Hospital; Senior Lecturer on Neurology, Harvard Medical School
Daphne Holt, MD, PhD
Director of the Resilience and Prevention Program, Massachusetts General Hospital; Associate Professor of Psychiatry, Harvard Medical School
Susan Huang, MD
EVP, Chief Executive, Providence Clinical Network, Providence Southern CA
Keith Isaacson, MD
Director of Minimally Invasive Gynecologic Surgery and Infertility, Newton Wellesley Hospital; Associate Professor of Obstetrics, Gynecology and Reproductive Biology, Harvard Medical School
Ole Isacson, MD-PhD
Founding Director, Neuroregeneration Research Institute, McLean Hospital; Professor of Neurology and Neuroscience, Harvard Medical School
Haim Israel
Head of Global Thematic Investing Research, BofA Global Research
Farouc Jaffer, MD, PhD
Director, Coronary Intervention, Massachusetts General Hospital; Associate Professor of Medicine, Harvard Medical School
Russell Jenkins, MD, PhD
Krantz Family Center for Cancer Research, Massachusetts General Hospital; Mass General Cancer Center, Center for Melanoma; Assistant Professor of Medicine, Harvard Medical School
Hadine Joffe, MD
Executive Director of the Connors Center for Women’s Health and Gender Biology; Interim Chair, Department of Psychiatry, Brigham and Women’s Hospital; Paula A. Johnson Professor of Psychiatry in the Field of Women’s Health, Harvard Medical School
Benjamin Kann, MD
Assistant Professor, Brigham and Women’s Hospital & Harvard Medical School
Tatsuo Kawai, MD, PhD
Director of the Legorreta Center for Clinical Transplantation Tolerance, A.Benedict Cosimi Chair in Transplant Surgery, Massachusetts General Hospital; Professor of Surgery, Harvard Medical School
Albert Kim, MD
Assistant Physician, Mass General Cancer Center; Assistant Professor, Harvard Medical School
Roger Kitterman
Senior Vice President, Ventures and Business Development & Licensing, Mass General Brigham; Managing Partner, Mass General Brigham Ventures
Lotte Bjerre Knudsen, DMSc
Chief Scientific Advisor, Novo Nordisk
Vesela Kovacheva, MD, PhD
Director of Translational and Clinical Research, Mass General Brigham; Assistant Professor of Anesthesia, Harvard Medical School
Jonathan Kraft
President, The Kraft Group; Board Chair, Massachusetts General Hospital
John Krystal, MD
Chair, Department of Psychiatry, Yale School of Medicine
Daniel Kuritzkes, MD
Chief, Division of Infectious Diseases, Brigham and Women’s Hospital; Harriet Ryan Albee Professor of Medicine, Harvard Medical School
Bruce Levy, MD
Physician-In-Chief and Co-Chair, Department of Medicine, Brigham and Women’s Hospital; Parker B. Francis Professor of Medicine, Harvard Medical School
Katherine Liao, MD
Associate Physician, Department of Rheumatology, Inflammation, and Immunity, Brigham and Women’s Hospital; Associate Professor of Medicine and Biomedical Informatics, Harvard Medical School
David Louis, MD
Enterprise Chief, Pathology, Mass General Brigham; Benjamin Castleman Professor of Pathology, Harvard Medical School
Tim Luker, PhD
VP, Ventures & West Coast Head, Eli Lilly
Andrew Luster, MD, PhD
Chief, Division of Rheumatology, Allergy and Immunology; Director, Center for Immunology and Inflammatory Diseases, Massachusetts General Hospital; Persis, Cyrus and Marlow B. Harrison Professor of Medicine, Harvard Medical School
Allen Lutz
Health Care Services Analyst, BofA Global Research
Calum MacRae, MD, PhD
Vice Chair for Scientific Innovation, Department of Medicine, Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School
Joren Madsen, MD, PhD
Director, MGH Transplant Center; Paul S. Russell/Warner-Lambert Professor of Surgery, Harvard Medical School
Faisal Mahmood, PhD
Associate Professor, Brigham and Women’s Hospital & Harvard Medical School
Peter Marks, MD, PhD
Director, Center for Biologics Evaluation and Research, FDA
Marcela Maus, MD, PhD
Director of Cellular Therapy and Paula O’Keeffe Chair in Cancer Research, Krantz Family Center for Cancer Research and Mass General Cancer Center; Associate Director, Gene and Cell Therapy Institute, Mass General Brigham; Associate Professor, Harvard Medical School
Thorsten Mempel, MD, PhD
Associate Director, Center for Immunology and Inflammatory Diseases, Massachusetts General Hospital; Professor of Medicine, Harvard Medical School
Rebecca Mishuris, MD
Chief Medical Information Officer, Mass General Brigham; Member of the Faculty, Harvard Medical School
Pradeep Natarajan, MD
Director of Preventive Cardiology, Paul & Phyllis Fireman Endowed Chair in Vascular Medicine, Massachusetts General Hospital; Associate Professor of Medicine, Harvard Medical School
Nawal Nour, MD
Chair, Department of Obstetrics and Gynecology, Brigham and Women’s Hospital; Associate Professor, Kate Macy Ladd Professorship, Harvard Medical School
Heather O’Sullivan, MS, RN, AGNP
President, Mass General Brigham Healthcare at Home
Anne Oxrider
Senior Vice President, Benefits Executive, Bank of America
Claire-Cecile Pierre, MD
Vice President, Community Health Programs, Mass General Brigham; Instructor in Medicine, Harvard Medical School
Richard Pierson III, MD
Scientific Director, Center for Transplantation Sciences, Massachusetts General Hospital; Professor of Surgery, Harvard Medical School
Mark Poznansky, MD, PhD
Director, Vaccine and Immunotherapy Center, Massachusetts General Hospital; Steve and Deborah Gorlin MGH Research Scholar; Professor of Medicine, Harvard Medical School
Yakeel Quiroz, PhD
Director, Familial Dementia Neuroimaging Lab and Director, Multicultural Alzheimer’s Prevention Program, Massachusetts General Hospital; Paul B. and Sandra M. Edgerley MGH Research Scholar; Associate Professor, Harvard Medical School
Heidi Rehm, PhD
Chief Genomics Officer, Massachusetts General Hospital; Professor of Pathology, Harvard Medical School
Leonardo Riella, MD, PhD
Medical Director of Kidney Transplantation, Massachusetts General Hospital; Harold and Ellen Danser Endowed Chair in Transplantation, Harvard Medical School
Jorge Rodriguez, MD
Clinician-investigator, Brigham and Women’s Hospital; Assistant Professor, Harvard Medical School
Adam Ron
Health Care Facilities and Managed Care Analyst, BofA Global Research
David Ryan, MD
Physician-in-Chief, Mass General Brigham Cancer; Professor of Medicine, Harvard Medical School
Michael Ryskin
Life Science Tools & Diagnostics Analyst, BofA Global Research
Alkesh Shah
Head of US Equity Software Research, BofA Global Research
Angela Shen, MD
Vice President, Strategic Innovation Leaders, Mass General Brigham Innovation
Gregory Simon
President, Simonovation
Prabhjot Singh, MD, PhD
Senior Advisor, Strategic Initiatives, Peterson Health Technology Institute
Brendan Singleton
Healthcare Equity Capital Markets, BofA Securities
Caroline Sokol, MD, PhD
Assistant Physician, Massachusetts General Hospital; Assistant Professor, Harvard Medical School
Daniel Solomon, MD
Matthew H. Liang Distinguished Chair in Arthritis and Population Health, Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School
Scott Solomon, MD
Director, Clinical Trials Outcomes Center; Edward D. Frohlich Distinguished Chair in Cardiovascular Pathophysiology, Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School
Fatima Cody Stanford, MD
Obesity Medicine Physician Scientist, Massachusetts General Hospital; Associate Professor of Medicine and Pediatrics, Harvard Medical School
Shannon Stott, PhD
Associate Investigator, Krantz Family Center for Cancer Research and Mass General Cancer Center; d’Arbeloff Research Scholar, Massachusetts General Hospital; Harvard Medical School
Alec Stranahan, PhD
SMid-Cap Biotech Analyst, BofA Global Research
Marc Succi, MD
Executive Director, Mass General Brigham MESH Incubator; Associate Chair of Innovation & Commercialization, Mass General Brigham Radiology; Assistant Professor, Harvard Medical School
Guillermo Tearney, MD, PhD
Principal Investigator, Wellman Center for Photomedicine, Massachusetts General Hospital; Remondi Family Endowed MGH Research Institute Chair; Professor of Pathology, Harvard Medical School
David Ting, MD
Associate Clinical Director for Innovation, Mass General Cancer Center; Associate Professor of Medicine, Harvard Medical School
Raul Uppot, MD
Interventional Radiologist, Massachusetts General Hospital; Associate Professor, Harvard Medical School
Chris Varma, PhD
Co-founder, Chairman & CEO, Frontier Medicines
Kaveeta Vasisht, MD, PharmD
Associate Commissioner, Women’s Health, U.S. Food and Drug Administration
Alexandra-Chloé Villani, PhD
Investigator, Massachusetts General Hospital; Assistant Professor, Harvard Medical School
Kate Walsh
Secretary of Health and Human Services, State of Massachusetts
David Walt, PhD
Professor of Pathology, Brigham and Women’s Hospital; Hansjörg Wyss Professor of Biologically Inspired Engineering, Harvard Medical School
Eight Subcellular Pathologies driving Chronic Metabolic Diseases – Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics: Impact on Pharmaceuticals in Use
In this curation we wish to present two breaking through goals:
Goal 1:
Exposition of a new direction of research leading to a more comprehensive understanding of the Metabolic Dysfunctional Diseases implicated in the emergence of the two leading causes of human mortality in the world in 2023: (a) Cardiovascular Diseases, and (b) Cancer
Goal 2:
Development of Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics for these eight subcellular causes of chronic metabolic diseases. It is anticipated that this will have an impact on the future of the Pharmaceuticals to be used – a change from the current treatment protocols for Metabolic Dysfunctional Diseases.
According to Dr. Robert Lustig, MD, an American pediatric endocrinologist and Professor Emeritus of Pediatrics in the Division of Endocrinology at the University of California, San Francisco, where he specialized in neuroendocrinology and childhood obesity, there are eight subcellular pathologies that drive chronic metabolic diseases.
These eight subcellular pathologies cannot be measured at the present time.
In this curation we will attempt to explore methods of measurement for each of these eight pathologies by harnessing the promise of the emerging field known as Bioelectronics.
Unmeasurable eight subcellular pathologies that drive chronic metabolic diseases
Glycation
Oxidative Stress
Mitochondrial dysfunction [beta-oxidation Ac CoA malonyl fatty acid]
Insulin resistance/sensitivity [more important than BMI], known as a driver of cancer development
Membrane instability
Inflammation in the gut [mucin layer and tight junctions]
Epigenetics/Methylation
Autophagy [AMPKbeta1 improvement in health span]
Diseases that are not Diseases: no drugs for them, only diet modification will help
Image source: Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease
These eight Subcellular Pathologies driving Chronic Metabolic Diseases are becoming our focus for exploration of the promise of Bioelectronics for two pursuits:
Will Bioelectronics be deemed helpful in measurement of each of the eight pathological processes that underlie and that drive the chronic metabolic syndrome(s) and disease(s)?
IF we are able to suggest new measurements for currently unmeasurable health-harming processes, THEN we will attempt to conceptualize new therapeutic targets and new modalities for therapeutics delivery – WE ARE HOPEFUL
In the Bioelectronics domain we are inspired by the work of the following three research sources:
Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and Tufts Center for Regenerative and Developmental Biology. Wikipedia
THE VOICE of Dr. Justin D. Pearlman, MD, PhD, FACC
PENDING
THE VOICE of Stephen J. Williams, PhD
Ten Takeaway Points of Dr. Lustig’s talk on the role of diet in the incidence of Type II Diabetes
25% of US children have fatty liver
Type II diabetes can manifest from fatty liver, with 151 million people worldwide affected, rising to 568 million in 7 years
A common myth is that diabetes is due to an overweight condition driving the metabolic disease
There is a trend of ‘lean’ diabetes or diabetes in lean people, therefore body mass index not a reliable biomarker for risk for diabetes
Thirty percent of ‘obese’ people just have high subcutaneous fat; the visceral fat is more problematic
There are people who are ‘fat’ but insulin sensitive while having growth hormone receptor defects. This points to other issues related to metabolic state beyond insulin, potentially the insulin-like growth factors
At any BMI some patients are insulin sensitive while some resistant
Visceral fat accumulation may be more due to chronic stress condition
Fructose can decrease liver mitochondrial function
A methionine and choline deficient diet can lead to rapid NASH development
Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriage and in vitro fertilization failure. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates within individuals of the same age can vary dramatically.
Researchers described a technique combining genomic sequencing with machine-learning methods to predict the possibility a woman will undergo a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists were able to examine genetic samples of patients using a technique called “whole exome sequencing,” which allowed researchers to home in on the protein coding sections of the vast human genome. Then they created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.
As a result, the scientists were able to create a specific risk score based on a woman’s genome. The scientists also identified three genes—MCM5, FGGY and DDX60L—that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. So the report demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thus improving clinical diagnosis. The candidate genes and pathways that were identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
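To make the described approach concrete, here is a minimal, hypothetical sketch of this kind of workflow: encode exome-derived variant calls as per-gene features and train a classifier whose predicted probability serves as a per-patient aneuploidy risk score. The feature encoding, the synthetic data, the placeholder genes GENE_X and GENE_Y, and the random-forest model are illustrative assumptions; only MCM5, FGGY and DDX60L come from the report above.

```python
# Illustrative sketch only: the study's actual pipeline, features, and model are not
# specified here. The data below are synthetic and generated solely for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy exome-derived feature matrix: one row per patient, one column per candidate
# gene, value 1 if the patient carries a rare deleterious variant in that gene.
genes = ["MCM5", "FGGY", "DDX60L", "GENE_X", "GENE_Y"]  # GENE_X/Y are placeholders
X = rng.integers(0, 2, size=(200, len(genes)))

# Toy label: 1 = high observed egg-aneuploidy rate, loosely coupled to the first
# three genes so the example has signal to learn.
logit = 1.5 * X[:, 0] + 1.2 * X[:, 1] + 1.0 * X[:, 2] - 1.0
y = (rng.random(200) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit a classifier and report cross-validated discrimination (ROC AUC).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# A per-patient "risk score" is then the predicted probability of the high-risk class.
clf.fit(X, y)
print("Risk score, first patient:", clf.predict_proba(X[:1])[0, 1])
```

In the published work the inputs would be curated, annotated variants from real cohorts; the sketch only shows how a genome-derived risk score can be produced and cross-validated.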
Data powers AI. Good data can mean the difference between an impactful solution or one that never gets off the ground. Re-assess the foundational AI questions to ensure your data is working for, not against, you.
Innovation to Reality
The challenges of implementing AI are many. Avoid the common pitfalls with real-world case studies from leaders who have successfully turned their AI solutions into reality.
Harness What’s Possible at the Edge
With its potential for near instantaneous decision making, pioneers are moving AI to the edge. We examine the pros and cons of moving AI decisions to the edge, with the experts getting it right.
Generative AI Solutions
The use of generative AI to boost human creativity is breaking boundaries in creative areas previously untouched by AI. We explore the intersection of data and algorithms enabling collaborative AI processes to design and create.
Data powers AI. Good data can mean the difference between an impactful solution or one that never gets off the ground. Re-assess the foundational AI questions to ensure your data is working for, not against, you.
Data is the most under-valued and de-glamorized aspect of AI. Learn why shifting the focus from model/algorithm development to the quality of the data is the next, and most efficient, way to improve the decision-making abilities of AI.
Data labeling is key to determining the success or failure of AI applications. Learn how to implement a data-first approach that can transform AI inference, resulting in better models that make better decisions.
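As one hedged illustration of a data-first step (not the specific workflow presented in this session), the snippet below measures inter-annotator agreement with Cohen's kappa before any training and routes disagreements to adjudication rather than into the training set. The labels and items are invented for demonstration.

```python
# Minimal sketch of a "data-first" quality gate: check how well two annotators agree
# on labels and flag the disagreements for review before training a model on them.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["cat", "dog", "dog", "cat", "bird", "dog", "cat", "bird"]
annotator_b = ["cat", "dog", "cat", "cat", "bird", "dog", "dog", "bird"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~1.0 = strong agreement, <=0 = chance level

# Route disagreements to a third reviewer instead of silently training on them.
disagreements = [i for i, (a, b) in enumerate(zip(annotator_a, annotator_b)) if a != b]
print("Items needing adjudication:", disagreements)
```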
Question the status quo. Build stakeholder trust. These are foundational elements of thought leadership in AI. Explore how organizations can use their data and algorithms in ethical and responsible ways while building bigger and more effective systems.
Haniyeh Mahmoudian
Global AI Ethicist, DataRobot
Mainstage Break (10:35 a.m. – 11:05 a.m.)
Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.
With its next-generation machine learning models fueling precision medicine, the French biotech company Owkin captured the attention of the pharma industry. Learn how they did it and get tips to navigate the complex task of scaling your innovation.
Networking and refreshments for our live audience.
Innovation to Reality (11:05 a.m. – 12:30 p.m.)
The challenges of implementing AI are many. Avoid the common pitfalls with real-world case studies from leaders who have successfully turned their AI solutions into reality.
Deploying AI in real-world environments benefits from human input before and during implementation. Get an inside look at how organizations can ensure reliable results with the key questions and competing needs that should be considered when implementing AI solutions.
AI is evolving from the research lab into practical real world applications. Learn what issues should be top of mind for businesses, consumers, and researchers as we take a deep dive into AI solutions that increase modern productivity and accelerate intelligence transformation.
Getting AI to work 80% of the time is relatively straightforward, but trustworthy AI requires deployments that work 100% of the time. Unpack some of the biggest challenges that come up when eliminating the 20% gap.
Bali Raghavan
Head of Engineering, Forward
Lunch and Networking Break (12:30 p.m. – 1:30 p.m.)
Lunch served at the MIT Media Lab and a selection of curated content for those tuning in virtually.
Harness What’s Possible at the Edge (1:30 p.m. – 3:15 p.m.)
With its potential for near instantaneous decision making, pioneers are moving AI to the edge. We examine the pros and cons of moving AI decisions to the edge, with the experts getting it right.
To create sustainable business impact, AI capabilities need to be tailored and optimized to an industry or organization’s specific requirements and infrastructure model. Hear how customers’ challenges across industries can be addressed in any compute environment from the cloud to the edge with end-to-end hardware and software optimization.
Kavitha Prasad
VP & GM, Datacenter, AI and Cloud Execution and Strategy, Intel Corporation
Decision making has moved from the edge to the cloud before settling into a hybrid setup for many AI systems. Through the examination of key use-cases, take a deep dive into understanding the benefits and detractors of operating a machine-learning system at the point of inference.
Enable your organization to transform customer experiences through AI at the edge. Learn about the required technologies, including teachable and self-learning AI, that are needed for a successful shift to the edge, and hear how deploying these technologies at scale can unlock richer, more responsive experiences.
Reimagine AI solutions as a unified system, instead of individual components. Through the lens of autonomous vehicles, discover the pros and cons of using an all-inclusive AI-first approach that includes AI decision-making at the edge and see how this thinking can be applied across industry.
Raquel Urtasun
Founder & CEO, Waabi
Mainstage Break (3:15 p.m. – 3:45 p.m.)
Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.
Advances in machine learning are enabling artists and creative technologists to think about and use AI in new ways. Discuss the concept of creative AI and look at project examples from London’s art scene that illustrate the various ways creative AI is bridging the gap between the traditional art world and the latest technological innovations.
Luba Elliott
Curator, Producer, and Researcher, Creative AI
Generative AI Solutions (3:45 p.m. – 5:10 p.m.)
The use of generative AI to boost human creativity is breaking boundaries in creative areas previously untouched by AI. We explore the intersection of data and algorithms enabling collaborative AI processes to design and create.
Change the design problem with AI. The creative nature of generative AI enhances design capabilities, finding efficiencies and opportunities that humans alone might not conceive. Explore business applications including project planning, construction, and physical design.
Deep learning is data hungry technology. Manually labelled training data has become cost prohibitive and time-consuming. Get a glimpse at how interactive large-scale synthetic data generation can accelerate the AI revolution, unlocking the potential of data-driven artificial intelligence.
Danny Lange
SVP of Artificial Intelligence, Unity Technologies
Push beyond the typical uses of AI. Explore the nexus of art, technology, and human creativity through the unique innovation of kinetic data sculptures that use machines to give physical context and shape to data to rethink how we engage with the physical world.
Refik Anadol
CEO, RAS Lab; Lecturer, UCLA
Last Call with the Editors (5:10 p.m. – 5:20 p.m.)
Before we wrap day 1, join our last call with all of our editors to get their analysis on the day’s topics, themes, and guests.
Networking Reception (5:20 p.m. – 6:20 p.m.)
WEDNESDAY, MARCH 30
Evolving the Algorithms
What’s Next for Deep Learning
Deep learning algorithms have powered most major AI advances of the last decade. We bring you into the top innovation labs to see how they are advancing their deep learning models to find out just how much more we can get out of these algorithms.
AI in Day-To-Day Business
Many organizations are already using AI internally in their day-to-day operations, in areas like cybersecurity, customer service, finance, and manufacturing. We examine the tools that organizations are using when putting AI to work.
Making AI Work for All
As AI increasingly underpins our lives, businesses, and society, we must ensure that AI works for everyone – not just those represented in datasets, and not just 80% of the time. Examine the challenges and solutions needed to ensure AI works fairly, for all.
Envisioning the Next AI
Some business problems can’t be solved with current deep learning methods. We look at what’s around the corner: the new approaches and most revolutionary ideas propelling us toward the next stage in AI evolution.
Day 2: Evolving the Algorithms (9:00 a.m. – 5:25 p.m.)
What’s Next for Deep Learning (9:10 a.m. – 10:25 a.m.)
Deep learning algorithms have powered most major AI advances of the last decade. We bring you into the top innovation labs to see how they are advancing their deep learning models to find out just how much more we can get out of these algorithms.
Transformer-based language models are revolutionizing the way neural networks process natural language. This deep dive looks at how organizations can put their data to work using transformer models. We consider the problems that business may face as these massive models mature, including training needs, managing parallel processing at scale, and countering offensive data.
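As a minimal illustration of putting text data through a pretrained transformer (an assumption chosen for demonstration, not any organization's deployment described in this session), the snippet below classifies short business texts with an off-the-shelf checkpoint via the Hugging Face transformers library; the model name is simply a small public model.

```python
# Minimal sketch: apply a pretrained transformer to short business texts.
# The model and task are illustrative; production use would involve fine-tuning
# on the organization's own data.
from transformers import pipeline

# Downloads a small pretrained sentiment model on first use.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

tickets = [
    "The new release fixed our sync issues, great work.",
    "Support has not responded in three days and the outage continues.",
]
for ticket, result in zip(tickets, classifier(tickets)):
    print(result["label"], f"{result['score']:.2f}", "-", ticket)
```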
Critical thinking may be one step closer for AI by combining large-scale transformers with smart sampling and filtering. Get an early look at how AlphaCode’s entry into competitive programming may lead to a human-like capacity for AI to write original code that solves unforeseen problems.
As advanced AI systems gain greater capabilities in our search for artificial general intelligence, it’s critical to teach them how to understand human intentions. Look at the latest advancements in AI systems and how to ensure they can be truthful, helpful, and safe.
Mira Murati
SVP, Research, Product, & Partnerships, OpenAI
Mainstage Break (10:25 a.m. – 10:55 a.m.)
Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.
Good data is the bedrock of a self-service data consumption model, which in turn unlocks insights, analytics, personalization at scale through AI. Yet many organizations face immense challenges setting up a robust data foundation. Dive into a pragmatic perspective on abstracting the complexity and untangling the conflicts in data management for better AI.
Naveen Kamat
Executive Director, Data and AI Services, Kyndryl
AI in Day-To-Day Business (10:55 a.m. – 12:20 p.m.)
Many organizations are already using AI internally in their day-to-day operations, in areas like cybersecurity, customer service, finance, and manufacturing. We examine the tools that organizations are using when putting AI to work.
Effectively operationalized AI/ML can unlock untapped potential in your organization. From enhancing internal processes to managing the customer experience, get the pragmatic advice and takeaways leaders need to better understand their internal data to achieve impactful results.
Use AI to maximize reliability of supply chains. Learn the dos and don’ts to managing key processes within your supply chain, including workforce management, streamlining and simplification, and reaping the full value of your supply chain solutions.
Darcy MacClaren
Senior Vice President, Digital Supply Chain, SAP North America
Machine and reinforcement learning enable Spotify to deliver the right content to the right listener at the right time, allowing for personalized listening experiences that facilitate discovery at a global scale. Through user interactions, algorithms suggest new content and creators that keep customers both happy and engaged with the platform. Dive into the details of making better user recommendations.
Tony Jebara
VP of Engineering and Head of Machine Learning, Spotify
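For intuition only, here is a toy epsilon-greedy bandit that learns a listener's preferred content category from simulated interaction feedback. It is a didactic sketch of the reinforcement-learning idea described in the session above, not Spotify's production algorithms; the categories and preference probabilities are invented.

```python
# Toy epsilon-greedy bandit: recommend a category, observe simulated feedback,
# and update the running estimate of what the listener likes.
import random

random.seed(42)

categories = ["podcast", "indie", "jazz", "pop"]
true_like_prob = {"podcast": 0.2, "indie": 0.7, "jazz": 0.4, "pop": 0.5}  # hidden prefs

counts = {c: 0 for c in categories}     # times each category was recommended
rewards = {c: 0.0 for c in categories}  # accumulated positive feedback
epsilon = 0.1

for step in range(5000):
    if random.random() < epsilon:        # explore occasionally
        choice = random.choice(categories)
    else:                                # otherwise exploit the best estimate so far
        choice = max(categories, key=lambda c: rewards[c] / counts[c] if counts[c] else 0.0)
    feedback = 1.0 if random.random() < true_like_prob[choice] else 0.0  # simulated click
    counts[choice] += 1
    rewards[choice] += feedback

estimates = {c: round(rewards[c] / counts[c], 2) for c in categories if counts[c]}
print("Estimated preference per category:", estimates)
print("Most recommended:", max(counts, key=counts.get))  # converges to 'indie' here
```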
Lunch and Networking Break (12:20 p.m. – 1:15 p.m.)
Lunch served at the MIT Media Lab and a selection of curated content for those tuning in virtually.
Making AI Work for All (1:15 p.m. – 2:35 p.m.)
As AI increasingly underpins our lives, businesses, and society, we must ensure that AI works for everyone – not just those represented in datasets, and not just 80% of the time. Examine the challenges and solutions needed to ensure AI works fairly, for all.
Walk through the practical steps to map and understand the nuances, outliers, and special cases in datasets. Get tips to ensure ethical and trustworthy approaches to training AI systems that grow in scope and scale within a business.
Lauren Bennett
Group Software Engineering Lead, Spatial Analysis and Data Science, Esri
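One hedged way to operationalize mapping the outliers and special cases in a dataset, as in the session above (an illustrative choice, not necessarily the presenter's approach), is to score records with an isolation forest and send the most anomalous ones for human review.

```python
# Illustrative sketch: flag unusual records in a tabular dataset for human review.
# The data here are synthetic; IsolationForest is one of several reasonable detectors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # typical records
special = rng.normal(loc=6.0, scale=1.0, size=(10, 4))   # rare special cases
X = np.vstack([normal, special])

detector = IsolationForest(contamination=0.02, random_state=1).fit(X)
scores = detector.decision_function(X)        # lower = more anomalous
flagged = np.argsort(scores)[:10]             # the 10 most anomalous records
print("Indices flagged for review:", sorted(flagged.tolist()))
```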
Get an inside look at the long- and short-term benefits of addressing inequities in AI opportunities, ranging from educating the tech youth of the future to a 10,000-foot view on what it will take to ensure that equity is top of mind within society and business alike.
Public policies can help to make AI more equitable and ethical for all. Examine how policies could impact corporations and what it means for building internal policies, regardless of what government adopts. Identify actionable ideas to best move policies forward for the widest benefit to all.
Nicol Turner Lee
Director, Center for Technology Innovation, Brookings Institution
Mainstage Break (2:35 p.m. – 3:05 p.m.)
Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.
From the U.S. to China, the global robo-taxi race is gaining traction with consumers and regulators alike. Go behind the scenes with AutoX – a Level 4 driving technology company – and hear how it overcame obstacles while launching the world’s second and China’s first public, fully driverless robo-taxi service.
Jianxiong Xiao
Founder and CEO, AutoX
Envisioning the Next AI (3:05 p.m. – 4:50 p.m.)
Some business problems can’t be solved with current deep learning methods. We look at what’s around the corner: the new approaches and most revolutionary ideas propelling us toward the next stage in AI evolution.
The use of AI in finance is gaining traction as organizations realize the advantages of using algorithms to streamline and improve the accuracy of financial tasks. Step through use cases that examine how AI can be used to minimize financial risk, maximize financial returns, optimize venture capital funding by connecting entrepreneurs to the right investors, and more.
Sameena Shah
Managing Director, J.P. Morgan AI Research, JP Morgan Chase
In a study of simulated robotic evolution, it was observed that more complex environments and evolutionary changes to the robot’s physical form accelerated the growth of robot intelligence. Examine this cutting-edge research and decipher what this early discovery means for the next generation of AI and robotics.
Agrim Gupta
PhD Student, Stanford Vision and Learning Lab, Stanford University
Understanding human thinking and reasoning processes could lead to more general, flexible and human-like artificial intelligence. Take a close look at the research building AI inspired by human common-sense that could create a new generation of tools for complex decision-making.
Zenna Tavares
Research Scientist, Columbia University; Co-Founder, Basis
Look under the hood at this innovative approach to AI learning with multi-agent and human-AI interactions. Discover how bots work together and learn together through personal interactions. Recognize the future implications for AI, plus the benefits and obstacles that may come from this new process.
David Ferrucci was the principal investigator for the team that led IBM Watson to its landmark Jeopardy success, awakening the world to the possibilities of AI. We pull back the curtain on AI for a wide-ranging discussion on explicable models, and the next generation of human and machine collaboration creating AI thought partners with limitless applications.
From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery
Curator: Stephen J. Williams, PhD
Marc W. Kirschner*
Department of Systems Biology Harvard Medical School
Boston, Massachusetts 02115
With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.
As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewell Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.
That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.
Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.
High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.
Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.
You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.
Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.
5.1.5. Large-Scale Proteomics
While FITExP is based on protein expression regulation during apoptosis, a study of Ruprecht et al. showed that proteomic changes are induced both by cytotoxic and non-cytotoxic compounds, which can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
All in all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and in the analysis of complex and extensive data sets.
5.2. Genetic Approaches
Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has been long realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights in the compound’s mode of action.
Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.
5.2.1. Resistance Cloning
The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.
In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a follow-up study, they optimized their pipeline “DrugTargetSeqR” by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
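To make the clustering idea concrete, the toy sketch below (not the published DrugTargetSeqR pipeline; clone names, gene lists, and the counter-screen gene set are all hypothetical) ranks genes by how many independent resistant clones carry a mutation in them, after filtering out generic multidrug-resistance genes such as efflux transporters.

```python
# Minimal sketch: given per-clone variant calls, rank genes by how many
# independent resistant clones carry a mutation in them, after discarding
# genes associated with generic multidrug resistance.
from collections import Counter

# Hypothetical variant calls: clone id -> genes with protein-altering mutations
clone_mutations = {
    "cloneA": {"PLK1", "TP53", "KRAS"},
    "cloneB": {"PLK1", "BRCA2"},
    "cloneC": {"ABCB1", "MYC"},          # efflux-pump-mediated resistance
    "cloneD": {"PLK1", "EGFR"},
}

# Counter-screen list: genes linked to general, non-specific resistance
multidrug_resistance_genes = {"ABCB1", "ABCG2"}

gene_counts = Counter(
    gene
    for genes in clone_mutations.values()
    for gene in genes - multidrug_resistance_genes
)

# Genes hit in several independent clones are candidate drug targets
candidates = [(g, n) for g, n in gene_counts.most_common() if n >= 2]
print(candidates)  # e.g. [('PLK1', 3)]
```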
While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutations in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several clones resistant to KPT-9274 (an anticancer agent with an unknown target), and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].
When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].
While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.
5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens
When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ensure that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (e.g., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
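A minimal scoring sketch for such a screen is shown below, assuming per-sgRNA read counts from a treated and a control arm (all counts and gene names are invented for illustration): sgRNA abundances are normalized, log2 fold changes are computed, centered on non-targeting controls, and averaged per gene, so that positive scores flag suppressor interactions and negative scores flag synergistic ones.

```python
# Toy scoring of a chemogenetic interaction screen (all values hypothetical):
# compare sgRNA abundance after compound treatment vs. a DMSO control arm.
import numpy as np
import pandas as pd

counts = pd.DataFrame({
    "gene":    ["NAMPT", "NAMPT", "TP53", "TP53", "CTRL", "CTRL"],
    "control": [500, 450, 300, 320, 400, 410],    # sgRNA read counts, DMSO arm
    "treated": [1500, 1300, 150, 140, 390, 420],  # sgRNA read counts, drug arm
})

# Normalize to library size, add a pseudocount, compute per-sgRNA log2 fold change
for col in ("control", "treated"):
    counts[col + "_cpm"] = counts[col] / counts[col].sum() * 1e6
counts["lfc"] = np.log2((counts["treated_cpm"] + 1) / (counts["control_cpm"] + 1))

# Center on non-targeting control sgRNAs so that neutral genes score near zero
counts["lfc"] -= counts.loc[counts["gene"] == "CTRL", "lfc"].median()

# Aggregate sgRNAs to a per-gene interaction score
gene_scores = counts.groupby("gene")["lfc"].mean().sort_values(ascending=False)
print(gene_scores)  # NAMPT > 0 (suppressor), TP53 < 0 (synergistic) in this toy data
```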
An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high complexity, genome-wide shRNA library, with STF-118804 (4 rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].
The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].
Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haplo-insufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast, performed in a ~50% protein background created by heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of the phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, thereby showing that direct target identification through CRISPR-knockout screens is indeed possible [126].
An alternative strategy was employed by the Weissman lab, who combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits that had opposite effects in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, rationalizing that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the rigosertib genome-wide CRISPRi screen, and compared the focused-screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
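The profile-comparison step can be illustrated with a small, entirely synthetic example: each compound is represented by a vector of gene-level interaction scores from a focused screen, and compounds are compared by correlation, so that compounds sharing a target cluster together (the numbers below are illustrative, not data from the cited study).

```python
# Hedged sketch of comparing chemical-genetic profiles: compounds with the same
# target are expected to show correlated drug-gene interaction score vectors.
import pandas as pd

profiles = pd.DataFrame(
    {   # toy interaction scores over a focused sgRNA library (rows = genes)
        "rigosertib": [2.1, -1.5, 0.2, 1.8, -0.9],
        "ABT-751":    [1.9, -1.2, 0.1, 1.6, -1.1],  # another colchicine-site binder
        "paclitaxel": [-0.8, 1.4, 0.3, -1.0, 0.9],  # microtubule stabilizer
    },
    index=["TUBB", "MAD2L1", "CTRL1", "KIF11", "BUB1"],
)

# Pairwise Pearson correlation of drug-gene interaction profiles
similarity = profiles.corr(method="pearson")
print(similarity.round(2))
# rigosertib correlates strongly with ABT-751 and poorly with paclitaxel here
```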
From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.
SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY
Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence
The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.
1. Introduction
The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and a systems medicine point of view [2].

Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and more sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.
Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets to address various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has supported a wide range of cancer studies for over a decade. Additionally, there are independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques have been applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)
Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).
Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16].
2. Systems Biology in Cancer Research
Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes have been investigated, and, based on that, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand heterogeneous characteristics of the tumor and for cancer molecular subtyping. Due to its advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].
In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.
2.1. Biological Network Analysis for Biomarker Validation
The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Therefore, systems biology offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked with an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology has extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the limited coverage of pathway data needs to be considered: because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a substantial number of genes when restricted to pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans’. Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers choose gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
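As a concrete, much-simplified stand-in for the enrichment tools mentioned above (GSEA proper is rank-based; the sketch below is a basic over-representation test with hypothetical gene sets), a hypergeometric test asks whether a candidate gene list overlaps a pathway more than expected by chance.

```python
# Minimal over-representation test, a simplified stand-in for GSA tools such as
# DAVID or g:Profiler: is a candidate gene list enriched for a given pathway?
from scipy.stats import hypergeom

background_size = 20000          # genes considered in the experiment (assumed)
pathway = {"TP53", "MDM2", "CDKN1A", "ATM", "CHEK2", "BAX"}
candidates = {"TP53", "ATM", "CHEK2", "EGFR", "KRAS", "MYC", "BAX", "PTEN"}

overlap = len(pathway & candidates)
# P(X >= overlap) under the hypergeometric null distribution
p_value = hypergeom.sf(overlap - 1, background_size, len(pathway), len(candidates))
print(f"overlap={overlap}, p={p_value:.2e}")
```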
2.2. De Novo Construction of Biological Networks
While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and can guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico network construction and mining contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules having similar functions [77]. Because gene co-expression networks are based on expressional changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction that has driven the development of the network biology field [78]; a much-simplified version of this idea is sketched below. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, integration of these two types of data remains tricky. Ballouz et al. compared microarray and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to find disease-specific co-expressional gene modules. Thus, various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from the gene co-expression network when various data from different conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms acting on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or of different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylation, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on a particular experimental setup. For instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
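The sketch below illustrates the co-expression idea with simulated data and a hard correlation cutoff; it is a deliberately simplified relative of WGCNA, which instead uses soft thresholding and topological overlap.

```python
# Rough sketch of a correlation-based co-expression network on simulated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_samples, genes = 50, ["G1", "G2", "G3", "G4"]
base = rng.normal(size=n_samples)
expr = pd.DataFrame({
    "G1": base + rng.normal(scale=0.3, size=n_samples),   # co-regulated with G2
    "G2": base + rng.normal(scale=0.3, size=n_samples),
    "G3": rng.normal(size=n_samples),
    "G4": rng.normal(size=n_samples),
})

# Absolute Pearson correlation between all gene pairs
corr = expr.corr(method="pearson").abs()

# Keep edges above a hard correlation cutoff
edges = [(a, b, round(corr.loc[a, b], 2))
         for i, a in enumerate(genes)
         for b in genes[i + 1:]
         if corr.loc[a, b] > 0.6]
print(edges)  # expect a single edge between G1 and G2
```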
2.3. Network Based Machine Learning
A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks became suitable for integration into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering both node and edge values and the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. Meanwhile, in post-analysis integration, omics data are processed on their own before integration with a network; subsequently, the omics data and the networks are merged and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of various omics data types, a multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when machine learning approaches try to integrate two or more different data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and then integrate the heterogeneous data types [25,88].
In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.
3. Network-Based Learning in Cancer Research
As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.
3.1. Molecular Characterization with Network Information
Various network-based algorithms are used in genomics to quantify the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet. The algorithm uses a heat-diffusion model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms; a minimal propagation sketch is shown below. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by those two genes is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
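The propagation sketch below implements a generic random walk with restart on a toy gene network; it illustrates the diffusion idea behind HotNet-style methods rather than reproducing any published algorithm.

```python
# Illustrative network propagation (random walk with restart): mutation scores
# are smoothed over a gene network so that genes connected to many altered
# genes accumulate "heat".
import numpy as np

genes = ["A", "B", "C", "D", "E"]
adj = np.array([            # symmetric adjacency matrix of a toy gene network
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
W = adj / adj.sum(axis=0, keepdims=True)   # column-normalized transition matrix

p0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0])   # initial scores (e.g., mutation frequency)
p0 = p0 / p0.sum()

alpha, p = 0.5, p0.copy()                  # alpha = restart probability
for _ in range(100):                       # iterate to (approximate) convergence
    p = alpha * p0 + (1 - alpha) * W @ p

print(dict(zip(genes, p.round(3))))        # gene B gains heat via its mutated neighbors
```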
Furthermore, in transcriptome research, network information is used to measure pathway activity and applied to cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A benchmark study with pan-cancer data recently revealed that using network structure can yield better performance [57]. In conclusion, although some data loss is due to the incompleteness of biological networks, their integration has improved performance and increased interpretability in many cases.
3.2. Tumor Heterogeneity Study with Network Information
Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with mutually exclusive patterns. When these genomic alterations are projected onto pathways, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between genes can be essential for an organism. Therefore, models analyzing such alterations integrate network-based analysis [98].
In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor and immune reactions. For example, McGrail et al. identified a relationship between the DNA damage response proteins and immune cell infiltration in cancer, based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model that accounts for tumor purity to mine gene subnetworks beyond immune cell infiltration [102].
3.3. Drug Target Identification with Network Information
In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug–target protein network with genomic and chemical information. The proposed approaches investigated such drug–target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug targets and drug response by integrating networks with chemical and multi-omic datasets. In a recent survey study, Chen et al. compared 13 computational methods for drug response prediction and found that gene expression profiles are crucial information for drug response prediction [105].
Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models and aim at discovering potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple NGS screening datasets [107]. Such synthetic lethality datasets and related drug datasets can be integrated to combine anticancer therapeutic strategies with non-cancer drug repurposing.
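One very simple network view of repurposing is target-set overlap: a non-oncology drug whose known targets largely coincide with those of an established cancer drug becomes a candidate for follow-up. The sketch below uses Jaccard similarity on invented drug-target sets purely for illustration.

```python
# Toy sketch of a drug-target network view of repurposing; drug names and
# target sets are illustrative only.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

drug_targets = {
    "cancer_drug_X":    {"EGFR", "ERBB2"},
    "noncancer_drug_Y": {"EGFR", "ERBB2", "ABL1"},
    "noncancer_drug_Z": {"HTR2A"},
}

reference = drug_targets["cancer_drug_X"]
scores = {
    drug: jaccard(targets, reference)
    for drug, targets in drug_targets.items() if drug != "cancer_drug_X"
}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
# noncancer_drug_Y ranks first and would be prioritized for follow-up
```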
4. Deep Learning in Cancer Research
DNN models have developed rapidly and become more sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models requiring large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].
In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1); therefore, integrating GEO data with other data requires careful preprocessing. Overall, an increasing number of datasets facilitates the development of deep learning in bioinformatics research [115].
4.1. Challenges for Deep Learning in Cancer Research
Many studies in biology and medicine have used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a variety of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116]. It has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that differ from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, the GAN algorithm has been getting more attention in single-cell transcriptomics because it has been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches aim to cope with a limited dataset in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but distinct datasets and guide the model to solve a specific set of problems [124]. These approaches pre-train models on data of similar characteristics and types that are nevertheless different from the problem set; after pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are trying to introduce few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network model [127] to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes with gene expression profiles [129].
Figure 3. (a) In various studies, NGS data are transformed into different forms: the 2-D transformed form is used for convolution layers, and omics data are transformed into pathway-level representations, GO enrichment scores, or functional spectra. (b) DNN applications for handling the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and then fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples; a GCN is applied to aggregate neighbor information. (Created with BioRender.com.)
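A minimal sketch of DNN-based imputation, assuming a zero-masked expression matrix and an arbitrary small autoencoder (shapes, masking rate, and hyperparameters are illustrative and not taken from TDimpute or any cited model), is shown below: the network is trained to reconstruct only the observed entries, and its output fills in the missing ones.

```python
# Minimal PyTorch autoencoder sketch for omics imputation on simulated data.
import torch
import torch.nn as nn

n_samples, n_genes, latent = 256, 1000, 32
x = torch.rand(n_samples, n_genes)                 # toy expression matrix in [0, 1]
mask = (torch.rand_like(x) > 0.2).float()          # 1 = observed, 0 = missing
x_obs = x * mask                                   # zero-masked input

model = nn.Sequential(
    nn.Linear(n_genes, 128), nn.ReLU(),
    nn.Linear(128, latent), nn.ReLU(),
    nn.Linear(latent, 128), nn.ReLU(),
    nn.Linear(128, n_genes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(50):
    recon = model(x_obs)
    # Reconstruction loss is computed on observed entries only
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Keep observed values, fill missing entries with the model's reconstruction
imputed = torch.where(mask.bool(), x_obs, model(x_obs).detach())
```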
4.2. Molecular Characterization with Network and DNN Models
DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier. They implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to enable the use of a convolution layer (Figure 3a); a hedged sketch of this embedding trick follows below. Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring gene mutation profiles is the input for the convolution layer, which aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding to a 2-D image, DeepCC transformed gene expression data into functional spectra. The resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
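The embedding trick can be sketched as follows, with arbitrary dimensions and an untrained toy network: a chromosome-ordered 1-D expression profile is reshaped into a 2-D grid so that a standard convolutional layer can operate on it.

```python
# Hedged illustration of reshaping a 1-D expression profile into a 2-D array
# for a convolutional layer; dimensions and architecture are arbitrary.
import torch
import torch.nn as nn

n_genes = 10000                      # assumed to fit a 100 x 100 grid exactly
profile = torch.rand(1, n_genes)     # one patient's chromosome-ordered profile

image = profile.view(1, 1, 100, 100)          # (batch, channels, height, width)
conv_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 50 * 50, 5),                # e.g., five hypothetical subtypes
)
logits = conv_net(image)
print(logits.shape)                           # torch.Size([1, 5])
```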
Another DNN model was trained to infer the tissue of origin from single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model obtained even better accuracy than a random forest model [133] and, even more importantly, better accuracy than human pathologists [12].
4.3. Tumor Heterogeneity with Network and DNN Model
As described in Section 4.1, cancer heterogeneity, e.g., the tumor microenvironment, raises several issues. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ‘Scaden’, a DNN model that deconvolves cell types in bulk-cell sequencing data for the investigation of intratumor heterogeneity. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved when all possible expression profiles of the cells are known [36]; however, this information is typically not available. Recently, to tackle this problem, single-cell sequencing-based studies were conducted. Because of technical limitations, single-cell sequencing data contain many missing values, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data. They aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser, and at the same time they embed high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
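For intuition, the sketch below shows the classical linear baseline for cell-type deconvolution, solving bulk = signature x fractions with non-negative least squares on synthetic data; Scaden itself replaces this with a DNN trained on simulated bulk profiles.

```python
# Classical linear baseline for cell-type deconvolution on synthetic data:
# solve bulk = signature @ fractions with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_genes, cell_types = 200, ["T_cell", "B_cell", "Tumor"]
signature = rng.gamma(2.0, 1.0, size=(n_genes, len(cell_types)))  # genes x cell types

true_fracs = np.array([0.2, 0.1, 0.7])
bulk = signature @ true_fracs + rng.normal(scale=0.05, size=n_genes)  # noisy mixture

coef, _ = nnls(signature, bulk)
fractions = coef / coef.sum()            # normalize to proportions
print(dict(zip(cell_types, fractions.round(2))))
```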
4.4. Drug Target Identification with Networks and DNN Models
In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied for repurposing non-oncology drugs for cancer treatment. This drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders. It used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug–disease dataset [15].
The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets together with other drug and genomic datasets, showing that DNN models can improve drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-target DNN studies can show higher prediction accuracy than single-omics methods. MOLI integrated genomic data and transcriptomic data to predict the drug responses of TCGA patients [141].
4.5. Graph Neural Network Model
In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks use biological networks as the base structure for the learning network itself. For instance, various pathway or interactome information can be integrated as the learning structure of a DNN and can be aggregated as heterogeneous information. In a GNN study, the convolution process is carried out on the provided network structure of the data. Therefore, convolution on a biological network makes it possible for the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d). Consequently, by stacking layers, this model can aggregate information from distant neighbors and thus can outperform other machine learning models [142].
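A single graph-convolution layer can be written in a few lines; the sketch below follows the widely used Kipf and Welling formulation H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) on a toy gene-gene network with random features and weights.

```python
# One graph-convolution layer in NumPy; the adjacency matrix stands in for a
# biological network (e.g., a PPI graph), node features for per-gene omics values.
import numpy as np

rng = np.random.default_rng(0)
n_genes, in_dim, out_dim = 5, 4, 2

A = np.array([              # toy gene-gene interaction network
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)
H = rng.normal(size=(n_genes, in_dim))     # node features
W = rng.normal(size=(in_dim, out_dim))     # learnable weights (random here)

A_hat = A + np.eye(n_genes)                # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # aggregate neighbors
print(H_next.shape)                        # (5, 2)
```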
In the context of the gene expression inference problem, the main question is whether a gene's expression level can be explained by aggregating its neighboring genes. A single-gene inference study by Dutil et al. showed that the GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures with RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways and using them for subtype classification [147]. Furthermore, GNNs are also getting more attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both the chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c). Each of the proposed applications specializes in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. The third challenge is integrating heterogeneous network information: drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve this integrative analysis. Lastly, although GNNs use graphs, the stacked layers still make the models hard to interpret [148].
4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge
The previous sections reviewed a variety of DNN-based approaches that perform well in numerous applications. However, deep learning is hardly a panacea for all research questions, and in the following we discuss its potential limitations. In general, DNN models applied to NGS data face two significant issues: (i) data requirements and (ii) interpretability. Deep learning usually needs a large amount of training data for reasonable performance, which is more difficult to obtain for biomedical omics data than for, say, image data. Today there are few NGS datasets that are well curated and annotated for deep learning, which helps explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically treated as black boxes: the many stacked layers obscure the model's decision-making rationale. Although methods for understanding and interpreting deep learning models have improved, this ambiguity still hinders the transfer of deep learning models into translational medicine [149,150].
As described before, biological networks are employed in various computational analyses for cancer research, and the DNN studies reviewed above demonstrated many different ways to use such prior knowledge for systematic analyses. Before discussing GNN applications, however, the validity of biological networks within a DNN model needs to be shown. The LINCS program analyzed data from The Connectivity Map (CMap) project to understand regulatory mechanisms in gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that expression profiles can be inferred from only about 1,000 genes, which they called 'landmark genes'. Subsequently, Chen et al. started from these 978 landmark genes and predicted the expression levels of the remaining genes with DNN models; trained on large-scale public NGS data, their models outperformed a linear regression baseline. The authors conclude that this advantage comes from the DNN's ability to model non-linear relationships between genes [153].
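The landmark-gene setup can be sketched as a simple regression comparison: predict a target gene's expression from 978 landmark-gene values with a linear model and with a small multilayer perceptron. The data below are synthetic with an arbitrary non-linear target, so the numbers only illustrate the workflow, not the published result.

```python
# Landmark-gene inference sketch: linear regression vs. a small MLP
# on synthetic data (978 landmark genes as in LINCS, values are random).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 978))                                   # landmark-gene profiles
y = np.tanh(X[:, :10].sum(axis=1)) + 0.1 * rng.normal(size=2000)   # non-linear target gene

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("mlp", MLPRegressor(hidden_layer_sizes=(256,), max_iter=300, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, mean_absolute_error(y_te, model.predict(X_te)))
```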
Following this study, Beltin et al. extensively investigated various biological networks in the same context of inferring gene expression levels. They set up a simplified representation of gene expression status and solved a binary classification task. To probe the relevance of a biological network, they compared expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in their study incorporating TCGA and GTEx datasets, the random-network model outperformed the model built on a known biological network such as StringDB [154]. While network-based approaches can add valuable insight, this result shows that they cannot be treated as a panacea, and a careful evaluation is required for each dataset and task. In particular, the result may not reflect the full biological complexity because of the oversimplified problem setup, which ignored relative changes in gene expression. Additionally, the incorporated biological networks may simply not be suitable for inferring expression profiles, because they mix expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.
“However, although recent sophisticated applications of deep learning showed improved accuracy, it does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, a particular research question, the technology, and network data have to be chosen carefully.”
Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
Hutter, C.; Zenklusen, J.C. The Cancer Genome Atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.
Use of Systems Biology in Anti-Microbial Drug Development
Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965
In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.
Genome Sequences and Proteomic Structural Databases
In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has led to essential genes of the bacteria being revealed and to a better understanding of the genetic diversity in different strains that might lead to a selective advantage (Coll et al., 2018). This will help with our understanding of the mode of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have structures determined experimentally.
Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.
There is a dearth of information related to structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the protein data bank (PDB). However, the high sequence similarity in protein coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability and impacts of mutations.
We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example is the RNA-polymerase holoenzyme complex from M. leprae: we first modeled the structure of this hetero-hexameric complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a known drug used to treat tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “computational saturation mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to relate the predicted impacts of mutations on the structure to phenotypic rifampin-resistance outcomes in leprosy.
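The bookkeeping behind such a computational saturation mutagenesis scan can be sketched as follows. Here predict_ddg is a hypothetical placeholder for a stability predictor such as mCSM (whose real interface is not reproduced); it returns random values so the script runs end to end. Following the mCSM convention, more negative values are more destabilizing, and the most destabilizing of the 19 substitutions summarizes each site.

```python
# Sketch of computational saturation mutagenesis bookkeeping.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_ddg(sequence, position, new_aa):
    """Placeholder stability predictor (kcal/mol); replace with a real tool such as mCSM."""
    return random.uniform(-2.0, 0.5)

def saturation_scan(sequence):
    results = {}
    for pos, wild_type in enumerate(sequence):
        scores = [predict_ddg(sequence, pos, aa)
                  for aa in AMINO_ACIDS if aa != wild_type]
        # The most destabilizing (most negative) of the 19 substitutions summarizes
        # the site; values near zero indicate positions that tolerate change.
        results[pos] = min(scores)
    return results

scan = saturation_scan("MSERVLKQAT")          # toy sequence, not the real beta-subunit
tolerant_sites = sorted(scan, key=scan.get, reverse=True)[:3]
print("least-impacted positions:", tolerant_sites)
```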
Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the β-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which grades from red (highly destabilizing) to white (neutral to stabilizing) (Vedithi et al., 2020). (B) One of the known mutations in the β-subunit of RNA polymerase, the S437H substitution, which produced the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can alter the shape of the rifampin binding pocket and rifampin affinity for the β-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen bond interactions; ring-ring and intergroup interactions are depicted in cyan; aromatic interactions in sky blue; carbonyl interactions in pink dotted lines; and green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).
Examples of Understanding and Combatting Resistance
The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.
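A minimal sketch of that screening step is shown below: variant calls from an isolate are checked against a catalog of known resistance mutations, and anything not in the catalog is flagged for in silico and experimental follow-up. The catalog entries and variant list are invented examples, not a curated database.

```python
# Toy screen of isolate variants against a catalog of known resistance mutations.
known_resistance = {
    ("rpoB", "S450L"): "rifampicin",
    ("katG", "S315T"): "isoniazid",
}

isolate_variants = [("rpoB", "S450L"), ("gyrA", "A90V"), ("katG", "S315T")]

for gene, change in isolate_variants:
    drug = known_resistance.get((gene, change))
    if drug:
        print(f"{gene} {change}: known {drug}-resistance mutation")
    else:
        print(f"{gene} {change}: not in catalog, prioritize for in silico / experimental follow-up")
```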
Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID: 1SJ2; Bertrand et al., 2004).
Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:
Article ID #281: 2020 AAAI US$1M Annual Award for Societal Impact of Artificial Intelligence goes to MIT’s CSAIL Professor, Regina Barzilay. Published 9/23/2020
WordCloud Image Produced by Adam Tubman
“The model learns to make subjective assessments without the bias of human labeling for training, but with some guidance and therefore not completely unsupervised learning,” she said.
The software was assessed in a reader study using a set of 792 screening mammograms that included many challenging borderline samples and came from three institutions, two continents, and three vendors, according to Watanabe. The seven radiologists in the reader study had spent at least 75% of their time reading mammograms for the last three years and read more than 5,000 mammograms each year.
The readers had significant inter-reader variability in their density assessments, producing a kappa of 0.35 for the specific BI-RADS A-D category assessments, as well as a kappa of 0.6 in the less-challenging binary classification of dense versus nondense breast tissue, according to Watanabe.
The AI software also demonstrated a level of agreement with the reader results that correlated with the degree of reader consensus.
“In cases where there was 100% reader agreement, cmDensity was near perfect and was perfect for four-class and two-class assessments, respectively, with kappas of 0.97 and 1.0,” she said.
The few outlier assessments for the specific BI-RADS categories were off by just one BI-RADS class, Watanabe said.
The software was also superior in terms of intra-reader variability, yielding an intraclass correlation coefficient (ICC) of 0.99, compared with an ICC range of 0.70 to 0.82 for the radiologists, according to the researchers.
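For readers who want to reproduce this kind of agreement analysis, the sketch below computes Cohen's kappa for the four-class BI-RADS assessment and for the collapsed dense/non-dense task using scikit-learn. The labels are random stand-ins for the 792-mammogram reader study, so the printed values will not match those reported above.

```python
# Inter-reader agreement sketch: Cohen's kappa on synthetic BI-RADS labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
categories = ["A", "B", "C", "D"]            # BI-RADS density classes
reader_1 = rng.choice(categories, size=792)
reader_2 = rng.choice(categories, size=792)

print("4-class kappa:", cohen_kappa_score(reader_1, reader_2))

# Collapse to the binary dense (C/D) vs. non-dense (A/B) task.
dense_1 = np.isin(reader_1, ["C", "D"])
dense_2 = np.isin(reader_2, ["C", "D"])
print("binary kappa:", cohen_kappa_score(dense_1, dense_2))
```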
Barzilay’s work in AI, which ranges from tools for early cancer detection to platforms to identify new antibiotics, is increasingly garnering recognition: On Wednesday, the Association for the Advancement of Artificial Intelligence named Barzilay as the inaugural recipient of a new annual award honoring an individual developing or promoting AI for the good of society. The award comes with a $1 million prize sponsored by the Chinese education technology company Squirrel AI Learning.
Barzilay’s treatment was successful, and she believes her clinical team at MGH did the best they could in providing her with standard care. At the same time, she said, “it was extremely not satisfying to see how the simplest things that the technology can address were not addressed” — including a delayed diagnosis, an inability to collect data, and statistical flaws in studies used to make treatment decisions.
AAAI and Squirrel AI Learning Announce the Establishment of US$1M Annual Award for Societal Impact of Artificial Intelligence
May 28, 2019 Beijing, China
The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning announced the establishment of a new $1M annual award for societal benefits of AI. The award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society.
The new Squirrel AI Award for Artificial Intelligence to Benefit Humanity was announced jointly by Derek Haoyang Li, Founder and Chairman of Squirrel AI Learning, and Yolanda Gil, President of AAAI, at the 2019 conference for AI for adaptive Education (AIAED) in Beijing.
This session will provide information regarding methodologic and computational aspects of proteogenomic analysis of tumor samples, particularly in the context of clinical trials. Availability of comprehensive proteomic and matching genomic data for tumor samples characterized by the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) and The Cancer Genome Atlas (TCGA) program will be described, including data access procedures and informatic tools under development. Recent advances on mass spectrometry-based targeted assays for inclusion in clinical trials will also be discussed.
Amanda G Paulovich, Shankha Satpathy, Meenakshi Anurag, Bing Zhang, Steven A Carr
Methods and tools for comprehensive proteogenomic characterization of bulk tumor to needle core biopsies
Shankha Satpathy
TCGA has 11,000 cancers with >20,000 somatic alterations but only 128 proteins, as proteomics was still a young field
CPTAC is the NCI's proteomic effort
Chemical labeling is now the method of choice for quantitative proteomics
Looked at ovarian and breast cancers: to measure PTMs such as phosphorylation, sample preparation is critical
Data access and informatics tools for proteogenomics analysis
Bing Zhang
Raw and processed data (raw MS data) with linked clinical data can be extracted in CPTAC
Python scripts are available for bioinformatic programming
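A hypothetical sketch of the kind of pandas-based linkage such scripts perform is shown below: a proteomics abundance matrix is joined to clinical annotations on sample ID. The toy values and column names are assumptions and do not reflect the actual CPTAC file formats or tools.

```python
# Hypothetical pandas sketch of linking proteomics abundances to clinical data.
import pandas as pd

# Rows = proteins, columns = sample IDs (e.g., TMT log-ratios); toy values only.
proteomics = pd.DataFrame(
    {"S01": [0.3, -1.2], "S02": [0.8, 0.1], "S03": [-0.5, 1.4]},
    index=["TP53", "EGFR"],
)
clinical = pd.DataFrame(
    {"tumor_stage": ["I", "II", "II"]},
    index=pd.Index(["S01", "S02", "S03"], name="sample_id"),
)

# Transpose so each row is a sample, then join clinical annotations on sample ID.
merged = proteomics.T.join(clinical, how="inner")
print(merged.groupby("tumor_stage")["EGFR"].mean())
```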
Pathways to clinical translation of mass spectrometry-based assays
Meenakshi Anurag
· Using kinase inhibitor pulldown (KIP) assay to identify unique kinome profiles
· Found single strand break repair defects in endometrial luminal cases, especially with immune checkpoint prognostic tumors
· Paper: JNCI 2019 analyzed 20,000 genes correlated with ET resistance in luminal B cases (selected a list of 30 genes)
· Validated in METABRIC dataset
· KIP assay uses magnetic beads to pull out kinases to determine druggable kinases
· Looked in xenografts and was able to pull out differential kinomes
· Matched with PDX data so good clinical correlation
· Were able to detect ESR1 fusion correlated with ER+ tumors
The adoption of omic technologies in the cancer clinic is giving rise to an increasing number of large-scale high-dimensional datasets recording multiple aspects of the disease. This creates the need for frameworks for translatable discovery and learning from such data. Like artificial intelligence (AI) and machine learning (ML) for the cancer lab, methods for the clinic need to (i) compare and integrate different data types; (ii) scale with data sizes; (iii) prove interpretable in terms of the known biology and batch effects underlying the data; and (iv) predict previously unknown experimentally verifiable mechanisms. Methods for the clinic, beyond the lab, also need to (v) produce accurate actionable recommendations; (vi) prove relevant to patient populations based upon small cohorts; and (vii) be validated in clinical trials. In this educational session we will present recent studies that demonstrate AI and ML translated to the cancer clinic, from prognosis and diagnosis to therapy.
NOTE: Dr. Fish’s talk is not eligible for CME credit to permit the free flow of information of the commercial interest employee participating.
Ron C. Anafi, Rick L. Stevens, Orly Alter, Guy Fish
Overview of AI approaches in cancer research and patient care
Rick L. Stevens
Deep learning is less likely to saturate as data increases
Deep learning attempts to learn multiple layers of information
The ultimate goal is prediction but this will be the greatest challenge for ML
ML models can integrate data validation and cross database validation
What limits the performance of cross validation is the internal noise of data (reproducibility)
Learning curves: it is not more data but more reproducible data that matters
Neural networks can outperform classical methods
Important to measure validation accuracy as well as training accuracy. Class weighting can assist in constructing training sets, especially for unbalanced data sets (a minimal sketch follows these notes)
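The class-weighting point can be illustrated with a short scikit-learn sketch on a synthetic, heavily imbalanced dataset; the 95/5 class split and the logistic-regression model are arbitrary choices for illustration.

```python
# Class weighting on an imbalanced training set, with validation-set scoring.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, stratify=y, random_state=0)

for cw in [None, "balanced"]:
    clf = LogisticRegression(class_weight=cw, max_iter=1000).fit(X_tr, y_tr)
    print(cw, balanced_accuracy_score(y_va, clf.predict(X_va)))
```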
Discovering genome-scale predictors of survival and response to treatment with multi-tensor decompositions
Orly Alter
Finding patterns using SVD component analysis; gene and SVD patterns match 1:1 (see the SVD sketch after these notes)
Comparative spectral decompositions can be used for global datasets
Validation of CNV data using this strategy
Found Ras, Shh, and Notch pathways with altered CNV in glioblastoma, which correlated with prognosis
These predictors were significantly better than independent prognostic indicators such as age at diagnosis
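A minimal sketch of SVD-based pattern discovery on a genes-by-samples matrix is given below; the random matrix stands in for CNV or expression profiles, and this shows only the generic decomposition step, not the published multi-tensor method.

```python
# SVD pattern discovery on a genes-by-samples matrix (random placeholder data).
import numpy as np

rng = np.random.default_rng(0)
matrix = rng.normal(size=(5000, 200))            # genes x samples

U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 components:", explained[:3])

# U[:, k] ranks genes by their weight in pattern k; Vt[k] gives the matching
# sample-level pattern that could then be tested against survival or prognosis.
top_genes_pattern_1 = np.argsort(np.abs(U[:, 0]))[::-1][:10]
print("top-weighted genes in pattern 1:", top_genes_pattern_1)
```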
Identifying targets for cancer chronotherapy with unsupervised machine learning
Ron C. Anafi
Many clinicians have noticed that some patients do better when chemo is given at certain times of the day and felt there may be a circadian rhythm or chronotherapeutic effect with respect to side effects or with outcomes
ML was used to determine whether there is indeed a chronotherapy effect, and whether unstructured data can be used to determine molecular rhythms
Found circadian transcription in human lung
Most cancer datasets come from a single clinical trial, so more trials may be needed to take circadian rhythms into consideration (a simple rhythm-fitting sketch follows)
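As a generic illustration of detecting a 24-hour molecular rhythm (not the unsupervised method discussed in the talk), the sketch below fits a cosinor-style model to a synthetic transcript time course and recovers its amplitude and peak time.

```python
# Cosinor-style fit for a 24-hour rhythm in a synthetic transcript time course.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 48, 2.0)                                  # sampling times (hours)
expr = 5 + 2 * np.cos(2 * np.pi * (t - 8) / 24) + rng.normal(0, 0.3, t.size)

# Linear least squares on cosine/sine terms with a fixed 24 h period.
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 24),
                     np.sin(2 * np.pi * t / 24)])
beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
amplitude = np.hypot(beta[1], beta[2])
phase_h = (np.arctan2(beta[2], beta[1]) % (2 * np.pi)) * 24 / (2 * np.pi)
print(f"amplitude ~ {amplitude:.2f}, peak time ~ {phase_h:.1f} h")
```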
Stratifying patients by live-cell biomarkers with random-forest decision trees
Guy Fish, CEO, Cellanyx Diagnostics
Some clinicians feel we may be overdiagnosing and overtreating certain cancers, especially indolent disease
This educational session focuses on the chronic wound healing, fibrosis, and cancer “triad.” It emphasizes the similarities and differences seen in these conditions and attempts to clarify why sustained fibrosis commonly supports tumorigenesis. Importance will be placed on cancer-associated fibroblasts (CAFs), vascularity, extracellular matrix (ECM), and chronic conditions like aging. Dr. Dvorak will provide an historical insight into the triad field focusing on the importance of vascular permeability. Dr. Stewart will explain how chronic inflammatory conditions, such as the aging tumor microenvironment (TME), drive cancer progression. The session will close with a review by Dr. Cukierman of the roles that CAFs and self-produced ECMs play in enabling the signaling reciprocity observed between fibrosis and cancer in solid epithelial cancers, such as pancreatic ductal adenocarcinoma.
Harold F Dvorak, Sheila A Stewart, Edna Cukierman
The importance of vascular permeability in tumor stroma generation and wound healing
Harold F Dvorak
Aging in the driver’s seat: Tumor progression and beyond
Sheila A Stewart
Why won’t CAFs stay normal?
Edna Cukierman
Tuesday, June 23
3:00 PM – 5:00 PM EDT
Other Articles on this Open Access Online Journal on Cancer Conferences and Conference Coverage in Real Time Include