Archive for the ‘Artificial Intelligence – Breakthroughs in Theories and Technologies’ Category

Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriages and in vitro fertilization failures. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility from individual variants in a mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates can vary dramatically among individuals of the same age.

Researchers described a technique combining genomic sequencing with machine-learning methods to predict the likelihood that a woman will experience a miscarriage because of egg aneuploidy – a term describing a human egg with an abnormal number of chromosomes. The scientists examined genetic samples of patients using a technique called “whole exome sequencing,” which allowed them to home in on the protein-coding sections of the vast human genome. They then created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.

As a result, the scientists were able to create a specific risk score based on a woman’s genome. They also identified three genes – MCM5, FGGY, and DDX60L – that, when mutated, are highly associated with the risk of producing eggs with aneuploidy. The report thus demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies, and identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
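To make the idea of a genome-derived risk score concrete, here is a purely hypothetical sketch. The gene weights, intercept, and logistic form below are illustrative placeholders only, not the published study's model:

```python
import math

# Hypothetical gene weights and intercept, for illustration only:
# these are NOT the coefficients from the published study.
GENE_WEIGHTS = {"MCM5": 1.2, "FGGY": 0.9, "DDX60L": 0.7}
BASELINE = -2.0  # log-odds of aneuploidy with no risk variants

def aneuploidy_risk_score(variant_counts):
    """Map per-gene deleterious-variant counts to a 0-1 risk score
    using a logistic function, a common choice for risk models."""
    log_odds = BASELINE + sum(
        GENE_WEIGHTS.get(gene, 0.0) * count
        for gene, count in variant_counts.items()
    )
    return 1 / (1 + math.exp(-log_odds))

# A genome carrying risk variants scores higher than one without.
low = aneuploidy_risk_score({})
high = aneuploidy_risk_score({"MCM5": 1, "FGGY": 1})
```

The logistic form keeps every score between 0 and 1, so it reads directly as a probability-like risk.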

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/

Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 Free Harvard Courses for Learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code, in a programming language of your choice – ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.
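As a taste of what these introductory courses cover, here is a minimal Python sketch (my own toy example, not course material) touching variables, data types, conditionals, loops, and functions:

```python
# Variables, data types, conditionals, loops, and functions:
# the building blocks an intro programming course walks through.

def grade(score):
    """Return a letter grade for a numeric score (conditionals)."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C"

scores = [95, 84, 71]                 # a list of integers
letters = [grade(s) for s in scores]  # a loop over the list
```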

Step 2: Data Visualization

Visualization is one of the most powerful techniques for translating your findings in data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
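A Monte Carlo simulation of the kind taught here can be sketched in a few lines of Python (a toy example, not course code): estimating the expected value of a die roll along with its standard error:

```python
import random

random.seed(42)  # make the simulation reproducible

# Monte Carlo estimate of the expected value of a fair die roll.
# By the law of large numbers the estimate approaches 3.5.
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]
estimate = sum(rolls) / n

# The standard error of the mean shrinks like 1/sqrt(n),
# which is what the central limit theorem quantifies.
variance = sum((r - estimate) ** 2 for r in rolls) / (n - 1)
standard_error = (variance / n) ** 0.5
```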

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modelling. This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
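For example, the margin of error for a sample proportion can be computed directly (an illustrative snippet, not course material):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion:
    z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A poll of 1,000 people with 52% support: roughly a 3-point margin,
# so the true level of support plausibly lies anywhere in 49%-55%.
moe = margin_of_error(0.52, 1000)
```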

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, it teaches you to use Unix/Linux for file management, Git and GitHub for version control, and R to create reports.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
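As a small illustration of the idea (the course itself teaches these techniques in R), here is a Python-standard-library sketch that tidies messy scraped records:

```python
import re
from datetime import datetime

# Messy rows like those scraped from a webpage: stray spaces,
# inconsistent spacing, currency symbols.
raw = ["  Alice , 2022-06-01 , $1200 ", "Bob,2022-06-15,$950"]

def tidy(row):
    name, date_str, amount = (field.strip() for field in row.split(","))
    return {
        "name": name,
        "date": datetime.strptime(date_str, "%Y-%m-%d").date(),
        "amount": int(re.sub(r"[^\d]", "", amount)),  # drop '$'
    }

records = [tidy(r) for r in raw]
```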

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
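One of the simplest safeguards against overfitting, hold-out evaluation, can be sketched like this (a toy example with a 1-nearest-neighbor "model", not course code):

```python
# Hold-out evaluation: never score a model on its training data.
# Toy dataset where the label is 1 exactly when x > 5.
data = [(x, 1 if x > 5 else 0) for x in range(10)]
train = [(x, y) for x, y in data if x % 2 == 0]  # even x for training
test = [(x, y) for x, y in data if x % 2 == 1]   # odd x held out

def predict(x):
    """1-nearest-neighbor: copy the label of the closest training x."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

test_accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

Because the test points were never seen during "training", the accuracy here is an honest estimate of generalization.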

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on the edX online learning platform and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn’t equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even Ivy League universities started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT OpenCourseWare offers a beginner-friendly Python course called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT OpenCourseWare offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will know the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes’ theorem, covariance, the central limit theorem, resampling, and linear regression.

This course will also walk you through statistical analysis using the R programming language, which usefully adds to your tool stack as a data scientist.

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.
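As a taste of the material, Bayes' theorem can be worked through numerically (an illustrative example, not course code):

```python
# Bayes' theorem: P(disease | positive) =
#   P(positive | disease) * P(disease) / P(positive).
p_disease = 0.01            # 1% base rate in the population
sensitivity = 0.95          # P(positive | disease)
false_positive_rate = 0.05  # P(positive | no disease)

p_positive = (sensitivity * p_disease
              + false_positive_rate * (1 - p_disease))
p_disease_given_positive = sensitivity * p_disease / p_positive
# Despite an accurate test, the posterior is only about 16%,
# because true positives are swamped by false positives.
```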

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a different perspective on how predictive models function and the workings of the underlying algorithms.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.

All of the above courses are offered by MIT OpenCourseWare and are paired with lecture notes, problem sets, exam questions, and solutions.
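To illustrate the flavor of both subjects, here is a small Python sketch (my own example, not course material): a central-difference derivative for calculus and a matrix-vector product for linear algebra:

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x) (calculus)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector (linear algebra)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# d/dx sin(x) = cos(x), so the slope at x = 0 should be 1.
slope_at_zero = derivative(math.sin, 0.0)
```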

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is on supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from just working with structured datasets, you will also learn to process image and sequential data.

MIT’s machine learning program lists three prerequisites (Python, linear algebra, and calculus), which is why it is advisable to take the courses above before starting this one.
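The kind of optimization such a course builds on can be illustrated with a few lines of gradient descent (a toy sketch on made-up data, not MIT's code):

```python
# Gradient descent on a one-parameter model y = w * x,
# minimizing mean squared error on made-up data where w = 3.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]

w, learning_rate = 0.0, 0.01
for _ in range(500):
    # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad
# w converges to (approximately) 3.
```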

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through the subject from scratch. However, unlike many MOOCs out there, the pace picks up quite quickly and the courses cover the material in considerable depth.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html

Tweet Collection of 2022 #EmTechDigital @MIT, March 29-30, 2022

Tweet Author: Aviva Lev-Ari, PhD, RN

Selective Tweet Retweets for The Technology Review: Aviva Lev-Ari, PhD, RN

 

UPDATED on 4/11/2022

Analytics for @AVIVA1950 Tweeting at #EmTechDigital

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2022/04/11/analytics-for-aviva1950-tweeting-at-emtechdigital/

 

Aviva Lev-Ari

@AVIVA1950

Mar 30

#EmTechDigital

@AVIVA1950

@pharma_BI

@techreview

FRONTIER OF #AI follow my tweets of this event more than few tweets per speaker

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

#error in programmatic labeling use auto #ml aggregate #transactions

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

RajivShah Snorkel AI #programmatic #labelling solution #heuristics converted #code #tagging integration of #labelled data #classification algorithms #scores #BERT improving quality of data labeling #functions #knowledge #graphs

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

@AndrewYNg

#NLP #customization of #tools data #standardization in #healthcare and trucking #datasystem #heterogeneity is highest #data life cycle of #ML

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

@AndrewYNg

in last decade #ML advanced #opencode frees effort to #dataset avoid #label inconsistency #images #small vs #big #data-centric #ai #system #dataset #slice #data #curation #teams develop #tools #storage #migration #Legacy

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTechDigital

@MIT

, March 29-30, 2022 https://pharmaceuticalintelligence.com/2022/03/28/2022-emtech-digital-mit/… via

@pharma_BI

Real Time Coverage: Aviva Lev-Ari, PhD, RN #EmTechDigital

@AVIVA1950

@techReview

pharmaceuticalintelligence.com

2022 EmTech Digital @MIT

2022 EmTech Digital @MIT Real Time Coverage: Aviva Lev-Ari, PhD, RN  SPEAKERS Ali Alvi Turing Group Program Manager Microsoft Refik Anadol CEO, RAS Lab; Lecturer UCLA Lauren Bennett Group Software …

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTech Digital

@MIT

https://pharmaceuticalintelligence.com/2022/03/28/2022-emtech-digital-mit/… via

@pharma_BI

@AVIVA1950

#EmTechDigital

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTech Digital

@MIT

Aviva Lev-Ari

@AVIVA1950

Mar 26

#EmTech2022

@MIT

Quote Tweet

Stephen J Williams

@StephenJWillia2

  • Mar 25

@AVIVA1950 #EMT twitter.com/Pharma_BI/stat…

MIT Technology Review

@techreview

Mar 30

That’s a wrap on #EmTechDigital 2022! Thanks for joining us in-person and online.

LANDING AI

@landingAI

Mar 29

If you missed

@AndrewYNg

’s #EmTechDigital session, you can still learn more about #DataCentricAI here: https://bit.ly/3iM8bPq

@techreview

@strwbilly

Mark Weber

@markRweber

Mar 29

On #bias embedded in historical data. #syntheticdata can help us build models for the world we aspire to rather than the prejudiced one of the past. Paraphrasing

@danny_lange

of

@unity

at #EmTechDigital #generativeai

Selective Tweets and Retweets from @StephenJWillia2

 

 

2022 EmTechDigital @MIT, March 29-30, 2022

Real Time Coverage: Aviva Lev-Ari, PhD, RN 

#EmTechDigital

@AVIVA1950

@pharma_BI

@techreview

SPEAKERS

https://event.technologyreview.com/emtech-digital-2022/speakers

Ali Alvi – Turing Group Program Manager, Microsoft
Refik Anadol – CEO, RAS Lab; Lecturer, UCLA
Lauren Bennett – Group Software Engineering Lead, Spatial Analysis and Data Science, Esri
Elizabeth Bramson-Boudreau – CEO, MIT Technology Review
Tara Chklovski – Founder & CEO, Technovation
Sheldon Fernandez – CEO, DarwinAI
David Ferrucci – Founder, CEO, & Chief Scientist, Elemental Cognition
Anthony Green – Podcast Producer, MIT Technology Review
Agrim Gupta – PhD Student, Stanford Vision and Learning Lab, Stanford University
Mike Haley – VP of Research, Autodesk
Will Douglas Heaven – Senior Editor for AI, MIT Technology Review
Natasha Jaques – Senior Research Scientist, Google Brain
Tony Jebara – VP of Engineering and Head of Machine Learning, Spotify
Clinton Johnson – Racial Equity Unified Team Lead, Esri
Danny Lange – SVP of Artificial Intelligence, Unity Technologies
Julia (Xing) Li – Deputy General Manager, Baidu USA
Darcy MacClaren – Senior Vice President, Digital Supply Chain, SAP North America
Haniyeh Mahmoudian – Global AI Ethicist, DataRobot
Andrew Moore – GM and VP, Google Cloud AI, Google
Mira Murati – SVP, Research, Product, & Partnerships, OpenAI
Prem Natarajan – Vice President Alexa AI, Head of NLU, Amazon
Andrew Ng – Founder and CEO, Landing AI
Amy Nordrum – Editorial Director, Special Projects & Operations, MIT Technology Review
Kavitha Prasad – VP & GM, Datacenter, AI and Cloud Execution and Strategy, Intel Corporation
Bali Raghavan – Head of Engineering, Forward
Rajiv Shah – Principal Data Scientist, Snorkel AI
Sameena Shah – Managing Director, J.P. Morgan AI Research, JP Morgan Chase
David Simchi-Levi – Director, Data Science Lab, MIT
Jennifer Strong – Senior Editor for Podcasts and Live Journalism, MIT Technology Review
Fiona Tan – CTO, Wayfair
Zenna Tavares – Research Scientist, Columbia University; Co-Founder, Basis
Nicol Turner Lee – Director, Center for Technology Innovation, Brookings Institution
Raquel Urtasun – Founder & CEO, Waabi
Oriol Vinyals – Principal Scientist, DeepMind

MIT Inside Track

David Cox – IBM Director, MIT-IBM Watson AI Lab
Luba Elliott – Curator, Producer, and Researcher, Creative AI
Charlotte Jee – Reporter, News, MIT Technology Review
Naveen Kamat – Executive Director, Data and AI Services, Kyndryl
Joseph Lehar – Senior Vice President, R&D Strategy, Owkin
Stefanie Mueller – Associate Professor, MIT CSAIL
Jianxiong Xiao – Founder and CEO, AutoX

TUESDAY, MARCH 29

 

Data-Centric AI

Better Data, Better AI

Data powers AI. Good data can mean the difference between an impactful solution and one that never gets off the ground. Re-assess the foundational AI questions to ensure your data is working for, not against, you.

Innovation to Reality

The challenges of implementing AI are many. Avoid the common pitfalls with real-world case studies from leaders who have successfully turned their AI solutions into reality.

Harness What’s Possible at the Edge

With its potential for near instantaneous decision making, pioneers are moving AI to the edge. We examine the pros and cons of moving AI decisions to the edge, with the experts getting it right.

Generative AI Solutions

The use of generative AI to boost human creativity is breaking boundaries in creative areas previously untouched by AI. We explore the intersection of data and algorithms enabling collaborative AI processes to design and create.

Day 1: Data-Centric AI (9:00 a.m. – 5:20 p.m.)

9:00 AM

Welcome Remarks

Will Douglas Heaven

Senior Editor for AI, MIT Technology Review

Better Data, Better AI (9:10 a.m. – 10:35 a.m.)

9:10 AM

Empowering Data-Centric AI

Data is the most under-valued and de-glamorized aspect of AI. Learn why shifting the focus from model/algorithm development to the quality of the data is the next, and most efficient, way to improve the decision-making abilities of AI.

Andrew Ng

Founder and CEO, Landing AI

9:40 AM

The Mechanics of Data-First AI

Data labeling is key to determining the success or failure of AI applications. Learn how to implement a data-first approach that can transform AI inference, resulting in better models that make better decisions.

Rajiv Shah

Principal Data Scientist, Snorkel AI

10:10 AM

Thought Leadership in Responsible AI

Question the status quo. Build stakeholder trust. These are foundational elements of thought leadership in AI. Explore how organizations can use their data and algorithms in ethical and responsible ways while building bigger and more effective systems.

Haniyeh Mahmoudian

Global AI Ethicist, DataRobot

Mainstage Break (10:35 a.m. – 11:05 a.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

10:35 AM

MIT Inside Track: From AI Startup to Tech “Unicorn” (available online only)

With its next-generation machine learning models fueling precision medicine, French biotech company, Owkin, captured the attention of the pharma industry. Learn how they did it and get tips to navigate the complex task of scaling your innovation.

Joseph Lehar

Senior Vice President, R&D Strategy, Owkin

Networking Break

Networking and refreshments for our live audience.

Innovation to Reality (11:05 a.m. – 12:30 p.m.)

11:05 AM

Secrets of Successful AI Deployments

Deploying AI in real-world environments benefits from human input before and during implementation. Get an inside look at how organizations can ensure reliable results with the key questions and competing needs that should be considered when implementing AI solutions.

Andrew Moore

GM and VP, Google Cloud AI, Google

11:35 AM

From Research Lab to Real World

AI is evolving from the research lab into practical real world applications. Learn what issues should be top of mind for businesses, consumers, and researchers as we take a deep dive into AI solutions that increase modern productivity and accelerate intelligence transformation.

Julia (Xing) Li

Deputy General Manager, Baidu USA

12:00 PM

Closing the 20% Performance Gap

Getting AI to work 80% of the time is relatively straightforward, but trustworthy AI requires deployments that work 100% of the time. Unpack some of the biggest challenges that come up when eliminating the 20% gap.

Bali Raghavan

Head of Engineering, Forward

Lunch and Networking Break (12:30 p.m. – 1:30 p.m.)

12:30 PM

Lunch and Networking Break

Lunch served at the MIT Media Lab and a selection of curated content for those tuning in virtually.

Harness What’s Possible at the Edge (1:30 p.m. – 3:15 p.m.)

1:30 PM

AI Integration Across Industries – Presented by Intel

To create sustainable business impact, AI capabilities need to be tailored and optimized to an industry or organization’s specific requirements and infrastructure model. Hear how customers’ challenges across industries can be addressed in any compute environment from the cloud to the edge with end-to-end hardware and software optimization.

Kavitha Prasad

VP & GM, Datacenter, AI and Cloud Execution and Strategy, Intel Corporation

Elizabeth Bramson-Boudreau

CEO, MIT Technology Review

1:55 PM

Explainability at the Edge

Decision making has moved from the edge to the cloud before settling into a hybrid setup for many AI systems. Through the examination of key use-cases, take a deep dive into understanding the benefits and detractors of operating a machine-learning system at the point of inference.

Sheldon Fernandez

CEO, DarwinAI

2:25 PM

AI Experiences at the Edge

Enable your organization to transform customer experiences through AI at the edge. Learn about the required technologies, including teachable and self-learning AI, that are needed for a successful shift to the edge, and hear how deploying these technologies at scale can unlock richer, more responsive experiences.

Prem Natarajan

Vice President Alexa AI, Head of NLU, Amazon

2:50 PM

The Road Ahead

Reimagine AI solutions as a unified system, instead of individual components. Through the lens of autonomous vehicles, discover the pros and cons of using an all-inclusive AI-first approach that includes AI decision-making at the edge and see how this thinking can be applied across industry.

Raquel Urtasun

Founder & CEO, Waabi

Mainstage Break (3:15 p.m. – 3:45 p.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

3:15 PM

Networking Break

Networking and refreshments for our live audience.

MIT Inside Track: The Impact of Creative AI (available online only)

Advances in machine learning are enabling artists and creative technologists to think about and use AI in new ways. Discuss the concept of creative AI and look at project examples from London’s art scene that illustrate the various ways creative AI is bridging the gap between the traditional art world and the latest technological innovations.

Luba Elliott

Curator, Producer, and Researcher, Creative AI

Generative AI Solutions (3:45 p.m. – 5:10 p.m.)

3:45 PM

Enhancing Design through Generative AI

Change the design problem with AI. The creative nature of generative AI enhances design capabilities, finding efficiencies and opportunities that humans alone might not conceive. Explore business applications including project planning, construction, and physical design.

Mike Haley

VP of Research, Autodesk

4:15 PM

Using Synthetic Data and Simulations

Deep learning is a data-hungry technology, and manually labelled training data has become cost-prohibitive and time-consuming. Get a glimpse at how interactive large-scale synthetic data generation can accelerate the AI revolution, unlocking the potential of data-driven artificial intelligence.

Danny Lange

SVP of Artificial Intelligence, Unity Technologies

4:40 PM

The Art of AI

Push beyond the typical uses of AI. Explore the nexus of art, technology, and human creativity through the unique innovation of kinetic data sculptures that use machines to give physical context and shape to data to rethink how we engage with the physical world.

Refik Anadol

CEO, RAS Lab; Lecturer, UCLA

Last Call with the Editors (5:10 p.m. – 5:20 p.m.)

5:10 PM

Last Call with the Editors

Before we wrap day 1, join our last call with all of our editors to get their analysis on the day’s topics, themes, and guests.

Networking Reception (5:20 p.m. – 6:20 p.m.)

WEDNESDAY, MARCH 30

Evolving the Algorithms

What’s Next for Deep Learning

Deep learning algorithms have powered most major AI advances of the last decade. We bring you into the top innovation labs to see how they are advancing their deep learning models to find out just how much more we can get out of these algorithms.

AI in Day-To-Day Business

Many organizations are already using AI internally in their day-to-day operations, in areas like cybersecurity, customer service, finance, and manufacturing. We examine the tools that organizations are using when putting AI to work.

Making AI Work for All

As AI increasingly underpins our lives, businesses, and society, we must ensure that AI works for everyone – not just those represented in datasets, and not just 80% of the time. Examine the challenges and solutions needed to ensure AI works fairly, for all.

Envisioning the Next AI

Some business problems can’t be solved with current deep learning methods. We look around the corner at the new approaches and most revolutionary ideas propelling us toward the next stage of AI evolution.

Day 2: Evolving the Algorithms (9:00 a.m. – 5:25 p.m.)

9:00 AM

Welcome Remarks

Will Douglas Heaven

Senior Editor for AI, MIT Technology Review

What’s Next for Deep Learning (9:10 a.m. – 10:25 a.m.)

9:10 AM

Transforming Traditional Algorithms

Transformer-based language models are revolutionizing the way neural networks process natural language. This deep dive looks at how organizations can put their data to work using transformer models. We consider the problems that business may face as these massive models mature, including training needs, managing parallel processing at scale, and countering offensive data.

Ali Alvi

Turing Group Program Manager, Microsoft

9:35 AM

Human-like Problem Solving

Critical thinking may be one step closer for AI by combining large-scale transformers with smart sampling and filtering. Get an early look at how AlphaCode’s entry into competitive programming may lead to a human-like capacity for AI to write original code that solves unforeseen problems.

Oriol Vinyals

Principal Scientist, DeepMind

10:00 AM

Aligning AI Technologies at Scale

As advanced AI systems gain greater capabilities in our search for artificial general intelligence, it’s critical to teach them how to understand human intentions. Look at the latest advancements in AI systems and how to ensure they can be truthful, helpful, and safe.

Mira Murati

SVP, Research, Product, & Partnerships, OpenAI

Mainstage Break (10:25 a.m. – 10:55 a.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

10:25 AM

Networking Break

Networking and refreshments for our live audience.

Business-Ready Data Holds the Key to AI Democratization – Presented by Kyndryl

Good data is the bedrock of a self-service data consumption model, which in turn unlocks insights, analytics, personalization at scale through AI. Yet many organizations face immense challenges setting up a robust data foundation. Dive into a pragmatic perspective on abstracting the complexity and untangling the conflicts in data management for better AI.

Naveen Kamat

Executive Director, Data and AI Services, Kyndryl

AI in Day-To-Day Business (10:55 a.m. – 12:20 p.m.)

10:55 AM

Improving Business Processes with AI

Effectively operationalized AI/ML can unlock untapped potential in your organization. From enhancing internal processes to managing the customer experience, get the pragmatic advice and takeaways leaders need to better understand their internal data to achieve impactful results.

Fiona Tan

CTO, Wayfair

11:25 AM

Accelerating the Supply Chain

Use AI to maximize reliability of supply chains. Learn the dos and don’ts to managing key processes within your supply chain, including workforce management, streamlining and simplification, and reaping the full value of your supply chain solutions.

Darcy MacClaren

Senior Vice President, Digital Supply Chain, SAP North America

David Simchi-Levi

Director, Data Science Lab, MIT

11:55 AM

Putting Recommendation Algorithms to Work

Machine and reinforcement learning enable Spotify to deliver the right content to the right listener at the right time, allowing for personalized listening experiences that facilitate discovery at a global scale. Through user interactions, algorithms suggest new content and creators that keep customers both happy and engaged with the platform. Dive into the details of making better user recommendations.

Tony Jebara

VP of Engineering and Head of Machine Learning, Spotify

Lunch and Networking Break (12:20 p.m. – 1:15 p.m.)

12:20 PM

Lunch and Networking Break

Lunch served at the MIT Media Lab and a selection of curated content for those tuning in virtually.

Making AI Work for All (1:15 p.m. – 2:35 p.m.)

As AI increasingly underpins our lives, businesses, and society, we must ensure that AI works for everyone – not just those represented in datasets, and not just 80% of the time. Examine the challenges and solutions needed to ensure AI works fairly for all.

1:15 PM

Mapping Equity

Walk through the practical steps to map and understand the nuances, outliers, and special cases in datasets. Get tips to ensure ethical and trustworthy approaches to training AI systems that grow in scope and scale within a business.

Lauren Bennett

Group Software Engineering Lead, Spatial Analysis and Data Science, Esri

Clinton Johnson

Racial Equity Unified Team Lead, Esri

1:45 PM

Bridging the AI Accessibility Gap

Get an inside look at the long- and short-term benefits of addressing inequities in AI opportunities, ranging from educating the tech youth of the future to a 10,000-foot view on what it will take to ensure that equity is top of mind within society and business alike.

Tara Chklovski

Founder & CEO, Technovation

2:10 PM

The AI Policies We Need

Public policies can help to make AI more equitable and ethical for all. Examine how policies could impact corporations and what it means for building internal policies, regardless of what government adopts. Identify actionable ideas to best move policies forward for the widest benefit to all.

Nicol Turner Lee

Director, Center for Technology Innovation, Brookings Institution

Mainstage Break (2:35 p.m. – 3:05 p.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

2:35 PM

Networking Break

Networking and refreshments for our live audience.

MIT Inside Track: Accelerating the Advent of Autonomous Driving (available online only)

From the U.S. to China, the global robo-taxi race is gaining traction with consumers and regulators alike. Go behind the scenes with AutoX – a Level 4 driving technology company – and hear how it overcame obstacles while launching the world’s second and China’s first public, fully driverless robo-taxi service.

Jianxiong Xiao

Founder and CEO, AutoX

Envisioning the Next AI (3:05 p.m. – 4:50 p.m.)

Some business problems can’t be solved with current deep learning methods. We look at what’s around the corner: the new approaches and most revolutionary ideas propelling us toward the next stage in AI evolution.

3:05 PM

How AI Is Powering the Future of Financial Services – Presented by JP Morgan Chase

The use of AI in finance is gaining traction as organizations realize the advantages of using algorithms to streamline and improve the accuracy of financial tasks. Step through use cases that examine how AI can be used to minimize financial risk, maximize financial returns, optimize venture capital funding by connecting entrepreneurs to the right investors, and more.

Sameena Shah

Managing Director, J.P. Morgan AI Research, JP Morgan Chase

3:30 PM

Evolution of Mind and Body

In a study of simulated robotic evolution, it was observed that more complex environments and evolutionary changes to the robot’s physical form accelerated the growth of robot intelligence. Examine this cutting-edge research and decipher what this early discovery means for the next generation of AI and robotics.

Agrim Gupta

PhD Student, Stanford Vision and Learning Lab, Stanford University

4:00 PM

A Path to Human-like Common Sense

Understanding human thinking and reasoning processes could lead to more general, flexible and human-like artificial intelligence. Take a close look at the research building AI inspired by human common-sense that could create a new generation of tools for complex decision-making.

Zenna Tavares

Research Scientist, Columbia University; Co-Founder, Basis

4:25 PM

Social Learning Bots

Look under the hood at this innovative approach to AI learning with multi-agent and human-AI interactions. Discover how bots work together and learn together through personal interactions. Recognize the future implications for AI, plus the benefits and obstacles that may come from this new process.

Natasha Jaques

Senior Research Scientist, Google Brain

Closing Segment (4:50 p.m. – 5:25 p.m.)

4:50 PM

Pulling Back the Curtain on AI

David Ferrucci was the principal investigator for the team that led IBM Watson to its landmark Jeopardy success, awakening the world to the possibilities of AI. We pull back the curtain on AI for a wide-ranging discussion on explainable models, and the next generation of human and machine collaboration creating AI thought partners with limitless applications.

David Ferrucci

Founder, CEO, & Chief Scientist, Elemental Cognition

5:15 PM

Closing Remarks

Closing Toast (5:25 p.m. – 5:45 p.m.)

Read Full Post »

@MIT Artificial intelligence system rapidly predicts how two proteins will attach: The model, called EquiDock, focuses on rigid-body docking, which occurs when two proteins attach by rotating or translating in 3D space, but their shapes don’t squeeze or bend

Reporter: Aviva Lev-Ari, PhD, RN

This paper introduces a novel SE(3) equivariant graph matching network, along with a keypoint discovery and alignment approach, for the problem of protein-protein docking, with a novel loss based on optimal transport. The overall consensus is that this is an impactful solution to an important problem, whereby competitive results are achieved without the need for templates, refinement, and are achieved with substantially faster run times.
Published 28 Sept 2021 (modified: 18 Nov 2021). ICLR 2022 Spotlight.
 
Keywords: protein complexes, protein structure, rigid body docking, SE(3) equivariance, graph neural networks

Abstract: Protein complex formation is a central problem in biology, being involved in most of the cell’s processes, and essential for applications such as drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no three-dimensional flexibility during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right location and the right orientation relative to the second protein. We mathematically guarantee that the predicted complex is always identical regardless of the initial placements of the two structures, avoiding expensive data augmentation. Our model approximates the binding pocket and predicts the docking pose using keypoint matching and alignment through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements over existing protein docking software and predict qualitatively plausible protein complex structures despite not using heavy sampling, structure refinement, or templates.

One-sentence summary: We perform rigid protein docking using a novel independent SE(3)-equivariant message passing mechanism that guarantees the same resulting protein complex independent of the initial placement of the two 3D structures.
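The Kabsch algorithm mentioned in the abstract computes the optimal rigid transform superimposing matched keypoints. Below is a minimal NumPy sketch of the classic Kabsch procedure; it is independent of the authors' code, and the variable names and toy usage are illustrative only.

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t minimizing sum ||R p_i + t - q_i||^2.

    P, Q: (N, 3) arrays of paired points (e.g. matched keypoints).
    A plain NumPy sketch of the classic Kabsch algorithm, not
    EquiDock's differentiable implementation.
    """
    # Center both point clouds on their centroids.
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean
    # SVD of the 3x3 cross-covariance matrix gives the optimal rotation.
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # Correct a possible reflection so that det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Because the singular value decomposition has well-defined gradients away from degenerate spectra, the same computation can be made end-to-end differentiable inside a neural network, which is how the paper uses it.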
 
SOURCE
 

MIT researchers created a machine-learning model that can directly predict the complex that will form when two proteins bind together. Their technique is between 80 and 500 times faster than state-of-the-art software methods, and often predicts protein structures that are closer to actual structures that have been observed experimentally.

This technique could help scientists better understand some biological processes that involve protein interactions, like DNA replication and repair; it could also speed up the process of developing new medicines.

“Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them. This deep-learning model can learn these types of interactions from data,” says Octavian-Eugen Ganea, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

Ganea’s co-lead author is Xinyuan Huang, a graduate student at ETH Zurich. MIT co-authors include Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in CSAIL, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering in CSAIL and a member of the Institute for Data, Systems, and Society. The research will be presented at the International Conference on Learning Representations.

Significance of the Scientific Development by the @MIT Team

EquiDock wide applicability:

  • Our method can be integrated end-to-end to boost the quality of other models (see above discussion on runtime importance). Examples are predicting functions of protein complexes [3] or their binding affinity [5], de novo generation of proteins binding to specific targets (e.g., antibodies [6]), modeling backbone and side-chain flexibility [4], or devising methods for non-binary multimers. See the updated discussion in the “Conclusion” section of our paper.

 

Advantages over previous methods:

  • Our method does not rely on templates or heavy candidate sampling [7], aiming at the ambitious goal of predicting the complex pose directly. This should be interpreted in terms of generalization (to unseen structures) and scalability capabilities of docking models, as well as their applicability to various other tasks (discussed above).

 

  • Our method obtains a competitive quality without explicitly using previous geometric (e.g., 3D Zernike descriptors [8]) or chemical (e.g., hydrophilic information) features [3]. Future EquiDock extensions would find creative ways to leverage these different signals and, thus, obtain more improvements.

   

Novelty of theory:

  • Our work is the first to formalize the notion of pairwise independent SE(3)-equivariance. Previous work (e.g., [9,10]) has incorporated only single object Euclidean-equivariances into deep learning models. For tasks such as docking and binding of biological objects, it is crucial that models understand the concept of multi-independent Euclidean equivariances.
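As a toy numerical illustration of the ingredient behind pairwise-independent SE(3) equivariance (this is a sketch, not the authors' model): features built from each structure's internal distance matrix are unchanged when the two proteins are moved by their own, independent rigid transforms. Representations with this property are what allow a model to guarantee the same docked complex for any initial placement.

```python
import numpy as np

def random_rigid(rng):
    """A random rotation (via QR) plus a random translation: an arbitrary SE(3) element."""
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(R) < 0:  # ensure a proper rotation, det(R) = +1
        R[:, 0] *= -1
    return R, rng.normal(size=3)

def invariant_features(X, Y):
    """Each protein's internal pairwise-distance matrix, which ignores its global pose."""
    dX = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    dY = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    return dX, dY

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
f_before = invariant_features(X, Y)
# Move each protein by its own independent rigid transform.
(Rx, tx), (Ry, ty) = random_rigid(rng), random_rigid(rng)
f_after = invariant_features(X @ Rx.T + tx, Y @ Ry.T + ty)
assert all(np.allclose(a, b) for a, b in zip(f_before, f_after))
```

Single-object equivariant models respect one such transform; the point of the paper's formalization is that docking requires respecting two of them independently.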

  • All propositions in Section 3 are our novel theoretical contributions.

  • We have rewritten the Contribution and Related Work sections to clarify this aspect.

   


Footnote [a]: We have fixed an important bug in the cross-attention code. We have done a more extensive hyperparameter search and understood that layer normalization is crucial in the layers used in Eqs. 5 and 9, but not on the h embeddings as originally shown in Eq. 10. We have seen benefits from training our models with a longer patience in the early-stopping criterion (30 epochs for DIPS and 150 epochs for DB5). Increasing the learning rate to 2e-4 is important to speed up training. Using an intersection loss weight of 10 leads to improved results compared to the default of 1.
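The "patience" in the early-stopping criterion above is the number of epochs training may continue without validation improvement before it is halted. A generic sketch of such a criterion (not the authors' implementation; the class and method names are illustrative):

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs.

    Mirrors the kind of criterion described in the footnote (e.g. patience of
    30 epochs for DIPS, 150 for DB5); a generic sketch, not EquiDock's code.
    """

    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

With patience 3, for example, the losses 1.0, 0.9, 0.95, 0.95, 0.95 trigger a stop only after the third consecutive epoch without improvement.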

 

Bibliography:

[1] Protein-ligand blind docking using QuickVina-W with inter-process spatio-temporal integration, Hassan et al., 2017

[2] GNINA 1.0: molecular docking with deep learning, McNutt et al., 2021

[3] Protein-protein and domain-domain interactions, Kangueane and Nilofer, 2018

[4] Side-chain Packing Using SE(3)-Transformer, Jindal et al., 2022

[5] Contacts-based prediction of binding affinity in protein–protein complexes, Vangone et al., 2015

[6] Iterative refinement graph neural network for antibody sequence-structure co-design, Jin et al., 2021

[7] Hierarchical, rotation-equivariant neural networks to select structural models of protein complexes, Eismann et al, 2020

[8] Protein-protein docking using region-based 3D Zernike descriptors, Venkatraman et al., 2009

[9] SE(3)-transformers: 3D roto-translation equivariant attention networks, Fuchs et al, 2020

[10] E(n) equivariant graph neural networks, Satorras et al., 2021

[11] Fast end-to-end learning on protein surfaces, Sverrisson et al., 2020

SOURCE

https://openreview.net/forum?id=GQjaI9mLet

Read Full Post »

Reporter: Stephen J. Williams, Ph.D.

From: Heidi Rheim et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. (2021): Cell Genomics, Volume 1 Issue 2.

Source: DOI:https://doi.org/10.1016/j.xgen.2021.100029

Highlights

  • Siloing genomic data in institutions/jurisdictions limits learning and knowledge
  • GA4GH policy frameworks enable responsible genomic data sharing
  • GA4GH technical standards ensure interoperability, broad access, and global benefits
  • Data sharing across research and healthcare will extend the potential of genomics

Summary

The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

For genomic and personalized medicine to come to fruition, it is imperative that data silos around the world be broken down, enabling international collaboration in the collection, storage, transfer, access, and analysis of molecular and health-related data.

We have discussed in numerous articles on this site the problems data silos produce. By data silos we mean that not only data but also intellectual output are collected and stored behind physical, electronic, and institutional walls, inaccessible to scientists who do not belong to a particular institution or collaborative network.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Standardization and harmonization of data is key to this effort to sharing electronic records. The EU has taken bold action in this matter. The following section is about the General Data Protection Regulation of the EU and can be found at the following link:

https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en

Fundamental rights

The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.

Protection of personal data

Legislation

The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU and regardless of where their data is processed.

The General Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.

The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.

The regulation entered into force on 24 May 2016 and applies since 25 May 2018. More information for companies and individuals.

Information about the incorporation of the General Data Protection Regulation (GDPR) into the EEA Agreement.

EU Member States notifications to the European Commission under the GDPR

The Data Protection Law Enforcement Directive

Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.

The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.

The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.

The following paper by the organization the Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has over 2,000 subscribers in over 90 countries, encompassing over 60 organizations.

Enabling responsible genomic data sharing for the benefit of human health

The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.

GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.

From the article on Cell Genomics GA4GH: International policies and standards for data sharing across genomic research and healthcare

Source: Open Access. DOI: https://doi.org/10.1016/j.xgen.2021.100029

The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013 [3], the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org).

In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives (“Driver Projects”). We review how GA4GH is addressing the major areas in which genomics is currently deployed including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.

GA4GH organization

GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.

Figure 1: Matrix structure of the Global Alliance for Genomics and Health


Box 1
GA4GH Work Stream focus areas

The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs listed in the web resources).

  • (1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country [9, 10]
  • (2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge [18]
  • (3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
  • (4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
  • (5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
  • (6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR)
  • (7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data is most impactful through rich metadata collected in a standardized way
  • (8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing

For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

UK Biobank Makes Available 200,000 whole genomes Open Access

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

UPDATED on 11/5/2021

Introducing Isomorphic Labs

I believe we are on the cusp of an incredible new era of biological and medical research. Last year DeepMind’s breakthrough AI system AlphaFold2 was recognised as a solution to the 50-year-old grand challenge of protein folding, capable of predicting the 3D structure of a protein directly from its amino acid sequence to atomic-level accuracy. This has been a watershed moment for computational and AI methods for biology.
Building on this advance, today, I’m thrilled to announce the creation of a new Alphabet company – Isomorphic Labs – a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach and, ultimately, to model and understand some of the fundamental mechanisms of life.

For over a decade DeepMind has been in the vanguard of advancing the state-of-the-art in AI, often using games as a proving ground for developing general purpose learning systems, like AlphaGo, our program that beat the world champion at the complex game of Go. We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself. One of the most important applications of AI that I can think of is in the field of biological and medical research, and it is an area I have been passionate about addressing for many years. Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.

An AI-first approach to drug discovery and biology
The pandemic has brought to the fore the vital work that brilliant scientists and clinicians do every day to understand and combat disease. We believe that the foundational use of cutting edge computational and AI methods can help scientists take their work to the next level, and massively accelerate the drug discovery process. AI methods will increasingly be used not just for analysing data, but to also build powerful predictive and generative models of complex biological phenomena. AlphaFold2 is an important first proof point of this, but there is so much more to come. 
At its most fundamental level, I think biology can be thought of as an information processing system, albeit an extraordinarily complex and dynamic one. Taking this perspective implies there may be a common underlying structure between biology and information science – an isomorphic mapping between the two – hence the name of the company. Biology is likely far too complex and messy to ever be encapsulated as a simple set of neat mathematical equations. But just as mathematics turned out to be the right description language for physics, biology may turn out to be the perfect type of regime for the application of AI.

What’s next for Isomorphic Labs
This is just the beginning of what we hope will become a radical new approach to drug discovery, and I’m incredibly excited to get this ambitious new commercial venture off the ground and to partner with pharmaceutical and biomedical companies. I will serve as CEO for Isomorphic’s initial phase, while remaining as DeepMind CEO, partially to help facilitate collaboration between the two companies where relevant, and to set out the strategy, vision and culture of the new company. This will of course include the building of a world-class multidisciplinary team, with deep expertise in areas such as AI, biology, medicinal chemistry, biophysics, and engineering, brought together in a highly collaborative and innovative environment. (We are hiring!)
As pioneers in the emerging field of ‘digital biology’, we look forward to helping usher in an amazingly productive new age of biomedical breakthroughs. Isomorphic’s mission could not be a more important one: to use AI to accelerate drug discovery, and ultimately, find cures for some of humanity’s most devastating diseases.

SOURCE

https://www.isomorphiclabs.com/blog

DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

DeepMind plans to release hundreds of millions of protein structures for free

James Vincent July 22, 2021 11:00 am

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.
Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.
“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”



Two examples of protein structures predicted by AlphaFold (in blue) compared with experimental results (in green). 
Image: DeepMind


There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”
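Individual predictions can also be fetched one at a time from the public AlphaFold Database by UniProt accession. A small sketch follows; the AF-{accession}-F1-model_v{n} file-naming pattern and the version suffix are assumptions based on the database at the time of writing, so check the AlphaFold DB documentation if a download fails.

```python
from urllib.request import urlretrieve  # used only if you uncomment the download line

ALPHAFOLD_FILES = "https://alphafold.ebi.ac.uk/files"

def prediction_url(uniprot_accession, version=4, fmt="pdb"):
    """Build the download URL for one AlphaFold DB prediction (assumed naming scheme)."""
    return f"{ALPHAFOLD_FILES}/AF-{uniprot_accession}-F1-model_v{version}.{fmt}"

# Example: human hemoglobin subunit alpha (UniProt accession P69905).
url = prediction_url("P69905")
# urlretrieve(url, "AF-P69905-F1-model_v4.pdb")  # uncomment to actually download
```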
After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.
The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

The benefits of protein folding


Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.
New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.
Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.
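These per-prediction confidence scores (pLDDT, on a 0–100 scale) are written into the B-factor column of the released PDB files, so they can be read with ordinary text parsing. A minimal sketch is below; the two ATOM records are illustrative stand-ins, not real AlphaFold output:

```python
# Minimal sketch: reading per-residue confidence (pLDDT) from the
# B-factor column of a PDB file. These ATOM records are fabricated
# examples in standard PDB fixed-width layout, not real model output.
pdb_lines = [
    "ATOM      1  CA  MET A   1      11.104   6.134  -6.504  1.00 92.50           C",
    "ATOM      2  CA  ALA A   2      12.560   7.420  -5.330  1.00 45.10           C",
]

def plddt_scores(lines):
    """Extract B-factor (pLDDT) values from CA atoms in PDB ATOM records."""
    scores = []
    for line in lines:
        # PDB fixed-width format: atom name in cols 13-16, B-factor in cols 61-66
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            scores.append(float(line[60:66]))
    return scores

scores = plddt_scores(pdb_lines)
confident = [s for s in scores if s >= 70]  # a commonly used "confident" cutoff
print(scores)  # [92.5, 45.1]
```

Filtering on a confidence threshold like this is one simple way to decide which predicted regions merit experimental follow-up.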

Example protein structures predicted by AlphaFold.
Image: DeepMind
Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.
Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.
Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 different varieties in the human body. Because an individual protein can comprise hundreds of amino acids, each of which can fold and twist in different directions, a molecule’s final structure has an incredibly large number of possible configurations. One estimate is that the typical protein can be folded in 10^300 ways — that’s a 1 followed by 300 zeroes.
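A back-of-the-envelope version of that estimate takes only a few lines. The specific numbers assumed here, roughly 300 residues with roughly 10 conformations each, are illustrative round figures, not values from the article:

```python
# Back-of-the-envelope version of the folding-space estimate quoted above.
# Assumptions (illustrative only): a typical protein of ~300 residues,
# each able to adopt ~10 distinct conformations.
residues = 300
conformations_per_residue = 10

total_configurations = conformations_per_residue ** residues
print(total_configurations == 10 ** 300)  # a 1 followed by 300 zeroes

# Even sampling a trillion configurations per second, exhaustive search
# would take vastly longer than the age of the universe (~4e17 seconds).
seconds_needed = total_configurations // 10 ** 12
print(seconds_needed > 4 * 10 ** 17)
```

This is why brute-force enumeration was never viable and the field turned to experimental methods and, more recently, learned predictors.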


Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.
In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition. 
Image: DeepMind
Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis.


The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.


Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”
Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


SOURCE

https://www.theverge.com/platform/amp/2021/7/22/22586578/deepmind-alphafold-ai-protein-folding-human-proteome-released-for-free?__twitter_impression=true

Potential Use of Protein Folding Predictions for Drug Discovery

PROTAC Technology: Opportunities and Challenges

  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240. Publication Date: March 12, 2020. https://doi.org/10.1021/acsmedchemlett.9b00597. Copyright © 2020 American Chemical Society.

Abstract

PROTAC-induced targeted protein degradation has emerged as a novel therapeutic strategy in drug development and has attracted the interest of academic institutions, large pharmaceutical enterprises (e.g., AstraZeneca, Bayer, Novartis, Amgen, Pfizer, GlaxoSmithKline, Merck, and Boehringer Ingelheim), and biotechnology companies. PROTACs have opened a new chapter in novel drug development. However, any new technology faces new problems and challenges. Perspectives on the potential opportunities and challenges of PROTACs will contribute to the research and development of new protein-degradation drugs and degrader tools.

Although PROTAC technology has a bright future in drug development, it also has many challenges as follows:
(1)
To date, only one example of a PROTAC against an “undruggable” target has been reported; (18) more cases are needed to prove the advantages of PROTACs against “undruggable” targets in the future.
(2)
“Molecular glue”, existing in nature, represents the mechanism of stabilized protein–protein interactions through small molecule modulators of E3 ligases. For instance, auxin, the plant hormone, binds to the ligase SCF-TIR1 to drive recruitment of Aux/IAA proteins and subsequently triggers its degradation. In addition, some small molecules that induce targeted protein degradation through “molecular glue” mode of action have been reported. (21,22) Furthermore, it has been recently reported that some PROTACs may actually achieve target protein degradation via a mechanism that includes “molecular glue” or via “molecular glue” alone. (23) How to distinguish between these two mechanisms and how to combine them to work together is one of the challenges for future research.
(3)
Since PROTAC acts in a catalytic mode, traditional methods cannot accurately evaluate the pharmacokinetics (PK) and pharmacodynamics (PD) properties of PROTACs. Thus, more studies are urgently needed to establish PK and PD evaluation systems for PROTACs.
(4)
How to quickly and effectively screen for target protein ligands that can be used in PROTACs, especially those targeting protein–protein interactions, is another challenge.
(5)
How to understand the degradation activity, selectivity, and possible off-target effects (based on different targets, different cell lines, and different animal models) and how to rationally design PROTACs etc. are still unclear.
(6)
The human genome encodes more than 600 E3 ubiquitin ligases. However, only a very few E3 ligases (VHL, CRBN, cIAPs, and MDM2) have been used in the design of PROTACs. How to expand the scope of usable E3 ubiquitin ligases is another challenge faced in this area.

PROTAC technology is rapidly developing, and with the joint efforts of the vast number of scientists in both academia and industry, these problems shall be solved in the near future.

PROTACs have opened a new chapter for the development of new drugs and novel chemical knockdown tools and brought unprecedented opportunities to the industry and academia, which are mainly reflected in the following aspects:
(1)
Overcoming drug resistance in cancer. In addition to traditional chemotherapy, kinase inhibitors have developed rapidly over the past 20 years. (12) Although kinase inhibitors are very effective in cancer therapy, patients often develop drug resistance and, consequently, disease recurrence. PROTACs show greater advantages in drug-resistant cancers by degrading the whole target protein. For example, ARCC-4, targeting the androgen receptor, could overcome enzalutamide-resistant prostate cancer, (13) and L18I, targeting BTK, could overcome the C481S mutation. (14)
(2)
Eliminating both the enzymatic and nonenzymatic functions of kinase. Traditional small molecule inhibitors usually inhibit the enzymatic activity of the target, while PROTACs affect not only the enzymatic activity of the protein but also nonenzymatic activity by degrading the entire protein. For example, FAK possesses the kinase dependent enzymatic functions and kinase independent scaffold functions, but regulating the kinase activity does not successfully inhibit all FAK function. In 2018, a highly effective and selective FAK PROTAC reported by Craig M. Crews’ group showed a far superior activity to clinical candidate drug in cell migration and invasion. (15) Therefore, PROTAC can expand the druggable space of the existing targets and regulate proteins that are difficult to control by traditional small molecule inhibitors.
(3)
Degrade the “undruggable” protein target. At present, only 20–25% of known protein targets (including kinases, G protein-coupled receptors (GPCRs), nuclear hormone receptors, and ion channels) can be targeted using conventional drug discovery technologies. (16,17) Proteins that lack catalytic activity and/or have catalysis-independent functions are still regarded as “undruggable” targets. The involvement of Signal Transducer and Activator of Transcription 3 (STAT3) in multiple signaling pathways makes it an attractive therapeutic target; however, the lack of an obviously druggable site on the surface of STAT3 has limited the development of STAT3 inhibitors. Thus, there are still no effective drugs directly targeting STAT3 approved by the Food and Drug Administration (FDA). In November 2019, Shaomeng Wang’s group first reported a potent PROTAC targeting STAT3 with potent biological activities in vitro and in vivo. (18) This successful case confirms the key potential of PROTAC technology, especially in the field of “undruggable” targets such as K-Ras, a tricky tumor target activated by multiple mutations (G12A, G12C, G12D, G12S, G12V, G13C, and G13D) in the clinic. (19)
(4)
Fast and reversible chemical knockdown strategy in vivo. Traditional genetic protein knockout technologies, zinc-finger nuclease (ZFN), transcription activator-like effector nuclease (TALEN), or CRISPR-Cas9, usually have a long cycle, irreversible mode of action, and high cost, which brings a lot of inconvenience for research, especially in nonhuman primates. In addition, these genetic animal models sometimes produce phenotypic misunderstanding due to potential gene compensation or gene mutation. More importantly, the traditional genetic method cannot be used to study the function of embryonic-lethal genes in vivo. Unlike DNA-based protein knockout technology, PROTACs knock down target proteins directly, rather than acting at the genome level, and are suitable for the functional study of embryonic-lethal proteins in adult organisms. In addition, PROTACs provide exquisite temporal control, allowing the knockdown of a target protein at specific time points and enabling the recovery of the target protein after withdrawal of drug treatment. As a new, rapid and reversible chemical knockdown method, PROTAC can be used as an effective supplement to the existing genetic tools. (20)

SOURCE

PROTAC Technology: Opportunities and Challenges
  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240

Goal in Drug Design: Eliminating both the enzymatic and nonenzymatic functions of kinase.

Work-in-Progress

Induction and Inhibition of Protein in Galectins Drug Design

Work-in-Progress

Screening Proteins in DeepMind’s AlphaFold DataBase


Work-in-Progress

Other related research published in this Open Access Online Journal include the following:

Synthetic Biology in Drug Discovery

Peroxisome proliferator-activated receptor (PPAR-gamma) Receptors Activation: PPARγ transrepression  for Angiogenesis in Cardiovascular Disease and PPARγ transactivation for Treatment of Diabetes

Read Full Post »

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

   In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for the analysis of medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as have issues arising from multi-center trials.  The authors of this perspective article argue that the choice of algorithm type (explainable versus interpretable) may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1.  Interpretable AI: A typical AI/ML task requires constructing an algorithm that takes vector inputs and generates an output related to an outcome (like diagnosing a cardiac event from an image).  Generally the algorithm has to be trained on past data with known parameters.  When an algorithm is called interpretable, it uses a transparent or “white box” function that is easily understandable, for example a simple linear function relating inputs to the outcome.  Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML:  This type of approach depends on many complex parameters. A first round of predictions comes from a “black box” model; a second, interpretable algorithm is then trained, not on the original data but on the black-box model’s predictions, to approximate the first model’s outputs.  This method is more accurate, or deemed more reliable, in prediction; however, it is very complex and not easily understood.  Many medical devices that use an AI/ML algorithm are of this type.  Examples include deep learning and neural networks.
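A minimal sketch of the contrast between the two approaches, in plain Python. The “black box” here is just an opaque nonlinear function standing in for a deep model, and the post-hoc explanation is a linear surrogate fitted to the black box’s predictions rather than to the original data:

```python
# Sketch: a post-hoc interpretable surrogate for a black-box model.
def black_box(x):
    """Opaque model: we only observe its outputs, not its internals."""
    return 0.5 * x ** 2 + x  # stands in for a deep net

# Query the black box on sample inputs (not the original training data).
xs = [i / 10 for i in range(-20, 21)]
ys = [black_box(x) for x in xs]

# "Explainable" step: fit an interpretable linear surrogate y ~ a*x + b
# to the black box's predictions via ordinary least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
a = num / den
b = mean_y - a * mean_x

# The surrogate is readable (one slope, one intercept), but it is only
# an approximation of the black box, which is exactly the trade-off
# the Policy Forum article is concerned about.
print(round(a, 2), round(b, 2))
```

On this symmetric input grid the surrogate recovers slope 1.0 and intercept 0.7, a tidy summary that nonetheless misses the model’s curvature.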

Both methodologies aim to deal with the problem of opacity: the concern that predictions from a black box undermine trust in the AI.

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post of Jonathan Johnson

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significant each of the parameters, all the blue nodes, contribute to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
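The percentage breakdown above can be produced by normalizing the magnitudes of a model’s learned weights. The raw weights below are hypothetical, chosen only to reproduce the blog post’s example numbers:

```python
# Hypothetical learned weights for the four inputs listed above.
raw_weights = {
    "career_category": 0.8,
    "years_smoking": 0.7,
    "age": 0.3,
    "bmi_score": 0.2,
}

# Normalize absolute magnitudes into percentage importances.
total = sum(abs(w) for w in raw_weights.values())
importance = {name: 100 * abs(w) / total for name, w in raw_weights.items()}

for name, pct in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.0f}%")
# career_category: 40%, years_smoking: 35%, age: 15%, bmi_score: 10%
```

Real explainability tools work per-node and per-prediction rather than from a single weight vector, but the normalization idea is the same.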

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers to engineers is it creates a black box of parameters, like fake additional data points, that allow a model to base its decisions against. These fake data points go unknown to the engineer. The black box, or hidden layers, allow a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.

So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example inputting their own test data
  • More interaction between users and model generators
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 

Read Full Post »

AI is on the way to lead critical ED decisions on CT

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Artificial intelligence (AI) has infiltrated many organizational processes, raising concerns that robotic systems will eventually replace humans in many decision-making roles. The advent of AI as a tool for improving health care offers new prospects to improve patient and clinical team performance, reduce costs, and improve public health. Examples include, but are not limited to, automation; information synthesis for patients, “fRamily” (friends and family unpaid caregivers), and health care professionals; and suggestions and visualization of information for collaborative decision making.

In the emergency department (ED), patients with Crohn’s disease (CD) are routinely subjected to abdominopelvic computed tomography (APCT). It is necessary to diagnose clinically actionable findings (CAF), since these may require immediate, typically surgical, intervention. Repeated APCTs, on the other hand, result in higher ionizing radiation exposure. The majority of APCT guidance is clinical and empiric, and emergency surgeons struggle to identify which Crohn’s disease patients actually require a CT scan to determine the source of acute abdominal distress.

Image Courtesy: Jim Coote via Pixabay https://www.aiin.healthcare/media/49446

Aid seems to be on the way. Researchers employed machine learning to accurately distinguish these patients from Crohn’s patients who present with the same complaint but can safely avoid the repeated exposure to contrast material and ionizing radiation that CT would otherwise impose on them.

The study entitled “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department” was published on July 9 in Digestive and Liver Disease by gastroenterologists and radiologists at Tel Aviv University in Israel.

Retrospectively, Jacob Ollech and his fellow researchers analyzed 101 emergency department visits of patients with Crohn’s disease who underwent abdominopelvic CT.

They were looking for examples where a scan revealed clinically actionable results. These were classified as intestinal blockage, perforation, intra-abdominal abscess, or complex fistula by the researchers.

On CT, 44 (43.5%) of the 101 cases reviewed had such findings.

Ollech and colleagues utilized a machine-learning technique to design a decision-support tool that required only four basic clinical factors to test an AI approach for making the call.

The approach was successful in categorizing patients into low- and high-risk groupings. The researchers were able to risk-stratify patients based on the likelihood of clinically actionable findings on abdominopelvic CT as a result of their success.
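The article does not publish the model or its coefficients, so any concrete reconstruction is guesswork. The sketch below only shows the general shape of such a decision-support tool: a logistic score over a handful of clinical inputs, thresholded into risk groups. The feature names, weights, and cutoffs are entirely hypothetical:

```python
import math

# Entirely hypothetical coefficients for illustration; the study's real
# features and weights are not reported in this summary.
WEIGHTS = {"intercept": -2.0, "crp": 0.04, "wbc": 0.15,
           "prior_surgery": 0.9, "days_of_pain": 0.2}

def risk_score(crp, wbc, prior_surgery, days_of_pain):
    """Logistic probability of clinically actionable CT findings (sketch)."""
    z = (WEIGHTS["intercept"] + WEIGHTS["crp"] * crp + WEIGHTS["wbc"] * wbc
         + WEIGHTS["prior_surgery"] * prior_surgery
         + WEIGHTS["days_of_pain"] * days_of_pain)
    return 1 / (1 + math.exp(-z))

def stratify(p, low=0.2, high=0.6):
    """Map a probability to the low / intermediate / high groups described above."""
    return "low" if p < low else "high" if p >= high else "intermediate"

p = risk_score(crp=60, wbc=14, prior_surgery=1, days_of_pain=3)
print(stratify(p))  # this illustrative patient lands in the high-risk group
```

In practice such thresholds would be chosen from the study cohort to trade off missed findings against avoidable scans.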

Ollech and co-authors admit that their limited sample size, retrospective strategy, and lack of external validation are shortcomings.

Moreover, several patients fell into an intermediate risk category, implying that a standard workup would have been required to guide CT decision-making in a real-world situation anyhow.

Consequently, they generate the following conclusion:

We believe this study shows that a machine learning-based tool is a sound approach for better-selecting patients with Crohn’s disease admitted to the ED with acute gastrointestinal complaints about abdominopelvic CT: reducing the number of CTs performed while ensuring that patients with high risk for clinically actionable findings undergo abdominopelvic CT appropriately.

Main Source:

Konikoff, Tom, Idan Goren, Marianna Yalon, Shlomit Tamir, Irit Avni-Biron, Henit Yanai, Iris Dotan, and Jacob E. Ollech. “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department.” Digestive and Liver Disease (2021). https://www.sciencedirect.com/science/article/abs/pii/S1590865821003340

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI App for People with Digestive Disorders

Reporter: Irina Robu, Ph.D.

https://pharmaceuticalintelligence.com/2019/06/24/ai-app-for-people-with-digestive-disorders/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, Ph.D.

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Yet another Success Story: Machine Learning to predict immunotherapy response

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/07/06/yet-another-success-story-machine-learning-to-predict-immunotherapy-response/

Systemic Inflammatory Diseases as Crohn’s disease, Rheumatoid Arthritis and Longer Psoriasis Duration May Mean Higher CVD Risk

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2017/10/09/systemic-inflammatory-diseases-as-crohns-disease-rheumatoid-arthritis-and-longer-psoriasis-duration-may-mean-higher-cvd-risk/

Autoimmune Inflammatory Bowel Diseases: Crohn’s Disease & Ulcerative Colitis: Potential Roles for Modulation of Interleukins 17 and 23 Signaling for Therapeutics

Curators: Larry H Bernstein, MD FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/01/23/autoimmune-inflammtory-bowl-diseases-crohns-disease-ulcerative-colitis-potential-roles-for-modulation-of-interleukins-17-and-23-signaling-for-therapeutics/

Inflammatory Disorders: Inflammatory Bowel Diseases (IBD) – Crohn’s and Ulcerative Colitis (UC) and Others

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/gama-delta-epsilon-gde-is-a-global-holding-company-absorbing-lpbi/subsidiary-5-joint-ventures-for-ip-development-jvip/drug-discovery-with-3d-bioprinting/ibd-inflammatory-bowl-diseases-crohns-and-ulcerative-colitis/

Read Full Post »

This AI Just Evolved From Companion Robot To Home-Based Physician Helper

Reporter: Ethan Coomber, Research Assistant III, Data Science and Podcast Library Development 

Article Author: Gil Press Senior Contributor Enterprise & Cloud @Forbes 

Twitter: @GilPress I write about technology, entrepreneurs and innovation.

Intuition Robotics announced today that it is expanding its mission of improving the lives of older adults to include enhancing their interactions with their physicians. The Israeli startup has developed the AI-based, award-winning proactive social robot ElliQ which has spent over 30,000 days in older adults’ homes over the past two years. Now ElliQ will help increase patient engagement while offering primary care providers continuous actionable data and insights for early detection and intervention.

The very big challenge Intuition Robotics set out to solve was to “understand how to create a relationship between a human and a machine,” says co-founder and CEO Dor Skuler. Unlike a number of unsuccessful high-profile social robots (e.g., Pepper) that tried to perform multiple functions in multiple settings, ElliQ has focused exclusively on older adults living alone. Understanding empathy and how to grow a trusting relationship were the key objectives of Intuition Robotics’ research project, as well as how to continuously learn the specific (and changing) behavioral characteristics, habits, and preferences of the older adults participating in the experiment.

The results are impressive: 90% of users engage with ElliQ every day, without deterioration in engagement over time. When ElliQ proactively initiates deep conversational interactions with its users, there is a 70% response rate. Most importantly, the participants share something personal with ElliQ almost every day. “She has picked up my attitude… she’s figured me out,” says Deanna Dezern, an ElliQ user who describes her robot companion as “my sister from another mother.”

Higher patient engagement leads to lower costs of delivering care and the quality of the physician-patient relationship is positively associated with improved functional health, studies have found. Typically, however, primary care physicians see their patients anywhere from once a month to once a year, even though about 85% of seniors in the U.S. have at least one chronic health condition. ElliQ, with the consent of its users, can provide data on the status of patients in between office visits and facilitate timely and consistent communications between physicians and their patients.

Supporting the notion of a home-based physician assistant robot is the transformation of healthcare delivery in the U.S. More and more primary care physicians are moving from a fee-for-service business model, where doctors are paid according to the procedures used to treat a patient, to “capitation,” where doctors are paid a set amount for each patient they see. This shift in how doctors are compensated is gaining momentum as a key solution for reducing the skyrocketing costs of healthcare: “…inadequate, unnecessary, uncoordinated, and inefficient care and suboptimal business processes eat up at least 35%—and maybe over 50%—of the more than $3 trillion that the country spends annually on health care. That suggests more than $1 trillion is being squandered,” states “The Case for Capitation,” a Harvard Business Review article.
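The HBR figures quoted above can be checked with a line of arithmetic; this is a minimal illustrative sketch (the $3 trillion total and the 35%–50% waste range are taken from the quote, not independently verified):

```python
# Sanity check of the HBR estimate: 35%-50% of $3 trillion in annual
# U.S. healthcare spending is attributed to inadequate, unnecessary,
# uncoordinated, and inefficient care.
total_spend_usd = 3e12  # annual U.S. healthcare spending, per the quote

low_waste = 0.35 * total_spend_usd   # lower bound of the waste estimate
high_waste = 0.50 * total_spend_usd  # upper bound of the waste estimate

print(f"Estimated waste: ${low_waste / 1e12:.2f}-${high_waste / 1e12:.2f} trillion")
# Even the lower bound exceeds $1 trillion, consistent with the article's
# claim that "more than $1 trillion is being squandered."
```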

Under this new business model, physicians have a strong incentive to reduce or eliminate visits to the ER and hospitalization, so ElliQ’s assistance in early intervention and support of proactive and preventative healthcare is highly valuable. ElliQ’s “new capabilities provide physicians with visibility into the patient’s condition at home while allowing seamless communication… can assist me and my team in early detection and mitigation of health issues, and it increases patients’ involvement in their care through more frequent engagement and communication,” Dr. Peter Barker of Family Doctors, a Mass General Brigham-affiliated practice in Swampscott, MA, that is working with Intuition Robotics, said in a statement.

With the new stage in its evolution, ElliQ becomes “a conversational agent for self-reported data on how people are doing based on what the doctor is telling us to look for and, at the same time, a super-simple communication channel between the physician and the patient,” says Skuler. As only 20% of the individual’s health has to do with the administration of healthcare, Skuler says the balance is already taken care of by ElliQ—encouraging exercise, watching nutrition, keeping mentally active, connecting to the outside world, and promoting a sense of purpose.

A recent article in Communications of the ACM pointed out that “usability concerns have for too long overshadowed questions about the usefulness and acceptability of digital technologies for older adults.” Specifically, the authors challenge the long-held assumption that accessibility and aging research “fall under the same umbrella despite the fact that aging is neither an illness nor a disability.”

For Skuler, Intuition Robotics’ offering represents a “pyramid of value.” At the foundation is the physical product, easy to use and operate and doing what it is expected to do. Then there is the layer of “building relationships based on trust and empathy,” with a lot of humor and social interaction and activities for the users. On top are specific areas of value to older adults, the first of which is healthcare. There will be more in the future, anything that could help older adults live better lives, such as direct connections to the local community. “Healthcare is an interesting experiment and I’m very much looking forward to seeing what else the future holds for ElliQ,” says Skuler.

Original. Reposted with permission, 7/7/2021.

Other related articles published in this Open Access Online Scientific Journal include the following:

The Future of Speech-Based Human-Computer Interaction
Reporter: Ethan Coomber
https://pharmaceuticalintelligence.com/2021/06/23/the-future-of-speech-based-human-computer-interaction/

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Read Full Post »
