Drug Discovery & Structural Biology: A Massively Multitask Networks Architecture – Collaboration between Stanford and Google

Reporter: Aviva Lev-Ari, PhD, RN

Massively Multitask Networks for Drug Discovery

Bharath Ramsundar*,†,◦ RBHARATH@STANFORD.EDU

Steven Kearnes*,† KEARNES@STANFORD.EDU

Patrick Riley◦ PFR@GOOGLE.COM

Dale Webster◦ DRW@GOOGLE.COM

David Konerding◦ DEK@GOOGLE.COM

Vijay Pande† PANDE@STANFORD.EDU

( *Equal contribution, †Stanford University, ◦Google Inc.)

Abstract

Massively multitask neural architectures provide a learning framework for drug discovery that synthesizes information from many distinct biological sources. To train these architectures at scale, we gather large amounts of data from public sources to create a dataset of nearly 40 million measurements across more than 200 biological targets. We investigate several aspects of the multitask framework by performing a series of empirical studies and obtain some interesting results:

(1) massively multitask networks obtain predictive accuracies significantly better than single-task methods,

(2) the predictive power of multitask networks improves as additional tasks and data are added,

(3) the total amount of data and the total number of tasks both contribute significantly to multitask improvement, and

(4) multitask networks afford limited transferability to tasks not in the training set.

Our results underscore the need for greater data sharing and further algorithmic innovation to accelerate the drug discovery process.

SOURCE

http://arxiv.org/pdf/1502.02072v1.pdf
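To make the architecture described in the abstract concrete, the sketch below shows the general shape of a multitask network for virtual screening: a shared hidden layer computed from a compound's molecular fingerprint, feeding one small output head per biological assay (task). This is an illustrative sketch only, written in PyTorch with placeholder layer sizes and a placeholder task count; it is not the authors' exact model or code.

```python
# Minimal sketch (not the authors' implementation) of the multitask pattern:
# a shared trunk over a molecular fingerprint, plus one binary
# (active/inactive) output head per biological assay.
import torch
import torch.nn as nn


class MultitaskNet(nn.Module):
    def __init__(self, n_features=1024, n_tasks=200, hidden=1200):
        super().__init__()
        # Shared representation learned jointly from all tasks' data.
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # One independent output head per task.
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.trunk(x)
        # Logits of shape (batch, n_tasks), one column per assay.
        return torch.cat([head(h) for head in self.heads], dim=1)


# Example: score a batch of 8 compounds represented as 1024-bit fingerprints.
model = MultitaskNet()
probs = torch.sigmoid(model(torch.rand(8, 1024)))
```

The point of the shared trunk is that data from every task shapes a common representation of chemistry, which is what allows additional tasks and data to improve predictive power across the board.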

Large-Scale Machine Learning for Drug Discovery

Posted: Monday, March 02, 2015

Discovering new treatments for human diseases is an immensely complicated challenge. Even after extensive research to develop a biological understanding of a disease, an effective therapeutic that can improve the quality of life must still be found. This process often takes years of research, requiring the creation and testing of millions of drug-like compounds in an effort to find just a few viable drug treatment candidates. These high-throughput screens are often automated in sophisticated labs and are expensive to perform.

Recently, deep learning with neural networks has been applied in virtual drug screening [1,2,3], which attempts to replace or augment the high-throughput screening process with the use of computational methods in order to improve its speed and success rate [4]. Traditionally, virtual drug screening has used only the experimental data from the particular disease being studied. However, as the volume of experimental drug screening data across many diseases continues to grow, several research groups have demonstrated that data from multiple diseases can be leveraged with multitask neural networks to improve the virtual screening effectiveness.

In collaboration with the Pande Lab at Stanford University, we’ve released a paper titled “Massively Multitask Networks for Drug Discovery”, investigating how data from a variety of sources can be used to improve the accuracy of determining which chemical compounds would be effective drug treatments for a variety of diseases. In particular, we carefully quantified how the amount and diversity of screening data from a variety of diseases with very different biological processes can be used to improve the virtual drug screening predictions.

Using our large-scale neural network training system, we trained at a scale 18x larger than previous work, with a total of 37.8M data points across more than 200 distinct biological processes. Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data. In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future. The computation behind the paper represents more than 50M total CPU hours.

SOURCE
http://googleresearch.blogspot.com/2015/03/large-scale-machine-learning-for-drug.html
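A practical detail behind training on 37.8M measurements spread over 200+ tasks is that most compounds have only been screened against a small fraction of the assays, so the per-task label matrix is mostly empty. A common way to handle this, shown below as a hedged sketch rather than the paper's exact procedure, is to mark missing labels (here with NaN) and mask them out of the loss.

```python
# Sketch of a masked multitask loss: labels form a (batch, n_tasks) matrix
# with NaN wherever a compound was never measured against that assay.
# Only observed (non-NaN) entries contribute to the gradient.
# Shapes and sparsity level are placeholder assumptions.
import torch
import torch.nn.functional as F


def masked_multitask_loss(logits, labels):
    mask = ~torch.isnan(labels)                  # observed measurements only
    targets = torch.nan_to_num(labels, nan=0.0)  # dummy value for missing entries
    per_entry = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    )
    return (per_entry * mask).sum() / mask.sum().clamp(min=1)


# Example: 8 compounds, 200 tasks, roughly 90% of the labels missing.
logits = torch.randn(8, 200, requires_grad=True)
labels = torch.randint(0, 2, (8, 200)).float()
labels[torch.rand(8, 200) < 0.9] = float("nan")
loss = masked_multitask_loss(logits, labels)
loss.backward()  # gradients flow only through observed entries
```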

Google, Stanford say big data is key to deep learning for drug discovery

The researchers explain the premise of their methodology:

The efficacy of multitask learning is directly related to the availability of relevant data. Hence, obtaining greater amounts of data is of critical importance for improving the state of the art. Major pharmaceutical companies possess vast private stores of experimental measurements; our work provides a strong argument that increased data sharing could result in benefits for all.

More data will maximize the benefits achievable using current architectures, but in order for algorithmic progress to occur, it must be possible to judge the performance of proposed models against previous work. It is disappointing to note that all published applications of deep learning to virtual screening (that we are aware of) use distinct datasets that are not directly comparable. It remains to future research to establish standard datasets and performance metrics for this field.

. . .

Although deep learning offers interesting possibilities for virtual screening, the full drug discovery process remains immensely complicated. Can deep learning—coupled with large amounts of experimental data—trigger a revolution in this field? Considering the transformational effect that these methods have had on other fields, we are optimistic about the future.

SOURCE

https://gigaom.com/2015/03/02/google-stanford-say-big-data-is-key-to-deep-learning-for-drug-discovery/
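On the question of standard performance metrics raised above: for this kind of virtual-screening benchmark, the usual report is a per-task ROC AUC averaged over tasks. The snippet below is a small illustration of that metric on synthetic data; the numbers are random placeholders and have nothing to do with the paper's results.

```python
# Sketch of a per-task evaluation: ROC AUC computed independently for each
# assay (skipping assays without both classes observed), then averaged.
# Synthetic data only; not results from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score


def mean_task_auc(y_true, y_score):
    """y_true, y_score: (n_compounds, n_tasks); NaN in y_true = unmeasured."""
    aucs = []
    for t in range(y_true.shape[1]):
        observed = ~np.isnan(y_true[:, t])
        labels = y_true[observed, t]
        if observed.sum() == 0 or len(np.unique(labels)) < 2:
            continue  # AUC is undefined without both classes
        aucs.append(roc_auc_score(labels, y_score[observed, t]))
    return float(np.mean(aucs)), len(aucs)


rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, 20)).astype(float)
y_true[rng.random((500, 20)) < 0.8] = np.nan   # sparse measurements
y_score = rng.random((500, 20))                # stand-in model scores
mean_auc, n_scored = mean_task_auc(y_true, y_score)
print(f"mean ROC AUC over {n_scored} tasks: {mean_auc:.3f}")
```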
