
Posts Tagged ‘Artificial intelligence’


2,000 human brains yield clues to how genes raise risk for mental illnesses

Reporter: Irina Robu, PhD

It’s one thing to detect sites in the genome associated with mental disorders; it’s quite another to discover the biological mechanisms by which these changes in DNA act in the human brain to boost risk. In their first concerted effort to tackle that problem, 15 collaborating research teams of the National Institutes of Health-funded PsychENCODE Consortium evaluated data from nearly 2,000 human brains for clues to how genes raise risk for mental illnesses.
Applying newly uncovered secrets of the brain’s molecular architecture, they built an artificial intelligence model that is six times better than preceding ones at predicting risk for mental disorders. They also identified several hundred previously unknown risk genes for mental illnesses and linked many known risk variants to specific genes. In brain tissue and single cells, the researchers identified patterns of gene expression, marks of gene regulation, and genetic variants that can be linked to mental illnesses.
Dr. Nenad Sestan of Yale University explained that “the consortium’s integrative genomic analyses elucidate the mechanisms by which cellular diversity and patterns of gene expression change throughout development and reveal how neuropsychiatric risk genes are concentrated into distinct co-expression modules and cell types.” The implicated variants are typically small-effect genetic variations that fall within regions of the genome that do not code for proteins but instead are thought to regulate gene expression and other aspects of gene function.
In addition to the roughly 2,000 postmortem human brains, the researchers examined brain tissue from prenatal development, as well as tissue from people with schizophrenia or bipolar disorder and from typically developing individuals, and compared the findings with parallel data from non-human primates.

Their findings indicate that gene variants linked to mental illnesses exert larger effects when they act jointly in “modules” of co-expressed genes with related functions, and at specific developmental time points that seem to coincide with the course of illness. Variability in risk gene expression and cell types increases during formative stages in early prenatal development and again during the teen years. In addition, in postmortem brains of people with a mental illness, thousands of RNAs were found to have anomalies.
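For readers unfamiliar with the term, a co-expression “module” is a set of genes whose expression levels rise and fall together across samples. Below is a minimal, illustrative Python sketch of module detection by clustering a gene-gene correlation matrix; the data are random placeholders, and the consortium’s actual pipeline (e.g., WGCNA-style network analysis) is far more elaborate.

```python
# Toy co-expression "module" detection: cluster genes whose expression
# profiles are correlated across samples. A stand-in for methods such as
# WGCNA, not the PsychENCODE pipeline; the data here are random placeholders.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
expression = rng.normal(size=(50, 20))   # 50 genes x 20 samples (placeholder)

corr = np.corrcoef(expression)           # gene-gene correlation matrix
dist = 1.0 - np.abs(corr)                # strongly (anti)correlated genes are "close"
np.fill_diagonal(dist, 0.0)

tree = linkage(squareform(dist, checks=False), method="average")
modules = fcluster(tree, t=0.7, criterion="distance")   # module label per gene

for m in np.unique(modules):
    print(f"module {m}: {np.sum(modules == m)} genes")
```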

According to Geetha Senthil of the NIMH, the multi-omic data resource generated by the PsychENCODE collaboration will pave a path for building molecular models of disease and developmental processes, and may offer a platform for target identification in pharmaceutical research.

Source
https://www.nih.gov/news-events/news-releases/2000-human-brains-yield-clues-how-genes-raise-risk-mental-illnesses




Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare?

Curator: Stephen J. Williams, Ph.D.

Updated 12/18/2018

In the effort to reduce healthcare costs, increase accessibility of services for patients, and drive biomedical innovation, many healthcare and biotechnology professionals have looked to advances in digital technology to determine how IT can drive and extract greater value from the healthcare industry. Two areas of recent interest have focused on how best to use blockchain and artificial intelligence technologies to drive greater efficiencies in our healthcare and biotechnology industries.

More importantly, with the substantial increase in ‘omic data generated both in research and in the clinical setting, it has become imperative to develop ways to securely store and disseminate massive amounts of ‘omic data to the relevant parties (researchers or clinicians) in an efficient manner, while protecting personal privacy and adhering to international regulations. This is where blockchain technologies may play an important role.

A recent Oncotarget paper by Mamoshina et al. (1) discussed the possibility that next-generation artificial intelligence and blockchain technologies could synergize to accelerate biomedical research, give patients new tools to control and profit from their personal healthcare data, and assist patients with their healthcare monitoring needs. According to the abstract:

The authors introduce new concepts to appraise and evaluate personal records, including the combination-, time- and relationship value of the data.  They also present a roadmap for a blockchain-enabled decentralized personal health data ecosystem to enable novel approaches for drug discovery, biomarker development, and preventative healthcare.  In this system, blockchain and deep learning technologies would provide the secure and transparent distribution of personal data in a healthcare marketplace, and would also be useful to resolve challenges faced by the regulators and return control over personal data including medical records to the individual.

The review discusses:

  1. Recent achievements in next-generation artificial intelligence
  2. Basic concepts of highly distributed storage systems (HDSS) as a preferred method for medical data storage
  3. The open-source blockchain framework Exonum and its application to a healthcare marketplace
  4. A blockchain-based platform allowing patients to have control of their data and manage access
  5. How advances in deep learning can improve data quality, especially in an era of big data

Advances in Artificial Intelligence

  • Integrative analysis of the vast amount of health-associated data from a multitude of large-scale global projects has proven to be highly problematic (ref. 27 of the review), as high-quality biomedical data are highly complex and heterogeneous, which necessitates special preprocessing and analysis.
  • Increased computing power and algorithmic advances have led to significant progress in machine learning, especially machine learning involving Deep Neural Networks (DNNs), which are able to capture high-level dependencies in healthcare data. Some examples of the uses of DNNs are:
  1. Prediction of drug properties (2, 3) and toxicities (4)
  2. Biomarker development (5)
  3. Cancer diagnosis (6)
  4. The first FDA-approved system based on deep learning, Arterys Cardio DL
  • Other promising systems of deep learning include:
    • Generative Adversarial Networks (GANs) (https://arxiv.org/abs/1406.2661): require good datasets for extensive training, but have been used to determine the tumor growth inhibition capabilities of various molecules (7)
    • Recurrent Neural Networks (RNNs): originally designed for sequence analysis, RNNs have proved useful in analyzing text and time-series data, and thus would be very useful for electronic record analysis. They have also been useful in predicting the blood glucose levels of Type I diabetic patients using data obtained from continuous glucose monitoring devices (8); a toy sketch of this kind of sequence model follows this list
    • Transfer Learning: focused on transferring information learned on one domain or larger dataset to another, smaller domain, with the aim of reducing the dependence on the large training datasets that RNNs, GANs, and DNNs require. Biomedical imaging is a common application of transfer learning.
    • One- and Zero-Shot Learning: like transfer learning, retains the ability to work with restricted datasets. One-shot learning aims to recognize new data points based on only a few examples from the training set, while zero-shot learning aims to recognize new objects without seeing any examples of those instances within the training set.
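As a concrete illustration of the RNN bullet above, here is a minimal PyTorch sketch of an LSTM that predicts the next value of a time series (for instance, a continuous glucose trace) from a window of past readings. The synthetic sine-wave data, window length, and layer sizes are all illustrative assumptions, not the model from reference (8).

```python
# Toy LSTM regressor: predict the next value of a time series from a sliding
# window of past values. Illustrative sketch only; the data are synthetic.
import torch
import torch.nn as nn

class NextValueLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from the last time step

# Synthetic stand-in for a glucose trace: a noisy sine wave
t = torch.arange(0, 200, 0.5)
series = torch.sin(0.2 * t) + 0.05 * torch.randn_like(t)
window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window - 1)]).unsqueeze(-1)
y = series[window:-1].unsqueeze(-1)        # next value after each window

model = NextValueLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```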

Highly Distributed Storage Systems (HDSS)

The explosion in data generation has necessitated the development of better systems for data storage and handling. HDSS need to be reliable, accessible, scalable, and affordable. This involves storing data across different nodes, and the data stored in these nodes are replicated, which makes access rapid. However, data consistency and affordability remain big challenges.

Blockchain is a distributed database used to maintain a growing list of records, in which records are grouped into blocks that are linked together by a cryptographic algorithm to maintain the consistency of the data. Each block contains a timestamp and a link (a hash) to the previous block in the chain. A blockchain is a distributed ledger of blocks, meaning it is jointly owned, shared, and accessible to everyone. This allows a verifiable, secure, and consistent history of a record of events.
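To make the hash-linking concrete, here is a toy Python sketch in which each block carries a timestamp, a payload, and the hash of the previous block, so tampering with any early record breaks every later link. It is a teaching example only, omitting the consensus, networking, and signature machinery of a real ledger.

```python
# Toy hash-linked chain: each block stores a timestamp, a payload, and the
# hash of the previous block; altering any block invalidates all later links.
import hashlib
import json
import time

def block_hash(block):
    # Hash a canonical JSON encoding of the block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(payload, prev_hash):
    return {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}

def verify(chain):
    # Each stored prev_hash must match the recomputed hash of the predecessor
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [make_block("genesis", prev_hash="0" * 64)]
for record in ["lab result A", "prescription B"]:
    chain.append(make_block(record, prev_hash=block_hash(chain[-1])))

print(verify(chain))             # True
chain[1]["payload"] = "forged"   # tamper with an early record
print(verify(chain))             # False: the link from block 1 onward breaks
```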

Data Privacy and Regulatory Issues

The establishment of the Health Insurance Portability and Accountability Act (HIPAA) in 1996 provided much-needed regulatory guidance and a framework for clinicians and all concerned parties within the healthcare and health data chain. HIPAA has already provided guidance on recent technologies impacting healthcare, most notably the use of social media and mobile communications (discussed in the article Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.). The advent of blockchain technology in healthcare poses its own unique challenges; however, HIPAA offers a basis for developing a regulatory framework in this regard. The special standards regarding electronic data transfer are explained in HIPAA’s Privacy Rule, which regulates how certain entities (covered entities) use and disclose individually identifiable health information (Protected Health Information, PHI) and protects the transfer of such information over any medium or electronic data format. However, some of the features of blockchain that may revolutionize the healthcare system may be in direct conflict with HIPAA rules, as outlined below:

Issues of Privacy Specific In Use of Blockchain to Distribute Health Data

  • Blockchain was designed as a decentralized, distributed database, maintained by multiple independent parties
  • Linked timestamping: although useful for time-dependent data, proof that third parties have not interfered in the process would have to be established, including accountability measures
  • Blockchain uses a consensus algorithm, even though end users may hold their own private keys
  • Applied cryptography measures and routines are used to decentralize authentication (publicly available)
  • Blockchain users are divided into three main categories: 1) maintainers of the blockchain infrastructure, 2) external auditors who store a replica of the blockchain, and 3) end users or clients, who may have access to only a relatively small portion of the blockchain but whose software may use cryptographic proofs to verify the authenticity of data (a toy example of such a proof follows this list)
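The “cryptographic proofs” in the last bullet are typically Merkle proofs: a client holding only a block’s Merkle root can verify that a single record belongs to the block without downloading the other records. A minimal sketch with invented records, assuming SHA-256 and, for simplicity, a power-of-two number of leaves:

```python
# Toy Merkle proof: verify one record against a root hash without the rest.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect the sibling hash at each level plus which side it sits on
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        sib = index ^ 1                       # sibling index at this level
        proof.append((level[sib], sib % 2))   # (hash, 1 if sibling is on the right)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

records = [b"rec0", b"rec1", b"rec2", b"rec3"]   # placeholder records
root = merkle_root(records)
proof = merkle_proof(records, 2)
print(verify_proof(b"rec2", proof, root))    # True
print(verify_proof(b"forged", proof, root))  # False
```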

 

YouTube video: How #Blockchain Will Transform Healthcare in 25 Years

 

 

Richard Bergström has had a hand in the creation of Big Data for Better Outcomes, BigData@Heart, DO->IT, EHDN, the EU data consortia, and yes, even concepts like pay-for-performance. The former Director General of EFPIA, and now the head of health both at SICPA and its joint-venture blockchain company Guardtime, Richard is always ahead of the curve. In fact, he’s usually the one who makes the curve in the first place.


Please click on the following link for a podcast on Big Data, Blockchain and Pharma/Healthcare by Richard Bergström:

References

  1. Mamoshina, P., Ojomoko, L., Yanovich, Y., Ostrovski, A., Botezatu, A., Prikhodko, P., Izumchenko, E., Aliper, A., Romantsov, K., Zhebrak, A., Ogu, I. O., and Zhavoronkov, A. (2018) Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare, Oncotarget 9, 5665-5690.
  2. Aliper, A., Plis, S., Artemov, A., Ulloa, A., Mamoshina, P., and Zhavoronkov, A. (2016) Deep Learning Applications for Predicting Pharmacological Properties of Drugs and Drug Repurposing Using Transcriptomic Data, Molecular pharmaceutics 13, 2524-2530.
  3. Wen, M., Zhang, Z., Niu, S., Sha, H., Yang, R., Yun, Y., and Lu, H. (2017) Deep-Learning-Based Drug-Target Interaction Prediction, Journal of proteome research 16, 1401-1409.
  4. Gao, M., Igata, H., Takeuchi, A., Sato, K., and Ikegaya, Y. (2017) Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds, Journal of pharmacological sciences 133, 70-78.
  5. Putin, E., Mamoshina, P., Aliper, A., Korzinkin, M., Moskalev, A., Kolosov, A., Ostrovskiy, A., Cantor, C., Vijg, J., and Zhavoronkov, A. (2016) Deep biomarkers of human aging: Application of deep neural networks to biomarker development, Aging 8, 1021-1033.
  6. Vandenberghe, M. E., Scott, M. L., Scorer, P. W., Soderberg, M., Balcerzak, D., and Barker, C. (2017) Relevance of deep learning to facilitate the diagnosis of HER2 status in breast cancer, Scientific reports 7, 45938.
  7. Kadurin, A., Nikolenko, S., Khrabrov, K., Aliper, A., and Zhavoronkov, A. (2017) druGAN: An Advanced Generative Adversarial Autoencoder Model for de Novo Generation of New Molecules with Desired Molecular Properties in Silico, Molecular pharmaceutics 14, 3098-3104.
  8. Ordonez, F. J., and Roggen, D. (2016) Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition, Sensors (Basel) 16.

Articles from clinicalinformaticsnews.com

Healthcare Organizations Form Synaptic Health Alliance, Explore Blockchain’s Impact On Data Quality

From http://www.clinicalinformaticsnews.com/2018/12/05/healthcare-organizations-form-synaptic-health-alliance-explore-blockchains-impact-on-data-quality.aspx

By Benjamin Ross

December 5, 2018 | The boom in blockchain and distributed ledger technologies has inspired healthcare organizations to test the capabilities of their data. Quest Diagnostics, in partnership with Humana, MultiPlan, and UnitedHealth Group’s Optum and UnitedHealthcare, has launched a pilot program that applies blockchain technology to improve data quality and reduce administrative costs associated with changes to healthcare provider demographic data.

The collective body, called the Synaptic Health Alliance, explores how blockchain can keep only the most current healthcare provider information available in health plan provider directories. The alliance plans to share its progress in the first half of 2019.

Providing consumers looking for care with accurate information when they need it is essential to a high-functioning overall healthcare system, Jason O’Meara, Senior Director of Architecture at Quest Diagnostics, told Clinical Informatics News in an email interview.

“We were intentional about calling ourselves an alliance as it speaks to the shared interest in improving health care through better, collaborative use of an innovative technology,” O’Meara wrote. “Our large collective dataset and national footprints enable us to prove the value of data sharing across company lines, which has been limited in healthcare to date.”

O’Meara said Quest Diagnostics has spent the past year or two investing time and resources in understanding blockchain, its ability to drive purpose within the healthcare industry, and how to leverage it for business value.

“Many health care and life science organizations have cast an eye toward blockchain’s potential to inform their digital strategies,” O’Meara said. “We recognize it takes time to learn how to leverage a new technology. We started exploring the technology in early 2017, but we quickly recognized the technology’s value is in its application to business to business use cases: to help transparently share information, automate mutually-beneficial processes and audit interactions.”

Quest began discussing the potential for an alliance with the four other companies a year ago, O’Meara said. Each company shared traits that would allow them to prove the value of data sharing across company lines.

“While we have different perspectives, each member has deep expertise in healthcare technology, a collaborative culture, and desire to continuously improve the patient/customer experience,” said O’Meara. “We also recognize the value of technology in driving efficiencies and quality.”

Following its initial launch in April, Synaptic Health Alliance is deploying a multi-company, multi-site, permissioned blockchain. According to a whitepaper published by Synaptic Health, the choice to use a permissioned blockchain rather than an anonymous one is crucial to the alliance’s success.

“This is a more effective approach, consistent with enterprise blockchains,” an alliance representative wrote. “Each Alliance member has the flexibility to deploy its nodes based on its enterprise requirements. Some members have elected to deploy their nodes within their own data centers, while others are using secured public cloud services such as AWS and Azure. This level of flexibility is key to growing the Alliance blockchain network.”

As the pilot moves forward, O’Meara says the Alliance plans to open participation to other organizations. Earlier this week, Aetna and Ascension announced they had joined the project.

“I am personally excited by the amount of cross-company collaboration facilitated by this project,” O’Meara says. “We have already learned so much from each other and are using that knowledge to really move the needle on improving healthcare.”

 

US Health And Human Services Looks To Blockchain To Manage Unstructured Data

http://www.clinicalinformaticsnews.com/2018/11/29/us-health-and-human-services-looks-to-blockchain-to-manage-unstructured-data.aspx

By Benjamin Ross

November 29, 2018 | The US Department of Health and Human Services (HHS) is making waves in the blockchain space. The agency’s Division of Acquisition (DA) has developed a new system, called Accelerate, which gives acquisition teams detailed information on pricing, terms, and conditions across HHS in real-time. The department’s Associate Deputy Assistant Secretary for Acquisition, Jose Arrieta, gave a presentation and live demo of the blockchain-enabled system at the Distributed: Health event earlier this month in Nashville, Tennessee.

Accelerate is still in the prototype phase, Arrieta said, with hopes that the new system will be deployed at the end of the fiscal year.

HHS spends around $25 billion a year in contracts, Arrieta said. That’s 100,000 contracts a year with over one million pages of unstructured data managed through 45 different systems. Arrieta and his team wanted to modernize the system.

“But if you’re going to change the way a workforce of 20,000 people do business, you have to think your way through how you’re going to do that,” said Arrieta. “We didn’t disrupt the existing systems: we cannibalized them.”

The cannibalization process resulted in Accelerate. According to Arrieta, the system functions by creating a record of data rather than storing it, leveraging machine learning, artificial intelligence (AI), and robotic process automation (RPA), all through blockchain data.

“We’re using that data record as a mechanism to redesign the way we deliver services through micro-services strategies,” Arrieta said. “Why is that important? Because if you have a single application or data use that interfaces with 55 other applications in your business network, it becomes very expensive to make changes to one of the 55 applications.”

Accelerate distributes the data to the workforce, making it available to them one business process at a time.

“We’re building those business processes without disrupting the existing systems,” said Arrieta, and that’s key. “We’re not shutting off those systems. We’re using human-centered design sessions to rebuild value exchange off of that data.”

The first application for the system, Arrieta said, can be compared to department stores price-matching their online competitors.

It takes HHS close to a month to collect that amalgamation of data from existing systems, whether it be the terms and conditions that drive certain price points, or software licenses.

“The micro-service we built actually analyzes that data, and provides that information to you within one second,” said Arrieta. “This is distributed to the workforce, to the 5,000 people that do the contracting, to the 15,000 people that actually run the programs at [HHS].”

This simple micro-service is replicated on every node related to HHS’s internal workforce. If somebody wants to change the algorithm to fit their needs, they can do that in a distributed manner.

Arrieta hopes to use Accelerate to save researchers money at the point of purchase. The program uses blockchain to simplify the process of acquisition.

“How many of you work with the federal government?” Arrieta asked the audience. “Do you get sick of reentering the same information over and over again? Every single business opportunity you apply for, you have to resubmit your financial information. You constantly have to check for validation and verification, constantly have to resubmit capabilities.”

Wouldn’t it be better to have historical notes available for each transaction, Arrieta asked. This would allow clinical researchers to focus on “the things they’re really good at,” instead of red tape.

“If we had the top cancer researcher in the world, would you really want her spending her time learning about federal regulations as to how to spend money, or do you want her trying to solve cancer?” Arrieta said. “What we’re doing is providing that data to the individual in a distributed manner so they can read the information of historical purchases that support activity, and they can focus on the objectives and risks they see as it relates to their programming and their objectives.”

Blockchain also creates transparency among researchers, Arrieta said, which he says creates an “uncomfortable reality”: researchers have to make decisions regarding data, fundamentally changing value exchange.

“The beauty of our business model is internal investment,” Arrieta said. For instance, the HHS could take all the sepsis data that exists in their system, put it into a distributed ledger, and share it with an external source.

“Maybe that could fuel partnership,” Arrieta said. “I can make data available to researchers in the field in real-time so they can actually test their hypothesis, test their intuition, and test their imagination as it relates to solving real-world problems.”

 

Shivom is creating a genomic data hub to elongate human life with AI

From VentureBeat.com
Blockchain-based genomic data hub platform Shivom recently reached its $35 million hard cap within 15 seconds of opening its main token sale. Shivom received funding from a number of crypto VC funds, including Collinstar, Lateral, and Ironside.

The goal is to create the world’s largest store of genomic data while offering an open web marketplace for patients, data donors, and providers — such as pharmaceutical companies, research organizations, governments, patient-support groups, and insurance companies.

“Disrupting the whole of the health care system as we know it has to be the most exciting use of such large DNA datasets,” Shivom CEO Henry Ines told me. “We’ll be able to stratify patients for better clinical trials, which will help to advance research in precision medicine. This means we will have the ability to make a specific drug for a specific patient based on their DNA markers. And what with the cost of DNA sequencing getting cheaper by the minute, we’ll also be able to sequence individuals sooner, so young children or even newborn babies could be sequenced from birth and treated right away.”

While there are many solutions examining DNA data to explain heritage, intellectual capabilities, health, and fitness, the potential of genomic data has largely yet to be unlocked. A few companies hold the monopoly on genomic data and make sizeable profits from selling it to third parties, usually without sharing the earnings with the data donor. Donors are also not informed if and when their information is shared, nor do they have any guarantee that their data is secure from hackers.

Shivom wants to change that by creating a decentralized platform that will break these monopolies, democratizing the processes of sharing and utilizing the data.

“Overall, large DNA datasets will have the potential to aid in the understanding, prevention, diagnosis, and treatment of every disease known to mankind, and could create a future where no diseases exist, or those that do can be cured very easily and quickly,” Ines said. “Imagine that, a world where people do not get sick or are already aware of what future diseases they could fall prey to and so can easily prevent them.”

Shivom’s use of blockchain technology and smart contracts ensures that all genomic data shared on the platform will remain anonymous and secure, while its OmiX token incentivizes users to share their data for monetary gain.

Rise in Population Genomics: Local Government in India Will Use Blockchain to Secure Genetic Data

Blockchain will secure the DNA database for 50 million citizens of the eighth-largest state in India. The government of Andhra Pradesh signed a Memorandum of Understanding with Shivom, a German genomics and precision medicine start-up, which announced that the pilot project will start soon. The move falls in line with a trend of governments turning to population genomics while securing the sensitive data through blockchain.

Andhra Pradesh, DNA, and blockchain

Storing sensitive genetic information safely and securely is a big challenge. Shivom is building a genomic data hub powered by blockchain technology. It aims to connect researchers with DNA data donors, thus facilitating medical research and the healthcare industry.

With regards to Andhra Pradesh, the start-up will first launch a trial to determine the viability of their technology for moving from a proactive to a preventive approach in medicine, and towards precision health. “Our partnership with Shivom explores the possibilities of providing an efficient way of diagnostic services to patients of Andhra Pradesh by maintaining the privacy of the individual data through blockchain technologies,” said J A Chowdary, IT Advisor to Chief Minister, Government of Andhra Pradesh.

Other Articles in this Open Access Journal on Digital Health include:

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

 

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

 

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

 

Medcity Converge 2018 Philadelphia: Live Coverage @pharma_BI

 

Digital Health Breakthrough Business Models, June 5, 2018 @BIOConvention, Boston, BCEC

 



 

Live Coverage: MedCity Converge 2018 Philadelphia: AI in Cancer and Keynote Address

Reporter: Stephen J. Williams, PhD

8:30 AM – 9:15 AM

Practical Applications of AI in Cancer

We are far from machine learning dictating clinical decision making, but AI has important niche applications in oncology. Hear from a panel of innovative startups and established life science players about how machine learning and AI can transform different aspects in healthcare, be it in patient recruitment, data analysis, drug discovery or care delivery.

Moderator: Ayan Bhattacharya, Advanced Analytics Specialist Leader, Deloitte Consulting LLP
Speakers:
Wout Brusselaers, CEO and Co-Founder, Deep 6 AI @woutbrusselaers
Tufia Haddad, M.D., Chair of Breast Medical Oncology and Department of Oncology Chair of IT, Mayo Clinic
Carla Leibowitz, Head of Corporate Development, Arterys @carlaleibowitz
John Quackenbush, Ph.D., Professor and Director of the Center for Cancer Computational Biology, Dana-Farber Cancer Institute

Ayan: having worked at IBM and Thomson Reuters with structured datasets, and having gone through his own cancer battle, he is now working in healthcare AI, which deals largely with unstructured datasets

Carla: collecting medical images from around the world, mainly of tumors, and calculating tumor volumetrics

Tufia: a clinician treating drug-resistant breast cancer, interested in AI and healthcare IT at Mayo

John: works with large-scale datasets but is a machine learning skeptic

moderator: how has imaging evolved?

Carla: there are ten times more images but not ten times more radiologists, so the stressed field needs help with image analysis; they have seen that measuring lung tumor volumetrics as a therapeutic diagnostic has worked

moderator: how has AI affected patient recruitment?

Tufia: the majority of patients are receiving great care, but AI can offer profiles and determine which patients can benefit from tertiary care

John: cited a 1980 paper on the no free lunch theorem; great enthusiasm about optimization algorithms fell short in application; still, great information can be extracted from, e.g., images

moderator: how is AI for healthcare delivery working at mayo?

Tufia: for every hour with a patient there are two hours of data mining. For care delivery, the hope is to leverage cognitive systems to do the data mining

John: there is a problem with irreproducible research, which makes for poor datasets; also, these care packages are based on population data, not personalized datasets; a challenge for AI is moving from correlation to causation

Carla: algorithms trained on one healthcare network are not good enough; Google tried and it failed

John: curation is very important and good annotation is needed; they had to go in and develop, with curators, a systematic way to curate medical records; standardization and reproducibility are needed; applications in radiomics can differ based on the data collection machines; they developed a machine learning model hub where investigators can compare models; there is also a need to communicate with patients on healthcare information and quality information

Ayan: Australia and Canada have done the most in the AI and life science/healthcare space; AI in most cases is cognitive learning; there are really two types of companies: 1) the Microsofts and Googles, and 2) the startups that may be more pure AI

 

Final Notes: We are at a point where collecting massive amounts of healthcare-related data is simple, rapid, and shareable. However, challenges exist in the quality of datasets, proper curation and annotation, the need for collaboration across all healthcare stakeholders including patients, and the dissemination of useful and accurate information.

 

9:15 AM–9:45 AM

Opening Keynote: Dr. Joshua Brody, Medical Oncologist, Mount Sinai Health System

The Promise and Hype of Immunotherapy

Immunotherapy is revolutionizing oncology care across various types of cancers, but it is also necessary to sort the hype from the reality. In his keynote, Dr. Brody will delve into the history of this new therapy mode and how it has transformed the treatment of lymphoma and other diseases. He will address the hype surrounding it, why so many still don’t respond to the treatment regimen and chart the way forward—one that can lead to more elegant immunotherapy combination paths and better outcomes for patients.

Speaker:
Joshua Brody, M.D., Assistant Professor, Mount Sinai School of Medicine @joshuabrodyMD

Director Lymphoma therapy at Mt. Sinai

  • lymphoma is a cancer with high PD-L1 expression
  • Hodgkin’s lymphoma is the best responder to PD-1 therapy (nivolumab), but with hepatic adverse effects
  • CAR-T (chimeric antigen receptor T cells): a long process which includes apheresis, selection of CD3/CD28 cells, viral transduction of the chimeric receptor, and purification
  • complete remissions of B cell lymphomas (NCI trial) and long-term remissions past 18 months
  • side effects like cytokine release (which has been controlled) and encephalopathy (he uses a handwriting test to track progression of this adverse effect)

Vaccines

  • teaching the immune cells: PD-1 inhibition can exhaust T cells, so a vaccine boost could be an adjuvant to PD-1 or checkpoint therapy
  • using an Flt3L-primed in-situ vaccine (a Toll-like receptor agonist can recruit dendritic cells to the tumor, followed by activation of a T cell response); the vaccine therefore does not need to be produced ex vivo; months after the vaccine the tumor was still in remission
  • versus rituximab, which can target many healthy B cells, this in-situ vaccine strategy is very specific for the tumorigenic B cells
  • HOWEVER, they did see resistant tumor cells which did not overexpress PD-L1, but they discovered a novel checkpoint (which cannot be disclosed at this point)


Please follow on Twitter using the following #hashtags and @pharma_BI

#MCConverge

#AI

#cancertreatment

#immunotherapy

#healthIT

#innovation

#precisionmedicine

#healthcaremodels

#personalizedmedicine

#healthcaredata

And at the following handles:

@pharma_BI

@medcitynews

 

Please see related articles on Live Coverage of Previous Meetings on this Open Access Journal

LIVE – Real Time – 16th Annual Cancer Research Symposium, Koch Institute, Friday, June 16, 9AM – 5PM, Kresge Auditorium, MIT

Real Time Coverage and eProceedings of Presentations on 11/16 – 11/17, 2016, The 12th Annual Personalized Medicine Conference, HARVARD MEDICAL SCHOOL, Joseph B. Martin Conference Center, 77 Avenue Louis Pasteur, Boston

Tweets Impression Analytics, Re-Tweets, Tweets and Likes by @AVIVA1950 and @pharma_BI for 2018 BioIT, Boston, 5/15 – 5/17, 2018

BIO 2018! June 4-7, 2018 at Boston Convention & Exhibition Center

https://pharmaceuticalintelligence.com/press-coverage/



Our Astrophysicist

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Ray Kurzweil talks with host Neil deGrasse Tyson, PhD: on invention & immortality

part of the week long event series 7 Days of Genius at 92 Street Y

92 Street Y | 7 Days of Genius
Conversation on stage during the week long event series, held at the historic community center.

featured talk | Ray Kurzweil with host Neil deGrasse Tyson, PhD — on Invention & Immortality


Inventor, author and futurist Ray Kurzweil is joined by astrophysicist and science communicator Neil deGrasse Tyson, PhD for a discussion of some of the biggest topics of our time. They explore the role of technology in the future, its impact on brain science — and coming innovations in artificial intelligence, energy, life extension and immortality.

Ray Kurzweil has been accurately predicting the future for decades. He explains to StarTalk show host Neil deGrasse Tyson, PhD how he does it.

Kurzweil also says microscopic robots called nanobots will connect your neocortex to the cloud — the expansion of the human brain that he predicts will happen in the 2030s.

This featured talk is part of a week long series of events called 7 Days of Genius. Presented by the celebrated, historic 92 Street Y cultural arts and community center.

video | 1.
Highlights from the talk with Ray Kurzweil and host Neil deGrasse Tyson, PhD

https://youtu.be/1km56ka9Gnw

 

video | 2.
More highlights from the talk with Ray Kurzweil and host Neil deGrasse Tyson, PhD

https://youtu.be/6BsluRkxs78

 

Entrepreneur | The one tip for success shared by Ray Kurzweil and Neil deGrasse Tyson, PhD

March 9, 2016

Entrepreneur — March 8, 2016 | Catherine Clifford

This is a summary. Read original article in full here

Follow your passion deeply, Ray Kurzweil told an audience at an impressively humorous and entertaining talk hosted by astrophysicist Neil deGrasse Tyson, PhD at the 92 Street Y community center.

The talk with leading, innovative thinkers was part of 92 Street Y’s week long 7 Days of Genius festival. Kurzweil is an inventor, entrepreneur, author and futurist.

In the future, Kurzweil said, there will be a premium on specialized, comprehensive knowledge. If you have passion for art, music or literature — follow that, he says. Kurzweil learned when he was young he had a passion for inventing. “But for some people it’s not clear,” he says. “They should explore many different avenues.”

Money should not be the motivating factor, says Kurzweil, who is something of a romantic. “Don’t do what you think is practical, just because you think that’s a way to make a living. The best way to pursue the future is find an expression you have a passion for,” he says.

Tyson encourages people to seek out learning, visit museums and follow curiosity. Tyson says, “I’m here to make more people passionate, to transform the world for good.”

 

about | 7 Days of Genius at 92 Street Y
Background on the week long event series exploring science, innovation and culture.

92 Street Y |  7 Days of Genius is a multi-platform, week long festival with stage events featuring thought leaders in science, innovation and culture. It explores the concept of genius, and how it transforms lives and cultures.

Events are also hosted globally by partner organizations, and digital broadcast through partners MS • NBC and National Geographic.

Our yearly series of inspiring conversations with experts in politics, technology, knowledge, ethics is focused on the power of genius to change the world for the better.

 

92 Street Y | 7 Days of Genius
Some featured speakers from the series.

1.  Manjul Bhargava, PhD
2.  Esther Dyson
3.  Ray Kurzweil
4.  Martine Rothblatt, PhD
5.  Yancey Strickler
6.  Neil deGrasse Tyson, PhD

 

the festival celebrates Genius Revealed featuring:

1.  special installation at 92 Street Y on remarkable, historic female scientists and inventors throughout history
2.  series on female genius produced with Big Think
3.  20 world events with United Nations Women, exploring how genius can help gender equity
4.  global events celebrating innovative ideas of youth to improve communities with design, entrepreneurship
5.  look for Mental Floss campaign on women geniuses
6.  special programming on MS • NBC, and results of our Ultimate Genius Showdown
7.  see winners of our Global Challenges on design, entrepreneurship, religion

 

video | about 92 Street Y
Background on the historic cultural and community center

watch | video tour

about | 92 Street Y
Landmark community center for culture, arts and conversation.

The historic 92 Street Y is a famous cultural and community center where people from all over connect through culture, arts, entertainment and conversation. For 140 years, we have harnessed the power of arts and ideas to enrich, enlighten and change lives, and the power of community to repair the world. The 92 Street Y is a United States cultural institution in New York, New York at the corner of 92 Street and Lexington Avenue. It’s now a significant landmark center for music, arts, philosophy, celebrity talks and entertainment.

Its full name is the 92 Street Young Men’s and Young Women’s Hebrew Association. Founded in 1874 by German Jewish professionals, 92 Street Y has grown into an organization guided by Jewish principles but serves people of all races and faiths. We harness the power of arts and ideas to enrich, enlighten and change lives, and the power of community.

We enthusiastically reach out to people of all ages and backgrounds while embracing Jewish values like learning and self-improvement, the importance of family, the joy of life, and giving back to a wonderfully diverse and growing world.

We curate conversations with the world’s thought leaders — today’s most exceptional thinkers and influential partners for social good — to deepen understanding and engage.

Our performing arts center presents classical, jazz, popular and world music and dance performances. 92 Street Y is a legendary literary destination where the most celebrated writers and readers have gathered since 1939.

We’re a studio, school and workshop where dancers, musicians, jewelry makers, ceramicists, visual artists, poets, playwrights and novelists — professionals and eager amateurs — nourish the human spirit through the arts.

We provide an inspiring, safe and supportive home for families, decades of expertise in parenting, child development, after-school sports and classes, special needs programs and summer camps. And we offer seniors dozens of activities.

Our fitness center inspires health. 92 Street Y creates meaningful, relevant and joyous experiences for all those who want to connect, finding new ways to bring tradition into dialog with the modern world.

I would also encourage anybody to watch the Intelligence Squared debate video from the 92nd Street Y. It is quite interesting; it’s called “Don’t Trust the Promise of Artificial Intelligence,” and I think both sides of the debate bring interesting arguments.

 

PBS Newshour | Tech’s next feats? Maybe on-demand kidneys, robot sex, cheap solar, lab meat

PBS Newshour | Optimists at Silicon Valley thinktank Singularity University are pushing the frontiers of human progress through innovation and emerging technologies, looking to greater longevity and better health. As part of his series on “Making Sense” of financial news, Paul Solman explores a future of “exponential growth.”

Paul Solman: Admittedly, solar now provides less than 1 percent of U.S. energy needs. But Singularity University’s other cofounder, Ray Kurzweil, whom we interviewed by something called Teleportec, says the public is pointlessly pessimistic.

Ray Kurzweil, Chancellor, Singularity University: And I think the major reason that people are pessimistic is they don’t realize that these technologies are growing exponentially.

For example, solar energy is doubling every two years. It’s now only seven doublings from meeting 100 percent of the world’s energy needs, and we have 10,000 times more sunlight than we need to do that. […]
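The arithmetic behind the “seven doublings” claim is easy to check; the doubling-every-two-years trend is Kurzweil’s assumption, not an established fact, but granting it, 1 percent grows past 100 percent after seven doublings because 1% × 2^7 = 128%. A quick Python check:

```python
# Back-of-the-envelope check of the "seven doublings" claim, granting the
# (contested) assumption that solar's share doubles every two years.
share = 0.01                  # premise: ~1% of world energy needs today
for doubling in range(8):     # the starting point plus seven doublings
    print(f"after {doubling} doublings ({doubling * 2} years): {share:.0%}")
    share *= 2
```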

 

York University | “Google’s Ray Kurzweil receives honorary doctorate” — October 16, 2013

On October 16, 2013 York University conferred an honorary doctorate on Ray Kurzweil, Director of Engineering at Google, in a ceremony on campus. The Lassonde School of Engineering wishes to congratulate Ray Kurzweil on this tremendous honour.

An inventor, author, futurist and thinker, Ray Kurzweil is most certainly a Renaissance Engineer.



Unlocking the Microbiome

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Machine-learning technique uncovers unknown features of multi-drug-resistant pathogen

Relatively simple “unsupervised” learning system reveals important new information to microbiologists
January 29, 2016 | http://www.kurzweilai.net/machine-learning-technique-uncovers-unknown-features-of-pathogen


According to the CDC, Pseudomonas aeruginosa is a common cause of healthcare-associated infections, including pneumonia, bloodstream infections, urinary tract infections, and surgical site infections. Some strains of P. aeruginosa have been found to be resistant to nearly all or all antibiotics. (illustration credit: CDC)

A new machine-learning technique can uncover previously unknown features of organisms and their genes in large datasets, according to researchers from the Perelman School of Medicine at the University of Pennsylvania and the Geisel School of Medicine at Dartmouth.

For example, the technique learned to identify the characteristic gene-expression patterns that appear when a bacterium is exposed to different conditions, such as low oxygen and the presence of antibiotics.

The technique, called “ADAGE” (Analysis using Denoising Autoencoders of Gene Expression), uses a “denoising autoencoder” algorithm, which learns to identify recurring features or patterns in large datasets — without being told what specific features to look for (that is, “unsupervised”).*
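In outline, a denoising autoencoder is trained to reconstruct its input from a deliberately corrupted copy, which forces the hidden nodes to capture recurring patterns rather than memorizing individual samples. The PyTorch sketch below mirrors that general idea on a random placeholder matrix with ADAGE-like dimensions (950 samples, 5,000 genes, 50 hidden nodes); it is not the authors’ implementation, whose construction is described in the abstracts below.

```python
# Toy denoising autoencoder: reconstruct expression profiles from corrupted
# copies so that hidden nodes come to represent recurring expression patterns.
import torch
import torch.nn as nn

n_genes, n_hidden = 5000, 50        # ADAGE-like sizes; the data are random
X = torch.rand(950, n_genes)        # placeholder for 950 normalized arrays

encoder = nn.Sequential(nn.Linear(n_genes, n_hidden), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(n_hidden, n_genes), nn.Sigmoid())
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for epoch in range(20):
    noisy = X * (torch.rand_like(X) > 0.1)   # randomly zero ~10% of inputs
    recon = decoder(encoder(noisy))
    loss = nn.functional.binary_cross_entropy(recon, X)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each hidden node's weight vector ranks genes; its highest-weight genes
# define a candidate expression signature for that node.
weights = encoder[0].weight          # shape: (n_hidden, n_genes)
print(torch.topk(weights[0].abs(), k=10).indices)
```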

Last year, Casey Greene, PhD, an assistant professor of Systems Pharmacology and Translational Therapeutics at Penn, and his team published, in an open-access paper in the American Society for Microbiology’s mSystems, the first demonstration of ADAGE in a biological context: an analysis of two gene-expression datasets of breast cancers.

Tracking down gene patterns of a multi-drug-resistant bacterium

The new study, published Jan. 19 in an open-access paper in mSystems, was more ambitious. It applied ADAGE to a dataset of 950 gene-expression arrays publicly available at the time for the multi-drug-resistant bacterium Pseudomonas aeruginosa. This bacterium is a notorious pathogen in hospitals and in individuals with cystic fibrosis and other chronic lung conditions; it is often difficult to treat due to its high resistance to standard antibiotic therapies.

The data included only the identities of the roughly 5,000 P. aeruginosa genes and their measured expression levels in each published experiment. The goal was to see if this “unsupervised” learning system could uncover important patterns in P. aeruginosa gene expression and clarify how those patterns change when the bacterium’s environment changes — for example, when in the presence of an antibiotic.

Even though the model built with ADAGE was relatively simple — roughly equivalent to a brain with only a few dozen neurons — it had no trouble learning which sets of P. aeruginosa genes tend to work together or in opposition. To the researchers’ surprise, the ADAGE system also detected differences between the main laboratory strain of P. aeruginosa and strains isolated from infected patients. “That turned out to be one of the strongest features of the data,” Greene said.

“We expect that this approach will be particularly useful to microbiologists researching bacterial species that lack a decades-long history of study in the lab,” said Greene. “Microbiologists can use these models to identify where the data agree with their own knowledge and where the data seem to be pointing in a different direction … and to find completely new things in biology that we didn’t even know to look for.”

Support for the research came from the Gordon and Betty Moore Foundation, the William H. Neukom Institute for Computational Science, the National Institutes of Health, and the Cystic Fibrosis Foundation.

* In 2012, Google-sponsored researchers applied a similar method to randomly selected YouTube images; their system learned to recognize major recurring features of those images — including cats of course.


Abstract of ADAGE-Based Integration of Publicly Available Pseudomonas aeruginosa Gene Expression Data with Denoising Autoencoders Illuminates Microbe-Host Interactions

The increasing number of genome-wide assays of gene expression available from public databases presents opportunities for computational methods that facilitate hypothesis generation and biological interpretation of these data. We present an unsupervised machine learning approach, ADAGE (analysis using denoising autoencoders of gene expression), and apply it to the publicly available gene expression data compendium for Pseudomonas aeruginosa. In this approach, the machine-learned ADAGE model contained 50 nodes which we predicted would correspond to gene expression patterns across the gene expression compendium. While no biological knowledge was used during model construction, cooperonic genes had similar weights across nodes, and genes with similar weights across nodes were significantly more likely to share KEGG pathways. By analyzing newly generated and previously published microarray and transcriptome sequencing data, the ADAGE model identified differences between strains, modeled the cellular response to low oxygen, and predicted the involvement of biological processes based on low-level gene expression differences. ADAGE compared favorably with traditional principal component analysis and independent component analysis approaches in its ability to extract validated patterns, and based on our analyses, we propose that these approaches differ in the types of patterns they preferentially identify. We provide the ADAGE model with analysis of all publicly available P. aeruginosa GeneChip experiments and open source code for use with other species and settings. Extraction of consistent patterns across large-scale collections of genomic data using methods like ADAGE provides the opportunity to identify general principles and biologically important patterns in microbial biology. This approach will be particularly useful in less-well-studied microbial species.


Abstract of Unsupervised feature construction and knowledge extraction from genome-wide assays of breast cancer with denoising autoencoders

Big data bring new opportunities for methods that efficiently summarize and automatically extract knowledge from such compendia. While both supervised learning algorithms and unsupervised clustering algorithms have been successfully applied to biological data, they are either dependent on known biology or limited to discerning the most significant signals in the data. Here we present denoising autoencoders (DAs), which employ a data-defined learning objective independent of known biology, as a method to identify and extract complex patterns from genomic data. We evaluate the performance of DAs by applying them to a large collection of breast cancer gene expression data. Results show that DAs successfully construct features that contain both clinical and molecular information. There are features that represent tumor or normal samples, estrogen receptor (ER) status, and molecular subtypes. Features constructed by the autoencoder generalize to an independent dataset collected using a distinct experimental platform. By integrating data from ENCODE for feature interpretation, we discover a feature representing ER status through association with key transcription factors in breast cancer. We also identify a feature highly predictive of patient survival and it is enriched by FOXM1 signaling pathway. The features constructed by DAs are often bimodally distributed with one peak near zero and another near one, which facilitates discretization. In summary, we demonstrate that DAs effectively extract key biological principles from gene expression data and summarize them into constructed features with convenient properties.



Future of Big Data for Societal Transformation

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Musk, others commit $1 billion to non-profit AI research company to ‘benefit humanity’

Open-sourcing AI development to prevent an AI superpower takeover
(credit: OpenAI)

Elon Musk and associates announced OpenAI, a non-profit AI research company, on Friday (Dec. 11), committing $1 billion toward their goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The funding comes from a group of tech leaders including Musk, Reid Hoffman, Peter Thiel, and Amazon Web Services, but the venture expects to only spend “a tiny fraction of this in the next few years.”

The founders note that it’s hard to predict how much AI could “damage society if built or used incorrectly” or how soon. But the hope is to have a leading research institution that can “prioritize a good outcome for all over its own self-interest … as broadly and evenly distributed as possible.”

Brains trust

OpenAI’s co-chairs are Musk, who is also the principal funder of the Future of Life Institute, and Sam Altman, president of venture-capital seed-accelerator firm Y Combinator, who is also providing funding.

“I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.” — Elon Musk on Medium

The founders say the organization’s patents (if any) “will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”

OpenAI’s research director is machine learning expert Ilya Sutskever, formerly at Google, and its CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are “world-class research engineers and scientists” Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. The company will be based in San Francisco.


If I’m Dr. Evil and I use it, won’t you be empowering me?

“There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think its far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.” — Sam Altman in an interview with Steven Levy on Medium.


The announcement follows recent announcements by Facebook to open-source the hardware design of its GPU-based “Big Sur” AI server (used for large-scale machine learning software to identify objects in photos and understand natural language, for example); by Google to open-source its TensorFlow machine-learning software; and by Toyota Corporation to invest $1 billion in a five-year private research effort in artificial intelligence and robotics technologies, jointly with Stanford University and MIT.

To follow OpenAI: @open_ai or info@openai.com    Topics: AI/Robotics | Survival/Defense

Spot on, Elon! The threat is real, and current developments are unfortunately pointing exactly in the direction of AI being controlled by a handful of big and powerful corporations. Not surprisingly, none of those players are part of the OpenAI movement.

I like the sentiment, AI for all and for the common good, and at one level it seems doable but at another level it seems problematic on the scale of nation states and multinational entities.

If we all have AI systems then it will be those with control of the most energy to run their AI who will have the most influence, and that could be a “Dr. Evil”. It is the sum total of computing power on any given side of a conflict that will determine the outcome, if AI is a significant factor at all.

We could see bigger players looking at strategic questions such as, do they act now, or wait and put more resources into advancing the power of their AI so that they have better odds later, but at the risk of falling to a preemptive attack. Given this sort of thing I don’t see that AI will be a game changer, a leveller, rather it could just fit into the existing arms race type scenarios, at least until one group crosses a singularity threshold and then accelerates away from the pack while holding everyone else back so that they cannot catch up.

No matter how I look at it, I always see the scenarios running in the opposite direction to diversity, toward a singular dominant entity that “roots” all the other AI, sensor, and actuator systems and then assimilates them.

How do they plan to stop this? How can one group of AIs have an ethical framework that allows them to “keep down” another group or single AI so that it does not get into a position to dominate them? How will this be any less messy than how the human super-powers have interacted in the last century?

 

I recommend the book “SuperIntelligence” by Nick Bostrom. Most thorough and penetrating. It covers many permutations of the intelligence explosion. The Allegory at the beginning is worth the price alone.

 

Elon, for goodness sake, focus! Get the big battery factory working, get the space industry off the ground and America back in the ISS resupply and re-crew business, but enough with the non-profit expenditures already! Keep sinking your capital into non-profits like the Hyperloop (a beautiful, high-tech version of the old “I just know I can make trains profitable again outside of the northeast” dream) and this non-profit AI, and you’ll eventually go one financial step too far.

Both for you and for all of us who benefit from your efforts, consider this. At least change your attitude about profit; keep the option open that this AI will bring some profit, even with the open source aspect. This is a great effort, as I see you possibly becoming the “good AI” element that Ray writes about in his first essay, in the essay section on this site. There, Ray is confident that the good people with AI will out-think the bad people with AI and so good AI will prevail.



Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.


An article in last month’s issue of Science by Jia You, “DARPA Sets Out to Automate Research” [1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms which read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments, all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms that critically read journal articles, much as scientists do, to determine what the information contributes to the knowledge base and how.

As a proof of concept, DARPA is attempting to model Ras-mutation-driven cancers using previously published literature, in three main steps:

  1. Natural Language Processing: machines read the literature on cancer pathways and convert the information into computational semantics and meaning.

One team is focused on extracting details on experimental procedures, mining certain phraseology to gauge the strength of a paper’s claims (for example, phrases like ‘we suggest’ or ‘suggests a role in’ might be considered weak, whereas ‘we prove’ or ‘provide evidence’ might flag articles as worthwhile to curate); a sketch of this idea follows the list below. Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Integrate each piece of knowledge into a computational model representing the role of the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
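
To make the phraseology idea concrete, here is a minimal sketch of cue-phrase scoring for claim strength. The cue lexicons and scoring function are hypothetical illustrations, not the DARPA teams’ actual method, which would involve far richer linguistic processing:

```python
# A minimal sketch of cue-phrase scoring for claim strength.
# HEDGING_CUES / ASSERTIVE_CUES are hypothetical lexicons, not DARPA's actual lists.

HEDGING_CUES = ["we suggest", "suggests a role in", "may contribute", "possibly"]
ASSERTIVE_CUES = ["we prove", "we demonstrate", "provide evidence", "we show that"]

def claim_strength(sentence: str) -> float:
    """Return a crude score in [-1, 1]: negative = hedged, positive = assertive."""
    s = sentence.lower()
    hedged = sum(cue in s for cue in HEDGING_CUES)
    assertive = sum(cue in s for cue in ASSERTIVE_CUES)
    total = hedged + assertive
    return 0.0 if total == 0 else (assertive - hedged) / total

print(claim_strength("We suggest a role in Ras-driven proliferation."))         # -1.0 (weak)
print(claim_strength("We provide evidence that KRAS activates the MAPK pathway."))  # 1.0 (strong)
```

A real system would weight cues, handle negation, and operate on parsed sentences rather than raw substrings, but the principle of grading extracted statements by linguistic confidence is the same.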

The Human No Longer Needed? Not So Fast, My Friend!

The DARPA research teams are encountering several problems, namely:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily “require causal inference” as other fields do

Verification

Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and computational biology projects. As with high-throughput microarray screens, verification is needed, usually in the form of qPCR, or interesting genes are validated in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore, as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally confirm certain points (protein interactions, modification events, signaling events) in order to validate the computer model.

Text-Mining and Curation Strategies

The Big Mechanism Project is starting very small, which reflects some of the challenges of scale in this project. Researchers were given only passages six paragraphs long and a rudimentary model of the Ras pathway in cancer, and were then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual curation, automated curation, or a combination of both [2] achieves the highest accuracy for extracting the information to be entered into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, covering the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amount of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”; also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation”, a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the information relevant to the given model from these six paragraphs. Although the investigators were happy with this percentage, most biocurators, whether using a manual or an automated method, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].
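
For readers unfamiliar with how such percentages are computed, coverage figures like the 40% above are essentially recall against a manually curated gold standard. The sketch below, with hypothetical relation triples, shows the arithmetic:

```python
# A minimal sketch of how extraction coverage is scored: compare
# machine-extracted relations against a manually curated gold standard.
# The relation triples below are hypothetical placeholders.

gold_standard = {
    ("KRAS", "activates", "RAF1"),
    ("RAF1", "phosphorylates", "MAP2K1"),
    ("MAP2K1", "phosphorylates", "MAPK1"),
    ("NF1", "inhibits", "KRAS"),
    ("EGFR", "activates", "KRAS"),
}

extracted = {
    ("KRAS", "activates", "RAF1"),
    ("MAP2K1", "phosphorylates", "MAPK1"),
    ("EGFR", "activates", "MAPK1"),  # a false positive
}

true_positives = gold_standard & extracted
recall = len(true_positives) / len(gold_standard)   # share of curated facts recovered
precision = len(true_positives) / len(extracted)    # share of extractions that are correct

print(f"recall = {recall:.0%}, precision = {precision:.0%}")  # recall = 40%, precision = 67%
```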

Incomplete Knowledge Base

In an earlier posting (actually a press release for our first e-book) I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data which needs to be curated.

Tackling the problem of scientific and medical information overload


Figure. The number of papers listed in PubMed (disregarding reviews) per ten-year period has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and a critical need for the methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into ‘silos’, have contributed to this knowledge explosion, and although the literature is now online and edited, much of this information is out of reach of the very brightest clinicians.”

Traditionally, the organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is proving to be an outdated model. In addition, most medical searches depend on keywords, adding complexity for the investigator trying to find the material they require. Finally, medical researchers and professionals are recognizing the need to converse with each other, in real time, about the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy: experts across a diverse, cross-integrative range of medical topics who can provide an in-depth understanding of the current research and challenges in each field, as well as a more concept-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by Web 2.0, with curators acting as the “researcher 2.0”. This curation offers better organization and visibility of the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, noting for the article: “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer ‘omics studies and presented some graphics on the bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below),

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.

[Figure: KEGG pathway analysis of lung cancer mutations in the literature]

This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omics’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine-learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Each approach gives us a different perspective on ‘omics data.
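
As an illustration of the ‘literomics’ idea, the following sketch counts PubMed hits per mutated gene to flag under-studied candidates. It assumes Biopython is installed; the gene list, disease term, threshold, and e-mail address are hypothetical placeholders:

```python
# A minimal sketch of a 'literomics' gap check: count PubMed hits for each
# mutated gene from an 'omics study and flag genes with sparse literature.
# Assumes Biopython is installed; gene list, disease term, threshold, and
# e-mail address are hypothetical placeholders.
from Bio import Entrez

Entrez.email = "curator@example.org"  # NCBI requires a contact address

def pubmed_count(gene, disease="non-small cell lung cancer"):
    """Number of PubMed records mentioning the gene together with the disease."""
    handle = Entrez.esearch(db="pubmed", term=f"{gene}[Title/Abstract] AND {disease}")
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

candidate_genes = ["KRAS", "EGFR", "SETD2", "U2AF1"]  # hypothetical 'omics hit list
for gene in candidate_genes:
    n = pubmed_count(gene)
    note = "  <- possible literature gap" if n < 50 else ""
    print(f"{gene}: {n} papers{note}")
```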

Deriving Causal Inference

Ras is one of the best-studied and best-characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try [developing a machine learning algorithm] because it’s an area in which common sense plays a minor role.”

Even though some may think the project would be unable to tackle mechanisms involving epigenetic factors, UCLA’s expert in causality, Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Laboratory), feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to Pearl, the investigator must first:

1) articulate the assumptions

2) define the research question in counterfactual terms

Then it is possible to design an inference system, using calculus, that tells the investigator what they need to measure.
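
A worked example of the “elementary mathematics” Pearl refers to (added here for illustration; it is not spelled out in the article) is his well-known back-door adjustment: if a set of measured covariates Z blocks every back-door path from X to Y in the assumed causal graph, then the effect of intervening on X is identifiable from observational data alone:

```latex
P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
```

Once the assumptions are articulated as a graph, the calculus tells the investigator exactly which variables must be measured, which is precisely the role envisioned for it in the Big Mechanism project.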

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism Project may be in correcting for the variables among studies, in essence building a model system that need not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test, and apply practical, provably correct computer programs for inferring causal structure under the conditions where this is possible.
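
The core primitive behind constraint-based causal discovery of the kind TETRAD performs is a conditional independence test. The sketch below uses the standard Fisher z-test on partial correlations; it illustrates the statistical idea and is not TETRAD’s actual code:

```python
# A minimal sketch of testing conditional independence X _||_ Y | Z via the
# Fisher z-transform of the partial correlation -- the building block of
# constraint-based causal discovery algorithms such as PC (used in TETRAD).
import numpy as np
from scipy import stats

def partial_corr(data, x, y, z):
    """Partial correlation of columns x and y given columns z,
    computed from the inverse covariance (precision) matrix."""
    idx = [x, y] + list(z)
    prec = np.linalg.inv(np.cov(data[:, idx], rowvar=False))
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def ci_test(data, x, y, z, alpha=0.05):
    """Return True if X and Y are judged conditionally independent given Z."""
    n = data.shape[0]
    r = partial_corr(data, x, y, z)
    fisher_z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(z) - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(fisher_z)))
    return p_value > alpha

# Toy causal chain X -> Z -> Y: X and Y are dependent marginally,
# but independent once we condition on Z.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
zv = x + rng.normal(size=2000)
yv = zv + rng.normal(size=2000)
data = np.column_stack([x, zv, yv])

print(ci_test(data, 0, 2, []))   # False: X and Y correlate
print(ci_test(data, 0, 2, [1]))  # True: independent given Z
```

The same test, applied systematically across variable triples, is what lets such programs recover candidate causal structures from data not gathered under fully controlled conditions.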

In summary, such projects and algorithms will tell investigators what should be measured, and possibly how.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database : the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database : the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database : the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site include: Artificial Intelligence, Curation Methodology, Philosophy of Science

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?

Read Full Post »

Older Posts »