
Archive for the ‘Big Data’ Category

Digital Therapeutics: A threat or opportunity to pharmaceuticals


Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Digital Therapeutics (DTx) have been defined by the Digital Therapeutics Alliance (DTA) as “delivering evidence based therapeutic interventions to patients that are driven by software to prevent, manage or treat a medical disorder or disease”. They might come in the form of a smartphone or tablet app, or a cloud-based service connected to a wearable device. DTx tend to fall into three groups. First, developers and mental health researchers have built digital solutions that typically provide a form of software-delivered Cognitive-Behaviour Therapy (CBT), helping patients change behaviours and develop coping strategies around their condition. Second, there is a group of Digital Therapeutics that target lifestyle issues, such as diet, exercise and stress, associated with chronic conditions, and that work by offering personalized support for goal setting and target achievement. Last, DTx can be designed to work in combination with existing medication or treatments, helping patients manage their therapies and ensuring the therapy delivers the best outcomes possible.

 

Pharmaceutical companies are clearly trying to understand what DTx will mean for them and whether it represents a threat or an opportunity to their business. For a long time, they have provided additional support services to patients who take relatively expensive drugs for chronic conditions; a nurse-led service might provide visits and telephone support to diabetics who self-inject insulin therapies, for example. DTx will broaden the scope of such support services because they can be delivered cost-effectively and, importantly, can capture real-world evidence on patient outcomes. Support services will no longer be reserved for the most expensive drugs or therapies but could apply to a whole range of common treatments to boost their efficacy. Faced with the arrival of Digital Therapeutics, either replacing drugs or playing an important role alongside therapies, pharmaceutical firms have three options: they can ignore DTx and focus on developing drug therapies as they have done; they can partner with the growing number of DTx companies to develop software and services complementing their drugs; or they can start to build their own Digital Therapeutics to work with their products.

 

Digital Therapeutics will have knock-on effects in the health industries, which may be as great as the introduction of the therapeutic apps and services themselves. Together with connected health monitoring devices, DTx will offer a near-constant stream of data about an individual’s behavior, the real-world context around factors affecting their treatment in everyday life, and emotional and physiological data such as blood pressure and blood sugar levels. Analysis of the resulting data will help create support services tailored to each patient. But who stores and analyzes this data is an important question: strong data governance will be paramount to maintaining trust, and the highly regulated pharmaceutical industry may not be best placed to handle individual patient data. Meanwhile, the health sector (payers and healthcare providers) is becoming more focused on patient outcomes and on payment for value, not volume. Time will tell whether pharmaceutical firms enhance the effectiveness of drugs with DTx or, in some cases, replace drugs with DTx.

 

Digital Therapeutics have the potential to change what the pharmaceutical industry sells: rather than a drug, it will sell a package of drugs and digital services. But they will also alter who the industry sells to. Pharmaceutical firms have traditionally marketed drugs to doctors, pharmacists and other health professionals based on the efficacy of a specific product. Soon they could be paid on the outcome of a bundle of digital therapies, medicines and services, with a closer connection to both providers and patients. Apart from a notable few, most pharmaceutical firms have taken a cautious approach towards Digital Therapeutics. It remains to be seen how pharmaceutical companies will use DTx to their own benefit as well as for the benefit of the general population.

 

References:

 

https://eloqua.eyeforpharma.com/LP=23674?utm_campaign=EFP%2007MAR19%20EFP%20Database&utm_medium=email&utm_source=Eloqua&elqTrackId=73e21ae550de49ccabbf65fce72faea0&elq=818d76a54d894491b031fa8d1cc8d05c&elqaid=43259&elqat=1&elqCampaignId=24564

 

https://www.s3connectedhealth.com/resources/white-papers/digital-therapeutics-pharmas-threat-or-opportunity/

 

http://www.pharmatimes.com/web_exclusives/digital_therapeutics_will_transform_pharma_and_healthcare_industries_in_2019._heres_how._1273671

 

https://www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/exploring-the-potential-of-digital-therapeutics

 

https://player.fm/series/digital-health-today-2404448/s9-081-scaling-digital-therapeutics-the-opportunities-and-challenges

 


Read Full Post »


2019 Koch Institute Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PM ET MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA

Announcement

Aviva Lev-Ari, PhD, RN,

Founder and Director of LPBI Group will be in attendance covering the event in REAL TIME

@pharma_BI

@AVIVA1950

 

Machine Learning and Cancer

The 18th Annual Koch Institute Summer Symposium on June 14, 2019 at MIT’s Kresge Auditorium will focus on Machine Learning and Cancer.

Both fields are undergoing dramatic changes, and their integration holds great promise for cancer research, diagnostics, and therapeutics. Cancer treatment and research have advanced rapidly, with an increasing reliance on data-driven decisions. The volume, complexity, and diversity of research and clinical data—from genomics and single-cell molecular and image-based profiles to histopathology, clinical imaging, and medical records—far surpass the capacity of individual scientists and physicians. However, these data offer a remarkable opportunity for new approaches in data science and machine learning to provide holistic and intelligible interpretations to trained experts and patients alike. These advances will make it possible to provide far better diagnostics, discover possible chemical pathways for de novo synthesis of therapeutic compounds, accurately predict an individual’s risk of developing specific cancers years before metastatic spread, and determine the combination of agents that will stimulate immune rejection of a tumor or selectively induce the death of all cells in a tumor.

The symposium will address these issues through three sessions:

  • Machine Learning in Cancer Research: the Need and the Opportunity
  • Machine Learning to Decipher Cellular and Molecular Mechanisms in Cancer
  • Machine Learning into the Clinic

Sessions will be followed by a panel discussion of broadly informed experts moderated by MIT President Emerita Susan Hockfield.

Introductory remarks will be given by symposium co-chairs and Koch Institute faculty members Regina Barzilay, Aviv Regev and Phillip Sharp.

 

Keynote Speakers | Machine Learning in Cancer Research: the Need and the Opportunity

James P. Allison, PhD

MD Anderson Cancer Center

Regina Barzilay, PhD

MIT Computer Science and Artificial Intelligence Lab, Koch Institute for Integrative Cancer Research at MIT

Aviv Regev, PhD

Broad Institute, Koch Institute for Integrative Cancer Research at MIT

 

Session Speakers

Michael R. Angelo, MD, PhD

Stanford University

Andrew Beck

PathAI

Stephen H. Friend, MD, PhD

Sage Bionetworks

Tommi Jaakkola, PhD

MIT Computer Science and Artificial Intelligence Lab

Dana Pe’er, PhD

Memorial Sloan Kettering Cancer Center

Peter Sorger, PhD

Harvard Medical School

Olga Troyanskaya, PhD

Princeton University

Brian Wolpin, MD

Dana-Farber Cancer Institute

 

Panel Discussion | Big Data, Computation and the Future of Health Care

James (Jay) Bradner, MD

Novartis

Clifford A. Hudis, MD

American Society of Clinical Oncology

Constance D. Lehman, MD, PhD

Massachusetts General Hospital

Norman (Ned) Sharpless, MD

National Cancer Institute

 

Moderator: Susan Hockfield, PhD

Koch Institute for Integrative Cancer Research at MIT


SOURCE

From: 2019 Koch Institute Symposium <ki-events@mit.edu>

Reply-To: <ki-events@mit.edu>

Date: Tuesday, March 12, 2019 at 11:30 AM

To: Aviva Lev-Ari <AvivaLev-Ari@alum.berkeley.edu>

Subject: Invitation to the 2019 Koch Institute Symposium – Machine Learning and Cancer

Read Full Post »


TALK ANNOUNCEMENT from Boston Chapter of ASA

 

Reporter: Aviva Lev-Ari, PhD, RN

 

From: Tom Lane [mailto:Tom.Lane@mathworks.com]
Sent: Tuesday, January 15, 2019 12:59 PM
To: Tom Lane
Subject: [BCASA] INFORMS talk “Impact of Bots on Opinions in Social Networks” Wed Feb 6 at MITRE in Bedford

 

Upcoming event from INFORMS, kindly shared with BCASA members:

 

Please join us for the first INFORMS BC talk of 2019, on the Impact of Bots on Opinions in Social Networks, by Professor Tauhid Zaman.

 

RSVP to lservi@mitre.org by Monday, February 4th, 2019

MITRE is asking that if you plan to attend, please RSVP by sending an email to lservi@mitre.org indicating your (1) name, (2) email, (3) company or university, and (4) whether you are a US citizen and, if not, your country of citizenship.


Date: Wednesday Feb 6, 2019, at 6:30 PM

Location:
The MITRE Corporation
M Building, 202 Burlington Road
Bedford, MA

Title: The Impact of Bots on Opinions in Social Networks

Speaker: Tauhid Zaman

Abstract:
We present an analysis of the impact of automated accounts, or bots, on opinions in a social network. We model the opinions using a variant of the famous DeGroot model, which connects opinions with network structure. We find a nontrivial correlation between opinions based on this network model and based on the content of tweets of Twitter users discussing the 2016 U.S. presidential election between Hillary Clinton and Donald Trump, providing evidence supporting the validity of the model. We then utilize the network model to predict what the opinions would have been if the network did not contain any bots which may be trying to manipulate opinions. Using a bot detection algorithm, we identify bot accounts which comprise less than 1% of the network. By analyzing the bot posts, we find that there are twice as many bots supporting Donald Trump as there are supporting Hillary Clinton.  We remove the bots from the network and recalculate the opinions using the network model. We find that the bots produce a significant shift in the opinions, with the Clinton bots producing almost twice as large a change as the Trump bots, despite being fewer in number. Analysis of the bot behavior reveals that the large shift is due to the fact that the bots post one hundred times more frequently than humans.  The asymmetry in the opinion shift is due to the fact that the Clinton bots post 50% more frequently than the Trump bots.  Our results suggest a small number of highly active bots in a social network can have a disproportionate impact on opinions.
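
To make the network model concrete, below is a minimal sketch of the classic DeGroot update on a toy graph. This illustrates the general mechanism only, not the authors' exact variant; the network weights and opinions are invented for the example.

```python
import numpy as np

def degroot(adj, x0, iters=500):
    # Row-normalize the weighted adjacency matrix into a trust matrix W,
    # then iterate the DeGroot update x <- W x toward consensus.
    W = adj / adj.sum(axis=1, keepdims=True)
    x = x0.astype(float).copy()
    for _ in range(iters):
        x = W @ x
    return x

# Toy network: nodes 0-3 are humans, node 4 is a bot. The bot's high
# posting rate is modeled as a large incoming edge weight from each human;
# self-loops keep every row of the adjacency matrix nonzero.
adj = np.array([
    [1, 1, 0, 1, 10],
    [1, 1, 1, 0, 10],
    [0, 1, 1, 1, 10],
    [1, 0, 1, 1, 10],
    [0, 0, 0, 0, 1],   # the bot listens only to itself (a stubborn agent)
], dtype=float)
x0 = np.array([0.5, 0.4, 0.6, 0.5, 1.0])  # opinions in [0, 1]; the bot pushes 1.0

with_bot = degroot(adj, x0)
no_bot = degroot(adj[:4, :4], x0[:4])     # drop the bot and recompute
print("mean human opinion with bot:   ", with_bot[:4].mean().round(3))
print("mean human opinion without bot:", no_bot.mean().round(3))
```

Removing the single stubborn, heavily followed bot shifts the human consensus from the bot's opinion back toward the humans' own average, mirroring the talk's finding that a small number of highly active bots can move opinions disproportionately.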

Bio: Tauhid is an Associate Professor of Operations Management at the MIT Sloan School of Management. He received his BS, MEng, and PhD degrees in electrical engineering and computer science from MIT.  His research focuses on solving operational problems involving social network data using probabilistic models, network algorithms, and modern statistical methods.  Some of the topics he studies in the social networks space include predicting the popularity of content, finding online extremists, and geo-locating users.  His broader interests cover data driven approaches to investing in startup companies, non-traditional choice modeling, algorithmic sports betting, and biometric data.  His work has been featured in the Wall Street Journal, Wired, Mashable, the LA Times, and Time Magazine.

 

SOURCE

From: “Tom Lane (ASA)” <Tom.Lane@mathworks.com>

Date: Tuesday, February 5, 2019 at 9:41 AM

To: “Tom Lane (ASA)” <Tom.Lane@mathworks.com>

Subject: [BCASA] INFORMS talk “Impact of Bots on Opinions in Social Networks” Wed Feb 6 at MITRE in Bedford

 

Read Full Post »


Graph Database Market Update 2019 – Cambridge Semantics’ AnzoGraph Graph Database product Awarded Highest Rating

Reporter: Aviva Lev-Ari, PhD, RN

 


Cambridge Semantics Awarded Highest Rating for Analytic Processing Environments in the Graph Database Market Update 2019 by Bloor Research

 

AnzoGraph Graph Database Named #1 in Several Categories of Research

 

Boston—January 23, 2019— Cambridge Semantics, the leading provider of big data management and enterprise analytics software, announced that its AnzoGraph Graph Database product was rated highest in analytic processing capabilities in the latest Graph Database Market Update 2019 by Bloor Research.

 

The Bloor Graph Database Market Update 2019 compares property graph with RDF databases, native versus non-native implementations, single- versus multi-model databases, and operational versus analytic solutions. It discusses the latest trends in this market, along with an assessment of the leading vendors, and captures market shifts over the last two years. Among those shifts, Bloor noted that Cambridge Semantics unbundled AnzoGraph from its Anzo Data Lake offering, and that Amazon and SAP also entered the graph database market.

 

Within the report, Cambridge Semantics was ranked number one in several categories including:

  • Integration – Analytic environments
  • Language – Analytic environments
  • Features – Analytic environments

 

“There are relatively few vendors in the graph database market that have the performance to focus specifically on analytics. Cambridge Semantics not only does this with AnzoGraph, but does so without requiring (very expensive) hardware acceleration or a proprietary language,” said Philip Howard, research director at Bloor Research.

 

“It is an honor to be named a leading vendor for analytics in the Graph Database Market Update 2019,” said Alok Prasad, president of Cambridge Semantics. “We have witnessed surging interest in graph databases to conduct interactive analytics and get better insights. Last year, we were very excited to spin out AnzoGraph, our graph analytics database for hyper-fast performance at Big Data scale. We are seeing very strong interest in this new offering.”

 

AnzoGraph is a massively parallel distributed native graph analytics database built to interactively analyze trillions of relationships at record speed. The database provides BI-style analytics, graph algorithms, and inferencing with open W3C standards-based graph technology and labelled property graphs. The underlying technology is a third-generation data analytics engine built by the engineers behind Netezza and the technology behind Amazon Redshift. AnzoGraph has been in production at large enterprise customers as part of Anzo and is now available behind the firewall or in the cloud on AWS, Google Cloud Platform, Microsoft Azure, and all cloud environments supporting Docker.

 

About Bloor Research

Bloor Research is a global, independent research and analyst house, focused on the idea that Evolution is Essential to business success and ultimately survival. For nearly 30 years, we have enabled businesses to understand the potential offered by technology and choose the optimal solutions for their needs.

 

About Cambridge Semantics

Cambridge Semantics Inc., The Smart Data Company®, is a big data management and enterprise analytics software company that offers a universal semantic layer to connect and bring meaning to all enterprise data. The company offers two award-winning products: Anzo, for enterprise knowledge graphs and integrated analytics, and AnzoGraph, a graph analytics database.

 

Cambridge Semantics is based in Boston, Massachusetts.

For more information visit www.cambridgesemantics.com or follow us on Facebook, LinkedIn and Twitter: @CamSemantics.

###

 

Media Contact:

Lora Wilson/Valerie Christopherson

Global Results Communications for Cambridge Semantics

cambridge@globalresultspr.com

+1 (949) 608-0276

 

SOURCE

From: Linda Sekhar <LSekhar@globalresultspr.com>

Date: Wednesday, January 23, 2019 at 11:28 AM

Subject: Press release: Cambridge Semantics Awarded Highest Rating for Analytic Processing Environments in the Graph Database Market Update 2019 by Bloor Research

Read Full Post »


Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care?

Reporter: Stephen J. Williams, Ph.D.

 

Boston Healthcare recently sponsored a webinar entitled “Role of Informatics in Precision Medicine: Implications for Innovators”. The webinar focused on the different informatics needs along the oncology care value chain, from drug discovery through clinicians, C-suite executives, and payers. The presentation, by Joseph Ferrara and Mark Girardi, discussed the specific informatics needs and deficiencies experienced by all players in oncology care and how innovators in this space could create value. The final part of the webinar discussed artificial intelligence and its role in cancer informatics.

 

Below are the mp4 video and audio for this webinar. Notes on each of the slides, with a few representative slides, are also given below.

Please click below for the mp4 of the webinar:
  • worldwide oncology-related care to increase by 40% in 2020
  • big movement to participatory care: moving decision making to the patient creates a need for information
  • cost components focused on clinical action
  • use of informatics before the clinical stage might add value to the cost chain

Key unmet needs, from the perspectives of different players in oncology care, where informatics may help in decision making:
  1. Needs of clinicians
     – informatic needs for clinical enrollment
     – informatic needs for obtaining drug access/newer therapies
  2. Needs of C-suite/health system executives
     – informatic needs to help focus on quality of care
     – informatic needs to determine health outcomes/metrics
  3. Needs of payers
     – informatic needs to determine quality metrics and manage costs
     – informatic needs to form guidelines
     – informatic needs to determine if biomarkers are used consistently and properly
     – population-level data analytics
What kinds of value innovations do tech entrepreneurs need to create in this space? Two problems need to be solved:

  • innovations in data depth and breadth
  • need to aggregate information to inform intervention

Different players in value chains have different data needs

Data Depth: Cumulative understanding of disease

Data Depth: Cumulative number of oncology transactions

  • technology innovators rely on LEGACY businesses (those that already have technology), and these LEGACY businesses have either data breadth or data depth BUT NOT BOTH (IS THIS WHERE THE GREATEST VALUE CAN BE INNOVATED?)
  • NEED to provide ACTIONABLE as well as PHENOTYPIC/GENOTYPIC DATA
  • data depth is more important in the clinical setting, as it drives solutions and cost-effective interventions. For example, Foundation Medicine, which supplies genotypic/phenotypic data for patient samples, provides high data depth
  • technologies are moving to data support
  • evidence will need to be tied to umbrella value propositions
  • informatic solutions will have to prove outcome benefit

How will Machine Learning be involved in the healthcare value chain?

  • increased emphasis on real-time datasets – CONSTANT UPDATES NEED TO OCCUR; THIS IS NOT HAPPENING, BUT IT IS VALUED BY MANY PLAYERS IN THIS SPACE
  • Interoperability of DATABASES is important! Many players in this space don’t understand the complexities of integrating these datasets

Other Articles on this topic of healthcare informatics, value-based oncology, and healthcare IT on this OPEN ACCESS JOURNAL include:

Centers for Medicare & Medicaid Services announced that the federal healthcare program will cover the costs of cancer gene tests that have been approved by the Food and Drug Administration

Broad Institute launches Merkin Institute for Transformative Technologies in Healthcare

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Paradoxical Findings in HealthCare Delivery and Outcomes: Economics in MEDICINE – Original Research by Anupam “Bapu” Jena, the Ruth L. Newhouse Associate Professor of Health Care Policy at HMS

Google & Digital Healthcare Technology

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

The Future of Precision Cancer Medicine, Inaugural Symposium, MIT Center for Precision Cancer Medicine, December 13, 2018, 8AM-6PM, 50 Memorial Drive, Cambridge, MA

Live Conference Coverage @Medcity Converge 2018 Philadelphia: Oncology Value Based Care and Patient Management

2016 BioIT World: Track 5 – April 5 – 7, 2016 Bioinformatics Computational Resources and Tools to Turn Big Data into Smart Data

The Need for an Informatics Solution in Translational Medicine

Read Full Post »


Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

Updated 12/18/2018

In efforts to reduce healthcare costs, increase accessibility of services for patients, and drive biomedical innovation, many healthcare and biotechnology professionals have looked to advances in digital technology to determine how IT can drive and extract greater value from the healthcare industry. Two areas of recent interest focus on how best to use blockchain and artificial intelligence technologies to drive greater efficiencies in our healthcare and biotechnology industries.

More importantly, with the substantial increase in ‘omic data generated both in research and in the clinical setting, it has become imperative to develop ways to securely store and disseminate massive amounts of ‘omic data to the relevant parties (researchers or clinicians) in an efficient manner, while protecting personal privacy and adhering to international regulations. This is where blockchain technologies may play an important role.

A recent Oncotarget paper by Mamoshina et al. (1) discussed the possibility that next-generation artificial intelligence and blockchain technologies could synergize to accelerate biomedical research, give patients new tools to control and profit from their personal healthcare data, and assist patients with their healthcare monitoring needs. According to the abstract:

The authors introduce new concepts to appraise and evaluate personal records, including the combination-, time- and relationship value of the data.  They also present a roadmap for a blockchain-enabled decentralized personal health data ecosystem to enable novel approaches for drug discovery, biomarker development, and preventative healthcare.  In this system, blockchain and deep learning technologies would provide the secure and transparent distribution of personal data in a healthcare marketplace, and would also be useful to resolve challenges faced by the regulators and return control over personal data including medical records to the individual.

The review discusses:

  1. Recent achievements in next-generation artificial intelligence
  2. Basic concepts of highly distributed storage systems (HDSS) as a preferred method for medical data storage
  3. The open-source blockchain Exonum and its application for a healthcare marketplace
  4. A blockchain-based platform allowing patients to have control of their data and manage access
  5. How advances in deep learning can improve data quality, especially in an era of big data

Advances in Artificial Intelligence

  • Integrative analysis of the vast amount of health-associated data from a multitude of large scale global projects has proven to be highly problematic (REF 27), as high quality biomedical data is highly complex and of a heterogeneous nature, which necessitates special preprocessing and analysis.
  • Increased computing processing power and algorithm advances have led to significant advances in machine learning, especially machine learning involving Deep Neural Networks (DNNs), which are able to capture high-level dependencies in healthcare data. Some examples of the uses of DNNs are:
  1. Prediction of drug properties(2, 3) and toxicities(4)
  2. Biomarker development (5)
  3. Cancer diagnosis (6)
  4. The first FDA-approved system based on deep learning, Arterys Cardio DL
  • Other promising systems of deep learning include:
    • Generative Adversarial Networks (https://arxiv.org/abs/1406.2661): requires good datasets for extensive training but has been used to determine tumor growth inhibition capabilities of various molecules (7)
    • Recurrent Neural Networks (RNNs): originally made for sequence analysis, RNNs have proved useful in analyzing text and time-series data, and thus would be very useful for electronic record analysis. They have also been useful in predicting blood glucose levels of Type 1 diabetic patients using data obtained from continuous glucose monitoring devices (8); a minimal sketch follows this list.
    • Transfer Learning: focused on translating information learned on one domain or larger dataset to another, smaller domain. Meant to reduce the dependence on the large training datasets that RNNs, GANs, and DNNs require. Biomedical imaging datasets are an example of the use of transfer learning.
    • One- and Zero-Shot Learning: retains the ability to work with restricted datasets, like transfer learning. One-shot learning aims to recognize new data points based on a few examples from the training set, while zero-shot learning aims to recognize new objects without seeing examples of those instances within the training set.
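
As a minimal sketch of the RNN use case above (predicting the next reading from a window of continuous glucose monitor data), the snippet below trains a small LSTM on synthetic data. The architecture and data are illustrative only, not the configuration used in the cited study (8).

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for a CGM trace (mmol/L); a real model would train
# on actual continuous glucose monitoring time series.
rng = np.random.default_rng(0)
series = 5.5 + np.cumsum(rng.normal(0, 0.1, 500))

window = 12  # predict the next reading from the previous 12
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-reading prediction:", model.predict(X[-1:], verbose=0)[0, 0])
```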

Highly Distributed Storage Systems (HDSS)

The explosion in data generation has necessitated the development of better systems for data storage and handling. HDSS need to be reliable, accessible, scalable, and affordable. This involves storing data in different nodes, and the data stored in these nodes are replicated, which makes access rapid. However, data consistency and affordability remain big challenges.

Blockchain is a distributed database used to maintain a growing list of records, in which records are grouped into blocks, linked together by a cryptographic algorithm to maintain consistency of the data. Each block contains a timestamp and a link to the previous block in the chain. Blockchain is a distributed ledger of blocks, meaning it is owned, shared, and accessible to everyone. This allows a verifiable, secure, and consistent history of a record of events.
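
To make the hash-linking concrete, here is a minimal sketch of a chain of records in Python. It is illustrative only; real blockchains layer consensus protocols, peer-to-peer replication, and digital signatures on top of this basic structure, and the sample records are invented.

```python
import hashlib
import json
import time

def make_block(record, prev_hash):
    # A block stores a timestamp, the record, and the previous block's hash;
    # hashing those fields links the blocks into a tamper-evident chain.
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for rec in ["patient A: HbA1c 6.1%", "patient A: HbA1c 5.8%"]:
    chain.append(make_block(rec, chain[-1]["hash"]))

# Tampering with an earlier record breaks the link to every later block:
chain[1]["record"] = "patient A: HbA1c 5.0%"
fields = {k: chain[1][k] for k in ("timestamp", "record", "prev_hash")}
recomputed = hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()
print("block 1 still valid?", recomputed == chain[1]["hash"])  # False
```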

Data Privacy and Regulatory Issues

The establishment of the Health Insurance Portability and Accountability Act (HIPAA) in 1996 provided much-needed regulatory guidance and a framework for clinicians and all concerned parties within the healthcare and health-data chain. HIPAA has already provided guidance for the latest technologies impacting healthcare, most notably the use of social media and mobile communications (discussed in this article: Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification). The advent of blockchain technology in healthcare brings its own unique challenges; however, HIPAA offers a basis for developing a regulatory framework in this regard. The special standards regarding electronic data transfer are explained in HIPAA’s Privacy Rule, which regulates how certain entities (covered entities) use and disclose individually identifiable health information (Protected Health Information, or PHI), and which protects the transfer of such information over any medium or electronic data format. However, some of the benefits of blockchain which may revolutionize the healthcare system may be in direct contradiction with HIPAA rules, as outlined below:

Issues of Privacy Specific In Use of Blockchain to Distribute Health Data

  • Blockchain was designed as a distributed database, maintained by multiple independent parties, and decentralized
  • Linked timestamping: although useful for time-dependent data, proof that third parties have not interfered in the process would have to be established, including accountability measures
  • Blockchain uses a consensus algorithm, even though end users may have their own private key
  • Applied cryptography measures and routines are used to decentralize authentication (publicly available)
  • Blockchain users are divided into three main categories: 1) maintainers of the blockchain infrastructure, 2) external auditors who store a replica of the blockchain, and 3) end users or clients, who may have access to a relatively small portion of the blockchain but whose software may use cryptographic proofs to verify the authenticity of data

 

YouTube video on How #Blockchain Will Transform Healthcare in 25 Years (please click below)

 

 

In Big Data for Better Outcomes, BigData@Heart, DO->IT, EHDN, the EU data Consortia, and yes, even concepts like pay for performance, Richard Bergström has had a hand in their creation. The former Director General of EFPIA, and now the head of health both at SICPA and their joint venture blockchain company Guardtime, Richard is always ahead of the curve. In fact, he’s usually the one who makes the curve in the first place.

 

 

 

Please click on the following link for a podcast on Big Data, Blockchain and Pharma/Healthcare by Richard Bergström:

References

  1. Mamoshina, P., Ojomoko, L., Yanovich, Y., Ostrovski, A., Botezatu, A., Prikhodko, P., Izumchenko, E., Aliper, A., Romantsov, K., Zhebrak, A., Ogu, I. O., and Zhavoronkov, A. (2018) Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare, Oncotarget 9, 5665-5690.
  2. Aliper, A., Plis, S., Artemov, A., Ulloa, A., Mamoshina, P., and Zhavoronkov, A. (2016) Deep Learning Applications for Predicting Pharmacological Properties of Drugs and Drug Repurposing Using Transcriptomic Data, Molecular pharmaceutics 13, 2524-2530.
  3. Wen, M., Zhang, Z., Niu, S., Sha, H., Yang, R., Yun, Y., and Lu, H. (2017) Deep-Learning-Based Drug-Target Interaction Prediction, Journal of proteome research 16, 1401-1409.
  4. Gao, M., Igata, H., Takeuchi, A., Sato, K., and Ikegaya, Y. (2017) Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds, Journal of pharmacological sciences 133, 70-78.
  5. Putin, E., Mamoshina, P., Aliper, A., Korzinkin, M., Moskalev, A., Kolosov, A., Ostrovskiy, A., Cantor, C., Vijg, J., and Zhavoronkov, A. (2016) Deep biomarkers of human aging: Application of deep neural networks to biomarker development, Aging 8, 1021-1033.
  6. Vandenberghe, M. E., Scott, M. L., Scorer, P. W., Soderberg, M., Balcerzak, D., and Barker, C. (2017) Relevance of deep learning to facilitate the diagnosis of HER2 status in breast cancer, Scientific reports 7, 45938.
  7. Kadurin, A., Nikolenko, S., Khrabrov, K., Aliper, A., and Zhavoronkov, A. (2017) druGAN: An Advanced Generative Adversarial Autoencoder Model for de Novo Generation of New Molecules with Desired Molecular Properties in Silico, Molecular pharmaceutics 14, 3098-3104.
  8. Ordonez, F. J., and Roggen, D. (2016) Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition, Sensors (Basel) 16.

Articles from clinicalinformaticsnews.com

Healthcare Organizations Form Synaptic Health Alliance, Explore Blockchain’s Impact On Data Quality

From http://www.clinicalinformaticsnews.com/2018/12/05/healthcare-organizations-form-synaptic-health-alliance-explore-blockchains-impact-on-data-quality.aspx

By Benjamin Ross

December 5, 2018 | The boom of blockchain and distributed ledger technologies has inspired healthcare organizations to test the capabilities of their data. Quest Diagnostics, in partnership with Humana, MultiPlan, and UnitedHealth Group’s Optum and UnitedHealthcare, has launched a pilot program that applies blockchain technology to improve data quality and reduce administrative costs associated with changes to healthcare provider demographic data.

The collective body, called Synaptic Health Alliance, explores how blockchain can keep only the most current healthcare provider information available in health plan provider directories. The alliance plans to share their progress in the first half of 2019.

Providing consumers looking for care with accurate information when they need it is essential to a high-functioning overall healthcare system, Jason O’Meara, Senior Director of Architecture at Quest Diagnostics, told Clinical Informatics News in an email interview.

“We were intentional about calling ourselves an alliance as it speaks to the shared interest in improving health care through better, collaborative use of an innovative technology,” O’Meara wrote. “Our large collective dataset and national footprints enable us to prove the value of data sharing across company lines, which has been limited in healthcare to date.”

O’Meara said Quest Diagnostics has been investing time and resources the past year or two in understanding blockchain, its ability to drive purpose within the healthcare industry, and how to leverage it for business value.

“Many health care and life science organizations have cast an eye toward blockchain’s potential to inform their digital strategies,” O’Meara said. “We recognize it takes time to learn how to leverage a new technology. We started exploring the technology in early 2017, but we quickly recognized the technology’s value is in its application to business to business use cases: to help transparently share information, automate mutually-beneficial processes and audit interactions.”

Quest began discussing the potential for an alliance with the four other companies a year ago, O’Meara said. Each company shared traits that would allow them to prove the value of data sharing across company lines.

“While we have different perspectives, each member has deep expertise in healthcare technology, a collaborative culture, and desire to continuously improve the patient/customer experience,” said O’Meara. “We also recognize the value of technology in driving efficiencies and quality.”

Following its initial launch in April, Synaptic Health Alliance is deploying a multi-company, multi-site, permissioned blockchain. According to a whitepaper published by Synaptic Health, the choice to use a permissioned blockchain rather than an anonymous one is crucial to the alliance’s success.

“This is a more effective approach, consistent with enterprise blockchains,” an alliance representative wrote. “Each Alliance member has the flexibility to deploy its nodes based on its enterprise requirements. Some members have elected to deploy their nodes within their own data centers, while others are using secured public cloud services such as AWS and Azure. This level of flexibility is key to growing the Alliance blockchain network.”

As the pilot moves forward, O’Meara says the Alliance plans to open membership to other organizations. Earlier this week, Aetna and Ascension announced that they had joined the project.

“I am personally excited by the amount of cross-company collaboration facilitated by this project,” O’Meara says. “We have already learned so much from each other and are using that knowledge to really move the needle on improving healthcare.”

 

US Health And Human Services Looks To Blockchain To Manage Unstructured Data

http://www.clinicalinformaticsnews.com/2018/11/29/us-health-and-human-services-looks-to-blockchain-to-manage-unstructured-data.aspx

By Benjamin Ross

November 29, 2018 | The US Department of Health and Human Services (HHS) is making waves in the blockchain space. The agency’s Division of Acquisition (DA) has developed a new system, called Accelerate, which gives acquisition teams detailed information on pricing, terms, and conditions across HHS in real-time. The department’s Associate Deputy Assistant Secretary for Acquisition, Jose Arrieta, gave a presentation and live demo of the blockchain-enabled system at the Distributed: Health event earlier this month in Nashville, Tennessee.

Accelerate is still in the prototype phase, Arrieta said, with hopes that the new system will be deployed at the end of the fiscal year.

HHS spends around $25 billion a year in contracts, Arrieta said. That’s 100,000 contracts a year with over one million pages of unstructured data managed through 45 different systems. Arrieta and his team wanted to modernize the system.

“But if you’re going to change the way a workforce of 20,000 people do business, you have to think your way through how you’re going to do that,” said Arrieta. “We didn’t disrupt the existing systems: we cannibalized them.”

The cannibalization process resulted in Accelerate. According to Arrieta, the system functions by creating a record of data rather than storing it, leveraging machine learning, artificial intelligence (AI), and robotic process automation (RPA), all through blockchain data.

“We’re using that data record as a mechanism to redesign the way we deliver services through micro-services strategies,” Arrieta said. “Why is that important? Because if you have a single application or data use that interfaces with 55 other applications in your business network, it becomes very expensive to make changes to one of the 55 applications.”

Accelerate distributes the data to the workforce, making it available to them one business process at a time.

“We’re building those business processes without disrupting the existing systems,” said Arrieta, and that’s key. “We’re not shutting off those systems. We’re using human-centered design sessions to rebuild value exchange off of that data.”

The first application for the system, Arrieta said, can be compared to department stores price-matching their online competitors.

It takes the HHS close to a month to collect the amalgamation of data from existing systems, whether that be terms and conditions that drive certain price points, or software licenses.

“The micro-service we built actually analyzes that data, and provides that information to you within one second,” said Arrieta. “This is distributed to the workforce, to the 5,000 people that do the contracting, to the 15,000 people that actually run the programs at [HHS].”

This simple micro-service is replicated on every node related to HHS’s internal workforce. If somebody wants to change the algorithm to fit their needs, they can do that in a distributed manner.

Arrieta hopes to use Accelerate to save researchers money at the point of purchase. The program uses blockchain to simplify the process of acquisition.

“How many of you work with the federal government?” Arrieta asked the audience. “Do you get sick of reentering the same information over and over again? Every single business opportunity you apply for, you have to resubmit your financial information. You constantly have to check for validation and verification, constantly have to resubmit capabilities.”

Wouldn’t it be better to have historical notes available for each transaction? said Arrieta. This would allow clinical researchers to be able to focus on “the things they’re really good at,” instead of red tape.

“If we had the top cancer researcher in the world, would you really want her spending her time learning about federal regulations as to how to spend money, or do you want her trying to solve cancer?” Arrieta said. “What we’re doing is providing that data to the individual in a distributed manner so they can read the information of historical purchases that support activity, and they can focus on the objectives and risks they see as it relates to their programming and their objectives.”

Blockchain also creates transparency among researchers, Arrieta said, which he says creates an “uncomfortable reality” in that they have to make decisions regarding data, fundamentally changing value exchange.

“The beauty of our business model is internal investment,” Arrieta said. For instance, the HHS could take all the sepsis data that exists in their system, put it into a distributed ledger, and share it with an external source.

“Maybe that could fuel partnership,” Arrieta said. “I can make data available to researchers in the field in real-time so they can actually test their hypothesis, test their intuition, and test their imagination as it relates to solving real-world problems.”

 

Shivom is creating a genomic data hub to elongate human life with AI

From VentureBeat.com
Blockchain-based genomic data hub platform Shivom recently reached its $35 million hard cap within 15 seconds of opening its main token sale. Shivom received funding from a number of crypto VC funds, including Collinstar, Lateral, and Ironside.

The goal is to create the world’s largest store of genomic data while offering an open web marketplace for patients, data donors, and providers — such as pharmaceutical companies, research organizations, governments, patient-support groups, and insurance companies.

“Disrupting the whole of the health care system as we know it has to be the most exciting use of such large DNA datasets,” Shivom CEO Henry Ines told me. “We’ll be able to stratify patients for better clinical trials, which will help to advance research in precision medicine. This means we will have the ability to make a specific drug for a specific patient based on their DNA markers. And what with the cost of DNA sequencing getting cheaper by the minute, we’ll also be able to sequence individuals sooner, so young children or even newborn babies could be sequenced from birth and treated right away.”

While there are many solutions examining DNA data to explain heritage, intellectual capabilities, health, and fitness, the potential of genomic data has largely yet to be unlocked. A few companies hold the monopoly on genomic data and make sizeable profits from selling it to third parties, usually without sharing the earnings with the data donor. Donors are also not informed if and when their information is shared, nor do they have any guarantee that their data is secure from hackers.

Shivom wants to change that by creating a decentralized platform that will break these monopolies, democratizing the processes of sharing and utilizing the data.

“Overall, large DNA datasets will have the potential to aid in the understanding, prevention, diagnosis, and treatment of every disease known to mankind, and could create a future where no diseases exist, or those that do can be cured very easily and quickly,” Ines said. “Imagine that, a world where people do not get sick or are already aware of what future diseases they could fall prey to and so can easily prevent them.”

Shivom’s use of blockchain technology and smart contracts ensures that all genomic data shared on the platform will remain anonymous and secure, while its OmiX token incentivizes users to share their data for monetary gain.

Rise in Population Genomics: Local Government in India Will Use Blockchain to Secure Genetic Data

Blockchain will secure the DNA database for 50 million citizens of Andhra Pradesh, the eighth-largest state in India. The state government signed a Memorandum of Understanding with Shivom, a German genomics and precision medicine start-up, which announced that it will start the pilot project soon. The move falls in line with a trend of governments turning to population genomics while securing the sensitive data through blockchain.

Andhra Pradesh, DNA, and blockchain

Storing sensitive genetic information safely and securely is a big challenge. Shivom builds a genomic data-hub powered by blockchain technology. It aims to connect researchers with DNA data donors thus facilitating medical research and the healthcare industry.

With regards to Andhra Pradesh, the start-up will first launch a trial to determine the viability of their technology for moving from a proactive to a preventive approach in medicine, and towards precision health. “Our partnership with Shivom explores the possibilities of providing an efficient way of diagnostic services to patients of Andhra Pradesh by maintaining the privacy of the individual data through blockchain technologies,” said J A Chowdary, IT Advisor to Chief Minister, Government of Andhra Pradesh.

Other Articles in this Open Access Journal on Digital Health include:

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

 

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

 

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

 

Medcity Converge 2018 Philadelphia: Live Coverage @pharma_BI

 

Digital Health Breakthrough Business Models, June 5, 2018 @BIOConvention, Boston, BCEC

Read Full Post »


Bioinformatics Tool Review: Genome Variant Analysis Tools

Curator: Stephen J. Williams, Ph.D.

Updated 11/15/2018

The following post will be an ongoing curation of reviews of gene variant bioinformatic software.

 

The Ensembl Variant Effect Predictor.

McLaren W, Gil L, Hunt SE, Riat HS, Ritchie GR, Thormann A, Flicek P, Cunningham F.

Genome Biol. 2016 Jun 6;17(1):122. doi: 10.1186/s13059-016-0974-4.

Author information:

1. European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Genome Campus, Hinxton, Cambridge, CB10 1SD, UK. wm2@ebi.ac.uk
2. European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Genome Campus, Hinxton, Cambridge, CB10 1SD, UK.
3. European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Genome Campus, Hinxton, Cambridge, CB10 1SD, UK. fiona@ebi.ac.uk

Abstract

The Ensembl Variant Effect Predictor is a powerful toolset for the analysis, annotation, and prioritization of genomic variants in coding and non-coding regions. It provides access to an extensive collection of genomic annotation, with a variety of interfaces to suit different requirements, and simple options for configuring and extending analysis. It is open source, free to use, and supports full reproducibility of results. The Ensembl Variant Effect Predictor can simplify and accelerate variant interpretation in a wide range of study designs.

 

Rare diseases can be difficult to diagnose due to low incidence and incomplete penetrance of the implicated alleles; however, variant analysis of whole genome sequencing (WGS) can identify the underlying genetic events responsible for a disease (Nature, 2015). A large cohort is nevertheless required for many WGS association studies in order to produce enough statistical power for interpretation (see post and here). To this effect, major sequencing projects have been initiated worldwide.

A more thorough curation of sequencing projects can be seen in the following post:

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

 

And although sequencing costs have dropped dramatically over the years, the cost of determining the functional consequences of variants remains high, as thorough basic research studies must be conducted to validate the interpretation of variant data with respect to the underlying disease, and only a small fraction of variants from a genome sequencing project will encode a functional protein. Correct annotation of sequences and variants, and identification of the correct corresponding reference genes or transcripts in GENCODE or RefSeq, respectively, pose compelling challenges to the proper identification of sequenced variants as potential functional variants.

To this effect, the authors developed the Ensembl Variant Effect Predictor (VEP), which is a software suite that performs annotations and analysis of most types of genomic variation in coding and non-coding regions of the genome.

Summary of Features

  • Annotation: VEP can annotate two broad categories of genomic variants
    • Sequence variants with specific and defined changes: indels, base substitutions, SNVs, tandem repeats
    • Larger structural variants > 50 nucleotides
  • Species and assembly/genomic database support: VEP can analyze data from any species with an assembled genome sequence and an annotated gene set. It supports chromosome assemblies such as the latest GRCh38, accepts FASTA files, and uses transcripts from RefSeq as well as user-derived sequences
  • Transcript Annotation: VEP includes a wide variety of gene and transcript related information including NCBI Gene ID, Gene Symbol, Transcript ID, NCBI RefSeq ID, exon/intron information, and cross reference to other databases such as UniProt
  • Protein Annotation: Protein-related fields include Protein ID, RefSeq ID, SwissProt, UniParc ID, reference codons and amino acids, SIFT pathogenicity score, protein domains
  • Noncoding Annotation: VEP reports variants in noncoding regions, including genomic regulatory regions, intronic regions, and transcription factor binding motifs. Data from ENCODE, BLUEPRINT, and the NIH Epigenomics Roadmap are used for primary annotation. Perl plugins are also available to link other databases that annotate noncoding sequence features.
  • Frequency, phenotype, and citation annotation: VEP searches Ensembl databases containing a large amount of germline variant information and checks variants against the dbSNP single nucleotide polymorphism database. VEP integrates with mutational databases such as COSMIC, the Human Gene Mutation Database, and structural and copy number variants from Database of Genomic Variants.  Allele Frequencies are reported from 1000 Genomes and NHLBI and integrates with PubMed for literature annotation.  Phenotype information is from OMIM, Orphanet, GWAS and clinical information of variants from ClinVar.
  • Flexible Input and Output Formats: VEP supports the Variant Call Format (VCF), a standard in next-generation sequencing, as input. VEP can also process variant identifiers from other database formats. Output formats are tab-delimited and give the user choices in the presentation of results (HTML or text based)
  • Choice of user interface
    • Online tool (VEP Web): simple point and click; incorporates Instant VEP Functionality and copy and paste features. Results can be stored online in cloud storage on Ensembl.
    • VEP script: VEP is available as a downloadable Perl script (see below for link) and can process large amounts of data rapidly. This interface is powerfully flexible, with the ability to integrate multiple plugins available from Ensembl and GitHub. The ability to alter the Perl code and add plugins and code functions allows the user to modify any feature of VEP.
    • VEP REST API: provides robust computational access from any programming language and returns basic variant annotation; it can make use of external plugins. A minimal example follows this list.
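
As a sketch of programmatic access, the snippet below queries the public Ensembl REST server for a single variant in HGVS notation. The endpoint shape and the example identifier follow Ensembl's public REST documentation, but verify against the current documentation before relying on this.

```python
import requests

server = "https://rest.ensembl.org"
hgvs = "ENST00000366667:c.803C>T"  # example variant in HGVS notation

# GET /vep/:species/hgvs/:hgvs_notation returns VEP annotations as JSON.
response = requests.get(
    f"{server}/vep/human/hgvs/{hgvs}",
    headers={"Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()

for result in response.json():
    print(result.get("most_severe_consequence"))
```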

 

 

Watch Video on the VEP Instructional Webinar: https://youtu.be/7Fs7MHfXjWk

Watch Video on VEP Web Version training on How to Analyze Your Sequence in VEP

 

 

Availability of data and materials

The dataset supporting the conclusions of this article is available from Illumina’s Platinum Genomes [93] and using the Ensembl release 75 gene set. Pre-built data sets are available for all Ensembl and Ensembl Genomes species [94]. They can also be downloaded automatically during set up whilst installing the VEP.

 

References

Large-scale discovery of novel genetic causes of developmental disorders.

Deciphering Developmental Disorders Study.

Nature. 2015 Mar 12;519(7542):223-8. doi: 10.1038/nature14135. PMID: 25533962

Updated 11/15/2018

 

Research Points to Caution in Use of Variant Effect Prediction Bioinformatic Tools

Although we have the ability to use high throughput sequencing to identify allelic variants occurring in rare disease, correlation of these variants with the underlying disease is often difficult due to a few concerns:

  • For rare sporadic diseases, classical gene/variant association studies have proven difficult to perform (Meyts et al. 2016)
  • As Whole Exome Sequencing (WES) returns a considerable number of variants, how to differentiate the normal allelic variation found in the human population from disease-causing pathogenic alleles
  • For rare diseases, pathogenic allele frequencies are generally low

Therefore, for these rare pathogenic alleles, the use of bioinformatics tools in order to predict the resulting changes in gene function may provide insight into disease etiology when validation of these allelic changes might be experimentally difficult.
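
As a toy illustration of the frequency-based filtering this implies, the sketch below keeps variants whose population allele frequency falls below 1% and that lie within a candidate gene list. The column names, gene list, and values are all invented for the example.

```python
import pandas as pd

# Invented variant table; real input would come from a WES/WGS pipeline
# annotated with population allele frequencies (e.g., from a reference cohort).
variants = pd.DataFrame({
    "gene":        ["TLR3", "IFNLR1", "ACTB", "TBK1"],
    "allele_freq": [0.0004, 0.008, 0.12, 0.002],
})
candidate_genes = {"TLR3", "IFNLR1", "TBK1"}  # hypothetical innate-immunity panel

rare = variants[(variants["allele_freq"] < 0.01)
                & variants["gene"].isin(candidate_genes)]
print(rare)  # keeps the rare TLR3, IFNLR1, and TBK1 variants; drops common ACTB
```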

In a 2017 Genes & Immunity paper, Line Lykke Andersen and Rune Hartmann tested the reliability of various bioinformatics tools in predicting the functional consequences of variants of six different genes involved in interferon induction and of sixteen allelic variants of the IFNLR1 gene. These variants were found in cohorts of patients presenting with herpes simplex encephalitis (HSE). Most of the adult population is seropositive for Herpes Simplex Virus (HSV); however, a minor fraction (1 in 250,000 individuals per year) of HSV-infected individuals will develop HSE (Hjalmarsson et al., 2007). It has been suggested that HSE occurs in individuals with rare primary immunodeficiencies caused by gene defects affecting innate immunity through reduced production of interferons (IFNs) (Zhang et al., Lim et al.).

 

References

Meyts I, Bosch B, Bolze A, Boisson B, Itan Y, Belkadi A, et al. Exome and genome sequencing for inborn errors of immunity. J Allergy Clin Immunol. 2016;138:957–69.

Hjalmarsson A, Blomqvist P, Skoldenberg B. Herpes simplex encephalitis in Sweden, 1990-2001: incidence, morbidity, and mortality. Clin Infect Dis. 2007;45:875–80.

Zhang SY, Jouanguy E, Ugolini S, Smahi A, Elain G, Romero P, et al. TLR3 deficiency in patients with herpes simplex encephalitis. Science. 2007;317:1522–7.

Lim HK, Seppanen M, Hautala T, Ciancanelli MJ, Itan Y, Lafaille FG, et al. TLR3 deficiency in herpes simplex encephalitis: high allelic heterogeneity and recurrence risk. Neurology. 2014;83:1888–97.

 

Genes Immun. 2017 Dec 4. doi: 10.1038/s41435-017-0002-z.

Frequently used bioinformatics tools overestimate the damaging effect of allelic variants.

Andersen LL(1), Terczyńska-Dyla E(1), Mørk N(2), Scavenius C(1), Enghild JJ(1), Höning K(3), Hornung V(3,4), Christiansen M(5,6), Mogensen TH(2,6), Hartmann R(7).

 

Abstract

We selected two sets of naturally occurring human missense allelic variants within innate immune genes. The first set represented eleven non-synonymous variants in six different genes involved in interferon (IFN) induction, present in a cohort of patients suffering from herpes simplex encephalitis (HSE) and the second set represented sixteen allelic variants of the IFNLR1 gene. We recreated the variants in vitro and tested their effect on protein function in a HEK293T cell based assay. We then used an array of 14 available bioinformatics tools to predict the effect of these variants upon protein function. To our surprise two of the most commonly used tools, CADD and SIFT, produced a high rate of false positives, whereas SNPs&GO exhibited the lowest rate of false positives in our test. As the problem in our test in general was false positive variants, inclusion of mutation significance cutoff (MSC) did not improve accuracy.

Methodology

  1. Identification of rare variants
  2. Genomes of nineteen Dutch patients with a history of HSE were sequenced by WES, and novel HSE-causing variants were identified by filtering for single nucleotide polymorphisms (SNPs) that had a frequency below 1% in the NHLBI Exome Sequencing Project Exome Variant Server and the 1000 Genomes Project and were present within 204 genes involved in the immune response to HSV.
  3. Identified variants (204) were manually evaluated for involvement in IFN induction based on IDBase and KEGG pathway database analysis.
  4. In-silico predictions: variants were classified by the in silico variant pathogenicity prediction programs SIFT, Mutation Assessor, FATHMM, PROVEAN, SNAP2, PolyPhen2, PhD-SNP, SNPs&GO, FATHMM-MKL, MutationTaster2, PredictSNP, Condel, Meta-SNP, and CADD. Each program returns a prediction score measuring the likelihood of a variant being either ‘deleterious’ or ‘neutral’. Prediction accuracy was measured as

ACC = (TP + TN) / (TP + TN + FP + FN)

where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. A quick numerical check appears after this list.

 

  5. Validation of prediction software/tools

In order to validate the predictive value of the software, HEK293T cells deficient in IRF3, MAVS, and IKKε/TBK1 were co-transfected with the nine variants of the aforementioned genes and a luciferase reporter under the control of the IFN-β promoter, and luciferase activity was measured as an indicator of IFN signaling function. Western blotting was performed to confirm expression of the constructs.
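
As a quick numerical check of the accuracy formula against two rows of Table 2 below (counts transcribed from the paper, not re-derived):

```python
def acc(tn, tp, fn, fp):
    # Fraction of variants whose predicted effect matched the functional test.
    return (tp + tn) / (tp + tn + fp + fn)

print(round(acc(tn=4, tp=1, fn=0, fp=4), 2))   # SIFT on HSE variants -> 0.56
print(round(acc(tn=14, tp=1, fn=0, fp=1), 2))  # SNPs&GO on IFNLR1 -> 0.94
```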

 

Results

Table 2. Summary of the bioinformatic predictions (TN: true negative, TP: true positive, FN: false negative, FP: false positive, ACC: accuracy). Functional testing (data obtained from the reporter construct experiments) was considered the correct outcome.

                     HSE variants                 IFNLR1 variants              Overall
                     TN  TP  FN  FP  Total  ACC   TN  TP  FN  FP  Total  ACC   ACC
Uniform cutoff
SIFT                  4   1   0   4    9   0.56    8   1   0   7   16   0.56   0.56
Mutation Assessor     6   1   0   2    9   0.78    9   1   0   6   16   0.63   0.68
FATHMM                7   1   0   1    9   0.89    –   –   –   –    –    –     0.89
PROVEAN               8   1   0   0    9   1.00   11   1   0   4   16   0.75   0.84
SNAP2                 5   1   0   3    9   0.67    8   0   1   7   16   0.50   0.56
PolyPhen2             6   1   0   2    9   0.78   12   1   0   3   16   0.81   0.80
PhD-SNP               7   1   0   1    9   0.89   11   1   0   4   16   0.75   0.80
SNPs&GO               8   1   0   0    9   1.00   14   1   0   1   16   0.94   0.96
FATHMM-MKL            4   1   0   4    9   0.56   13   0   1   2   16   0.81   0.72
MutationTaster2       4   0   1   4    9   0.44   14   0   1   1   16   0.88   0.72
PredictSNP            6   1   0   2    9   0.78   11   1   0   4   16   0.75   0.76
Condel                6   1   0   2    9   0.78    –   –   –   –    –    –     0.78
Meta-SNP              8   1   0   0    9   1.00   11   1   0   4   16   0.75   0.84
CADD                  2   1   0   6    9   0.33    8   0   1   7   16   0.50   0.44
MSC 95% cutoff
SIFT                  5   1   0   3    9   0.67    8   1   0   8   16   0.50   0.56
PolyPhen2             6   1   0   2    9   0.78   13   1   0   3   16   0.81   0.80
CADD                  4   1   0   4    9   0.56    7   0   1   9   16   0.44   0.48

Three prediction tools (PROVEAN, SNPs&GO, and Meta-SNP) correctly predicted the effect of all nine HSE variants tested.

 

Other articles related to Genomics and Bioinformatics on this online Open Access Journal Include:

Finding the Genetic Links in Common Disease: Caveats of Whole Genome Sequencing Studies

 

Large-scale sequencing does not support the idea that lower-frequency variants have a major role in predisposition to type 2 diabetes

 

US Personalized Cancer Genome Sequencing Market Outlook 2018 –

 

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

Read Full Post »
