
Archive for the ‘Computational Biology/Systems and Bioinformatics’ Category


Real Time @BIOConvention #BIO2019: #Bitcoin Your Data! From Trusted Pharma Silos to Trustless Community-Owned Blockchain-Based Precision Medicine Data Trials

Reporter: Stephen J Williams, PhD @StephenJWillia2

As care for lifestyle-driven chronic diseases expands in scope, prevention and recovery are becoming the new areas of focus. Building a precision medicine foundation that promotes ownership of individuals’ health data and allows that data to be shared and traded could prove a great use case for blockchain.

At its core, blockchain may offer the potential of a shared platform that decentralizes healthcare interactions, ensuring access control, authenticity, and integrity, while presenting the industry with radical possibilities for value-based care and reimbursement models. Panelists will explore these new discoveries as well as look to answer lingering questions, such as: are we headed toward a “trustless” information model underpinned by Bitcoin-style cryptocurrency, where no central authority validates the transactions in the ledger, and anyone whose computer can do the required math can join to mine and add blocks to your data? Would smart contracts begin to incentivize “rational” behaviors where consumers respond in a manner that makes their data interesting?

Moderator: Cybersecurity is extremely important in the minds of healthcare CEOs. The CEO of Kaiser Permanente has listed it as one of the main concerns for his company.

Sanjeey of Singularity: There are very few companies in this space. Singularity has collected data from thousands of patients. They wanted to do predictive health care, where a patient knows beforehand what health problems and issues to expect. They created a program called Virtual Assistant; as data is dynamic, the goal was to provide the Virtual Assistant to everyone.

Benefits of blockchain: secure, simple-to-update, decentralized data; patients can control their own data, decide who sees it, and monetize it.

Nebula Genomics: The company was founded by Dr. George Church, a pioneer of next-generation sequencing (NGS). The company’s goal is to make genomics available to all, but this is not currently the case, as NGS is not yet used as frequently as it could be.

The problem is a data problem:

  • data not organized
  • data too parsed
  • data not accessible

Blockchain may be able to alleviate the accessibility problem. Pharma is very interested in the data, but it is expensive to collect. In addition, many companies do only large-scale but low-depth sequencing. For example, 23andMe (which recently made a big data deal with Lilly) covers only about 1% of the genome.

There are two types of genome sequencing companies

  1. large scale and low depth – like 23andMe
  2. smaller scale but higher depth – like deCODE and some of the EU exome sequencing efforts such as the 1000 Genomes Project

Simply Vital Health: Harnesses blockchain to combat inefficiencies in hospital records. They tackle the costs after acute care in order to increase value-based care. Most of healthcare spending is concentrated on the top earners, and little is concentrated on the less affluent majority and the poor. On addressing HIPAA compliance issues: they decided to work with HIPAA and comply, but will wait for the industry to catch up so that the industry as a whole can lobby for the policy changes required for blockchain technology to work efficiently in this arena. They will only work with known vendors: it is VERY important to know where the data are kept and who controls the servers you are using. With other blockchains like Ethereum or Bitcoin, the servers are anonymous.

Encrypgen: generates a new blockchain for genomic data and NGS companies.

 

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

#blockchain
#bitcoin
#clinicaltrials




Real Time Coverage @BIOConvention #BIO2019: Issues of Risk and Reproducibility in Translational and Academic Collaboration; 2:30-4:00 June 3 Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

Derisking Academic Science: The Unmet Need  

Translating academic research into products and new therapies is a very risky venture, as only 1% of academic research has been successfully translated into products.

Speakers
Collaboration among Chicago-area universities such as the University of Chicago and Northwestern: the first phase enhanced collaboration between universities by funding faculty recruitment and basic research, with access to core facilities across universities. The effort has since expanded to offer alternatives to company formation.
Half of the partnerships between Harvard and companies have been able to spin out viable startups.
Most academic PIs are not savvy enough to start a biotech on their own, so they bring in biotechs and build project teams, as well as developing a bench of ex-pharma and biotech experts. They derisk by running each effort as a single-asset project and partner as early as possible. A third of their pipeline has been successfully partnered. They work with investors and patent attorneys.
Focused on getting PIs to the startup stage, concentrating on oncology, vaccines, and immuno-oncology (I/O). The result can be licensing or a partnership. They run around 50 to 60 projects and create new companies from these US PI partnerships.
Most projects from Harvard have been therapeutics-based. Harvard has a network of investors ($50 million) and screens PI proposals based on translatability and what investors are interested in.
In Chicago they solicit multiple projects and are agnostic on area, but as resources are limited they focus on projects that will help develop a stronger proposal for an investor or funding mechanism.
NYU goes around the university doing due diligence and reaching out to investigators, shopping its projects around to whet investor and pharma appetite for future funding. Takeda has five centers around the US; they want to have more input, so their scientists go into the university to discuss ideas.
Challenges:

Takeda: Data validation is very important. Second, there may be a disconnect over the amount of equity the PI wants in the new company, as well as over management. Third, PIs are not aware of all the steps in drug development.

Harvard: Pharma and biotech have robust research organizations, and academia does not have the size or scope of pharma. PIs must be more diligent about, e.g., the compounds they get from a screen… they tend to focus too narrowly.

NYU: They bring in consultants, as PIs don’t understand all the management issues. PIs need to understand development, so NYU brings in experts to help them. He feels pharma has too much risk aversion, and none of their PIs want 100% equity.

Chicago: They like to publish at an early stage, so publication freedom is a challenge.

Dr. Freedman: Most scientists responding to a Nature survey agreed there is a reproducibility crisis. The reasons: experimental bias; lack of validation of techniques, reagents, and protocols; etc.
And, as he notes, there is a great ECONOMIC IMPACT of preclinical reproducibility issues: to the tune of $56 billion of irreproducible results (paper published in PLOS Biology). If the core drivers of this issue can be found, the problem can be solved. STANDARDS are constantly used in various industries; academic research, however, is lagging in developing such standards. The problem of cell line authentication alone is costing $4 billion.
Dr. Cousins: There are multiple high-throughput screening (HTS) academic centers around the world (150 in the US). So where does the industry go for best practices in assays? Eli Lilly developed a manual of HTS best practices and in 1984 made it publicly available (the Assay Guidance Manual). To date there have been constant updates to this manual to incorporate new assays, and workshops have been developed to train scientists in these best practices.
NIH has been developing new programs to address these reproducibility issues, including a “Ring Testing Initiative” in which multiple centers share reagents and assays, allowing scientists to test at multiple facilities.
Dr. Tong: Reproducibility of microarrays: Microarrays were the only methodology for high-throughput genomics in the early 2000s, and although much research had been performed to standardize the technology and achieve the best possible reproducibility (determining best practices for spotting RNA on glass slides, hybridization protocols, and image analysis), little had been done to evaluate the reproducibility of results obtained from microarray experiments involving biological samples. The advent of artificial intelligence and machine learning, though, can help validate microarray results. This was done by an international consortium, the MAQC (MicroArray Quality Control) Society, in a Nature Biotechnology paper (Nature Biotechnology 28:827–838, 2010).
However, Dr. Tong feels there is much confusion in how we define reproducibility. He identified a few key components of data reproducibility:
  1. Traceability: the practices and procedures used in going from point A to point B (the steps in a protocol or experimental design)
  2. Repeatability: the ability to repeat results within the same laboratory
  3. Replicability: the ability to reproduce results across laboratories
  4. Transferability: whether the results are validated across multiple platforms

The panel then discussed the role of journals and funders in driving reproducibility in research. They felt that editors have been doing as much as they can, since they receive an end product (the paper), but all agreed that funders need to do more to promote data validity, especially by requiring systematic evaluation and validation of each step in protocols. There could also be more training of PIs with respect to protocol and data validation.

Other Articles on Industry/Academic Research Partnerships and Translational Research on this Open Access Online Journal Include

Envisage-Wistar Partnership and Immunacel LLC Presents at PCCI

BIO Partnering: Intersection of Academic and Industry: BIO INTERNATIONAL CONVENTION June 23-26, 2014 | San Diego, CA

R&D Alliances between Big Pharma and Academic Research Centers: Pharma’s Realization that Internal R&D Groups alone aren’t enough



BioInformatic Resources at the Environmental Protection Agency: Tools and Webinars on Toxicity Prediction

Curator: Stephen J. Williams, PhD

New GenRA Module in EPA’s CompTox Dashboard Will Help Predict Potential Chemical Toxicity

Published September 25, 2018

As part of its ongoing computational toxicology research, EPA is developing faster and improved approaches to evaluate chemicals for potential health effects.  One commonly applied approach is known as chemical read-across. Read-across uses information about how a chemical with known data behaves to make a prediction about the behavior of another chemical that is “similar” but does not have as much data. Current read-across, while cost-effective, relies on a subjective assessment, which leads to varying predictions and justifications depending on who undertakes and evaluates the assessment.

To reduce uncertainties and develop a more objective approach, EPA researchers have developed an automated read-across tool called Generalized Read-Across (GenRA), and added it to the newest version of the EPA Computational Toxicology Dashboard. The goal of GenRA is to encode as many expert considerations used within current read-across approaches as possible and combine these with data-driven approaches to transition read-across towards a more systematic and data-based method of making predictions.

EPA chemist Dr. Grace Patlewicz says it was this uncertainty that motivated the development of GenRA. “You don’t actually know if you’ve been successful at using read-across to help predict chemical toxicity because it’s a judgement call based on one person versus the next. That subjectivity is something we were trying to move away from,” Patlewicz says.

Since toxicologists and risk assessors are already familiar with read-across, EPA researchers saw value in creating a tool that was aligned with the current read-across workflow but addressed uncertainty using data analysis methods, in what they call a “harmonized-hybrid workflow.”

In its current form, GenRA lets users find analogues, or chemicals that are similar to their target chemical, based on chemical structural similarity. The user can then select which analogues they want to carry forward into the GenRA prediction by exploring the consistency and concordance of the underlying experimental data for those analogues. Next, the tool predicts toxicity effects of specific repeated dose studies. Then, a plot with these outcomes is generated based on a similarity-weighted activity of the analogue chemicals the user selected. Finally, the user is presented with a data matrix view showing whether a chemical is predicted to be toxic (yes or no) for a chosen set of toxicity endpoints, with a quantitative measure of uncertainty.
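To make the similarity-weighted prediction step concrete, here is a minimal Python sketch of generalized read-across in the spirit described above. It is illustrative only, not the GenRA implementation: the fingerprints, the analogue data, and the `jaccard`/`read_across` helpers are all hypothetical.

```python
# Minimal sketch of similarity-weighted read-across (illustrative only;
# not the actual GenRA implementation). Chemicals are represented as
# sets of structural-fingerprint bits; toxicity calls are binary.

def jaccard(a: set, b: set) -> float:
    """Structural similarity between two fingerprint bit-sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def read_across(target_fp, analogues, k=5, threshold=0.5):
    """Predict a binary toxicity call for target_fp from its k most
    similar analogues, weighting each analogue's known outcome by its
    similarity to the target. Returns (call, score), where score is
    the similarity-weighted fraction of positive analogues."""
    ranked = sorted(analogues, key=lambda a: jaccard(target_fp, a["fp"]),
                    reverse=True)[:k]
    wsum = sum(jaccard(target_fp, a["fp"]) for a in ranked)
    if wsum == 0:
        return None, 0.0  # no structural overlap: no prediction
    score = sum(jaccard(target_fp, a["fp"]) * a["toxic"] for a in ranked) / wsum
    return score >= threshold, score

# Hypothetical analogues with known repeat-dose outcomes (1 = toxic).
analogues = [
    {"name": "A1", "fp": {1, 2, 3, 5, 8}, "toxic": 1},
    {"name": "A2", "fp": {1, 2, 4, 8},    "toxic": 0},
    {"name": "A3", "fp": {2, 3, 5, 9},    "toxic": 1},
]
call, score = read_across({1, 2, 3, 8}, analogues, k=3)
print(call, round(score, 2))
```

The similarity-weighted score plays the role of the quantitative uncertainty measure mentioned above: calls near the threshold are less trustworthy than calls near 0 or 1.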

The team is also comparing chemicals based on other similarity contexts, such as physicochemical characteristics or metabolic similarity, as well as extending the approach to make quantitative predictions of toxicity.

Patlewicz thinks incorporating other contexts and similarity measures will refine GenRA to make better toxicity predictions, fulfilling the goal of creating a read-across method capable of assessing thousands of chemicals that currently lack toxicity data.

“That’s the direction that we’re going in,” Patlewicz says. “Recognizing where we are and trying to move towards something a little bit more objective, showing how aspects of the current read-across workflow could be refined.”

Learn more at: https://comptox.epa.gov

 

A listing of EPA Tools for Air Quality Assessment

Tools

  • Atmospheric Model Evaluation Tool (AMET)
    AMET helps in the evaluation of meteorological and air quality simulations.
  • Benchmark Dose Software (BMDS)
    EPA developed the Benchmark Dose Software (BMDS) as a tool to help estimate the dose or exposure of a chemical or chemical mixture associated with a given response level. The methodology is used by EPA risk assessors and is fast becoming the world’s standard for dose-response analysis in risk assessments, including air pollution risk assessments. (A sketch of the underlying dose-response calculation appears after this list.)
  • BenMAP
    BenMAP is a Windows-based computer program that uses a Geographic Information System (GIS)-based approach to estimate the health impacts and economic benefits occurring when populations experience changes in air quality.
  • Community-Focused Exposure and Risk Screening Tool (C-FERST)
    C-FERST is an online tool developed by EPA in collaboration with stakeholders to provide access to resources that can be used with communities to help identify and learn more about their environmental health issues and explore exposure and risk reduction options.
  • Community Health Vulnerability Index
    EPA scientists developed a Community Health Vulnerability Index that can be used to help identify communities at higher health risk from wildfire smoke. Breathing smoke from a nearby wildfire is a health threat, especially for people with lung or heart disease, diabetes and high blood pressure as well as older adults, and those living in communities with poverty, unemployment and other indicators of social stress. Health officials can use the tool, in combination with air quality models, to focus public health strategies on vulnerable populations living in areas where air quality is impaired, either by wildfire smoke or other sources of pollution. The work was published in Environmental Science & Technology.
  • Critical Loads Mapper Tool
    The Critical Loads Mapper Tool can be used to help protect terrestrial and aquatic ecosystems from atmospheric deposition of nitrogen and sulfur, two pollutants emitted from fossil fuel burning and agricultural emissions. The interactive tool provides easy access to information on deposition levels through time; critical loads, which identify thresholds when pollutants have reached harmful levels; and exceedances of these thresholds.
  • EnviroAtlas
    EnviroAtlas provides interactive tools and resources for exploring the benefits people receive from nature or “ecosystem goods and services”. Ecosystem goods and services are critically important to human health and well-being, but they are often overlooked due to lack of information. Using EnviroAtlas, many types of users can access, view, and analyze diverse information to better understand the potential impacts of various decisions.
  • EPA Air Sensor Toolbox for Citizen Scientists
    EPA’s Air Sensor Toolbox for Citizen Scientists provides information and guidance on new low-cost compact technologies for measuring air quality. Citizens are interested in learning more about local air quality where they live, work and play. EPA’s Toolbox includes information about: Sampling methodologies; Calibration and validation approaches; Measurement methods options; Data interpretation guidelines; Education and outreach; and Low cost sensor performance information.
  • ExpoFIRST
    The Exposure Factors Interactive Resource for Scenarios Tool (ExpoFIRST) brings data from EPA’s Exposure Factors Handbook: 2011 Edition (EFH) to an interactive tool that maximizes flexibility and transparency for exposure assessors. ExpoFIRST represents a significant advance for regional, state, and local scientists in performing and documenting calculations for community and site-specific exposure assessments, including air pollution exposure assessments.
  • EXPOsure toolbox (ExpoBox)
    This is a toolbox created to assist individuals from within government, industry, academia, and the general public with assessing exposure, including exposure to air contaminants, fate and transport processes of air pollutants and their potential exposure concentrations. It is a compendium of exposure assessment tools that links to guidance documents, databases, models, reference materials, and other related resources.
  • Federal Reference & Federal Equivalency Methods
    EPA scientists develop and evaluate Federal Reference Methods and Federal Equivalency Methods for accurately and reliably measuring six primary air pollutants in outdoor air. These methods are used by states and other organizations to assess implementation actions needed to attain National Ambient Air Quality Standards.
  • Fertilizer Emission Scenario Tool for CMAQ (FEST-C)
    FEST-C facilitates the definition and simulation of new cropland farm management system scenarios or editing of existing scenarios to drive Environmental Policy Integrated Climate model (EPIC) simulations.  For the standard 12km continental Community Multi-Scale Air Quality model (CMAQ) domain, this amounts to about 250,000 simulations for the U.S. alone. It also produces gridded daily EPIC weather input files from existing hourly Meteorology-Chemistry Interface Processor (MCIP) files, transforms EPIC output files to CMAQ-ready input files and links directly to Visual Environment for Rich Data Interpretation (VERDI) for spatial visualization of input and output files. The December 2012 release will perform all these functions for any CMAQ grid scale or domain.
  • Instruction Guide and Macro Analysis Tool for Community-led Air Monitoring 
    EPA has developed two tools for evaluating the performance of low-cost sensors and interpreting the data they collect to help citizen scientists, communities, and professionals learn about local air quality.
  • Integrated Climate and Land use Scenarios (ICLUS)
    Climate change and land-use change are global drivers of environmental change. Impact assessments frequently show that interactions between climate and land-use changes can create serious challenges for aquatic ecosystems, water quality, and air quality. Population projections to 2100 were used to model the distribution of new housing across the landscape. In addition, housing density was used to estimate changes in impervious surface cover.  A final report, datasets, the ICLUS+ Web Viewer and ArcGIS tools are available.
  • Indoor Semi-Volatile Organic Compound (i-SVOC)
    i-SVOC Version 1.0 is a general-purpose software application for dynamic modeling of the emission, transport, sorption, and distribution of semi-volatile organic compounds (SVOCs) in indoor environments. i-SVOC supports a variety of uses, including exposure assessment and the evaluation of mitigation options. SVOCs are a diverse group of organic chemicals that can be found in:

    • Pesticides;
    • Ingredients in cleaning agents and personal care products;
    • Additives to vinyl flooring, furniture, clothing, cookware, food packaging, and electronics.

    Many are also present in indoor air, where they tend to bind to interior surfaces and particulate matter (dust).
  • Municipal Solid Waste Decision Support Tool (MSW DST)
    This tool is designed to aid solid waste planners in evaluating the cost and environmental aspects of integrated municipal solid waste management strategies. The tool is the result of collaboration between EPA and RTI International and its partners.
  • Optical Noise-Reduction Averaging (ONA) Program Improves Black Carbon Particle Measurements Using Aethalometers
    ONA is a program that reduces noise in real-time black carbon data obtained using Aethalometers. Aethalometers optically measure the concentration of light-absorbing or “black” particles that accumulate on a filter as air flows through it. These particles are produced by incomplete combustion of fossil fuels, biofuels, and biomass. Under polluted conditions, they appear as smoke or haze.
  • RETIGO tool
    Real Time Geospatial Data Viewer (RETIGO) is a free, web-based tool that shows air quality data that are collected while in motion (walking, biking or in a vehicle). The tool helps users overcome technical barriers to exploring air quality data. After collecting measurements, citizen scientists and other users can import their own data and explore the data on a map.
  • Remote Sensing Information Gateway (RSIG)
    RSIG offers a new way for users to get the multi-terabyte, environmental datasets they want via an interactive, Web browser-based application. A file download and parsing process that now takes months will be reduced via RSIG to minutes.
  • Simulation Tool Kit for Indoor Air Quality and Inhalation Exposure (IAQX)
    IAQX version 1.1 is an indoor air quality (IAQ) simulation software package that complements and supplements existing IAQ simulation programs. IAQX is for advanced users who have experience with exposure estimation, pollution control, risk assessment, and risk management. There are many sources of indoor air pollution, such as building materials, furnishings, and chemical cleaners. Since most people spend a large portion of their time indoors, it is important to be able to estimate exposure to these pollutants. IAQX helps users analyze the impact of pollutant sources and sinks, ventilation, and air cleaners. It performs conventional IAQ simulations to calculate the pollutant concentration and/or personal exposure as a function of time. It can also estimate adequate ventilation rates based on user-provided air quality criteria, a unique feature useful for product stewardship and risk management.
  • Spatial Allocator
    The Spatial Allocator provides tools that could be used by the air quality modeling community to perform commonly needed spatial tasks without requiring the use of a commercial Geographic Information System (GIS).
  • Traceability Protocol for Assay and Certification of Gaseous Calibration Standards
    This is used to certify calibration gases for ambient and continuous emission monitors. It specifies methods for assaying gases and establishing traceability to National Institute of Standards and Technology (NIST) reference standards. Traceability is required under EPA ambient and continuous emission monitoring regulations.
  • Watershed Deposition Mapping Tool (WDT)
    WDT provides an easy to use tool for mapping the deposition estimates from CMAQ to watersheds to provide the linkage of air and water needed for TMDL (Total Maximum Daily Load) and related nonpoint-source watershed analyses.
  • Visual Environment for Rich Data Interpretation (VERDI)
    VERDI is a flexible, modular, Java-based program for visualizing multivariate gridded meteorology, emissions, and air quality modeling data created by environmental modeling systems such as CMAQ and the Weather Research and Forecasting (WRF) model.
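As referenced in the BMDS entry above, the core computation behind benchmark-dose tools is fitting a dose-response model to the data and inverting it at a chosen benchmark response. The Python sketch below illustrates that idea under stated assumptions (a Hill model, a 10% benchmark response, made-up data); it is not EPA's BMDS code, which supports many models and confidence-limit calculations.

```python
# Sketch of benchmark-dose estimation in the spirit of BMDS (illustrative
# only, not EPA's implementation): fit a Hill dose-response model, then
# solve for the dose producing a chosen benchmark response (BMR).
import numpy as np
from scipy.optimize import curve_fit

def hill(d, bottom, top, bmd50, n):
    """Four-parameter Hill model for a continuous endpoint."""
    return bottom + (top - bottom) * d**n / (bmd50**n + d**n)

# Hypothetical dose-response data (dose in mg/kg-day, mean response).
dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([1.0, 1.1, 1.4, 2.4, 3.4, 3.8])

# Bounds keep the exponent positive so the model stays defined at dose 0.
params, _ = curve_fit(hill, dose, resp, p0=[1.0, 4.0, 10.0, 1.0],
                      bounds=([0, 0, 1e-3, 0.1], [10, 10, 1e3, 10]))
bottom, top, bmd50, n = params

# Benchmark response: 10% of the fitted response range above background.
frac = 0.10
# Invert the Hill model analytically for the benchmark dose (BMD).
bmd = bmd50 * (frac / (1 - frac)) ** (1 / n)
print(f"BMD at 10% benchmark response: {bmd:.2f} mg/kg-day")
```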

 

Databases

  • Air Quality Data for the CDC National Environmental Public Health Tracking Network 
    EPA’s Exposure Research scientists are collaborating with the Centers for Disease Control and Prevention (CDC) on a CDC initiative to build a National Environmental Public Health Tracking (EPHT) network. Working with state, local, and federal air pollution and health agencies, the EPHT program is facilitating the collection, integration, analysis, interpretation, and dissemination of data from environmental hazard monitoring and from human exposure and health effects surveillance. These data provide scientific information to develop surveillance indicators and to investigate possible relationships between environmental exposures, chronic disease, and other diseases that can lead to interventions to reduce the burden of these illnesses. An important part of the initiative is air quality modeling estimates and air quality monitoring data, combined through Bayesian modeling, that can be linked with health outcome data.
  • EPAUS9R – An Energy Systems Database for use with the Market Allocation (MARKAL) Model
    The EPAUS9r is a regional database representation of the United States energy system. The database uses the MARKAL model. MARKAL is an energy system optimization model used by local and federal governments, national and international communities and academia. EPAUS9r represents energy supply, technology, and demand throughout the major sectors of the U.S. energy system.
  • Fused Air Quality Surfaces Using Downscaling
    This database provides access to the most recent O3 and PM2.5 surface datasets generated using downscaling.
  • Health & Environmental Research Online (HERO)
    HERO provides access to the scientific literature used to support EPA’s integrated science assessments, including the Integrated Science Assessments (ISA) that feed into the National Ambient Air Quality Standards (NAAQS) reviews.
  • SPECIATE 4.5 Database
    SPECIATE is a repository of volatile organic gas and particulate matter (PM) speciation profiles of air pollution sources.

A listing of EPA Tools and Databases for Water Contaminant Exposure Assessment

Exposure and Toxicity

  • EPA ExpoBox (A Toolbox for Exposure Assessors)
    This toolbox assists individuals from within government, industry, academia, and the general public with assessing exposure from multiple media, including water and sediment. It is a compendium of exposure assessment tools that links to guidance documents, databases, models, reference materials, and other related resources.

Chemical and Product Categories (CPCat) Database
CPCat is a database containing information mapping more than 43,000 chemicals to a set of terms categorizing their usage or function. The comprehensive list of chemicals with associated categories of chemical and product use was compiled from publicly available sources. Unique use-category taxonomies from each source are mapped onto a single common set of approximately 800 terms. Users can search for chemicals by chemical name, Chemical Abstracts Service Registry Number (CASRN), or by CPCat terms associated with chemicals.
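A tiny Python sketch of the lookup pattern the CPCat description implies (search by chemical name, CASRN, or use-category term) is below. The records and field names are invented for illustration, not the CPCat schema or API.

```python
# Toy sketch of CPCat-style lookups (illustrative only; data and field
# names are hypothetical): map chemicals to use categories and support
# search by name, CASRN, or category term.
records = [
    {"name": "Atrazine",    "casrn": "1912-24-9", "terms": {"herbicide", "pesticide"}},
    {"name": "Bisphenol A", "casrn": "80-05-7",   "terms": {"plastics", "resin"}},
    {"name": "Triclosan",   "casrn": "3380-34-5", "terms": {"antimicrobial", "personal_care"}},
]

def search(query: str):
    """Return records whose name, CASRN, or use-category terms match."""
    q = query.lower()
    return [r for r in records
            if q in r["name"].lower() or q == r["casrn"] or q in r["terms"]]

print([r["name"] for r in search("pesticide")])   # by use-category term
print([r["name"] for r in search("80-05-7")])     # by CASRN
```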

A listing of EPA Tools and Databases for Chemical Toxicity Prediction & Assessment

  • Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS)
    SeqAPASS is a fast, online screening tool that allows researchers and regulators to extrapolate toxicity information across species. For some species, such as humans, mice, rats, and zebrafish, the EPA has a large amount of data regarding their toxicological susceptibility to various chemicals. However, the toxicity data for numerous other plants and animals are very limited. SeqAPASS extrapolates from these data-rich model organisms to thousands of other non-target species to evaluate their potential chemical susceptibility.
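A toy Python sketch of the idea behind SeqAPASS follows: rank species by how similar their target-protein sequence is to that of a data-rich model organism, on the premise that a more conserved target suggests more similar susceptibility. The sequences here are invented, and difflib's similarity ratio is a crude stand-in for the alignment-based comparisons SeqAPASS actually performs.

```python
# Crude sketch of SeqAPASS-style cross-species extrapolation
# (illustrative only): rank species by similarity of a target protein
# to a data-rich model organism's sequence. SeqAPASS uses real sequence
# alignments; difflib's ratio is only a stand-in for this sketch.
from difflib import SequenceMatcher

# Hypothetical receptor fragments (not real sequences).
model_species = ("zebrafish", "MKTLLVAGGRWE")
others = {
    "fathead minnow": "MKTLLVAGGKWE",
    "daphnia":        "MKSILVTGGRFE",
    "earthworm":      "MASNLITCGQYE",
}

ref = model_species[1]
ranked = sorted(
    ((sp, SequenceMatcher(None, ref, seq).ratio()) for sp, seq in others.items()),
    key=lambda x: x[1], reverse=True,
)
for species, sim in ranked:
    print(f"{species:15s} similarity to {model_species[0]}: {sim:.2f}")
```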

 

A listing of EPA Webinars and Literature on Bioinformatic Tools and Projects

Comparative Bioinformatics Applications for Developmental Toxicology

Discusses how the US EPA/NCCT is trying to solve the problem of too many chemicals, too high cost, and too much biological uncertainty, and the solution the ToxCast program proposes: a data-rich system to screen, classify, and rank chemicals for further evaluation.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=186844

CHEMOINFORMATIC AND BIOINFORMATIC CHALLENGES AT THE US ENVIRONMENTAL PROTECTION AGENCY.

This presentation will provide an overview of both the scientific program and the regulatory activities related to computational toxicology.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=154013

How Can We Use Bioinformatics to Predict Which Agents Will Cause Birth Defects?

The availability of genomic sequences from a growing number of human and model organisms has provided an explosion of data, information, and knowledge regarding biological systems and disease processes. High-throughput technologies such as DNA and protein microarray biochips are now standard tools for probing the cellular state and determining important cellular behaviors at the genomic/proteomic levels. While these newer technologies are beginning to provide important information on cellular reactions to toxicant exposure (toxicogenomics), a major challenge that remains is the formulation of a strategy to integrate transcript, protein, metabolite, and toxicity data. This integration will require new concepts and tools in bioinformatics. The U.S. National Library of Medicine’s PubMed site includes 19 million citations and abstracts and continues to grow. The BDSM team is now working on assembling the literature’s unstructured data into a structured database and linking it to BDSM within a system that can then be used for testing and generating new hypotheses. This effort will generate databases of entities (such as genes, proteins, metabolites, and Gene Ontology processes) linked to PubMed identifiers/abstracts, providing information on the relationships between them. The end result will be an online/standalone tool that will help researchers focus on the papers most relevant to their query and uncover hidden connections and obvious information gaps.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=227345

ADVANCED PROTEOMICS AND BIOINFORMATICS TOOLS IN TOXICOLOGY RESEARCH: OVERCOMING CHALLENGES TO PROVIDE SIGNIFICANT RESULTS

This presentation specifically addresses the advantages and limitations of state-of-the-art gel, protein array, and peptide-based labeling proteomic approaches to assess the effects of a suite of model T4 inhibitors on the thyroid axis of Xenopus laevis.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NHEERL&dirEntryId=152823

Bioinformatic Integration of in vivo Data and Literature-based Gene Associations for Prioritization of Adverse Outcome Pathway Development

Adverse outcome pathways (AOPs) describe a sequence of events, beginning with a molecular initiating event (MIE), proceeding via key events (KEs), and culminating in an adverse outcome (AO). A challenge for use of AOPs in a safety evaluation context has been identification of MIEs and KEs relevant for AOs observed in regulatory toxicity studies. In this work, we implemented a bioinformatic approach that leverages mechanistic information in the literature and the AOs measured in regulatory toxicity studies to prioritize putative MIEs and/or early KEs for AOP development relevant to chemical safety evaluation. The US Environmental Protection Agency Toxicity Reference Database (ToxRefDB, v2.0) contains effect information for >1000 chemicals curated from >5000 studies or summaries from sources including data evaluation records from the US EPA Office of Pesticide Programs, the National Toxicology Program (NTP), peer-reviewed literature, and pharmaceutical preclinical studies. To increase ToxRefDB interoperability, endpoint and effect information were cross-referenced with codes from the Unified Medical Language System (UMLS), which enabled mapping of in vivo pathological effects from ToxRefDB to PubMed (via Medical Subject Headings, or MeSH). This enabled linkage to any resource that is also connected to PubMed or indexed with MeSH. A publicly available bioinformatic tool, the Entity-MeSH Co-occurrence Network (EMCON), uses multiple data sources and a measure of mutual information to identify the genes most related to a MeSH term. Using EMCON, gene sets were generated for endpoints of toxicological relevance in ToxRefDB, linking putative KEs and/or MIEs. The Comparative Toxicogenomics Database was used to further filter important associations. As a proof of concept, thyroid-related effects and their highly associated genes were examined, and demonstrated relevant MIEs and early KEs for AOPs describing thyroid-related AOs. The ToxRefDB-to-gene mapping for thyroid resulted in >50 unique gene-to-chemical relationships. Integrated use of EMCON and ToxRefDB data provides a basis for rapid and robust putative AOP development, as well as a novel means to generate mechanistic hypotheses for specific chemicals. This abstract does not necessarily reflect U.S. EPA policy. (Abstract and poster for the 2019 Society of Toxicology annual meeting, March 2019.)

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=344452
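Since the abstract above centers on EMCON's use of co-occurrence and mutual information to rank genes against a MeSH term, here is a minimal Python sketch of that kind of scoring. Everything in it (the toy corpus, the PMID sets, and the pointwise-mutual-information scoring) is hypothetical and is not EMCON's actual algorithm or data.

```python
# Sketch of co-occurrence scoring in the spirit of EMCON (illustrative
# only): score genes against a MeSH term by pointwise mutual information
# over PubMed IDs, where each gene and the MeSH term are each annotated
# with a set of PMIDs. All data below are hypothetical.
import math

corpus_size = 20                       # total PMIDs in the toy corpus
mesh_pmids = {1, 2, 3, 4, 5, 6}        # PMIDs tagged with the MeSH term
gene_pmids = {
    "TPO":   {1, 2, 3, 4, 9},
    "TSHR":  {2, 3, 5, 11, 12},
    "GAPDH": {7, 8, 9, 10, 13, 14},
}

def pmi(gene_set, term_set, n):
    """Pointwise mutual information between gene and term occurrence."""
    p_joint = len(gene_set & term_set) / n
    p_gene, p_term = len(gene_set) / n, len(term_set) / n
    return math.log2(p_joint / (p_gene * p_term)) if p_joint > 0 else float("-inf")

for gene, pmids in sorted(gene_pmids.items(),
                          key=lambda kv: pmi(kv[1], mesh_pmids, corpus_size),
                          reverse=True):
    print(f"{gene:6s} PMI = {pmi(pmids, mesh_pmids, corpus_size):+.2f}")
```

Genes that co-occur with the term more often than chance score high; genes that never co-occur drop to the bottom, which is the intuition behind using such scores to nominate putative MIEs and KEs.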


A Web-Hosted R Workflow to Simplify and Automate the Analysis of 16S NGS Data

Next-Generation Sequencing (NGS) produces large data sets that include tens of thousands of sequence reads per sample. For analysis of bacterial diversity, 16S NGS sequences are typically analyzed in a workflow containing best-of-breed bioinformatics packages that may leverage multiple programming languages (e.g., Python, R, Java, etc.). The process to transform raw NGS data into usable operational taxonomic units (OTUs) can be tedious due to the number of quality control (QC) steps used in QIIME and other software packages for sample processing. Therefore, the purpose of this work was to simplify the analysis of 16S NGS data from a large number of samples by integrating QC, demultiplexing, and QIIME (Quantitative Insights Into Microbial Ecology) analysis in an accessible R project. User command-line operations for each of the pipeline steps were automated into a workflow. In addition, the R server allows multi-user access to the automated pipeline via separate user accounts while providing access to the same large set of underlying data. We demonstrate the applicability of this pipeline automation using 16S NGS data from approximately 100 stormwater runoff samples collected in a mixed-land-use watershed in northeast Georgia. OTU tables were generated for each sample, and the relative taxonomic abundances were compared for different periods over storm hydrographs to determine how the microbial ecology of a stream changes with the rise and fall of stream stage. Our approach simplifies the pipeline analysis of multiple 16S NGS samples by automating multiple preprocessing, QC, analysis, and post-processing command-line steps that are called by a sequence of R scripts. (Presented at ASM 2015: Rapid NGS Bioinformatic Pipelines for Enhanced Molecular Epidemiologic Investigation of Pathogens.)

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=309890

Developing Computational Tools Necessary for Applying Toxicogenomics to Risk Assessment and Regulatory Decision Making

Genomics, proteomics, and metabolomics can provide useful weight-of-evidence data along the source-to-outcome continuum when appropriate bioinformatic and computational methods are applied toward integrating molecular, chemical, and toxicological information.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=156264

The Human Toxome Project

The Human Toxome project, funded as an NIH Transformative Research grant (2011–2016), is focused on developing the concepts and the means for deducing, validating, and sharing molecular Pathways of Toxicity (PoT). Using the test case of estrogenic endocrine disruption, the responses of MCF-7 human breast cancer cells are being phenotyped by transcriptomics and mass-spectrometry-based metabolomics. The bioinformatics tools for PoT deduction represent a core deliverable. A number of challenges for quality and standardization of cell systems, omics technologies, and bioinformatics are being addressed. In parallel, concepts for annotation, validation, and sharing of PoT information, as well as their link to adverse outcomes, are being developed. A reasonably comprehensive public database of PoT, the Human Toxome Knowledgebase, could become a point of reference for toxicological research and regulatory test strategies.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=309453

High-Resolution Metabolomics for Environmental Chemical Surveillance and Bioeffect Monitoring

Presented by Dean Jones, PhD, Department of Medicine, Emory University (2/28/2013).

https://www.epa.gov/chemical-research/high-resolution-metabolomics-environmental-chemical-surveillance-and-bioeffect

Identification of Absorption, Distribution, Metabolism, and Excretion (ADME) Genes Relevant to Steatosis Using a Gene Expression Approach

Absorption, distribution, metabolism, and excretion (ADME) impact chemical concentration and activation of molecular initiating events of Adverse Outcome Pathways (AOPs) in cellular, tissue, and organ-level targets. In order to better describe ADME parameters and how they modulate potential hazards posed by chemical exposure, our goal is to investigate the relationship between AOPs and ADME-related genes and functional information. Given the scope of this task, we began with hepatic steatosis as a case study. To identify ADME genes related to steatosis, we used the publicly available toxicogenomics database Open TG-GATEs. This database contains standardized rodent chemical exposure data for 170 chemicals (mostly drugs), along with differential gene expression data and associated pathological changes. We examined the microarray data set gathered from 9 chemical exposure treatments resulting in pathologically confirmed (minimal, moderate, and severe) incidences of hepatic steatosis. From this data set, we used differential expression analyses to identify gene changes resulting from the chemical exposures leading to hepatic steatosis. We then selected differentially expressed genes (DEGs) related to ADME by filtering all genes on their ADME functional identities. These DEGs include enzymes such as the cytochrome P450, UDP-glucuronosyltransferase, and flavin-containing monooxygenase families, and transporter genes such as the solute carrier and ATP-binding cassette transporter families. Up- and downregulated genes were identified across these treatments: a total of 61 genes were upregulated and 68 genes were downregulated in all treatments, while 25 genes were upregulated in some treatments and downregulated in others. This work highlights the application of bioinformatics in linking AOPs with gene modulation, specifically in relation to ADME and chemical exposures. This abstract does not necessarily reflect U.S. EPA policy. Specifically, we delineate a method to identify genes that are related to ADME and can impact target tissue dose in response to chemical exposures. The computational method outlined in this work is applicable to any adverse outcome pathway and provides a linkage between chemical exposure, target tissue dose, and adverse outcomes. Application of this method will allow for the rapid screening of chemicals for their impact on ADME-related genes using available gene databases in the literature.
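As a hedged illustration of the filtering step described in the abstract, the sketch below subsets a table of differential expression results to ADME gene families. The gene list, cutoffs, and table values are invented stand-ins, not the study's actual inputs or results.

```r
# Hypothetical ADME gene list and DEG table; a real analysis would use a
# curated ADME gene set and TG-GATEs-derived statistics.
adme_genes <- c("CYP1A1", "CYP3A4", "UGT1A1", "FMO3", "SLC22A1", "ABCB1")

deg_table <- data.frame(
  gene   = c("CYP3A4", "ALB", "ABCB1", "FMO3"),
  log2FC = c(1.8, -0.2, -2.1, 1.1),
  adj_p  = c(0.001, 0.40, 0.004, 0.02)
)

# Keep significant DEGs that belong to ADME families, then label direction
adme_degs <- subset(deg_table,
                    gene %in% adme_genes & adj_p < 0.05 & abs(log2FC) > 1)
adme_degs$direction <- ifelse(adme_degs$log2FC > 0, "up", "down")
adme_degs
```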

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NHEERL&dirEntryId=341273

Development of Environmental Fate and Metabolic Simulators

Presented at the Bioinformatics Open Source Conference (BOSC), Detroit, MI, June 23–24, 2005.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=257172

 

Useful Webinars on EPA Computational Tools and Informatics

 

Computational Toxicology Communities of Practice

Computational Toxicology Research

EPA’s Computational Toxicology Communities of Practice is composed of hundreds of stakeholders from over 50 public and private sector organizations (ranging from EPA, other federal agencies, industry, academic institutions, professional societies, nongovernmental organizations, environmental non-profit groups, state environmental agencies and more) who have an interest in using advances in computational toxicology and exposure science to evaluate the safety of chemicals.

The Communities of Practice is open to the public. Monthly webinars are held at EPA’s RTP campus, on the fourth Thursday of the month (occasionally rescheduled in November and December to accommodate holiday schedules), from 11am-Noon EST/EDT. Remote participation is available. For more information or to be added to the meeting email list, contact: Monica Linnenbrink (linnenbrink.monica@epa.gov).

Related Links

Past Webinar Presentations

Presentation File Presented By Date
OPEn structure-activity Relationship App (OPERA) (PowerPoint, video) Dr. Kamel Mansouri, Lead Computational Chemist, contractor for Integrated Laboratory Systems at the National Institute of Environmental Health Sciences 2019/4/25
CompTox Chemicals Dashboard and InVitroDB V3 (video) Dr. Antony Williams, Chemist in EPA’s National Center for Computational Toxicology, and Dr. Katie Paul-Friedman, Toxicologist in EPA’s National Center for Computational Toxicology 2019/3/28
The Systematic Empirical Evaluation of Models (SEEM) framework (video) Dr. John Wambaugh, Physical Scientist in EPA’s National Center for Computational Toxicology 2019/2/28
ToxValDB: A comprehensive database of quantitative in vivo study results from over 25,000 chemicals (video) Dr. Richard Judson, Research Chemist in EPA’s National Center for Computational Toxicology 2018/12/20
Sequence Alignment to Predict Across Species Susceptibility (seqAPASS) (video) Dr. Carlie LaLone, Bioinformaticist, EPA’s National Health and Environmental Effects Research Laboratory 2018/11/29
Chemicals and Products Database (video) Dr. Kathie Dionisio, Environmental Health Scientist, EPA’s National Exposure Research Laboratory 2018/10/25
CompTox Chemicals Dashboard V3 (video) Dr. Antony Williams, Chemist, EPA National Center for Computational Toxicology (NCCT) 2018/09/27
Generalised Read-Across (GenRA) (video) Dr. Grace Patlewicz, Chemist, EPA National Center for Computational Toxicology (NCCT) 2018/08/23
EPA’s ToxCast Owner’s Manual (video) Monica Linnenbrink, Strategic Outreach and Communication lead, EPA National Center for Computational Toxicology (NCCT) 2018/07/26
EPA’s Non-Targeted Analysis Collaborative Trial (ENTACT) (video) Elin Ulrich, Research Chemist in the Public Health Chemistry Branch, EPA National Exposure Research Laboratory (NERL) 2018/06/28
ECOTOX Knowledgebase: New Tools and Data Visualizations (video) Colleen Elonen, Translational Toxicology Branch, and Dr. Jennifer Olker, Systems Toxicology Branch, in the Mid-Continent Ecology Division of EPA’s National Health & Environmental Effects Research Laboratory (NHEERL) 2018/05/24
Investigating Chemical-Microbiota Interactions in Zebrafish (video) Tamara Tal, Biologist in the Systems Biology Branch, Integrated Systems Toxicology Division, EPA’s National Health & Environmental Effects Research Laboratory (NHEERL) 2018/04/26
The CompTox Chemistry Dashboard v2.6: Delivering Improved Access to Data and Real Time Predictions (video) Tony Williams, Computational Chemist, EPA’s National Center for Computational Toxicology (NCCT) 2018/03/29
mRNA Transfection Retrofits Cell-Based Assays with Xenobiotic Metabolism (video; audio starts at 10:17) Steve Simmons, Research Toxicologist, EPA’s National Center for Computational Toxicology (NCCT) 2018/02/22
Development and Distribution of ToxCast and Tox21 High-Throughput Chemical Screening Assay Method Description (video) Stacie Flood, National Student Services Contractor, EPA’s National Center for Computational Toxicology (NCCT) 2018/01/25
High-throughput H295R steroidogenesis assay: utility as an alternative and a statistical approach to characterize effects on steroidogenesis (video) Derik Haggard, ORISE Postdoctoral Fellow, EPA’s National Center for Computational Toxicology (NCCT) 2017/12/14
Systematic Review for Chemical Assessments: Core Elements and Considerations for Rapid Response (video) Kris Thayer, Director, Integrated Risk Information System (IRIS) Division of EPA’s National Center for Environmental Assessment (NCEA) 2017/11/16
High Throughput Transcriptomics (HTTr) Concentration-Response Screening in MCF7 Cells (video) Joshua Harrill, Toxicologist, EPA’s National Center for Computational Toxicology (NCCT) 2017/10/26
Learning Boolean Networks from ToxCast High-Content Imaging Data Todor Antonijevic, ORISE Postdoc, EPA’s National Center for Computational Toxicology (NCCT) 2017/09/28
Suspect Screening of Chemicals in Consumer Products Katherine Phillips, Research Chemist, Human Exposure and Dose Modeling Branch, Computational Exposure Division, EPA’s National Exposure Research Laboratory (NERL) 2017/08/31
The EPA CompTox Chemistry Dashboard: A Centralized Hub for Integrating Data for the Environmental Sciences (video) Antony Williams, Chemist, EPA’s National Center for Computational Toxicology (NCCT) 2017/07/27
Navigating Through the Minefield of Read-Across Tools and Frameworks: An Update on Generalized Read-Across (GenRA) (video)

 

Read Full Post »


Convergence of Biology, Medicine, and Computing: Biomedical Informatics Entrepreneurs Salon (BIES), HMS, 3/7/19, 4:30 – 6:30PM

 

REAL TIME Reporter: Aviva Lev-Ari, PhD, RN

 

The Biomedical Informatics Entrepreneurs Salon (BIES), which the department cohosts with Harvard University’s Office of Technology Development, meets monthly 4:30–6:30 pm in the Waterhouse Room of Gordon Hall at HMS to promote entrepreneurship at the convergence of biology, medicine, and computing. In addition to hearing industry leaders speak, participants will have the chance to look for collaborators, employees, advisors, customers, or investors. Bring your ideas, get some pizza!

BIES is open to everyone. Tickets are free but limited, and registration is required.

3/7/19 (Thursday)

Krishna Yeshwant, MD, MBA 
General Partner, GV

4:30-6:30pm

HMS – 25 Shattuck Street

(Waterhouse Room, Gordon Hall, 1st Floor)

Register

 

Featured speaker

Krishna Yeshwant, MD, MBA
General Partner, GV

Krishna Yeshwant

Dr. Krishna Yeshwant is a physician, programmer, and entrepreneur who has been working with GV since its inception. He first joined Google as part of the New Business Development team.

Prior to Google, Krishna helped start an electronic data interchange company that was acquired by Hewlett-Packard and a network security company that was acquired by Symantec.

Krishna has a B.S. in computer science from Stanford University. He also earned an M.D. from Harvard Medical School and an MBA from Harvard Business School, and completed his residency at Brigham and Women’s Hospital in Boston, Massachusetts, where he continues to practice.

SOURCE

http://dbmi.hms.harvard.edu/events/bies

Read Full Post »


Convergence of Biology, Medicine, and Computing: Biomedical Informatics Entrepreneurs Salon (BIES), HMS, 2/7/19, 4:30 – 6:30PM

 

Real Time Reporter: Aviva Lev-Ari, PhD, RN

 

The Biomedical Informatics Entrepreneurs Salon (BIES), which the department cohosts with Harvard University’s Office of Technology Development, meets monthly 4:30–6:30 pm in the Waterhouse Room of Gordon Hall at HMS to promote entrepreneurship at the convergence of biology, medicine, and computing. In addition to hearing industry leaders speak, participants will have the chance to look for collaborators, employees, advisors, customers, or investors. Bring your ideas, get some pizza!

BIES is open to everyone. Tickets are free but limited, and registration is required.

Join our mailing list to receive invitations to future events.

Upcoming

2/7/19 (Thursday)

4:30-6:30pm

25 Shattuck Street

(Waterhouse Room, Gordon Hall, 1st Floor)

Register

Featured speakers

Kurt Rohloff (Duality Technologies) and Alexander “Sasha” Gusev (Dana Farber Cancer Institute)

Rohloff (left) and Gusev (right)

Kurt Rohloff is the co-founder and CTO of Duality Technologies, a start-up commercializing encrypted computing technologies. He is also an associate professor of computer science at NJIT in Newark, NJ. He is a recipient of a DARPA Young Faculty Award and has been the PI on several DARPA and IARPA projects. Prior to co-founding Duality, he worked for 9 years in the US defense industry as a senior scientist in the Distributed Systems research group at Raytheon BBN Technologies. He maintains close ties to the US defense industry and has been involved with multiple DARPA projects that have transitioned technologies into operational use and programs of record. His areas of technical expertise include distributed information management, secure computing, and high-assurance software design. He is the co-founder of the PALISADE open-source lattice encryption library. He received his Bachelor’s degree in Electrical Engineering from Georgia Tech and his Master’s and PhD in Electrical Engineering from the University of Michigan.

Alexander “Sasha” Gusev is a member of the faculty at the Dana-Farber Cancer Institute and Harvard Medical School. His research focuses on developing statistical methods to understand the genetic architecture of complex disease, uncover biological mechanisms, and identify genetic predictors with prognostic value. He has authored multiple key papers focusing on genetic risk prediction, including analyses of hundreds of epigenomic annotations across common diseases to quantify the contribution of non-coding variation, as well as partitioning of prostate cancer heritability and risk prediction across African American and European populations. His recent work includes methods to predict molecular phenotypes (such as gene expression) into GWAS summary-level data, advancing the concept of a transcriptome-wide association study (TWAS). He received his BS in CS from the University of Connecticut and his MS and PhD from Columbia University.

 

SOURCE

http://dbmi.hms.harvard.edu/events/bies

Read Full Post »


Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care?

Reporter: Stephen J. Williams, Ph.D.

 

Boston Healthcare recently sponsored a webinar entitled “Role of Informatics in Precision Medicine: Implications for Innovators.” The webinar focused on the different informatics needs along the oncology care value chain, from drug discovery through clinicians, C-suite executives, and payers. The presentation, by Joseph Ferrara and Mark Girardi, discussed the specific informatics needs and deficiencies experienced by all players in oncology care and how innovators in this space could create value. The final part of the webinar discussed artificial intelligence and its role in cancer informatics.

 

Below is the mp4 video and audio for this webinar.  Notes on each of the slides with a few representative slides are also given below:

Please click below for the mp4 of the webinar:

  • worldwide oncology-related care to increase by 40% in 2020
  • big movement to participatory care: moving decision making to the patient, creating a need for information
  • cost components focused on clinical action
  • using informatics before the clinical stage might add value to the cost chain


Key unmet needs from perspectives of different players in oncology care where informatics may help in decision making

  1.   Needs of Clinicians

– informatic needs for clinical enrollment

– informatic needs for obtaining drug access/newer therapies

2.  Needs of C-suite/health system executives

– informatic needs to help focus on quality of care

– informatic needs to determine health outcomes/metrics

3.  Needs of Payers

– informatic needs to determine quality metrics and managing costs

– informatics needs to form guidelines

– informatics needs to determine if biomarkers are used consistently and properly

– population level data analytics

What kinds of value innovations do tech entrepreneurs need to create in this space? Two areas/problems need to be solved.

  • innovations in data depth and breadth
  • need to aggregate information to inform intervention

Different players in value chains have different data needs

Data Depth: Cumulative Understanding of disease

Data Breadth: Cumulative number of oncology transactions

  • technology innovators rely on LEGACY businesses (those that already have technology), and these LEGACY businesses either have data breadth or data depth BUT NOT BOTH (IS THIS WHERE THE GREATEST VALUE CAN BE INNOVATED?)
  • NEED to provide ACTIONABLE as well as PHENOTYPIC/GENOTYPIC DATA
  • data depth is more important in the clinical setting, as it drives solutions and cost-effective interventions. For example, Foundation Medicine, which supplies genotypic/phenotypic data for patient samples, provides high data depth
  • technologies are moving to data support
  • evidence will need to be tied to umbrella value propositions
  • Informatic solutions will have to prove outcome benefit

How will Machine Learning be involved in the healthcare value chain?

  • increased emphasis on real-time datasets – CONSTANT UPDATES NEED TO OCCUR; THIS IS NOT HAPPENING BUT IS VALUED BY MANY PLAYERS IN THIS SPACE
  • interoperability of DATABASES is important! Many players in this space don’t understand the complexities of integrating these datasets

Other Articles on this topic of healthcare informatics, value based oncology, and healthcare IT on this OPEN ACCESS JOURNAL include:

Centers for Medicare & Medicaid Services announced that the federal healthcare program will cover the costs of cancer gene tests that have been approved by the Food and Drug Administration

Broad Institute launches Merkin Institute for Transformative Technologies in Healthcare

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Paradoxical Findings in HealthCare Delivery and Outcomes: Economics in MEDICINE – Original Research by Anupam “Bapu” Jena, the Ruth L. Newhouse Associate Professor of Health Care Policy at HMS

Google & Digital Healthcare Technology

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

The Future of Precision Cancer Medicine, Inaugural Symposium, MIT Center for Precision Cancer Medicine, December 13, 2018, 8AM-6PM, 50 Memorial Drive, Cambridge, MA

Live Conference Coverage @Medcity Converge 2018 Philadelphia: Oncology Value Based Care and Patient Management

2016 BioIT World: Track 5 – April 5 – 7, 2016 Bioinformatics Computational Resources and Tools to Turn Big Data into Smart Data

The Need for an Informatics Solution in Translational Medicine

Read Full Post »


Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

Updated 12/18/2018

In efforts to reduce healthcare costs, increase accessibility of services for patients, and drive biomedical innovation, many healthcare and biotechnology professionals have looked to advances in digital technology to determine how IT can drive and extract greater value from the healthcare industry. Two areas of recent interest focus on how best to use blockchain and artificial intelligence technologies to drive greater efficiencies in the healthcare and biotechnology industries.

More importantly, with the substantial increase in ‘omic data generated both in research and in the clinical setting, it has become imperative to develop ways to securely store and disseminate massive amounts of ‘omic data to the relevant parties (researchers or clinicians) in an efficient manner, while protecting personal privacy and adhering to international regulations. This is where blockchain technologies may play an important role.

A recent Oncotarget paper by Mamoshina et al. (1) discussed the possibility that next-generation artificial intelligence and blockchain technologies could synergize to accelerate biomedical research, give patients new tools to control and profit from their personal healthcare data, and assist patients with their healthcare monitoring needs. According to the abstract:

The authors introduce new concepts to appraise and evaluate personal records, including the combination, time, and relationship value of the data. They also present a roadmap for a blockchain-enabled decentralized personal health data ecosystem to enable novel approaches for drug discovery, biomarker development, and preventative healthcare. In this system, blockchain and deep learning technologies would provide the secure and transparent distribution of personal data in a healthcare marketplace, and would also be useful to resolve challenges faced by regulators and return control over personal data, including medical records, to the individual.

The review discusses:

  1. Recent achievements in next-generation artificial intelligence
  2. Basic concepts of highly distributed storage systems (HDSS) as a preferred method for medical data storage
  3. Open source blockchain Exonum and its application to a healthcare marketplace
  4. A blockchain-based platform allowing patients to have control of their data and manage access
  5. How advances in deep learning can improve data quality, especially in an era of big data

Advances in Artificial Intelligence

  • Integrative analysis of the vast amount of health-associated data from a multitude of large-scale global projects has proven highly problematic (REF 27), as high-quality biomedical data is highly complex and heterogeneous, necessitating special preprocessing and analysis.
  • Increased computing power and algorithmic advances have led to significant progress in machine learning, especially machine learning involving Deep Neural Networks (DNNs), which are able to capture high-level dependencies in healthcare data. Some examples of the uses of DNNs are:
  1. Prediction of drug properties (2, 3) and toxicities (4)
  2. Biomarker development (5)
  3. Cancer diagnosis (6)
  4. The first FDA-approved system based on deep learning, Arterys Cardio DL
  • Other promising systems of deep learning include:
    • Generative Adversarial Networks (GANs) (https://arxiv.org/abs/1406.2661): require good datasets for extensive training but have been used to determine the tumor growth inhibition capabilities of various molecules (7)
    • Recurrent Neural Networks (RNNs): originally made for sequence analysis, RNNs have proved useful in analyzing text and time-series data, and thus would be very useful for electronic record analysis. They have also been used to predict blood glucose levels of Type 1 diabetic patients from continuous glucose monitoring data (8); a minimal sketch follows this list
    • Transfer Learning: focused on translating information learned on one domain or larger dataset to another, smaller domain, reducing the dependence on the large training datasets that RNNs, GANs, and DNNs require. Biomedical imaging datasets are one example of its use.
    • One- and Zero-Shot Learning: like transfer learning, these retain the ability to work with restricted datasets. One-shot learning aims to recognize new data points based on a few examples from the training set, while zero-shot learning aims to recognize new object classes without seeing any examples of those classes within the training set.
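As a hedged illustration of the RNN bullet above, the sketch below fits a small LSTM that maps a 24-step window of past glucose readings to the next reading. The architecture, window length, and data are invented stand-ins, not those of reference (8), and running it requires the keras R package with a configured backend.

```r
# Illustrative LSTM regression on toy data (random numbers standing in
# for continuous glucose monitoring windows).
library(keras)

x <- array(runif(100 * 24), dim = c(100, 24, 1))  # 100 windows of 24 readings
y <- runif(100)                                   # the next reading for each

model <- keras_model_sequential() %>%
  layer_lstm(units = 32, input_shape = c(24, 1)) %>%  # sequence encoder
  layer_dense(units = 1)                              # next-value regression

model %>% compile(loss = "mse", optimizer = "adam")
model %>% fit(x, y, epochs = 5, batch_size = 16, verbose = 0)

# Predict the next reading for one new 24-step window
predict(model, array(runif(24), dim = c(1, 24, 1)))
```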

Highly Distributed Storage Systems (HDSS)

The explosion in data generation has necessitated the development of better systems for data storage and handling. HDSS systems need to be reliable, accessible, scalable, and affordable. This involves storing data across different nodes, with the data replicated between nodes so that access is rapid; however, data consistency and affordability remain big challenges.

Blockchain is a distributed database used to maintain a growing list of records, in which records are grouped into blocks locked together by a cryptographic algorithm to maintain consistency of the data. Each block contains a timestamp and a link (a hash) to the previous block in the chain. Blockchain is a distributed ledger of blocks, meaning it is owned, shared, and accessible by everyone, which allows a verifiable, secure, and consistent history of a record of events.
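To make the hash-linking just described concrete, below is a minimal toy sketch of a block structure. It is illustrative only: there is no network, consensus, or mining here; it assumes the digest package for SHA-256 hashing, and the record contents are invented.

```r
# Toy hash chain (not a real distributed ledger): each block stores a
# timestamp, a record, and the hash of the previous block.
library(digest)

new_block <- function(record, prev_hash) {
  block <- list(timestamp = as.numeric(Sys.time()),
                record    = record,
                prev_hash = prev_hash)
  block$hash <- digest(block, algo = "sha256")  # hash covers all fields above
  block
}

genesis <- new_block("genesis", prev_hash = "0")
b1 <- new_block("patient consent granted", genesis$hash)
b2 <- new_block("genomic file accessed",   b1$hash)

# Tampering with b1$record changes its recomputed hash, which no longer
# matches b2$prev_hash; that mismatch is what makes the history verifiable.
```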

Data Privacy and Regulatory Issues

The establishment of the Health Insurance Portability and Accountability Act (HIPAA) in 1996 has provided much-needed regulatory guidance and framework for clinicians and all concerned parties within the healthcare and health data chain. The HIPAA act has already provided guidance for the latest technologies impacting healthcare, most notably the use of social media and mobile communications (discussed in this article: Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification). The advent of blockchain technology in healthcare presents its own unique challenges; however, HIPAA offers a basis for developing a regulatory framework in this regard. The special standards regarding electronic data transfer are explained in HIPAA’s Privacy Rule, which regulates how certain entities (covered entities) use and disclose individually identifiable health information (Protected Health Information, or PHI), and protects the transfer of such information over any medium or electronic data format. However, some of the benefits of blockchain that may revolutionize the healthcare system may be in direct contradiction with HIPAA rules, as outlined below:

Issues of Privacy Specific to the Use of Blockchain to Distribute Health Data

  • Blockchain was designed as a distributed, decentralized database maintained by multiple independent parties
  • Linkage timestamping: although useful for time-dependent data, proof that third parties have not interfered in the process would have to be established, including accountability measures
  • Blockchain uses a consensus algorithm, even though end users may have their own private keys
  • Applied cryptography measures and routines are used to decentralize authentication (publicly available)
  • Blockchain users are divided into three main categories: 1) maintainers of the blockchain infrastructure, 2) external auditors who store a replica of the blockchain, and 3) end users or clients, who may have access to a relatively small portion of a blockchain but whose software may use cryptographic proofs to verify the authenticity of data.

YouTube video on How #Blockchain Will Transform Healthcare in 25 Years (please click below)

In Big Data for Better Outcomes, BigData@Heart, DO->IT, EHDN, the EU data Consortia, and yes, even concepts like pay for performance, Richard Bergström has had a hand in their creation. The former Director General of EFPIA, and now the head of health both at SICPA and their joint venture blockchain company Guardtime, Richard is always ahead of the curve. In fact, he’s usually the one who makes the curve in the first place.

Please click on the following link for a podcast on Big Data, Blockchain and Pharma/Healthcare by Richard Bergström:

References

  1. Mamoshina, P., Ojomoko, L., Yanovich, Y., Ostrovski, A., Botezatu, A., Prikhodko, P., Izumchenko, E., Aliper, A., Romantsov, K., Zhebrak, A., Ogu, I. O., and Zhavoronkov, A. (2018) Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare, Oncotarget 9, 5665-5690.
  2. Aliper, A., Plis, S., Artemov, A., Ulloa, A., Mamoshina, P., and Zhavoronkov, A. (2016) Deep Learning Applications for Predicting Pharmacological Properties of Drugs and Drug Repurposing Using Transcriptomic Data, Molecular pharmaceutics 13, 2524-2530.
  3. Wen, M., Zhang, Z., Niu, S., Sha, H., Yang, R., Yun, Y., and Lu, H. (2017) Deep-Learning-Based Drug-Target Interaction Prediction, Journal of proteome research 16, 1401-1409.
  4. Gao, M., Igata, H., Takeuchi, A., Sato, K., and Ikegaya, Y. (2017) Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds, Journal of pharmacological sciences 133, 70-78.
  5. Putin, E., Mamoshina, P., Aliper, A., Korzinkin, M., Moskalev, A., Kolosov, A., Ostrovskiy, A., Cantor, C., Vijg, J., and Zhavoronkov, A. (2016) Deep biomarkers of human aging: Application of deep neural networks to biomarker development, Aging 8, 1021-1033.
  6. Vandenberghe, M. E., Scott, M. L., Scorer, P. W., Soderberg, M., Balcerzak, D., and Barker, C. (2017) Relevance of deep learning to facilitate the diagnosis of HER2 status in breast cancer, Scientific reports 7, 45938.
  7. Kadurin, A., Nikolenko, S., Khrabrov, K., Aliper, A., and Zhavoronkov, A. (2017) druGAN: An Advanced Generative Adversarial Autoencoder Model for de Novo Generation of New Molecules with Desired Molecular Properties in Silico, Molecular pharmaceutics 14, 3098-3104.
  8. Ordonez, F. J., and Roggen, D. (2016) Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition, Sensors (Basel) 16.

Articles from clinicalinformaticsnews.com

Healthcare Organizations Form Synaptic Health Alliance, Explore Blockchain’s Impact On Data Quality

From http://www.clinicalinformaticsnews.com/2018/12/05/healthcare-organizations-form-synaptic-health-alliance-explore-blockchains-impact-on-data-quality.aspx

By Benjamin Ross

December 5, 2018 | The boom of blockchain and distributed ledger technologies has inspired healthcare organizations to test the capabilities of their data. Quest Diagnostics, in partnership with Humana, MultiPlan, and UnitedHealth Group’s Optum and UnitedHealthcare, has launched a pilot program that applies blockchain technology to improve data quality and reduce administrative costs associated with changes to healthcare provider demographic data.

The collective body, called Synaptic Health Alliance, explores how blockchain can keep only the most current healthcare provider information available in health plan provider directories. The alliance plans to share their progress in the first half of 2019.

Providing consumers looking for care with accurate information when they need it is essential to a high-functioning overall healthcare system, Jason O’Meara, Senior Director of Architecture at Quest Diagnostics, told Clinical Informatics News in an email interview.

“We were intentional about calling ourselves an alliance as it speaks to the shared interest in improving health care through better, collaborative use of an innovative technology,” O’Meara wrote. “Our large collective dataset and national footprints enable us to prove the value of data sharing across company lines, which has been limited in healthcare to date.”

O’Meara said Quest Diagnostics has been investing time and resources the past year or two in understanding blockchain, its ability to drive purpose within the healthcare industry, and how to leverage it for business value.

“Many health care and life science organizations have cast an eye toward blockchain’s potential to inform their digital strategies,” O’Meara said. “We recognize it takes time to learn how to leverage a new technology. We started exploring the technology in early 2017, but we quickly recognized the technology’s value is in its application to business to business use cases: to help transparently share information, automate mutually-beneficial processes and audit interactions.”

Quest began discussing the potential for an alliance with the four other companies a year ago, O’Meara said. Each company shared traits that would allow them to prove the value of data sharing across company lines.

“While we have different perspectives, each member has deep expertise in healthcare technology, a collaborative culture, and desire to continuously improve the patient/customer experience,” said O’Meara. “We also recognize the value of technology in driving efficiencies and quality.”

Following its initial launch in April, Synaptic Health Alliance is deploying a multi-company, multi-site, permissioned blockchain. According to a whitepaper published by Synaptic Health, the choice to use a permissioned blockchain rather than an anonymous one is crucial to the alliance’s success.

“This is a more effective approach, consistent with enterprise blockchains,” an alliance representative wrote. “Each Alliance member has the flexibility to deploy its nodes based on its enterprise requirements. Some members have elected to deploy their nodes within their own data centers, while others are using secured public cloud services such as AWS and Azure. This level of flexibility is key to growing the Alliance blockchain network.”

As the pilot moves forward, O’Meara says the Alliance plans to open participation to other organizations. Earlier this week, Aetna and Ascension announced they had joined the project.

“I am personally excited by the amount of cross-company collaboration facilitated by this project,” O’Meara says. “We have already learned so much from each other and are using that knowledge to really move the needle on improving healthcare.”

 

US Health And Human Services Looks To Blockchain To Manage Unstructured Data

http://www.clinicalinformaticsnews.com/2018/11/29/us-health-and-human-services-looks-to-blockchain-to-manage-unstructured-data.aspx

By Benjamin Ross

November 29, 2018 | The US Department of Health and Human Services (HHS) is making waves in the blockchain space. The agency’s Division of Acquisition (DA) has developed a new system, called Accelerate, which gives acquisition teams detailed information on pricing, terms, and conditions across HHS in real time. The department’s Associate Deputy Assistant Secretary for Acquisition, Jose Arrieta, gave a presentation and live demo of the blockchain-enabled system at the Distributed: Health event earlier this month in Nashville, Tennessee.

Accelerate is still in the prototype phase, Arrieta said, with hopes that the new system will be deployed at the end of the fiscal year.

HHS spends around $25 billion a year in contracts, Arrieta said. That’s 100,000 contracts a year with over one million pages of unstructured data managed through 45 different systems. Arrieta and his team wanted to modernize the system.

“But if you’re going to change the way a workforce of 20,000 people do business, you have to think your way through how you’re going to do that,” said Arrieta. “We didn’t disrupt the existing systems: we cannibalized them.”

The cannibalization process resulted in Accelerate. According to Arrieta, the system functions by creating a record of data rather than storing it, leveraging machine learning, artificial intelligence (AI), and robotic process automation (RPA), all through blockchain data.

“We’re using that data record as a mechanism to redesign the way we deliver services through micro-services strategies,” Arrieta said. “Why is that important? Because if you have a single application or data use that interfaces with 55 other applications in your business network, it becomes very expensive to make changes to one of the 55 applications.”

Accelerate distributes the data to the workforce, making it available to them one business process at a time.

“We’re building those business processes without disrupting the existing systems,” said Arrieta, and that’s key. “We’re not shutting off those systems. We’re using human-centered design sessions to rebuild value exchange off of that data.”

The first application for the system, Arrieta said, can be compared to department stores price-matching their online competitors.

It takes HHS close to a month to collect the amalgamation of data from existing systems, whether that be terms and conditions that drive certain price points, or software licenses.

“The micro-service we built actually analyzes that data, and provides that information to you within one second,” said Arrieta. “This is distributed to the workforce, to the 5,000 people that do the contracting, to the 15,000 people that actually run the programs at [HHS].”

This simple micro-service is replicated on every node related to HHS’s internal workforce. If somebody wants to change the algorithm to fit their needs, they can do that in a distributed manner.

Arrieta hopes to use Accelerate to save researchers money at the point of purchase. The program uses blockchain to simplify the process of acquisition.

“How many of you work with the federal government?” Arrieta asked the audience. “Do you get sick of reentering the same information over and over again? Every single business opportunity you apply for, you have to resubmit your financial information. You constantly have to check for validation and verification, constantly have to resubmit capabilities.”

Wouldn’t it be better to have historical notes available for each transaction? asked Arrieta. This would allow clinical researchers to focus on “the things they’re really good at,” instead of red tape.

“If we had the top cancer researcher in the world, would you really want her spending her time learning about federal regulations as to how to spend money, or do you want her trying to solve cancer?” Arrieta said. “What we’re doing is providing that data to the individual in a distributed manner so they can read the information of historical purchases that support activity, and they can focus on the objectives and risks they see as it relates to their programming and their objectives.”

Blockchain also creates transparency among researchers, Arrieta said, which he says creates an “uncomfortable reality”: they have to make decisions regarding data, fundamentally changing value exchange.

“The beauty of our business model is internal investment,” Arrieta said. For instance, HHS could take all the sepsis data that exists in its systems, put it into a distributed ledger, and share it with an external source.

“Maybe that could fuel partnership,” Arrieta said. “I can make data available to researchers in the field in real-time so they can actually test their hypothesis, test their intuition, and test their imagination as it relates to solving real-world problems.”

 

Shivom is creating a genomic data hub to elongate human life with AI

From VentureBeat.com
Blockchain-based genomic data hub platform Shivom recently reached its $35 million hard cap within 15 seconds of opening its main token sale. Shivom received funding from a number of crypto VC funds, including Collinstar, Lateral, and Ironside.

The goal is to create the world’s largest store of genomic data while offering an open web marketplace for patients, data donors, and providers — such as pharmaceutical companies, research organizations, governments, patient-support groups, and insurance companies.

“Disrupting the whole of the health care system as we know it has to be the most exciting use of such large DNA datasets,” Shivom CEO Henry Ines told me. “We’ll be able to stratify patients for better clinical trials, which will help to advance research in precision medicine. This means we will have the ability to make a specific drug for a specific patient based on their DNA markers. And what with the cost of DNA sequencing getting cheaper by the minute, we’ll also be able to sequence individuals sooner, so young children or even newborn babies could be sequenced from birth and treated right away.”

While there are many solutions examining DNA data to explain heritage, intellectual capabilities, health, and fitness, the potential of genomic data has largely yet to be unlocked. A few companies hold the monopoly on genomic data and make sizeable profits from selling it to third parties, usually without sharing the earnings with the data donor. Donors are also not informed if and when their information is shared, nor do they have any guarantee that their data is secure from hackers.

Shivom wants to change that by creating a decentralized platform that will break these monopolies, democratizing the processes of sharing and utilizing the data.

“Overall, large DNA datasets will have the potential to aid in the understanding, prevention, diagnosis, and treatment of every disease known to mankind, and could create a future where no diseases exist, or those that do can be cured very easily and quickly,” Ines said. “Imagine that, a world where people do not get sick or are already aware of what future diseases they could fall prey to and so can easily prevent them.”

Shivom’s use of blockchain technology and smart contracts ensures that all genomic data shared on the platform will remain anonymous and secure, while its OmiX token incentivizes users to share their data for monetary gain.

Rise in Population Genomics: Local Government in India Will Use Blockchain to Secure Genetic Data

Blockchain will secure the DNA database of 50 million citizens in the eighth-largest state in India. The government of Andhra Pradesh signed a Memorandum of Understanding with Shivom, a German genomics and precision medicine start-up, which announced it will start the pilot project soon. The move falls in line with a trend of governments turning to population genomics while securing the sensitive data through blockchain.

Andhra Pradesh, DNA, and blockchain

Storing sensitive genetic information safely and securely is a big challenge. Shivom is building a genomic data hub powered by blockchain technology. It aims to connect researchers with DNA data donors, thus facilitating medical research and the healthcare industry.

With regard to Andhra Pradesh, the start-up will first launch a trial to determine the viability of its technology for moving from a proactive to a preventive approach in medicine, and towards precision health. “Our partnership with Shivom explores the possibilities of providing an efficient way of diagnostic services to patients of Andhra Pradesh by maintaining the privacy of the individual data through blockchain technologies,” said J A Chowdary, IT Advisor to the Chief Minister, Government of Andhra Pradesh.

Other Articles in this Open Access Journal on Digital Health include:

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

 

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

 

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

 

Medcity Converge 2018 Philadelphia: Live Coverage @pharma_BI

 

Digital Health Breakthrough Business Models, June 5, 2018 @BIOConvention, Boston, BCEC

Read Full Post »

Older Posts »