
Archive for the ‘BioIT: BioInformatics’ Category


MinneBOS 2019, Field Guide to Data Science & Emerging Tech in the Boston Community

August 22, 2019, 8AM to 5PM at Boston University Questrom School of Business, 595 Commonwealth Avenue, Boston, MA

 

 

MinneBOS – Boston’s Field Guide to Data Science & Emerging Tech

Announcement

Leaders in Pharmaceutical Business Intelligence (LPBI) Group

 

REAL TIME Press Coverage for

 http://pharmaceuticalintelligence.com 

by

 Aviva Lev-Ari, PhD, RN

Director & Founder, Leaders in Pharmaceutical Business Intelligence (LPBI) Group, Boston

Editor-in-Chief, Open Access Online Scientific Journal, http://pharmaceuticalintelligence.com

Editor-in-Chief, BioMed e-Series, 16 Volumes in Medicine, https://pharmaceuticalintelligence.com/biomed-e-books/

@pharma_BI

@AVIVA1950

#MinneBOS

 

Logo, Leaders in Pharmaceutical Business Intelligence (LPBI) Group, Boston

Our BioMed e-series

WE ARE ON AMAZON.COM

 

https://lnkd.in/ekWGNqA

 

UPDATED AGENDA

Thursday, August 22 • 9:30am – 10:15am
Histopathological images are the gold-standard tool for cancer diagnosis; their interpretation requires manual inspection by expert pathologists. This process is time-consuming for patients and subject to human error. Recent advances in deep learning models, particularly convolutional neural networks, combined with big databases of patient histopathology images, will pave the way for cancer researchers to create more accurate guiding tools for pathologists. In this talk, I will review the latest advances of big data in healthcare analytics and focus on deep learning applications in cancer research. Targeted at a general audience, I will provide a high-level overview of technical concepts in deep learning image analysis and describe a typical cloud-based workflow for tackling such big data problems. I will conclude my talk by sharing some of our most recent results across a wide range of cancer types.

Speakers


Mohammad Soltanieh-ha, PhD

Clinical Assistant Professor, Boston University – Questrom
Mohammad is a faculty member at Boston University's Questrom School of Business, where he teaches data analytics and big data to master's students. His current research involves deep learning and its applications in cancer research.

10:15am

10:30am

Thursday, August 22 • 10:30am – 11:00am

Deep learning image recognition and classification models for fashion items

Large-scale image recognition and classification is an interesting and challenging problem. This case study uses the Fashion-MNIST dataset, which comprises 60,000 training images and 10,000 testing images. Several popular deep learning models are explored in this study to arrive at a suitable model with high accuracy. Although convolutional neural networks have emerged as a gold standard for image recognition and classification problems due to their speed and accuracy advantages, arriving at an optimal model, and making the many choices involved in specifying a model architecture, is still a challenging task. This case study provides best practices and interesting insights.

Speakers


Bharatendra Rai

Professor, UMass Dartmouth
Bharatendra Rai, Ph.D. is Professor of Business Analytics in the Charlton College of Business at UMass Dartmouth. His research interests include machine learning & deep learning applications.
  • Train data: 60,000 images
  • Test data: 10,000 images
  • Dataset available from Google's Fashion-MNIST repository; the data are already labelled
  • Each label has an associated description
  • Architecture (see the sketch after this list): Input → Conv → Conv → Pooling → Dropout → Flatten → Dense → Dropout → Output
  • CNN vs fully connected: the first 3×3 conv layer with 1 input channel and 32 filters has 3×3×1×32 = 288 weights + 32 bias terms = 320 parameters, vs roughly 16 million parameters for a comparable fully connected network
  • Train the model: 15 iterations of training and validation
  • Actual vs predicted: 94% classified correctly (accuracy: 94%); 5,974 vs 4,700 (78%)
  • Confusion matrix – on test data, 720 images correctly classified for item 6 – probability vs actual vs predicted
  • Image generation: noise → generator network → fake image vs real image – GAN loss vs discriminator loss
  • A CNN helps reduce the number of parameters
  • Dropout layers can help reduce overfitting
  • A validation split of x% chooses the last x% of the training data
  • Generating new data is challenging
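Based on these notes, a minimal sketch of this architecture in Keras might look like the following (assumptions: TensorFlow 2.x; the second conv layer's filter count, the dense-layer width, and the dropout rates are illustrative, since the notes only pin down the first layer's 320 parameters and the 15 training iterations):

import tensorflow as tf
from tensorflow.keras import layers, models

# Fashion-MNIST: 60,000 training and 10,000 test images, already labelled.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0   # (60000, 28, 28, 1), scaled to [0, 1]
x_test = x_test[..., None] / 255.0     # (10000, 28, 28, 1)

# Input -> Conv -> Conv -> Pooling -> Dropout -> Flatten -> Dense -> Dropout -> Output
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # 3*3*1*32 + 32 = 320 params
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                      # dropout to reduce overfitting
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),    # 10 Fashion-MNIST classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 15 iterations; validation_split holds out the LAST 20% of the training data,
# matching the note above about how the validation split is chosen.
model.fit(x_train, y_train, epochs=15, validation_split=0.2)
model.evaluate(x_test, y_test)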

11:00am

11:15am

Thursday, August 22 • 11:15am – 12:00pm

Rapid Data Science

Most companies today require fast, traceable, and actionable answers to their data questions. This talk will present the structure of the data science process along with cutting edge developments in computing and data science technology (DST) with direct applications to real world problems (with a lot of pictures!). Everything from modeling to team building will be discussed, with clear business applications.

Speakers


Erez Kaminski

Leaders for Global Operations Fellow, MIT
Erez has spent his career helping companies solve problems using data science. He is currently a graduate student in computer science and business at MIT. Previously, he worked in data science at Amgen Inc. and as a technologist at Wolfram Research.

12:00pm

1:00pm

Thursday, August 22 • 1:00pm – 1:45pm

Health and Healthcare Data Visualization – See how you’re doing

Health and healthcare organizations are swimming in data, but few have the skills to show and see the story in their data using the best practices of data visualization. This presentation raises awareness about the research that informs these best practices, with stories from the front lines of groups who are embracing them and re-imagining how they display their data and information. These groups include the NYC Dept of Health & Mental Hygiene, the Centers for Medicare & Medicaid Services (CMS), and leading medical centers and providers across the country.

Speakers


Katherine Rowell

Co-Founder & Principal, Health Data Viz
Katherine Rowell is a health, healthcare, and data visualization expert. She is Co-founder and Principal of HealthDataViz, a Boston firm that specializes in helping healthcare organizations organize, design and present visual displays of data to inform their decisions and stimulate…
  • dashboard for Hospital CEOs

1:45pm

2:00pm

Thursday, August 22 • 2:00pm – 2:45pm

AI in Healthcare

Benefits, challenges and impact of AI and Cybersecurity on medicine.

Speakers


Vinit Nijhawan

Lecturer, Boston University
Vinit Nijhawan is an Entrepreneur, Academic, and Board Member with a track record of success, including 4 startups in 20 years.
  • US: spends the most on Health Care (HC), yet deaths per 100K people are the highest
  • Eric Topol – diagnosis is often not done correctly; AI will help with diagnosis
  • Diagnosis is where AI will have the most impact; viral infections are misdiagnosed as bacterial infections and treated with antibiotics
  • Image classification by ML – error rates have declined below human misclassification rates
  • Training data sets – big data
  • Algorithms are getting better
  • Data capture is getting better – in HC as well
  • Investment in HC is the greatest
  • SECURITY of implantable medical devices – security attacks: hacking and sending signals to implantable devices

2:45pm

3:00pm


Thursday, August 22 • 3:00pm – 3:30pm

Patient centric AI: Saving lives with ML driven hospital interventions

This presentation will cover the use of machine learning for maximizing the impact of a hospital readmissions intervention program. With machine learning, clinical care teams can identify and focus their intervention efforts on patients with the highest risk of readmission. The talk will go over the goals, logistics, and considerations for defining, implementing, and measuring our ML driven intervention program. While covering some technical details, this presentation will focus on the business implementation of advanced technology for helping people live healthier lives.
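As a hedged sketch of the core targeting logic described above (not Optum's actual pipeline; the synthetic features, model choice, and a care-team capacity of 100 patients are all assumptions):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                           # stand-in for patient features
y = (X[:, 0] + rng.normal(size=5000) > 1.5).astype(int)   # stand-in readmission label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]   # estimated readmission risk per patient
top_k = np.argsort(risk)[::-1][:100]       # focus intervention on the 100 highest-risk patients
print("patients flagged for intervention:", top_k[:10])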

Speakers


Miguel Martinez

Data Scientist, Optum
Miguel Martinez is a Data Scientist at Optum Enterprise Analytics. Relied on as a tech lead in advancing AI healthcare initiatives, he is passionate about identifying and developing data science solutions for the benefit of organizations and people.

 

3:30pm

3:45pm

Thursday, August 22 • 3:45pm – 4:15pm

Using Ontologies to Power AI Systems

There’s a great deal of confusion about the role of a knowledge architecture in artificial intelligence projects. Some people don’t believe that any reference data is necessary. In reality, reference data is required: even if no metadata or architecture definitions are defined externally for an AI algorithm, someone has made the decisions about architecture and classification within the program. Those implicit defaults will not work for every organization, however, because there are terms, workflows, product attributes, and organizing principles that are unique to each organization and that need to be defined for AI tools to work most effectively.

Speakers


Seth Earley

CEO, Earley Information Science
Seth Earley is a published author and public speaker on artificial intelligence and information architecture. He wrote “There’s no AI without IA,” which has become an industry catchphrase used by a number of people, including Ginni Rometty, the CEO of IBM.
  • Ontologies, taxonomies, thesauri – conceptual relationships (see the sketch after this list)
  • Object-oriented programming and information architecture using AI: old wine in new bottles
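As a toy illustration of why organization-specific reference data matters, the sketch below (hypothetical terms, plain Python) resolves free-text input against a mini controlled vocabulary with synonyms; a production ontology would live in a dedicated store, not a dict:

# Minimal taxonomy with broader/narrower relations and synonyms (invented terms).
taxonomy = {
    "analgesic": {
        "broader": "drug class",
        "narrower": ["NSAID", "opioid"],
        "synonyms": ["painkiller", "pain reliever"],
    },
    "NSAID": {"broader": "analgesic", "narrower": ["ibuprofen"], "synonyms": []},
}

def resolve(term):
    """Map free-text input onto the controlled vocabulary via synonyms."""
    t = term.lower()
    for concept, entry in taxonomy.items():
        if t == concept.lower() or t in (s.lower() for s in entry["synonyms"]):
            return concept
    return None

print(resolve("painkiller"))  # -> "analgesic"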

4:15pm

Thursday, August 22

TBA

Senior Leadership Panel: Future Directions of Analytics

This panel brings together senior leaders from across industry, academia, and government to discuss the challenges they are tackling, the needs they anticipate, and the goals they aim to achieve.

Moderators


Bonnie Holub, PhD

Industry & Business Data Science, Teradata
Bonnie has a PhD in Artificial Intelligence and specializes in correlating disparate sets of Big Data for actionable results.



scPopCorn: A New Computational Method for Subpopulation Detection and their Comparative Analysis Across Single-Cell Experiments

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Present-day technological advances have facilitated unprecedented opportunities for studying biological systems at single-cell resolution. For example, single-cell RNA sequencing (scRNA-seq) enables the measurement of transcriptomic information for thousands of individual cells in one experiment. Analyses of such data provide information that was not accessible using bulk sequencing, which can only assess the average properties of cell populations. Single-cell measurements, in contrast, can capture the heterogeneity of a population of cells. In particular, single-cell studies allow for the identification of novel cell types, states, and dynamics.

 

One of the most prominent uses of the scRNA-seq technology is the identification of subpopulations of cells present in a sample and comparing such subpopulations across samples. Such information is crucial for understanding the heterogeneity of cells in a sample and for comparative analysis of samples from different conditions, tissues, and species. A frequently used approach is to cluster every dataset separately, inspect marker genes for each cluster, and compare these clusters in an attempt to determine which cell types were shared between samples. This approach, however, relies on the existence of predefined or clearly identifiable marker genes and their consistent measurement across subpopulations.
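For concreteness, a minimal sketch of that separate-clustering baseline using the Scanpy toolkit might look like this (assumptions: the scanpy and leidenalg packages are installed; the .h5ad file names and parameter choices are illustrative; this is the baseline workflow, not scPopCorn itself):

import scanpy as sc

def cluster_and_mark(adata):
    # Standard per-experiment pipeline: normalize, reduce, cluster, rank markers.
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.pca(adata, n_comps=50)
    sc.pp.neighbors(adata)
    sc.tl.leiden(adata)                        # graph-based clustering
    sc.tl.rank_genes_groups(adata, "leiden")   # marker genes per cluster
    return adata

adata1 = cluster_and_mark(sc.read_h5ad("experiment1.h5ad"))
adata2 = cluster_and_mark(sc.read_h5ad("experiment2.h5ad"))
# Clusters must then be matched across experiments by comparing marker genes,
# the step that depends on predefined, consistently measured markers and that
# scPopCorn, described below, instead solves jointly with clustering.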

 

An alternative strategy is to first align the datasets from different experiments globally. Although the aligned data can then be clustered to reveal subpopulations and their correspondence, solving the subpopulation-mapping problem by performing global alignment first and clustering second overlooks the information about subpopulations that already exists in each experiment. An approach that addresses the problem directly may therefore be a more suitable solution. With this in mind, the researchers developed a computational method, single-cell subpopulations comparison (scPopCorn), that allows for comparative analysis of two or more single-cell populations.

 

The performance of scPopCorn was tested in three distinct settings. First, its potential was demonstrated in identifying and aligning subpopulations in human and mouse pancreatic single-cell data. Next, scPopCorn was applied to the task of aligning biological replicates of mouse kidney single-cell data, where it achieved the best performance among previously published tools. Finally, it was applied to compare populations of cells from cancerous and healthy brain tissues, revealing the relation of neoplastic cells to neural cells and astrocytes. Taken together, these results show that scPopCorn provides a powerful tool for the comparative analysis of single-cell populations.

 

scPopCorn is, in essence, a computational method for identifying subpopulations of cells within individual single-cell experiments and mapping these subpopulations across experiments. Unlike other approaches, scPopCorn performs population identification and mapping simultaneously, by optimizing a function that combines both objectives. When applied to complex biological data, scPopCorn outperforms previous methods. It should be kept in mind, however, that scPopCorn assumes the input single-cell data consist of separable subpopulations; it is not designed for comparative analysis of single-cell trajectory datasets that do not fulfill this constraint.

 

Several innovations developed in this work contributed to the performance of scPopCorn. First, unifying the above-mentioned tasks into a single problem statement allowed the signal from different experiments to be integrated while subpopulations were identified within each experiment; this incorporation helps reduce biological and experimental noise. The researchers believe that the ideas introduced in scPopCorn not only enabled a highly accurate approach to subpopulation identification and mapping, but can also provide a stepping stone for other tools that interrogate the relationships between single-cell experiments.

 

References:

 

https://www.sciencedirect.com/science/article/pii/S2405471219301887

 

https://www.tandfonline.com/doi/abs/10.1080/23307706.2017.1397554

 

https://ieeexplore.ieee.org/abstract/document/4031383

 

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0927-y

 

https://www.sciencedirect.com/science/article/pii/S2405471216302666

 

 



eProceedings for BIO 2019 International Convention, June 3-6, 2019 Philadelphia Convention Center; Philadelphia PA, Real Time Coverage by Stephen J. Williams, PhD @StephenJWillia2

 

CONFERENCE OVERVIEW

Real Time Coverage of BIO 2019 International Convention, June 3-6, 2019 Philadelphia Convention Center; Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/05/31/real-time-coverage-of-bio-international-convention-june-3-6-2019-philadelphia-convention-center-philadelphia-pa/

 

LECTURES & PANELS

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence: Realizing Precision Medicine One Patient at a Time, 6/5/2019, Philadelphia PA

Reporter: Stephen J Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/05/real-time-coverage-bioconvention-bio2019-machine-learning-and-artificial-intelligence-realizing-precision-medicine-one-patient-at-a-time/

 

Real Time Coverage @BIOConvention #BIO2019: Genome Editing and Regulatory Harmonization: Progress and Challenges, 6/5/2019. Philadelphia PA

Reporter: Stephen J Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/05/real-time-coverage-bioconvention-bio2019-genome-editing-and-regulatory-harmonization-progress-and-challenges/

 

Real Time Coverage @BIOConvention #BIO2019: Precision Medicine Beyond Oncology June 5, 2019, Philadelphia PA

Reporter: Stephen J Williams PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/05/real-time-coverage-bioconvention-bio2019-precision-medicine-beyond-oncology-june-5-philadelphia-pa/

 

Real Time @BIOConvention #BIO2019: #Bitcoin Your Data! From Trusted Pharma Silos to Trustless Community-Owned Blockchain-Based Precision Medicine Data Trials, 6/5/2019, Philadelphia PA

Reporter: Stephen J Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/05/real-time-bioconvention-bio2019bitcoin-your-data-from-trusted-pharma-silos-to-trustless-community-owned-blockchain-based-precision-medicine-data-trials/

 

Real Time Coverage @BIOConvention #BIO2019: Keynote Address Jamie Dimon CEO @jpmorgan June 5, 2019, Philadelphia, PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/05/real-time-coverage-bioconvention-bio2019-keynote-address-jamie-dimon-ceo-jpmorgan-june-5-philadelphia/

 

Real Time Coverage @BIOConvention #BIO2019: Chat with @FDA Commissioner, & Challenges in Biotech & Gene Therapy June 4, 2019, Philadelphia, PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/04/real-time-coverage-bioconvention-bio2019-chat-with-fda-commissioner-challenges-in-biotech-gene-therapy-june-4-philadelphia/

 

Falling in Love with Science: Championing Science for Everyone, Everywhere June 4 2019, Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/04/real-time-coverage-bioconvention-bio2019-falling-in-love-with-science-championing-science-for-everyone-everywhere/

 

Real Time Coverage @BIOConvention #BIO2019: June 4 Morning Sessions; Global Biotech Investment & Public-Private Partnerships, 6/4/2019, Philadelphia PA

Reporter: Stephen J Williams PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/04/real-time-coverage-bioconvention-bio2019-june-4-morning-sessions-global-biotech-investment-public-private-partnerships/

 

Real Time Coverage @BIOConvention #BIO2019: Understanding the Voices of Patients: Unique Perspectives on Healthcare; June 4, 2019, 11:00 AM, Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/04/real-time-coverage-bioconvention-bio2019-understanding-the-voices-of-patients-unique-perspectives-on-healthcare-june-4/

 

Real Time Coverage @BIOConvention #BIO2019: Keynote: Siddhartha Mukherjee, Oncologist and Pulitzer Author; June 4 2019, 9AM, Philadelphia PA

Reporter: Stephen J. Williams, PhD. @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/04/real-time-coverage-bioconvention-bio2019-keynote-siddhartha-mukherjee-oncologist-and-pulitzer-author-june-4-9am-philadelphia-pa/

 

Real Time Coverage @BIOConvention #BIO2019: Issues of Risk and Reproducibility in Translational and Academic Collaboration; 2:30-4:00 June 3, 2019, Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/03/real-time-coverage-bioconvention-bio2019-issues-of-risk-and-reproduceability-in-translational-and-academic-collaboration-230-400-june-3-philadelphia-pareal-time-coverage-bioconvention-bi/

 

Real Time Coverage @BIOConvention #BIO2019: What’s Next: The Landscape of Innovation in 2019 and Beyond. 3-4 PM June 3, 2019, Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/03/real-time-coverage-bioconvention-bio2019-whats-next-the-landscape-of-innovation-in-2019-and-beyond-3-4-pm-june-3-philadelphia-pa/

 

Real Time Coverage @BIOConvention #BIO2019: After Trump’s Drug Pricing Blueprint: What Happens Next? A View from Washington; June 3, 2019 1:00 PM, Philadelphia PA

Reporter: Stephen J. Williams, PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/03/real-time-coverage-bioconvention-bio2019-after-trumps-drug-pricing-blueprint-what-happens-next-a-view-from-washington-june-3-2019-100-pm-philadelphia-pa/

 

Real Time Coverage @BIOConvention #BIO2019: International Cancer Clusters Showcase June 3, 2019, Philadelphia PA

Reporter: Stephen J. Williams PhD @StephenJWillia2

https://pharmaceuticalintelligence.com/2019/06/03/real-time-coverage-bioconvention-bio2019-international-cancer-clusters-showcase-june-3-philadelphia-pa/



Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence: Realizing Precision Medicine One Patient at a Time

Reporter: Stephen J Williams, PhD @StephenJWillia2

The impact of Machine Learning (ML) and Artificial Intelligence (AI) over the last decade has been tremendous. With the rise of infobesity, ML/AI is evolving into an essential capability for mining the sheer volume of patient genomics, omics, sensor/wearable, and real-world data, and for unraveling the knot of healthcare’s most complex questions.

Despite the advancements in technology, organizations struggle to prioritize and implement ML/AI to achieve the anticipated value, whilst managing the disruption that comes with it. In this session, panelists will discuss ML/AI implementation and adoption strategies that work. Panelists will draw upon their experiences as they share their success stories, discuss how to implement digital diagnostics, track disease progression and treatment, and increase commercial value and ROI compared against traditional approaches.

  • Most trials being run are still at the stage of training AI/ML algorithms on training data sets. The best results so far are about 80% accuracy on training sets, which needs to improve.
  • All data sets can be biased. For example, a professor measuring heart rate with an IR detector on a wearable found that different skin types generate different signals, so training sets may carry population biases (data collected from only one group).
  • Clinical-grade equipment often hasn't been trained on data sets as large as those used for commercial wearables; commercial-grade devices are tested on larger study populations. This can affect the AI/ML algorithms.
  • Regulations: which regulatory body is responsible is still up for debate. Whether the FDA or the FTC is responsible for AI/ML in healthcare, health tech, and IT is not fully decided yet, and we don't yet have guidances for these new technologies.
  • Some rules: never use your own encryption; always use industry standards, especially when receiving personal data from wearables (see the sketch after this list). One hospital corrupted its systems because an out-of-date computer system could not protect against a virus transmitted by a wearable.
  • Pharma companies understand they need to increase the value of their products, so they are very interested in how AI/ML can be used.
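As a minimal illustration of that "use industry standards" rule, the sketch below encrypts a made-up wearable payload with the Python cryptography library's Fernet recipe rather than any home-rolled cipher (the payload and device ID are invented):

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, store in a key-management system, not in code
cipher = Fernet(key)

payload = b'{"heart_rate": 72, "device_id": "wearable-001"}'
token = cipher.encrypt(payload)  # authenticated, standards-based encryption
print(cipher.decrypt(token))     # round-trips to the original bytes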

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)



BioInformatic Resources at the Environmental Protection Agency: Tools and Webinars on Toxicity Prediction

Curator Stephen J. Williams Ph.D.

New GenRA Module in EPA’s CompTox Dashboard Will Help Predict Potential Chemical Toxicity

Published September 25, 2018

As part of its ongoing computational toxicology research, EPA is developing faster and improved approaches to evaluate chemicals for potential health effects.  One commonly applied approach is known as chemical read-across. Read-across uses information about how a chemical with known data behaves to make a prediction about the behavior of another chemical that is “similar” but does not have as much data. Current read-across, while cost-effective, relies on a subjective assessment, which leads to varying predictions and justifications depending on who undertakes and evaluates the assessment.

To reduce uncertainties and develop a more objective approach, EPA researchers have developed an automated read-across tool called Generalized Read-Across (GenRA), and added it to the newest version of the EPA Computational Toxicology Dashboard. The goal of GenRA is to encode as many expert considerations used within current read-across approaches as possible and combine these with data-driven approaches to transition read-across towards a more systematic and data-based method of making predictions.

EPA chemist Dr. Grace Patlewicz says it was this uncertainty that motivated the development of GenRA. “You don’t actually know if you’ve been successful at using read-across to help predict chemical toxicity because it’s a judgement call based on one person versus the next. That subjectivity is something we were trying to move away from.” Patlewicz says.

Since toxicologists and risk assessors are already familiar with read-across, EPA researchers saw value in creating a tool that was aligned with the current read-across workflow but that addressed uncertainty using data analysis methods, in what they call a “harmonized-hybrid workflow.”

In its current form, GenRA lets users find analogues, or chemicals that are similar to their target chemical, based on chemical structural similarity. The user can then select which analogues they want to carry forward into the GenRA prediction by exploring the consistency and concordance of the underlying experimental data for those analogues. Next, the tool predicts toxicity effects of specific repeated dose studies. Then, a plot with these outcomes is generated based on a similarity-weighted activity of the analogue chemicals the user selected. Finally, the user is presented with a data matrix view showing whether a chemical is predicted to be toxic (yes or no) for a chosen set of toxicity endpoints, with a quantitative measure of uncertainty.
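To make the similarity-weighted prediction step concrete, here is a toy sketch in the spirit of GenRA, not the GenRA implementation itself (assumptions: RDKit is installed; the SMILES strings and binary toxicity outcomes are invented for illustration):

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Analogue chemicals with known binary toxicity outcomes for one endpoint (invented).
analogues = {"CCO": 0, "CCCl": 1, "CCBr": 1}
target = Chem.MolFromSmiles("CCI")   # data-poor target chemical

def fingerprint(mol):
    # Morgan (circular) fingerprint used for structural similarity.
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

target_fp = fingerprint(target)
weighted_sum = total_weight = 0.0
for smiles, outcome in analogues.items():
    sim = DataStructs.TanimotoSimilarity(target_fp, fingerprint(Chem.MolFromSmiles(smiles)))
    weighted_sum += sim * outcome    # similarity-weighted activity
    total_weight += sim

score = weighted_sum / total_weight  # a score > 0.5 would be read across as "toxic"
print(f"Similarity-weighted toxicity score: {score:.2f}")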

The team is also comparing chemicals based on other similarity contexts, such as physicochemical characteristics or metabolic similarity, as well as extending the approach to make quantitative predictions of toxicity.

Patlewicz thinks incorporating other contexts and similarity measures will refine GenRA to make better toxicity predictions, fulfilling the goal of creating a read-across method capable of assessing thousands of chemicals that currently lack toxicity data.

“That’s the direction that we’re going in,” Patlewicz says. “Recognizing where we are and trying to move towards something a little bit more objective, showing how aspects of the current read-across workflow could be refined.”

Learn more at: https://comptox.epa.gov

 

A listing of EPA Tools for Air Quality Assessment

Tools

  • Atmospheric Model Evaluation Tool (AMET)
    AMET helps in the evaluation of meteorological and air quality simulations.
  • Benchmark Dose Software (BMDS)
    EPA developed the Benchmark Dose Software (BMDS) as a tool to help estimate dose or exposure of a chemical or chemical mixture associated with a given response level. The methodology is used by EPA risk assessors and is fast becoming the world’s standard for dose-response analysis for risk assessments, including air pollution risk assessments.
  • BenMAP
    BenMAP is a Windows-based computer program that uses a Geographic Information System (GIS)-based approach to estimate the health impacts and economic benefits occurring when populations experience changes in air quality.
  • Community-Focused Exposure and Risk Screening Tool (C-FERST)
    C-FERST is an online tool developed by EPA in collaboration with stakeholders to provide access to resources that can be used with communities to help identify and learn more about their environmental health issues and explore exposure and risk reduction options.
  • Community Health Vulnerability Index
    EPA scientists developed a Community Health Vulnerability Index that can be used to help identify communities at higher health risk from wildfire smoke. Breathing smoke from a nearby wildfire is a health threat, especially for people with lung or heart disease, diabetes and high blood pressure as well as older adults, and those living in communities with poverty, unemployment and other indicators of social stress. Health officials can use the tool, in combination with air quality models, to focus public health strategies on vulnerable populations living in areas where air quality is impaired, either by wildfire smoke or other sources of pollution. The work was published in Environmental Science & Technology.
  • Critical Loads Mapper Tool
    The Critical Loads Mapper Tool can be used to help protect terrestrial and aquatic ecosystems from atmospheric deposition of nitrogen and sulfur, two pollutants emitted from fossil fuel burning and agricultural emissions. The interactive tool provides easy access to information on deposition levels through time; critical loads, which identify thresholds when pollutants have reached harmful levels; and exceedances of these thresholds.
  • EnviroAtlas
    EnviroAtlas provides interactive tools and resources for exploring the benefits people receive from nature or “ecosystem goods and services”. Ecosystem goods and services are critically important to human health and well-being, but they are often overlooked due to lack of information. Using EnviroAtlas, many types of users can access, view, and analyze diverse information to better understand the potential impacts of various decisions.
  • EPA Air Sensor Toolbox for Citizen Scientists
    EPA’s Air Sensor Toolbox for Citizen Scientists provides information and guidance on new low-cost compact technologies for measuring air quality. Citizens are interested in learning more about local air quality where they live, work and play. EPA’s Toolbox includes information about: Sampling methodologies; Calibration and validation approaches; Measurement methods options; Data interpretation guidelines; Education and outreach; and Low cost sensor performance information.
  • ExpoFIRST
    The Exposure Factors Interactive Resource for Scenarios Tool (ExpoFIRST) brings data from EPA’s Exposure Factors Handbook: 2011 Edition (EFH) to an interactive tool that maximizes flexibility and transparency for exposure assessors. ExpoFIRST represents a significant advance for regional, state, and local scientists in performing and documenting calculations for community and site-specific exposure assessments, including air pollution exposure assessments.
  • EXPOsure toolbox (ExpoBox)
    This is a toolbox created to assist individuals from within government, industry, academia, and the general public with assessing exposure, including exposure to air contaminants, fate and transport processes of air pollutants and their potential exposure concentrations. It is a compendium of exposure assessment tools that links to guidance documents, databases, models, reference materials, and other related resources.
  • Federal Reference & Federal Equivalency Methods
    EPA scientists develop and evaluate Federal Reference Methods and Federal Equivalency Methods for accurately and reliably measuring six primary air pollutants in outdoor air. These methods are used by states and other organizations to assess implementation actions needed to attain National Ambient Air Quality Standards.
  • Fertilizer Emission Scenario Tool for CMAQ (FEST-C)
    FEST-C facilitates the definition and simulation of new cropland farm management system scenarios or editing of existing scenarios to drive Environmental Policy Integrated Climate model (EPIC) simulations.  For the standard 12km continental Community Multi-Scale Air Quality model (CMAQ) domain, this amounts to about 250,000 simulations for the U.S. alone. It also produces gridded daily EPIC weather input files from existing hourly Meteorology-Chemistry Interface Processor (MCIP) files, transforms EPIC output files to CMAQ-ready input files and links directly to Visual Environment for Rich Data Interpretation (VERDI) for spatial visualization of input and output files. The December 2012 release will perform all these functions for any CMAQ grid scale or domain.
  • Instruction Guide and Macro Analysis Tool for Community-led Air Monitoring 
    EPA has developed two tools for evaluating the performance of low-cost sensors and interpreting the data they collect to help citizen scientists, communities, and professionals learn about local air quality.
  • Integrated Climate and Land use Scenarios (ICLUS)
    Climate change and land-use change are global drivers of environmental change. Impact assessments frequently show that interactions between climate and land-use changes can create serious challenges for aquatic ecosystems, water quality, and air quality. Population projections to 2100 were used to model the distribution of new housing across the landscape. In addition, housing density was used to estimate changes in impervious surface cover.  A final report, datasets, the ICLUS+ Web Viewer and ArcGIS tools are available.
  • Indoor Semi-Volatile Organic Compound (i-SVOC)
    i-SVOC Version 1.0 is a general-purpose software application for dynamic modeling of the emission, transport, sorption, and distribution of semi-volatile organic compounds (SVOCs) in indoor environments. i-SVOC supports a variety of uses, including exposure assessment and the evaluation of mitigation options. SVOCs are a diverse group of organic chemicals that can be found in:
    • Pesticides;
    • Ingredients in cleaning agents and personal care products;
    • Additives to vinyl flooring, furniture, clothing, cookware, food packaging, and electronics.
    Many are also present in indoor air, where they tend to bind to interior surfaces and particulate matter (dust).
  • Municipal Solid Waste Decision Support Tool (MSW DST)
    This tool is designed to aid solid waste planners in evaluating the cost and environmental aspects of integrated municipal solid waste management strategies. The tool is the result of collaboration between EPA and RTI International and its partners.
  • Optical Noise-Reduction Averaging (ONA) Program Improves Black Carbon Particle Measurements Using Aethalometers
    ONA is a program that reduces noise in real-time black carbon data obtained using Aethalometers. Aethalometers optically measure the concentration of light absorbing or “black” particles that accumulate on a filter as air flows through it. These particles are produced by incomplete fossil fuel, biofuel and biomass combustion. Under polluted conditions, they appear as smoke or haze.
  • RETIGO tool
    Real Time Geospatial Data Viewer (RETIGO) is a free, web-based tool that shows air quality data that are collected while in motion (walking, biking or in a vehicle). The tool helps users overcome technical barriers to exploring air quality data. After collecting measurements, citizen scientists and other users can import their own data and explore the data on a map.
  • Remote Sensing Information Gateway (RSIG)
    RSIG offers a new way for users to get the multi-terabyte, environmental datasets they want via an interactive, Web browser-based application. A file download and parsing process that now takes months will be reduced via RSIG to minutes.
  • Simulation Tool Kit for Indoor Air Quality and Inhalation Exposure (IAQX)
    IAQX version 1.1 is an indoor air quality (IAQ) simulation software package that complements and supplements existing IAQ simulation programs. IAQX is for advanced users who have experience with exposure estimation, pollution control, risk assessment, and risk management. There are many sources of indoor air pollution, such as building materials, furnishings, and chemical cleaners. Since most people spend a large portion of their time indoors, it is important to be able to estimate exposure to these pollutants. IAQX helps users analyze the impact of pollutant sources and sinks, ventilation, and air cleaners. It performs conventional IAQ simulations to calculate pollutant concentration and/or personal exposure as a function of time. It can also estimate adequate ventilation rates based on user-provided air quality criteria, a unique feature useful for product stewardship and risk management.
  • Spatial Allocator
    The Spatial Allocator provides tools that could be used by the air quality modeling community to perform commonly needed spatial tasks without requiring the use of a commercial Geographic Information System (GIS).
  • Traceability Protocol for Assay and Certification of Gaseous Calibration Standards
    This is used to certify calibration gases for ambient and continuous emission monitors. It specifies methods for assaying gases and establishing traceability to National Institute of Standards and Technology (NIST) reference standards. Traceability is required under EPA ambient and continuous emission monitoring regulations.
  • Watershed Deposition Mapping Tool (WDT)
    WDT provides an easy to use tool for mapping the deposition estimates from CMAQ to watersheds to provide the linkage of air and water needed for TMDL (Total Maximum Daily Load) and related nonpoint-source watershed analyses.
  • Visual Environment for Rich Data Interpretation (VERDI)
    VERDI is a flexible, modular, Java-based program for visualizing multivariate gridded meteorology, emissions, and air quality modeling data created by environmental modeling systems such as CMAQ and the Weather Research and Forecasting (WRF) model.

 

Databases

  • Air Quality Data for the CDC National Environmental Public Health Tracking Network 
    EPA’s Exposure Research scientists are collaborating with the Centers for Disease Control and Prevention (CDC) on a CDC initiative to build a National Environmental Public Health Tracking (EPHT) network. Working with state, local and federal air pollution and health agencies, the EPHT program is facilitating the collection, integration, analysis, interpretation, and dissemination of data from environmental hazard monitoring, and from human exposure and health effects surveillance. These data provide scientific information to develop surveillance indicators and to investigate possible relationships between environmental exposures, chronic disease, and other diseases that can lead to interventions to reduce the burden of these illnesses. An important part of the initiative is air quality modeling estimates and air quality monitoring data, combined through Bayesian modeling, that can be linked with health outcome data.
  • EPAUS9R – An Energy Systems Database for use with the Market Allocation (MARKAL) Model
    The EPAUS9r is a regional database representation of the United States energy system. The database uses the MARKAL model. MARKAL is an energy system optimization model used by local and federal governments, national and international communities and academia. EPAUS9r represents energy supply, technology, and demand throughout the major sectors of the U.S. energy system.
  • Fused Air Quality Surfaces Using Downscaling
    This database provides access to the most recent O3 and PM2.5 surface datasets produced using downscaling.
  • Health & Environmental Research Online (HERO)
    HERO provides access to scientific literature used to support EPA’s integrated science assessments, including the Integrated Science Assessments (ISA) that feed into the National Ambient Air Quality Standards (NAAQS) reviews.
  • SPECIATE 4.5 Database
    SPECIATE is a repository of volatile organic gas and particulate matter (PM) speciation profiles of air pollution sources.

A listing of EPA Tools and Databases for Water Contaminant Exposure Assessment

Exposure and Toxicity

  • EPA ExpoBox (A Toolbox for Exposure Assessors)
    This toolbox assists individuals from within government, industry, academia, and the general public with assessing exposure from multiple media, including water and sediment. It is a compendium of exposure assessment tools that links to guidance documents, databases, models, reference materials, and other related resources.

  • Chemical and Product Categories (CPCat) Database
    CPCat is a database containing information mapping more than 43,000 chemicals to a set of terms categorizing their usage or function. The comprehensive list of chemicals with associated categories of chemical and product use was compiled from publicly available sources. Unique use-category taxonomies from each source are mapped onto a single common set of approximately 800 terms. Users can search for chemicals by chemical name, Chemical Abstracts Registry Number, or by CPCat terms associated with chemicals.

A listing of EPA Tools and Databases for Chemical Toxicity Prediction & Assessment

  • Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS)
    SeqAPASS is a fast, online screening tool that allows researchers and regulators to extrapolate toxicity information across species. For some species, such as humans, mice, rats, and zebrafish, the EPA has a large amount of data regarding their toxicological susceptibility to various chemicals. However, the toxicity data for numerous other plants and animals is very limited. SeqAPASS extrapolates from these data rich model organisms to thousands of other non-target species to evaluate their specific potential chemical susceptibility.

 

A listing of EPA Webinar and Literature on Bioinformatic Tools and Projects

Comparative Bioinformatics Applications for Developmental Toxicology

Discusses how the US EPA/NCCT is trying to solve the problem of too many chemicals, too high cost, and too much biological uncertainty, and the solution the ToxCast Program proposes: a data-rich system to screen, classify, and rank chemicals for further evaluation.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=186844

CHEMOINFORMATIC AND BIOINFORMATIC CHALLENGES AT THE US ENVIRONMENTAL PROTECTION AGENCY.

This presentation will provide an overview of both the scientific program and the regulatory activities related to computational toxicology.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=154013

How Can We Use Bioinformatics to Predict Which Agents Will Cause Birth Defects?

The availability of genomic sequences from a growing number of human and model organisms has provided an explosion of data, information, and knowledge regarding biological systems and disease processes. High-throughput technologies such as DNA and protein microarray biochips are now standard tools for probing the cellular state and determining important cellular behaviors at the genomic/proteomic level. While these newer technologies are beginning to provide important information on cellular reactions to toxicant exposure (toxicogenomics), a major remaining challenge is formulating a strategy to integrate transcript, protein, metabolite, and toxicity data. This integration will require new concepts and tools in bioinformatics. The U.S. National Library of Medicine’s PubMed site includes 19 million citations and abstracts and continues to grow. The BDSM team is now working on assembling the literature’s unstructured data into a structured database and linking it to BDSM within a system that can then be used for testing and generating new hypotheses. This effort will generate databases of entities (such as genes, proteins, metabolites, and gene ontology processes) linked to PubMed identifiers/abstracts, with information on the relationships between them. The end result will be an online/standalone tool that will help researchers focus on the papers most relevant to their query and uncover hidden connections and obvious information gaps.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=227345

ADVANCED PROTEOMICS AND BIOINFORMATICS TOOLS IN TOXICOLOGY RESEARCH: OVERCOMING CHALLENGES TO PROVIDE SIGNIFICANT RESULTS

This presentation specifically addresses the advantages and limitations of state-of-the-art gel, protein array, and peptide-based labeling proteomic approaches to assess the effects of a suite of model T4 inhibitors on the thyroid axis of Xenopus laevis.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NHEERL&dirEntryId=152823

Bioinformatic Integration of in vivo Data and Literature-based Gene Associations for Prioritization of Adverse Outcome Pathway Development

Adverse outcome pathways (AOPs) describe a sequence of events, beginning with a molecular initiating event (MIE), proceeding via key events (KEs), and culminating in an adverse outcome (AO). A challenge for use of AOPs in a safety evaluation context has been identification of MIEs and KEs relevant for AOs observed in regulatory toxicity studies. In this work, we implemented a bioinformatic approach that leverages mechanistic information in the literature and the AOs measured in regulatory toxicity studies to prioritize putative MIEs and/or early KEs for AOP development relevant to chemical safety evaluation. The US Environmental Protection Agency Toxicity Reference Database (ToxRefDB, v2.0) contains effect information for >1000 chemicals curated from >5000 studies or summaries from sources including data evaluation records from the US EPA Office of Pesticide Programs, the National Toxicology Program (NTP), peer-reviewed literature, and pharmaceutical preclinical studies. To increase ToxRefDB interoperability, endpoint and effect information were cross-referenced with codes from the Unified Medical Language System (UMLS), which enabled mapping of in vivo pathological effects from ToxRefDB to PubMed (via Medical Subject Headings, or MeSH). This enabled linkage to any resource that is also connected to PubMed or indexed with MeSH. A publicly available bioinformatic tool, the Entity-MeSH Co-occurrence Network (EMCON), uses multiple data sources and a measure of mutual information to identify the genes most related to a MeSH term. Using EMCON, gene sets were generated for endpoints of toxicological relevance in ToxRefDB, linking putative KEs and/or MIEs. The Comparative Toxicogenomics Database was used to further filter important associations. As a proof of concept, thyroid-related effects and their highly associated genes were examined, demonstrating relevant MIEs and early KEs for AOPs that describe thyroid-related AOs. The ToxRefDB-to-gene mapping for thyroid resulted in >50 unique gene-to-chemical relationships. Integrated use of EMCON and ToxRefDB data provides a basis for rapid and robust putative AOP development, as well as a novel means of generating mechanistic hypotheses for specific chemicals. This abstract does not necessarily reflect U.S. EPA policy. Abstract and poster for the 2019 Society of Toxicology annual meeting, March 2019.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=344452
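As a toy illustration of the co-occurrence-plus-mutual-information idea behind EMCON (all counts below are invented; this is not EPA code):

import math

# Hypothetical counts over N abstracts: how often each gene co-occurs with a
# MeSH term such as "Thyroid Gland" versus how often each appears overall.
N = 100_000
term_count = 800                      # abstracts tagged with the MeSH term
gene_counts = {"TPO": 500, "TSHR": 420, "GAPDH": 9000}
cooccur = {"TPO": 130, "TSHR": 95, "GAPDH": 90}

def pmi(gene):
    """Pointwise mutual information between a gene and the MeSH term."""
    p_joint = cooccur[gene] / N
    p_gene = gene_counts[gene] / N
    p_term = term_count / N
    return math.log2(p_joint / (p_gene * p_term))

for gene in gene_counts:
    print(gene, round(pmi(gene), 2))
# Thyroid-specific genes (TPO, TSHR) score high; housekeeping GAPDH scores near zero.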

Bioinformatic Integration of in vivo Data and Literature-based Gene Associations for Prioritization of Adverse Outcome Pathway Development

Adverse outcome pathways (AOPs) describe a sequence of events, beginning with a molecular initiating event (MIE), proceeding via key events (KEs), and culminating in an adverse outcome (AO). A challenge for use of AOPs in a safety evaluation context has been identification of MIEs and KEs relevant for AOs observed in regulatory toxicity studies. In this work, we implemented a bioinformatic approach that leverages mechanistic information in the literature and the AOs measured in regulatory toxicity studies to prioritize putative MIEs and/or early KEs for AOP development relevant to chemical safety evaluation. The US Environmental Protection Agency Toxicity Reference Database (ToxRefDB, v2.0) contains effect information for >1000 chemicals curated from >5000 studies or summaries from sources including data evaluation records from the US EPA Office of Pesticide Programs, the National Toxicology Program (NTP), peer-reviewed literature, and pharmaceutical preclinical studies. To increase ToxRefDB interoperability, endpoint and effect information were cross-referenced with codes from the United Medical Language System, which enabled mapping of in vivo pathological effects from ToxRefDB to PubMed (via Medical Subject Headings or MeSH). This enabled linkage to any resource that is also connected to PubMed or indexed with MeSH. A publicly available bioinformatic tool, the Entity-MeSH Co-occurrence Network (EMCON), uses multiple data sources and a measure of mutual information to identify genes most related to a MeSH term. Using EMCON, gene sets were generated for endpoints of toxicological relevance in ToxRefDB linking putative KEs and/or MIEs. The Comparative Toxicogenomics Database was used to further filter important associations. As a proof of concept, thyroid-related effects and their highly associated genes were examined, and demonstrated relevant MIEs and early KEs for AOPs to describe thyroid-related AOs. The ToxRefDB to gene mapping for thyroid resulted in >50 unique gene to chemical relationships. Integrated use of EMCON and ToxRefDB data provides a basis for rapid and robust putative AOP development, as well as a novel means to generate mechanistic hypotheses for specific chemicals. This abstract does not necessarily reflect U.S. EPA policy. Abstract and Poster for 2019 Society of Toxicology annual meeting in March 2019

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dateBeginPublishedPresented=03%2F26%2F2014&dateEndPublishedPresented=03%2F26%2F2019&dirEntryId=344452&keyword=Chemical+Safety&showCriteria=2&sortBy=pubDateYear&subject=Chemical+Safety+Research

Bioinformatic Integration of in vivo Data and Literature-based Gene Associations for Prioritization of Adverse Outcome Pathway Development

Adverse outcome pathways (AOPs) describe a sequence of events, beginning with a molecular initiating event (MIE), proceeding via key events (KEs), and culminating in an adverse outcome (AO). A challenge for use of AOPs in a safety evaluation context has been identification of MIEs and KEs relevant for AOs observed in regulatory toxicity studies. In this work, we implemented a bioinformatic approach that leverages mechanistic information in the literature and the AOs measured in regulatory toxicity studies to prioritize putative MIEs and/or early KEs for AOP development relevant to chemical safety evaluation. The US Environmental Protection Agency Toxicity Reference Database (ToxRefDB, v2.0) contains effect information for >1000 chemicals curated from >5000 studies or summaries from sources including data evaluation records from the US EPA Office of Pesticide Programs, the National Toxicology Program (NTP), peer-reviewed literature, and pharmaceutical preclinical studies. To increase ToxRefDB interoperability, endpoint and effect information were cross-referenced with codes from the United Medical Language System, which enabled mapping of in vivo pathological effects from ToxRefDB to PubMed (via Medical Subject Headings or MeSH). This enabled linkage to any resource that is also connected to PubMed or indexed with MeSH. A publicly available bioinformatic tool, the Entity-MeSH Co-occurrence Network (EMCON), uses multiple data sources and a measure of mutual information to identify genes most related to a MeSH term. Using EMCON, gene sets were generated for endpoints of toxicological relevance in ToxRefDB linking putative KEs and/or MIEs. The Comparative Toxicogenomics Database was used to further filter important associations. As a proof of concept, thyroid-related effects and their highly associated genes were examined, and demonstrated relevant MIEs and early KEs for AOPs to describe thyroid-related AOs. The ToxRefDB to gene mapping for thyroid resulted in >50 unique gene to chemical relationships. Integrated use of EMCON and ToxRefDB data provides a basis for rapid and robust putative AOP development, as well as a novel means to generate mechanistic hypotheses for specific chemicals. This abstract does not necessarily reflect U.S. EPA policy. Abstract and Poster for 2019 Society of Toxicology annual meeting in March 2019

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dateBeginPublishedPresented=04%2F02%2F2014&dateEndPublishedPresented=04%2F02%2F2019&dirEntryId=344452&keyword=Chemical+Safety&showCriteria=2&sortBy=pubDateYear&subject=Chemical+Safety+Research

Bioinformatic Integration of in vivo Data and Literature-based Gene Associations for Prioritization of Adverse Outcome Pathway Development

Adverse outcome pathways (AOPs) describe a sequence of events, beginning with a molecular initiating event (MIE), proceeding via key events (KEs), and culminating in an adverse outcome (AO). A challenge for use of AOPs in a safety evaluation context has been identification of MIEs and KEs relevant for AOs observed in regulatory toxicity studies. In this work, we implemented a bioinformatic approach that leverages mechanistic information in the literature and the AOs measured in regulatory toxicity studies to prioritize putative MIEs and/or early KEs for AOP development relevant to chemical safety evaluation. The US Environmental Protection Agency Toxicity Reference Database (ToxRefDB, v2.0) contains effect information for >1000 chemicals curated from >5000 studies or summaries from sources including data evaluation records from the US EPA Office of Pesticide Programs, the National Toxicology Program (NTP), peer-reviewed literature, and pharmaceutical preclinical studies. To increase ToxRefDB interoperability, endpoint and effect information were cross-referenced with codes from the United Medical Language System, which enabled mapping of in vivo pathological effects from ToxRefDB to PubMed (via Medical Subject Headings or MeSH). This enabled linkage to any resource that is also connected to PubMed or indexed with MeSH. A publicly available bioinformatic tool, the Entity-MeSH Co-occurrence Network (EMCON), uses multiple data sources and a measure of mutual information to identify genes most related to a MeSH term. Using EMCON, gene sets were generated for endpoints of toxicological relevance in ToxRefDB linking putative KEs and/or MIEs. The Comparative Toxicogenomics Database was used to further filter important associations. As a proof of concept, thyroid-related effects and their highly associated genes were examined, and demonstrated relevant MIEs and early KEs for AOPs to describe thyroid-related AOs. The ToxRefDB to gene mapping for thyroid resulted in >50 unique gene to chemical relationships. Integrated use of EMCON and ToxRefDB data provides a basis for rapid and robust putative AOP development, as well as a novel means to generate mechanistic hypotheses for specific chemicals. This abstract does not necessarily reflect U.S. EPA policy. Abstract and Poster for 2019 Society of Toxicology annual meeting in March 2019

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dateBeginPublishedPresented=04%2F02%2F2014&dateEndPublishedPresented=04%2F02%2F2019&dirEntryId=344452&fed_org_id=111&keyword=Chemical+Safety&showCriteria=2&sortBy=pubDateYear&subject=Chemical+Safety+Research


A Web-Hosted R Workflow to Simplify and Automate the Analysis of 16S NGS Data

Next-Generation Sequencing (NGS) produces large data sets that include tens of thousands of sequence reads per sample. For analysis of bacterial diversity, 16S NGS sequences are typically analyzed in a workflow containing best-of-breed bioinformatics packages that may leverage multiple programming languages (e.g., Python, R, Java). The process to transform raw NGS data into usable operational taxonomic units (OTUs) can be tedious due to the number of quality control (QC) steps used in QIIME and other software packages for sample processing. Therefore, the purpose of this work was to simplify the analysis of 16S NGS data from a large number of samples by integrating QC, demultiplexing, and QIIME (Quantitative Insights Into Microbial Ecology) analysis in an accessible R project. User command-line operations for each of the pipeline steps were automated into a workflow. In addition, the R server allows multi-user access to the automated pipeline via separate user accounts while providing access to the same large set of underlying data. We demonstrate the applicability of this pipeline automation using 16S NGS data from approximately 100 stormwater runoff samples collected in a mixed-land-use watershed in northeast Georgia. OTU tables were generated for each sample, and the relative taxonomic abundances were compared for different periods over storm hydrographs to determine how the microbial ecology of a stream changes with the rise and fall of stream stage. Our approach simplifies the pipeline analysis of multiple 16S NGS samples by automating multiple preprocessing, QC, analysis, and post-processing command-line steps that are called by a sequence of R scripts. Presented at ASM 2015: Rapid NGS Bioinformatic Pipelines for Enhanced Molecular Epidemiologic Investigation of Pathogens.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=309890
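The abstract describes wrapping command-line QC, demultiplexing, and QIIME steps in scripts. A minimal sketch of that kind of step-chaining, written here in Python with QIIME 1-era commands as illustrative placeholders (the published workflow drives its steps from R, and the file names below are hypothetical), might look like:

```python
# Minimal sketch of automating a 16S pipeline's command-line steps.
# Commands, arguments, and file names are illustrative placeholders.
import subprocess
from pathlib import Path

STEPS = [
    ("demultiplex + quality filter", ["split_libraries_fastq.py", "-i", "reads.fastq",
                                      "-b", "barcodes.fastq", "-m", "mapping.txt",
                                      "-o", "slout/"]),
    ("OTU picking",                  ["pick_open_reference_otus.py", "-i",
                                      "slout/seqs.fna", "-o", "otus/"]),
    ("taxa summary",                 ["summarize_taxa.py", "-i", "otus/otu_table.biom",
                                      "-o", "taxa_summary/"]),
]

def run_pipeline(workdir: str) -> None:
    """Run each step in order, stopping on the first failure."""
    wd = Path(workdir)
    for desc, cmd in STEPS:
        print(f"[pipeline] {desc}: {' '.join(cmd)}")
        subprocess.run(cmd, cwd=wd, check=True)  # raises on nonzero exit

if __name__ == "__main__":
    run_pipeline("sample_batch_01")
```

The value of this kind of automation is exactly what the abstract claims: one scripted sequence replaces many manual command-line invocations per sample batch.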

Developing Computational Tools Necessary for Applying Toxicogenomics to Risk Assessment and Regulatory Decision Making

Genomics, proteomics, and metabolomics can provide useful weight-of-evidence data along the source-to-outcome continuum when appropriate bioinformatic and computational methods are applied toward integrating molecular, chemical, and toxicological information.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=156264

The Human Toxome Project

The Human Toxome project, funded as an NIH Transformative Research grant (2011-2016), is focused on developing the concepts and the means for deducing, validating, and sharing molecular Pathways of Toxicity (PoT). Using the test case of estrogenic endocrine disruption, the responses of MCF-7 human breast cancer cells are being phenotyped by transcriptomics and mass-spectroscopy-based metabolomics. The bioinformatics tools for PoT deduction represent a core deliverable. A number of challenges for quality and standardization of cell systems, omics technologies, and bioinformatics are being addressed. In parallel, concepts for annotation, validation, and sharing of PoT information, as well as their link to adverse outcomes, are being developed. A reasonably comprehensive public database of PoT, the Human Toxome Knowledge-base, could become a point of reference for toxicological research and regulatory test strategies.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NCCT&dirEntryId=309453

High-Resolution Metabolomics for Environmental Chemical Surveillance and Bioeffect Monitoring

High-Resolution Metabolomics for Environmental Chemical Surveillance and Bioeffect Monitoring (Presented by: Dean Jones, PhD, Department of Medicine, Emory University) (2/28/2013)

https://www.epa.gov/chemical-research/high-resolution-metabolomics-environmental-chemical-surveillance-and-bioeffect

Identification of Absorption, Distribution, Metabolism, and Excretion (ADME) Genes Relevant to Steatosis Using a Gene Expression Approach

Absorption, distribution, metabolism, and excretion (ADME) impact chemical concentration and activation of molecular initiating events of Adverse Outcome Pathways (AOPs) in cellular, tissue, and organ-level targets. In order to better describe ADME parameters and how they modulate potential hazards posed by chemical exposure, our goal is to investigate the relationship between AOPs and ADME-related genes and functional information. Given the scope of this task, we began using hepatic steatosis as a case study. To identify ADME genes related to steatosis, we used the publicly available toxicogenomics database Open TG-GATEs. This database contains standardized rodent chemical exposure data for 170 chemicals (mostly drugs), along with differential gene expression data and corresponding associated pathological changes. We examined the chemical exposure microarray data set gathered from 9 chemical exposure treatments resulting in pathologically confirmed (minimal, moderate, and severe) incidences of hepatic steatosis. From this data set, we utilized differential expression analyses to identify gene changes resulting from the chemical exposures leading to hepatic steatosis. We then selected differentially expressed genes (DEGs) related to ADME by filtering all genes based on their ADME functional identities. These DEGs include enzymes such as cytochrome P450, UDP-glucuronosyltransferase, and flavin-containing monooxygenase, and transporter genes such as the solute carrier and ATP-binding cassette transporter families. The up- and downregulated genes were identified across these treatments: a total of 61 genes were upregulated and 68 genes were downregulated in all treatments, while 25 genes were upregulated in some treatments and downregulated in others. This work highlights the application of bioinformatics tools to identify genes that are modulated by adverse outcomes. Specifically, we delineate a method to identify genes that are related to ADME and can impact target tissue dose in response to chemical exposures. The computational method outlined in this work is applicable to any adverse outcome pathway and provides a linkage between chemical exposure, target tissue dose, and adverse outcomes. Application of this method will allow for the rapid screening of chemicals for their impact on ADME-related genes using available gene databases in the literature. This abstract does not necessarily reflect U.S. EPA policy.

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NHEERL&dirEntryId=341273
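As a rough illustration of the filtering step described above, the sketch below intersects per-treatment DEG lists with an ADME gene set and tallies genes that move in the same direction across all treatments; the gene symbols and treatment names are invented for the example:

```python
# Hedged sketch of the described filtering: intersect per-treatment DEG
# lists with an ADME gene set, then tally consistent up/down regulation
# across all treatments. Genes and treatments are illustrative only.
ADME_GENES = {"CYP1A1", "CYP2B6", "UGT1A1", "FMO3", "SLC22A1", "ABCB1"}

# per-treatment DEGs: {treatment: {"up": set, "down": set}}
degs = {
    "chemA": {"up": {"CYP1A1", "UGT1A1", "TP53"}, "down": {"SLC22A1"}},
    "chemB": {"up": {"CYP1A1", "FMO3"},           "down": {"SLC22A1", "ABCB1"}},
}

def adme_only(genes: set) -> set:
    """Keep only genes with an ADME functional identity."""
    return genes & ADME_GENES

up_all   = set.intersection(*(adme_only(d["up"])   for d in degs.values()))
down_all = set.intersection(*(adme_only(d["down"]) for d in degs.values()))

print("ADME genes up in all treatments:  ", sorted(up_all))
print("ADME genes down in all treatments:", sorted(down_all))
```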

Development of Environmental Fate and Metabolic Simulators

Presented at Bioinformatics Open Source Conference (BOSC), Detroit, MI, June 23-24, 2005. see description

https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=257172

 

Useful Webinars on EPA Computational Tools and Informatics

 

Computational Toxicology Communities of Practice

Computational Toxicology Research

EPA’s Computational Toxicology Communities of Practice is composed of hundreds of stakeholders from over 50 public and private sector organizations (ranging from EPA, other federal agencies, industry, academic institutions, professional societies, nongovernmental organizations, environmental non-profit groups, state environmental agencies and more) who have an interest in using advances in computational toxicology and exposure science to evaluate the safety of chemicals.

The Communities of Practice is open to the public. Monthly webinars are held at EPA’s RTP campus, on the fourth Thursday of the month (occasionally rescheduled in November and December to accommodate holiday schedules), from 11am-Noon EST/EDT. Remote participation is available. For more information or to be added to the meeting email list, contact: Monica Linnenbrink (linnenbrink.monica@epa.gov).

Related Links

Past Webinar Presentations

Presentation File | Presented By | Date
OPEn structure-activity Relationship App (OPERA) (PowerPoint; Video) | Dr. Kamel Mansouri, Lead Computational Chemist contractor for Integrated Laboratory Systems in the National Institute of Environmental Health Sciences | 2019/04/25
CompTox Chemicals Dashboard and InVitroDB V3 (Video) | Dr. Antony Williams, Chemist in EPA’s National Center for Computational Toxicology, and Dr. Katie Paul-Friedman, Toxicologist in EPA’s National Center for Computational Toxicology | 2019/03/28
The Systematic Empirical Evaluation of Models (SEEM) framework (Video) | Dr. John Wambaugh, Physical Scientist in EPA’s National Center for Computational Toxicology | 2019/02/28
ToxValDB: A comprehensive database of quantitative in vivo study results from over 25,000 chemicals (Video) | Dr. Richard Judson, Research Chemist in EPA’s National Center for Computational Toxicology | 2018/12/20
Sequence Alignment to Predict Across Species Susceptibility (seqAPASS) (Video) | Dr. Carlie LaLone, Bioinformaticist, EPA’s National Health and Environmental Effects Research Laboratory | 2018/11/29
Chemicals and Products Database (Video) | Dr. Kathie Dionisio, Environmental Health Scientist, EPA’s National Exposure Research Laboratory | 2018/10/25
CompTox Chemicals Dashboard V3 (Video) | Dr. Antony Williams, Chemist, EPA National Center for Computational Toxicology (NCCT) | 2018/09/27
Generalised Read-Across (GenRA) (Video) | Dr. Grace Patlewicz, Chemist, EPA National Center for Computational Toxicology (NCCT) | 2018/08/23
EPA’s ToxCast Owner’s Manual (Video) | Monica Linnenbrink, Strategic Outreach and Communication Lead, EPA National Center for Computational Toxicology (NCCT) | 2018/07/26
EPA’s Non-Targeted Analysis Collaborative Trial (ENTACT) (Video) | Elin Ulrich, Research Chemist in the Public Health Chemistry Branch, EPA National Exposure Research Laboratory (NERL) | 2018/06/28
ECOTOX Knowledgebase: New Tools and Data Visualizations (Video) | Colleen Elonen, Translational Toxicology Branch, and Dr. Jennifer Olker, Systems Toxicology Branch, Mid-Continent Ecology Division of EPA’s National Health & Environmental Effects Research Laboratory (NHEERL) | 2018/05/24
Investigating Chemical-Microbiota Interactions in Zebrafish (Video) | Tamara Tal, Biologist in the Systems Biology Branch, Integrated Systems Toxicology Division, EPA’s National Health & Environmental Effects Research Laboratory (NHEERL) | 2018/04/26
The CompTox Chemistry Dashboard v2.6: Delivering Improved Access to Data and Real Time Predictions (Video) | Tony Williams, Computational Chemist, EPA’s National Center for Computational Toxicology (NCCT) | 2018/03/29
mRNA Transfection Retrofits Cell-Based Assays with Xenobiotic Metabolism (Video; audio starts at 10:17) | Steve Simmons, Research Toxicologist, EPA’s National Center for Computational Toxicology (NCCT) | 2018/02/22
Development and Distribution of ToxCast and Tox21 High-Throughput Chemical Screening Assay Method Description (Video) | Stacie Flood, National Student Services Contractor, EPA’s National Center for Computational Toxicology (NCCT) | 2018/01/25
High-throughput H295R steroidogenesis assay: utility as an alternative and a statistical approach to characterize effects on steroidogenesis (Video) | Derik Haggard, ORISE Postdoctoral Fellow, EPA’s National Center for Computational Toxicology (NCCT) | 2017/12/14
Systematic Review for Chemical Assessments: Core Elements and Considerations for Rapid Response (Video) | Kris Thayer, Director, Integrated Risk Information System (IRIS) Division of EPA’s National Center for Environmental Assessment (NCEA) | 2017/11/16
High Throughput Transcriptomics (HTTr) Concentration-Response Screening in MCF7 Cells (Video) | Joshua Harrill, Toxicologist, EPA’s National Center for Computational Toxicology (NCCT) | 2017/10/26
Learning Boolean Networks from ToxCast High-Content Imaging Data | Todor Antonijevic, ORISE Postdoc, EPA’s National Center for Computational Toxicology (NCCT) | 2017/09/28
Suspect Screening of Chemicals in Consumer Products | Katherine Phillips, Research Chemist, Human Exposure and Dose Modeling Branch, Computational Exposure Division, EPA’s National Exposure Research Laboratory (NERL) | 2017/08/31
The EPA CompTox Chemistry Dashboard: A Centralized Hub for Integrating Data for the Environmental Sciences (Video) | Antony Williams, Chemist, EPA’s National Center for Computational Toxicology (NCCT) | 2017/07/27
Navigating Through the Minefield of Read-Across Tools and Frameworks: An Update on Generalized Read-Across (GenRA) (Video)

 

Read Full Post »


A Nonlinear Methodology to Explain Complexity of the Genome and Bioinformatic Information

Reporter: Stephen J. Williams, Ph.D.

Multifractal bioinformatics: A proposal to the nonlinear interpretation of genome

The following is an open-access article by Pedro Moreno on a methodology to analyze genetic information across species, in particular the evolutionary trends of complex genomes, using a nonlinear analytic approach based on fractal geometry, coined "nonlinear bioinformatics". This fractal approach stems from the complex nature of higher eukaryotic genomes, including mosaicism and multiple interspersed genomic elements such as intronic regions, noncoding regions, and mobile elements such as transposable elements. Although seemingly random, these elements have a repetitive character. The complexity of DNA regulation, structure, and genomic variation is arguably best understood by developing algorithms based on fractal analysis, which can model the regionalized and repetitive variability and structure within complex genomes by elucidating the individual components that contribute to an overall complex structure, rather than by a "linear" or "reductionist" approach that looks only at individual coding regions and does not take into consideration the factors leading to genetic complexity and diversity.

Indeed, many other attempts to describe the complexities of DNA as a fractal geometric pattern have been made. In the paper "Fractals and Hidden Symmetries in DNA", Carlo Cattani uses fractal analysis to construct a simple geometric pattern of the influenza A virus by modeling the primary sequence of this viral DNA, namely the bases A, G, C, and T. The main conclusions, that

fractal shapes and symmetries in DNA sequences and DNA walks have been shown and compared with random and deterministic complex series. DNA sequences are structured in such a way that there exists some fractal behavior which can be observed both on the correlation matrix and on the DNA walks. Wavelet analysis confirms by a symmetrical clustering of wavelet coefficients the existence of scale symmetries.

suggest that, at least, the viral influenza genome structure can be analyzed into its basic components by fractal geometry.
This approach has also been used to model the complex nature of cancer, as discussed in a 2011 Seminars in Oncology paper:
Abstract: Cancer is a highly complex disease due to the disruption of tissue architecture. Thus, tissues, and not individual cells, are the proper level of observation for the study of carcinogenesis. This paradigm shift from a reductionist approach to a systems biology approach is long overdue. Indeed, cell phenotypes are emergent modes arising through collective non-linear interactions among different cellular and microenvironmental components, generally described by "phase space diagrams", where stable states (attractors) are embedded into a landscape model. Within this framework, cell states and cell transitions are generally conceived as mainly specified by gene-regulatory networks. However, the system's dynamics is not reducible to the integrated functioning of the genome-proteome network alone; the epithelia-stroma interacting system must be taken into consideration in order to give a more comprehensive picture. Given that cell shape represents the spatial geometric configuration acquired as a result of the integrated set of cellular and environmental cues, we posit that fractal-shape parameters represent "omics" descriptors of the epithelium-stroma system. Within this framework, function appears to follow form, and not the other way around.

As the authors conclude:

"Transitions from one phenotype to another are reminiscent of phase transitions observed in physical systems. The description of such transitions could be obtained by a set of morphological, quantitative parameters, like fractal measures. These parameters provide reliable information about system complexity."

Gene expression also displays a fractal nature. In a Frontiers in Physiology paper by Mahboobeh Ghorbani, Edmond A. Jonckheere and Paul Bogdan* “Gene Expression Is Not Random: Scaling, Long-Range Cross-Dependence, and Fractal Characteristics of Gene Regulatory Networks“,

the authors show that gene expression time series display fractal and long-range dependence characteristics.

Abstract: Gene expression is a vital process through which cells react to the environment and express functional behavior. Understanding the dynamics of gene expression could prove crucial in unraveling the physical complexities involved in this process. Specifically, understanding the coherent complex structure of transcriptional dynamics is the goal of numerous computational studies aiming to study and finally control cellular processes. Here, we report the scaling properties of gene expression time series in Escherichia coli and Saccharomyces cerevisiae. Unlike previous studies, which report the fractal and long-range dependency of DNA structure, we investigate the individual gene expression dynamics as well as the cross-dependency between them in the context of the gene regulatory network. Our results demonstrate that the gene expression time series display fractal and long-range dependence characteristics. In addition, the dynamics between genes and linked transcription factors in gene regulatory networks are also fractal and long-range cross-correlated. The cross-correlation exponents in gene regulatory networks are not unique. The distribution of the cross-correlation exponents of gene regulatory networks for several types of cells can be interpreted as a measure of the complexity of their functional behavior.
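One standard way to quantify the long-range dependence the authors report is detrended fluctuation analysis (DFA). The generic sketch below (not the authors' code) estimates the scaling exponent of a series; values near 0.5 indicate uncorrelated noise, while values above 0.5 indicate persistent long-range correlations:

```python
# Illustrative detrended fluctuation analysis (DFA), one common way to
# quantify long-range dependence in a time series; this is a generic
# sketch under stated assumptions, not taken from the cited paper.
import numpy as np

def dfa_exponent(x: np.ndarray, scales=(8, 16, 32, 64, 128)) -> float:
    y = np.cumsum(x - x.mean())           # integrated profile
    flucts = []
    for s in scales:
        n = len(y) // s
        rms = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # scaling exponent = slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
print("white noise exponent ~", round(dfa_exponent(rng.normal(size=4096)), 2))
```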

 

Given that a multitude of complex biomolecular networks and biomolecules can be described by fractal patterns, the development of bioinformatic algorithms would enhance our understanding of the interdependence and cross-functionality of these multiple biological networks, particularly in disease and drug resistance. The article below by Pedro Moreno describes the development of such bioinformatic algorithms.

Pedro A. Moreno
Escuela de Ingeniería de Sistemas y Computación, Facultad de Ingeniería, Universidad del Valle, Cali, Colombia
E-mail: pedro.moreno@correounivalle.edu.co

Thematic area: Systems engineering
Received: September 19, 2012
Accepted: December 16, 2013


 

 


Abstract

The first draft of the human genome (HG) sequence was published in 2001 by two competing consortia. Since then, several structural and functional characteristics of the HG organization have been revealed. Today, more than 2,000 HGs have been sequenced, and these findings are impacting strongly on academia and public health. Despite all this, a major bottleneck, called genome interpretation, persists: the lack of a theory that explains the complex puzzle of coding and non-coding features that compose the HG as a whole. Ten years after the HG was sequenced, two recent studies, discussed within the multifractal formalism, allow proposing a nonlinear theory that helps interpret the structural and functional variation of the genetic information of genomes. The present review article discusses this new approach, called "multifractal bioinformatics".

Keywords: Omics sciences, bioinformatics, human genome, multifractal analysis.


1. Introduction

Omic Sciences and Bioinformatics

In order to study genomes, their life properties, and the pathological consequences of their impairment, the Human Genome Project (HGP) was created in 1990. Since then, about 500 Gbp (EMBL) represented in thousands of prokaryotic genomes and tens of different eukaryotic genomes have been sequenced (NCBI, 1000 Genomes, ENCODE). Today, genomics is defined as the set of sciences and technologies dedicated to the comprehensive study of the structure, function, and origin of genomes. Several types of genomics have arisen as a result of the expansion and application of genomics to the study of the Central Dogma of Molecular Biology (CDMB), Figure 1 (above). The catalog of different types of genomics uses the Latin suffix "-omics", meaning "set of", to denote the new massive approaches of the omics sciences (Moreno et al., 2009). Given the large amount of genomic information available in the databases and the urgency of its actual interpretation, the balance has begun to lean heavily toward the bioinformatics infrastructure requirements of research laboratories, Figure 1 (below).

Bioinformatics, or computational biology, is defined as the application of computer and information technology to the analysis of biological data (Mount, 2004). It is an interdisciplinary science that requires the use of computing, applied mathematics, statistics, computer science, artificial intelligence, biophysics, biochemistry, genetics, and molecular biology. Bioinformatics was born from the need to understand the sequences of nucleotide or amino acid symbols that make up DNA and proteins, respectively. These analyses are made possible by the development of powerful algorithms that predict and reveal an infinity of structural and functional features in genomic sequences, such as gene location, discovery of homologies between macromolecule databases (BLAST), algorithms for phylogenetic analysis, for regulatory analysis, or for the prediction of protein folding, among others. This great development has created a multiplicity of approaches giving rise to new types of bioinformatics, such as the Multifractal Bioinformatics (MFB) proposed here.

1.1 Multifractal Bioinformatics and Theoretical Background

MFB is a proposal to analyze the information content of genomes and their life properties in a non-linear way. It is part of a specialized sub-discipline called "nonlinear bioinformatics", which uses a number of related techniques for the study of nonlinearity (fractal geometry, Hurst exponents, power laws, wavelets, among others) applied to the study of biological problems (https://pharmaceuticalintelligence.com/tag/fractal-geometry/). Its application requires a detailed knowledge of the structure of the genome to be analyzed and an appropriate command of multifractal analysis.

1.2 From the Worm Genome toward Human Genome

To explore a complex genome such as the HG, it is relevant to first implement multifractal analysis (MFA) in a simpler genome in order to show its practical utility. For example, the genome of the small nematode Caenorhabditis elegans is an excellent model from which many lessons can be extrapolated to complex organisms. Thus, if the MFA explains some of the structural properties of that genome, it is expected that this same analysis will reveal some similar properties in the HG.

The C. elegans nuclear genome is composed of about 100 Mbp, with six chromosomes distributed into five autosomes and one sex chromosome. The molecular structure of the genome is particularly homogeneous along the chromosome sequences, due to the presence of several regular features, including large contents of genes and introns of similar sizes. The C. elegans genome also has a regional organization of the chromosomes, mainly because the majority of the repeated sequences are located in the chromosome arms, Figure 2 (left) (C. elegans Sequencing Consortium, 1998). Given these regular and irregular features, the MFA could be an appropriate approach to analyze such distributions.

Meanwhile, the HG sequencing revealed a surprising mosaicism in coding (genes) and noncoding (repetitive DNA) sequences, Figure 2 (right) (Venter et al., 2001). This structure of 6 Gbp is divided into 23 pairs of chromosomes (diploid cells), and these highly regionalized sequences introduce complex patterns of regularity and irregularity for understanding the gene structure, the composition of repetitive DNA sequences, and their role in the study and application of the life sciences. The coding regions of the genome are estimated at ~25,000 genes, which constitute 1.4% of the HG. These genes are immersed in a giant sea of various types of non-coding sequences, which compose 98.6% of the HG (popularly misnamed "junk DNA"). The non-coding regions are characterized by many types of repeated DNA sequences, where 10.6% consists of Alu sequences, a type of SINE (short interspersed repeated element) preferentially located towards the genes. LINEs, MIR, MER, LTR, DNA transposons, and introns are other types of non-coding sequences, which form about 86% of the genome. Some of these sequences overlap with each other, as with CpG islands, which complicates the analysis of the genomic landscape. This standard genomic landscape was recently clarified: the latest studies show that 80.4% of the HG is functional, due to the discovery of more than five million "switches" that operate and regulate gene activity, re-evaluating the concept of "junk DNA" (The ENCODE Project Consortium, 2012).

Given that all these genomic variations in both worm and human produce regionalized genomic landscapes, it is proposed that Fractal Geometry (FG) would allow measuring how the genetic information content is fragmented. In this paper, the methodology and the nonlinear descriptive models for each of these genomes will be reviewed.

1.3 The MFA and its Application to Genome Studies

Most problems in physics are implicitly non-linear in nature, generating phenomena such as those treated by chaos theory, a science that deals with certain types of (non-linear) dynamic systems that are very sensitive to initial conditions yet nonetheless deterministic in rigor; that is, their behavior can be completely determined by knowing the initial conditions (Peitgen et al., 1992). In turn, FG is an appropriate tool to study chaotic dynamic systems (CDS). In other words, FG and chaos are closely related, because the space region toward which a chaotic orbit tends asymptotically has a fractal structure (strange attractors). Therefore, FG allows studying the framework on which CDS are defined (Moon, 1992). And this is how the genome structure and function are expected to be organized.

The MFA is an extension of FG, related to (Shannon) information theory; these disciplines have been very useful for studying the information content of a sequence of symbols. Mandelbrot established FG in the 1980s as a geometry capable of measuring the irregularity of nature by calculating the fractal dimension (D), an exponent derived from a power law (Mandelbrot, 1982). The value of D gives a measure of the level of fragmentation, or the information content, of a complex phenomenon, because D measures the degree of scaling of the system's fragmented self-similarity. Thus, FG looks for self-similar properties in structures and processes at different scales of resolution, and these self-similarities are organized following scaling or power laws.

Sometimes, a single exponent is not sufficient to characterize a complex phenomenon, so more exponents are required. The multifractal formalism allows this, and applies when many subsets of fractals with different scaling properties, and hence a large number of exponents or fractal dimensions, coexist simultaneously. As a result, when a multifractal singularity spectrum is generated, the scaling behavior of the frequency of symbols in a sequence can be quantified (Vélez et al., 2010).

The MFA has been implemented to study the spatial heterogeneity of theoretical and experimental fractal patterns in different disciplines. In post-genomic times, the MFA has been used to study multiple biological problems (Vélez et al., 2010). Nonetheless, very little attention has been given to the use of MFA to characterize the structural genetic information content of genomes obtained from the images of the Chaos Game Representation (CGR). The first studies at this level were made recently in the analysis of the C. elegans genome (Vélez et al., 2010) and human genomes (Moreno et al., 2011). The MFA methodology applied for the study of these genomes is developed below.

2. Methodology

The Multifractal Formalism from the CGR

2.1 Data Acquisition and Molecular Parameters

Databases for the C. elegans genome and the 36.2 Hs_refseq HG version were downloaded from the NCBI FTP server. Then, several strategies were designed to fragment the genomic DNA sequences into different length ranges. For example, the C. elegans genome was divided into 18 fragments, Figure 2 (left), and the human genome into 9,379 fragments. According to their annotation systems, the contents of molecular parameters for coding sequences (genes, exons, and introns), noncoding sequences (repetitive DNA, Alu, LINEs, MIR, MER, LTR, promoters, etc.), and coding/non-coding DNA (TTAGGC, AAAAT, AAATT, TTTTC, TTTTT, CpG islands, etc.) were counted for each sequence.
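As a hypothetical illustration of this fragmentation-and-counting step, the sketch below splits a sequence into fixed-length fragments and counts a few of the motifs named above per fragment; the fragment size and motif list are arbitrary choices for the example:

```python
# Hypothetical sketch of the fragmentation-and-counting step: split a
# genome string into fixed-size fragments and count simple molecular
# parameters (e.g., TTAGGC telomere repeats, CpG dinucleotides).
def count_motifs(seq: str, motifs) -> dict:
    return {m: seq.count(m) for m in motifs}

def fragment_counts(genome: str, frag_len: int = 500_000,
                    motifs=("TTAGGC", "CG", "AAAAT")):
    rows = []
    for start in range(0, len(genome), frag_len):
        frag = genome[start:start + frag_len]
        rows.append({"start": start, "length": len(frag),
                     **count_motifs(frag, motifs)})
    return rows

toy_genome = ("ACGT" * 1000 + "TTAGGC" * 50) * 3   # stand-in for real FASTA data
for row in fragment_counts(toy_genome, frag_len=5000)[:3]:
    print(row)
```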

2.2 Construction of the CGR

2.3 Fractal Measurement by the Box-Counting Method

Subsequently, the CGR, a recursive algorithm (Jeffrey, 1990; Restrepo et al., 2009), is applied to each selected DNA sequence, Figure 3 (above, left), and an image is obtained from it, which is then quantified by the box-counting algorithm. For example, Figure 3 (above, left) shows a CGR image for a human DNA sequence of 80,000 bp in length. Here, dark regions represent sub-quadrants with a high number of points (or nucleotides), and clear regions represent sections with a low number of points. The calculation of D for the Koch curve by the box-counting method is illustrated by a progression of changes in the grid size, and its Cartesian graph, Table 1.
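The CGR itself is simple to implement. A minimal sketch following Jeffrey's recursion is given below; the grid size is an arbitrary choice, and the toy sequence stands in for real FASTA data:

```python
# A minimal Chaos Game Representation (CGR) sketch (after Jeffrey, 1990):
# each nucleotide moves the current point halfway toward its corner of
# the unit square, and visit counts on a grid form the CGR "image".
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_image(seq: str, bins: int = 64) -> np.ndarray:
    img = np.zeros((bins, bins))
    x, y = 0.5, 0.5                        # start at the square's center
    for base in seq.upper():
        if base not in CORNERS:            # skip ambiguous bases (e.g., N)
            continue
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2  # move halfway toward the corner
        img[min(int(y * bins), bins - 1), min(int(x * bins), bins - 1)] += 1
    return img

seq = "ACGT" * 5000                        # toy sequence; use real FASTA data
print("occupied cells:", int((cgr_image(seq) > 0).sum()))
```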

The CGR image for a given DNA sequence is quantified by a standard fractal analysis. A fractal is a fragmented geometric figure whose parts are an approximate copy of the whole at reduced scale; that is, the figure has self-similarity. D is basically the scaling rule that the figure obeys. Generally, a power law is given by the following expression:

N(E) = E^D     (1)

where N(E) is the number of parts required for covering the figure when a scaling factor E is applied. The power law permits calculating the fractal dimension as:

D = log N(E) / log E     (2)

The D obtained by the box-counting algorithm covers the figure with disjoint boxes of size ɛ = 1/E and counts the number of boxes required. Figure 4 (above, left) shows the multifractal measure at momentum q = 1.
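A box-counting estimate of D from such an image can be sketched as follows; the box sizes are illustrative, and a real analysis would also check the quality of the log-log fit:

```python
# Box-counting sketch for a binary image such as a thresholded CGR:
# count occupied boxes at several box sizes and fit the slope of
# log N(eps) against log(1/eps) to estimate D.
import numpy as np

def box_counting_dimension(img: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    occupied = img > 0
    counts = []
    for s in sizes:
        n = occupied.shape[0] // s
        # any point inside an s-by-s box marks the box as occupied
        boxes = occupied[:n * s, :n * s].reshape(n, s, n, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # D = slope of log N versus log(1/eps), with eps the box size
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

rng = np.random.default_rng(1)
demo = (rng.random((64, 64)) < 0.5).astype(float)   # dense random set, D near 2
print("estimated D:", round(box_counting_dimension(demo), 2))
```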

2.4 Multifractal Measurement

When generalizing the box-counting algorithm to the multifractal case, according to the method of moments q, we obtain equation (3) (Gutiérrez et al., 1998; Yu et al., 2001):

M_q(ɛ) = Σ_i (M_i / M)^q ∝ ɛ^((q-1)·Dq)     (3)

where M_i is the number of points falling in the i-th grid box, M is the total number of points, and ɛ is the box size. Thus, the MFA is used when multiple scaling rules are applied. Figure 4 (above, right) shows the calculation of the multifractal measures at different momenta q (the partition function). Here, linear regressions must have a coefficient of determination equal or close to 1. From each linear regression a D is obtained, and these generate a spectrum of generalized fractal dimensions Dq for all integer q, Figure 4 (below, left). So, the multifractal spectrum is obtained as the limit:

Dq = lim(ɛ→0) [1/(q-1)] · ln Σ_i (M_i / M)^q / ln ɛ     (4)

The variation of the integer q allows emphasizing different regions and discriminating their fractal behavior: positive values emphasize the dense regions, where a high Dq is synonymous with richness of structure and properties; negative values emphasize the scarce regions, where a high Dq likewise indicates a lot of structure and properties. In real-world applications, the limit Dq is readily approximated from the data using a linear fitting: the transformation of equation (3) yields:

ln Σ_i (M_i / M)^q = Dq (q-1) ln ɛ     (5)

which shows that ln Σ_i (M_i / M)^q for a fixed q is a linear function of ln ɛ; Dq can therefore be evaluated as the slope of the fitted relationship between ln Σ_i (M_i / M)^q and (q-1) ln ɛ. The methodologies and approaches for the box-counting method and MFA are detailed in Moreno et al., 2000; Yu et al., 2001; Moreno, 2005. For a rigorous mathematical development of MFA from images, consult the Multifractal system entry on Wikipedia.
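Putting equations (3)-(5) together, a sketch of the moments method is shown below; it regresses ln Σ_i (M_i/M)^q on (q-1) ln ɛ for each q (the case q = 1, which requires a separate limit, is skipped here for brevity):

```python
# Sketch of the moments method described above: for each q, regress
# ln sum_i (M_i/M)^q on (q-1) ln(eps) over several box sizes; the
# slope of the fit is the generalized dimension D_q (q = 1 excluded).
import numpy as np

def generalized_dimensions(img: np.ndarray, qs, sizes=(2, 4, 8, 16, 32)):
    M = img.sum()
    dq = {}
    for q in qs:
        xs, ys = [], []
        for s in sizes:
            n = img.shape[0] // s
            boxes = img[:n * s, :n * s].reshape(n, s, n, s).sum(axis=(1, 3))
            p = boxes[boxes > 0] / M          # box measures M_i / M
            eps = s / img.shape[0]            # relative box size
            xs.append((q - 1) * np.log(eps))
            ys.append(np.log((p ** q).sum()))
        dq[q] = np.polyfit(xs, ys, 1)[0]      # slope = D_q
    return dq

rng = np.random.default_rng(2)
demo = rng.random((64, 64))                   # toy measure instead of a real CGR
print(generalized_dimensions(demo, qs=[-2, 0, 2]))
```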

2.5 Measurement of Information Content

Subsequently, from the spectrum of generalized dimensions Dq, the degree of multifractality ΔDq (MD) is calculated as the difference between the maximum and minimum values of Dq: ΔDq = Dq,max - Dq,min (Ivanov et al., 1999). When ΔDq is high, the multifractal spectrum is rich in information and highly aperiodic; when ΔDq is small, the resulting dimension spectrum is poor in information and highly periodic. It is expected, then, that aperiodicity in the genome would be related to highly polymorphic, aperiodic genomic structures, and that periodic regions would be related to highly repetitive, not very polymorphic genomic structures. The correlation exponent t(q) = (q - 1)Dq, Figure 4 (below, right), can also be obtained from the multifractal dimension Dq. The generalized dimension also provides significant specific information: D(q = 0) is equal to the capacity dimension, which in this analysis is given by the box count; D(q = 1) is equal to the information dimension; and D(q = 2) to the correlation dimension. Based on these multifractal parameters, many of the structural genomic properties can be quantified, related, and interpreted.
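From a computed Dq spectrum, the summary quantities just defined follow directly; the values below are illustrative numbers, not real data:

```python
# From a D_q spectrum, the summary quantities defined above follow
# directly; these D_q values are invented for illustration.
dq = {-20: 2.05, -2: 1.98, 0: 1.90, 2: 1.84, 20: 1.75}

delta_dq = max(dq.values()) - min(dq.values())     # degree of multifractality
tau = {q: (q - 1) * d for q, d in dq.items()}      # correlation exponent t(q)

print("Degree of multifractality:", round(delta_dq, 2))
print("Capacity dimension D(0): ", dq[0])
```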

2.6 Multifractal Parameters and Statistical and Discrimination Analyses

Once the multifractal parameters are calculated (Dq for q = -20 to 20, ΔDq, t(q), etc.), correlations with the molecular parameters are sought. These relations are established by plotting the genome molecular parameter counts versus MD in discriminant analyses with Cartesian graphs in 2-D, Figure 5 (below, left), and 3-D, combining multifractal and molecular parameters. Finally, simple linear regression analysis, multivariate analysis, and analyses by ranges and clusterings are performed to establish statistical significance.
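A minimal sketch of this correlation step, with invented per-fragment values, is:

```python
# Sketch of the described correlation step: regress a molecular
# parameter count against the degree of multifractality (MD) across
# genome fragments; all values below are made up for illustration.
import numpy as np

md        = np.array([0.12, 0.18, 0.25, 0.31, 0.40, 0.47])  # MD per fragment
alu_count = np.array([ 110,  150,  240,  300,  420,  500])  # Alu count per fragment

slope, intercept = np.polyfit(md, alu_count, 1)
r = np.corrcoef(md, alu_count)[0, 1]
print(f"fit: count = {slope:.1f} * MD + {intercept:.1f}, r^2 = {r**2:.3f}")
```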

3 Results and Discussion

3.1 Non-linear Descriptive Model for the C. elegans Genome

Analyzing the C. elegans genome with the multifractal formalism revealed what the symmetry and asymmetry of the genome's nucleotide composition had suggested. Thus, the multifractal scaling of the C. elegans genome is of interest because it indicates that the molecular structure of the chromosome may be organized as a system operating far from equilibrium following nonlinear laws (Ivanov et al., 1999; Burgos and Moreno-Tovar, 1996). This can be discussed from two points of view:

1) When comparing C. elegans chromosomes with each other, the X chromosome showed the lowest multifractality, Figure 5 (above). This means that the X chromosome is operating close to equilibrium, which results in increased genetic instability. This instability of the X could selectively contribute to the molecular mechanism that determines sex (XX or X0) during meiosis; that is, the X chromosome would operate closer to equilibrium in order to maintain this particular sexual dimorphism.

2) When comparing different chromosome regions of the C. elegans genome, changes in multifractality were found in relation to the regional organization (at the center and arms) exhibited by the chromosomes, Figure 5 (below, left). These behaviors are associated with changes in the content of repetitive DNA, Figure 5 (below, right). The results indicated that the chromosome arms are even more complex than previously anticipated. Thus, TTAGGC telomere sequences would be operating far from equilibrium to protect the genetic information encoded by the entire chromosome.

All these biological arguments may explain why the C. elegans genome is organized in a nonlinear way. These findings provide insight to quantify and understand the organization of the non-linear structure of the C. elegans genome, which may be extended to other genomes, including the HG (Vélez et al., 2010).

3.2 Nonlinear Descriptive Model for the Human Genome

Once the multifractal approach was validated in the C. elegans genome, the HG was analyzed exhaustively. This allowed us to propose a nonlinear model for the HG structure, which will be discussed from three points of view.

1) It was found that the high multifractality of the HG depends strongly on the content of Alu sequences and, to a lesser extent, on the content of CpG islands. These contents would be located primarily in highly aperiodic regions, thus taking the chromosome far from equilibrium and giving it greater genetic stability, protection, and attraction of mutations, Figure 6 (A-C). Thus, hundreds of regions in the HG may have high genetic stability, and the most important genetic information of the HG, the genes, would be safeguarded from environmental fluctuations. Other repeated elements (LINEs, MIR, MER, LTRs) showed no significant relationship, Figure 6 (D).

Consequently, the human multifractal map developed in Moreno et al., 2011 constitutes a good tool to identify those regions rich in genetic information and genomic stability.

2) The multifractal context seems to be a significant requirement for the structural and functional organization of thousands of genes and gene families. Thus, a high multifractal (aperiodic) context appears to be a "genomic attractor" for many genes (KOGs, KEGGs), Figure 6 (E), and some gene families, Figure 6 (F), involved in genetic and deterministic processes, in order to maintain deterministic regulatory control in the genome, although most HG sequences may be subject to a complex epigenetic control.

3) The classification of human chromosomes and the analysis of chromosome regions may have some medical implications (Moreno et al., 2002; Moreno et al., 2009). This means that the structure of low nonlinearity exhibited by some chromosomes (or chromosome regions) involves an environmental predisposition, making them potential targets for structural or numerical chromosomal alterations, Figure 6 (G). Additionally, sex chromosomes should have low multifractality to maintain sexual dimorphism and, probably, X chromosome inactivation.

All these fractal and biological arguments could explain why Alu elements are shaping the HG in a nonlinear manner (Moreno et al., 2011). Finally, the multifractal modeling of the HG serves as a theoretical framework to examine new discoveries made by the ENCODE project and new approaches to human epigenomes. That is, the non-linear organization of the HG might help to explain why most of the HG is expected to be functional.

4. Conclusions

All these results show that the multifractal formalism is appropriate to quantify and evaluate the genetic information content of genomes and to relate it to the known molecular anatomy of the genome and some of its expected properties. Thus, MFB allows interpreting in a logical manner the structural nature and variation of the genome.

MFB helps us understand why a number of chromosomal diseases are likely to occur in the genome, thus opening a new perspective toward personalized medicine for studying and interpreting the HG and its diseases.

The entire genome contains nonlinear information that organizes it and supposedly makes it function, suggesting that virtually 100% of the HG is functional. Bioinformatics in general is enriched with a novel approach (MFB) that makes it possible to quantify the genetic information content of any DNA sequence, with practical applications in different disciplines of biology, medicine, and agriculture. This novel breakthrough in computational genomic analysis and diseases contributes to defining biology as a "hard" science.

MFB opens a door to the development of a research program toward the establishment of an integrative discipline that contributes to "breaking" the code of human life (http://pharmaceuticalintelligence.com/page/3/).

5. Acknowledgements

Thanks to the directives of the EISC, the Universidad del Valle, and the School of Engineering for offering an academic, scientific, and administrative space for conducting this research. Likewise, thanks to the co-authors (professors and students) who participated in the implementation of excerpts from some of the works cited here. Finally, thanks to Colciencias for biotechnology project grant #1103-12-16765.


6. References

Blanco, S., & Moreno, P.A. (2007). Representación del juego del caos para el análisis de secuencias de ADN y proteínas mediante el análisis multifractal (método "box-counting"). In The Second International Seminar on Genomics and Proteomics, Bioinformatics and Systems Biology (pp. 17-25). Popayán, Colombia.

Burgos, J.D., & Moreno-Tovar, P. (1996). Zipf scaling behavior in the immune system. BioSystems, 39, 227-232.

C. elegans Sequencing Consortium. (1998). Genome sequence of the nematode C. elegans: a platform for investigating biology. Science, 282, 2012-2018.

Gutiérrez, J.M., Iglesias, A., Rodríguez, M.A., Burgos, J.D., & Moreno, P.A. (1998). Analyzing the multifractal structure of DNA nucleotide sequences. In M. Barbie & S. Chillemi (Eds.), Chaos and Noise in Biology and Medicine (chap. 4). Hackensack (NJ): World Scientific Publishing Co.

Ivanov, P.Ch., Nunes, L.A., Goldberger, A.L., Havlin, S., Rosenblum, M.G., Struzik, Z.R., & Stanley, H.E. (1999). Multifractality in human heartbeat dynamics. Nature, 399, 461-465.

Jeffrey, H.J. (1990). Chaos game representation of gene structure. Nucleic Acids Research, 18, 2163-2175.

Mandelbrot, B. (1982). La geometría fractal de la naturaleza. Barcelona, España: Tusquets Editores.

Moon, F.C. (1992). Chaotic and fractal dynamics. New York: John Wiley.

Moreno, P.A. (2005). Large scale and small scale bioinformatics studies on the Caenorhabditis elegans genome. Doctoral thesis. Department of Biology and Biochemistry, University of Houston, Houston, USA.

Moreno, P.A., Burgos, J.D., Vélez, P.E., Gutiérrez, J.M., et al. (2000). Multifractal analysis of complete genomes. In Proceedings of the 12th International Genome Sequencing and Analysis Conference (pp. 80-81). Miami Beach (FL).

Moreno, P.A., Rodríguez, J.G., Vélez, P.E., Cubillos, J.R., & Del Portillo, P. (2002). La genómica aplicada en salud humana. Colombia Ciencia y Tecnología, Colciencias, 20, 14-21.

Moreno, P.A., Vélez, P.E., & Burgos, J.D. (2009). Biología molecular, genómica y post-genómica. Pioneros, principios y tecnologías. Popayán, Colombia: Editorial Universidad del Cauca.

Moreno, P.A., Vélez, P.E., Martínez, E., Garreta, L., Díaz, D., Amador, S., Gutiérrez, J.M., et al. (2011). The human genome: a multifractal analysis. BMC Genomics, 12, 506.

Mount, D.W. (2004). Bioinformatics: Sequence and genome analysis. New York: Cold Spring Harbor Laboratory Press.

Peitgen, H.O., Jürgens, H., & Saupe, D. (1992). Chaos and Fractals: New Frontiers of Science. New York: Springer-Verlag.

Restrepo, S., Pinzón, A., Rodríguez, L.M., Sierra, R., Grajales, A., Bernal, A., Barreto, E., et al. (2009). Computational biology in Colombia. PLoS Computational Biology, 5(10), e1000535.

The ENCODE Project Consortium. (2012). An integrated encyclopedia of DNA elements in the human genome. Nature, 489, 57-74.

Vélez, P.E., Garreta, L.E., Martínez, E., Díaz, N., Amador, S., Gutiérrez, J.M., Tischer, I., & Moreno, P.A. (2010). The Caenorhabditis elegans genome: a multifractal analysis. Genetics and Molecular Research, 9, 949-965.

Venter, J.C., Adams, M.D., Myers, E.W., Li, P.W., et al. (2001). The sequence of the human genome. Science, 291, 1304-1351.

Yu, Z.G., Anh, V., & Lau, K.S. (2001). Measure representation and multifractal analysis of complete genomes. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 64, 031903.

 

Other articles on Bioinformatics on this Open Access Journal include:

Bioinformatics Tool Review: Genome Variant Analysis Tools

2017 Agenda – BioInformatics: Track 6: BioIT World Conference & Expo ’17, May 23-25, 2017, Seaport World Trade Center, Boston, MA

Better bioinformatics

Broad Institute, Google Genomics combine bioinformatics and computing expertise

Autophagy-Modulating Proteins and Small Molecules Candidate Targets for Cancer Therapy: Commentary of Bioinformatics Approaches

CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics

Read Full Post »


18th Annual 2019 BioIT Conference & Expo, April 16-18, 2019, Boston, Seaport World Trade Center, Track 5: Next-Gen Sequencing Informatics – Advances in Large-Scale Computing

 

https://www.bio-itworldexpo.com/programs

https://www.bio-itworldexpo.com/next-gen-sequencing-informatics

 

 

Leaders in Pharmaceutical Business Intelligence (LPBI) Group

represented by Founder & Director Aviva Lev-Ari, PhD, RN, will cover this event in REAL TIME using Social Media

@pharma_BI

@AVIVA1950

@evanKristel 

TUESDAY, APRIL 16

2:00 – 6:30 Main Conference Registration Open

 

4:00 PLENARY KEYNOTE SESSION
Amphitheater

5:00 – 7:00 Welcome Reception in the Exhibit Hall with Poster Viewing

 

WEDNESDAY, APRIL 17

7:30 am Registration Open and Morning Coffee

8:00 PLENARY KEYNOTE SESSION
Amphitheater

9:45 Coffee Break in the Exhibit Hall with Poster Viewing

 

CURRENT AND EMERGING TECHNOLOGIES
Waterfront 3

10:50 Chairperson’s Remarks

David LaBrosse, Director, Genomics, Research, Life Sciences & Healthcare, NetApp

11:00 Long Read Sequencing

Justin Zook, PhD, Researcher, National Institute of Standards and Technology

11:20 NovoGraph: Loading 7 Human Genomes into Graphs

Evan Biederstedt, Computational Biologist, Memorial Sloan Kettering Cancer Center

11:40 Building a Usable Human Pangenome: A Human Pangenomics Hackathon Run by NCBI at UCSC

Ben Busby, PhD, Scientific Lead, NCBI Hackathons Group, National Center for Biotechnology Information (NCBI)

12:00 pm Co-Presentation: Faster Genomic Data

Michael Hultner, PhD, Senior Vice President, Strategy; General Manager, US Operations, PetaGene

David LaBrosse, Director, Genomics, Research, Life Sciences & Healthcare, NetApp

Genetic testing demand is driving up the volume of genomic data that must be processed, analyzed, and stored. Gigabyte-scale genome sample files and terabyte- to petabyte-scale cohort data sets must be moved from data generation to processing to analysis sites, historically a slow, arduous process. NetApp and PetaGene will describe compression and data transfer technologies that overcome I/O bottlenecks to accelerate the movement of genomic data and reduce the time to process and analyze it.
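Neither vendor's toolchain is reproduced in the abstract, but a minimal sketch makes the underlying arithmetic concrete: smaller files mean less wire time. In the Python sketch below, gzip stands in for specialized genomic compressors such as PetaGene's; the file name, synthetic reads, and link speed are all hypothetical placeholders.

```python
# Illustrative only: gzip stands in for specialized genomic compressors.
import gzip
import os
import shutil

# Generate a small synthetic FASTQ so the sketch runs end to end.
FASTQ = "sample.fastq"                      # hypothetical file name
with open(FASTQ, "w") as f:
    for i in range(50_000):
        f.write(f"@read{i}\n" + "ACGT" * 25 + "\n+\n" + "I" * 100 + "\n")

LINK_MBPS = 100                             # hypothetical network link

def transfer_seconds(n_bytes: int, mbps: float) -> float:
    """Estimated wire time for n_bytes over an mbps link."""
    return n_bytes * 8 / (mbps * 1_000_000)

# Compress with a general-purpose codec.
with open(FASTQ, "rb") as src, gzip.open(FASTQ + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

for path in (FASTQ, FASTQ + ".gz"):
    size = os.path.getsize(path)
    print(f"{path}: {size/1e6:.1f} MB, ~{transfer_seconds(size, LINK_MBPS):.1f} s to move")
```

The same reasoning scales to the terabyte- and petabyte-range cohorts described in the talk: any constant compression ratio translates directly into proportionally shorter transfer and I/O time.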

12:30 Session Break

12:40 Luncheon Presentation I: Deep Phenotypic and Genomic Analysis of UK Biobank Data on the WuXi NextCODE Platform

Saliha Yilmaz, PhD, Research Geneticist, WuXi NextCODE

The increasing size and complexity of genetic and phenotypic data to include hundreds of thousands of participants poses a significant challenge for data storage and analysis. We demonstrate use of the GOR database and query language underlying our platform to mine UK Biobank and other datasets for efficient phenotype selection, GWAS and PheWAS, and to archive and query the results.
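The GOR query language itself is proprietary and not shown in the abstract. As a rough, hypothetical sketch of the phenotype-selection-plus-association pattern it supports, here is a toy pandas/scipy version; the tables, the ICD-10 filter, and all column names are invented stand-ins, not WuXi NextCODE's API.

```python
# Toy stand-ins for a biobank phenotype table and a genotype table.
import pandas as pd
from scipy.stats import chi2_contingency

pheno = pd.DataFrame({"sample_id": [1, 2, 3, 4, 5, 6],
                      "icd10":     ["I25", "I25", "I25", "", "", ""]})
geno = pd.DataFrame({"sample_id":  [1, 2, 3, 4, 5, 6] * 2,
                     "variant_id": ["rsA"] * 6 + ["rsB"] * 6,
                     "genotype":   [1, 1, 2, 0, 0, 0, 0, 1, 0, 1, 0, 1]})

# Phenotype selection: cases carry a hypothetical ICD-10 code.
cases = set(pheno.loc[pheno["icd10"] == "I25", "sample_id"])

def assoc_p(group: pd.DataFrame) -> float:
    """Chi-square p-value: carrier status vs. case status for one variant."""
    df = group.assign(case=group["sample_id"].isin(cases),
                      carrier=group["genotype"] > 0)
    return chi2_contingency(pd.crosstab(df["case"], df["carrier"]))[1]

# A GWAS-style scan: one association test per variant.
print(geno.groupby("variant_id").apply(assoc_p).sort_values())
```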

1:10 NEW: Luncheon Co-Presentation II: Optimizing Drug Discovery and Development with Data-Driven Insights

Christian Frech, PhD, Associate Director, Scientific Operations, Seven Bridges

Serhat Tetikol, Research & Development Engineer, Seven Bridges

1:40 Session Break

DATA VISUALIZATION, EXPLORATION & ANALYSIS
Waterfront 3

1:50 Chairperson’s Remarks

Jeffrey Rosenfeld, PhD, Manager of the Biomedical Informatics Shared Resource and Assistant Professor of Pathology, Rutgers Cancer Institute of NJ

1:55 AbbVie’s Target and Genomics Compilation (ATGC): A Target Knowledge Platform

Rishi Gupta, PhD, Senior Research Scientist, Information Research, AbbVie, Inc.

Author: Anne-Sophie Barthelet, Scientific Developer, Discngine

ATGC is a web-based platform that allows AbbVie scientists to gather relevant information to make accurate decisions on target identification, target validation, biomarker selection, and drug discovery. The platform provides in-depth information on key evidence types for each target, such as gene expression, RNA expression, protein expression, and mouse knockout studies. This talk focuses on key aspects of the application, including its architecture, the currently available tool sets, and how the various pieces of information are presented to the user.
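The abstract names the evidence types but not the implementation. A hypothetical sketch of the aggregation pattern such a platform implies might look like the following; the fetch_* stubs stand in for real expression, proteomics, and knockout sources and are not AbbVie's or Discngine's actual API.

```python
# Hypothetical sketch: one record per gene, assembled from independent sources.
from dataclasses import dataclass, field

def fetch_rna(gene):      # stub standing in for an RNA-expression source
    return {"liver": 12.3, "brain": 0.4}

def fetch_protein(gene):  # stub standing in for a proteomics source
    return {"liver": "high", "brain": "low"}

def fetch_mouse_ko(gene): # stub standing in for a knockout-phenotype source
    return ["embryonic lethality"]

@dataclass
class TargetRecord:
    gene: str
    rna: dict = field(default_factory=dict)
    protein: dict = field(default_factory=dict)
    mouse_ko: list = field(default_factory=list)

def build_record(gene: str) -> TargetRecord:
    """Gather every evidence type for one target into a single view."""
    return TargetRecord(gene, fetch_rna(gene), fetch_protein(gene),
                        fetch_mouse_ko(gene))

print(build_record("EGFR"))
```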

2:25 Self Service Data Visualization and Exploration at Genentech Research

Kiran Mukhyala, Senior Software Engineer, Bioinformatics and Computational Biology, Genentech Research and Early Development

Genomic data requires specialized infrastructure to enable data exploration and analysis at scale. We built an integrated, modular, end-to-end gene expression analysis platform implementing data import, storage, processing, analysis and visualization. The multi-layered architecture of the platform supports general, high-level applications for self-service analytics, as well as infrastructure for prototyping, incubating and integrating scientist-driven innovations. The platform coexists with other in-house and commercial software to provide a wide range of genomic data analysis and visualization options for Research scientists.
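The platform's code is not public; as a hedged sketch of the multi-layered idea described, storage, processing, and a high-level facade for self-service use can be decoupled behind narrow interfaces, so prototypes and production apps share the same stack. All class and method names below are invented for illustration.

```python
# Toy sketch of a layered expression-analysis stack; names are hypothetical.
import statistics

class ExpressionStore:
    """Storage layer: gene -> raw counts per sample (dict-backed here)."""
    def __init__(self):
        self._data = {}
    def put(self, gene, counts):
        self._data[gene] = counts
    def get(self, gene):
        return self._data[gene]

class Processor:
    """Processing layer: normalization, decoupled from storage."""
    @staticmethod
    def zscore(counts):
        mu, sd = statistics.mean(counts), statistics.pstdev(counts)
        return [(c - mu) / sd for c in counts] if sd else [0.0] * len(counts)

class Platform:
    """High-level facade a self-service app would call."""
    def __init__(self, store, processor):
        self.store, self.processor = store, processor
    def normalized(self, gene):
        return self.processor.zscore(self.store.get(gene))

store = ExpressionStore()
store.put("TP53", [5, 9, 14, 3])
print(Platform(store, Processor()).normalized("TP53"))
```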

2:55 Exploring and Visualizing Single-cell RNA Sequencing Data

Michael DeRan, PhD, Scientific Consultant, Diamond Age Data Science

Recent advances in single-cell RNA sequencing (scRNA-seq) technology have made this powerful method accessible to many researchers, but have not brought with them a clear, simple workflow for data analysis. As the number of scRNA-seq datasets has increased, so too has the number of analysis tools available; for those looking to perform their first scRNA-seq analysis the range of options can seem daunting. In working with our clients, I have had the opportunity to apply many different tools to scRNA-seq data from a variety of tissues and organisms. I have used this experience to select a set of tools that are flexible and suitable to many common scRNA-seq analysis tasks. In this talk I will introduce popular tools and methods for identifying cell populations, assessing differential expression and visualizing biological processes. I will discuss common pitfalls encountered in analyzing this data and make recommendations that anyone can use in their own analysis.
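The talk surveys tools rather than prescribing one. For readers wanting a concrete starting point, a minimal clustering workflow in the widely used Scanpy package looks roughly like the sketch below; the input file name and parameter values are illustrative placeholders, not recommendations from the talk.

```python
# Minimal scRNA-seq clustering sketch with Scanpy; parameters are
# illustrative defaults and the input file name is hypothetical.
import scanpy as sc

adata = sc.read_10x_h5("filtered_feature_bc_matrix.h5")
sc.pp.filter_cells(adata, min_genes=200)      # drop near-empty barcodes
sc.pp.filter_genes(adata, min_cells=3)        # drop rarely detected genes
sc.pp.normalize_total(adata, target_sum=1e4)  # library-size normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)                           # identify cell populations
sc.tl.umap(adata)                             # 2D embedding for plotting
sc.tl.rank_genes_groups(adata, "leiden")      # per-cluster marker genes
sc.pl.umap(adata, color="leiden")
```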

3:25 Refreshment Break in the Exhibit Hall with Poster Viewing, Meet the Experts: Bio-IT World Editorial Team, and Book Signing with Joseph Kvedar, MD, Author, The Internet of Healthy Things℠ (Book will be available for purchase onsite)

NGS APPROACHES FOR CANCER
Waterfront 3

4:00 Comparison of Different Approaches for Clinical Cancer Sequencing

Jeffrey Rosenfeld, PhD, Manager of the Biomedical Informatics Shared Resource and Assistant Professor of Pathology, Rutgers Cancer Institute of NJ

The sequencing of tumors is important for guiding the treatment of cancer patients. While the need to sequence the tumor is widely agreed upon, approaches vary widely, ranging from paired whole-genome tumor-normal sequencing to tumor-only small-panel sequencing, with many intermediate possibilities. Each approach carries a different cost and associated benefit. I will present a comparison of the different methods and their efficacy for guiding cancer treatment.
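As a toy illustration of the central trade-off, the sketch below contrasts paired tumor-normal subtraction with the tumor-only approximation of filtering against a population database; all variant sets are invented.

```python
# Toy sketch: tumor-only calling cannot distinguish somatic mutations
# from inherited (germline) variants; a matched normal can.
tumor_calls = {"BRAF:V600E", "TP53:R175H", "BRCA2:K3326*"}
normal_calls = {"BRCA2:K3326*"}              # germline, present in normal

somatic = tumor_calls - normal_calls          # paired tumor-normal result
print("paired somatic calls:", somatic)

# Tumor-only pipelines approximate this by filtering against population
# databases, at the cost of mislabeling rare germline variants as somatic.
population_db = {"BRCA2:K3326*"}
tumor_only = tumor_calls - population_db
print("tumor-only estimate:", tumor_only)
```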

4:30 Integrated NGS Analysis to Accelerate Disease Understanding for Drug Discovery

Helen Li, Director- Research IT – Biologics & Informatics, Eli Lilly and Company

5:00 Identification of Cancer Biomarker Genes

Maryam Nazarieh, PhD, Postdoctoral Researcher, Center for Bioinformatics, Universität des Saarlandes, Saarbrücken, Germany

Identification of biomarker genes plays a crucial role in disease detection and treatment. Computational approaches enhance the insights derived from experiments and reduce the effort required of biologists and experimentalists to identify biomarker genes that play key roles in complex diseases. This is essentially achieved by prioritizing a set of genes with certain attributes (1). Here, I propose the set of transcription factors that form the largest strongly connected component of the pluripotency network in embryonic stem cells as the global regulators that control the differentiation process and determine cell fate. This component can be controlled by a set of master regulatory genes.

The regulatory mechanisms underlying stem cells inspired us to formulate the identification of a set of master regulatory genes in regulatory networks as two combinatorial optimization problems, namely minimum dominating set and minimum connected dominating set, in weakly and strongly connected components. The developed methods were applied to cancer regulatory networks to identify disease-associated genes and anti-cancer drug targets in breast cancer and hepatocellular carcinoma. Since not all the nodes in the solutions are critical, a prioritization method named TopControl was developed to rank a set of candidate genes related to a certain disease, based on systematic analysis of the genes that are differentially expressed in tumor and normal conditions. To this end, NGS data from The Cancer Genome Atlas were used for matched tumor and normal samples of the liver hepatocellular carcinoma (LIHC) and breast invasive carcinoma (BRCA) datasets. Moreover, the topological features demonstrated in the regulatory networks surrounding differentially expressed genes were highly consistent across the outputs of several analysis tools.

We present several web servers and software packages that are publicly available at no cost. The Cytoscape plugin for minimum connected dominating set identifies a set of key regulatory genes in a user-provided regulatory network based on a heuristic approach. The ILP formulations of minimum dominating set and minimum connected dominating set return the optimal solutions for the aforementioned problems. Our source code is publicly available. The web servers TFmiR and TFmiR2 construct disease-, tissue-, and process-specific networks for the sets of deregulated genes and miRNAs provided by a user. They highlight topological hotspots and offer detection of three- and four-node FFL motifs as a separate web service for both mouse and human.

(1) Maryam Nazarieh, Understanding regulatory mechanisms underlying stem cells helps to identify cancer biomarkers. Ph.D. thesis, Saarland University, Saarbrücken, Germany (2018).
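The speaker's ILP formulations and web servers are the canonical implementations. As a toy illustration of the dominating-set idea itself, networkx's greedy heuristic on an invented regulatory network shows what "a small set of genes that controls the whole network" means.

```python
# Toy illustration of the minimum-dominating-set idea, using networkx's
# greedy heuristic rather than the speaker's ILP formulation.
# The edge list is a hypothetical regulatory network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("TP53", "MDM2"), ("TP53", "CDKN1A"), ("MYC", "CDKN1A"),
    ("MYC", "E2F1"), ("E2F1", "CCNE1"), ("STAT3", "MYC"),
])

# Every gene is either in the dominating set or adjacent to a member,
# so the set "controls" the whole network in the paper's sense.
dom = nx.dominating_set(G)
print("dominating set:", dom)
assert all(v in dom or any(u in dom for u in G[v]) for v in G)
```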

5:30 Best of Show Awards Reception in the Exhibit Hall with Poster Viewing

Read Full Post »

Older Posts »