

10:15AM 11/13/2014 – 10th Annual Personalized Medicine Conference at the Harvard Medical School, Boston

REAL TIME Coverage of this Conference by Dr. Aviva Lev-Ari, PhD, RN – Director and Founder of LEADERS in PHARMACEUTICAL BUSINESS INTELLIGENCE, Boston http://pharmaceuticalintelligence.com

10:15 a.m. Panel Discussion — IT/Big Data

IT/Big Data

The human genome is composed of 6 billion nucleotides (using the genetic alphabet of T, C, G and A). As the cost of sequencing the human genome decreases at a rapid rate, it may not be far in the future that every human being is sequenced at least once in their lifetime. Sequence data, together with clinical data, will be used more and more frequently to make clinical decisions. If that is true, we need secure methods of storing, retrieving and analyzing all of these data. Some people argue that this is a tsunami of data that we are not ready to handle. The panel will discuss the types and volumes of data being generated and how to deal with them.
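To make the data volumes behind the "tsunami" argument concrete, here is a back-of-the-envelope sketch. The coverage depth and per-base byte counts are illustrative assumptions, not figures given by the panel:

```python
# Back-of-the-envelope estimate of raw storage for one sequenced genome.
# Assumptions (illustrative): 6 billion bases (diploid genome, as cited
# above), 30x sequencing coverage, ~1 byte per base call plus ~1 byte
# for its quality score, uncompressed.

GENOME_BASES = 6_000_000_000   # diploid human genome
COVERAGE = 30                  # typical whole-genome depth (assumption)
BYTES_PER_BASE = 2             # base call + quality score (assumption)

def raw_storage_bytes(bases: int, coverage: int, bytes_per_base: int) -> int:
    """Uncompressed bytes of read data for one genome."""
    return bases * coverage * bytes_per_base

per_genome = raw_storage_bytes(GENOME_BASES, COVERAGE, BYTES_PER_BASE)
print(f"~{per_genome / 1e12:.2f} TB per genome, uncompressed")

# Scaling to a population shows why storage and retrieval become hard:
print(f"~{per_genome * 1_000_000 / 1e18:.2f} EB for a million genomes")
```

Even under these rough assumptions, a single genome's raw reads run to a fraction of a terabyte, and population-scale sequencing quickly reaches exabyte territory.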


   Moderator:

Amy Abernethy, M.D.
Chief Medical Officer, Flatiron

The role of informatics, software, and hardware in personalized medicine; Big Data and healthcare.

How labs and clinics can be connected: oncologists and hematologists use labs in the clinical setting; the role of IT and technology in the clinicians' environment.

Comparing Stanford, Harvard, and Duke medical centers: three different models of healthcare data management.

Creating novel solutions: capture the voice of the patient and integrate the components of Volume, Veracity, and Value.

Decisions need to be made in a short time frame, with documentation added after the fact.

No system can be perfect in all aspects.

Understanding the clinical record for conversion into databases, while maintaining the quality of the data collected.

Key Topics

Panelists:

Stephen Eck, M.D., Ph.D.
Vice President, Global Head of Oncology Medical Sciences,
Astellas, Inc.

A small-data expert: there is great advantage to small data. Population data allows for longitudinal studies.

"Big Mac" Big Data: big is good. But is the data being collected suitable for its intended use? Is it robust? What are its limitations, and what does the data analysis mean?

Data analysis in chemical libraries, which are now annotated.

Diversity in data as noted by MDs: the nuances are very great. Medical records are often used for building billing systems.

In cases where the data needed is unknown or unavailable, using only the data that is available limits the scope of the valuable solutions that can be arrived at.

In clinical trials: the needs of researchers and of billing clinicians must be served in one system.

Translation of disease data into data objects.

The signal-to-noise problem: Big Data thus provides validity and power.

 

J. Michael Gaziano, M.D., M.P.H., F.R.C.P.
Scientific Director, Massachusetts Veterans Epidemiology Research
and Information Center (MAVERIC), VA Boston Healthcare System;
Chief Division of Aging, Brigham and Women’s Hospital;
Professor of Medicine, Harvard Medical School

At BWH since 1987 at 75% effort, pushing forward the genomics agenda; 25% in the VA system. The VA is horizontally data-integrated, embedding research and knowledge: a baseline questionnaire covering 200,000 phenotypes, with questionnaire and genomics data to be integrated. Data is to be curated hierarchically: simple phenotypes, then validated phenotypes, then the probability of susceptibility to actual disease. Genomic medicine will benefit clinicians.

Data must be of visible quality. The VA collects data via telephone, for example in studies of medication compliance and of the ability to tolerate medication.

  • Annotation assisted in building a tool for neurologists on Alzheimer's Disease (the AlzSWAN knowledge base; see also Genotator, a disease-agnostic tool for annotation)
  • Curation of data is very different from statistical analysis of clinical trial data
  • Integration of data at the VA and at BWH follows two different, successful data-integration models; accessing the data also uses different models
  • Data extraction from Big Data remains an issue
  • Where the answers are in the data, build algorithms that will pick up causes of disease; for Alzheimer's this is very difficult to do
  • A system around all stakeholders: investment in connectivity and in moving data across individual silos (HR, finance, clinical research)
  • Biobank data and data quality

 

Krishna Yeshwant, M.D.
General Partner, Google Ventures;
Physician, Brigham and Women’s Hospital

A computer scientist and physician by training. Where is the technology going?

The interaction of IT and healthcare is a messy situation. Boston and Silicon Valley are focusing on consumers, and Google engineers have huge interest in developing medical and healthcare applications. New companies are entering the application and wearable space, moving from the computer-science world into medicine: at the enterprise level (EMR) and at the consumer level (wearables). Both areas are very active in Silicon Valley.

IT in the hospital is harder than IT in any other environment. There has been great progress in the last five years on security and privacy of data. Sequencing adds the cost of Big Data management with the highest security.

Constrained data vs. non-constrained data.

Opportunities for government cooperation; a lead is needed for standardization of data objects.

 

Questions from the Podium:

  • Where is the truth: do we have all the tools for genomic data usage, or not?
  • Question on interoperability
  • Big valuable data vs. merely Big Data
  • Quality, uniformity, large cohorts, comprehensive cancer centers
  • Can the volume of data compensate for the quality of data?
  • Data from imaging, quality and interpretation: three radiologists will read a cancer screening

 

 

 

– See more at: http://personalizedmedicine.partners.org/Education/Personalized-Medicine-Conference/Program.aspx#sthash.qGbGZXXf.dpuf

 

@HarvardPMConf

#PMConf

@SachsAssociates

@Duke_Medicine

@AstellasUS

@GoogleVentures

@harvardmed

@BrighamWomens

@kyeshwant



9:20AM 11/12/2014 – 10th Annual Personalized Medicine Conference at the Harvard Medical School, Boston

REAL TIME Coverage of this Conference by Dr. Aviva Lev-Ari, PhD, RN – Director and Founder of LEADERS in PHARMACEUTICAL BUSINESS INTELLIGENCE, Boston http://pharmaceuticalintelligence.com

9:20 a.m. Panel Discussion – Genomic Technologies

Genomic Technologies

The greatest impetus for personalized medicine was the initial sequencing of the human genome at the beginning of this century. As we began to recognize the importance of genetic factors in human health and disease, efforts to understand genetic variation and its impact on health accelerated. It is estimated that sequencing the first human genome cost more than two billion dollars, and reducing the cost of sequencing became an imperative in order to apply this technology to many facets of risk assessment, diagnosis, prognosis and therapeutic intervention. This panel will take a brief historical look back at how the technologies have evolved over the last 15 years, what the future holds, and how these technologies are being applied to patient care.


Opening Speaker and Moderator:

George Church, Ph.D.
Professor of Genetics, Harvard Medical School; Director, Personal Genomics

Genomic Technologies and Sequencing

  • highly predictive, preventative
  • non-predictive

Shareable Human Genomes Omics Standards

$800 human genome sequence: Moore's Law does not account for the rapid decrease in the cost of genome sequencing.
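The comparison with Moore's Law comes down to halving times, which can be sketched with simple arithmetic. The dollar figures and dates below are illustrative approximations of the commonly cited cost curve, not numbers from this talk:

```python
import math

# Compare cost-halving times: Moore's Law halves cost roughly every
# ~2 years; sequencing cost fell far faster.
# Illustrative assumptions: ~$10M per genome around 2007,
# ~$1,000 per genome around 2014.

def halving_time_years(cost_start: float, cost_end: float, years: float) -> float:
    """Average number of years for cost to halve over the period."""
    halvings = math.log2(cost_start / cost_end)
    return years / halvings

seq = halving_time_years(10_000_000, 1_000, 2014 - 2007)
print(f"Sequencing cost halved roughly every {seq:.2f} years")
print("Moore's Law halves cost roughly every 2 years")
```

Under these assumptions sequencing cost halved several times a year, which is why a Moore's Law extrapolation badly underestimates the decline.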

Genome Technologies and Applications

  • Genia nanopore: a battery-operated device
  • RNA & protein traffic
  • Molecular stratification methods: more than one read, sequence ties
  • Brain Atlas: transcriptome of mouse brains
  • Multigenics: 700 genes; hGH therapies

Therapies

  • vaccine
  • hygiene
  • age

~1970 Gene Therapy in Clinical Trials

Are omic technologies a commodity?

  • Some practices will have protocols
  • others will never become a commodity

 

Panelists:

Sam Hanash, M.D., Ph.D. @MDAndersonNews

Director, Red & Charline McCombs Institute for Early Detection & Treatment of Cancer MD Anderson Cancer Center

Heterogeneity among cancer cells: data analysis and interpretation are very difficult; backup technology is needed.

Proteins and Peptides before analysis with spectrometry:

  • PM: immunotherapy approaches need to be combined with other techniques
  • How modification in protein type affects disease
  • Amplification of an aberrant protein; when that happens, cancer develops. Modeling on a chip with a peptide synthesizer

Mark Stevenson @servingscience

Executive Vice President and President, Life Sciences Solutions
Thermo Fisher Scientific

Issues of a Diagnostics Developer:

  • FDA regulation, need to test on several tissues
  • computational environment
  • PCR, qPCR – cost effective
  • BGI – competitiveness

Robert Green, MD @BrighamWomens

Partners HealthCare Personalized Medicine. Disclosure: Illumina and three pharmas.

Innovative Clinical Trial: Alzheimer’s Disease, integration of sequencing with drug development

  • Population based screening with diagnosis
  • Cancer predisposition: Cost, Value, BRCA
  • epigenomics technologies to be integrated
  • Real-time diagnostics
  • Screening makes assumption on Predisposition
  • Public health view: phenotypes in the Framingham studies, where 64% of pathogenic genes were prevalent; a complication of basing screening on sequencing.

Questions from the Podium:

  • Variants analysis
  • Metastases differ from the solid tumor itself; genomics will not answer issues related to tumor variability in particular tissues

 

 

 

 

– See more at: http://personalizedmedicine.partners.org/Education/Personalized-Medicine-Conference/Program.aspx#sthash.qGbGZXXf.dpuf

@HarvardPMConf

#PMConf

@SachsAssociates

 



Track 9 Pharmaceutical R&D Informatics: Collaboration, Data Science and Biologics @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Aviva Lev-Ari, PhD, RN

 

April 30, 2014

 

Big Data and Data Science in R&D and Translational Research

10:50 Chairperson’s Remarks

Ralph Haffner, Local Area Head, Research Informatics, F. Hoffmann-La Roche AG

11:00 Can Data Science Save Pharmaceutical R&D?

Jason M. Johnson, Ph.D., Associate Vice President,

Scientific Informatics & Early Development and Discovery Sciences IT, Merck

Although both premises – that the viability of pharmaceutical R&D is mortally threatened and that modern “data science” is a relevant superhero – are suspect, it is clear that R&D productivity is progressively declining and many areas of R&D suboptimally use data in decision-making. We will discuss some barriers to our overdue information revolution, and our strategy for overcoming them.

11:30 Enabling Data Science in Externalized Pharmaceutical R&D

Sándor Szalma, Ph.D., Head, External Innovation, R&D IT,

Janssen Research & Development, LLC

Pharmaceutical companies have historically been involved in many external partnerships. With recent proliferation of hosted solutions and the availability of cost-effective, massive high-performance computing resources there is an opportunity and a requirement now to enable collaborative data science. We discuss our experience in implementing robust solutions and pre-competitive approaches to further these goals.

12:00 pm Co-Presentation (Sponsored):

Collaborative Waveform Analytics: How New Approaches in Machine Learning and Enterprise Analytics will Extend Expert Knowledge and Improve Safety Assessment

  • Tim Carruthers, CEO, Neural ID
  • Scott Weiss, Director, Product Strategy, IDBS

Neural ID’s Intelligent Waveform Service (IWS) delivers the only enterprise biosignal analysis solution combining machine learning with human expertise. A collaborative platform supporting all phases of research and development, IWS addresses a significant unmet need, delivering scalable analytics and a single interoperable data format to transform productivity in life sciences. By enabling analysis from BioBook (IDBS) to original biosignals, IWS enables users of BioBook to evaluate cardio safety assessment across the R&D lifecycle.

12:15 Building a Life Sciences Data Lake: A Useful Approach to Big Data (Sponsored)

Ben Szekely, Director & Founding Engineer,

Cambridge Semantics

The promise of Big Data is in its ability to give us technology that can cope with overwhelming volume and variety of information that pervades R&D informatics. But the challenges are in practical use of disconnected and poorly described data. We will discuss: Linking Big Data from diverse sources for easy understanding and reuse; Building R&D informatics applications on top of a Life Sciences Data Lake; and Applications of a Data Lake in Pharma.

12:40 Luncheon Presentation I (Sponsored): Chemical Data Visualization in Spotfire

Matthew Stahl, Ph.D., Senior Vice President,

OpenEye Scientific Software

Spotfire deftly facilitates the analysis and interrogation of data sets. Domain specific data, such as chemistry, presents a set of challenges that general data analysis tools have difficulty addressing directly. Fortunately, Spotfire is an extensible platform that can be augmented with domain specific abilities. Spotfire has been augmented to naturally handle cheminformatics and chemical data visualization through the integration of OpenEye toolkits. The OpenEye chemistry extensions for Spotfire will be presented.

1:10 Luncheon Presentation II 

1:50 Chairperson’s Remarks

Yuriy Gankin, Ph.D., Co. Founder and CSO, GGA Software Services

1:55 Enable Translational Science by Integrating Data across the R&D Organization

Christian Gossens, Ph.D., Global Head, pRED Development Informatics Team,

pRED Informatics, F. Hoffmann-La Roche Ltd.

Multi-national pharmaceutical companies face an amazingly complex information management environment. The presentation will show that a systematic system landscaping approach is an effective tool to build a sustainable integrated data environment. Data integration is not mainly about technology, but the use and implementation of it.

2:25 The Role of Collaboration in Enabling Great Science in the Digital Age: The BARD Data Science Case Study

Andrea DeSouza, Director, Informatics & Data Analysis,

Broad Institute

BARD (BioAssay Research Database) is a new, public web portal that uses a standard representation and common language for organizing chemical biology data. In this talk, I describe how data professionals and scientists collaborated to develop BARD, organize the NIH Molecular Libraries Program data, and create a new standard for bioassay data exchange.

May 1, 2014

BIG DATA AND DATA SCIENCE IN R&D AND TRANSLATIONAL RESEARCH

10:30 Chairperson’s Opening Remarks

John Koch, Director, Scientific Information Architecture & Search, Merck

10:35 The Role of a Data Scientist in Drug Discovery and Development

Anastasia (Khoury) Christianson, Ph.D., Head, Translational R&D IT, Bristol-Myers Squibb

A major challenge in drug discovery and development is finding all the relevant data, information, and knowledge to ensure informed, evidence-based decisions in drug projects, including meaningful correlations between preclinical observations and clinical outcomes. This presentation will describe where and how data scientists can support pharma R&D.

11:05 Designing and Building a Data Sciences Capability to Support R&D and Corporate Big Data Needs

Shoibal Datta, Ph.D., Director, Data Sciences, Biogen Idec

To achieve Biogen Idec’s strategic goals, we have built a cross-disciplinary team to focus on key areas of interest and the required capabilities. To provide a reusable set of IT services we have broken down our platform to focus on the Ingestion, Digestion, Extraction and Analysis of data. In this presentation, we will outline how we brought focus and prioritization to our data sciences needs, our data sciences architecture, lessons learned and our future direction.

11:35 Data Experts: Improving Translational Drug-Development Efficiency (Sponsored)

Jamie MacPherson, Ph.D., Consultant, Tessella

We report on a novel approach to translational informatics support: embedding ‘Data Experts’ within drug-project teams. Data experts combine first-line informatics support and business analysis. They help teams exploit data sources that are diverse in type, scale and quality; analyse user requirements; and prototype potential software solutions. We then explore scaling this approach from a specific drug development team to all.

 



PLENARY KEYNOTE PRESENTATIONS: THURSDAY, MAY 1 | 8:00 – 10:00 AM @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

 

Reporter: Aviva Lev-Ari, PhD, RN

 

Keynote Introduction (Sponsored): Fred Lee, M.D., MPH, Director, Healthcare Strategy and Business Development, Oracle Health Sciences

Heather Dewey-Hagborg

Artist, Ph.D. Student, Rensselaer Polytechnic Institute

Heather Dewey-Hagborg is an interdisciplinary artist, programmer and educator who explores art as research and public inquiry. She recreates identity from strands of human hair in an entirely different way. Collecting hairs she finds in random public places – bathrooms, libraries, and subway seats – she uses a battery of newly developing technologies to create physical, life-sized portraits of the owners of these hairs. Her fixation with a single hair leads her to controversial art projects and the study of genetics. Traversing media ranging from algorithms to DNA, her work seeks to question fundamental assumptions underpinning perceptions of human nature, technology and the environment. Examining culture through the lens of information, Heather creates situations and objects embodying concepts, probes for reflection and discussion. Her work has been featured in print, television, radio, and online. Heather has a BA in Information Arts from Bennington College and a Master’s degree from the Interactive Telecommunications Program at Tisch School of the Arts, New York University. She is currently a Ph.D. student in Electronic Arts at Rensselaer Polytechnic Institute.

 

Yaniv Erlich, Ph.D.

Principal Investigator and Whitehead Fellow, Whitehead Institute for Biomedical Research

 

Dr. Yaniv Erlich is Andria and Paul Heafy Family Fellow and Principal Investigator at the Whitehead Institute for Biomedical Research. He received a bachelor’s degree from Tel-Aviv University, Israel and a PhD from the Watson School of Biological Sciences at Cold Spring Harbor Laboratory in 2010. Dr. Erlich’s research interests are in computational human genetics. Dr. Erlich is the recipient of the Burroughs Wellcome Career Award (2013), the Harold M. Weintraub award (2010), and the IEEE/ACM-CS HPC award (2008), and he was selected as one of Genome Technology’s Tomorrow’s PIs in 2010.

 

Isaac Samuel Kohane, M.D., Ph.D.

Henderson Professor of Health Sciences and Technology, Children’s Hospital and Harvard Medical School;

Director, Countway Library of Medicine; Director, i2b2 National Center for Biomedical Computing;

Co-Director, HMS Center for Biomedical Informatics

 

Isaac Kohane, MD, PhD, co-directs the Center for Biomedical Informatics at Harvard Medical School. He applies computational techniques, whole genome analysis, and functional genomics to study human diseases through the developmental lens, and particularly through the use of animal model systems. Kohane has led the use of whole healthcare systems, notably in the i2b2 project, as “living laboratories” to drive discovery research in disease genomics (with a focus on autism) and pharmacovigilance (including providing evidence for the cardiovascular risk of hypoglycemic agents which ultimately contributed to “black box”ing by the FDA) and comparative effectiveness with software and methods adopted in over 84 academic health centers internationally. Dr. Kohane has published over 200 papers in the medical literature and authored a widely used book on Microarrays for an Integrative Genomics. He has been elected to multiple honor societies including the American Society for Clinical Investigation, the American College of Medical Informatics, and the Institute of Medicine. He leads a doctoral program in genomics and bioinformatics within the Division of Medical Science at Harvard University. He is also an occasionally practicing pediatric endocrinologist.

 

#SachsBioinvestchat, #bioinvestchat

#Sachs14thBEF



Track 5 Next-Gen Sequencing Informatics: Advances in Analysis and Interpretation of NGS Data @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

NGS Bioinformatics Marketplace: Emerging Trends and Predictions

10:50 Chairperson’s Remarks

Narges Baniasadi, Ph.D., Founder & CEO, Bina Technologies, Inc.

11:00 Global Next-Generation Sequencing Informatics Markets: Inflated Expectations in an Emerging Market

Greg Caressi, Senior Vice President, Healthcare and Life Sciences, Frost & Sullivan

This presentation evaluates the global next-generation sequencing (NGS) informatics markets from 2012 to 2018. Learn key market drivers and restraints, key highlights for many of the leading NGS informatics services providers and vendors, revenue forecasts, and the important trends and predictions that affect market growth.

Organizational Approaches to NGS Informatics

11:30 High-Performance Databases to Manage and Analyze NGS Data

Joseph Szustakowski, Ph.D., Head, Bioinformatics, Biomarker Development,

Novartis Institutes for Biomedical Research

The size, scale, and complexity of NGS data sets call for new data management and analysis strategies. High-performance database systems combine the advantages of both established and cutting edge technologies. We are using high performance database systems to manage and analyze NGS, clinical, pathway, and phenotypic data with great success. We will describe our approach and concrete success stories that demonstrate its efficiency and effectiveness.

12:00 pm Taming Big Science Data Growth with Converged Infrastructure

Aaron D. Gardner, Senior Scientific Consultant,

BioTeam, Inc.

Many of the largest NGS sites have identified IO bottlenecks as their number one concern in growing their infrastructure to support current and projected data growth rates. In this talk Aaron D. Gardner, Senior Scientific Consultant, BioTeam, Inc. will share real-world strategies and implementation details for building converged storage infrastructure to support the performance, scalability and collaborative requirements of today’s NGS workflows.

12:15 Next Generation Sequencing: Workflow Overview from a High-Performance Computing Point of View

Carlos P. Sosa, Ph.D., Applications Engineer, HPC Lead,

Cray, Inc.

Next Generation Sequencing (NGS) allows for the analysis of genetic material with unprecedented speed and efficiency. NGS increasingly shifts the burden from chemistry done in a laboratory to a string manipulation problem, well suited to High-Performance Computing. We explore the impact of the NGS workflow in the design of IT infrastructures. We also present Cray’s most recent solutions for NGS workflow.

Sosa in REAL TIME

Bioinformatics and Big Data: NGS @ Cray in 2014

I/O movement and data storage: a unified solution by Cray

  • Data access
  • Fast access
  • Storage
  • Managing high-performance computing for the NGS workflow: multiple human genomes (61, then 240 sequentially) processed with high performance in 51 hours; 140 genomes simultaneously

Architecture @Cray for Genomics

  • Sequencers
  • Galaxy
  • Servers for analysis
  • Workstation: Illumina, Galaxy; Cray does the integration of third-party software using a workflow leveraging the network, the fastest in the world, using MPI for scaling and I/O
  • Compute blades, with reserves for I/O nodes; the fastest interconnect in the industry
  • Scale of capacity and capability; interconnect linked into the file system: Lustre
  • Optimization of bottlenecks: capability, capacity, file structure for super-fast I/O

12:40 Luncheon Presentation I

Erasing the Data Analysis Bottleneck with BaseSpace

Jordan Stockton, Ph.D., Marketing Director,

Enterprise Informatics, Illumina, Inc.

Since the inception of next generation sequencing, great attention has been paid to challenges such as storage, alignment, and variant calling. We believe that this narrow focus has distracted many biologists from higher-level scientific goals, and that simplifying this process will expedite the discovery process in the field of applied genomics. In this talk we will show that applications in BaseSpace can empower a new class of researcher to go from sample to answer quickly, and can allow software developers to make their tools accessible to a vast and receptive audience.

1:10 Luncheon Presentation II: Sponsored by

The Empowered Genome Community: First Insights from Shareable Joint Interpretation of Personal Genomes for Research

Nathan Pearson, Ph.D., Principal Genome Scientist,

QIAGEN

Genome sequencing is becoming prevalent; however, understanding each genome requires comparing many genomes. We launched the Empowered Genome Community, consisting of people from programs such as the Personal Genome Project (PGP) and Illumina’s Understand Your Genome. Using Ingenuity Variant Analysis, members have identified proof-of-principle insights on a common complex disease (here, myopia) derived by open collaborative analysis of PGP genomes.

Pearson in REAL TIME

One genome vs. a population of genomes

If one genome:

  1. ancestry
  2. family health
  3. less about drugs and mirrors
  4. health is complex

Challenges:

  1. mining the genome

  2. what all genomes will do for humanity, not what my genome can do for me

  3. cohort analysis, rich in variants

  4. Ingenuity Variant Analysis: a secure environment

  5. comparison of genomes, a sequence, reference matching

  6. phylogenomic, statistical analysis as population geneticists do

Open, collaborative myopia analysis: rare genes leading to myopia; 111 genomes

  • First-pass findings highlight 12 plausibly myopia-relevant genes: variants in cases vs. controls
  • Refined findings and analysis: statistical association, common variants
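The "comparison of genomes, reference matching" step in these notes can be illustrated with a toy sketch. This is not Ingenuity's actual method: real pipelines align raw reads and call variants statistically; the sketch assumes pre-aligned, equal-length sequences:

```python
# Toy illustration of reference matching: compare a sample sequence to a
# reference and report positions where they differ (candidate variants).

def simple_variants(reference: str, sample: str):
    """Return (position, ref_base, sample_base) for each mismatch."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be pre-aligned to equal length")
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]

ref    = "ACGTACGTAC"
sample = "ACGAACGTTC"
print(simple_variants(ref, sample))  # → [(3, 'T', 'A'), (8, 'A', 'T')]
```

A cohort analysis like the myopia study above then asks, for each such variant position, how its frequency differs between cases and controls.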



Track 4 Bioinformatics: Utilizing Massive Quantities of –omic Information across Research Initiatives @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

Bioinformatics for Big Data

10:50 Chairperson’s Remarks

Les Mara, Founder, Databiology, Ltd.

 

11:00 Data Management Best Practices for Genomics Service Providers

Vas Vasiliadis, Director, Products, Computation Institute,

University of Chicago and Argonne National Laboratory

Genomics research teams in academia and industry are increasingly limited at all stages of their work by large and unwieldy datasets, poor integration between the computing facilities they use for analysis, and difficulty in sharing analysis results with their customers and collaborators. We will discuss issues with current approaches and describe emerging best practices for managing genomics data through its lifecycle.

Vas in REAL TIME

The Computation Institute @ University of Chicago provides solutions to non-profit entities, at scale and available in an affordable way. “I have nothing to say on Big Data.” A NAS survey puts at 57.7% the average time researchers spend on research, and it will get worse. Research data management has morphed into better ways: industrially robust, with commercial start-ups as the role model. All functions of an enterprise are now available as applications for small business.

  • Highly scalable, invisible
  • High performance
  • In genomics, the tools: shipping hard drives; new ways to develop research infrastructure
  • Dropbox does not scale; Amazon Web Services is the cloud
  • Security in sharing across campuses: with InCommon, cross-domain software access constraints are mitigated
  • Identity provision for multiple identities: an identity hub with one-time association, and Group Hubs (e.g., ci connect at UChicago, giving access to systems at other campuses), connecting science to cycles of data; the network is not utilized efficiently because tools such as FTP were not designed for that, and firewalls are designed for data, not Big Data
  • Science DMZ: carve out real estate for science data transfer, and monitor the transfer
  • Reproducibility, provenance, public mandates
  • Data publication services: VIVO, figshare, Fedora, DuraCloud, DOI; identification, storage, preservation, curation workflow
  • Search for discovery: faceted search; browse distributed, access locally; automation required; outsourcing; delivery through SaaS
  • We are all on the cloud

11:30 NGS Analysis to Drug Discovery: Impact of High-Performance Computing in Life Sciences

Bhanu Rekepalli, Ph.D., Assistant Professor and Research Scientist, Joint Institute for Computational Sciences, The University of Tennessee, Oak Ridge National Laboratory

We are working with small-cluster-based applications most widely used by the scientific community on the world’s premier supercomputers. We incorporated these parallel applications into science gateways with user-friendly, web-based portals. Learn how the research at UTK-ORNL will help to bridge the gap between the rate of big data generation in life sciences and the speed and ease at which biologists and pharmacists can study this data.

Bhanu in REAL TIME

Cost per genome goes down: from $100,000 in 2011 to $1,000

  • Solutions:
  • architecture
  • parallel informatics
  • software modules
  • web-based gateway
  • XSEDE.org, sponsored by NSF, for all NSF-sponsored research
  • LCF applications: astrophysics, bioinformatics, CFD; highly scalable wrappers for analysis; BLAST scaling results in biology
  • Next-generation supercomputers: Xeon/Phi

NICS Informatics Science Gateway: PoPLAR (Portal for Parallel Scaling Life Sciences Applications & Research)

  • automated workflows
  • Smithsonian Institution: generate genomes for all life entities in the universe; BGI
  • Titan genomic data analysis: the Everglades ecosystem, sequenced
  • University of South Carolina: great computing infrastructure
  • Supercomputer: KRAKEN
  • modeling 5-10 proteins on supercomputers for novel drug discovery
  • a vascular tree system for heart transplant: visualization and modeling

12:00 pm The Future of Biobank Informatics

Bruce Pharr, Vice President, Product Marketing, Laboratory Systems, Remedy Informatics

As biobanks become increasingly essential to basic, translational, and clinical research for genetic studies and personalized medicine, biobank informatics must address areas from biospecimen tracking, privacy protection, and quality management to pre-analytical and clinical collection/identification of study data elements. This presentation will examine specific requirements for third-generation biobanks and how biobank informatics will meet those requirements.

Bruce Pharr in REAL TIME

Flexible Standardization

Biobanks' use of informatics dates to the 1980s with biospecimens. A 1999 RAND study estimated 307M biospecimens in US biobanks, growing at 20M per year.

2nd-Gen Biobanks

2005, 3rd-Gen Biobanks: 15,000 studies on cancer and biospecimens; consent of donors is a must.

Biobank workflow: patient, procedure, specimen acquisition, storage, processing, distribution, analysis

Building Registries: the Mosaic Platform

  • Specimen Track BMS
  • Mosaic Ontology: application and engine

1. Standardized specimen requirements

Registries set up the storage: administrator dashboard vs. user dashboard

2. Interoperability

3. Quality analysis

4. Informed consent
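The biobank workflow listed above (patient, procedure, acquisition, storage, processing, distribution, analysis) is essentially a per-specimen lifecycle with a consent gate. A minimal sketch — the stage names and fields are hypothetical illustrations, not Remedy Informatics' actual schema:

```python
from dataclasses import dataclass, field

# Minimal specimen-lifecycle tracker mirroring the stages in the notes
# above. All names here are illustrative, not any vendor's data model.

STAGES = ["acquired", "stored", "processed", "distributed", "analyzed"]

@dataclass
class Specimen:
    specimen_id: str
    patient_id: str
    consent_on_file: bool          # donor consent is a must (see notes above)
    history: list = field(default_factory=list)

    def advance(self, stage: str) -> None:
        """Record a lifecycle stage, enforcing the consent requirement."""
        if not self.consent_on_file:
            raise PermissionError("cannot process specimen without donor consent")
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.history.append(stage)

s = Specimen("SP-001", "PT-042", consent_on_file=True)
s.advance("acquired")
s.advance("stored")
print(s.history)  # → ['acquired', 'stored']
```

Tracking each stage transition per specimen is what makes the interoperability and quality-analysis requirements above auditable.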

 

12:15 Learn How YarcData’s Graph Analytics Appliance Makes It Easy to Use Big Data in Life Sciences

Ted Slater, Senior Solutions Architect, Life Sciences, YarcData, a division of Cray

YarcData, a division of Cray, offers high performance solutions for big data graph analytics at scale, finally giving researchers the power to leverage all the data they need to stratify patients, discover new drug targets, accelerate NGS analysis, predict biomarkers, and better understand diseases and their treatments.

12:40 Luncheon Presentation I

The Role of Portals for Managing Biostatistics Projects at a CRO

Les Jordan, Director, Life Sciences IT Consulting, Quintiles

This session will focus on how portals and other tools are used within Quintiles and at other pharmas to manage projects within the biostatistics department.

1:10 Luncheon Presentation II (Sponsorship Opportunity Available) or Lunch on Your Own

1:50 Chairperson’s Remarks

Michael Liebman, Ph.D., Managing Director, IPQ Analytics, LLC

Sabrina Molinaro, Ph.D., Head of Epidemiology, Institute of Clinical Physiology, National Research Council (CNR), Italy

1:55 Integration of Multi-Omic Data Using Linked Data Technologies

Aleksandar Milosavljevic, Ph.D., Professor, Human Genetics; Co-Director, Program in Structural & Computational Biology and Molecular Biophysics; Co-Director, Computational and Integrative Biomedical Research Center, Baylor College of Medicine

By virtue of programmatic interoperability (uniform REST APIs), Genboree servers enable virtual integration of multi-omic data that is distributed across multiple physical locations. Linked Data technologies of the Semantic Web provide an additional “logical” layer of integration by enabling distributed queries across the distributed data and by bringing multi-omic data into the context of pathways and other background knowledge required for data interpretation.
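The "logical" integration idea can be sketched in miniature. This is not the Genboree API or an actual Semantic Web stack: the triples, predicate names, and join below are a hypothetical pure-Python illustration of how RDF-style linked data from separate sources can be unioned and queried together.

```python
# Source 1 (e.g., a genomic data server): sample → variant triples
genomic = {("sample1", "hasVariant", "BRCA1:c.68_69delAG")}

# Source 2 (e.g., a pathway knowledge base): variant → pathway triples
pathway = {("BRCA1:c.68_69delAG", "inPathway", "DNA damage repair")}

# "Virtual" integration: the union of triple sets from both sources
merged = genomic | pathway

def query_variant_pathways(triples):
    """Distributed-style join: which samples carry variants in which pathways?"""
    variants = {(s, o) for s, p, o in triples if p == "hasVariant"}
    pathways = {(s, o) for s, p, o in triples if p == "inPathway"}
    return {(sample, pw) for sample, v in variants
            for v2, pw in pathways if v == v2}

print(query_variant_pathways(merged))  # {('sample1', 'DNA damage repair')}
```

In a real deployment this join would be a federated SPARQL query across REST endpoints rather than an in-memory set union, but the integration logic is the same.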

2:25 Building Open Source Semantic Web-Based Biomedical Content Repositories to Facilitate and Speed Up Discovery and Research

Bhanu Bahl, Ph.D., Director, Clinical and Translational Science Centre, Harvard Medical School

Douglas MacFadden, CIO, Harvard Catalyst at Harvard Medical School

The eagle-i open source network at Harvard provides a state-of-the-art informatics platform for discovering biomedical research resources.

Read Full Post »


AWARDS: Best of Show Awards, Best Practices Awards and 2014 Benjamin Franklin Award  @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

Best of Show Awards

The Best of Show Awards offer exhibitors an opportunity to distinguish their products from the competition. Judged by a team of leading industry experts and Bio-IT World editors, this award identifies exceptional innovation in technologies used by life science professionals today. Judging and the announcement of winners are conducted live in the Exhibit Hall. Winners will be announced on Wednesday, April 30 at 5:30pm. The deadline for product submissions is February 21, 2014. To learn more about this program, contact Ryan Kirrane at 781-972-1354 or email rkirrane@healthtech.com.

2014 WINNER(s) are announced in Real Time

2014 – Five categories

1. Clinical and Health IT: AstraZeneca with Tessella – Real Time Analytics for Clinical Trials (RTACT) – an engine for innovation

2. Research and Drug Discovery: U-BIOPRED with the tranSMART Foundation – open source – Imperial College – biomarkers for asthma; hospitals, 340 universities, 34 pharmas

3. Informatics: Pistoia Alliance – HELM – Pfizer released data for the HELM project

4. Knowledge Management Finalists: GENENTECH – Genentech Cell Line Resource

5. IT Infrastructure/HPC Winner:

Baylor College of Medicine with DNAnexus –

 

2014 Judges’ Prize – UK for Patient Data Integration

2014 Editors’ Choice Award: Mount Sinai – Rethinking Type 2 Diabetes through Data Informatics

2014 Benjamin Franklin Award

The Benjamin Franklin Award for Open Access in the Life Sciences is a humanitarian/bioethics award presented annually by the Bioinformatics Organization to an individual who has, in his or her practice, promoted free and open access to the materials and methods used in the life sciences. Nominations are now being accepted!

The winner will be announced in the Amphitheater at 9:00am on Wednesday, April 30 during the Plenary Keynote and Awards Program, WEDNESDAY, APRIL 30 | 8:00 – 9:45 AM.

Full details including previous laureates and entry forms are available at www.bioinformatics.org/franklin.

2014 WINNER is:

Helen Berman, Ph.D.

Board of Governors Professor of Chemistry and Chemical Biology, Rutgers University;

Founding Member, Worldwide Protein Data Bank (wwPDB); Director, Research Collaboratory for Structural Bioinformatics PDB (RCSB PDB)

Helen Berman: AWARD ACCEPTANCE SPEECH

Proteins: Synthesis, enzymes, Health & Disease

PDB depositors: 850 new entries/month; 468 million downloads & views; PDB access

History of sharing the databank on protein

J.D. Bernal – 1944 – crystallized pepsin with Dorothy Hodgkin (Oxford); many distinguished women in the field

1960 – Early structure of proteins: Myoglobin, hemoglobin

1970

1980

1990

2000  Ribosomes

2010s: macromolecule machines

  • Science of protein structure
  • Technology: electron microscopy; structural genomics – data-driven science; hybrid methods at present for 3D structure determination

COMMUNITY ATTITUDE – 1971: PDB archive established at Cold Spring Harbor; Walter Hamilton's petition for an open database of protein structures; Brookhaven Labs, to be shared with the UK; Nature New Biology: seven structures in the DB

1982 – AIDS epidemic – NIH requested that data be open; the community set its own rules on data organization. Fred Richards (Yale) argued on moral grounds that the DB be open.

1993 – sharing data linked to publication became mandatory; no journal would accept an article if the data were not in the PDB.

1996 – data dictionary assembled

2008 – depositing experimental data in the PDB became mandatory; validation

2011 – PDBx definitions for X-ray, NMR, 3DEM, and small-angle scattering

Collaboration to enable: self-storage, structure-based drug design

The science is what is important to put in the archive; IT evolved, bringing changes to the data.

Global organizational collaboration

Communities working together

J.D. Bernal – The Social Function of Science, 1939

Elinor Ostrom, 2009 Nobel Prize in Economics – community collaboration by rules

Best Practices Awards

Add value to your Conference & Expo attendance, sponsorship or exhibit package, and further heighten your visibility with the creative positioning offered as a Best Practices participant. Winners will be selected by a peer review expert panel in early 2014.

Bio-IT World will present the Awards in the Amphitheater at 9:30am on Wednesday, April 30 during the Plenary Keynote and Awards Program, WEDNESDAY, APRIL 30 | 8:00 – 9:45 AM

Early bird deadline (no fee) for entry is December 16, 2013 and final deadline (fee) for entry is February 10, 2014. Full details including previous winners and entry forms are available at Bio-ITWorldExpo.com.

2014 WINNER(s) are:

 

Read Full Post »

Older Posts »