Better bioinformatics

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Big data in biomedicine: 4 big questions

Eric Bender. Nature 527, S19 (Nov 2015). http://dx.doi.org/10.1038/527S19a

http://www.nature.com/nature/journal/v527/n7576_supp/full/527S19a.html

Gathering and understanding the deluge of biomedical research and health data poses huge challenges. But this work is rapidly changing the face of medicine.

 


 

1. How can long-term access to biomedical data that are vital for research be improved?

Why it matters
Data storage may be getting cheaper, particularly in cloud computing, but the total costs of maintaining biomedical data are too high and climbing rapidly. Current models for handling these tasks are only stopgaps.

Next steps
Researchers, funders and others need to analyse data usage and look at alternative models, such as ‘data commons’, for providing access to curated data in the long term. Funders also need to incorporate resources for doing this.

Quote
“Our mission is to use data science to foster an open digital ecosystem that will accelerate efficient, cost-effective biomedical research to enhance health, lengthen life and reduce illness and disability.” Philip Bourne, US National Institutes of Health.

 

2. How can the barriers to using clinical trial results and patients’ health records for research be lowered?

Why it matters
‘De-identified’ data from clinical trials and patients’ medical records offer opportunities for research, but the legal and technical obstacles are immense. Clinical study data are rarely shared, and medical records are walled off by privacy and security regulations and by legal concerns.

Next steps
Patient advocates are lobbying for access to their own health data, including genomic information. The European Medicines Agency is publishing clinical reports submitted as part of drug applications. And initiatives such as CancerLinQ are gathering de-identified patient data.

Quote
“There’s a lot of genetic information that no one understands yet, so is it okay or safe or right to put that in the hands of a patient? The flip side is: it’s my information — if I want it, I should get it.” Megan O’Boyle, Phelan-McDermid Syndrome Foundation.

 

3. How can knowledge from big data be brought into point-of-care health-care delivery?

Why it matters
Delivering precision medicine will immensely broaden the scope of electronic health records. This massive shift in health care will be complicated by the introduction of new therapies, requiring ongoing education for clinicians who need detailed information to make clinical decisions.

Next steps
Health systems are trying to bring up-to-date treatments to clinics and build ‘health-care learning systems’ that integrate with electronic health records. For instance, the CancerLinQ project provides recommendations for patients with cancer whose treatment is hard to optimize.

Quote
“Developing a standard interface for innovators to access the information in electronic health records will connect the point of care to big data and the full power of the web, spawning an ‘app store’ for health.” Kenneth Mandl, Harvard Medical School.

 

4. Can academia create better career tracks for bioinformaticians?

Why it matters
The lack of attractive career paths in bioinformatics has led to a shortage of scientists that have both strong statistical skills and biological understanding. The loss of data scientists to other fields is slowing the pace of medical advances.

Next steps
Research institutions will take steps, including setting up formal career tracks, to reward bioinformaticians who take on multidisciplinary collaborations. Funders will find ways to better evaluate contributions from bioinformaticians.

Quote
“Perhaps the most promising product of big data, that labs will be able to explore countless and unimagined hypotheses, will be stymied if we lack the bioinformaticians that can make this happen.” Jeffrey Chang, University of Texas.

 

Eric Bender is a freelance science writer based in Newton, Massachusetts.




Huge Data Network Bites into Cancer Genomics

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Closer to a Cure for Gastrointestinal Cancer

Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source
http://www.scientificcomputing.com/news/2015/11/closer-cure-gastrointestinal-cancer

In order to streamline workflows and keep pace with data-intensive discovery demands, CCS integrated its HPC environment with data capture and analytics capabilities, allowing data to move transparently between research steps, and driving discoveries such as a link between certain viruses and gastrointestinal cancers.

 

SANTA CLARA, CA — At the University of Miami’s Center for Computational Science (CCS), more than 2,000 internal researchers and a dozen expert collaborators across academic and industry sectors worldwide are working together in workflow management, data management, data mining, decision support, visualization and cloud computing. CCS maintains one of the largest centralized academic cyberinfrastructures in the country, which fuels vital and critical discoveries in Alzheimer’s, Parkinson’s, gastrointestinal cancer, paralysis and climate modeling, as well as marine and atmospheric science research.

In order to streamline workflows and keep pace with data-intensive discovery demands, CCS integrated its high performance computing (HPC) environment with data capture and analytics capabilities, allowing data to move transparently between research steps. To speed scientific discoveries and boost collaboration with researchers around the world, the center deployed high-performance DataDirect Networks (DDN) GS12K scale-out file storage. CCS now relies on GS12K storage to handle bandwidth-driven workloads while serving very high IOPS demand resulting from intense user interaction, which simplifies data capture and analysis. As a result, the center is able to capture, store and distribute massive amounts of data generated from multiple scientific models running different simulations on 15 Illumina HiSeq sequencers simultaneously on DDN storage. Moreover, number-crunching time for genome mapping and SNP calling has been reduced from 72 to 17 hours.

“DDN enabled us to analyze thousands of samples for the Cancer Genome Atlas, which amounts to nearly a petabyte of data,” explained Dr. Nicholas Tsinoremas, director of the Center for Computational Sciences at the University of Miami. “Having a robust storage platform like DDN is essential to driving discoveries, such as our recent study that revealed a link between certain viruses and gastrointestinal cancers. Previously, we couldn’t have done that level of computation.”

In addition to providing significant storage processing power to meet both high I/O and interactive processing requirements, CCS needed a flexible file system that could support large parallel and short serial jobs. The center also needed to address “data in flight” challenges that result from major data surges during analysis, and which often cause a 10x spike in storage. The system’s performance for genomics assembly, alignment and mapping is enabling CCS to support all its application needs, including the use of BWA and Bowtie for initial mapping, as well as SamTools and GATK for variant analysis and SNP calling.
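To make the workflow concrete, here is a rough Python sketch of a mapping and SNP-calling pipeline of the kind described above, chaining BWA, SAMtools and GATK. File names, sample names and thread counts are placeholders rather than CCS’s actual configuration, and the reference is assumed to be pre-indexed.

    # Minimal sketch of a mapping + variant-calling pipeline (BWA -> SAMtools -> GATK).
    # Paths and thread counts are illustrative placeholders.
    import subprocess

    REF = "reference.fa"   # pre-indexed with bwa index, samtools faidx and gatk CreateSequenceDictionary
    R1, R2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"
    BAM, VCF = "sample.sorted.bam", "sample.vcf.gz"

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. Map the reads and sort the alignments.
    run(f"bwa mem -t 16 {REF} {R1} {R2} | samtools sort -@ 8 -o {BAM} -")
    run(f"samtools index {BAM}")

    # 2. Call variants (SNPs and indels) with GATK.
    run(f"gatk HaplotypeCaller -R {REF} -I {BAM} -O {VCF}")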

“Our arrangement is to share data or make it available to anyone asking, anywhere in the world,” added Tsinoremas. “Now, we have the storage versatility to attract researchers from both within and outside the HPC community … we’re well-positioned to generate, analyze and integrate all types of research data to drive major scientific discoveries and breakthroughs.”

About DDN

DataDirect Networks is a big data storage supplier to data-intensive, global organizations. For more than 15 years, the company has designed, developed, deployed and optimized systems, software and solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage DDN technology and the technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers.

 

“Where DDN really stood out is in the ability to adapt to whatever we would need. We have both IOPS-centric storage and the deep, slower I/O pool at full bandwidth. No one else could do that.”

Joel P. Zysman

Director of High Performance Computing

Center for Computational Science at the University of Miami

The University of Miami maintains one of the largest centralized, academic, cyber infrastructures in the US, which is integral to addressing and solving major scientific challenges. At its Center for Computational Science (CCS), more than 2,000 researchers, faculty, staff and students across multiple disciplines collaborate on diverse and interdisciplinary projects requiring HPC resources.

With 50% of the center’s users coming from the University of Miami’s Miller School of Medicine, with ongoing projects at the Hussman Institute for Human Genomics, the explosion of next-generation sequencing has had a major impact on compute and storage demands. At CCS, the heavy I/O required to create four billion reads from one genome in a couple of days only intensifies when the data from the reads needs to be managed and analyzed.

Aside from providing sufficient storage power to meet both high I/O and interactive processing demands, CCS needed a powerful file system that was flexible enough to handle very large parallel jobs as well as smaller, shorter serial jobs. CCS also needed to address as much as 10X spikes in storage, so it was critical to scale and support petabytes of machine-generated data without adding a layer of complexity or creating inefficiencies.

Read their success story to learn how high-performance DDN® Storage I/O has helped the University of Miami:

  • Establish links between certain viruses and gastrointestinal cancers through computation that was not possible before
  • Reduce genomics compute and analysis time from 72 to 17 hours

CHALLENGES

  • Diverse, interdisciplinary research projects required massive compute and storage power as well as integrated data lifecycle movement and management
  • Highly demanding I/O and heavy interactivity requirements from next-gen sequencing intensified data generation, analysis and management
  • The need to handle both large parallel jobs and smaller, shorter serial jobs
  • Data surges during analysis created “data-in-flight” challenges

SOLUTION

An end-to-end, high performance DDN GRIDScaler® solution featuring a GS12K™ scale-out appliance with an embedded IBM® GPFS™ parallel file system

TECHNICAL BENEFITS

  • Centralized storage with an embedded file system makes it easy to add storage where needed—in the high-performance, high-transaction or slower storage pools—and then manage it all through a single pane of glass
  • DDN’s transparent data movement enables using one platform for data capture, download, analysis and retention
  • The ability to maintain an active archive of storage lets the center accommodate different types of analytics with varied I/O needs



Information Management in Health Research

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Researchers find Potential Security Hole in Genomic Data-sharing Network

http://www.scientificcomputing.com/news/2015/10/researchers-find-potential-security-hole-genomic-data-sharing-network

Jennie Dusheck, Stanford University

Sharing genomic information among researchers is critical to the advance of biomedical research. Yet genomic data contains identifiable information and, in the wrong hands, poses a risk to individual privacy. If someone had access to your genome sequence — either directly from your saliva or other tissues, or from a popular genomic information service — they could check to see if you appear in a database of people with certain medical conditions, such as heart disease, lung cancer or autism.

Work by researchers at the Stanford University School of Medicine makes that genomic data more secure. Suyash Shringarpure, Ph.D., a postdoctoral scholar in genetics, and Carlos Bustamante, Ph.D., a professor of genetics, have demonstrated a technique for hacking a network of global genomic databases and how to prevent it. They are working with investigators from the Global Alliance for Genomics and Health on implementing preventive measures.

The work, published October 29, 2015, in The American Journal of Human Genetics, also bears importantly on the larger question of how to analyze mixtures of genomes, such as those from different people at a crime scene.

A network of genomic data sets on servers, or beacons, organized by the National Institutes of Health-funded Global Alliance for Genomics and Health, allows researchers to look for a particular genetic variant in a multitude of genomic databases. The networking of genomic databases is part of a larger movement among researchers to share data. Identifying a gene of interest in a beacon tells researchers where to apply for more complete access to the data. A central assumption, though, is that the identities of those who donate their genomic data are sufficiently concealed.

“The beacon system is an elegant solution that allows investigators to ‘ping’ collections of genomes,” said Bustamante. Investigators on the outside of a data set can ping and ask which data set has a particular mutation. “This allows people studying the same rare disease to find one another to collaborate.”

Beacons’ vulnerability

But many genomic data sets are specific to a condition or disease. A nefarious user who can find the match for an individual’s genome in a heart disease beacon, for example, can infer that the individual — or a relative of that person — likely has heart disease. By “pinging” enough beacons in the network of beacons, the hacker could construct a limited profile of the individual. “Working with the Global Alliance for Genomics and Health, we’ve been able to demonstrate that vulnerability and, more importantly, how to put policy changes in place to minimize the risk,” said Bustamante.

To protect donors’ identities, the organizers of the network, which is called the Beacon Project, have taken steps, such as encouraging beacon operators to “de-identify” individual genomes, so that names or other identifying information are not connected to the genome.

Despite such efforts, Shringarpure and Bustamante calculated that someone in possession of an individual’s genome could locate that individual within the beacon network. For example, in a beacon containing the genomes of 1,000 individuals, the Stanford pair’s approach could identify that individual or their relatives with just 5,000 queries.
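The intuition behind that estimate can be seen in a simplified likelihood-ratio test: each ‘yes’ or ‘no’ beacon response is slightly more probable if the queried person is in the database than if they are not, and those small differences accumulate over thousands of queries. The Python sketch below is a toy version of the idea, ignoring genotyping error, relatedness and linkage; it is not the authors’ exact statistic.

    # Toy likelihood-ratio test for beacon membership (simplified illustration).
    # The test is most informative when the queries target rare alleles carried by the target genome.
    import math

    def membership_llr(responses, allele_freqs, n_individuals):
        """responses[i] is True if the beacon answered 'yes' to the i-th allele
        carried by the target genome; allele_freqs[i] is its population frequency."""
        llr = 0.0
        for yes, f in zip(responses, allele_freqs):
            # P(beacon contains the allele | target NOT in beacon):
            # at least one of the 2N other haplotypes carries it.
            p_null = min(1.0 - (1.0 - f) ** (2 * n_individuals), 1.0 - 1e-9)
            # P(beacon contains the allele | target IS in beacon): ~1,
            # because the target genome itself carries the queried allele.
            p_alt = 1.0 - 1e-6
            if yes:
                llr += math.log(p_alt / p_null)
            else:
                llr += math.log((1.0 - p_alt) / (1.0 - p_null))
        return llr

    # Declare membership when the accumulated log-likelihood ratio crosses a
    # threshold chosen for a target false-positive rate.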

Genomic information isn’t completely covered by the federal law that protects health information, and the consequences for a person whose information is disclosed can be significant. For example, although the national Genetic Information Nondiscrimination Act prevents health insurers from denying someone coverage or raising someone’s premiums because they have a particular genetic variant, the act does not apply to other forms of insurance, such as long-term care, disability or life insurance.

Approaches for better security

The Beacon Project has the potential to be enormously valuable to future genetic research. So, plugging this security hole is as important to Shringarpure and Bustamante as to the Global Alliance for Genomics and Health. In their paper, the Stanford researchers suggest various approaches for making the information more secure, including banning anonymous researchers from querying the beacons; merging data sets to make it harder to identify the exact source of the data; requiring that users be approved; and limiting access in a beacon to a smaller region of the genome.


Peter Goodhand, executive director of the Global Alliance for Genomics and Health, said, “We welcome the paper and look forward to ongoing interactions with the authors and others to ensure beacons provide maximum value while respecting privacy.”

Goodhand also said that the organization’s mitigation efforts, which adhere to the best practices outlined in its privacy and security policy, include aggregating data among multiple beacons to increase database size and obscure the database of origin; creating an information-budgeting system to track the rate at which information is revealed and to restrict access when the information disclosed exceeds a certain threshold; and introducing multiple tiers of secured access, including requiring users to be authorized for data access and to agree not to attempt specific risky scenarios.
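As a hedged sketch of what such an ‘information budget’ might look like in code, the class below charges each answered query a cost (here, crudely, the rarity of the queried allele in bits) and cuts a user off once cumulative disclosure exceeds a threshold. The class and cost function are hypothetical illustrations, not the Beacon Project’s implementation.

    # Hypothetical information-budget tracker for beacon queries (illustration only).
    import math
    from collections import defaultdict

    class QueryBudget:
        def __init__(self, max_bits=50.0):
            self.max_bits = max_bits             # per-user disclosure budget
            self.spent = defaultdict(float)      # bits revealed so far, per user

        def charge(self, user_id, allele_freq):
            """Charge a query about an allele with the given population frequency.
            Rarer alleles are more identifying, so they cost more bits."""
            cost = -math.log2(max(allele_freq, 1e-6))
            if self.spent[user_id] + cost > self.max_bits:
                return False                     # refuse to answer: budget exhausted
            self.spent[user_id] += cost
            return True

    budget = QueryBudget(max_bits=50.0)
    if budget.charge("researcher-42", allele_freq=0.01):
        pass  # safe to answer this beacon query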

Shringarpure and Bustamante are also interested in applying the technique described in their study to the area of DNA mixture interpretation, in which investigators seek to identify one DNA sequence in a mixture of many similar ones. The DNA mixing problem is relevant to forensics, studies of the microbiome and ecological studies. For example, Bustamante said, if a weapon used in a crime had DNA from several people on it, DNA mixture interpretation can help investigators pick out the DNA of a particular person, such as the suspect or the victim, revealing whether they touched the weapon. In fact, investigators could potentially use the same type of analysis used on the beacon network to look for individuals who may have touched a railing in a subway station or other public space.

This research was partially supported by the National Institutes of Health (grant U01HG007436). Stanford’s Department of Genetics also supported the work. Bustamante is on the scientific advisory boards for Ancestry.com, Personalis, Liberty Biosecurity and Etalon DX. He is also a founder and chair of the advisory board for IdentifyGenomics. None of these entities played a role in the design, interpretation or presentation of the study. Stanford University’s Office of Technology Licensing has evaluated the work presented in the paper for potential intellectual property and commercial rights.

 

Computational Models to Sort out the Genetic Chaos of Cancer Cells

http://www.scientificcomputing.com/news/2015/10/computational-models-sort-out-genetic-chaos-cancer-cells

University of Luxembourg

Scientists have developed a method for analyzing the genome of cancer cells more precisely than ever before. The team led by Prof. Antonio del Sol, head of the research group Computational Biology of the Luxembourg Centre for Systems Biomedicine of the University of Luxembourg, is employing bioinformatics: Using novel computing processes, the researchers have created models of the genome of cancer cells based on known changes to the genome. These models are useful for determining the structure of DNA in tumors.

“If we know this structure, we can study how cancer develops and spreads,” says del Sol. “This gives us clues about possible starting points for developing new anticancer drugs and better individual therapy for cancer patients.”

The LCSB researchers recently published their results in the scientific journal Nucleic Acids Research.

“The cause of cancer is changes in the DNA,” says Sarah Killcoyne, who is doing her PhD at the University of Luxembourg and whose doctoral thesis is a core component of the research project. “Mutations arise, the chromosomes can break or reassemble themselves in the wrong order, or parts of the DNA can be lost,” Killcoyne says, describing the cellular catastrophe: “In the worst case, the genome becomes completely chaotic.” The cells affected become incapable of performing their function in the body and — perhaps even worse — multiply perpetually. The result is cancer.

If we are to develop new anticancer drugs and provide personalized therapy, it is important to know the structure of DNA in cancer cells. Oncologists and scientists have isolated chromosomes from tumors and analyzed them under the microscope for decades. They found that irregularities in the chromosome structure sometimes indicated the type of cancer and the corresponding therapy.

“Sequencing technologies have made the identification of many mutations more accurate, significantly improving our understanding of cancer,” Sarah Killcoyne says. “But it has been far more difficult to use these technologies for understanding the chaotic structural changes in the genome of cancer cells.”

This is because sequencing machines only deliver data about very short DNA fragments. In order to reconstruct the genome, scientists accordingly need a reference sequence — a kind of template against which to piece together the puzzle of the sequenced genome.

Killcoyne continues: “The reference sequence gives us clues to where the fragments overlap and in what order they belong together.” Since the gene sequence in cancer cells is in complete disarray, logically, there is no single reference sequence. “We developed multiple references instead,” says Sarah Killcoyne. “We applied statistical methods for our new bioinformatics approach, to generate models, or references, of chaotic genomes and to determine if they actually show us the structural changes in a tumor genome.”
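As a toy illustration of the ‘multiple references’ idea (and only the idea; the LCSB pipeline uses proper read alignment and statistics), one can treat each candidate model as a hypothetical rearrangement of the reference and ask which model explains the short sequenced fragments best.

    # Toy scoring of candidate reference models against short sequenced fragments.

    def score_model(model_sequence, fragments):
        """Fraction of fragments found verbatim in a candidate reference model."""
        hits = sum(1 for frag in fragments if frag in model_sequence)
        return hits / len(fragments)

    def best_model(candidate_models, fragments):
        """Pick the candidate rearrangement that explains the fragments best."""
        return max(candidate_models, key=lambda m: score_model(m, fragments))

    normal     = "AAACCCGGGTTT"
    rearranged = "AAACCCAAACCC"                  # hypothetical structural variant
    reads      = ["ACCCA", "CCAAA", "AAACC"]
    print(best_model([normal, rearranged], reads))   # -> "AAACCCAAACCC"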

These methods are of double importance to group leader del Sol, as he states: “Firstly, Sarah Killcoyne’s work is important for cancer research. After all, such models can be used to investigate the causes of genetic and molecular processes in cancer research and to develop new therapeutic approaches. Secondly, we are interested in bioinformatics model development for reapplying it to other diseases that have complex genetic causes — such as neurodegenerative diseases like Parkinson’s. Here, too we want to better understand the relationships between genetic mutations and the resulting metabolic processes. After all, new approaches for diagnosing and treating neurodegenerative diseases are an important aim at the Luxembourg Centre for Systems Biomedicine.”

Citation: Sarah Killcoyne et al., “Identification of large-scale genomic variation in cancer genomes using reference models,” Nucleic Acids Research (2015). DOI: 10.1093/nar/gkv828

 

 

Mathematical ‘Ginkgo trees’ reveal mutations in single cells that characterize diseases

DOI: 10.1093/nar/gkv828

Seemingly similar cells often have significantly different genomes. This is often true of cancer cells, for example, which may differ one from another even within a small tumor sample, as genetic mutations within the cells spread in staccato-like bursts. Detailed knowledge of these mutations, called copy number variations, in individual cells can point to specific treatment regimens.

The problem is that current techniques for acquiring this knowledge are difficult and produce unreliable results. Today, scientists at Cold Spring Harbor Laboratory (CSHL) publish a new interactive analysis program called Ginkgo that reduces the uncertainty of single-cell analysis and provides a simple way to visualize patterns in copy number mutations across populations of cells.

The open-source software, which is freely available online, will improve scientists’ ability to study this important type of genetic anomaly and could help clinicians better target medications based on cells’ specific mutation profiles. The software is described online today in Nature Methods.

Mutations come in many forms. For example, in the most common type of mutation, variations may exist among individual people—or cells—at a single position in a DNA sequence. Another common mutation is a copy number variation (CNV), in which large chunks of DNA are either deleted from or added to the genome. When there are too many or too few copies of a given gene or genes, due to CNVs, disease can occur. Such mutations have been linked not only with cancer but a host of other illnesses, including autism and schizophrenia.

Researchers can learn a lot by analyzing CNVs in bulk samples—from a tumor biopsy, for example—but they can learn more by investigating CNVs in individual cells. “You may think that every cell in a tumor would be the same, but that’s actually not the case,” says CSHL Associate Professor Michael Schatz.

“We’re realizing that there can be a lot of changes inside even a single tumor,” says Schatz. “If you’re going to treat cancer, you need to diagnose exactly what subclass of cancer you have.” Simultaneously employing different drugs to target different cancer subclasses could prevent relapse, scientists have proposed.

One powerful single-cell analytic technique for exploring CNV is whole genome sequencing. The challenge is that, before sequencing can be done, the cell’s DNA has to be amplified many times over. This process is rife with errors, with some arbitrary chunks of DNA being amplified more than others. In addition, because many labs use their own software to examine CNVs, there is little consistency in how researchers analyze their results.

To address these two challenges, Schatz and his colleagues created Ginkgo. The interactive, web-based program automatically processes sequence data, maps the sequences to a reference genome, and creates CNV profiles for every cell that can then be viewed with a user-friendly graphical interface. In addition, Ginkgo constructs phylogenetic trees based on the profiles, allowing cells with similar copy number mutations to be grouped together.
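A minimal Python sketch of the general approach, with made-up data: binned read counts per cell are normalized into copy-number-like profiles, and the cells are then clustered hierarchically so that those with similar profiles group together. Ginkgo’s own binning, normalization and segmentation are more sophisticated.

    # Sketch: per-cell copy-number-like profiles from binned read counts,
    # then hierarchical clustering of the cells (synthetic data).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(0)
    n_cells, n_bins = 6, 200

    # Simulate read counts; give half the cells an extra copy of bins 50-100.
    counts = rng.poisson(100, size=(n_cells, n_bins)).astype(float)
    counts[3:, 50:100] *= 1.5

    # Normalize each cell so its median bin sits at ~2 copies (diploid baseline).
    profiles = 2.0 * counts / np.median(counts, axis=1, keepdims=True)

    # Cluster cells on the distance between their copy-number profiles.
    tree = linkage(profiles, method="ward")
    print(dendrogram(tree, no_plot=True)["leaves"])   # cell order along the tree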

Importantly, Ginkgo, which Schatz and his colleagues validated by reproducing the findings of five major single-cell studies, also analyzes patterns in the sequence reads in order to recognize, and greatly reduce, amplification errors.

Schatz and his team named their software after the ginkgo tree, which has many well-documented therapeutic benefits. “We like to think our Ginkgo ‘trees’ will provide benefits as well,” says Schatz, referring to the graphical way that CNV changes are represented by analysts. Right now, CNV is not a commonly used diagnostic measurement in the clinic. “We’re looking into the best way of collecting samples, analyzing them, and informing clinicians about the results,” says Schatz. He adds that CSHL has collaborations with many hospitals, notably Memorial Sloan Kettering Cancer Center and the North Shore-LIJ Health System, to bring single-cell analysis to the clinic.

For Schatz, Gingko represents a culmination of CSHL’s efforts over the past decade—spearheaded by CSHL Professor Michael Wigler—to pioneer techniques for studying single cells. “Cold Spring Harbor has established itself as the world leader in single-cell analysis,” says Schatz. “We’ve invented many of the technologies and techniques important to the field and now we’ve taken all this knowledge and bundled it up so that researchers around the world can take advantage of our expertise.”


More information: Interactive analysis and assessment of single-cell copy-number variations, Nature Methods, DOI: 10.1038/nmeth.3578

 

Interactive analysis and assessment of single-cell copy-number variations

Tyler Garvin, Robert Aboukhalil, Jude Kendall, Timour Baslan, Gurinder S. Atwal, James Hicks, Michael Wigler & Michael C. Schatz

Nature Methods 12, 1058–1060 (2015). http://dx.doi.org/10.1038/nmeth.3578

We present Ginkgo (http://qb.cshl.edu/ginkgo), a user-friendly, open-source web platform for the analysis of single-cell copy-number variations (CNVs). Ginkgo automatically constructs copy-number profiles of cells from mapped reads and constructs phylogenetic trees of related cells. We validated Ginkgo by reproducing the results of five major studies. After comparing three commonly used single-cell amplification techniques, we concluded that degenerate oligonucleotide-primed PCR is the most consistent for CNV analysis.

Figure 2: Assessment of data quality for different single-cell whole-genome amplification methods using Ginkgo. (a) LOWESS fit of GC content with respect to log-normalized bin counts for all samples in each of the nine data sets analyzed: three for MDA (top left, green), three for MALBAC (center left, orange) and three for DOP-PCR (bottom left, b…
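The GC-bias correction referred to in the figure can be sketched generically: fit a LOWESS curve of bin counts against bin GC content, then divide the trend out. The snippet below uses synthetic data and a standard statsmodels LOWESS; it illustrates the correction in general rather than Ginkgo’s exact procedure.

    # Sketch of LOWESS-based GC-bias correction for binned read counts (synthetic data).
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(1)
    gc = rng.uniform(0.3, 0.6, 1000)                 # GC fraction of each bin
    counts = rng.poisson(100 * (0.5 + gc)) + 1.0     # counts with a GC-dependent trend

    trend = lowess(counts, gc, frac=0.3, return_sorted=False)  # expected count at each bin's GC
    corrected = counts * counts.mean() / trend                 # divide out the GC trend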

 

 

Breaking Through the Barriers to Lab Innovation

http://www.technologynetworks.com/LIMS/news.aspx?ID=184014

Author: Helen Gillespie, Informatics Editor, Technology Networks

 

Innovation is a hot topic today and just about every type of laboratory is scrambling to figure out what it means for them. Lab Managers are expected to design profitable new products that enable the research organization to stay competitive in today’s marketplace. This means change. Process change. Systems change. Informatics technologies change. As a result, systemic change is occurring at all levels of the organization, driving the implementation of integrated lab solutions that unlock disparate, disconnected lab data silos and harmonize the IT infrastructure. Getting greater control of lab data is part of this and one of the most critical components of future success and corporate sustainability. As a result, some of the greatest change is taking place in Informatics in laboratories around the world.

Two of the most significant barriers to innovation are outdated informatics tools and inefficient workflows. Moving from paper-based manual methodologies to digital solutions can breathe new life into researcher productivity while enabling forward-looking companies to better compete and excel in today’s rapidly changing business environment.

This article examines the drivers behind the move for greater innovation, challenges, current trends in laboratory informatics, and the tools and techniques that can be used to break through barriers to lab innovation. Several leading informatics vendors provide their views.

Selected Vendors

Laboratories worldwide seeking a single, integrated informatics platform can now standardize on one comprehensive laboratory information management system (LIMS). Thermo Fisher’s integrated informatics solution now comprises method execution, data visualization and laboratory management, and seamlessly integrates with all popular enterprise-level software packages.

“Thermo Scientific SampleManager is a fully integrated laboratory platform encompassing laboratory information management (LIMS), scientific data management (SDMS) and lab execution (LES).”
Trish Meek, Director Strategy, Informatics, Thermo Fisher Scientific

More Information

BIOVIA Unified Lab Management allows for streamlined and more efficient lab workflows and a fully integrated and automated easy-to-deploy process. Based on the BIOVIA Foundation, it works as an integration hub for BIOVIA applications as well as all major 3rd party systems and instruments, allowing for seamless data transfer.

“BIOVIA Unified Lab Management is part of our unique end-to-end Product Lifecycle support for science-based organizations to improve innovation, quality, compliance, and efficiency.”
Dr. Daniela Jansen, Senior Solution Marketing Manager

More Information

Waters® NuGenesis® Lab Management System uniquely combines data, workflow and sample management capabilities to support the entire product lifecycle from discovery through manufacturing. This user-centric platform encompasses NuGenesis SDMS, a compliance-ready data repository; NuGenesis ELN, a flexible analytical electronic laboratory notebook; and NuGenesis Sample Management.

“The NuGenesis LMS readily adapts to existing informatics environments, smoothly linking data from the lab to the business operations of a company, so science-driven organizations can see more, know more and do more.”
Garrett Mullen, Senior Product Marketing Manager, Laboratory Management Informatics, Waters

More Information

The Impact of Corporate Wide Initiatives

There are a number of sweeping changes occurring throughout the corporate world that are turning the spotlight on research laboratories, examining everything from workflows to documentation. These changes are driven by corporate initiatives to increase profits, reduce costs, develop new products and drive operational efficiencies throughout the enterprise. These are not new goals, but the methodologies for achieving these goals have changed significantly thanks to the rapid changes in technology. Now, there is a greater focus on how technology can drive innovation throughout the enterprise.

In fact, almost every leading multinational organization nowadays touts innovation as an underlying theme for how they conduct business and develop the next generation products. To be truly innovative however, businesses of all types must embrace innovation at every level of the enterprise – not just in the products under development, but also how those products are being developed.

“Organizations nowadays cannot afford to not look into innovation,” emphasizes Dr. Daniela Jansen, Senior Solution Marketing Manager at BIOVIA. “Now, they are questioning how product quality is being supported by innovation throughout the end-to-end product lifecycle. The time is past when researchers looked to a single functionality to make a difference. Now, all software needs to drive innovation, to drive costs down and to drive efficiency.”

Garrett Mullen, Senior Product Marketing Manager at Waters Corporation, offers another perspective. “We drive innovation by addressing the challenges. Sometimes it is specific to the market, such as petrochemical or pharmaceutical, sometimes it is specific to the task, such as sample registration for the QA/QC department. All markets are suffering from similar challenges, whether it is products coming off patent or waning market share. So there is a big focus on what they can do about it, from controlling costs to simplifying processes.”

Operational excellence plays a significant role in corporate initiatives for innovation, and this is where the initiatives drill down into the research laboratories. According to Trish Meek, Director of Product Strategy for the Informatics business at Thermo Fisher Scientific, “Executives are looking more closely at the lab as part of a more holistic view of operational efficiencies across the entire organization. There’s a larger expectation than ever before that there is hidden value in the lab, and that can be found in optimizing efficiencies and more fully integrating processes across the lab and throughout the rest of the manufacturing or production process. Executive metrics now include the lab as they analyze data from all aspects of their operations in order to improve their processes, improve the quality of their products and drive profitability. Executives are now mining and reviewing data to determine how to make operations better from a holistic perspective, and that is causing the spotlight to be on the lab more than it ever was.”

A key aspect of operational excellence is that it goes hand in hand with product quality. Not only is there a need to expedite innovation to deliver new products, those new products need to be high quality and to comply with changing environmental regulations and consumer expectations. As a result, research organizations are reviewing their Informatics infrastructure and streamlining laboratory operations.

Further, the technology that supports lab Informatics has been evolving rapidly, delivering new functionality that is changing the way research can be performed.  This points to the heart of the matter: current technology is enabling new workflows (such as digital collaboration) while delivering greater access to research and also enabling better examination of the research (such as through the ‘Cloud’). This paradigm shift is happening at many levels, from how research is performed to how the data is shared, with technology at the center of the shift.

Barriers to Innovation: The Migration from Paper to Digital

Legacy paper-based activities in the lab are perhaps one of the greatest barriers to innovation. Data captured in paper lab notebooks is typically difficult to find, read or share. Written observations are often transcribed incorrectly. Tests and experiments are repeated because prior data is lost or inaccessible. Even though many lab activities are conducted electronically, certain steps are often still conducted on paper. Such repetitious manual activities are one of the greatest impediments to productivity. These workflow gaps are slowly being replaced with seamless digital activities.

One of the most interesting aspects of the drive for innovation is the ability to take advantage of the technology tools now available, which deliver a significant new range of functionality to users. Electronic Lab Notebooks (ELNs), for instance, can now be connected in the Cloud so that scientists anywhere can collaborate and share research data. This is important because not only is the transition to ELNs happening on a local level, it is part of a larger global movement toward distributed research as a result of changes in how research organizations are now managing their operations. Large multinationals with research centers distributed around the globe are enabling their scientists to collaborate easily and efficiently with ELNs as part of their effort to streamline operations.

“It is still surprising to see paper in the lab,” states Meek. “It’s in many cases a cultural issue – a comfort level – which makes it hard to move away from paper, and it’s a system everyone knows. Despite its flaws, paper is infinitely flexible, but in general it is terribly inefficient with regards to big data and computational power. Now, the need to look at all the data, and have all the data available is far more important, meaning that the move away from paper or manual data management is now more important than ever.”

“It continues to be about paper in many labs,” Jansen confirms. “But you need to look at the entire chain of cause and effect and the role that paper plays. Now, it’s about what drives the entire organization, not localized practices. This means that there’s a focus on reducing the time spent on documentation and removing barriers. There’s a focus on getting quality designed into the process, getting greater efficiency, and connecting the disparate silos of data that impede innovation. One way to do this is to use an open science-aware framework like the BIOVIA Foundation to integrate processes and applications from different providers. And virtual experiments that enable scientists to identify potential new products earlier in the process can significantly save time and money.”

Cost savings are one of the key reasons organizations make the transition from paper to digital practices. “We’ve found that processes went from hours to minutes when you eliminate the numerous manual review processes and transcriptions and replace them with electronic processes,” explains Mullen. “For example, in the past one central analytical lab at a company might have performed all LC [liquid chromatography] testing. Users submitted samples via email and the samples were boxed, tests requested, samples were received and registered at the central lab, etc. Very labor intensive. A digital solution changes all that. Now the new NuGenesis web interface enables the user to register the sample, enter the samples, specify the tests digitally, and thus reduce transcription errors and expedite the process. An automatic acknowledgement that the samples are approved is sent and the testing processes start. This eliminates the manual tasks associated with checking that everything is accurate. The time and cost savings are enormous.”

Other factors are influencing the migration from paper to digital lab processes, including the recession and the heightened merger and acquisition activity. Many organizations have downsized, are running leaner, and employ fewer researchers. Yet the productivity demands remain as high as when there was more staff. Thus, there’s an increased need to ensure that researcher activities are more efficient. Manual workflows are out of sync in the digital environment.

Adopting Next Generation Technology

While there are numerous paper-based workflows in research labs worldwide, the vast majority of these labs have adopted some level of technology, including informatics software solutions. What began with instrument-specific software solutions, such as the Thermo Scientific Chromeleon™ chromatography data system (CDS), has expanded to numerous application-specific and task-specific systems as computers have become an integral part of the lab work environment. Laboratory Information Management Systems (LIMS) have been commercially available since the early 1980s. The increase in demand for fast turnaround and greater volumes of sample testing and analysis drove the growth in these solutions. NuGenesis® introduced the first Scientific Data Management System (SDMS) to help capture, catalog and archive lab data better in the 1990s. ELNs were among the last lab systems to become a ubiquitous tool, mainly because of the challenge of managing unstructured data versus structured data, but technology has overcome this issue too.

The increase in computing power accelerated the Informatics vendors’ ability to deliver faster, better, more comprehensive software tools. In parallel, the adoption of sophisticated technology by consumers created expectations for similar capabilities in the workplace, driving the demand for hardware such as tablets and other handheld devices as access tools for ELNs, LIMS and other lab software.

Yet while these different lab data and sample management systems have provided significant benefits to the lab, they started as separate systems and thus created separate data repositories that require an interface or middleware to enable data to be shared. But that challenge too is fast disappearing as new technology and new pathways to innovation arise.

“One of the things that Thermo Fisher Scientific is focused on is delivering integrated informatics,” states Meek. “Traditionally, LIMS delivered specific functionality for R&D or manufacturing labs, but didn’t cover the entire laboratory process. Our customers today want an integrated solution that covers the complete lab workflow. So, we built an Integrated Informatics platform to combine many of these together so that they’re no longer separate silos with different data in different systems. Now, lab data management, method execution and scientific data management is done within the SampleManager™ solution, making it much more than just a LIMS. All of the functionality for scientific method and data management is now part of the same solution. SampleManager has continued to evolve to offer greater functionality for our customers, so that now it has become the enabler for our customers to better manage their lab, and save their companies time and valuable financial resources formerly necessary to purchase, implement and support multiple software systems. Our goal is to continue to build upon the SampleManager platform so we can offer the greatest degree of functionality to our customers.”

“What is happening is that LIMS are now being supplemented with ELN and LES toolsets. Everyone is moving towards a center space, where LIMS become ELNs, etc.,” explains Mullen. Waters recently introduced the NuGenesis® Lab Management System (LMS) as an alternative to LIMS. Based on the NuGenesis SDMS, the LMS offers significantly more functionality that can be switched on as components are needed for various workflow and sample management tasks.

Mullen continues, “The NuGenesis LMS can create the testing protocol procedure to ensure that the tests are done correctly. It can specify the values and results, the upper and lower limits, etc., then pull the test values back into the worksheet. Results are instantly flagged as in or out of specification.  If reagents are expired or an instrument needs calibration, these are flagged automatically. The result is much faster transaction times than traditional paper-based processes.”
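A hedged sketch of the kind of automatic checking described here: results are compared against specification limits, and reagent expiry and instrument calibration dates are flagged. The field names and rules are hypothetical illustrations, not the NuGenesis data model.

    # Hypothetical sketch of automatic spec/expiry/calibration checks in a lab workflow.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TestResult:
        analyte: str
        value: float
        lower_limit: float
        upper_limit: float
        reagent_expiry: date
        calibration_due: date

    def flags(result, today):
        issues = []
        if not (result.lower_limit <= result.value <= result.upper_limit):
            issues.append("OUT OF SPECIFICATION")
        if result.reagent_expiry < today:
            issues.append("REAGENT EXPIRED")
        if result.calibration_due < today:
            issues.append("INSTRUMENT CALIBRATION OVERDUE")
        return issues or ["PASS"]

    r = TestResult("assay purity (%)", 97.2, 95.0, 102.0,
                   reagent_expiry=date(2026, 1, 1), calibration_due=date(2025, 6, 30))
    print(flags(r, today=date(2025, 7, 15)))   # -> ['INSTRUMENT CALIBRATION OVERDUE']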

“For BIOVIA, when we talk about the benefits of our solutions, we’re talking about workflow efficiencies, cost savings, compliance and brand reputation,” states Jansen. “As a vendor, we support organizations by driving innovation, by strengthening the R&D pipeline while ensuring quality in their processes and outcomes. Now that BIOVIA is part of Dassault Systèmes,” Jansen continues, “we’re engaging in much larger conversations because we can now support the entire lab-to-plant process, expanding our solutions to the 3D Experience platform. From ELNs to LIMS to virtual molecular modeling with our Discovery or Materials Studio™ solution, BIOVIA offers an integrated, unified experience that is transforming how our customers are improving product quality, collaborating across sites, reducing cycle times and reducing costs. The bottom line is the ability to rapidly, easily and accurately transfer and utilize knowledge.”

Each of these vendors offers a different path to a similar end, with solutions that deliver greater access to not just legacy data but also the astounding volumes of data being created in labs worldwide. The ability to turn that data into knowledge that is accessible, accurate and reusable is necessary to fuel the new product demands both inside and outside the enterprise. Next generation technology is being developed and implemented with increasing rapidity to address these market requirements.

Conclusions

Corporate demand for innovation at every level of the enterprise is helping to drive laboratory innovation, from the tools adopted to perform research to the processes used to manage that research and all the associated data, samples, reagents, tests and more.

Operational excellence has risen to the top of corporate agendas, driven in part by the availability of technology that can support a global approach to better manage the entire product lifecycle, from initial research to final product. Now, informatics solutions exist that can support every stage of the process whether the organization engages in pharmaceutical research and needs to identify promising candidates early in the process, or whether the organization develops consumer product goods that have a short product lifecycle and thus require a constant stream of new products to maintain market share.

Information integration is playing a major role in breaking through the barriers to lab innovation. As a result, there is a significant transformation underway in the informatics tools to integrate the solutions so that data is no longer locked away in single-purpose systems. For some time there have been LIMS with ELN capabilities, CDS with LIMS functions, ELNs with sample management attributes, and more. Now, the need to exchange and move data quickly and easily from one user to another has driven the availability of integrated collaborative environments that can share laboratory data cross-team, cross-location and cross-organization.

At the core of these changes is the need to more rapidly address the larger business challenges in the lab through more efficient, more market-oriented new product development. And that’s the bottom line: informatics technology can be used as an enabling tool to solve both business challenges and lab challenges. Informatics vendors all approach the market requirements differently, depending on their own corporate culture, but all strive to enable their customers to innovate.

 

Bioinformatics beyond Genome Crunching

Flow Cytometry, Workflow Development, and Other Information Stores Can Become Treasure Troves If You Use the Right IT Tools and Services

    [Figure: The FlowJo platform’s visualization of surface activation marker expression (CD38) on live lymphocyte CD8+ T cells. Colors represent all combinations of subsets positive and negative for interferon gamma (IFNγ), perforin (Perf), and phosphorylated ERK (pERK).]

    Advances in bioinformatics are no longer limited to just crunching through genomic and exomic data. Bioinformatics, a discipline at the interface between biotechnology and information technology, also has lessons for flow cytometry and experimental design, as well as database searches, for both internal and external content.

    One company offering variations on traditional genome crunching is DNAnexus. With the advent of the $1,000 genome, researchers find themselves drowning in data. To analyze the terabytes of information, they must contract with an organization to provide the computing power, or they must perform the necessary server installation and maintenance work in house.

    DNAnexus offers a platform that takes the raw sequence directly from the sequencing machine, builds the genome, and analyzes the data, and it is able to do all of this work in the cloud. The company works with Amazon Web Services to provide a completely scalable system of nucleic acid sequence processing.

    “No longer is it necessary to purchase new computers and put them in the basement,” explains George Asimenos, Ph.D., director of strategic projects, DNAnexus.  “Not only is the data stored in the cloud, but it is also processed in the cloud.”

    The service provided by DNAnexus allows users to run their own software. Most users choose open source programs created by academic institutions.

    DNAnexus does not write the software to process and analyze the data. Instead, the company provides a service to its customers. It enables customers to analyze and process data in the cloud rather than buying, maintaining, and protecting their own servers.

    “Additionally, collaboration is simplified,” states Dr. Asimenos. “One person can generate the data, and others can perform related tasks—mapping sequence reads to the reference genome, writing software to analyze the data, and interpreting results. All this is facilitated by hosting the process, data, and tools on the web.”

    “When a customer needs to run a job, DNAnexus creates a virtual computer to run the analysis, then dissolves the virtual computer once the analysis is complete,” clarifies Dr. Asimenos. “This scalability allows projects to be run expeditiously regardless of size. The pure elasticity of the system allows computers to ‘magically appear’ in your basement and then ‘disappear’ when they are no longer being used. DNAnexus takes care of IT infrastructure management, security, and clinical compliance so you can focus on what matters: your science.”
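    The ‘computers that appear and disappear’ pattern Dr. Asimenos describes is essentially an ephemeral worker: provision compute for one job, run it, then release it. The sketch below only illustrates that pattern; provision_worker and release_worker are hypothetical stand-ins, not DNAnexus API calls.

        # Illustration of the ephemeral-worker pattern (hypothetical helpers, not the DNAnexus API).
        from contextlib import contextmanager

        def provision_worker(cpu, mem_gb):       # hypothetical: request a cloud VM
            print(f"provisioning VM: {cpu} vCPU / {mem_gb} GB")
            return {"id": "vm-1234"}

        def release_worker(worker):              # hypothetical: tear the VM down again
            print(f"releasing {worker['id']}")

        @contextmanager
        def ephemeral_worker(cpu=16, mem_gb=64):
            worker = provision_worker(cpu, mem_gb)
            try:
                yield worker
            finally:
                release_worker(worker)           # the VM 'disappears' even if the job fails

        with ephemeral_worker() as vm:
            print(f"running analysis on {vm['id']}")   # e.g. mapping and variant calling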

    Merging IT and Flow Cytometry

    [Image: Life scientists are being overwhelmed by the huge amounts of data they generate for specialized projects. They not only look for solutions within their own organizations but also increasingly enlist the help of service companies to help them with Big Data overload. iStock/IconicBestiary]

    Technical advances in flow cytometry allow the labeling of individual cells with up to 50 different markers, and 12,000 cells can be counted per second. This flood of information overwhelms traditional methods for data processing in flow cytometry.

    “FlowJo software offers a solution to this problem,” asserts Michael D. Stadnisky, Ph.D., CEO, FlowJo. “With an open architecture, our software serves as a platform that lets researchers run whatever program or algorithm they wish. Scientists can focus on the biological questions without having to become computer programmers.”

    FlowJo presents an intuitive and simple user interface to facilitate the visualization of complex datasets.

    While still in development (beta testing), FlowJo is offering plug-ins. Some of them are free, and others are for sale. They include software components for automatic data analysis, the discovery of trends and identification of outliers, and the centralization of data for all researchers to access. Applications for FlowJo range from traditional immunology to environmental studies, such as assessments of aquatic stream health based on analyses of single-cell organisms.

    “Ultimately, FlowJo wants to offer real-time analysis of data,” discloses Dr. Stadnisky. “Presently, we have the capacity to process a 1,536-well plate in 15 minutes.”

    FlowJo’s platform has benefitted users such as the University of California, San Francisco. Here, researchers in the midst of a Phase I clinical trial were facing 632 clinical samples with 12 acquisition runs and 12 different time points. By employing FlowJo, the researchers realized a 10-fold reduction in the time spent analyzing all data.

    Clients have also integrated other data types. For example, they have integrated polymerase chain reaction (PCR), sequencing, and patient information with data from FlowJo, which facilitates this type of cross-functional team work. The data output from FlowJo, the company maintains, is easily accessible by other scientists. The platform is available as a standalone system that can be installed on a company’s computers or be hosted on the cloud.
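    To make this kind of analysis concrete, the figure at the top of this article shows CD8+ T cells broken into all positive/negative combinations of IFNγ, perforin and pERK. Below is a minimal sketch of that Boolean-combination gating on a synthetic expression matrix; the thresholds and distributions are invented, and FlowJo’s own gating is far richer.

        # Sketch: Boolean-combination gating of flow cytometry events (synthetic data).
        import itertools
        import numpy as np

        rng = np.random.default_rng(2)
        markers = ["IFNg", "Perf", "pERK"]
        data = rng.lognormal(mean=1.0, sigma=1.0, size=(12000, 3))   # 12,000 cells x 3 markers
        thresholds = np.array([5.0, 5.0, 5.0])                       # illustrative gates

        positive = data > thresholds                                 # per-cell, per-marker gate
        for combo in itertools.product([True, False], repeat=3):     # all 8 subsets
            mask = np.all(positive == np.array(combo), axis=1)
            label = " ".join(f"{m}{'+' if c else '-'}" for m, c in zip(markers, combo))
            print(f"{label}: {mask.sum()} cells ({100 * mask.mean():.1f}%)")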

    Optimizing Experiments

    One dilemma facing large pharmaceutical companies is the need to optimize conditions with a very limited supply of a precious reagent. Determining the best experimental design is crucial to avoid wasting valuable resources.

    Roche has used a commercially available electronic tool to build a workflow support tool. “This application allows scientists to set up their experiments more efficiently,” declares Roman Affentranger, Ph.D., head of small molecule discovery workflows, Roche. “The tool assists scientists in documenting and carrying out their work in the most effective manner.”

    “Frequently, a quick formulation of a peptide is necessary to hand over to a toxicologist for animal testing,” continues Dr. Affentranger. “The formulation of the peptide needs to be optimized for the pH, the type of buffer, and the surfactants, for example. The tool we developed evaluates the design of the scientist’s experiment to use the minimum amount of the precious resource, the peptide in question.

    “Testing these various conditions rapidly turns into a combinatorial problem with hundreds of tubes required, using more and more of the small sample. Our system assists scientists in documenting and carrying out work, taking the place of finding a colleague to evaluate your experimental design.”
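    A toy sketch of the combinatorial problem: a full factorial over pH, buffer and surfactant quickly multiplies into many tubes and a lot of peptide, so the tool’s job is to pick a much smaller set of conditions. The factor levels and peptide amounts below are invented for illustration, and a real tool would use a proper experimental design rather than random sampling.

        # Toy formulation screen: full factorial versus a reduced subset (invented numbers).
        import itertools
        import random

        ph_levels   = [5.0, 6.0, 7.0, 7.4, 8.0]
        buffers     = ["acetate", "histidine", "phosphate", "tris"]
        surfactants = ["none", "polysorbate 20", "polysorbate 80"]
        mg_per_tube = 0.5                                   # peptide consumed per condition

        full = list(itertools.product(ph_levels, buffers, surfactants))
        print(f"full factorial: {len(full)} tubes, {len(full) * mg_per_tube:.1f} mg peptide")

        # Reduced screen: a subset of conditions to conserve the peptide.
        random.seed(0)
        subset = random.sample(full, 18)
        print(f"reduced screen: {len(subset)} tubes, {len(subset) * mg_per_tube:.1f} mg peptide")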

    “The data is entered electronically rather than printed out as hardcopy and glued into a notebook,” points out Dr. Affentranger. “Consequently, the information is readily accessible within the lab, across labs, and across the global environment we all work in today.”

    Indexing Internal Content

    Another issue facing large, multinational pharmaceutical companies is finding material that they previously acquired. This could be as simple as a completed experiment, an expert in a content area, or an archive-bound business strategy analysis.

    To address this issue, a company could index its internal content, much the way Google indexes the Internet. At a large company, however, such a task would be onerous.

    Enter Sinequa, a French-based company that provides an indexing service. The company can convert more than 300 file formats such as PDFs, Word documents, emails, email attachments, and PowerPoint presentations into a format that its computers can “read.”

    According to Sinequa, a large enterprise, such as a pharmaceutical company, may need to cope with 200 to 500 million highly technical documents and billions of data points. This predicament is akin to the situation on the web in 1995. It was necessary to know the precise address of a website to access it. This unnecessary complication was eliminated by Google, which indexed everything on the web. Analogously, Sinequa offers the ability to index the information inside a company so that searches can yield information without requiring inputs that specify the information’s exact location.
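    The indexing idea is the same one behind any search engine: an inverted index mapping each term to the documents that contain it, so that a query no longer needs to know where a document lives. A minimal sketch with made-up documents:

        # Minimal inverted index over internal documents (illustration only).
        from collections import defaultdict

        docs = {
            "exp-001.pdf":  "phase I formulation stability study of peptide X",
            "memo-2014":    "business strategy analysis for the oncology portfolio",
            "elnotebook-7": "peptide X stability at pH 6 with histidine buffer",
        }

        index = defaultdict(set)
        for doc_id, text in docs.items():
            for token in text.lower().split():
                index[token].add(doc_id)

        def search(query):
            """Return the documents containing every query term."""
            hits = [index.get(t, set()) for t in query.lower().split()]
            return set.intersection(*hits) if hits else set()

        print(search("peptide stability"))   # -> {'exp-001.pdf', 'elnotebook-7'}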

    With this kind of search ability, a company can turn its information trove into a treasure trove. Put another way, information can be made to flow, keeping applications turning like turbines, generating the “data power” needed to reposition drugs, reduce time to market, and identify internal and external experts and thought leaders.

    “Sinequa offers a kind of Google algorithm customized for each customer,” details Xavier Pornain, vice president of sales and alliances at Sinequa. “At least 20,000 people use the technology generated by Sinequa. Modern companies create lots of data; we make it searchable.”

    The data searched is not limited to internal documents. Sinequa can also add in external databases or indexing sites such as PubMed, Medline, and Scopus. The search engine is also flexible in deployment: one version can run inside a company’s firewall and another in the cloud.

    Emulating Intelligence Approaches

    A different search approach, one that leverages the experience of the intelligence community, is taken by the Content Analyst Company. With this approach, a company can comb through internal and external content stores to find relevant information that has value not only as output, but as input. That is, the information can cycle through the search engine, turning its machine learning gears.

    “By adapting to the voice of the user, our software package, Cerebrant, has been very successful in the intelligence and legal communities,” says Phillip Clary, vice president, Content Analyst. “For typical indexing services, such as Google and PubMed, people do huge searches using a long list of key words. A simpler scenario is to write a few sentences, enter the text, and get all the related relevant items returned. Cerebrant can take the place of an expert to sift through all the results to find the relevant ones.”

    Typical searches often yield confounding results. For example, if a user were to ask Google to generate results for the word “bank,” the top results would be financial institutions. Then there would be results for a musical band/person named Bank. Eventually, long past the first page of results, there would be information about the kind of bank that borders a stream or river course. Such results would frustrate a scientific user interested in tissue banks or cell line repositories.

    “In the past, companies have approached the problem of obtaining germane results by attempting to create databases with curation and controlled vocabulary,” notes Clary. “This is how Google works. All those misspelled words have to be entered into the code.

    “Cerebrant functions by learning how the information relates to itself. This was a powerful tool for the intelligence community, because the program can look at all kinds of information (emails, texts, metadata) and make connections within the unstructured data, even when users attempt to veil their meanings by using code words.”

    Search requests composed in Cerebrant can consist of a single sentence or a paragraph describing the information the user wishes to find. This is much more efficient than working out the 30 to 40 keywords needed to locate all the information on a complex topic, and then sifting out the irrelevant results.
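    Content Analyst’s method is proprietary, so the sketch below is only a generic vector-space illustration of the “paragraph as query” idea (Python with scikit-learn assumed to be installed, not Cerebrant’s algorithm): documents are ranked by cosine similarity to a few descriptive sentences rather than by exact keyword matching.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A hypothetical content store with several senses of the word "bank".
corpus = [
    "Protocols for cryopreservation of cell lines in a tissue bank repository",
    "Quarterly report from the investment bank on biotech financing",
    "Erosion control along the river bank near the field station",
    "Standard operating procedure for biobank tissue sample accessioning",
]

# The query is a few sentences describing what the user wants, not a keyword list.
query = ("I am looking for internal documents about storing patient-derived "
         "cell lines and tissue samples in a biobank or repository.")

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform([query])

# Rank documents by similarity to the descriptive query.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, text in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {text}")
```

    In this toy corpus the tissue-bank and biobank documents rank above the financial and riverbank ones, which is the kind of disambiguation the keyword search described above struggles to achieve.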

    Cerebrant is a cloud-based application. Generally, it takes only about a day to a week to get it up and running. Because it is scalable, Cerebrant can be used by an individual consultant or a multinational conglomerate.

    Given the enormous amount of time, energy, and money invested by the intelligence community, it is refreshing to see a novel application of the wisdom gained from all this work, just as we saw innovative uses of the technology that was developed by the space program.



  • Oracle Industry Connect Presents Their 2015 Life Sciences and Healthcare Program

 

Reporter: Stephen J. Williams, Ph.D. and Aviva Lev-Ari, Ph.D., R.N.


Copyright photo Oracle Inc. (TM)

 

Transforming Clinical Research and Clinical Care with Data-Driven Intelligence

March 25-26 Washington, DC

For more information click on the following LINK:

https://www.oracle.com/oracleindustryconnect/life-sciences-healthcare.html


https://www.oracle.com/industries/health-sciences/index.html  

Oracle Health Sciences: Life Sciences & HealthCare — the Solutions for Big Data

Healthcare and life sciences organizations are facing unprecedented challenges to improve drug development and efficacy while driving toward more targeted and personalized drugs, devices, therapies, and care. They also face an urgent need to meet the unique demands of patients, regulators, and payers, necessitating a move toward a more patient-centric, value-driven, and personalized healthcare ecosystem.

Meeting these challenges requires redesigning clinical R&D processes, drug therapies, and care delivery through innovative software solutions, IT systems, data analysis, and bench-to-bedside knowledge. The core mission is to improve the health, well-being, and lives of people globally by:

  • Optimizing clinical research and development, speeding time to market, reducing costs, and mitigating risk
  • Accelerating efficiency by using business analytics, costing, and performance management technologies
  • Establishing a global infrastructure for collaborative clinical discovery and care delivery models
  • Scaling innovations with world-class, transformative technology solutions
  • Harnessing the power of big data to improve patient experience and outcomes

The Oracle Industry Connect health sciences program features 15 sessions showcasing innovation and transformation of clinical R&D, value-based healthcare, and personalized medicine.

The health sciences program is an invitation-only event for senior-level life sciences and healthcare business and IT executives.

Complete your registration and book your hotel reservation prior to February 27, 2015 in order to secure the Oracle discounted hotel rate.

Learn more about Oracle Healthcare.

General Welcome and Joint Program Agenda

Wednesday, March 25

10:30 a.m.–12:00 p.m.

Oracle Industry Connect Opening Keynote

Mark Hurd, Chief Executive Officer, Oracle

Bob Weiler, Executive Vice President, Global Business Units, Oracle

Warren Berger, Author of “A More Beautiful Question: The Power of Inquiry to Spark Breakthrough Ideas.”

12:00 p.m.–1:45 p.m.

Networking Lunch

1:45 p.m.–2:45 p.m.

Oracle Industry Connect Keynote

Bob Weiler, Executive Vice President, Global Business Units, Oracle

2:45 p.m.–3:45 p.m.

Networking Break

3:45 p.m.–5:45 p.m.

Life Sciences and Healthcare General Session

Robert Robbins, President, Chief Executive Officer, Texas Medical Center

Steve Rosenberg, Senior Vice President and General Manager Health Sciences Global Business Unit, Oracle

7:00 p.m.–10:00 p.m.

Life Sciences and Healthcare Networking Reception

National Museum of American History
14th Street and Constitution Avenue, NW
Washington DC 20001

Life Sciences Agenda

Thursday, March 26

7:00 a.m.–8:00 a.m.

Networking Breakfast

8:00 a.m.–9:15 a.m.

Digital Trials and Research Models of the Future 

Markus Christen, Senior Vice President and Head of Global Development, Proteus

Praveen Raja, Senior Director of Medical Affairs, Proteus Digital Health

Michael Stapleton, Vice President and Chief Information Officer, R&D IT, Merck

9:15 a.m.–10:30 a.m.

Driving Patient Engagement and the Internet of Things 

Howard Golub, Vice President of Clinical Research, Walgreens

Jean-Remy Behaeghel, Senior Director, Client Account Management, Product Development Solutions, Vertex Pharmaceuticals

10:30 a.m.–10:45 a.m.

Break

10:45 a.m.–12:00 p.m.

Leveraging Data and Advanced Analytics to Enable True Pharmacovigilance and Risk Management 

Leonard Reyno, Senior Vice President, Chief Medical Officer, Agensys

 

Accelerating Therapeutic Development Through New Technologies 

Andrew Rut, Chief Executive Officer, Co-Founder and Director, MyMeds&Me

12:45 p.m.–1:45 p.m.

Networking Lunch

1:45 p.m.–2:30 p.m.

Oracle Industry Connect Keynote

2:30 p.m.–2:45 p.m.

Break

2:45 p.m.–3:15 p.m.

Harnessing Big Data to Increase R&D Innovation, Efficiency, and Collaboration 

Sandy Tremps, Executive Director, Global Clinical Development IT, Merck

3:15 p.m.–3:30 p.m.

Break

3:30 p.m.–4:45 p.m.

Transforming Clinical Research from Planning to Postmarketing 

Kenneth Getz, Director of Sponsored Research Programs and Research Associate Professor, Tufts University

Jason Raines, Head, Global Data Operations, Alcon Laboratories

4:45 p.m.–6:00 p.m.

Increasing Efficiency and Pipeline Performance Through Sponsor/CRO Data Transparency and Cloud Collaboration 

Thomas Grundstrom, Vice President, ICONIK, Cross Functional IT Strategies and Innovation, ICON

Margaret Keegan, Senior Vice President, Global Head Data Sciences and Strategy, Quintiles

6:00 p.m.–9:00 p.m.

Oracle Customer Networking Event

Healthcare Agenda

Thursday, March 26

7:00 a.m.–8:15 a.m.

Networking Breakfast

8:30 a.m.–9:15 a.m.

Population Health: A Core Competency for Providers in a Post Fee-for-Service Model 

Margaret Anderson, Executive Director, FasterCures

Balaji Apparsamy, Director, Business Intelligence, BayCare

Leslie Kelly Hall, Senior Vice President, Policy, Healthwise

Peter Pronovost, Senior Vice President, Patient Safety & Quality, Johns Hopkins

Sanjay Udoshi, Healthcare Product Strategy, Oracle

9:15 a.m.–9:30 a.m.

Break

9:30 a.m.–10:15 a.m.

Population Health: A Core Competency for Providers in a Post Fee-for-Service Model (Continued)

10:15 a.m.–10:45 a.m.

Networking Break

10:45 a.m.–11:30 a.m.

Managing Cost of Care in the Era of Healthcare Reform 

Chris Bruerton, Director, Budgeting, Intermountain Healthcare

Tony Byram, Vice President Business Integration, Ascension

Kerri-Lynn Morris, Executive Director, Finance Operations and Strategic Projects, Kaiser Permanente

Kavita Patel, Managing Director, Clinical Transformation, Brookings Institution

Christine Santos, Chief of Strategic Business Analytics, Providence Health & Services

Prashanth Kini, Senior Director, Healthcare Product Strategy, Oracle

11:30 a.m.–11:45 a.m.

Break

11:45 a.m.–12:45 p.m.

Managing Cost of Care in the Era of Healthcare Reform (Continued)

12:45 p.m.–1:45 p.m.

Networking Lunch

1:45 p.m.–2:30 p.m.

Oracle Industry Connect Keynote

2:30 p.m.–2:45 p.m.

Break

2:45 p.m.–3:30 p.m.

Precision Medicine 

Annerose Berndt, Vice President, Analytics and Information, UPMC

James Buntrock, Vice Chair, Information Management and Analytics, Mayo Clinic

Dan Ford, Vice Dean for Clinical Investigation, Johns Hopkins Medicine

Jan Hazelzet, Chief Medical Information Officer, Erasmus MC

Stan Huff, Chief Medical Information Officer, Intermountain Healthcare

Vineesh Khanna, Director, Biomedical Informatics, SIDRA

Brian Wells, Vice President, Health Technology, Penn Medicine

Wanmei Ou, Senior Product Strategist, Healthcare, Oracle

3:30 p.m.–3:45 p.m.

Networking Break

3:45 p.m.–4:30 p.m.

Precision Medicine (Continued)

4:30 p.m.–4:45 p.m.

Break

6:00 p.m.–9:00 p.m.

Oracle Customer Networking Event

Additional Links to Oracle Pharma, Life Sciences and HealthCare

 
Life Sciences | Industry | Oracle
http://www.oracle.com/us/industries/life-sciences/overview/
Oracle Applications for Life Sciences deliver a powerful combination of technology and preintegrated applications.

  • Clinical: http://www.oracle.com/us/industries/life-sciences/clinical/overview/index.html
  • Medical Devices: http://www.oracle.com/us/industries/life-sciences/medical/overview/index.html
  • Pharmaceuticals: http://www.oracle.com/us/industries/life-sciences/pharmaceuticals/overview/index.html

Life Sciences Solutions | Pharmaceuticals and … – Oracle
http://www.oracle.com/us/industries/life-sciences/solutions/index.html
Life Sciences Pharmaceuticals and Biotechnology.

Oracle Life Sciences Data Hub – Overview | Oracle
http://www.oracle.com/us/products/applications/health-sciences/e-clinical/data-hub/index.html
Oracle Life Sciences Data Hub. Better Insights, More Informed Decision-Making. Provides an integrated environment for clinical data, improving regulatory …

Pharmaceuticals and Biotechnology | Oracle Life Sciences
http://www.oracle.com/us/industries/life-sciences/pharmaceuticals/overview/index.html
Oracle Applications for Pharmaceuticals and Biotechnology deliver a powerful combination of technology and preintegrated applications.

Oracle Health Sciences – Healthcare and Life Sciences …
https://www.oracle.com/industries/health-sciences/
Oracle Health Sciences leverages industry-shaping technologies that optimize clinical R&D, mitigate risk, advance healthcare, and improve patient outcomes.

Clinical | Oracle Life Sciences | Oracle
http://www.oracle.com/us/industries/life-sciences/clinical/overview/index.html
Oracle for Clinical Applications provides an integrated remote data collection facility for site-based entry.

Oracle Life Sciences | Knowledge Zone | Oracle …
http://www.oracle.com/partners/en/products/industries/life-sciences/get-started/index.html
This Knowledge Zone was specifically developed for partners interested in reselling or specializing in Oracle Life Sciences solutions. To become a specialized …

[PDF] Brochure: Oracle Health Sciences Suite of Life Sciences …
http://www.oracle.com/us/industries/life-sciences/oracle-life-sciences-solutions-br-414127.pdf
Oracle Health Sciences Suite of Life Sciences Solutions. Integrated Solutions for Global Clinical Trials. Oracle Health Sciences provides the world’s broadest set …




Third Annual TCGC: The Clinical Genome Conference, San Francisco, June 10-12, 2014 by Bio-IT World and Cambridge Healthtech Institute

Reporter: Aviva Lev-Ari, PhD, RN

 

UPDATED on 5/1/2014

Register by May 2

Hotel Kabuki, San Francisco, CA
June 10–12, 2014

FINAL AGENDA

The 3rd Annual Clinical Genome Conference (TCGC): Mining the Genome for Medicine
ClinicalGenomeConference.com

The unstoppable march of genomics into clinical practice continues. In an ideal world, the expanding use of genomic tools will identify disease before the onset of clinical symptoms and determine individualized drug treatment leading to precision medicine. However, many challenges remain for the successful translation of genomic knowledge and technologies into health advances and actionable patient care. Join vital discussions of the applications, questions and solutions surrounding clinical genome analysis.

KEYNOTE SPEAKERS

Atul Butte, M.D., Ph.D.

Division Chief and Associate Professor, Stanford University School of Medicine; Director, Center for Pediatric Bioinformatics, Lucile Packard Children’s Hospital

David Galas, Ph.D.

Principal Scientist, Pacific Northwest Diabetes Research Institute

Gail P. Jarvik, M.D., Ph.D.

Head, Division of Medical Genetics, Arno G. Motulsky Endowed Chair in Medicine and Professor, Medicine and Genome Sciences, University of Washington Medical Center

John Pfeifer, M.D., Ph.D.

Vice Chair, Clinical Affairs, Pathology and Immunology; Professor, Pathology and Immunology, Washington University

John Quackenbush, Ph.D.

Professor, Dana-Farber Cancer Institute and Harvard School of Public Health; Co-Founder and CEO, GenoSpace

Topics Include:

• Working with the Payer Process

• Genome Variation and Clinical Utility

• NGS Is Guiding Therapies

• NGS Is Redefining Genomics

• Interpretation and Translation to the Client

• Integrating Genomic Data into the Clinic

ClinicalGenomeConference.com

Cambridge Healthtech Institute

250 First Avenue, Suite 300

Needham, MA 02494

www.healthtech.com

 

TUESDAY, JUNE 10

7:30 am Conference Registration and Morning Coffee

Working with the Payer Process

8:30 Chairperson’s Opening Remarks

»»KEYNOTE PRESENTATION

8:45 Case Study on Working through the Payer Process

John Pfeifer, M.D., Ph.D., Vice Chair, Clinical Affairs, Pathology; Professor, Pathology and Immunology; Professor, Obstetrics and Gynecology, Washington University School of Medicine

If next-generation sequencing (NGS) is to become a part of patient care in routine clinical practice (whether in the setting of oncology or in the setting of inherited genetic disorders), labs that perform clinical NGS must be reimbursed for the testing they provide. Genomics and Pathology Services at Washington University in St. Louis (GPS@WUSTL) will be used as a case study of a national reference lab that has been successful in achieving high levels of reimbursement for the clinical NGS testing it performs, including from private payers. The reasons for GPS’s success will be discussed, including NGS test design, clinical focus of testing, use of different models for reimbursement and payer education.

9:30 Implementation of Clinical Cancer Genomics within an Integrated Healthcare System

Lincoln D. Nadauld, M.D., Ph.D., Director, Cancer Genomics, Intermountain Healthcare

Precision cancer medicine involves the detection of tumor-specific DNA alterations followed by treatment with therapeutics that specifically target the actionable mutations. Significant advances in genomic technologies have now rendered extended genomic analyses of human malignancies technologically and financially feasible for clinical adoption. Intermountain Healthcare, an integrated healthcare delivery system, is taking advantage of these advances to programmatically implement genomics into the regular treatment of cancer patients to improve clinical outcomes and reduce treatment costs.

10:00 PANEL DISCUSSION: Payer’s Dilemma: Evolution vs. Revolution

As falling genome sequencing costs help clinicians refine patient diagnoses and therapeutic approaches, new complexities arise over insurance coverage of such tests, classification by CPT codes and other reimbursement issues. Experts on this panel will discuss payer challenges and changes—both rapid and gradual—occurring alongside these advances in clinical genomics.

Moderator: Katherine Tynan, Ph.D., Business Development & Strategic Consulting for Diagnostics Companies, Tynan Consulting LLC

Panelists:

Tonya Dowd, MPH, Director, Reimbursement Policy and Market Access, Quorum Consulting

Mike M. Moradian, Ph.D., Director of Operations and Molecular Genetics Scientist, Kaiser Permanente Southern California Regional Genetics Laboratory

Rina Wolf, Vice President of Commercialization Strategies, Consulting and Industry Affairs, XIFIN

Additional Panelists to be Announced

10:45 Networking Coffee Break

11:15 Beyond Genomics: Preparing for the Avalanche of Post-Genomic Clinical Findings

Jimmy Lin, M.D., Ph.D., President, Rare Genomics Institute

Whole-genome and exome sequencing applied clinically is revealing newly discovered genes and syndromes at an astonishing rate. While clinical databases and variant annotation continue to grow, much of the effort needed is functional analysis and clinical correlation. At RGI, we are building a comprehensive functional genomics platform that includes electronic health records, biobanking, data management, scientific idea crowdsourcing and contract research sourcing.

11:45 The MMRF CoMMpass Clinical Trial: A Longitudinal Observational Trial to Identify Genomic Predictors of Outcome in Multiple Myeloma

Jonathan J. Keats, Ph.D., Assistant Professor, Integrated Cancer Genomics Division, Translational Genomics Research Institute

12:15 pm Luncheon Presentation (Sponsored): Big Data & Little Data – From Patient Stratification to Precision Medicine

Colin Williams, Ph.D., Director, Product Strategy, Thomson Reuters

Molecular data has the power, when unlocked, to transform our understanding of disease to support drug discovery and patient care. The key to unlocking this potential is ‘humanising’ the data, through tools and techniques, to a level that supports interpretation by Life Science professionals. This talk will focus on strategies for extracting insight from ‘big data’ by shrinking it to ‘little data’, with a focus on applications to support patient stratification in drug discovery and for practising precision medicine in a clinical setting.

Genome Variation and Clinical Utility

1:45 Chairperson’s Remarks

»»KEYNOTE PRESENTATION

1:50 Lessons from the Clinical Sequencing Exploratory Research (CSER) Consortium: Genomic Medicine Implementation

Gail P. Jarvik, M.D., Ph.D., Head, Division of Medical Genetics, Arno G. Motulsky Endowed Chair in Medicine and Professor, Medicine and Genome Sciences, University of Washington Medical Center

Recent technologies have led to affordable genomic testing. However, implementation of genomic medicine faces many hurdles. The Clinical Sequencing Exploratory Research (CSER) Consortium, which includes nine genomic medicine projects, was formed to explore these challenges and opportunities. Dr. Jarvik is the PI of a CSER genomic medicine project and of the CSER coordinating center. She will focus on the frequency of exomic incidental findings, including those of the 56 genes recommended for incidental finding return by the ACMG. The CSER group has annotated the putatively pathogenic and novel variants of the Exome Variant Server (EVS) to estimate the rate of these in individuals of European and African ancestry. Experience with consenting and returning incidental findings will also be reviewed.

2:35 Decoding the Patient’s Genome: Clinical Use of Genome-Wide Sequencing Data

Elizabeth Worthey, Ph.D., Assistant Professor, Pediatrics & Bioinformatics Program, Human & Molecular Genetics Center, Medical College of Wisconsin

Despite significant advances in our understanding of the genetic basis of disease, genomewide identification and subsequent interpretation of the molecular changes that lead to human disease represent the most significant challenges in modern human genetics.

Starting in 2009 at MCW, we have performed clinical WGS and WES to diagnose patients coming from across all clinical specialties. I will discuss findings, pros and cons in approach, challenges remaining and where we go next.

3:05 Analyzing Variants with a DTC Genetics Database

Brian Naughton, Ph.D., Founding Scientist, 23andMe, Inc.

Sequencing a genome results in dozens of potentially disease-causing variants (VUS). I describe some examples of using the 23andMe database, including quick recontact of participants, to determine if a variant is disease-causing.

3:35 Refreshment Break in the Exhibit Hall with Poster Viewing

 

Genome Interpretation Software Solutions: Software Spotlights

(Sponsorship Opportunities Available)

Obtaining clinical genome data is rapidly becoming a reality, but analyzing and interpreting the data remains a bottleneck. While there are many commercial software solutions and pipelines for managing raw genome sequence data, providing the medical interpretation and delivering a clinical diagnosis will be the critical step in fulfilling the promise of genomic medicine. This session will showcase how genome data analysis companies are streamlining the genomic diagnostic pipeline through:

• Transferring raw sequencing data

• Interpreting genetic variations

• Building new software and cloud-based analysis pipelines

• Investigating the genetic basis of disease or drug response

• Integrating with other clinical data systems

• Creating new medical-grade databases

• Reporting relevant clinical information in a physician-friendly manner

• Continuous learning feedback

4:15 Software Spotlight #1

4:30 Copy Number Variant Detection Using Next-Generation Sequencing: State of the Art (Sponsored)

Alexander Kaplun, Ph.D., Field Applications Scientist, BIOBASE

This talk will provide a short review of the current state of the art in using NGS methods to detect larger variants that have an important role in many diseases, such as haplotypes, indels, repeats, copy number variants (CNVs), structural variants (SVs), and fusion genes, along with an outlook on their use for pharmacogenomic genotyping.

4:45 Software Spotlight #3

5:00 Software Spotlight #4

5:15 Software Spotlight #5

5:30 Pertinence Metric Enables Hypothesis-Independent Genome-Phenome Analysis in Seconds (Sponsored)

Michael M. Segal, M.D., Ph.D., Chief Scientist, SimulConsult

Genome-phenome analysis combines processing of a genomic variant table and comparison of the patient’s findings to those of known diseases (“phenome”). In a study of 20 trios, accuracy was 100% when using trios with family-aware calling, and close to that if only probands were used. The gene pertinence metric calculated in the analysis was 99.9% for the causal genes. The analysis took seconds and was hypothesis-independent as to form of inheritance or number of causal genes. Similar benefits were found in gene discovery situations.
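As a generic illustration only (not SimulConsult’s pertinence metric; the diseases, genes, findings, and weights below are invented), the sketch shows the basic shape of such genome-phenome ranking: each candidate disease is scored by the overlap between its characteristic findings and the patient’s findings, with the score boosted when a rare variant falls in the associated gene.

```python
# Toy knowledge base: disease -> associated gene and characteristic findings.
diseases = {
    "Disease A": {"gene": "GENE1", "findings": {"ataxia", "cataract", "hypotonia"}},
    "Disease B": {"gene": "GENE2", "findings": {"seizures", "microcephaly"}},
    "Disease C": {"gene": "GENE3", "findings": {"ataxia", "neuropathy"}},
}

# Patient inputs: observed findings plus genes carrying rare variants from the trio.
patient_findings = {"ataxia", "hypotonia"}
variant_genes = {"GENE1", "GENE9"}

def rank(diseases, findings, genes):
    scored = []
    for name, info in diseases.items():
        overlap = len(findings & info["findings"]) / len(info["findings"])
        has_variant = info["gene"] in genes
        score = overlap * (2.0 if has_variant else 1.0)  # toy weighting
        scored.append((score, name, info["gene"]))
    return sorted(scored, reverse=True)

for score, name, gene in rank(diseases, patient_findings, variant_genes):
    print(f"{score:.2f}  {name}  ({gene})")
```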

6:00 Welcome Reception in the Exhibit Hall with Poster Viewing

7:00 Close of Day

WEDNESDAY, JUNE 11

7:30 am Breakfast Presentation (Sponsorship Opportunity Available) or Morning Coffee

NGS Is Guiding Therapies

8:30 Chairperson’s Opening Remarks

8:35 Next-Generation Sequencing Approaches for Identifying Patients Who May Benefit from PARP Inhibitor Therapy

Mitch Raponi, Ph.D., Senior Director and Head, Molecular Diagnostics, Clovis Oncology

The following questions will be addressed: What biomarkers should we be focusing on to identify appropriate patients who will likely benefit from PARP inhibitors? How can we apply next-generation sequencing technologies to identify all patients who will respond to the PARP inhibitor rucaparib? What regulatory challenges are we faced with for approval of NGS companion diagnostics?

9:05 Whole-Genome and Whole-Transcriptome Sequencing to Guide Therapy for Patients with Advanced Cancer

Glen J. Weiss, M.D., MBA, Director, Clinical Research, Cancer Treatment Centers of America

Treating advanced cancer with agents that target a single cell-surface receptor, an up-regulated or amplified gene product, or a mutated gene has met with some success; however, eventually the cancer progresses. We used next-generation sequencing technologies (NGS), including whole-genome sequencing (WGS) and, where feasible, whole-transcriptome sequencing (WTS), to identify genomic events and associated expression changes in advanced cancer patients. While the initial effort was a slower process than anticipated due to a variety of issues, we demonstrated the feasibility of using NGS in advanced cancer patients so that treatments for patients with progressing tumors may be improved. This lecture will highlight some of these challenges and where we are today in bringing NGS to patients.

9:35 The SmartChip TE™ Target Enrichment System for Clinical Next-Gen Sequencing (Sponsored)

Gianluca Roma, MS MBA, Director, Product Management, WaferGen Biosystems

10:05 Coffee Break in the Exhibit Hall with Poster Viewing

Data Mining

»»KEYNOTE PRESENTATION

10:45 Translating a Trillion Points of Data into Therapies, Diagnostics and New Insights into Disease

Atul Butte, M.D., Ph.D., Division Chief and Associate Professor, Stanford University School of Medicine; Director, Center for Pediatric Bioinformatics, Lucile Packard Children’s Hospital; Co-Founder, Personalis and Numedii

There is an urgent need to translate genome-era discoveries into clinical utility, but the difficulties in making bench-to-bedside translations have been well described. The nascent field of translational bioinformatics may help. Dr. Butte’s lab at Stanford builds and applies tools that convert more than a trillion points of molecular, clinical and epidemiological data— measured by researchers and clinicians over the past decade—into diagnostics, therapeutics and new insights into disease. Dr. Butte, a bioinformatician and pediatric endocrinologist, will highlight his lab’s work on using publicly available molecular measurements to find new uses for drugs, including drug repositioning for inflammatory bowel disease, discovering new treatable inflammatory mechanisms of disease in type 2 diabetes and the evaluation of patients presenting with whole genomes sequenced.

11:30 DGIdb – Mining the Druggable Genome

Malachi Griffith, Ph.D., Research Faculty, Genetics, The Genome Institute, Washington University School of Medicine

In the era of high-throughput genomics, investigators are frequently presented with lists of mutated or otherwise altered genes implicated in human disease. Numerous resources exist to generate hypotheses about how such genomic events might be targeted therapeutically or prioritized for drug development. The Drug-Gene Interaction database (DGIdb) mines these resources and provides an interface for searching lists of genes against a compendium of drug-gene interactions and potentially druggable genes. DGIdb can be accessed at dgidb.org.
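For readers who want to try this programmatically, the sketch below queries the JSON endpoint that DGIdb has historically documented (interactions.json under /api/v2); the URL, parameters, and response keys are assumptions based on that older interface and may have changed, so consult dgidb.org for the current API before relying on them.

```python
import requests

# Endpoint and response keys are assumptions from DGIdb's historically
# documented REST interface; check dgidb.org for the current API.
URL = "https://dgidb.org/api/v2/interactions.json"
genes = ["FLT3", "EGFR", "KRAS"]

response = requests.get(URL, params={"genes": ",".join(genes)}, timeout=30)
response.raise_for_status()
data = response.json()

# Each matched gene carries a list of drug-gene interaction records.
for term in data.get("matchedTerms", []):
    for interaction in term.get("interactions", []):
        print(term.get("geneName"), "-", interaction.get("drugName"),
              interaction.get("interactionTypes"))
```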

12:00 pm Sponsored Presentation (Opportunity Available)

12:30 Luncheon Presentation (Sponsorship Opportunity Available)

 

The unstoppable march of genomics into clinical practice continues. In an ideal world, the expanding use of genomic tools will identify disease before the onset of clinical symptoms and determine individualized drug treatment leading to precision medicine. However, many challenges remain for the successful translation of genomic knowledge and technologies into health advances and clinical practice.

Bio-IT World and Cambridge Healthtech Institute are again proud to host the Third Annual TCGC: The Clinical Genome Conference, inviting stakeholders from all arenas impacting clinical genomics to share new findings and solutions for advancing the application of clinical genome medicine.

TCGC brings together many constituencies for frank and vital discussion of the applications, questions and solutions surrounding clinical genome analysis, including scientists, physicians, diagnosticians, genetic counselors, bioinformaticists, ethicists, regulators, insurers, lawyers and administrators.

Topics addressing successful translation of genomic knowledge and technologies into advancement of clinical utility (medicines and diagnostics) include but are not limited to:

Scientific Investigation and Interpretation

  • Technologies/Platforms
  • WGS/Exome/Single-Cell Sequencing
  • Drug and Diagnostic Targets
  • Interpretation and Analysis Pipelines
  • Case Studies

Clinical Integration and Implementation

  • Mechanisms to Monitor Genomic Medicine
  • Determining Clinical Utility
  • Standardization/Regulation/Certification
  • Reimbursement
  • Data Management
  • Diagnostic Lab Infrastructure
  • HIT/Data Integration
  • Reporting Results to Patients/Physicians

Call for Speakers
For a limited time, we are inviting researchers and clinicians applying genome analysis tools in clinical settings, as well as regulators and administrators implementing genomics into the clinic, to submit proposals for platform presentations. Please note that due to limited speaking slots, preference is given to abstracts from those within pharmaceutical and biopharmaceutical companies, regulators and those from academic centers. Additionally, as per CHI policy, a select number of vendors/consultants who provide products and services to these genomic researchers are offered opportunities for podium presentation slots based on a variety of Corporate Sponsorships.

All proposals are subject to review by the organizers and Scientific Advisory Committee.

Please click here to submit a proposal.

Submission deadline for priority consideration: November 15, 2013

For more details on the conference, please contact:
Mary Ann Brown
Executive Director, Conferences
Cambridge Healthtech Institute
250 First Avenue, Suite 300
Needham, MA 02494
T:  781-972-5497
E:  mabrown@healthtech.com

For exhibit and sponsorship opportunities, please contact:
Jay Mulhern
Manager, Business Development, Conferences & Media
Cambridge Healthtech Institute
250 First Avenue, Suite 300
Needham, MA 02494
T: 781-972-1359
E: jmulhern@healthtech.com

SOURCE

http://www.clinicalgenomeconference.com/

 
