
Archive for the ‘Bio Instrumentation in Experimental Life Sciences Research’ Category

Reporter: Aviva Lev-Ari, PhD, RN

Medical Education Firm Launches Online Tool to Help Docs Guide Personalized Rx Decisions in NSCLC

September 12, 2012
Clinical Care Options, a developer of continuing education and medical decision support resources, has launched a web-based tool to help oncologists figure out which lung cancer patients may benefit from molecularly guided personalized treatments.

The online decision-support tool provides oncologists with expert recommendations on first-line and maintenance treatment options for non-small cell lung cancer patients based on their patients’ medical information and tumor features, including oncogenic markers.

Clinical Care Options developed the online tool based on the treatment choices made by five US experts who were each presented with 96 cases defined by specific patient and tumor variables, such as tumor histology, genomic mutations, age, and smoking history.

To use the tool, oncologists enter their patient’s medical information and treatment preferences and select their own treatment of choice. The tool then displays how the five experts would treat that patient, and afterward surveys users on how the expert recommendations affected their treatment decisions.
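The article does not describe how the tool is implemented. As a purely hypothetical sketch of the underlying idea, a case entered by a clinician is matched against a bank of pre-recorded expert choices; every field name and value below is invented for illustration and does not come from the actual Clinical Care Options tool.

```python
# Hypothetical sketch of a case-matching decision-support lookup.
# The case bank maps a tuple of patient/tumor variables to the five
# experts' recorded treatment choices; all entries are invented.

CASE_BANK = {
    # (histology, marker, smoking_history): [five expert choices]
    ("adenocarcinoma", "ALK fusion", "never-smoker"):
        ["crizotinib"] * 5,
    ("adenocarcinoma", "EGFR mutation", "never-smoker"):
        ["erlotinib"] * 4 + ["chemotherapy"],
}

def expert_recommendations(histology, marker, smoking_history):
    """Return the five experts' choices for the matching case, if any."""
    return CASE_BANK.get((histology, marker, smoking_history))

print(expert_recommendations("adenocarcinoma", "ALK fusion", "never-smoker"))
```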

The firm presented the results of this survey in a poster at the Chicago Multidisciplinary Symposium in Thoracic Oncology this week. The tool has been used by approximately 1,000 physicians around the world, according to Jim Mortimer, senior director of oncology programs and partnership development at Clinical Care Options. Overall, approximately 23 percent of clinicians who used the tool have said it helped change their decisions, while 50 percent indicated the tool helped confirm their initial treatment strategy.

Specifically, with regard to genomically guided personalized NSCLC treatments, all five experts selected Pfizer’s Xalkori (crizotinib) whenever a patient case involved the ALK fusion gene. However, in the 80 cases entered by oncologists involving this marker, only around 40 percent chose Xalkori. And although the experts selected Genentech’s Tarceva (erlotinib) for NSCLC cases with mutated EGFR, clinicians chose the drug in only 60 percent of the 100 such cases they entered into the tool.

The data collected by Clinical Care Options suggest that its decision-support tool may be a useful resource when oncologists want to assess how their peers would prescribe a genomically targeted personalized treatment. These drugs, compared to standard treatments, are relatively new to the market and expensive. Pfizer’s Xalkori was approved by the US Food and Drug Administration last year while Genentech is in the process of getting approval for Tarceva in the US as a first-line treatment for NSCLC patients who have EGFR mutations. Last year, the European Commission approved the use of Tarceva as a first-line treatment for NSCLC in patients with EGFR mutations (PGx Reporter 9/7/2011).

Clinical Care Options said it launched the online tool because it noticed that physicians often look for advice beyond broad treatment guidelines when making decisions for specific patients.

“The tool recommendations align very well with the treatment guidelines but the advantage of the tool is the granularity of the case specifics. Users of the tool can quickly enter in details of a case and see the results for what five experts would recommend,” Mortimer told PGx Reporter. “This contrasts with guidelines that apply to broad groups and provide lists of suitable treatments.”

Mortimer noted that some of the experts’ recommendations included in the tool are outside of the exact indication of a particular drug. However, because the experts’ treatment decisions were evidence based, they “did not indicate any issues with reimbursement.”

Clinical Care Options has developed a continuing medical education-certified program that includes the tool with educational grants from Genentech and Pfizer.

Read Full Post »

Head and Neck Cancer Studies Suggest Alternative Markers More Prognostically Useful than HPV DNA Testing

Reporter: Aviva Lev-Ari, PhD, RN

September 18, 2012

NEW YORK (GenomeWeb News) – The presence or absence of human papillomavirus DNA on its own in an individual’s head or neck cancer does not provide enough information to help predict a patient’s survival, according to a pair of new papers in the journal Cancer Research.

Two research teams — headed by investigators at Brown University and Heidelberg University, respectively — looked at the reliability of using PCR-based HPV testing to determine which head and neck squamous cell carcinomas were HPV-related and, thus, more apt to respond to treatment.

Previous studies have shown that individuals with HPV-associated head and neck cancers tend to have more favorable outcomes than individuals whose head and neck cancers are not related to HPV infection.

“Everybody who has studied it has shown that people with virally associated disease do better,” Brown University pathology researcher Karl Kelsey, a senior author on one of the new studies, explained in a statement.

“There are now clinical trials underway to determine if they should be treated differently,” he added. “The problem is that you need to appropriately diagnose virally related disease, and our data suggests that people need to take a close look at that.”

For their part, Kelsey and his co-authors from the US and Germany assessed the utility of testing for the presence of HPV by various means in individuals with head and neck cancer. This included PCR-based tests for HPV DNA in the tumor itself, tests aimed at detecting infection-associated antibodies in an individual’s blood, and tests for elevated levels of an HPV-related tumor suppressor protein.

For 488 individuals with HNSCC, researchers did blood-based testing for antibodies targeting HPV16 in general, as well as testing for antibodies that target the viral proteins E6 and E7.

For a subset of patients, the team assessed the tumors themselves for the presence of HPV DNA and/or for elevated levels of the host tumor suppressor protein p16.

Based on patterns in the samples, the group determined that the presence of viral E6 and E7 proteins in the blood was linked to increased survival for individuals with an oropharyngeal form of HNSCC, which affects part of the throat known as the oropharynx.

A positive test for HPV DNA alone was not significantly linked to head and neck cancer outcomes. On the other hand, when found in combination with E6 and E7 expression, a positive HPV16 test did coincide with improved oropharyngeal cancer outcomes.

Likewise, elevated levels of p16 in a tumor were not especially informative on their own, though they did correspond to better oropharyngeal cancer survival when found together with positive blood tests for E6 and E7.

Based on these findings, Kelsey and his team concluded that “[a] stronger association of HPV presence with prognosis (assessed by all-cause survival) is observed when ‘HPV-associated’ HNSCC is defined using tumor status (HPV DNA or P16) and HPV E6/E7 serology in combination rather [than] using tumor HPV status alone.”
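The GenomeWeb summary does not reproduce the survival statistics. As a minimal sketch of the comparison the Brown team describes — survival stratified by HPV DNA alone versus HPV DNA combined with E6/E7 serology — one could use the open-source lifelines package; the cohort file and all column names below are assumptions, not the study’s actual data.

```python
# Minimal sketch, assuming a hypothetical table with columns: hpv_dna
# (bool), e6e7_serology (bool), months (follow-up), deceased (event flag).
import pandas as pd
from lifelines.statistics import logrank_test

def logrank_p(df, positive_mask):
    """Log-rank p-value comparing marker-positive vs. -negative patients."""
    pos, neg = df[positive_mask], df[~positive_mask]
    return logrank_test(pos["months"], neg["months"],
                        event_observed_A=pos["deceased"],
                        event_observed_B=neg["deceased"]).p_value

df = pd.read_csv("hnscc_cohort.csv")  # hypothetical cohort file

# HPV DNA alone vs. HPV DNA plus E6/E7 serology: the studies suggest the
# combined definition separates survival curves far better.
print("DNA alone:      p =", logrank_p(df, df["hpv_dna"]))
print("DNA + serology: p =", logrank_p(df, df["hpv_dna"] & df["e6e7_serology"]))
```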

In a second study, meanwhile, a German group that focused on the oropharyngeal form of the disease found its own evidence arguing against the use of HPV DNA as a solo marker for HPV-associated head and neck cancer.

For that analysis, researchers assessed 199 fresh-frozen oropharyngeal squamous cell carcinoma samples, testing the tumors for HPV DNA and p16. They also considered the viral load in the tumors and looked for gene expression profiles resembling those described in cervical carcinoma — another cancer associated with HPV infection.

Again, the presence of HPV DNA appeared to be a poor indicator of HPV-associated cancers or predictor of cancer outcomes. Whereas nearly half of the tumors tested positive for HPV16 DNA, just 16 percent and 20 percent had high viral loads and cervical cancer-like expression profiles, respectively.

The researchers found that a subset of HPV DNA-positive tumors with high viral load or HPV-associated expression patterns belonged to individuals with better outcomes. In particular, they found that cervical cancer-like expression profiles in oropharyngeal tumors coincided with the most favorable outcomes, while high viral load in the tumors came a close second.

“We showed that high viral load and a cancer-specific pattern of viral gene expression are most suited to identify patients with HPV-driven tumors among patients with oropharyngeal cancer,” Dana Holzinger, that study’s corresponding author, said in a statement.

“Once standardized assays for these markers, applicable in routine clinical laboratories, are established, they will allow precise identification of patients with oropharyngeal cancer with or without HPV-driven cancers and, thus, will influence prognosis and potentially treatment decisions,” added Holzinger, who is affiliated with the German Cancer Research Center and Heidelberg University.

In a commentary article online today in Cancer Research, Eduardo Méndez, a head and neck surgery specialist with the University of Washington and the Fred Hutchinson Cancer Research Center, discussed the significance of the two studies and their potential impact on oropharyngeal squamous cell carcinoma prognoses and treatment.

But he also cautioned that more research is needed to understand whether the patterns described in the new studies hold in other populations and to tease apart the prognostic importance of HPV infection in relation to additional prognostic markers.

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

Set of Papers Outline ENCODE Findings as Consortium Looks Ahead to Future Studies

NEW YORK (GenomeWeb News) – An international collaboration involving more than 400 researchers working to characterize gene regulatory networks in the human genome is publishing dozens of new studies this week.

In papers appearing in Nature, Science, Genome Research, Genome Biology, the Journal of Biological Chemistry, and elsewhere, members of the Encyclopedia of DNA Elements, or ENCODE, consortium describe approaches used to define some four million regulatory regions in the genome, among other things. All told, the team explained, ENCODE efforts have made it possible to assign biological functions to around 80 percent of genome sequences — filling in large gaps left by studies that focused on protein-coding sequences alone.
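A figure like “functions for around 80 percent of genome sequences” is, at bottom, an interval-coverage calculation: merge the (possibly overlapping) annotated elements and divide the covered base count by the genome length. A minimal sketch with made-up coordinates:

```python
# Minimal sketch: fraction of a genome covered by annotated elements.
# Intervals are (start, end), 0-based half-open; coordinates invented.

def covered_fraction(elements, genome_length):
    """Merge overlapping intervals, then divide covered bases by length."""
    covered, cur_start, cur_end = 0, None, None
    for start, end in sorted(elements):
        if cur_end is None or start > cur_end:   # gap: close current run
            if cur_end is not None:
                covered += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                    # overlap: extend the run
            cur_end = max(cur_end, end)
    if cur_end is not None:
        covered += cur_end - cur_start
    return covered / genome_length

elements = [(0, 100), (50, 200), (500, 600)]
print(covered_fraction(elements, 1000))  # 0.3
```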

“We found that a much bigger part of the genome — a surprising amount, in fact — is involved in controlling when and where proteins are produced, than in simply manufacturing the building blocks,” ENCODE’s lead analysis coordinator Ewan Birney, associate director of the European Molecular Biology Laboratory European Bioinformatics Institute, said in a statement.

“This concept of ‘junk DNA,’ which has been sort of perpetuated for the past 20 years or so is really not accurate,” ENCODE researcher Rick Myers, director of the HudsonAlpha Institute for Biotechnology, said during a telephone briefing with reporters today. “Most of the genome — more than 80 percent of the base pairs in the genome — has some biological activity, some biological function.”

Researchers participating in a complementary effort within the larger ENCODE project, known as GENCODE, have more completely characterized the coding portions of the genome. “As part of the ENCODE project, we both tidied up the protein-coding genes and we also found many non-coding RNA genes as well,” Birney said during today’s telebriefing.

Based on the success of ENCODE so far, the project is expected to be extended by another four years or so. The amount of new funding from the National Human Genome Research Institute for that follow-up work is expected to be as high as $123 million.

“Later this month, NHGRI will be announcing a new round of funding that will take the ENCODE project into its next phase,” NHGRI Director Eric Green said during the call.

Studies done in the decade or so since the human genome was deciphered have highlighted how little of the genome actually consists of gene sequences. With the realization that only around 2 percent of the genome is dedicated to protein-coding functions came a spate of speculation about the role of the other 98 percent of the genome.

While this portion of the genome was suspected of harboring regulatory sequences, the extent of that regulation and its impact on coding sequences in human tissues over time was not known.

“When the Human Genome Project ended in 2003, we quickly realized that we understood the meaning of only a very small percent of the human genome’s letters,” Green explained. “We did know the genetic code for determining the order of amino acids and proteins, but we understood precious little about the signals that turned genes on or off — or that controlled the amount of proteins produced in different tissues.”

To begin studying such control networks systematically, the international ENCODE consortium kicked off the main phase of its analyses in 2007, following an earlier pilot study.

NHGRI has provided $123 million for the project over the past five years. Another $30 million went to support the development of ENCODE-related technologies since the ENCODE pilot started in 2003, while $40.6 million from NHGRI went towards the pilot itself.

During the study’s main phase, investigators from nearly three-dozen labs around the world took multi-pronged approaches to assess transcription factor binding patterns, histone modification patterns, chromatin structure signatures and other features of the genome that interact with one another to control gene expression over time and across different tissues in the body.

To accomplish the roughly 1,600 experiments done to test some 180 cell types for ENCODE, teams turned to methods such as chromatin immunoprecipitation coupled with sequencing to define the genome-wide binding patterns for more than 100 different transcription factors, for example, while other strategies were used to profile DNA methylation patterns, chromatin features, and so forth.

“It’s really a detailed hierarchy, where proteins bind and epigenetic marks — like DNA methylation and other marks — precisely cooperate and regulate how the genes are going to get turned on [or off] and the amount of this,” Myers said. “These complex networks are one of the big components of the contributions of the 30 papers that are being published today.”

For example, a University of Washington-led team reporting in Science online today defined millions of regulatory regions, including some that are operational during normal development, by taking advantage of an enzyme known as DNase I, which cleaves DNA specifically at open chromatin sites in the genome. That group found that more than three-quarters of disease-associated variants identified in genome-wide association studies fall in parts of the genome that overlap with regulatory sites.
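The “more than three-quarters” figure is an overlap computation between variant positions and regulatory intervals; in practice this would be done on BED files with a tool such as bedtools, but the logic reduces to the following sketch (all coordinates invented):

```python
import bisect

# Minimal sketch: fraction of variant positions falling inside regulatory
# intervals. Intervals must be sorted and non-overlapping; all
# coordinates are invented for illustration.

def fraction_in_regions(variant_positions, regions):
    starts = [s for s, _ in regions]
    hits = 0
    for pos in variant_positions:
        i = bisect.bisect_right(starts, pos) - 1   # rightmost start <= pos
        if i >= 0 and pos < regions[i][1]:
            hits += 1
    return hits / len(variant_positions)

regions = [(100, 200), (500, 800)]             # e.g., DNase I hypersensitive sites
variants = [150, 300, 600, 700, 900]           # e.g., GWAS lead-SNP positions
print(fraction_in_regions(variants, regions))  # 0.6
```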

“We now know that the majority of these changes that are associated with common diseases and traits that don’t fall within genes actually occur within the gene-controlling switches,” University of Washington genome sciences researcher John Stamatoyannopoulos, senior author on that study, said during today’s telebriefing. “This phenomenon is not confined to a particular type of disease. It seems to be present across the board for a very wide variety of different diseases and traits.”

Results from such analyses also hint that some outwardly unrelated conditions might be traced back to similar regulatory processes. And, researchers say, by bringing together information on active regulatory regions with disease-risk variants, it may be possible to define new functionally important tissues for certain conditions.

“By creating these extensive blueprints of the control circuitry, we’re now exposing previously hidden connections between different kinds of diseases that may explain common clinical features,” Stamatoyannopoulos said.

“This has also allowed us to see that the GWAS studies that have been performed contain far more information than was previously believed,” he added, “because hundreds of additional DNA changes that were not thought to be important also appear to affect these gene-controlling switches.”

The new data are also expected to help in understanding genetic disease and interpreting information from personal genomes, according to Michael Snyder, an ENCODE investigator and director of Stanford University’s Center of Genomics and Personalized Medicine.

“We believe the ENCODE project will have a profound impact on personal genomes and, ultimately on personalized medicine,” Snyder told reporters. “We can now better see what personal variants do, in terms of causing phenotypic differences, drug responses, and disease risk.”

Many of the studies stemming from ENCODE can be viewed through a website conceived by Nature, Genome Research, and Genome Biology that links ENCODE papers sharing related themes, or “threads.”

Along with the newly published papers, the ENCODE team is making data available to other members of the research community through the project’s website. Data from studies can also be accessed through an ENCODE browser housed at the University of California at Santa Cruz or via NCBI or EBI sites.

“For basic researchers, the ENCODE data represents a powerful resource for understanding fundamental questions about how life is encoded in our genome,” NHGRI’s Green said. “For more clinically-oriented researchers, the ENCODE data provide key information about which genome sequences are functionally important.”

    Source: NEWS & VIEWS FORUM: Genomics, Nature, Vol. 489, 6 September 2012, pp. 52–55

    ENCODE explained

    The Encyclopedia of DNA Elements (ENCODE) project dishes up a hearty banquet of data that illuminate the roles of the functional elements of the human genome. Here, five scientists describe the project and discuss how the data are influencing research directions across many fields. See Articles p.57, p.75, p.83, p.91, p.101 & Letter p.109

    Serving up a genome feast

    JOSEPH R. ECKER

    Starting with a list of simple ingredients and blending them in the precise amounts needed to prepare a gourmet meal is a challenging task. In many respects, this task is analogous to the goal of the ENCODE project1, the recent progress of which is described in this issue2–7. The project aims to fully describe the list of common ingredients (functional elements) that make up the human genome (Fig. 1). When mixed in the right proportions, these ingredients constitute the information needed to build all the types of cells, body organs and, ultimately, an entire person from a single genome.

    The ENCODE pilot project8 focused on just 1% of the genome — a mere appetizer — and its results hinted that the list of human genes was incomplete. Although there was scepticism about the feasibility of scaling up the project to the entire genome and to many hundreds of cell types, recent advances in low-cost, rapid DNA-sequencing technology radically changed that view9. Now the ENCODE consortium presents a menu of 1,640 genome-wide data sets prepared from 147 cell types, providing a six-course serving of papers in Nature, along with many companion publications in other journals.

    One of the more remarkable findings described in the consortium’s ‘entrée’ paper (page 57)2 is that 80% of the genome contains elements linked to biochemical functions, dispatching the widely held view that the human genome is mostly ‘junk DNA’. The authors report that the space between genes is filled with enhancers (regulatory DNA elements), promoters (the sites at which DNA’s transcription into RNA is initiated) and numerous previously overlooked regions that encode RNA transcripts that are not translated into proteins but might have regulatory roles. Of note, these results show that many DNA variants previously correlated with certain diseases lie within or very near non-coding functional DNA elements, providing new leads for linking genetic variation and disease.

    The five companion articles3–7 dish up diverse sets of genome-wide data regarding the mapping of transcribed regions, DNA binding of regulatory proteins (transcription factors) and the structure and modifications of chromatin (the association of DNA and proteins that makes up chromosomes), among other delicacies.

    Djebali and colleagues3 (page 101) describe ultra-deep sequencing of RNAs prepared from many different cell lines and from specific compartments within the cells. They conclude that about 75% of the genome is transcribed at some point in some cells, and that genes are highly interlaced with overlapping transcripts that are synthesized from both DNA strands. These findings force a rethink of the definition of a gene and of the minimum unit of heredity.

    Moving on to the second and third courses, Thurman et al.4 and Neph et al.5 (pages 75 and 83) have prepared two tasty chromatin-related treats. Both studies are based on the DNase I hypersensitivity assay, which detects genomic regions at which enzyme access to, and subsequent cleavage of, DNA is unobstructed by chromatin proteins. The authors identified cell-specific patterns of DNase I hypersensitive sites that show remarkable concordance with experimentally determined and computationally predicted binding sites of transcription factors. Moreover, they have doubled the number of known recognition sequences for DNA-binding proteins in the human genome, and have revealed a 50-base-pair ‘footprint’ that is present in thousands of promoters5.

    The next course, provided by Gerstein and colleagues6 (page 91), examines the principles behind the wiring of transcription-factor networks. In addition to assigning relatively simple functions to genome elements (such as ‘protein X binds to DNA element Y’), this study attempts to clarify the hierarchies of transcription factors and how the intertwined networks arise.

    Beyond the linear organization of genes and transcripts on chromosomes lies a more complex (and still poorly understood) network of chromosome loops and twists through which promoters and more distal elements, such as enhancers, can communicate their regulatory information to each other. In the final course of the ENCODE genome feast, Sanyal and colleagues7 (page 109) map more than 1,000 of these long-range signals in each cell type. Their findings begin to overturn the long-held (and probably oversimplified) prediction that the regulation of a gene is dominated by its proximity to the closest regulatory elements.
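    The ‘closest element’ assumption that Sanyal and colleagues test can be made concrete: for each promoter-element interaction, check whether the interacting element is also the element nearest that promoter on the chromosome. A minimal sketch with invented coordinates:

```python
# Minimal sketch: how often is a promoter's interacting element also its
# nearest element on the chromosome? All coordinates are invented.

def nearest(position, elements):
    return min(elements, key=lambda e: abs(e - position))

elements = [1_000, 40_000, 250_000]                  # element positions
interactions = [(5_000, 40_000), (60_000, 40_000),   # (promoter, element)
                (300_000, 250_000)]

hits = sum(nearest(p, elements) == e for p, e in interactions)
print(f"{hits}/{len(interactions)} interactions involve the nearest element")
```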

    One of the major future challenges for ENCODE (and similarly ambitious projects) will be to capture the dynamic aspects of gene regulation. Most assays provide a single snapshot of cellular regulatory events, whereas a time series capturing how such processes change is preferable. Additionally, the examination of large batches of cells — as required for the current assays — may present too simplified a view of the underlying regulatory complexity, because individual cells in a batch (despite being genetically identical) can sometimes behave in different ways. The development of new technologies aimed at the simultaneous capture of multiple data types, along with their regulatory dynamics in single cells, would help to tackle these issues.

    A further challenge is identifying how the genomic ingredients are combined to assemble the gene networks and biochemical pathways that carry out complex functions, such as cell-to-cell communication, which enable organs and tissues to develop. An even greater challenge will be to use the rapidly growing body of data from genome-sequencing projects to understand the range of human phenotypes (traits), from normal developmental processes, such as ageing, to disorders such as Alzheimer’s disease10.

    Achieving these ambitious goals may require a parallel investment of functional studies using simpler organisms — for example, of the type that might be found scampering around the floor, snatching up crumbs in the chefs’ kitchen. All in all, however, the ENCODE project has served up an all-you-can-eat feast of genomic data that we will be digesting for some time. Bon appétit!

    Joseph R. Ecker is at the Howard Hughes Medical Institute and the Salk Institute for Biological Studies, La Jolla, California 92037, USA.

    e-mail: ecker@salk.edu

    Figure 1 | Beyond the sequence. The ENCODE project2–7 provides information on the human genome far beyond that contained within the DNA sequence — it describes the functional genomic elements that orchestrate the development and function of a human. The project contains data about the degree of DNA methylation and chemical modifications to histones that can influence the rate of transcription of DNA into RNA molecules (histones are the proteins around which DNA is wound to form chromatin). ENCODE also examines long-range chromatin interactions, such as looping, that alter the relative proximities of different chromosomal regions in three dimensions and also affect transcription. Furthermore, the project describes the binding activity of transcription-factor proteins and the architecture (location and sequence) of gene-regulatory DNA elements, which include the promoter region upstream of the point at which transcription of an RNA molecule begins, and more distant (long-range) regulatory elements. Another section of the project was devoted to testing the accessibility of the genome to the DNA-cleavage protein DNase I. These accessible regions, called DNase I hypersensitive sites, are thought to indicate specific sequences at which the binding of transcription factors and transcription-machinery proteins has caused nucleosome displacement. In addition, ENCODE catalogues the sequences and quantities of RNA transcripts, from both non-coding and protein-coding regions.

    Expression control

    WENDY A. BICKMORE

    Once the human genome had been sequenced, it became apparent that an encyclopaedic knowledge of chromatin organization would be needed if we were to understand how gene expression is regulated. The ENCODE project goes a long way to achieving this goal and highlights the pivotal role of transcription factors in sculpting the chromatin landscape.

    Although some of the analyses largely confirm conclusions from previous smaller-scale studies, this treasure trove of genome-wide data provides fresh insight into regulatory pathways and identifies prodigious numbers of regulatory elements. This is particularly so for Thurman and colleagues’ data4 regarding DNase I hypersensitive sites (DHSs) and for Gerstein and colleagues’ results6 concerning DNA binding of transcription factors. DHSs are genomic regions that are accessible to enzymatic cleavage as a result of the displacement of nucleosomes (the basic units of chromatin) by DNA-binding proteins (Fig. 1). They are the hallmark of cell-type-specific enhancers, which are often located far away from promoters.

    The ENCODE papers expose the profusion of DHSs — more than 200,000 per cell type, far outstripping the number of promoters — and their variability between cell types. Through the simultaneous presence in the same cell type of a DHS and a nearby active promoter, the researchers paired half a million enhancers with their probable target genes. But this leaves more than 2 million putative enhancers without known targets, revealing the enormous expanse of the regulatory genome landscape that is yet to be explored. Chromosome-conformation-capture methods that detect long-range physical associations between distant DNA regions are attempting to bridge this gap. Indeed, Sanyal and colleagues7 applied these techniques to survey such associations across 1% of the genome.

    The ENCODE data start to paint a picture of the logic and architecture of transcriptional networks, in which DNA binding of a few high-affinity transcription factors displaces nucleosomes and creates a DHS, which in turn facilitates the binding of further, lower-affinity factors. The results also support the idea that transcription-factor binding can block DNA methylation (a chemical modification of DNA that affects gene expression), rather than the other way around — which is highly relevant to the interpretation of disease-associated sites of altered DNA methylation11.

    The exquisite cell-type specificity of regulatory elements revealed by the ENCODE studies emphasizes the importance of having appropriate biological material on which to test hypotheses. The researchers have focused their efforts on a set of well-established cell lines, with selected assays extended to some freshly isolated cells. Challenges for the future include following the dynamic changes in the regulatory landscape during specific developmental pathways, and understanding chromatin structure in tissues containing heterogeneous cell populations.

    Wendy A. Bickmore is in the Medical Research Council Human Genetics Unit, MRC Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh EH4 2XU, UK.

    e-mail: wendy.bickmore@igmm.ed.ac.uk 


    11 Years Ago

    The draft human genome

    OUR GENOME UNVEILED

    Unless the human genome contains a lot of genes that are opaque to our computers, it is clear that we do not gain our undoubted complexity over worms and plants by using many more genes. Understanding what does give us our complexity — our enormous behavioural repertoire, ability to produce conscious action, remarkable physical coordination (shared with other vertebrates), precisely tuned alterations in response to external variations of the environment, learning, memory … need I go on? — remains a challenge for the future.

    David Baltimore

    From Nature 15 February 2001

    GENOME SPEAK

    With the draft in hand, researchers have a new tool for studying the regulatory regions and networks of genes. Comparisons with other genomes should reveal common regulatory elements, and the environments of genes shared with other species may offer insight into function and regulation beyond the level of individual genes. The draft is also a starting point for studies of the three-dimensional packing of the genome into a cell’s nucleus. Such packing is likely to influence gene regulation … The human genome lies before us, ready for interpretation.

    Peer Bork and Richard Copley

    From Nature 15 February 2001

    Non-coding but functional

    INÊS BARROSO

    The vast majority of the human genome does not code for proteins and, until now, did not seem to contain defined gene-regulatory elements. Why evolution would maintain large amounts of ‘useless’ DNA had remained a mystery, and seemed wasteful. It turns out, however, that there are good reasons to keep this DNA. Results from the ENCODE project2–8 show that most of these stretches of DNA harbour regions that bind proteins and RNA molecules, bringing these into positions from which they cooperate with each other to regulate the function and level of expression of protein-coding genes. In addition, it seems that widespread transcription from non-coding DNA potentially acts as a reservoir for the creation of new functional molecules, such as regulatory RNAs.

    What are the implications of these results for genetic studies of complex human traits and disease? Genome-wide association studies (GWAS), which link variations in DNA sequence with specific traits and diseases, have in recent years become the workhorse of the field, and have identified thousands of DNA variants associated with hundreds of complex traits (such as height) and diseases (such as diabetes). But association is not causality, and identifying those variants that are causally linked to a given disease or trait, and understanding how they exert such influence, has been difficult. Furthermore, most of these associated variants lie in non-coding regions, so their functional effects have remained undefined.

    The ENCODE project provides a detailed map of additional functional non-coding units in the human genome, including some that have cell-type-specific activity. In fact, the catalogue contains many more functional non-coding regions than genes. These data show that results of GWAS are typically enriched for variants that lie within such non-coding functional units, sometimes in a cell-type-specific manner that is consistent with certain traits, suggesting that many of these regions could be causally linked to disease. Thus, the project demonstrates that non-coding regions must be considered when interpreting GWAS results, and it provides a strong motivation for reinterpreting previous GWAS findings. Furthermore, these results imply that sequencing studies focusing on protein-coding sequences (the ‘exome’) risk missing crucial parts of the genome and the ability to identify true causal variants.

    However, although the ENCODE catalogues represent a remarkable tour de force, they contain only an initial exploration of the depths of our genome, because many more cell types must yet be investigated. Some of the remaining challenges for scientists searching for causal disease variants lie in accessing data derived from cell types and tissues relevant to the disease under study; understanding how these functional units affect genes that may be distantly located7; and generalizing such results to the entire organism.

    Inês Barroso is at the Wellcome Trust Sanger Institute, Hinxton CB10 1SA, UK, and at the University of Cambridge Metabolic Research Laboratories and NIHR Cambridge Biomedical Research Centre, Cambridge, UK.

    e-mail: ib1@sanger.ac.uk


    Evolution and the code

    JONATHAN K. PRITCHARD & YOAV GILAD

    One of the great challenges in evolutionary biology is to understand how differences in DNA sequence between species determine differences in their phenotypes. Evolutionary change may occur both through changes in protein-coding sequences and through sequence changes that alter gene regulation.

    There is growing recognition of the importance of this regulatory evolution, on the basis of numerous specific examples as well as on theoretical grounds. It has been argued that potentially adaptive changes to protein-coding sequences may often be prevented by natural selection because, even if they are beneficial in one cell type or tissue, they may be detrimental elsewhere in the organism. By contrast, because gene-regulatory sequences are frequently associated with temporally and spatially specific gene-expression patterns, changes in these regions may modify the function of only certain cell types at specific times, making it more likely that they will confer an evolutionary advantage12.

    However, until now there has been little information about which genomic regions have regulatory activity. The ENCODE project has provided a first draft of a ‘parts list’ of these regulatory elements, in a wide range of cell types, and moves us considerably closer to one of the key goals of genomics: understanding the functional roles (if any) of every position in the human genome.

    Nonetheless, it will take a great deal of work to identify the critical sequence changes in the newly identified regulatory elements that drive functional differences between humans and other species. There are some precedents for identifying key regulatory differences (see, for example, ref. 13), but ENCODE’s improved identification of regulatory elements should greatly accelerate progress in this area. The data may also allow researchers to begin to identify sequence alterations occurring simultaneously in multiple genomic regions, which, when added together, drive phenotypic change — a process called polygenic adaptation14.

    However, despite the progress brought by the ENCODE consortium and other research groups, it remains difficult to discern with confidence which variants in putative regulatory regions will drive functional changes, and what these changes will be. We also still have an incomplete understanding of how regulatory sequences are linked to target genes. Furthermore, the ENCODE project focused mainly on the control of transcription, but many aspects of post-transcriptional regulation, which may also drive evolutionary changes, are yet to be fully explored.

    Nonetheless, these are exciting times for studies of the evolution of gene regulation. With such new resources in hand, we can expect to see many more descriptions of adaptive regulatory evolution, and how this has contributed to human evolution.

    Jonathan K. Pritchard and Yoav Gilad are in the Department of Human Genetics, University of Chicago, Chicago, Illinois 60637, USA. J.K.P. is also at the Howard Hughes Medical Institute, University of Chicago.

    e-mails: pritch@uchicago.edu; gilad@uchicago.edu 

    From catalogue to function

    ERAN SEGAL

    Projects that produce unprecedented amounts of data, such as the human genome project15 or the ENCODE project, present new computational and data-analysis challenges and have been a major force driving the development of computational methods in genomics. The human genome project produced one bit of information per DNA base pair, and led to advances in algorithms for sequence matching and alignment. By contrast, in its 1,640 genome-wide data sets, ENCODE provides a profile of the accessibility, methylation, transcriptional status, chromatin structure and bound molecules for every base pair. Processing the project’s raw data to obtain this functional information has been an immense effort.

    For each of the molecular-profiling methods used, the ENCODE researchers devised novel processing algorithms designed to remove outliers and protocol-specific biases, and to ensure the reliability of the derived functional information. These processing pipelines and quality-control measures have been adopted by the research community as the standard for the analysis of such data. The high quality of the functional information they produce is evident from the exquisite detail and accuracy achieved, such as the ability to observe the crystallographic topography of protein–DNA interfaces in DNase I footprints5, and the observation of more than one-million-fold variation in dynamic range in the concentrations of different RNA transcripts3.

    But beyond these individual methods for data processing, the profound biological insights of ENCODE undoubtedly come from computational approaches that integrated multiple data types. For example, by combining data on DNA methylation, DNA accessibility and transcription-factor expression, Thurman et al.4 provide fascinating insight into the causal role of DNA methylation in gene silencing. They find that transcription-factor binding sites are, on average, less frequently methylated in cell types that express those transcription factors, suggesting that binding-site methylation often results from a passive mechanism that methylates sites not bound by transcription factors.
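    The integration Segal highlights can be pictured as a simple grouped comparison: for each transcription factor, contrast the methylation of its binding sites between cell types that do and do not express the factor. A minimal pandas sketch, assuming a hypothetical long-format table (all file and column names invented):

```python
import pandas as pd

# Minimal sketch, assuming a hypothetical table with one row per
# (transcription factor, cell type, binding site): columns `tf`,
# `tf_expressed` (bool), and `site_methylation` (0..1 fraction).
df = pd.read_csv("tf_site_methylation.csv")

# Average binding-site methylation, split by whether the cell type
# expresses the factor. The reported pattern: methylation is lower
# where the factor is expressed (and presumably bound).
summary = (df.groupby(["tf", "tf_expressed"])["site_methylation"]
             .mean()
             .unstack("tf_expressed"))
print(summary.head())
```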

    Despite the extensive functional information provided by ENCODE, we are still far from the ultimate goal of understanding the function of the genome in every cell of every person, and across time within the same person. Even if the throughput rate of the ENCODE profiling methods increases dramatically, it is clear that brute-force measurement of this vast space is not feasible. Rather, we must move on from descriptive and correlative computational analyses, and work towards deriving quantitative models that integrate the relevant protein, RNA and chromatin components. We must then describe how these components interact with each other, how they bind the genome and how these binding events regulate transcription.

    If successful, such models will be able to predict the genome’s function at times and in settings that have not been directly measured. By allowing us to determine which assumptions regarding the physical interactions of the system lead to models that better explain measured patterns, the ENCODE data provide an invaluable opportunity to address this next immense computational challenge. ■

    Eran Segal is in the Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel.

    e-mail: eran.segal@weizmann.ac.il

    1. The ENCODE Project Consortium Science 306, 636–640 (2004).

    2. The ENCODE Project Consortium Nature 489, 57–74 (2012).

    3. Djebali, S. et al. Nature 489, 101–108 (2012).

    4. Thurman, R. E. et al. Nature 489, 75–82 (2012).

    5. Neph, S. et al. Nature 489, 83–90 (2012).

    6. Gerstein, M. B. et al. Nature 489, 91–100 (2012).

    7. Sanyal, A., Lajoie, B., Jain, G. & Dekker, J. Nature 489, 109–113 (2012).

    8. Birney, E. et al. Nature 447, 799–816 (2007).

    9. Mardis, E. R. Nature 470, 198–203 (2011).

    10. Gonzaga-Jauregui, C., Lupski, J. R. & Gibbs, R. A. Annu. Rev. Med. 63, 35–61 (2012).

    11. Sproul, D. et al. Proc. Natl Acad. Sci. USA 108, 4364–4369 (2011).

    12. Carroll, S. B. Cell 134, 25–36 (2008).

    13. Prabhakar, S. et al. Science 321, 1346–1350 (2008).

    14. Pritchard, J. K., Pickrell, J. K. & Coop, G. Curr. Biol. 20, R208–R215 (2010).

    15. Lander, E. S. et al. Nature 409, 860–921 (2001).



    Source: Science, Vol. 337, 7 September 2012, pp. 1159–1161 (http://www.sciencemag.org)

    GENOMICS: ENCODE Project Writes Eulogy for Junk DNA

    When researchers first sequenced the human genome, they were astonished by how few traditional genes encoding proteins were scattered along those 3 billion DNA bases. Instead of the expected 100,000 or more genes, the initial analyses found about 35,000, and that number has since been whittled down to about 21,000. In between were megabases of “junk,” or so it seemed.

    This week, 30 research papers, including six in Nature and additional papers published by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases. A decadelong project, the Encyclopedia of DNA Elements (ENCODE), has found that 80% of the human genome serves some purpose, biochemically speaking.

    “I don’t think anyone would have anticipated even close to the amount of sequence that ENCODE has uncovered that looks like it has functional importance,” says John A. Stamatoyannopoulos, an ENCODE researcher at the University of Washington, Seattle.

    Beyond defining proteins, the DNA bases highlighted by ENCODE specify landing spots for proteins that influence gene activity, strands of RNA with myriad roles, or simply places where chemical modifications serve to silence stretches of our chromosomes. These results are going “to change the way a lot of [genomics] concepts are written about and presented in textbooks,” Stamatoyannopoulos predicts.

    The insights provided by ENCODE into how our DNA works are already clarifying genetic risk factors for a variety of diseases and offering a better understanding of gene regulation and function. “It’s a treasure trove of information,” says Manolis Kellis, a computational biologist at Massachusetts Institute of Technology (MIT) in Cambridge who analyzed data from the project.

    The ENCODE effort has revealed that a gene’s regulation is far more complex than previously thought, being influenced by multiple stretches of regulatory DNA located both near and far from the gene itself and by strands of RNA not translated into proteins, so-called noncoding RNA. “What we found is how beautifully complex the biology really is,” says Jason Lieb, an ENCODE researcher at the University of North Carolina, Chapel Hill.

    Throughout the 1990s, various researchers called the idea of junk DNA into question. With the human genome in hand, the National Human Genome Research Institute (NHGRI) in Bethesda, Maryland, decided it wanted to find out once and for all how much of the genome was a wasteland with no functional purpose. In 2003, it funded a pilot ENCODE, in which 35 research teams analyzed 44 regions of the genome — 30 million bases in all, about 1% of the total genome. In 2007, the pilot project’s results revealed that much of this DNA sequence was active in some way. The work called into serious question our gene-centric view of the genome, finding extensive RNA-generating activity beyond traditional gene boundaries (Science, 15 June 2007, p. 1556). But the question remained whether the rest of the genome was like this 1%. “We want to know what all the bases are doing,” says Yale University bioinformatician Mark Gerstein.

    Teams at 32 institutions worldwide have now carried out scores of tests, generating 1640 data sets. While the pilot phase tests depended on computer chip–like devices called microarrays to analyze DNA samples, the expanded phase benefited from the arrival of new sequencing technology, which made it cost-effective to directly read the DNA bases. Taken together, the tests present “a greater idea of what the landscape of the genome looks like,” says NHGRI’s Elise Feingold.

    Because the parts of the genome used could differ among various kinds of cells, ENCODE needed to look at DNA function in multiple types of cells and tissues. At first the goal was to study intensively three types of cells. They included GM12878, the immature white blood cell line used in the 1000 Genomes Project, a large-scale effort to catalog genetic variation across humans; a leukemia cell line called K562; and an approved human embryonic stem cell line, H1-hESC.

    As ENCODE was ramping up, new sequencing technology brought the cost of sequencing down enough to make it feasible to test extensively even more cell types. ENCODE added a liver cancer cell line, HepG2; the laboratory workhorse cancer cell line, HeLa S3; and human umbilical cord tissue to the mix. Another 140 cell types were studied to a much lesser degree.

    In these cells, ENCODE researchers closely examined which DNA bases are transcribed into RNA and then whether those strands of RNA are subsequently translated into proteins, verifying predicted protein-coding genes and more precisely locating each gene’s beginning, end, and coding regions. The latest protein-coding gene count is 20,687, with hints of about 50 more, the consortium reports in Nature. Those genes account for about 3% of the human genome, less if one counts only their coding regions. Another 11,224 DNA stretches are classified as pseudogenes, “dead” genes now known to be active in some cell types or individuals.

    [Figure: Zooming in. A diagram of DNA in ever-greater detail shows how ENCODE’s various tests (ChIP-seq, computational predictions and RT-PCR, RNA-seq, DNase-seq, FAIRE-seq, 5C) translate DNA’s features (epigenetic modifications such as CH3 and CH3CO marks, DNase I hypersensitive sites, long-range regulatory elements such as enhancers, repressors/silencers and insulators, and cis-regulatory elements such as promoters and transcription-factor binding sites) into functional elements along a chromosome. Credit: adapted from The ENCODE Project Consortium, PLoS Biology 9, 4 (April 2011).]

    ENCODE drives home, however, that there are many “genes” out there in which DNA codes for RNA, not a protein, as the end product. The big surprise of the pilot project was that 93% of the bases studied were transcribed into RNA; in the full genome, 76% is transcribed. ENCODE defined 8800 small RNA molecules and 9600 long noncoding RNA molecules, each of which is at least 200 bases long. Thomas Gingeras of Cold Spring Harbor Laboratory in New York has found that various ones home in on different cell compartments, as if they have fixed addresses where they operate. Some go to the nucleus, some to the nucleolus, and some to the cytoplasm, for example. “So there’s quite a lot of sophistication in how RNA works,” says Ewan Birney of the European Bioinformatics Institute in Hinxton, U.K., one of the key leaders of ENCODE (see p. 1162).

    As a result of ENCODE, Gingeras and others argue that the fundamental unit of the genome and the basic unit of heredity should be the transcript — the piece of RNA decoded from DNA — and not the gene. “The project has played an important role in changing our concept of the gene,” Stamatoyannopoulos says.

    Another way to test for functionality of DNA is to evaluate whether specific base sequences are conserved between species, or among individuals in a species. Previous studies have shown that 5% of the human genome is conserved across mammals, even though ENCODE studies implied that much more of the genome is functional. So MIT’s Lucas Ward and Kellis compared functional regions newly identified by ENCODE among multiple humans, sampling from the 1000 Genomes Project. Some DNA sequences not conserved between humans and other mammals were nonetheless very much preserved across multiple people, indicating that an additional 4% of the genome is newly under selection in the human lineage, they report in a paper published online by Science (http://scim.ag/WardKellis). Two such regions were near genes for nerve growth and the development of cone cells in the eye, which underlie distinguishing traits in humans. On the flip side, they also found that some supposedly conserved regions of the human genome, as highlighted by the comparison with 29 mammals, actually varied among humans, suggesting these regions were no longer functional.

    Beyond transcription, DNA’s bases function in gene regulation through their interactions with transcription factors and other proteins. ENCODE carried out several tests to map where those proteins bind along the genome (Science, 25 May 2007, p. 1120). Two, DNase-seq and FAIRE-seq, gave an overview of the genome, identifying where the protein-DNA complex chromatin unwinds and a protein can hook up with the DNA, and were applied to multiple cell types. ENCODE’s DNase-seq found 2.89 million such sites in 125 cell types. Stamatoyannopoulos and his colleagues describe their more extensive DNase-seq studies in Science (p. 1190): His team examined 349 types of cells, including 233 fetal tissue samples aged 60 to 160 days. Each type of cell had about 200,000 accessible locations, and there seemed to be at least 3.9 million regions where transcription factors can bind in the genome. Across all cell types, about 42% of the genome can be accessible, he and his colleagues report. In many cases, the assays were able to pinpoint the specific bases involved in binding.

    Last year, Stamatoyannopoulos showed that these newly discovered functional regions sometimes overlap with specific DNA bases linked to higher or lower risks of various diseases, suggesting that the regulation of genes might be at the heart of these risk variations (Science, 27 May 2011, p. 1031). The work demonstrated how researchers could use ENCODE data to come up with new hypotheses about the link between genetics and a particular disorder. (The ENCODE analysis found that 12% of these bases, or SNPs, colocate with transcription factor binding sites and 34% are in open chromatin defined by the DNase-seq tests.) Now, in their new work published in Science, Stamatoyannopoulos’s lab has linked those regulatory regions to their specific target genes, homing in on the risk-enhancing ones. In addition, the group finds it can predict the cell type involved in a given disease. For example, the analysis fingered two types of T cells as pathogenic in Crohn’s disease, both of which are involved in this inflammatory bowel disorder. “We are informing disease studies in a way that would be very hard to do otherwise,” Birney says.

    Another test, called ChIP-seq, uses an antibody to home in on a particular DNA-binding protein and helps pinpoint the locations along the genome where that protein works. To date, ENCODE has examined about 100 of the 1500 or so transcription factors and about 20 other DNA-binding proteins, including those involved in modifying the chromatin-associated proteins called histones. The binding sites found through ChIP-seq coincided with the sites mapped through FAIRE-seq and DNase-seq. Overall, 8% of the genome falls within a transcription factor binding site, a percentage that is expected to double once more transcription factors have been tested.

    Yale’s Gerstein used these results to figure out all the interactions among the transcription factors studied and came up with a network view of how these regulatory proteins work. These transcription factors formed a three-layer hierarchy, with the ones at the top having the broadest effects and the ones in the middle working together to coregulate a common target gene, he and his colleagues report in Nature.
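    A hierarchy like the one Gerstein describes can be recovered from a directed regulation graph. A minimal sketch using the open-source networkx package, with an invented edge list: factors with no regulators sit at the top layer, and each factor’s layer follows the longest chain of regulators above it (real regulatory networks also contain feedback loops, which this DAG-only sketch ignores).

```python
import networkx as nx

# Minimal sketch: layer transcription factors by position in a directed
# regulation graph, where an edge A -> B means "A regulates B".
# The edge list is invented for illustration.
G = nx.DiGraph([
    ("TF_top", "TF_mid1"), ("TF_top", "TF_mid2"),
    ("TF_mid1", "TF_leaf"), ("TF_mid2", "TF_leaf"),
])

levels = {}
for tf in nx.topological_sort(G):   # regulators are visited before targets
    preds = list(G.predecessors(tf))
    levels[tf] = 0 if not preds else 1 + max(levels[p] for p in preds)

for tf, level in sorted(levels.items(), key=lambda kv: kv[1]):
    print(level, tf)   # 0 = top of the hierarchy
```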

    Using a technique called 5C, other researchers looked for places where DNA from distant regions of a chromosome, or even different chromosomes, interacted. It found that an average of 3.9 distal stretches of DNA linked up with the beginning of each gene. “Regulation is a 3D puzzle that has to be put together,” Gingeras says. “That’s what ENCODE is putting out on the table.”

    To date, NHGRI has put $288 million toward ENCODE, including the pilot project, technology development, and ENCODE efforts for the mouse, nematode, and fruit fly. All together, more than 400 papers have been published by ENCODE researchers. Another 110 or more studies have used ENCODE data, says NHGRI molecular biologist Michael Pazin. Molecular biologist Mathieu Lupien of the University of Toronto in Canada authored one of those papers, a study looking at epigenetics and cancer. “ENCODE data were fundamental” to the work, he says. “The cost is definitely worth every single dollar.”

    –ELIZABETH PENNISI

    ENCODE By the Numbers

    147 cell types studied

    80% functional portion of human genome

    20,687 protein-coding genes

    18,400 RNA genes

    1640 data sets

    30 papers published this week

    442 researchers

    $288 million funding for pilot,

    technology, model organism, and current project


    http://www.nature.com/encode/

Read Full Post »

Comprehensive Genomic Characterization of Squamous Cell Lung Cancers

Reporter: Aviva Lev-Ari, PhD, RN

Nature (2012) doi:10.1038/nature11404

Received 09 March 2012 
Accepted 09 July 2012 
Published online 09 September 2012

The primary and processed data used to generate the analyses presented here can be downloaded by registered users from The Cancer Genome Atlas (https://tcga-data.nci.nih.gov/tcga/tcgaDownload.jsp, https://cghub.ucsc.edu/ and https://tcga-data.nci.nih.gov/docs/publications/lusc_2012/).

Lung squamous cell carcinoma is a common type of lung cancer, causing approximately 400,000 deaths per year worldwide. Genomic alterations in squamous cell lung cancers have not been comprehensively characterized, and no molecularly targeted agents have been specifically developed for its treatment. As part of The Cancer Genome Atlas, here we profile 178 lung squamous cell carcinomas to provide a comprehensive landscape of genomic and epigenomic alterations. We show that the tumour type is characterized by complex genomic alterations, with a mean of 360 exonic mutations, 165 genomic rearrangements, and 323 segments of copy number alteration per tumour. We find statistically recurrent mutations in 11 genes, including mutation of TP53 in nearly all specimens. Previously unreported loss-of-function mutations are seen in the HLA-A class I major histocompatibility gene. Significantly altered pathways included NFE2L2 and KEAP1 in 34%, squamous differentiation genes in 44%, phosphatidylinositol-3-OH kinase pathway genes in 47%, and CDKN2A and RB1 in 72% of tumours. We identified a potential therapeutic target in most tumours, offering new avenues of investigation for the treatment of squamous cell lung cancers.

Read Full Post »

Imaging: seeing or imagining? (Part 1)

Author and Curator: Dror Nir, PhD

That is the question…

We are all used to clichés such as "seeing is believing", "seeing is knowing", "don't be blind" and so on. Out of our seven (natural and supernatural) senses we tend to use and trust our eyes the most, especially when it comes to learning, accumulating experience and accepting information as correct. On the other hand, we are taught from childhood to be aware of illusions and not to judge according to looks but rather according to matter. The problem is, does one recognise the substance inside an image? To answer this, a wide-ranging discipline of image interpretation was developed alongside imaging technology. In order not to fatigue the innocent reader, I'll review the state of the art of imaging in medicine in subsequent posts, each dedicated to a specific modality. This post is dedicated to…

Current main trends in ultrasound imaging in cancer patients’ management;

The most used imaging modality in medicine is ultrasound. This is because it is noninvasive, practically harmless, relatively inexpensive and fairly accessible; i.e. everyone can operate it, even a layman! No formal training or certification is required!

Interestingly enough, ultrasound is labeled by the regulatory agencies, FDA and CE, as a diagnostic medical device! This is a real demonstration of the aforementioned tendency to believe our eyes, even if these eyes do not see well or the brain behind them lacks the experience required for ultrasound image interpretation.

Since "ultrasound imaging in medicine" is the subject of many text books and articles, I found it appropriate, for the sake of this post, simply to refer the reader to Wikipedia's page (http://en.wikipedia.org/wiki/Medical_ultrasonography) on ultrasound in medicine: "Diagnostic Sonography (ultrasonography) is an ultrasound-based diagnostic imaging technique used for visualizing subcutaneous body structures including tendons, muscles, joints, vessels and internal organs for possible pathology or lesions. Obstetric sonography is commonly used during pregnancy and is widely recognized by the public. In physics, the term "ultrasound" applies to all sound waves with a frequency above the audible range of normal human hearing, about 20 kHz. The frequencies used in diagnostic ultrasound are typically between 2 and 18 MHz."

When it comes to cancer patients' management, ultrasound provides real-time imaging of body organs within a relatively cost-effective workflow. However, it suffers from lack of sensitivity and specificity, especially if the investigator is still fairly inexperienced. Therefore, no diagnosis is confirmed without biopsy of the suspected lesion discovered during the ultrasound scan. As mentioned in my previous post, identification of suspicious lesions in the prostate during TRUS is so inconclusive that in order to reach diagnosis, biopsies are taken randomly.

Did we hit the target?

To improve prostate cancer detection, various biopsy strategies to increase the diagnostic yield of prostate biopsy have been devised: sampling of visually abnormal areas; more lateral placement of biopsies; anterior biopsies; and obtaining an increased number of cores, with up to 45 biopsy cores [1-5].

In recent years, new features such as 3D and contrast-enhanced sonography, elastography and HistoScanning were added to the basic video image in order to improve the quality of ultrasound-based investigation of cancer patients.

3-D Sonography.

3-D ultrasound allows simultaneous biplanar imaging of the organ with computer reconstructions providing a coronal plane as well as a rendered 3-D image. This promises to improve the detection and pre-clinical grading of cancer lesions. Still, the interpretation is very much "image quality" and "user experience" dependent.

3D imaging of breast using ABUS by Siemens; using the coronal view to better investigate a lesion.

3D imaging of breast using Voluson 730 by GE; three planes are presented for review by the radiologist.

Contrast-Enhanced Sonography.

Using intravenous micro-bubble agents in combination with color and power Doppler imaging contributes to an increase in the signal obtained in areas of increased vascularity. The underlying assumption is that vascularization in the tumor's area will be more pronounced than in normal tissue. Hot off the press: the UK National Institute for Health and Clinical Excellence (NICE) has published guidance that supports the use of contrast-enhanced ultrasound with Bracco's SonoVue ultrasound contrast agent for the diagnosis of liver cancer [6]. The main use of contrast-enhanced ultrasound is directing biopsies to the "most suspicious" areas, i.e. those which present higher vascularity. Nevertheless, in reported clinical studies [7], the sensitivity of targeted biopsies under contrast-enhanced ultrasound was only 68%.

Elastography.

Elastography is an imaging technique that evaluates the elasticity of the tissue. The underlying assumption is that tumors present greater stiffness than normal tissue and therefore will be characterized by limited compressibility. The first person to introduce this concept was Professor Jonathan Ophir, University of Houston, Texas [http://www.uth.tmc.edu/schools/med/rad/elasto/]:
Estimation of differences in lesions' stiffness relies on computing the level of correlation between consecutive imaging frames while the tissue that is being imaged is subjected to changing compression, usually applied by the sonographer who manipulates the ultrasound probe. Since malignant and benign lesions exhibit similar elasticity, elastography is not suitable for lesion characterisation. Therefore, as in the previous example, elastography's main use is identifying suspicious areas in which to take biopsies [8, 9]. Furthermore, users' experiences related to elastography reveal a lot of controversy. For example, according to Prof. Bruno Fornage of MD Anderson [http://www.auntminnie.com/index.aspx?sec=sup&sub=wom&pag=dis&ItemID=99028]: "current commercially available scanners are confounded by a lack of intraobserver reliability, so that it's not unusual to produce an opposite result on repeat testing a few seconds later". "There are very few evidence-based non-industry sponsored studies reporting substantial superiority [of elastography] over standard grayscale ultrasound," he said. "In fact, a sensitivity of 82% in the diagnosis of breast cancer has been reported for elastography, versus 94% for conventional grayscale ultrasound. More disturbing is that even if the technology of elastography worked flawlessly, the huge overlap in breast pathology between very firm solid benign lesions and less firm malignancies gives this technology no practical place in the differential diagnosis of solid breast masses."
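
For the technically curious, the frame-to-frame correlation underlying elastography can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not any vendor's algorithm: it estimates the local axial shift between two simulated echo lines by normalized cross-correlation, the displacement estimate from which strain images are derived. The synthetic signal, window sizes and function names are assumptions for illustration only.

    import numpy as np

    def local_displacement(frame_a, frame_b, win=32, search=8):
        """Estimate the axial shift of each window of frame_a within frame_b
        using normalized cross-correlation (toy 1-D echo-line model)."""
        shifts = []
        for start in range(0, len(frame_a) - win, win):
            ref = frame_a[start:start + win]
            best_shift, best_corr = 0, -np.inf
            for s in range(-search, search + 1):
                lo, hi = start + s, start + s + win
                if lo < 0 or hi > len(frame_b):
                    continue
                seg = frame_b[lo:hi]
                # Pearson-style normalized cross-correlation of the two windows
                corr = np.dot(ref - ref.mean(), seg - seg.mean()) / (
                    win * ref.std() * seg.std() + 1e-12)
                if corr > best_corr:
                    best_corr, best_shift = corr, s
            shifts.append(best_shift)
        return np.array(shifts)

    # Two synthetic frames: under compression, the shallow ("soft") half of the
    # line displaces by 4 samples, the deep ("stiff") half by only 1 sample.
    rng = np.random.default_rng(0)
    pre = rng.standard_normal(1024)
    depth = np.arange(1024)
    displacement = np.where(depth < 512, 4, 1)
    post = np.interp(depth, depth - displacement, pre)

    # Larger-magnitude shifts mark the more compressible (softer) region.
    print(local_displacement(pre, post))

Real scanners do this in two dimensions at frame rate and convert the displacement gradient into a strain image; the controversy quoted above is precisely about how robust such estimates are in practice.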

 

HistoScanning.

HistoScanning™ is a novel ultrasound-based software technology that applies advanced algorithms to address the clinical requirements for tissue characterization. It visualizes the position and extent of tissue suspected of being malignant in the target organ. In this respect its design is unique and superior to other ultrasound-based technologies [10, 11]. HistoScanning's first clinically available application (since 2009) is in the management of prostate cancer patients.

 

 

HistoScanning indicating suspicious lesions superimposed on 3-D ultrasound of the prostate. The three imaging planes and a 3D reconstruction of the segmented prostate are presented.

 

 

To conclude: if we are looking to improve the current state of the art in ultrasound-based cancer patients' management, we should strive to introduce systems that will enable medical practitioners to rule in or rule out suspicious lesions at imaging before they biopsy them. Using ultrasound merely as a tool for directing biopsies, as is done today, is not enough. Indeed, this requires a capability for ultrasound-based tissue characterisation in addition to detection of ultrasound-based abnormality (i.e. circumstantial evidence for cancer). To date, the only available system that bears the promise of providing such improvement is HistoScanning. Obviously, the level of confidence in the Negative Predictive Value of HistoScanning and future systems alike must be built to become high enough to provide the medical practitioner the reassurance and comfort that he is not missing any significant cancer by not taking a biopsy. Such confidence can only be built by subjecting these systems (i.e. HistoScanning and alike) to properly designed clinical studies and, not less important, by reporting the experience of early adopters who test them in controlled routine use.

 

References

  1. Flanigan RC, Catalona WJ, Richie JP, Ahmann FR, Hudson MA, Scardino PT, deKernion JB, Ratliff TL, Kavoussi LR, Dalkin BL: Accuracy of digital rectal examination and transrectal ultrasonography in localizing prostate cancer: results of a multicenter clinical trial of 6,630 men. J Urol 1994; 152: 1506–1509.
  2. Eichler K, Hempel S, Wilby J, Myers L, Bachmann LM, Kleijnen J: Diagnostic value of systematic biopsy methods in the investigation of prostate cancer: a systematic review. J Urol 2006; 175: 1605–1612.
  3. Delongchamps NB, de la Roza G, Jones R, Jumbelic M, Haas GP: Saturation biopsies on autopsied prostates for detecting and characterizing prostate cancer. BJU Int 2009; 10: 49–54.
  4. Rifkin MD, Dähnert W, Kurtz AB: State of the art: endorectal sonography of prostate gland. AJR Am J Roentgenol 1990; 154: 691–700.
  5. Chrouser KL, Lieber MM: Extended and saturation needle biopsy. Curr Urol Rep 2004; 5: 226–230.
  6. http://www.auntminnieeurope.com/index.aspx?sec=nws&sub=rad&pag=dis&ItemID=607068&wf=284
  7. Yi A, Kim JK, Park SH, Kim KW, Kim HS, Kim JH, Eun HW, Cho KS: Contrast-enhanced sonography for prostate cancer detection in patients with indeterminate clinical findings. Am J Roentgenol 2006; 186: 1431–1435.
  8. König K, Scheipers U, Pesavento A, Lorenz A, Ermert H, Senge T: Initial experiences with real-time elastography guided biopsies of the prostate. J Urol 2005; 174: 115–117.
  9. Pallwein L, Mitterberger M, Struve P, Horninger W, Aigner F, Bartsch G, Gradl J, Schurich M, Pedross F, Frauscher F: Comparison of sonoelastography guided biopsy with systematic biopsy: impact on prostate cancer detection. Eur Radiol 2007; 17: 2278–2285.
  10. Salomon G, Spethmann J, Beckmann A, Autier P, Moore C, Durner L, Sandmann M, Hase A, Schlomm T, Michl U, Heinzer H, Grafen M, Steuber T: Accuracy of HistoScanning for the prediction of a negative surgical margin in patients undergoing radical prostatectomy. Published online in British Journal of Urology International (BJUI), 09/08/2012.
  11. Simmons LAM, Autier P, Zatura F, Braeckman JG, Peltier A, Romics I, Stenzl A, Treurnicht K, Walker T, NM D, Moore CM, Emberton M: Detection, localisation and characterisation of prostate cancer by Prostate HistoScanning. British Journal of Urology International (BJUI), Vol 110, Issue 1 (July), p. 28–35.

 

 Written by Dror Nir

Read Full Post »

 

Reporter: Aviva Lev-Ari, PhD, RN

Glucose in the ICU — Evidence, Guidelines, and Outcomes

Brian P. Kavanagh, M.B., F.R.C.P.C.

September 7, 2012 (10.1056/NEJMe1209429)

Just over a decade ago, a single-center Belgian study showed that normalization of blood glucose in critically ill patients lowered hospital mortality by more than 30%.1 Although subsequent studies were unable to reproduce these findings, the appeal of such a straightforward intervention was too great to resist: guidelines from professional organizations2,3 were published, and editorial commentary4 highlighted initiatives by the Institute for Healthcare Improvement, the Joint Commission on Accreditation of Healthcare Organizations, and the Volunteer Hospital Association that incorporated tight glucose control as a standard. Indeed, the prestigious Codman Award of the Joint Commission was presented in 2004 for a program of glycemic control in critical care that “saved” patients’ lives.5 Tight glucose control for critically ill patients was in vogue.

The publication in 2009 of a large international trial (the Normoglycemia in Intensive Care Evaluation–Survival Using Glucose Algorithm Regulation [NICE-SUGAR] study6) followed that of several negative trials. The NICE-SUGAR study, which involved more than 6100 patients, showed that tight glycemic control didn’t decrease mortality — it increased it. Most guidelines were hastily revised. However, in the same year a separate study by Vlasselaers et al.7 in pediatric intensive care unit (ICU) patients, most of whom had undergone cardiac surgery, showed that normalizing glucose decreased mortality from 6% to 3%, keeping open the question — at least in critically ill children.

The study by Agus et al.8 now reported in the Journal provides new key data. A total of 980 children (up to 36 months of age) admitted to an ICU after cardiac surgery were randomly assigned to usual care or tight glucose control. The results are clear — there was no significant difference in the incidence of health care–associated infections (the primary outcome) or in any of the secondary outcomes, including survival. Moreover, the rate of hypoglycemia (blood glucose level <40 mg per deciliter [2.2 mmol per liter]) in the intervention group (3%) was far less than that previously reported (25%).7 These findings contrast sharply with those of Vlasselaers et al.,7 who found that secondary infections, length of stay, and mortality were reduced. Faced with contradictory results from two large clinical trials, how does the clinician know which results are correct?

First, biologic plausibility is important in attributing a survival benefit to a specific intervention. In the first pediatric ICU study, the additional deaths in the control group did not appear to be due to causes related to hyperglycemia,7 a finding that suggests that the benefit was unlikely to be reproducible. The current authors, exclusively studying children after cardiac surgery, recognized that mortality in this population is usually due to prohibitive anatomy or surgical challenge; these are circumstances not amenable to correction by metabolic control.

Second, might differences in the target plasma glucose explain the discrepant findings? Agus et al. aimed for a higher target range of plasma glucose in the intervention group (80 to 110 mg per deciliter [4.4 to 6.1 mmol per liter]) than was targeted in the first pediatric study (infants, 50 to 80 mg per deciliter [2.8 to 4.4 mmol per liter]; children, 70 to 100 mg per deciliter [3.9 to 5.6 mmol per liter]).7 Perhaps the lower glucose target is preferable? The weight of evidence is against this, and if this target were used, the incidence and severity of hypoglycemia would have been greater, as previously reported.7 Hypoglycemia is never to a patient’s benefit, and its negative impact on neurocognitive development in children is of particular concern.
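
Since the trial targets above appear in both mg per deciliter and mmol per liter, the conversion is worth making explicit: dividing mg/dL by roughly 18 (one tenth of glucose's molar mass of about 180.2 g/mol) gives mmol/L. A minimal sketch in Python:

    MG_DL_PER_MMOL_L = 18.02  # glucose molar mass ~180.2 g/mol; 1 mmol/L = 18.02 mg/dL

    def mg_dl_to_mmol_l(mg_dl: float) -> float:
        return mg_dl / MG_DL_PER_MMOL_L

    # The thresholds discussed above
    for v in (40, 80, 110):
        print(f"{v} mg/dL = {mg_dl_to_mmol_l(v):.1f} mmol/L")
    # 40 mg/dL = 2.2 mmol/L, 80 = 4.4, 110 = 6.1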

It seems that — as in adults — claims for survival benefit in critically ill children are incorrect. Furthermore, there is no reason why the effects of glucose control in children would be opposite to those in adults. In aggregate, the data do not support a basis for embarking on a pediatric megatrial.

Assuming the results of the NICE-SUGAR study6 are generalizable, we must be grateful for the future lives saved by avoiding the practice of normalizing glucose in the ICU. At the same time, we should reflect on why a large study with mortality as an end point was needed in the first place.

Perhaps the most important question from a decade of studying glucose control in the ICU is how influential practice guidelines advocating tight glucose control were developed2,3 yet turned out to be harmful — an issue noted in the lay press.9 Guideline writers, reflecting on the experience, must accept that there are multiple sources of clinical knowledge10 and must pay careful attention to trial characteristics — especially study reproducibility — in order to provide advice that genuinely helps clinicians. Clinicians in turn should use guidelines wisely, recognizing that no single source of knowledge is sufficient to guide clinical decisions.10

Is the door closed on studying glucose homeostasis in the critically ill? No, but it should be closed on the routine normalization of plasma glucose in critically ill adults and children.

Disclosure forms provided by the author are available with the full text of this article at NEJM.org.

This article was published on September 7, 2012, at NEJM.org.

SOURCE INFORMATION

From the Department of Critical Care Medicine and Anesthesia, Hospital for Sick Children, University of Toronto, Toronto.

Source:

http://www.nejm.org/doi/full/10.1056/NEJMe1209429?query=OF

REFERENCES

1. van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in critically ill patients. N Engl J Med 2001;345:1359-1367.

2. American College of Endocrinology and American Diabetes Association. Consensus statement on inpatient diabetes and glycemic control. Diabetes Care 2006;29:1955-1962.

3. Dellinger RP, Carlet JM, Masur H, et al. Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock. Intensive Care Med 2004;30:536-555.

4. Angus DC, Abraham E. Intensive insulin therapy in critical illness. Am J Respir Crit Care Med 2005;172:1358-1359.

5. The Joint Commission. 2004 Ernest Amory Codman Award winners (http://www.jointcommission.org/2004_ernest_amory_codman_award_winners).

6. The NICE-SUGAR Study Investigators. Intensive versus conventional glucose control in critically ill patients. N Engl J Med 2009;360:1283-1297.

7. Vlasselaers D, Milants I, Desmet L, et al. Intensive insulin therapy for patients in paediatric intensive care: a prospective, randomised controlled study. Lancet 2009;373:547-556.

8. Agus MSD, Steil GM, Wypij D, et al. Tight glycemic control versus standard care after pediatric cardiac surgery. N Engl J Med 2012. DOI: 10.1056/NEJMoa1206044.

9. Groopman J, Hartzband P. Why 'quality' care is dangerous. Wall Street Journal. April 9, 2009 (http://online.wsj.com/article/SB123914878625199185.html).

10. Tonelli MR, Curtis JR, Guntupalli KK, et al. An official multi-society statement: the role of clinical research results in the practice of critical care medicine. Am J Respir Crit Care Med 2012;185:1117-1124.

 

Read Full Post »

FDA: Strengthening Our National System for Medical Device Post-market Surveillance

Reporter: Aviva Lev-Ari, PhD, RN

 

September 7, 2012 | By Damian Garde

The FDA wants industry feedback on a host of new post-market surveillance initiatives, designed to better track, analyze and report the performance of medical devices.

In a report released Thursday, the agency proposes a four-point plan to improve its post-market system, including the previously announced unique ID program and a modernization of MedWatch. The agency is taking any and all opinions from devicemakers and members of the healthcare community through its website, and the FDA plans to host public meetings on the plan this month.

 

Here’s a summary of the four points:

 

Establish a unique device ID system: In keeping with its July announcement, the FDA wants to require devicemakers to tag their products with an alphanumeric code, disclosing the device’s production information, serial number, manufacturing date and expiration date. The goal is to help the FDA and healthcare community to more accurately track and analyze device-related adverse events. Once rolled out, the ID system will cost the industry $65 million, the FDA has said.
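
To make the proposed identifier concrete, here is a hypothetical sketch of the kind of record such an alphanumeric code would need to carry. The class, field names and label format below are illustrative assumptions, not the FDA's actual UDI specification.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DeviceID:
        """Hypothetical unique device ID record; fields mirror the goals stated
        above: production information, serial number, manufacturing and
        expiration dates."""
        device_code: str   # alphanumeric identifier for the device model
        lot_number: str    # production information
        serial_number: str
        manufactured: date
        expires: date

        def label(self) -> str:
            # Illustrative flat encoding, e.g. for printing on a device label
            return (f"{self.device_code}|{self.lot_number}|{self.serial_number}"
                    f"|{self.manufactured:%Y%m%d}|{self.expires:%Y%m%d}")

    tag = DeviceID("ABC-1234", "LOT42", "SN0001", date(2012, 9, 1), date(2017, 9, 1))
    print(tag.label())  # ABC-1234|LOT42|SN0001|20120901|20170901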

Promote international device registries: The agency isn’t looking to found a huge, centralized registry, housing data on device uses and reactions. Instead, the FDA wants to help governments and private outfits set up and operationalize smaller registries, sharing data with one another to keep tabs on device performance. The agency plans to host a series of public workshops to educate would-be registry founders on the best way to move forward.

Modernize adverse event reporting: Currently, the FDA relies on spontaneous reporting when devices go awry, primarily through its voluntary MedWatch system. That model is inherently limited, the agency says, and it wants to institute automated reporting systems in hospitals, encourage more electronic reporting, develop a mobile app for MedWatch and update the MAUDE adverse event database, which the FDA says is technologically outdated.

Develop new tools and methods for post-market surveillance: This is the catch-all part of the FDA’s plan, in which the agency discusses future innovations that could generate, synthesize and interpret post-market data to drive decision-making. For instance, the FDA wants to automate data analysis to identify spikes in adverse events across disparate data sources. The agency also proposes instituting quantitative decision analysis in its post-market deliberations, aiming to better standardize its methods.
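
As a rough illustration of the automated analysis the report envisions, the sketch below flags periods in which a device's adverse event count rises well above its trailing average. The z-score rule, window size and made-up counts are assumptions for illustration; the FDA report does not prescribe an algorithm.

    import statistics

    def flag_spikes(counts, window=8, threshold=3.0):
        """Return indices where a count exceeds the trailing mean by more than
        `threshold` trailing standard deviations (simple spike detection)."""
        flagged = []
        for i in range(window, len(counts)):
            history = counts[i - window:i]
            mean = statistics.fmean(history)
            sd = statistics.stdev(history) or 1.0  # guard against zero variance
            if (counts[i] - mean) / sd > threshold:
                flagged.append(i)
        return flagged

    # Made-up weekly adverse event counts for a single device
    weekly = [4, 5, 3, 6, 4, 5, 4, 6, 5, 21, 5, 4]
    print(flag_spikes(weekly))  # [9]: the week with 21 reports stands out

In production, such a rule would run across many devices and data sources at once, which is exactly where the cross-source aggregation discussed above matters.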

http://www.fiercemedicaldevices.com/story/fda-unveils-plan-device-surveillance/2012-09-07?utm_medium=nl&utm_source=internal

http://www.fda.gov/downloads/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDRH/CDRHReports/UCM301924.pdf

 

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

Countries colored in brown rank highly in the Growth Competitiveness Index 2004 – 2005, World Economic Forum. Black circles represent select biotechnology and life sciences clusters.

North America
Seattle, USA
San Francisco, USA
Los Angeles, USA
San Diego, USA
Saskatoon, Canada
*Minneapolis/St. Paul/Rochester, USA
Austin, USA
Toronto, Canada
Montreal, Canada
Boston, USA
New York/New Jersey, USA
Philadelphia, USA
Baltimore/Washington, DC, USA
Research Triangle NC, USA

Central America / South America
West Havana, Cuba
Belo Horizonte/Rio de Janeiro, Brazil
Sao Paulo, Brazil

United Kingdom / Ireland
Glasgow-Edinburgh, Scotland
Manchester-Liverpool, England
London, England
Cambridge-SE England
Dublin, Republic of Ireland

Continental Europe
Brussels, Belgium
Medicon Valley, Denmark/Sweden
Stockholm/Uppsala, Sweden
Helsinki, Finland
Paris, France
Biovalley, France/Germany/Switzerland
BioAlps, France/Switzerland
Sophia-Antipolis, France
BioRhine, Germany
BioTech Munich, Germany
BioCon Valley, Germany

Mideast
Israel

Africa
Capetown, South Africa

Asia
Beijing, China
Shanghai, China
Shenzhen, China
Hong Kong, China
Tokyo-Kanto, Japan
Kansai, Japan
Hokkaido, Japan
Taipei, Taiwan
Hsinchu, Taiwan
Singapore
Dengkil, Malaysia
New Delhi, India
Hyderabad, India
Bangalore, India

Oceania
Brisbane, Australia
Sydney, Australia
Melbourne, Australia
Dunedin, New Zealand

Definitions

Biotechnology: Biotechnology is the use of cellular and biomolecular processes to solve problems or make useful products. [Biotechnology Industry Organization – BIO]

Bioscience/Life Science: pharmaceuticals, biotechnology, medical devices, R&D in the life sciences. [Devol et al., 2005]

Clusters: Clusters are a geographically proximate group of interconnected companies and associated institutions in a particular field, including product producers, service providers, suppliers, universities, and trade associations. [Cluster Mapping Project, Institute for Strategy and Competitiveness, Harvard Business School]

* Cited no. 8 for Total Life Sciences Current Impact by Devol (2005) defined as pharmaceutical, biotechnology, medical devices, and R&D in the life sciences. Minneapolis/St. Paul/Rochester is principally a medical device cluster.

References

Map is a Mercator projection that exaggerates the size of areas far from the equator.

Global biotechnology clusters map published by:

Andersen, Jørn Bang, “Establishment of Nordic Innovation Centres in Asia?” by the Nordic Innovation Centre for the Nordic Council of Ministers, Copenhagen, 2008.

Dimova, Maria, Andres Mitnik, Paula Suarez-Buitron and Marcos Siqueira. “Brazil Biotech Cluster: Minas Gerais” [PDF] Institute for Strategy and Competitiveness, Harvard Business School, Spring 2009.

Encyclopedia of Globalization, Routledge, November 2006.

Hamdouch, Abdelillah and Feng He. “R&D Offshoring and Clustering Dynamics in Pharmaceuticals and Biotechnology: Insights from the Chinese Case,” [PDF] The Spirit of Innovation Forum III, May 14-16, 2007.

Loh, Melvyn Wei Ming, “Riding the Biotechnology Wave: A Mixed-Methods Analysis of Malaysia’s emerging Biotechnology industry” [PDF] Victoria University of Wellington, New Zealand, 2009.

Murray, Fiona and Helen Hsi, “Knowledge Workers in Biotechnology: Occupational Structures, Careers & Skill Demands” [PDF] MIT Sloan School of Management, September 2007.

Rinaldi, Andrea. "More than the sum of their parts? Clustering is becoming more prevalent in the biosciences, despite concerns over the sustainability and economic effectiveness of science parks and hubs," EMBO Reports, February 2006 [PDF]

Royer, Susanne, "Crossing-borders: International Clusters: An analysis of Medicon Valley based on Value-Adding Web" [PDF] University of Flensburg, July 8, 2007.

Salerno, Reynolds. “International Biological Threat Reduction at Sandia,” Sandia National Laboratory, July 31, 2006 [PDF]

Source:

http://biotech.about.com/gi/o.htm?zi=1/XJ&zTi=1&sdn=biotech&cdn=b2b&tm=7&f=00&tt=3&bt=1&bts=1&zu=http%3A//mbbnet.umn.edu/scmap/biotechmap.html

The 26th annual issue of Beyond Borders, the Ernst & Young annual report on the global biotechnology industry.

Our analysis of trends across the leading centers of biotech activity reveals both signs of hope and causes for concern. The financial performance of publicly traded companies is more robust than at any time since the onset of the global financial crisis, with the industry returning to double-digit revenue growth.

Companies that had made drastic cuts in R&D spending in the aftermath of the crisis are now making substantial increases in their pipeline development efforts.

But even as things are heading back to normal on the financial performance front, the financing situation remains mired in the “new normal” we have been describing for the last few years. While the biotech industry raised more capital in 2011 than at any time since the genomics bubble of 2000, this increase was driven entirely by large debt financings by the industry’s commercial leaders.

The money flowing to the vast majority of smaller firms, including pre-commercial, R&D-phase companies — a measure we refer to as “innovation capital” — has remained flat for the last several years.

As such, the question we have posed for the last two years is more relevant than ever: how can biotech innovation be sustained during a time of serious resource constraints?

These are timely topics, and we look forward to exploring them with you.

Take a closer look at our findings and point of view:

  • Holistic open learning networks: Holistic open learning networks (HOLNets) could make R&D more efficient by harnessing the power of big data to develop real-time insights. Even as biotech adjusts to its new normal, health care is moving to an outcomes-based ecosystem characterized by new incentives, new technologies and big data.

    HOLNets could reinvent R&D by pooling data, creating standards and engaging regulators and patients.

    Now, more than ever, this approach is feasible because it is in the self-interest of the entities that would need to be part of it.

  • Financial performance heads back to normal: The aggregate financial performance of publicly traded biotechnology companies in the four established clusters — the United States, Europe, Canada and Australia — showed encouraging signs of recovery and stabilization.

    Growth in established biotechnology centers, 2010-11 (US$b)

    Source: Ernst & Young and company financial statement data.
    Numbers may appear inconsistent because of rounding.

    The acquisition of three large US companies — Genzyme Corp., Cephalon and Talecris Biotherapeutics — by non-biotech buyers made a significant dent in the industry's 2011 performance.

    To get a sense of the organic “apples-to-apples” growth of the industry, we have therefore calculated normalized growth rates that remove these three firms from the 2010 numbers.

    After adjusting for these large acquisitions, the industry’s revenue growth rate returned to double-digit territory for the first time since the global financial crisis. R&D grew by 9% in 2011, after being slashed in 2009 and growing by a modest 2% in 2010.

    US biotechnology at a glance, 2010-11 (US$b)

    Source: Ernst & Young and company financial statement data.
    Numbers may appear inconsistent because of rounding.

    As always, since the US accounts for a large majority of the industry’s revenues, the US story is very similar to the global one.

    After normalizing for the acquisitions of Genzyme, Cephalon and Talecris, the US industry's revenues increased by 12%, outpacing the 10% growth rate seen in 2010 and 2009 (adjusted for the Genentech acquisition).

    Source:

    http://www.ey.com/GL/en/Industries/Life-Sciences/Beyond-borders—global-biotechnology-report-2012_Financial-performance-heads-back-to-normal


  • Financing remains stuck in the “new normal”
  • Big pharma stayed away from M&A deals: Given the critical role that big pharma could play in supporting the biotech innovation ecosystem, and the fact that the expected exit for most venture investors is an acquisition, this lack of activity is unsettling. With big pharma in the midst of crossing the long-awaited patent cliff, many expected a more pronounced upsurge in transactions — particularly for targets with product revenue or very late-stage product candidates.

    However, only Sanofi's acquisition of Genzyme (which really played out in 2010 but was not finally negotiated and closed until 2011) entered the ranks of the year's 10 largest deals. Even more noteworthy, big pharma was the buyer in only 7 of the year's 57 M&A transactions.

    US and European M&As, 2006-11

    Source: Ernst & Young, Capital IQ, MedTRACK and company news.
    Chart excludes transactions where deal terms were not publicly disclosed.

    Meanwhile, the number of strategic alliances declined for the second straight year, and the potential “biobucks” value of these deals hit a six-year low.

    US and European strategic alliances based on up-front payments, 2006-11




Source:

http://www.ey.com/GL/en/Industries/Life-Sciences/Beyond-borders—global-biotechnology-report-2012-Big-pharma-stayed-away-from-MandA-deals

http://www.ey.com/GL/en/Industries/Life-Sciences/Beyond-borders—global-biotechnology-report-2012

Resizing the Global Contract R&D Services Market

 A new study revises estimates of the market

By Kenneth Getz, Mary Jo Lamberti, Adam Mathias, Stella Stergiopoulos, Tufts CSDD

Published May 30, 2012

Pharmaceutical, biotechnology and medical device company managers serving every R&D function — from discovery and manufacturing through post-approval clinical trials — are keenly aware today of the integral role that outsourcing plays in supplementing capacity and expertise. Demand for outsourced services has increased sharply as drug and device development sponsors have downsized and consolidated infrastructure in response to a sharp global economic downturn, poor short-term revenue growth prospects and costly and inefficient operating conditions. In addition, startups and small companies actively leverage contract service providers to gain access to expertise and skills not available internally.

Contract service organizations have proliferated across a wide spectrum of R&D services areas. A 2011 analysis by Tufts Center for the Study of Drug Development (Tufts CSDD) found a nearly four-fold increase in the number of contract research organizations (CROs) in the U.S. alone during the past decade: whereas an estimated 800 contract service providers operated in the U.S. in 2000, more than 3,100 did so at the end of 2011. (Data on the proliferation of contract R&D service providers in Europe and in other regions around the world are not available.) In another study, Tufts CSDD found that in 2010, CRO-employed professionals were more than doubling the capacity of the global drug development enterprise — the first time in history when CROs were providing more head count in support of R&D activity than were pharma and biopharma companies.

Despite this dramatic proliferation during the last 10 years, however, little information exists that characterizes the size and characteristics of the overall global outsourcing landscape. Coverage of CRO markets and usage practices by peer-reviewed and trade journals has largely focused on individual service areas aligned with either each publisher's readership or the author's primary area of expertise. Contract lead identification and optimization services markets and practices, for example, tend to be covered in publications reaching discovery scientists. Similarly, the contract formulation services area is typically discussed in publications catering to professionals in chemistry, manufacturing and controls. Some directories (e.g., Contract Pharma (www.contractpharma.com/csd), PharmaCircle (www.pharmacircle.com)) profile companies across contract R&D service areas. These directories do not publish macro-analyses of the global aggregate R&D outsourcing market.

Capital market analysts and industry observers have also largely focused on characterizing only the most mature R&D outsourcing markets: contract clinical and preclinical research services. These markets have historically had the highest prevalence of large, publicly traded companies, making it relatively easy to monitor performance, assess transactions and evaluate corporate strategies. Goldman Sachs, UBS, Fairmount Partners, Jefferies and William Blair are among the many financial services firms that support transactions and cover developments in the global outsourcing marketplace. Published reports from these organizations typically only cover and estimate the size of the clinical and preclinical markets — a fraction of the total contract services marketplace. Industry professionals and analysts tend to use these estimates as proxy measures for total market size even though they grossly underestimate the size of the overall outsourcing market.

Two recent reports stand out as noteworthy attempts to size the overall CRO market and affirm the growing interest in this aggregate market metric: the Harris Williams & Company 2008 Market Monitor report and the 2011 BCC Research report. The former focused on the larger healthcare and life sciences arena but estimated — using a top-down approach — the size of the contract clinical, preclinical, manufacturing, clinical laboratory and sales markets. Harris Williams, a private investment banking firm, estimated that the total market for these specific service areas in 2008 reached approximately $75 billion. The latter BCC Research report sized the overall 2011 global outsourcing market at $217.9 billion. This top-down analysis included not only contract service providers supporting prescription drugs, but also over-the-counter and nutraceutical products.

As demand for — and the adoption of — contract research services has grown, there is a greater need for more accurate and comprehensive measures of the size and structure of the overall landscape. Better metrics assist companies and analysts in assessing the financial health, trends, structure, operating conditions and maturity of the overall market for contract research services. Sponsor companies can also use these metrics for strategic planning purposes and to forecast the impact of new management practices on the landscape. More accurate metrics enable analysts to monitor consolidation, diversification and divestiture activities. And more accurate descriptive statistics on the landscape assist CRO companies in developing, implementing and evaluating strategic initiatives.

In late 2010, Tufts CSDD began a new study using a rigorous, bottom-up approach to independently size the U.S. market for all contract R&D services. The goal of the study was to perform a carefully designed, methodical and systematic market-sizing study using actual data wherever possible. It is our hope that this initial but definitive quantitative assessment will serve as a basis for sizing contract service providers in Europe and in the rest of the world, and that it will better inform discussion, analysis and understanding of the global outsourcing landscape.

Methods
Tufts CSDD focused on the U.S. market for this initial study due to the labor-intensive nature of analyzing a large, fragmented market predominantly made up of small, privately held organizations and independent consultants. Tufts CSDD developed detailed definitions of primary contract service markets, and compiled a list — to the best of its ability — of all known contract service providers in each respective market within high concentration metropolitan and industrial areas. A total of 15 major geographic clusters, defined by Metropolitan Statistical Area (MSA), were identified and analyzed. These clusters capture approximately 75% of the list of contract service companies operating in the US. Contract service companies operating within these 15 geographic regions likely capture an even larger proportion of total U.S. outsourced services revenue as these companies include all the major, widely-recognized players. Data on more than 4,500 companies — some of them divisions or branches of diversified players — were analyzed.

Market Segment Definitions: The five primary market segments evaluated correspond with primary R&D and manufacturing processes: Applied Research, Non-Clinical Research, Clinical Research, Chemistry Manufacturing and Controls (CMC) and Staffing-Consulting-Management (Other) services. This ‘Other’ segment includes a wide variety of small, independent companies as well as large providers offering contract professional staffing, supply chain management, import/ export and distribution services as well as business development support. Specific main service category and common sub-category service areas within each of the primary market segments are characterized in Figure 1. (Main Categories and Sub-Categories are not mutually exclusive.)

Figure 1: Service Area Map

Service Provider Identification: Tufts CSDD used seven published, commercially available print and online directories of contract service providers to identify individual contract R&D services companies:

  • Applied Clinical Trials 2010 Directory & Buyers Guide
  • Contract Pharma 2010/2011 Contract Services Directory
  • Fierce Marketplace 2010/2011 Directory for Contract Manufacturing
  • Hoovers.com Biotechnology Services Directory
  • The Pharmaceutical Outsourcing™ 2011 Company Focus and Industry Reference Guide (Volume 11, Issue 6, October 2010)
  • The PharmaCircle Database 2010/2011
  • ReferenceUSA.com (SIC Code 591207; “Pharmaceutical Consultants”) as of December 2010

Top Areas of Geographic Concentration: From these directories, company names and addresses were captured. Each company’s main address zip code was organized according to the U.S. Office of Management and Budget (OMB)’s definition of Metropolitan Statistical Areas (MSA). This approach was used in order to systematically identify and analyze areas of highest geographic concentration. The OMB’s definition of the MSA is “one or more adjacent counties or county equivalents that have at least one urban core area of at least 50,000 population, plus adjacent territory that has a high degree of social and economic integration with the core as measured by commuting ties.” The largest 15 geographic areas, defined by MSAs, containing contract service providers are:

  • New York/Northern New Jersey (i.e., New York-Northern New Jersey-Long Island)
  • Greater Boston (i.e., Boston-Worcester-Lawrence)
  • Delaware Valley (i.e., Philadelphia-Wilmington-Atlantic City)
  • Los Angeles (i.e., Los Angeles-Riverside-Orange County)
  • The Washington DC Area
  • San Francisco Bay (i.e., San Francisco-Oakland-Fremont)
  • San Diego (i.e., San Diego-Carlsbad-San Marcos)
  • Durham NC (i.e., Durham-Chapel Hill)
  • Greater Chicago (i.e., Chicago-Joliet-Naperville)
  • Greater Baltimore (i.e., Baltimore-Towson)
  • Raleigh NC (i.e., Raleigh-Cary)
  • Minneapolis (i.e., Minneapolis-St. Paul-Bloomington)
  • Kansas City Area
  • San Jose (i.e., San Jose-Sunnyvale-Santa Clara)
  • Houston (i.e., Houston-Sugar Land-Baytown)

Figure 2 provides a visual representation of the 15 highest concentration areas of contract R&D services providers in the United States. These concentrated areas of contract service providers are in close proximity to geographic areas where pharmaceutical, biotechnology and manufacturing sectors in the US originated.

Figure 2: High Concentration Geographic Areas

Contract Service Company Types: Tufts CSDD organized companies along the following lines to assist with its evaluation of overall market and service segment characteristics (a short sketch of the classification rule follows the list):

  • Pure-play companies: companies offering only one service area main-category. Examples of pure-play companies include: Abpro Corporation, cGMP Validation LLC and Profacgen.
  • Mid-sized companies: companies with two to five service area main-categories. Examples include: Accugenix Inc., Beckloff Associates Inc., QS Pharma and the Zitter Group.
  • Conglomerate companies: companies with six or more service area main-categories. Examples include: Aptuit (multiple sites); Covance (multiple sites); PPD (multiple sites) and Quest Diagnostics (multiple sites).

(Service areas are defined in Figure 1.)
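
As referenced above, the three company types reduce to a simple threshold rule on the number of main-category service areas a company offers. A minimal sketch in Python (the function name and input representation are assumptions):

    def company_type(num_main_service_areas: int) -> str:
        """Classify a contract service provider by breadth of offerings,
        per the Tufts CSDD definitions above."""
        if num_main_service_areas <= 1:
            return "pure-play"      # one main-category service area
        if num_main_service_areas <= 5:
            return "mid-sized"      # two to five main-categories
        return "conglomerate"       # six or more main-categories

    for n in (1, 3, 7):
        print(n, "->", company_type(n))  # pure-play, mid-sized, conglomerate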

Tufts CSDD used company websites to determine branch and satellite office locations. If a company did not have a website, it was removed from the analysis. If the website did not specify which site performed which service, it was assumed that all locations offered the same number of services.

For publicly traded companies, Tufts CSDD used published company reports — annual reports, 10Ks, trade journal and newspaper articles — for operating information, revenue figures, locations and employee size. For privately held companies, Tufts CSDD used Hoovers.com.

Actual revenues and employee data were used whenever possible. In those cases where actual data were not available, financial and employee data were imputed using benchmark metrics derived from actual data (a sketch of this imputation follows the list):

  • Pure-play companies: assigned average revenue and employee values based on actual data from other pure-play companies.
  • Mid-sized companies: derived revenue and employee values based on actual data from companies of equal size and diversity.
  • Conglomerate companies: derived revenue and employee values based on actual data from companies of equal size and diversity.
  • Public companies: If service area-specific revenue and employee data were not reported, values were distributed equally across service areas.
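
A minimal sketch of the benchmark-based imputation described above, assuming a simple record structure; the field names and dollar figures are hypothetical:

    import statistics

    def impute_revenue(company, peers):
        """Fill a missing revenue figure with the mean of same-type peer
        companies for which actual data exist (toy version of the
        benchmark-metric imputation described above)."""
        if company.get("revenue") is not None:
            return company["revenue"]
        actuals = [p["revenue"] for p in peers
                   if p["type"] == company["type"] and p.get("revenue") is not None]
        return statistics.fmean(actuals)

    peers = [
        {"type": "pure-play", "revenue": 8.0},   # hypothetical US$m figures
        {"type": "pure-play", "revenue": 12.0},
        {"type": "mid-sized", "revenue": 40.0},
    ]
    print(impute_revenue({"type": "pure-play", "revenue": None}, peers))  # 10.0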

Results
In total, 3,244 unique contract R&D service companies actively operating in the U.S. were identified and analyzed. These companies generated an estimated $32.9 to $39.5 billion in contract R&D services revenue, with the largest share coming from the CMC and Non-Clinical market segments — 29% and 21%, respectively. The U.S. Clinical Research Services segment — which includes regulatory services — generated approximately $6.5 billion. Chart 1 shows the relative U.S. market share of each contract R&D service segment.

In the aggregate, companies operating in the overall U.S. contract R&D services market employ approximately 154,000 people, and the average company was founded more than 17 years ago. The typical company is privately held, generates $10 million (US$) in revenue annually and operates in 1.4 service areas.

The CMC and Non-Clinical Research segments have the largest number of companies providing services as shown in Chart 2. An estimated 1,274 companies in the U.S. offered CMC services in 2011, and 1,205 companies in the U.S. offered Non-Clinical Research Services. The Clinical Research segment had 643 active companies in the U.S. providing services in 2011.

The majority — 69% — of contract R&D service providers overall are privately held companies. CMC and Non-Clinical Research services segments have the highest concentration of publicly traded companies at 47% and 52% respectively. Approximately 17% of all companies providing Clinical Research Services are public. Chart 3 depicts the proportion of public to private companies in each major U.S. contract R&D services market segment.

Applied Research Services and Other Services U.S. market segments are the least mature and most productive segments, as reflected in Table 1 and Table 2. Companies in the Applied Research Services segment are the youngest, the most likely to be privately held, and the smallest. As a more nascent segment, Applied Research Services shows one of the highest revenue-per-employee figures, at $267,000. The Other Services segment is also relatively young, with a high concentration of privately held companies. Revenue per employee in this segment is higher than in any other U.S. market segment, at $284,000.

Individual companies in the Clinical Research Services and Other Services segments generate more revenue per company and have relatively higher levels of employee productivity. The CMC and Non-Clinical Research Services segments are the most mature, with the highest proportion of publicly-traded companies, the highest average number of employees and the lowest relative employee productivity.

Discussion
This initial Tufts CSDD study sizes the overall U.S. contract R&D services market using a systematic bottom-up approach based on actual company data whenever possible and imputed data based on benchmarked actuals. The overall U.S. market for the 15 highest concentration geographic areas — as defined by MSA — is estimated at between $32.5 and $39.5 billion. Assuming that these geographic areas represent 75% of the total U.S. market, and that the U.S. market contributes 50% of contract services worldwide, Tufts CSDD estimates that the total global market for all contract services supporting prescription drug R&D is $90 billion to $105 billion. The total global market for contract R&D services therefore is more than five times larger than commonly cited figures.
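
The extrapolation behind the global figure is simple arithmetic: divide the 15-cluster estimate by the assumed 75% cluster coverage, then by the assumed 50% U.S. share of the world market. A sketch using the figures stated above (the computed range, roughly $87 billion to $105 billion, is close to the reported $90 billion to $105 billion):

    def extrapolate_global(us_cluster_estimate_b, cluster_coverage=0.75, us_world_share=0.50):
        """Scale the 15-cluster U.S. revenue estimate up to a global figure."""
        return us_cluster_estimate_b / cluster_coverage / us_world_share

    low, high = extrapolate_global(32.5), extrapolate_global(39.5)
    print(f"${low:.0f}B to ${high:.0f}B")  # $87B to $105B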

Adjusting the service areas to adhere to traditional market definitions established by the investment banking community, the Tufts CSDD figures for the Clinical Research and Preclinical Research markets are consistent with those published by financial analysts (see Table 3).

It is highly likely that the overall market and individual segment sizes are larger than the conservative estimates presented in this paper. Tufts CSDD acknowledges the limitations of using Hoovers.com to characterize the high proportion of privately held companies, as Hoovers tends to present ultra-conservative figures. In addition, there are some limitations to using imputed data within service area revenues, as there is a tendency to inflate the revenues of the smallest companies. However, using our estimates combined with actual data from public and some private companies helps to mitigate this limitation to some degree.

The major market segment definitions and the service areas that comprise them are a useful approach to organizing contract services companies, and they may provide a valuable framework for future analyses. The Tufts CSDD study finds that all of the market segments accommodate very large, highly diversified publicly traded companies alongside many small, specialty companies. CMC and Non-Clinical Research segments are the most mature, with the oldest companies, the highest proportion of publicly traded firms, and the lowest levels of employee productivity (e.g., revenue per employee). Segment maturity is a function of historical receptivity by pharmaceutical, biotechnology and medical device companies to outsourcing high fixed-cost, manufacturing- and labor-intensive activities that are deemed non-core. Relative to the other segments, the Clinical Research Services segment is one of the most productive, with the highest proportion of privately held companies.

The Other Services segment remains too diverse, making it difficult to characterize this segment adequately. In the future, Tufts CSDD will look to refine the definition of this segment to ensure that it is a more homogeneous group of companies.

At the present time, Tufts CSDD is analyzing contract services company data by geographic cluster to better understand the economic impact of each market segment locally. In addition, Tufts CSDD plans to apply this more robust methodology to sizing the overall contract services market in Europe and in other major global regions.

Drug and device innovation is evolving and re-inventing itself continually. As R&D costs rise, operating and regulatory complexity increases, and mergers, acquisitions and consolidation continue, the use of contract service providers as integral and integrated sourcing providers will similarly continue to grow. It is our hope that the analysis and results contained in this article will play a role in improving future assessments of the size and structure of the outsourcing landscape.


Kenneth Getz, MBA, is Senior Research Fellow and Assistant Professor at Tufts Center for the Study of Drug Development. He can be reached at kenneth.getz@tufts.edu. Mary Jo Lamberti, Ph.D., is Senior Project Manager at Tufts CSDD. Stella Stergiopoulos is Project Manager, Tufts CSDD. Adam Mathias is Research Analyst, Tufts CSDD. This project was funded by an unrestricted grant from the Kansas Bioscience Authority (KBA).

Source:

http://www.contractpharma.com/issues/2012-06/view_features/resizing-the-global-contract-rd-services-market/

US cities lose jobs and revenues as big pharma companies close R&D facilities


By Tony Favro, USA Editor*

9 April 2012: 

In 2007, Pfizer, the pharmaceutical company, closed its research and development facility in Ann Arbor, Michigan, displacing 2100 workers. In 2009, the University of Michigan purchased the vacant site and expected to create two to three thousand jobs over ten years. At the time of the sale, Ann Arbor Mayor John Hieftje expressed mixed emotions. On the one hand, he said in a statement, "If the University of Michigan is able to greatly expand life sciences research in Ann Arbor it will have far-reaching long-term economic benefits for the whole region." On the other hand, Mayor Hieftje told Crain's Detroit Business newspaper, "[The deal] has troubling aspects for local government". Hieftje was referring to the $14 million in local taxes paid by Pfizer, which will not continue since the University of Michigan is a tax-exempt organization.

• Profits versus R&D
• The Government steps in
• Shift in research culture
• Bigger government

The Ann Arbor story is not unique. According to the US Bureau of Labor Statistics, the pharmaceutical industry shed 35,000 jobs in the United States in 2010, the most recent year for which complete data are available. Cities throughout the US were burdened by plant closures. Ann Arbor was luckier than most cities. The University of Michigan employed about 1,700 workers at the former Pfizer site at the end of 2011. These workers are doing much of the research formerly done by Pfizer — and this gets to the heart of the matter. Big pharma companies are abandoning basic drug research, leaving the federal government and universities to pick up the slack.

Profits versus R&D
According to the August 2011 issue of the journal Nature Reviews Drug Discovery, the decline of prescription drug research and development (R&D) is the result of 15 years of continuous industry consolidations and the drive by drug manufacturers to maximize profits.

Since 2000, for example, Pfizer has acquired three major drug makers, Warner-Lambert, Pharmacia, and Wyeth, closing research centers with each acquisition. “These [closed] sites housed thousands of scientists, and many major drugs were discovered there,” the journal notes. “The same pattern has been observed after most of the mergers and acquisitions by other major pharmaceutical companies during the past decade.”

Profit is another reason big pharma companies are abandoning basic research. Over the past couple of decades, big drug firms competed to produce blockbuster drugs that yielded huge payoffs. Drugs such as Merck’s Vioxx and Pfizer’s Lipitor generate several billion dollars in annual sales and reap big profits for their makers. The fierce competition leads to costly duplication of work with as many as 20 companies vying to be the first to come out with the next blockbuster drug. The stakes for drug companies become higher as patents expire for popular and profitable drugs and revenue streams dry up.

The potentially enormous profits of a breakthrough discovery, however, are proving too elusive to offset the heavy upfront costs of basic research and development, an estimated 10 to 20 per cent of total expenditures. As a researcher told the Rochester Business Journal, “The days of the blockbuster drug are over”.

Businesses survive by making money for their shareholders, and when part of a business can no longer reliably generate profits — in this case, basic drug research — the unprofitable part is understandably jettisoned.

This makes good business sense, but poor public policy. People need pharmaceuticals — in many instances, it’s a question of life or death — and so the federal government has had to fill the void left by drug companies’ retreat from basic and early-stage research.

The government steps in
Over the past few years, the federal National Institutes of Health has invested hundreds of millions of dollars to build a drug-discovery infrastructure. Most of the federal expenditures have been used to establish a network of 60 “clinical translational centers” at research universities. These centers are changing the direction of pharmaceutical research and creating new opportunities for public-private collaborations.

In essence, the emerging drug-development model in the USA has big pharmaceutical firms coming in at a later stage to market and distribute drugs that have been discovered and tested by university researchers and small, private biotech companies.

The emerging model promises to greatly expand opportunities for universities to earn royalties from pharmaceutical companies. The federal funding for “translational” research also incentivizes entrepreneurship at universities. Universities that develop and hold patents are expected to translate that knowledge into jobs, not only by contracting with big pharma but also by incubating and spinning off small, private drug-development companies. In the federal model, big drug makers will strike licensing deals directly with universities or with small companies, primarily university spin-offs. One potential benefit of the new model is that entire categories of drugs previously ignored by big pharma because of their low profitability may now be brought to market.

Shift in research culture
Federal monies are helping build a research infrastructure at the university level to bring basic discoveries to market as well as catalyze broader economic growth. This requires a culture shift at both universities and businesses. Traditionally, a scientific advance by a university professor might end as a research paper read by a few colleagues in the same field. In the clinical translational model supported by the National Institutes of Health, scientists must collaborate with colleagues in different fields — the chemist with the engineer and sociologist and marketing professor, for example. Drug companies also have to discuss their research and results with academics and with their counterparts at different drug firms. They can no longer label such information as proprietary and keep it to themselves.

Bigger government
Critics of big government should take note: when businesses contract, government often has to expand to protect citizens. Businesses may create jobs, but they will also pass their costs to taxpayers when they can. Large drug companies consider delivering a return to shareholders their first duty, and therefore cut R&D that drains short-term profits. But short-term business sense may threaten public health and even the profitability of corporations since, over the long-term, a less-healthy labor pool could drive up the cost of doing business.

And sometimes government requirements and mandates, such as the clinical translational research model, can spark economic growth. According to Dr. Karl Kieburtz of the University of Rochester, one of the first universities to be funded by the National Institutes of Health, “We are looking at many things, surgical devices and other things, not just drugs.” The University of Rochester, which purchased a building that Wyeth vacated for research, has already spun off 30 companies. As multinational pharmaceutical companies unload more of their marginally profitable but publicly indispensable activities, the public and nonprofit sectors will have to fill the gap.

The effect on US city governments is uneven. Cities will lose jobs and property tax revenues when pharmaceutical companies close their R&D facilities. Cities fortunate enough to have a university with a translational research center should eventually recover their losses and more.

*Tony Favro also maintains the blog Planning and Investing in Cities.

Source:

http://www.citymayors.com/economics/usa_big_pharma_cities.html#Anchor-Profits-49575

Read Full Post »

 

Reporter: Aviva Lev-Ari, PhD, RN

 

ABOUT CGC

The Consumer Genetics Conference covers the key issues facing clinical genetics, personalized medicine, molecular diagnostics, and consumer-targeted DNA applications. It provides a unique outlet where all voices can be heard: pro & con, physician & consumer, research & clinical, academic & corporate, financial & regulatory. CGC is more than just another personalized medicine conference. Since the inaugural meeting in 2009, CGC has been the place where consumer companies learn genomics, and where genomics companies learn how to approach consumers. This year’s event is highlighted by keynote presentations from:

– Kenneth Chahine, Ph.D., J.D., ancestry.com
– Jay Flatley, President and CEO, Illumina
– Lee Silver, Ph.D., Princeton University

Spanning three days, the conference will focus on:
– Day 1: Technology
– Day 2: Business + Translation
– Day 3: Application

And 40+ Cutting-Edge Presentations on:
– Personal Genomics
– Third-Generation Sequencing
– Molecular Diagnostics
– Investment & Funding Opportunities
– Genome Interpretation
– The Future of Personalized Medicine
– Big Data
– Prenatal/Neonatal & Disease Diagnostics
– Empowering Patients
– Nutrition, Food Genetics & Cosmetics

SPEAKERS

Confirmed speakers to date include:

Sandy Aronson, Executive Director of IT, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM)

Arindam Bhattacharjee, Ph.D., CEO and Founder, Parabase Genomics

Diana Bianchi, M.D., Executive Director, Mother Infant Research Institute; Vice Chair for Research, Department of Pediatrics, Floating Hospital for Children, Tufts Medical Center

Cinnamon Bloss, Ph.D., Assistant Professor and Director, Social Sciences and Bioethics, Scripps Translational Science Institute

Alexis Borisy, Partner, Third Rock Ventures

John Boyce, President and CEO, GnuBIO

Mike Cariaso, Founder, SNPedia; Author of Promethease

Kenneth Chahine, Ph.D., J.D., Senior Vice President and General Manager, DNA, ancestry.com

Michael Christman, CEO, Coriell Institute for Medical Research

Cindy Crowninshield, RD, LDN, Licensed Registered Dietitian, Body Therapeutics & Sodexo; Founder, Eat2BeWell & Eat4YourGenes; Conference Director, Cambridge Healthtech Institute

Kevin Davies, Ph.D., Editor-in-Chief, Bio-IT World

Chris Dwan, Principal Investigator and Director, Professional Services, BioTeam

Jay Flatley, President & CEO, Illumina

Andrew C. Fish, Executive Director, AdvaMedDx

Dennis Gilbert, Ph.D., Founder, President and CEO, VitaPath Genetics

Rosalynn Gill, Ph.D., Vice President, Clinical Affairs, Boston Heart Diagnostics

Steve Gullans, Managing Director, Excel Venture Management

Don Hardison, President & CEO, Good Start Genetics, Inc.

Richard Kellner, Founder and President, Genome Health Solutions, Inc.

Robert Klein, Ph.D., Chief Business Development Officer, Complete Genomics

Isaac S. Kohane, M.D., Ph.D., Henderson Professor of Health Sciences and Technology, Children’s Hospital and Harvard Medical School; Director, Countway Library of Medicine; Director, i2b2 National Center for Biomedical Computing; Co-Director, HMS Center for Biomedical Informatics

Stan Lapidus, President, CEO and Founder, SynapDx

Gholson Lyon, M.D., Ph.D., Assistant Professor in Human Genetics, Cold Spring Harbor Laboratory; Research Scientist, Utah Foundation for Biomedical Research

Daniel MacArthur, Ph.D., Assistant Professor, Massachusetts General Hospital; Co-founder, Genomes Unzipped

Craig Martin, Chief Executive Officer, Feinstein Kean Healthcare

James McCullough, CEO and Founder, Exosome Diagnostics

Kevin McKernan, CSO, Courtagen Life Sciences

Neil A. Miller, Director of Informatics, Center for Pediatric Genomic Medicine, Children’s Mercy Hospital

Paul Morrison, Ph.D., Laboratory Director, Molecular Biology Core Facilities, Dana-Farber Cancer Institute

Geert-Jan Mulder, M.D., General Partner, Forbion Capital

Steve Murphy, M.D., Managing Partner, Wellspring Total Health

Michael Murray, M.D., Clinical Chief, Genetics Division, Brigham and Women’s Hospital; Instructor, Harvard Medical School, The Harvard Clinical and Translational Science Center

Brian T. Naughton, Ph.D., Founding Scientist, 23andMe

Nathan Pearson, Ph.D., Director of Research, Knome, Inc.

Michael S. Phillips, Ph.D., Canada Research Chair in Translational Pharmacogenomics; Director, Molecular Diagnostic Laboratory, Montreal Heart Institute; Associate Professor, Université de Montréal

John Quackenbush, Ph.D., Professor, Biostatistics and Computational Biology, Cancer Biology Center for Cancer Computational Biology, Dana-Farber Cancer Institute

Martin G. Reese, President and CEO, Omicia

Heidi L. Rehm, Ph.D., FACMG, Chief Laboratory Director, Molecular Medicine, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM); Assistant Professor of Pathology, Harvard Medical School

Oliver Rinner, Ph.D., CEO, Biognosys AG

Meredith Salisbury, Senior Consultant, Bioscribe

Marc Salit, Group Leader, Biochemical Science and Multiplexed Biomolecular Science, National Institute of Standards and Technology

Lee Silver, Ph.D., Professor of Molecular Biology and Public Affairs; Faculty Associate, Science, Technology & Environmental Policy Program, Office of Population Research, and the Center for Health and Wellbeing, Woodrow Wilson School, Princeton University

Jamie Streator, Managing Director, Healthcare Investment Banking, Cowen & Company

Joseph V. Thakuria, M.D., MMSc, Attending Physician in Clinical and Biochemical Genetics Medical Genetics, Massachusetts General Hospital; Medical Director, Personal Genome Project; Harvard Catalyst Translational Genetics and Bioinformatics Program, MGH Center for Human Genetics Research

Samuil R. Umansky, M.D., Ph.D., D.Sc., Co-founder, CSO, and President, DiamiR LLC

David A. Weitz, Ph.D., Mallinckrodt Professor of Physics and Applied Physics, Harvard School of Engineering and Applied Sciences

Speaker to be Announced, Barclays

DAY 1: TECHNOLOGY

WEDNESDAY, OCTOBER 3

7:30 am Conference Registration

8:30 Opening Remarks

John Boyce, President and CEO, GnuBIO and Meredith Salisbury, Senior Consultant, Bioscribe

 

OPENING PLENARY SESSION

 

» 8:45 KEYNOTE PRESENTATION

Self-Discovery in the Age of Personal Genomes

Lee Silver, Ph.D., Professor of Molecular Biology and Public Affairs; Faculty Associate, Science, Technology & Environmental Policy Program, Office of Population Research, and the Center for Health and Wellbeing, Woodrow Wilson School, Princeton University

With blinding speed, the biomedical research enterprise is advancing the technology to read personal genomes with greater accuracy, in less time, and at less expense. Meanwhile, consumer genetics has blossomed from infancy to adolescence with an array of innovative consumer-facing products. This unanticipated cottage industry is struggling with growing pains in a mix of conflicted regulators, restless innovators, and demanding consumers. Genetic information, like all information, “wants to be free,” but the commercialization environment is not yet optimized for personal freedom.

 

9:40 The Era of Clinical Sequencing and Personalized Medicine

Michael Christman, CEO, Coriell Institute for Medical Research

Advances in understanding genomic variation and associated clinical phenotypes continue to accumulate while the cost of full genome sequencing rapidly declines. Having access to your genomic information will become increasingly important as physicians become more receptive to incorporating genomics into routine clinical practice. When you need a new prescription, it will be necessary for your physician to quickly and securely access your genetic data to understand drug efficacy prior to dosing. Who will patients and medical professionals trust to store and interpret the data? Coriell is positioned to contribute significantly to the research needed to accelerate the adoption and routine use of genomics in medicine.

 

10:20 FEATURED PRESENTATION

Stan Lapidus, President, CEO and Founder, SynapDx

 

10:50 Coffee Break

 

BIG DATA/ANALYSIS

11:20 IT Infrastructure Required to Manage Patient Genetic Test Results

Sandy Aronson, Executive Director of IT, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM)

There are many challenges associated with getting the maximum value out of a genetic test. This talk will focus on information technology infrastructure that can help.

11:50 Issues in Genomics at Scale

Chris Dwan, Principal Investigator and Director, Professional Services, BioTeam

2012 marks, in many respects, the beginning of the second decade of high-throughput DNA sequencing. Robust, well understood solutions exist for many of the major technical challenges involved in operating a high-throughput genomics facility. Petabyte scale data storage, well suited to research computing in this space, provides a clean example. Certainly it still requires careful planning and thorough engineering to deploy such infrastructure. However, we can now purchase robust systems from multiple vendors rather than having to stitch together solutions in-house. Perhaps more importantly, we can rely on the experience of a community of peers who have been through the exercise before. By contrast, the legal, regulatory, ethical, and privacy concerns in this space have only begun to be explored. As we plan for the coming years, we must certainly plan for technical uncertainty. Technologists find themselves in the role of guessing at the future. As translational medicine, clinical genome sequencing, and other practices become the norm, we must assume extreme and occasionally capricious changes to the social ecosystem. This talk will explore these issues in the context of nearly a decade supporting research computing and genomics for a broad variety of institutions.
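To make the petabyte scale concrete, a back-of-the-envelope sizing exercise helps. The figures below are common rules of thumb assumed purely for illustration, not numbers from the talk: a 30x human whole genome occupies very roughly 100 GB in compressed raw and aligned form, so a facility's annual storage need is simply throughput times per-genome footprint.

```python
# Back-of-the-envelope genomics storage sizing. Both constants are assumed
# rules of thumb for illustration, not figures from the presentation.

GB_PER_30X_GENOME = 100      # rough compressed FASTQ + BAM footprint per genome
GENOMES_PER_YEAR = 10_000    # hypothetical facility throughput

total_gb = GB_PER_30X_GENOME * GENOMES_PER_YEAR
print(f"~{total_gb / 1_000_000:.1f} PB of new storage per year")  # 1 PB = 1e6 GB
```

At 10,000 genomes a year the need already crosses the petabyte line, which is why storage planning dominates facility design at this scale.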

12:20 pm Sponsored Presentation (Opportunity Available)

12:50 Luncheon Presentation (Sponsorship Opportunity Available)
or Lunch on Your Own

 

MOLECULAR DIAGNOSTICS

2:05 Panel Discussion
Panelists will first give a brief presentation and then convene for a panel discussion.

Michael S. Phillips, Ph.D., Canada Research Chair in Translational Pharmacogenomics; Director, Molecular Diagnostic Laboratory, Montreal Heart Institute; Associate Professor, Université de Montréal (Moderator)

Molecular Diagnostics and the Patient/Consumer

Andrew C. Fish, Executive Director, AdvaMedDx

This presentation will envision a future in which molecular diagnostics are widely utilized not only for decision making by health professionals, but also for the development and use of a wide range of consumer products that include genetic tests themselves. The speaker will discuss various policy implications of this convergence of patient and consumer interests driven by the expanding availability of molecular diagnostics.

Bridging the Gap between Genetic Risk and Blood Diagnostics by Personalized Health Monitoring

Oliver Rinner, Ph.D., CEO, Biognosys AG

Biognosys has developed a solution to quantify and track protein levels over time from a drop of blood. With a novel mass spectrometric technology, we can record signals from thousands of proteins in a single instrument run and store such digital protein maps in a digital bio-bank that can be screened in silico for known and novel biomarkers. We will provide this technology as personalized health monitoring to patients and consumers who seek actionable information about their state of health.

Measuring Disease Treatment and Progression at the Molecular Level without Biopsy

James McCullough, CEO and Founder, Exosome Diagnostics

Exosome has developed a solution that has the ability to measure, at the molecular level without biopsy, the dynamic nature of both treatment and disease progression. The company has developed a means of isolating exosomes: exosomes are shed into all biofluids, including blood, urine, and CSF, forming a stable source of intact, disease-specific nucleic acids. From these, the company is able to develop predictive gene expression profiles to achieve high sensitivity for rare gene transcripts and the expression of genes responsible for cancers and other diseases. This technology obviates the need for biopsy, and provides a means for detection at a much earlier stage of treatment.

3:20 Refreshment Break

3:50 Sponsored Presentation (Opportunity Available)

 

SEQUENCING

4:20 Panel Discussion

Like a double helix, the future growth of consumer genetics is intimately entwined with technology advances in next-generation sequencing. While the industry excitedly awaits the commercial debut of potentially disruptive nanopore sequencing platforms, existing platforms continue to roll out new enhancements and sequencing strategies that bring us within striking distance of clinical-grade whole genome sequencing. This panel discussion brings together leaders from existing and emerging sequencing providers to present and debate a range of questions including the pros and cons of targeted versus whole-genome sequencing, the emergence of third-generation sequencing platforms, and the challenges of integrating genome sequencing into the clinic.

Paul Morrison, Ph.D., Laboratory Director, Molecular Biology Core Facilities, Dana-Farber Cancer Institute (Moderator)

Panelists:

John Boyce, President and CEO, GnuBIO
Robert Klein, Ph.D., Chief Business Development Officer, Complete Genomics Inc.
Speaker to be Announced, Life Technologies
Speaker to be Announced, Illumina

5:50-6:50 Welcome Reception in the Exhibit Hall with Poster Viewing

 

DAY 2: BUSINESS + TRANSLATION

THURSDAY, OCTOBER 4

7:45 am Morning Coffee

 

TRANSLATIONAL GENOMICS

8:15 Panel Discussion
Panelists will first give a brief presentation and then convene for a panel discussion.

Kevin Davies, Ph.D., Editor-in-Chief, Bio-IT World (Moderator)

All Genomes are Dysfunctional: The Challenges of Interpreting Whole-Genome Data from Healthy Individuals

Daniel MacArthur, Ph.D., Assistant in Genetics, Massachusetts General Hospital; Co-founder, Genomes Unzipped

Recent advances in DNA sequencing technology have made cheap, rapid interrogation of complete genome and exome sequences an almost mundane exercise, and have resulted in significant progress in the discovery of disease-causing sequence changes from the genomes of individuals with rare diseases or cancers. However, such successes do not necessarily translate into an improved ability to use genome-scale data to predict future disease probability for currently healthy individuals. In this presentation I will highlight some of the major technical and analytical challenges associated with developing predictive genomic medicine for the healthy majority.
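MacArthur's point that every genome looks “dysfunctional” on paper can be made concrete with the standard first-pass filter: keep only variants that are both rare in the population and predicted to alter a protein, then see how many candidates remain. A minimal sketch of that step follows; the variant records are invented, where real pipelines would pull allele frequencies and consequence predictions from population databases and annotation tools.

```python
# Toy first-pass variant filter used when interpreting a genome from a
# healthy individual: keep rare, predicted protein-altering variants.
# All records below are invented for illustration.

variants = [
    {"gene": "GENE_A", "consequence": "missense",    "allele_freq": 0.0004},
    {"gene": "GENE_B", "consequence": "synonymous",  "allele_freq": 0.0001},
    {"gene": "GENE_C", "consequence": "stop_gained", "allele_freq": 0.2100},
    {"gene": "GENE_D", "consequence": "stop_gained", "allele_freq": 0.0002},
]

# Consequence classes treated as potentially damaging (a simplification).
DAMAGING = {"missense", "stop_gained", "frameshift", "splice_donor"}

def candidate_variants(records, max_freq=0.001):
    """Keep variants that are both rare and predicted protein-altering."""
    return [v for v in records
            if v["allele_freq"] <= max_freq and v["consequence"] in DAMAGING]

for v in candidate_variants(variants):
    print(v["gene"], v["consequence"], v["allele_freq"])
```

Even on a healthy genome, filters like this typically leave on the order of a hundred candidates, which is precisely the interpretive burden the talk addresses.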

Consumer Genomics: What do People do with Their Genomes?

Cinnamon Bloss, Ph.D., Assistant Professor and Director, Social Sciences and Bioethics, Scripps Translational Science Institute

Direct-to-consumer personalized genomic testing is controversial, and there are few empirical data to inform the debate regarding use and regulation. The Scripps Genomic Health Initiative is a large longitudinal cohort study of over 2,000 adults who have undergone testing with a commercially available genomic test. Findings from this initiative regarding the psychological, behavioral and clinical impacts of genomic testing on consumers will be presented.

Advances in Noninvasive Prenatal Genetic Testing: Does this Mean “Designer” Babies for All?

Diana Bianchi, M.D., Executive Director, Mother Infant Research Institute; Vice Chair for Research, Department of Pediatrics, Floating Hospital for Children, Tufts Medical Center

Noninvasive prenatal testing for Down syndrome and other chromosome disorders using massively parallel DNA sequencing techniques is now available on a clinical basis in the US. With expected advances in sequencing techniques it will soon be possible to take a blood sample from a pregnant woman and determine if her fetus has a chromosome abnormality or a single gene disorder. How much information do prospective couples want and how do these technical advances affect well-established algorithms for prenatal care?

Translating Genomics into Clinical Care

Heidi L. Rehm, Ph.D., FACMG, Chief Laboratory Director, Molecular Medicine, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM); Assistant Professor of Pathology, Harvard Medical School

This talk will focus on approaches to integrate clinical sequencing into genomic medicine. It will cover next generation sequencing test development from disease panels to whole genomes and the interpretation and reporting of genetic variants identified in patients.

Impact of Genomic Sequencing on Public Health and Preventive Medicine

Joseph V. Thakuria, M.D., MMSc, Attending Physician in Clinical and Biochemical Genetics and Medical Director, Personal Genome Project, Massachusetts General Hospital Center for Human Genetics Research

Early findings in the Personal Genome Project (established by George Church) suggest significant impact for public health and preventive medicine. Solutions to accelerate clinical adoption and address large molecular data challenges will be explored.

9:30 FEATURED PRESENTATION
Genome-in-a-Bottle: Reference Materials and Methods for Confidence in Whole Genome Sequencing

Marc Salit, Group Leader, Biochemical Science and Multiplexed Biomolecular Science, National Institute of Standards and Technology

Clinical application of ultra-high-throughput sequencing (UHTS) or “Next-Generation Sequencing” for hereditary genetic diseases and oncology is rapidly emerging. At present, there are no widely accepted genomic standards or quantitative performance metrics for confidence in variant calling. These are needed to achieve the confidence in measurement results expected for sound, reproducible research and regulated applications in the clinic. NIST has convened the “Genome-in-a-Bottle Consortium” to develop the reference materials, reference methods, and reference data needed to assess confidence in human whole genome variant calls. A principal motivation for this consortium is to develop an infrastructure of widely accepted reference materials and accompanying performance metrics to provide a strong scientific foundation for the development of regulations and professional standards for clinical sequencing.
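The performance metrics NIST describes come down, in their simplest form, to comparing a pipeline's variant calls against a trusted reference call set. The sketch below is a deliberately naive illustration rather than the consortium's tooling (production comparisons are haplotype-aware and restricted to high-confidence regions): it computes precision, recall, and F1 for call sets keyed by chromosome, position, and alleles.

```python
# Minimal variant-call concordance metrics against a truth set.
# Illustrative only: real comparisons use haplotype-aware matching and
# confident-region BED files, not naive exact tuple matching.

def concordance(called, truth):
    """Compare call sets of (chrom, pos, ref, alt) tuples."""
    tp = len(called & truth)   # calls confirmed by the truth set
    fp = len(called - truth)   # calls absent from the truth set
    fn = len(truth - called)   # truth variants the pipeline missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall, "f1": f1}

# Hypothetical example call sets.
truth_set = {("chr1", 1000, "A", "G"), ("chr1", 2000, "C", "T"),
             ("chr2", 500, "G", "A")}
called_set = {("chr1", 1000, "A", "G"), ("chr2", 500, "G", "A"),
              ("chr2", 900, "T", "C")}
print(concordance(called_set, truth_set))
```

Reference materials like the Genome-in-a-Bottle samples supply the trusted truth set that makes numbers like these comparable across laboratories.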

10:00 Coffee Break in the Exhibit Hall with Poster Viewing

 

VENTURE CAPITAL & INVESTMENT BANKING

10:30 Panel Discussion

This “Funding to IPO” panel consists of some of the top venture capitalists and investment bankers in therapeutics, diagnostics, and consumer genetics. The series of presentations and follow-on panel will take attendees through the financial cycle – from funding to IPO – with VCs and bankers highlighting the corporate criteria most important to them and the metrics by which they make their decisions.
Panelists:

Geert-Jan Mulder, M.D., General Partner, Forbion Capital

Alexis Borisy, Partner, Third Rock Ventures

Steve Gullans, Managing Director, Excel Venture Management

Jamie Streator, Managing Director, Healthcare Investment Banking, Cowen & Company

Speaker to be Announced, Barclays

12:15 pm Luncheon Presentation (Sponsorship Opportunity Available)
or Lunch on Your Own

 

GENOME DATA: THE PHYSICIAN’S PERSPECTIVE

1:45 Panel Discussion

While making the effort to deploy genomics and sequence data in preventative and clinical care is a noble cause, it is also one that requires pragmatic solutions. This panel discussion will address practical issues related to the day-to-day use of genomic technologies in the clinic — from hospital to private practice to academia.

Steve Murphy, M.D., Managing Partner, Wellspring Total Health (Moderator)
Panelists:

Michael Murray, M.D., Clinical Chief, Genetics Division, Brigham and Women’s Hospital; Instructor, Harvard Medical School, The Harvard Clinical and Translational Science Center

Isaac Samuel Kohane, M.D., Ph.D., Henderson Professor of Health Sciences and Technology, Children’s Hospital and Harvard Medical School; Director, Countway Library of Medicine; Director, i2b2 National Center for Biomedical Computing; Co-Director, HMS Center for Biomedical Informatics

3:00 Refreshment Break in the Exhibit Hall with Poster Viewing

 

GENOME INTERPRETATION

3:30 Omicia: Interpreting Genomes for Clinical Relevance

Martin G. Reese, President and CEO, Omicia

Automatic annotation of variants and integration of disparate data sources is just the first step in the eventual adoption of genomes into clinical practice. The next step is reducing this complexity to the very few actionable, clinically relevant findings. We will show how we integrate such methods within an automated, comprehensive, and easy-to-use platform for the interpretation of individual genome data. The system allows variants to be prioritized with respect to their potential clinical impact and is preloaded with clinical gene sets and proprietary annotations to enhance discovery and reporting of personal genes and variants. Furthermore, it is extensible and allows the integration of the user’s proprietary gene and variant sets. We will show several exome and genome analyses.

3:50 Personalized Genomic Interpretation with SNPedia and Promethease

Mike Cariaso, Founder, SNPedia; Author of Promethease

With whole genome prices falling and microarray genotyping accessible to ordinary people over the internet, the challenge is no longer in acquiring the raw data, but in interpreting and using it. In this talk, I will outline a freely available database of literature, organized by the relevant DNA position and phenotypic effects. A complementary analysis program reads raw genomic data and produces a hyperlinked and searchable report of known associations. It can also perform special processing of family trios (child, mother, father), make predictions about offspring, and identify shared ancestry.
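The workflow described here (match each position in a raw consumer genotype file against a curated database, then report the known associations) is straightforward to sketch. The toy below assumes a tab-separated 23andMe-style export with rsid, chromosome, position, and genotype columns, and uses a hypothetical in-memory annotation table standing in for SNPedia; it illustrates the pattern, not Promethease itself.

```python
# Toy consumer-genotype annotation in the spirit of Promethease: look up
# rsIDs from 23andMe-style raw data in a small annotation table.
# The annotation entries are hypothetical placeholders, not SNPedia content.

ANNOTATIONS = {
    "rs0000001": "hypothetical association A (placeholder)",
    "rs0000002": "hypothetical association B (placeholder)",
}

# Inline sample standing in for a raw data file (rsid, chrom, pos, genotype).
RAW_DATA = (
    "# comment header, as in real export files\n"
    "rs0000001\t1\t12345\tAG\n"
    "rs0000002\t2\t67890\tCC\n"
    "rs9999999\t3\t11111\tTT\n"
)

def parse_raw_genotypes(lines):
    """Parse tab-separated raw-data lines into an rsid -> genotype map."""
    genotypes = {}
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip header comments and blank lines
        rsid, _chrom, _pos, genotype = line.rstrip("\n").split("\t")
        genotypes[rsid] = genotype
    return genotypes

def report(lines):
    """Print each genotype for which an annotation is on file."""
    for rsid, genotype in parse_raw_genotypes(lines).items():
        if rsid in ANNOTATIONS:
            print(f"{rsid} ({genotype}): {ANNOTATIONS[rsid]}")

report(RAW_DATA.splitlines())
```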

4:10 GenoSpace: Creating an Information Ecosystem for 21st Century Genomic Medicine

John Quackenbush, Ph.D., Professor, Biostatistics and Computational Biology, Cancer Biology Center for Cancer Computational Biology, Dana-Farber Cancer Institute

New sequencing technologies are driving the cost of genomic data generation to unprecedented lows, making sequencing available as a potentially valuable clinical and diagnostic tool. The challenge is solving “the last 100 yards” problem–delivering the data to those who need to access it in a manner in which they can use it effectively. GenoSpace has developed technology to connect the diverse consumers and producers of genomic data, creating an ecosystem in which we have the potential to advance genomic medicine.

 

VISIONS FOR PERSONALIZED MEDICINE

 

» 4:30 KEYNOTE PRESENTATION

The Big Picture: Visions for Personalized Medicine

Jay Flatley, President and CEO, Illumina

 

5:30 Social Event and Party

 

DAY 3: APPLICATIONS

FRIDAY, OCTOBER 5

8:00 am Morning Coffee

» 8:30 KEYNOTE PRESENTATION 

An Inside Look at How AncestryDNA Uses Population Genetics to Enrich Its Online Family History Experience

Kenneth Chahine, Ph.D., J.D., Senior Vice President & General Manager, DNA, ancestry.com

Ancestry.com is the world’s largest online resource for family history, with an extensive collection of over 10 billion historical records that have been digitized, indexed, and made available online over the past 13 years. In May 2012, AncestryDNA launched a direct-to-consumer genealogical DNA test that delivers two results to customers. The first result predicts identity-by-descent and allows the customer to find genetic relatives within the AncestryDNA customer database. The second determines the customer’s admixture to provide a predicted genetic ethnicity using a state-of-the-art algorithm. The AncestryDNA team leverages pedigrees, documents, geographical information, and its extensive biobank of worldwide DNA samples to conduct innovative research in population genetics and translates the complexities of genetic science into a simple, understandable, and meaningful user experience.

 

9:15 Past, Present and Future of Consumer Genetics, a Pioneer’s Perspective

Rosalynn Gill, Ph.D., Vice President, Clinical Affairs, Boston Heart Diagnostics

The first consumer genetics company, Sciona, founded by Rosalynn Gill, launched its services in April 2001 in the UK in what was either a breakthrough in innovation or an act of incredible naiveté. Twelve years later, many lessons have been learned, but the jury is still out on the appropriate regulatory framework, the necessary industry standards, and what constitutes a sustainable business model.

9:45 Sponsored Presentation (Opportunity Available)

10:15 Coffee Break in the Exhibit Hall with Poster Viewing

 

PRENATAL/NEONATAL DIAGNOSTICS 

10:45 Panel Discussion

Panelists will first give a brief presentation and then convene for a panel discussion.

Meredith Salisbury, Senior Consultant, Bioscribe (Moderator)

Neonatal Genomic Medicine

Neil A. Miller, Director of Informatics, Center for Pediatric Genomic Medicine, Children’s Mercy Hospital

The causal gene is known for more than 3,500 monogenic diseases. Many of these can present in the neonatal period, causing up to 30% of neonatal intensive care unit admissions. In the last six months, we have started to offer very rapid diagnostic testing for these diseases at Children’s Mercy Hospital based on genome sequencing. The emerging indications and utility of neonatal genomic medicine will be discussed.

Screening Neonates by Targeted Next-Generation DNA Sequencing

Arindam Bhattacharjee, Ph.D., CEO and Founder, Parabase Genomics

We are developing a neonatal genome sequencing test that will allow screening and diagnosis, primarily of newborns and infants affected with a disease or condition, so that treatment can begin promptly. The current approach of DNA-based genetic screening for symptomatic and high-risk patients is not focused on neonates, so in the absence of clear symptoms healthcare providers and parents are often unable to understand the cause and treatment of a condition. Our test is unique in that it simultaneously screens and/or diagnoses hundreds of these conditions at once from a single sample, providing more comprehensive information to families and their physicians. It is nevertheless affordable, and provides access to the high-resolution sequence data.

Using NGS Sequencing to Improve the Standard of Care for Routine Genetic Carrier Screening

Don Hardison, President & CEO, Good Start Genetics, Inc.

11:45 Luncheon Presentation (Sponsorship Opportunity Available)
or Lunch on Your Own

 

NUTRITION, FOOD GENETICS & COSMETICS

1:00 The Importance of Genetic Testing-Directed Vitamin Use

Dennis Gilbert, Ph.D., Founder, President and CEO, VitaPath Genetics

VitaPath Genetics, Inc. has developed a platform for genomic-based tests that determine the need for vitamin therapy in medically actionable conditions. Using its platform, VitaPath can develop specific vitamin-remediated risk assays that help manage the $30 billion spent on supplements in the U.S. each year. The first test developed by VitaPath measures genetic risk factors associated with spina bifida to identify women who would benefit from low-risk, prescription-strength folic acid supplementation.

1:20 Using Weight Management Genetic Testing in Nutrition Counseling:
A Dietitian Weighs in on the Matter

Cindy Crowninshield, RD, LDN, Licensed Registered Dietitian, Body Therapeutics & Sodexo; Founder, Eat2BeWell & Eat4YourGenes; Conference Director, Cambridge Healthtech Institute

Between January and July 2012, 15 patients took a weight management genetic test to support their weight loss efforts. An individualized nutrition plan based on their eating and lifestyle habits and test results was created for each person. Data and several case studies will be presented to show how successful these patients were in achieving their weight loss goals. Challenges and opportunities will be discussed. Also presented will be tips and suggestions for genetic testing companies on how they can work best with a private practitioner’s office.

1:40 How Microfluidics is Changing the Landscape of Personalized Cosmetics

David A. Weitz, Ph.D., Mallinckrodt Professor of Physics and Applied Physics, Harvard School of Engineering and Applied Sciences

2:00 Refreshment Break in the Exhibit Hall with Poster Viewing

 

DISEASE DIAGNOSTICS

2:30 Clinical Sequencing and Mitochondrial Disease

Kevin McKernan, CSO, Courtagen Life Sciences

We describe the results from sequencing 64 patients’ mitochondrial genomes in conjunction with 1,100 nuclear genes. Complementing these data with multiplex ELISA assays to monitor protein levels in the blood can provide additional insight into variants of unknown significance and aid therapeutic decisions.

2:50 A Paradigm Shift: Universal Screening Test

Samuil R. Umansky, M.D., Ph.D., D.Sc., Co-founder, CSO, and President, DiamiR LLC

We will present a fundamentally new approach to the development of a screening test aimed at diseases of various organ systems, organs and tissues. The test is non-invasive and cost efficient. The data we will present demonstrate the potential of our approach for early detection of neurodegenerative diseases, cancer and inflammatory diseases of gastrointestinal and pulmonary systems.

 

THE EMPOWERED PATIENT

3:10 Genomes R Us – How Personalized Medicine is Reshaping the Role of Patients, and Why It Matters

Craig Martin, CEO, Feinstein Kean Healthcare

Much has been said about the advancements in science underlying the genomic revolution. We are beginning now to see the impact at the clinical level, and there’s more to come in the pipeline. But what does this shift in medicine do to change the role of the patient? This presentation provides insights into how best to engage with patient communities to expedite research, commercialization and market impact of innovative technologies, diagnostics and treatments, and to help validate the relative efficacy of such advancements in a value-driven world.

3:40 Consumer Empowerment in Health Care and Personal Genomics: Ethical, Societal and Regulatory Considerations

Gholson Lyon, M.D., Ph.D., Assistant Professor in Human Genetics, Cold Spring Harbor Laboratory; Research Scientist, Utah Foundation for Biomedical Research

The pace of exome and genome sequencing is accelerating with the identification of many new disease-causing mutations in research settings, and it is likely that whole exome or genome sequencing could have a major impact in the clinical arena in the relatively near future. However, the human genomics community is currently facing several challenges, including phenotyping, sample collection, sequencing strategies, bioinformatics analysis, biological validation of variant function, clinical interpretation and validity of variant data, and delivery of genomic information to various constituents. I will review these challenges, with an eye toward consumer genetics.

4:10 It Hurts Less If You Know More: An Empowered Patient’s Diagnostic Odyssey

Richard Kellner, Co-Founder and President, Genome Health Solutions, Inc.

For the early detection, diagnosis and treatment of cancer, there is a wide gap between current “standards of care” and what is possible through the use of advanced genomic technologies. Over the past two years I learned this lesson first hand through personal experiences involving myself, close friends and family members. My story is one of serendipity, frustration and then hope. I learned that, unfortunately, where you live and who you know can greatly influence your quality of care. I also learned that you can overcome these limitations by becoming an “empowered patient” who actively seeks out doctors who are willing to get outside of their comfort zones and practice “participatory medicine,” sometimes at the cutting edge of new precision diagnostics. I will present a new roadmap that both patients and doctors can follow toward a new era of personalized genomic medicine.

 

COMPANIES THAT EMPOWER THE PATIENT

4:40 23andMe’s DTC Exome

Brian T. Naughton, Ph.D., Founding Scientist, 23andMe

In October 2011, 23andMe launched a $999 direct-to-consumer exome product to a limited group of customers. This talk presents findings from this project, including the ubiquitous issue of variants of unknown significance.

5:10 Winding the Asklepian Wand: The Advent of Whole Genomes in Healthcare

Nathan Pearson, Ph.D., Director of Research, Knome, Inc.

With ever cheaper sequencing, richer reference data, and sharper interpretation methods, the clinical use of whole genomes is taking root in pediatrics, oncology, and beyond. Our genomes will ultimately join other cornerstones of clinical care, helping us stay healthier from birth to old age. But that prospect will require fast, robust pipelines that smartly interpret genomes, in the context of good phenotype data, and feed decisive insights back to patients and caregivers. Learn how Knome is making that happen.

5:40 Close of Conference

Source:

http://www.consumergeneticsconference.com/cgc_content.aspx?id=117407&libID=117355

 

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

New Life – The Healing Promise of Stem Cells

View VIDEO

Diseases and conditions where stem cell treatment is promising or emerging. Source: Wikipedia
Since the late 1990s, the Technion has been at the forefront of stem-cell research. Stem cells are the master keys because they can be converted into many different kinds of cells, opening many different doors to potential cures and treatments. Beating heart tissue is one of the major stem cell achievements from the Technion.
Healing the Heart
 
Technion scientists showed this year that they can turn skin tissue from heart attack patients into fresh, beating heart cells in a first step towards a new therapy for the condition. The procedure may eventually help scores of people who survive heart attacks but are severely debilitated by damage to the organ.
By creating new heart cells from a patient’s own tissues, doctors avoid the risk of the cells being rejected by the immune system once they are transplanted. Though the cells were not considered safe enough to put back into patients, they appeared healthy in the laboratory and beat in time with other cells in animal models.
“We have shown that it’s possible to take skin cells from an elderly patient with advanced heart failure and end up with his own beating cells in a laboratory dish that are healthy and young – the equivalent to the stage his heart cells were in when he was just born,” Prof. Lior Gepstein told the British national paper The Guardian.

Pancreatic Tissue for Diabetes

Prof. Shulamit Levenberg of the Technion, who has spent many years trying to create replacement human organs by building them up on a “scaffold,” has created tissue from the insulin-producing islets of Langerhans in the pancreas, surrounded by a three-dimensional network of blood vessels. The tissue she and her team created has significant advantages over traditional transplant material harvested from healthy pancreatic tissue.

“We have shown that the three-dimensional environment and the engineered blood vessels support the islets – and this support is important for the survival of the islets and for their insulin secretion activity”, says Prof. Levenberg of the Department of Biomedical Engineering.

In the Bones

In collaboration with industry and global research partners, Technion scientists have grown human bone from stem cells in a laboratory. The development opens the way for patients to have broken bones repaired or even replaced with entire new ones grown outside the body from a patient’s own cells. The researchers started with stem cells taken from fat tissue. It took around a month to grow them into sections of fully-formed living human bone up to a couple of inches long. The success was reported by the UK national paper The Telegraph.

Stem Cell Proliferation

“These are our next generation of scientists and Nobel Laureates,” says Prof. Dror Seliktar of the Department of Biomedical Engineering. “The future of the Technion relies on that.”

Seliktar and his research team at the Lokey Center for Biomaterials and Tissue Regeneration at the Technion are working on a new material for the mass production of stem cells, to make their commercial use viable on an industrial scale.

“In the biotechnology industries, there is an inherent need for expanding populations of stem cells for therapeutic purposes,” says Seliktar, who has published over 50 papers in the field, won over 14 awards and launched one of Israel’s promising biotech startups, Regentis Biomaterials.

Read more.

Prof. Joseph Itskovitz-Eldor of the Faculty of Medicine was on the international team that in 1998 first discovered the potential of stem cells to form any kind of tissue and pioneered stem-cell technology. The breakthrough garnered headlines around the world. He is the Director of the Technion Stem Cell Center.

Source:

Other posts on this Scientific Web Site about related innovations at the Technion are cited below:

Read Full Post »

« Newer Posts - Older Posts »