
Reporter: Aviva Lev-Ari, PhD, RN

Inaugural Genomics in Medicine: Individualized Care for Improved Outcomes
February 11-12, 2013 • Moscone North Convention Center • San Francisco, CA
Organized by Cambridge Healthtech Institute

Monday, February 11

7:30 am Registration and Morning Coffee

8:25 Chairperson’s Opening Remarks

Screening for Rare and Difficult-to-Diagnose Diseases

8:30 KEYNOTE PRESENTATION: Genomically-Supported Diagnostic and Drug Reposition Strategies out of Academia
Hakon Hakonarson, M.D., Ph.D., Director, Center for Applied Genomics, Children’s Hospital of Philadelphia
This talk will discuss genomic strategies applied in academia to identify subsets of patients who, based on their genetic make-up, are predicted to have a favorable response profile to drugs that come from reposition opportunities.

9:00 Evolving Approaches to Mutation Detection in Rare Diseases
Tom Scholl, Vice President, Research & Development, Integrated Genetics, LabCorp
Emerging trends in this field will be presented, including the expansion of clinical test content to cover many loci and gains in clinical sensitivity from detecting larger numbers of mutations or from whole-gene sequencing.

9:30 From Raw Sequencing Data to Functional Interpretation
Daniel MacArthur, Ph.D., Group Leader, Analytic and Translational Genetics Unit, Massachusetts General Hospital
This presentation will discuss the key lessons learned from large-scale sequencing studies in both common and rare diseases, with a particular focus on finding mutations underlying severe muscle diseases.

10:00 Coffee Break with Exhibit and Poster Viewing

10:30 Providing Whole Genome Sequencing in the Clinic
David Dimmock, M.D., Assistant Professor, Pediatrics, Medical College of Wisconsin
This presentation will focus on advances in the implementation of genome-wide sequencing in clinical practice. It will address counseling and consent issues specific to testing children. Specifically, it will highlight the challenges of execution in the acute care setting.

11:00 Clinical Utility of Whole Exome Sequencing
Christine M. Eng, M.D., Professor, Department of Molecular and Human Genetics, Baylor College of Medicine
This presentation will discuss the role of whole exome sequencing in the diagnostic evaluation of patients with challenging phenotypes of genetic etiology. Examples of clinical utility, directed medical care, and cost-effectiveness of the whole exome approach to clinical diagnostics will be presented.

11:30 A Neuronal Carnitine Deficiency Hypothesis for Autism
Arthur L. Beaudet, M.D., Henry and Emma Meyer Professor and Chair, Department of Molecular and Human Genetics, Baylor College of Medicine
We have published a paper entitled “A common X-linked inborn error of carnitine biosynthesis may be a risk factor for nondysmorphic autism” (PMID: 22566635). We propose a neuronal carnitine deficiency hypothesis as one risk factor or cause for autism, whereby 10-20% of autism might be preventable.

12:00 pm Luncheon Presentation (Sponsorship Opportunity Available) or Lunch on Your Own

Predictive Tests for Improved Patient Outcomes

1:25 Chairperson’s Remarks

1:30 Implementation of Personalized Healthcare into Clinical Practice: Lessons Learned
Kathryn Teng, M.D., FACP, Director, Center for Personalized Healthcare, Cleveland Clinic
Integrating a pharmacogenetics program into clinical practice requires a vision for the future of healthcare and a roadmap to reach that vision. Pioneering the road to this vision has brought challenges and has led to solutions that might be applied universally.

2:00 Molecular Profiling of Tumors to Select Therapy in Patients with Advanced Refractory Tumors
Ramesh Ramanathan, M.D., Medical Director, The Virginia G. Piper Cancer Center Clinical Trials
This presentation will discuss molecular profiling of tumors using IHC, CGH, and whole genome/exome sequencing to find actionable targets for therapy. Clinical trials and case reports of patients treated by this approach will be presented.

2:30 Sponsored Presentations (Opportunities Available)

3:00 Refreshment Break with Exhibit and Poster Viewing

3:30 Gene Panels vs. Whole Exome Sequencing in Cancer Molecular Testing
Madhuri Hegde, Ph.D., FACMG, Associate Professor, Senior Director, Emory Genetics Laboratory, Department of Human Genetics, Emory University School of Medicine


4:00 Next Generation Sequencing and Cancer Diagnostics
Phil Stephens, Ph.D., Vice President, Cancer Genomics, Foundation Medicine
Foundation Medicine has developed FoundationOne™, a CLIA-certified, comprehensive cancer genomic test that analyzes routine clinical specimens for somatic alterations in 189 relevant cancer genes. Experience with the initial 1,000 consecutive patients will be presented.

4:30 KEYNOTE PRESENTATION: Clinical Cancer Genotyping – Snapshot
John Iafrate, M.D., Ph.D., Assistant Professor, Pathology, Harvard Medical School; Assistant Pathologist, Massachusetts General Hospital
The challenges and opportunities of implementing a broad genotyping assay in routine clinical management of cancer patients will be discussed. Snapshot was launched over three years ago at the Massachusetts General Hospital, with the goal of providing all cancer patients with a genetic fingerprint to guide therapeutic decisions. Lessons learned will be outlined, along with a roadmap for effectively moving testing into the next-generation sequencing era.

5:00 Breakout Discussions (See Web for Details)

6:00 Close of Day

Tuesday, February 12

8:00 am Morning Coffee

Data Management and Analysis

8:10 Chairperson’s Remarks

8:15 Under the Hood of the 1000 Genomes Project
Mark A. DePristo, Ph.D., Associate Director, Medical and Population Genetics Analysis, Broad Institute of MIT and Harvard (on behalf of The 1000 Genomes Project Consortium)
This presentation discusses the evolution of the next-generation sequencing (NGS) data underlying the public 1000 Genomes Project resource, from some of the earliest technologies of 2009 to today’s state-of-the-art data. It will also highlight key NGS analytic advances originating from the Project.

8:45 Delivering Genomic Medicine: Challenges and Opportunities
Heidi L. Rehm, Ph.D., FACMG, Assistant Professor, Pathology, BWH and Harvard Medical School; Director, Laboratory for Molecular Medicine, Partners Healthcare Center for Personalized Genetic Medicine
This talk will cover the speaker’s experience in offering clinical sequencing to patients, from disease-targeted panels to whole genome analyses, as well as supporting the interpretation and delivery of those results to physicians. It will also cover approaches to data sharing within the community.

9:15 From Sequence Files to Physicians Report and the Tools Needed to Get There (Sponsored Presentation)
Martin Seifert, Ph.D., CEO, Genomatix Software
Providing actionable biology from NGS data in a report useful to the practicing clinician is difficult. Ensuring the report is accurate, reproducible, and reflects the biology of the patient is an even larger task. We will show examples of Genomatix’ approach to these issues and how we successfully ensure a secure, accurate, and reproducible report, bridging the gap from sequencer to clinician.

9:30 Rapid Identification of Disease Causative Mutations (Sponsored Presentation)
Ali Torkamani, Ph.D., Co-Founder & CSO, Cypher Genomics
Recent successes in clinical genome sequencing have highlighted the potential for sequencing to greatly improve molecular diagnosis and clinical decision-making. However, these successes have relied upon large bioinformatics teams and in-depth literature surveys. We will demonstrate how the Cypher Genomics software service can quickly return a small set of well-annotated genetic variants most likely to contribute to a patient’s disease.

10:00 Coffee Break with Exhibit and Poster Viewing

Getting Genomic Testing to Clinic

10:30 Sequence Data on Demand: Access, Visualization and Communication of Genome Sequence Data between Physicians, Researchers, and Patients
Sitharthan Kamalakaran, Ph.D., Senior Member, Research Staff, Philips Research North America
Patients’ genome sequences are informative for clinical care over the patient’s lifetime, not just for the diagnosis at hand. We present a web-accessible interface for clinicians to integrate relevant patient genome data into their routine practice through clinically framed queries.

11:00 Targeted Next Generation Sequencing in FFPE Tumor Samples: Distilling High Quality Information from Low Quality Samples (Sponsored Presentation)
Sachin Sah, Senior Scientist, Diagnostics Research Development, Asuragen, Inc.
SuraSeq™ PCR-based enrichment procedures enable accurate and sensitive mutation detection from nanogram inputs of challenging FFPE tumor DNA. Case studies will be presented that highlight the use of complementary NGS platforms and novel bioinformatics for discovery and confirmation studies.

11:30 Transitioning New Technologies from the Bench to the Bedside: Direct Fetal Testing Using Circulating Cell-Free DNA
Allan T. Bombard, M.D., CMO, Sequenom
This presentation will address clinical implementation of new tests in the US, using circulating cell-free DNA for noninvasive prenatal testing (NIPT) of fetal aneuploidy from maternal plasma as an example.

12:00 Moving Genomic Screening to the Clinic: Next Steps
Bruce R. Korf, M.D., Ph.D., Wayne H. and Sara Crews Finley Chair in Medical Genetics; Professor and Chair, Department of Genetics; Director, Heflin Center for Genomic Sciences, University of Alabama at Birmingham
Since the sequencing of the human genome, there has been an expectation that a flood of advances would find their way to the clinic, and, indeed, the pace of translation of genomics to clinical application is accelerating. The future of medical care will likely evolve through the convergence of two disruptive technologies, information science and genomics, which in a sense can be viewed as one and the same.

12:30 pm Close of Symposium

Featured Presentations

Genomically-Supported Diagnostic and Drug Reposition Strategies out of Academia
Hakon Hakonarson, M.D., Ph.D., Director, Center for Applied Genomics, Children’s Hospital of Philadelphia

Clinical Cancer Genotyping – Snapshot
John Iafrate, M.D., Ph.D., Assistant Professor, Pathology, Harvard Medical School; Assistant Pathologist, Massachusetts General Hospital

Moving Genomic Screening to the Clinic: Next Steps
Bruce R. Korf, M.D., Ph.D., Wayne H. and Sara Crews Finley Chair in Medical Genetics; Professor and Chair, Department of Genetics; Director, Heflin Center for Genomic Sciences, University of Alabama at Birmingham

Reasons to Attend

• Hear keynote presentations from Dr. Hakon Hakonarson of CHOP and Dr. John Iafrate of MGH
• Find out how to transition genomic screening to the clinic
• Discover evolving approaches to mutation detection
• Explore data management and analysis solutions
• Learn the role of pharmacogenomics in patient care
• Network with genomic thought leaders
• Participate in interactive, problem-solving breakout discussions

Molecular Med Tri-Con 2013
February 11-15 • Moscone North Convention Center • San Francisco, CA
TriConference.com

Plenary Keynotes

Wednesday, February 13, 8:00 – 9:40 am

Personalized Oncology – Fulfilling the Promise for Today’s Patients
In honor of the 20th anniversary of the Molecular Medicine Tri-Conference, CHI and Cancer Commons will present a plenary panel on Personalized Oncology. Innovations such as NGS and The Cancer Genome Atlas have revealed that cancer comprises hundreds of distinct molecular diseases. Early clinical successes with targeted therapies suggest that cancer might one day be managed as a chronic disease using an evolving cocktail of drugs. Representing all five conference channels (Diagnostics, Therapeutics, Clinical, Informatics, and Cancer), a panel of experts will lead a highly interactive exploration of what it will take to realize this vision in the near future.

Moderator: Marty Tenenbaum, Ph.D., Founder and Chairman, Cancer Commons; Prominent AI Researcher; Cancer Survivor

Tony Blau, M.D., Professor, Department of Medicine/Hematology and Adjunct Professor, Department of Genome Sciences, University of Washington; Attending Physician, Seattle Cancer Care Alliance; Co-Director, Institute for Stem Cell and Regenerative Medicine, University of Washington and the Program for Stem and Progenitor Cell Biology at the UW/FHCRC Cancer Consortium; Founder and Scientific Officer, Partners in Personal Oncology
Sarah Greene, Executive Director, Cancer Commons
Laurence Marton, M.D., Adjunct Professor, Department of Laboratory Medicine, University of California San Francisco; former Dean of Medicine, University of Wisconsin
Jane Reese-Coulbourne, MS, ChE, Executive Director, Reagan-Udall Foundation for the FDA; former Board Chair, Lung Cancer Alliance; Cancer Survivor
Anil Sethi, CEO, Pinch Bio; HL7 Pioneer and Health Informatics Entrepreneur
Joshua Stuart, Ph.D., Associate Professor, Department of Biomolecular Engineering, University of California Santa Cruz

Thursday, February 14, 8:00 – 9:40 am

Plenary Keynote Panel: Emerging Technologies & Industry Perspectives
This session features a series of presentations on emerging and hot technologies in diagnostics, drug discovery & development, informatics, and oncology. Interactive Q&A discussion with the audience will be included.

Moderator: To be Announced

Gregory Parekh, Ph.D., CEO, Biocartis
Kevin Bobofchak, Ph.D., Pathway Studio Product Manager, Elsevier
Jeremy Bridge-Cook, Ph.D., Senior Vice President, Research & Development, Luminex Corporation
Panelist to be Announced, Remedy Informatics
Harry Glorikian, Managing Partner, Scientia Advisors, LLC
Lynn R. Zieske, Ph.D., Vice President, Commercial Solutions, Singulex, Inc.


Conference Programs (Feb 13-15)

Diagnostics Channel

Molecular Diagnostics

Personalized Diagnostics

Cancer Molecular Markers

Circulating Tumor Cells

Digital Pathology – NEW

Companion Diagnostics – NEW

Therapeutics Channel

Mastering Medicinal Chemistry

Cancer Biologics

Clinical and Translational Science

Clinical Channel

Oncology Clinical Trials

Clinical and Translational Science

Clinical Sequencing – NEW

Informatics Channel

Bioinformatics in the Genome Era

Integrated R&D Informatics and Knowledge Management

Cancer Channel

Cancer Molecular Markers

Circulating Tumor Cells

Predictive Pre-Clinical Models in Oncology – NEW

Oncology Clinical Trials

Cancer Biologics

Symposia* (Feb 11-12)

Targeting Cancer Stem Cells

Genomics in Medicine – NEW

Point-of-Care Diagnostics

Quantitative Real-Time PCR – NEW

Next Generation Pathology

Partnering Forum* (Feb 11-12)

Emerging Molecular Diagnostics

Short Courses* (Feb 12)

1:30-4:30 pm

SC1 Identification & Characterization of Cancer Stem Cells
SC2 Commercialization Boot Camp: Manual for Success in the Molecular Diagnostics Marketplace
SC3 NGS Data and the Cloud
SC4 Best Practices in Personalized and Translational Medicine
SC5 Latest Advances in Molecular Pathology
SC6 Regulatory Approval of a Therapeutic & Companion Diagnostic: Nuts & Bolts
SC7 PCR Part I: qPCR in Molecular Diagnostics
SC8 Data Visualization
SC9 Methods for Synthesis & Screening of Macrocyclic Compound Libraries

5:00-8:00 pm (Dinner)
SC10 PCR Part II: Digital PCR Applications and Advances
SC11 Sample Prep and Biorepositories for Cancer Research
SC12 Next-Generation Sequencing in Molecular Pathology: Challenges and Applications
SC13 Strategies for Companion Diagnostics Development
SC14 Patient-Derived Cancer Tissue Xenograft Models
SC16 Microfluidics Technology and Market Trends
SC17 Open Cloud & Data Science

TRI-CON All Access Package
Get the best 5-day value! Our All Access Package is a convenient, cost-effective way to attend each aspect of Molecular Med TRI-CON 2013. The package includes access to 1 Symposium or Partnering Forum, 2 Short Courses, and 1 Conference Program.

*Separate registration required with a la carte pricing

Co-located Event


Recommended Programs:

Main Conference

• Personalized Diagnostics

Short Courses

• NGS Data and the Cloud

• PCR Part I: qPCR in Molecular Diagnostics

• NGS in Molecular Pathology

• PCR Part II: Digital PCR Applications and Advances


Hotel Information

Reserve your hotel and save $100 off your conference registration*
*You must book your reservation under the Tri-Conference room block for a minimum of 4 nights at the Marriott or the Intercontinental Hotel. One discount per hotel room.

Conference Venue:
The Moscone North Convention Center
747 Howard Street
San Francisco, CA 94103
http://www.moscone.com

Please visit TriConference.com to make your reservations online or call the hotel directly to reserve your sleeping accommodations. You will need to identify yourself as a Molecular Med Tri-Con attendee to receive the discounted room rate with the host hotel. Reservations made after the cut-off date or after the group room block has been filled (whichever comes first) will be accepted on a space- and rate-availability basis. Rooms are limited, so please book early.

Sponsorship & Exhibit Opportunities

CHI offers comprehensive sponsorship packages that include presentation opportunities, exhibit space and branding, as well as the use of the pre- and post-show delegate lists. Signing on early will allow you to maximize exposure to hard-to-reach decision makers.

Breakfast & Luncheon Presentations
Opportunities may include a 15- or 30-minute podium presentation during the main agenda. Boxed lunches are delivered into the main session room, which guarantees audience attendance and participation. Packages include exhibit space, on-site branding, and more.

Invitation-Only VIP Dinner/Private Receptions
Sponsors will select their top prospects from the conference pre-registration list for an evening of networking at the hotel or at a choice local venue. CHI will extend invitations and deliver prospects. The evening will be customized according to the sponsor’s objectives.

Exhibit
Exhibitors will enjoy facilitated networking opportunities with 3,000 highly targeted delegates at the overall event. Speak face-to-face with prospective clients and showcase your latest product, service, or solution.

Inquire about additional branding opportunities, including our Valentine’s Day Soiree sponsorship!

Looking for additional ways to drive leads to your sales team? CHI can help! We offer clients numerous options for custom lead generation programs to address their marketing and sales needs, including:
• Live Webinars
• White Papers
• Market Surveys
• Podcasts and More!

For sponsorship & exhibit information, please contact:
Companies A-K: Jon Stroup, Manager, Business Development, 781-972-5483 • jstroup@healthtech.com
Companies L-Z: Joseph Vacca, Manager, Business Development, 781-972-5431 • jvacca@healthtech.com

How to Register: TriConference.com
reg@healthtech.com • P: 781.972.5400 or Toll-free in the U.S. 888.999.6288
Please use keycode GDX F when registering!

Short Courses (Tuesday, Feb 12)
1 Short Course: $695 / $395
2 Short Courses: $995 / $695

Diagnostics Channel
P1 Molecular Diagnostics
P2 Personalized Diagnostics
P3 Cancer Molecular Markers
P4 Circulating Tumor Cells
P5 Digital Pathology – NEW
P6 Companion Diagnostics – NEW

Informatics Channel
P13 Bioinformatics
P14 Integrated R&D Informatics & Knowledge Management

Cancer Channel
P3 Cancer Molecular Markers
P4 Circulating Tumor Cells
P15 Predictive Pre-Clinical Models in Oncology – NEW
P10 Oncology Clinical Trials
P9 Cancer Biologics

Clinical Channel
P10 Oncology Clinical Trials
P11 Clinical and Translational Science
P12 Clinical Sequencing – NEW

Therapeutics Channel
P7 Mastering Medicinal Chemistry Summit
P9 Cancer Biologics
P11 Clinical and Translational Science

Symposia
S1 Targeting Cancer Stem Cells
S2 Genomics in Medicine
S3 Point-of-Care Diagnostics
S4 Quantitative Real-Time PCR
S5 Next Generation Pathology


SOURCE:

http://www.triconference.com/uploadedFiles/MMTC/13/MMTC_Symposium_Final_GDX.pdf


Reporter: Aviva Lev-Ari, PhD, RN

Genomics and the State of Science Clarity

Projects supported by the US National Institutes of Health will have produced 68,000 total human genomes — around 18,000 of those whole human genomes — through the end of this year, National Human Genome Research Institute estimates indicate. And in his book, The Creative Destruction of Medicine, the Scripps Research Institute‘s Eric Topol projects that 1 million human genomes will have been sequenced by 2013 and 5 million by 2014.

“There’s a lot of inventory out there, and these things are being generated at a fiendish rate,” says Daniel MacArthur, a group leader in Massachusetts General Hospital’s Analytic and Translational Genetics Unit. “From a capacity perspective … millions of genomes are not that far off. If you look at the rate that we’re scaling, we can certainly achieve that.”

The prospect of so many genomes has brought clinical interpretation into focus — and for good reason. Save for regulatory hurdles, it seems to be the single greatest barrier to the broad implementation of genomic medicine.

But there is an important distinction to be made between the interpretation of an apparently healthy person’s genome and that of an individual who is already affected by a disease, whether known or unknown.

In an April Science Translational Medicine paper, Johns Hopkins University School of Medicine‘s Nicholas Roberts and his colleagues reported that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals. The researchers then concluded that whole-genome sequencing was not likely to be clinically useful for that purpose. (See sidebar, story end.)

“The Roberts paper was really about the value of omniscient interpretation of whole-genome sequences in asymptomatic individuals and what were the likely theoretical limits,” says Isaac Kohane, chair of the informatics program at Children’s Hospital Boston. “That was certainly an important study, and it was important to establish what those limits of knowledge are in asymptomatic populations. But, in fact, the major and most important use cases [for whole-genome sequencing] may be in cases of disease.”

Still, targeted clinical interpretations are not cut and dried. “Even in cases of disease, it’s not clear that we know now how to look across multiple genes and figure out which are relevant, which are not,” Kohane adds.

While substantial progress has been made — in particular, for genetic diseases, including certain cancers — ambiguities have clouded even the most targeted interpretation efforts to date. Technological challenges, meager sample sizes, and a need for increased, fail-safe automation all have hampered researchers’ attempts to reliably interpret the clinical significance of genomic variation. But perhaps the greatest problem, experts say, is a lack of community-wide standards for the task.

Genes to genomes

When scientists analyzed James Watson’s genome — his was the first personal sequence, completed in 2007 and published in Nature in 2008 — they were surprised to find that he harbored two putative homozygous SNPs matching Human Gene Mutation Database entries that, were they truly homozygous, would have produced severe clinical phenotypes.

But Watson was not sick.

As researchers search more and more genomes, such inconsistencies are increasingly common.

“My take on what has happened is that the people who were doing the interpretation of the raw sequence largely were coming from a SNPs world, where they were thinking about sequence variants that have been observed before, or that have an appreciable frequency, and weren’t thinking very much about the singleton sequence variants,” says Sean Tavtigian, associate professor of oncology at the University of Utah.

“There is a qualitative difference between looking at whole-genome sequences and looking at single genes or, even more typically, small numbers of variants that have been previously implicated in a disease,” Boston’s Kohane adds.
“Previously, because of the cost and time limitations around sequencing and genotyping, we only looked at variants in genes for which we had a clinical indication. Now, since we can essentially see that in the near future we will be able to do a full genome sequence for essentially the same cost as just a focused set-of-variants test, all of the sudden we have to ask ourselves: What is the meaning of variants that fall outside where we would have ordinarily looked for a given disease or, in fact, if there is no disease at all?”

Mass General’s MacArthur says it has been difficult to pinpoint causal variants because they are enriched for both sequencing and annotation errors. “In the genome era, we can generate those false positives at an amazing rate, and we need to work hard to filter them back out,” he says.

“Clinical geneticists have been working on rare diseases for a long time, and have identified many genes, and are used to working in a world where there is sequence data available only from, say, one gene with a strong biological hypothesis. Suddenly, they’re in this world where they have data from patients on all 20,000 genes,” MacArthur adds. “There’s a fundamental mind-shift there, in shifting from one gene through to every gene. My impression is that the community as a whole hasn’t really internalized that shift; people still have a sense in their head that if you see a strongly damaging variant that segregates with the disease, and maybe there’s some sort of biological plausibility around it as well, that that’s probably the causal variant.”

Studies have shown that that’s not necessarily so. Because of this, “I do worry that in the next year or so we’ll see increasing numbers of mutations published that later prove to just be benign polymorphisms,” MacArthur adds.

“The meaning of whole-genome sequence I think is very much front-and-center of where genomics is going to go. What is the true, clinical meaning? What is the interpretation? And, there’s really a double-edged sword,” Kohane says. On one hand, “if you only focus on the genes that you believe are relevant to the condition you’re studying, then you might miss some important findings,” he says. Conversely, “if you look at everything, the likelihood of a false positive becomes very, very high. Because, if you look at enough things, invariably you will find something abnormal,” he adds.
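Kohane's multiple-testing point can be made concrete with a back-of-the-envelope calculation: even a tiny per-variant false-positive probability compounds quickly across a whole genome. The numbers below are purely illustrative, not figures from the article.

```python
# Illustrative sketch: with a small per-variant false-positive
# probability p, the chance of at least one spurious "abnormal"
# finding across n examined sites is 1 - (1 - p)**n.

def p_any_false_positive(p, n):
    """Probability of at least one false positive in n independent tests."""
    return 1 - (1 - p) ** n

# Even at p = 1e-4, scanning on the order of 20,000 genes makes a
# false hit more likely than not.
print(f"{p_any_false_positive(1e-4, 20000):.2f}")  # ~0.86
```

The independence assumption is a simplification, but it captures why a genome-wide scan will "invariably find something abnormal" while a focused test on a handful of clinically indicated variants will not.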

False positives are but one of several challenges facing scientists working to analyze genomes in a clinical context.

Technical difficulties

That advances in sequencing technologies are far outstripping researchers’ abilities to analyze the data they produce has become a truism of the field. But current sequencing platforms are still far from perfect, making most analyses complicated and nuanced. Among other things, improvements in both read length and quality are needed to enable accurate and reproducible interpretations.

“The most promising thing is the rate at which the cost-per-base-pair of massively parallel sequencing has dropped,” Utah’s Tavtigian says. Still, the cost of clinical sequencing is not inconsequential. “The $1,000, $2,000, $3,000 whole-genome sequences that you can do right now do not come anywhere close to 99 percent probability to identify a singleton sequence variant, especially a biologically severe singleton sequence variant,” he says. “Right now, the real price of just the laboratory sequencing to reach that quality is at least $5,000, if not $10,000.”

However, Tavtigian adds, “techniques for multiplexing many samples into a channel for sequencing have come along. They’re not perfect yet, but they’re going to improve over the next year or so.”

Using next-generation sequencing platforms, researchers have uncovered a variety of SNPs, copy-number variants, and small indels. But to MacArthur’s mind, current read lengths are not up to par when it comes to clinical-grade sequencing, and they have made supernumerary quality-control measures necessary.

“There’s no question that we’re already seeing huge improvements. … And as we add in to that changes in technology — for instance much, much longer sequencing reads, more accurate reads, possibly combining different platforms — I think these sorts of [quality-control] issues will begin to go away over the next couple of years,” MacArthur says. “But at this stage, there is still a substantial quality-control component in any sort of interpretation process. We don’t have perfect genomes.”

In a 2011 Nature Biotechnology paper, Stanford University’s Michael Snyder and his colleagues sought to examine the accuracy and completeness of single-nucleotide variant and indel calls from both the Illumina and Complete Genomics platforms by sequencing the genome of one individual using both technologies. Though the researchers found that more than 88 percent of the unique single-nucleotide variants they detected were concordant between the two platforms, only around one-quarter of the indel calls they generated matched up. Overall, the authors reported having found tens of thousands of platform-specific variant calls, around 60 percent of which they later validated by genotyping array.
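The kind of cross-platform comparison the Snyder study performed reduces, at its core, to set operations over variant calls keyed by position and allele. The sketch below is a hypothetical illustration of that bookkeeping; the platform names are from the study, but the toy calls and the concordance rate they produce are invented.

```python
# Hypothetical sketch: concordance between two variant call sets,
# each keyed by (chromosome, position, ref allele, alt allele).

def concordance(calls_a, calls_b):
    """Return (shared, only_in_a, only_in_b) for two call sets."""
    a, b = set(calls_a), set(calls_b)
    return a & b, a - b, b - a

illumina = {("chr1", 1000, "A", "G"), ("chr2", 500, "T", "C"), ("chr3", 42, "G", "A")}
complete = {("chr1", 1000, "A", "G"), ("chr2", 500, "T", "C"), ("chr4", 7, "C", "T")}

shared, ilmn_only, cg_only = concordance(illumina, complete)
rate = len(shared) / len(illumina | complete)  # fraction of all unique calls shared
print(f"concordant: {len(shared)}, platform-specific: {len(ilmn_only) + len(cg_only)}")
```

In practice the comparison is far messier — indels in particular can be represented in multiple equivalent ways, which is one reason the study's indel concordance was so much lower than its SNV concordance.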

For clinical sequencing to ever become widespread, “we’re going to have to be able to show the same reproducibility and test characteristic modification as we have for, let’s say, an LDL cholesterol level,” Boston’s Kohane says. “And if you measure it in one place, it should not be too different from another place. … Even before we can get to the clinical meaning of the genomes, we’re going to have to get some industry-wide standards around quality of sequencing.”
Scripps’ Topol adds that when it comes to detecting rare variants, “there still needs to be a big upgrade in accuracy.”

Analytical issues

Beyond sequencing, technological advances must also be made on the analysis end. “The next thing, of course, is once you have better accuracy … being able to do all of the analytical work,” Topol says. “We’re getting better at the exome, but everything outside of protein-coding elements, there’s still a tremendous challenge.”

Indeed, that challenge has inspired another — a friendly competition among bioinformaticians working to analyze pediatric genomes in a pedigree study.

With enrollment closed and all sequencing completed, participants in the Children’s Hospital Boston-sponsored CLARITY Challenge have rolled up their shirtsleeves and begun to dig into the data — de-identified clinical summaries and exome or whole-genome sequences generated by Complete Genomics and Life Technologies for three children affected by rare diseases of unknown genetic basis, and their parents. According to its organizers, the competition aims to help set standards for genomic analysis and interpretation in a clinical setting, and for returning actionable results to clinicians and patients.

“A bunch of teams have signed up to provide clinical-grade reports that will be checked by a blue-ribbon panel of judges later this year to compare and contrast the different forms of clinical reporting at the genome-wide level,” Kohane says. The winning team will be announced this fall and will receive a $25,000 prize, he adds.

While the competition covers all aspects of clinical sequencing — from readout to reporting — it is important to recognize that, more generally, there may not be one right answer and that the challenges are far-reaching, affecting even the most basic aspects of analysis.

“There is a lot of algorithm investment still to be made in order to get very good at identifying the very rare or singleton sequence variants from the massively parallel sequencing reads efficiently, accurately, [and with] sensitivity,” Utah’s Tavtigian says.

Picking up a variant that has been seen before is one thing, but detecting a potentially causal, though as-yet-unclassified variant is a beast of another nature.

“Novel mutations usually need extensive knowledge but also validation. That’s one of the challenges,” says Zhongming Zhao, associate professor of biomedical informatics at Vanderbilt University. “Validation in terms of a disease study is most challenging right now, because it is very time-consuming, and usually you need to find a good number of samples with similar disease to show this is not by chance.”

Search for significance

Much as sequencing a human genome was more laborious in the early to mid-2000s than it is now, genome interpretation has also become increasingly automated.

Beyond standard quality-control checks, the process of moving from raw data to calling variants is now semiautomatic. “There’s essentially no manual intervention required there, apart from running our eyes over [the calls], making sure nothing has gone horribly wrong,” says Mass General’s MacArthur. “The step that requires manual intervention now is all about taking that list of variants that comes out of that and looking at all the available biological data that exists on the Web, [coming] up with a short-list of genes, and then all of us basically have a look at all sorts of online resources to see if any of them have some kind of intuitive biological profile that fits with the disease we’re thinking about.”
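The automated part of the process MacArthur describes — narrowing a genome's worth of variants to a short list worth manual review — typically combines population-frequency and predicted-effect filters. The sketch below is a minimal, hypothetical version; the field names, thresholds, and example variants are illustrative assumptions, not MacArthur's actual pipeline.

```python
# Minimal sketch of an automated rare-variant shortlisting step:
# keep variants that are rare in the population AND predicted damaging.

DAMAGING = {"missense", "nonsense", "frameshift", "splice-site"}

def shortlist(variants, max_pop_freq=0.001):
    """Filter a variant list down to rare, putatively damaging candidates."""
    return [v for v in variants
            if v["pop_freq"] <= max_pop_freq and v["effect"] in DAMAGING]

variants = [
    {"gene": "DMD",   "pop_freq": 0.0001, "effect": "nonsense"},    # kept
    {"gene": "TTN",   "pop_freq": 0.02,   "effect": "missense"},    # too common
    {"gene": "BRCA1", "pop_freq": 0.0005, "effect": "synonymous"},  # not damaging
]
print([v["gene"] for v in shortlist(variants)])  # only DMD survives
```

As the article goes on to argue, a variant surviving such filters is a candidate, not a diagnosis: segregation, biological plausibility, and ideally functional validation still have to follow.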

Of course, intuitive leads are not foolproof, nor are current mutation databases. (See sidebar, story end.) And so, MacArthur says, “we need to start replacing the sort of intuitive biological approach with a much more data-informed approach.”

Developing such an approach hinges in part on having more genomes. “If we get thousands — tens of thousands — of people sequenced with various different phenotypes that have been crisply identified, that’s going to be so important because it’s the coupling of the processing of the data with having rare variants, structural variants, all the other genomic variations to understand the relationship of whole-genome sequence of any particular phenotype and a sequence variant,” Scripps’ Topol says.

Vanderbilt’s Zhao says that sample size is still an issue. “Right now, the number of samples in each whole-genome sequencing-based publication is still very limited,” he says. At the same time, he adds, “when I read peers’ grant applications, they are proposing more and more whole-genome sequencing.”

When it comes to disease studies, sequencing a whole swath of apparently healthy people is not likely to ever be worthwhile. According to Utah’s Tavtigian, “the place where it is cost-effective is when you test cases and then, if something is found in the case, go on and test all of the first-degree relatives of the case — reflex testing for the first-degree relatives,” he says. “If there is something that’s pathogenic for heart disease or colon cancer or whatever is found in an index case, then there is a roughly 50 percent chance that the first-degree relatives are going to carry the same thing, whereas if you go and apply that same test to someone in the general population, the probability that they carry something of interest is a lot lower.”

But more genomes, even familial ones, are not the only missing elements. To fill in the functional blanks, researchers require multiple data types.

“We’ve been pretty much sequence-centric in our thinking for many years now because that was where all the attention [was],” Topol says. “But that leaves the other ‘omes out there.”

From the transcriptome to the proteome, the metabolome, the microbiome, and beyond — Topol says that because all the ‘omes contribute to human health, they all merit review.

“The ability to integrate information about the other ‘omics will probably be a critical direction to understand the underpinnings of disease,” he says. “I call it the ‘panoromic’ view — that is really going to become a critical future direction once we can do those other ‘omics readily. We’re quite a ways off from that right now.”

Mass General’s MacArthur envisages “rolling in data from protein-protein interaction networks and tissue expression data — pulling all of these together into a model that predicts, given the phenotype, given the systems that appear to be disrupted by this variant, what are the most likely set of genes to be involved,” he says. From there, whittling that set down to putative causal variants would be simpler.

“And at the end of that, I think we’ll end up with a relatively small number of variants, each of which has a probability score associated with it, along with a whole host of additional information that a clinician can just drill down into in an intuitive way in making a diagnosis in that individual,” he adds.

According to MacArthur, “we’re already moving in this direction — in five years I think we will have made substantial progress toward that.” He adds, “I certainly think within five years we will be diagnosing the majority of severe genetic disease patients; the vast majority of those we’ll be able to assign a likely causal variant using this type of approach.”

Tavtigian, however, highlights a potential pitfall. While he says that “integration of those [multivariate] data helps a lot with assessing unclassified variants,” it is not enough to help clinicians ascertain causality. Functional assays, which can be both inconclusive and costly, will be needed for some unclassified variant hits, particularly those that are thought to be clinically meaningful.

“I don’t see how you’re going to do a functional assay for less than like $1,000,” he says. “That means that unless the cost of the sequencing test also includes a whole bunch of money for assessing the unclassified variants, a sequencing test is going to create more of a mess than it cleans up.”

Rare, common

Despite the challenges, there have been plenty of clinical sequencing success stories. Already, Scripps’ Topol says there have been “two big fronts in 2012: One is the unknown diseases [and] the other one, of course, is cancer.” But scientists say that despite the challenges, whole-genome sequencing might also become clinically useful for asymptomatic individuals in the future.

Down the line, scientists have their sights set on sequencing asymptomatic individuals to predict disease risk. “The long-term goal is to have any person walk off the street, be able to take a look at their genome and, without even looking at them clinically, say: ‘This is a person who will almost certainly have phenotype X,'” MacArthur says. “That is a long way away. And, of course, there are many phenotypes that can’t be predicted from genetic data alone.”

Nearer term, Boston’s Kohane imagines that newborns might have their genomes screened for a number of neonatal or pediatric conditions.

Overall, he says, it’s tough to say exactly where all of the chips might fall. “It’s going to be an interesting few years where the sequencing companies will be aligning themselves with laboratory testing companies and with genome interpretation companies,” Kohane says.

Even if clinical sequencing does not show utility for cases other than genetic diseases, it could still become common practice.

“Worldwide, there are certainly millions of people with severe diseases that would benefit from whole-genome sequencing, so the demand is certainly there,” MacArthur says. “It’s just a question of whether we can develop the infrastructure that is required to turn the research-grade genomes that we’re generating at the moment into clinical-grade genomes. Given the demand and the practical benefit of having this information … I don’t think there is any question that we will continue to drive, pretty aggressively, towards large-scale genome sequencing.”

Kohane adds that “although rare diseases are rare, in aggregate they’re actually not — 5 percent of the population, or 1 in 20, is beginning to look common.”

Despite conflicting reports as to its clinical value, given the rapid declines in cost, Kohane says it’s possible that a whole-genome sequence could be less expensive than a CT scan in the next five years. Confident that many of the interpretation issues will be worked out by then, he adds, “this soon-to-be-very-inexpensive test will actually have a lot of clinical value in a variety of situations. I think it will become part of the decision procedure of most doctors.”


[Sidebar] ‘Predictive Capacity’ Challenged

In Science Translational Medicine in April, Johns Hopkins University School of Medicine’s Nicholas Roberts and his colleagues showed that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals and concluded that whole-genome sequencing was unlikely to be useful for that purpose.

As the Scripps Research Institute’s Eric Topol says, that Roberts and his colleagues examined the predictive capacity of personal genome sequencing “without any genome sequences” was but one flaw of their interpretation.

In a comment appearing in the same journal in May, Topol elaborated on this criticism, and noted that the Roberts et al. study essentially showed nothing new. “We cannot know the predictive capacity of whole-genome sequencing until we have sequenced a large number of individuals with like conditions,” Topol wrote.

Elsewhere in the journal, Tel Aviv University’s David Golan and Saharon Rosset noted that slightly tweaking the gene-environment parameters of the mathematical model used by Roberts et al. showed that the “predictive capacity of genomes may be higher than their maximal estimates.”

Colin Begg and Malcolm Pike from Memorial Sloan-Kettering Cancer Center also commented on the study in Science Translational Medicine, reporting their alternative calculation of the predictive capacity of personal sequencing and their analysis of cancer occurrence in the second breast of breast cancer patients, both of which, they wrote, “offer a more optimistic view of the predictive value of genetic data.”

In response to those comments, Bert Vogelstein — who co-authored the Roberts et al. study — and his colleagues wrote in Science Translational Medicine that their “group was the first to show that unbiased genome-wide sequencing could illuminate the basis for a hereditary disease,” adding that they are “acutely aware of its immense power to elucidate disease pathogenesis.” However, Vogelstein and his colleagues also said that recognizing the potential limitations of personal genome sequencing is important to “minimize false expectations and foster the most fruitful investigations.”


[Sidebar] ‘The Single Biggest Problem’

That there is currently no comprehensive, accurate, and openly accessible database of human disease-causing mutations “is the single greatest failure of modern human genetics,” Massachusetts General Hospital’s Daniel MacArthur says.

“We’ve invested so much effort and so much money in researching these Mendelian diseases, and yet we have never managed as a community to centralize all of those mutations in a single resource that’s actually useful,” MacArthur says. While he notes that several groups have produced enormously helpful resources and that others are developing more, currently “none covers anywhere close to the whole of the literature with the degree of detail that is required to make an accurate interpretation.”

Because of this, he adds, researchers are pouring time and resources into rehashing one another’s efforts and chasing down false leads.

“As anyone at the moment who is sequencing genomes can tell you, when you look at a person’s genome and you compare it to any of these databases, you find things that just shouldn’t be there — homozygous mutations that are predicted to be severe, recessive, disease-causing variants and dominant mutations all over the place, maybe a dozen or more, that they’ve seen in every genome,” MacArthur says. “Those things are clearly not what they claim to be, in the sense that a person isn’t sick.” Most often, he adds, the researchers who reported that variant as disease-causing were mistaken. Less commonly, the database moderators are at fault.

“The single biggest problem is that the literature contains a lot of noise. There are things that have been reported to be mutations that just aren’t. And, of course, a lot of the databases are missing a lot of mutations as well,” MacArthur adds. “Until we have a complete database of severe disease mutations that we can trust, genome interpretation will always be far more complicated than it should be.”

Tracy Vence is a senior editor of Genome Technology.

Source: 

http://www.genomeweb.com/node/1098636/

NIST Consortium Embarks on Developing ‘Meter Stick of the Genome’ for Clinical Sequencing

September 05, 2012

The National Institute of Standards and Technology has founded a consortium, called “Genome in a Bottle,” to develop reference materials and performance metrics for clinical human genome sequencing.

Following an initial workshop in April, consortium members – which include stakeholders from industry, academia, and the government – met at NIST last month to discuss details and timelines for the project.

The current aim is to have the first reference genome — consisting of genomic DNA for a specific human sample and whole-genome sequencing data with variant calls for that sample — available by the end of next year, and another, more complete version by mid-2014.

“At present, there are no widely accepted genomics standards or quantitative performance metrics for confidence in variant calling,” the consortium wrote in its work plan, which was discussed at the meeting. Its main motivation is “to develop widely accepted reference materials and accompanying performance metrics to provide a strong scientific foundation for the development of regulations and professional standards for clinical sequencing.”

“This is like the meter stick of the genome,” said Marc Salit, leader of the Multiplexed Biomolecular Science group in NIST’s Materials Measurement Laboratory and one of the consortium’s organizers. He and his colleagues were approached by several vendors of next-generation sequencing instrumentation about the possibility of generating standards for assessing the performance of next-gen sequencing in clinical laboratories. The project, he said, will focus on whole-genome sequencing but will also include targeted sequencing applications.

The consortium, which receives funding from NIST and the Food and Drug Administration, is open for anyone to participate. About 100 people, representing 40 to 50 organizations, attended last month’s meeting, among them representatives from Illumina, Life Technologies, Pacific Biosciences, Complete Genomics, the FDA, the Centers for Disease Control and Prevention, commercial and academic clinical laboratories, and a number of large-scale sequencing centers.

Four working groups will be responsible for different aspects of the project: a group led by Andrew Grupe at Celera will select and design the reference materials; a group headed by Elliott Margulies at Illumina will characterize the reference materials experimentally, using multiple sequencing platforms; Steve Sherry at the National Center for Biotechnology Information is heading a bioinformatics, data integration, and data representation group to analyze and represent the experimental data; and Justin Johnson from EdgeBio is in charge of a performance metrics and “figures of merit” group to help laboratories use the reference materials to characterize their own performance.

The reference materials will include both human genomic DNA and synthetic DNA that can be used as spike-in controls. Eventually, NIST plans to release the references as Standard Reference Materials that will be “internationally recognized as certified reference materials of higher order.”

According to Salit, there was some discussion at the meeting about what sample to select for a national reference genome. The initial plan was to use a HapMap sample – NA12878, a female from the CEPH pedigree from Utah – but it turned out that HapMap samples are consented for research use only and not for commercial use, for example in an in vitro diagnostic or for potential re-identification from sequence data.

The genome of NA12878 has already been extensively characterized, and the CDC is developing it as a reference for clinical laboratories doing targeted sequencing. “We were going to build on that momentum and make our first reference material the same genome,” Salit said. But because of the consent issues, NIST’s institutional review board and legal experts are currently evaluating whether the sample can be used.

In the meantime, consortium members have been “quite enthusiastic” about using samples from the Harvard University’s Personal Genome Project, which are broadly consented, Salit said.

The reference material working group issued a recommendation to develop a set of genomes from eight ethnically diverse parent-child trios as references, he said. For cancer applications, the references may also potentially include a tumor-normal pair.

The consortium will characterize all reference materials by several sequencing platforms. Several instrument vendors, as well as a couple of academic labs, have offered to contribute to data production. According to Justin Zook, a biomedical engineer at NIST and another organizer of the consortium, the current plan is to use sequencing technology from Illumina, Life Technologies, Complete Genomics, and – at least for the first genome – PacBio. Some of the sequencing will be done internally at NIST, which has Life Tech’s 5500 and Ion Torrent PGM available. In addition, the consortium might consider fosmid sequencing, which would provide phasing information and lower the error rate, as well as optical mapping to gain structural information, Zook said.

He and his colleagues have developed new methods for calling consensus variants from different data sets already available for the NA12878 sample, which they are planning to submit for publication in the near future. A fraction of the genotype calls will be validated using other methods, such as microarrays and Sanger sequencing. Consensus genotypes with associated confidence levels will eventually be released publicly as NIST Reference Data.
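At its simplest, deriving consensus genotypes from several independent data sets means taking, per site, the call most platforms agree on, with the level of agreement serving as a rough confidence signal. The sketch below illustrates that idea only; it is a hedged simplification, not the method Zook and colleagues are preparing for publication.

```python
# Hedged sketch of consensus genotype calling across platforms:
# per-site majority vote, with the agreement fraction as a crude
# confidence score.

from collections import Counter

def consensus(genotype_calls):
    """genotype_calls: {site: [genotype from each platform]} ->
       {site: (consensus_genotype, agreement_fraction)}"""
    out = {}
    for site, calls in genotype_calls.items():
        genotype, votes = Counter(calls).most_common(1)[0]
        out[site] = (genotype, votes / len(calls))
    return out

calls = {
    ("chr1", 1000): ["0/1", "0/1", "0/1"],  # unanimous
    ("chr2", 500):  ["0/1", "1/1", "0/1"],  # discordant
}
for site, (gt, conf) in sorted(consensus(calls).items()):
    print(site, gt, f"{conf:.2f}")
```

Sites with low agreement are exactly the ones Zook flags as needing a representation that distinguishes "confidently homozygous reference" from "genotype unknown" — a plain majority vote cannot make that distinction on its own.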

An important part of NIST’s work on the data analysis will be to develop probabilistic confidence estimates for the variant calls. It will also be important to distinguish between homozygous reference genotypes and areas in the genome “where you’re not sure what the genotype is,” Zook said, adding that this will require new data formats.

Coming up with confidence estimates for the different types of variants will be challenging, Zook said, particularly for indels and structural variants. Also, representing complex variants has not been standardized yet.

Several meeting participants called for “reproducible research and transparency in the analysis,” Salit said, and there were discussions about how to implement that at the technical level, including data archives so anyone can re-analyze the reference data.

One of the challenges will be to establish the infrastructure for hosting the reference data, which will require help from the NCBI, Salit said. Also, analyzing the data collaboratively is “not a solved problem,” and the consortium is looking into cloud computing services for that.

The consortium will also develop methods that describe how to use the reference materials to assess the performance of a particular sequencing method, including both experimental protocols and open source software for comparing genotypes. “We could throw this over the fence and tell someone, ‘Here is the genome and here is the variant table,'” Salit said, but, he noted, the consortium would like to help clinical labs use those tools to understand their own performance.
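Once a trusted variant table exists, a lab's self-assessment reduces to comparing its own calls against that "truth" set and reporting standard accuracy metrics. The sketch below shows the basic arithmetic with invented data; it is not the consortium's software, and real comparisons must also handle representation differences between equivalent calls.

```python
# Illustrative sketch: scoring a lab's variant calls against a
# reference ("truth") set, as the consortium envisions labs doing.

def score(truth, called):
    """Return (precision, recall) of a call set against a truth set."""
    tp = len(truth & called)            # true positives
    fp = len(called - truth)            # false positives
    fn = len(truth - called)            # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth  = {("chr1", 1000, "A", "G"), ("chr2", 500, "T", "C"), ("chr3", 42, "G", "A")}
called = {("chr1", 1000, "A", "G"), ("chr2", 500, "T", "C"), ("chr5", 9, "C", "T")}
p, r = score(truth, called)
print(f"precision={p:.2f} recall={r:.2f}")
```

This is the "figures of merit" idea in miniature: rather than throwing the variant table over the fence, the working group aims to package this kind of comparison so labs can interpret the resulting numbers consistently.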

Edge Bio’s Johnson, who is chairing the working group in charge of this effort, is also involved in developing bioinformatic tools to judge the quality of genomes for the Archon Genomics X Prize (CSN 11/2/2011). Salit said that NIST is “leveraging some excellent work coming out of the X Prize” and is collaborating with a member of the X Prize team on the consensus genotype calling project.

By the end of 2013, the consortium wants to have its first “genome in a bottle” and reference data with SNV and maybe indel calls available, which will not yet include all confidence estimates. Another version, to be released in mid-2014, will include further analysis of error rates and uncertainties, as well as additional types of variants, such as structural variation.

Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb’s In Sequence and Clinical Sequencing News. E-mail her here or follow her GenomeWeb Twitter accounts at @InSequence and @ClinSeqNews.

At AACC, NHGRI’s Green Lays out Vision for Genomic Medicine

July 16, 2012

LOS ANGELES – The age of genomic medicine is within “striking distance,” Eric Green, director of the National Human Genome Research Institute, told attendees of the American Association of Clinical Chemistry’s annual meeting here on Sunday.

Speaking at the conference’s opening plenary session, Green discussed NHGRI’s roadmap for moving genomic findings into clinical practice. While this so-called “helix to healthcare” vision may take many years to fully materialize, “I predict absolutely that it’s coming,” he said.

Green noted that rapid advances in DNA sequencing have put genomics on a similar development path as clinical chemistry, which is also a technology-driven field. “If you look over the history of clinical chemistry, whenever there were technology advances, it became incredibly powerful and new opportunities sprouted up left and right,” he said.

Green likened next-gen sequencing to the autoanalyzers that “changed the face of clinical chemistry” by providing a generic platform that enabled a range of applications. In a similar fashion, low-cost sequencing is becoming a “general purpose technology” that can not only read out DNA sequence but can also provide information about RNA, epigenetic modifications, and other associated biology, he said.

The “low-hanging fruit” for genomic medicine is cancer, where molecular profiling is already being used alongside traditional histopathology to provide information on prognosis and to help guide treatment, he said.

Another area where Green said that genomic medicine is already bearing fruit is pharmacogenomics, where genomic data is proving useful in determining which patients will respond to specific drugs.

Nevertheless, while it’s clear that “sequencing is already altering the clinical landscape,” Green urged caution. “We have to manage expectations and realize it’s going to be many years from going from the most basic information about our genome sequence to actually changing medical care in any serious way,” he said.

In particular, he noted that the clinical interpretation of genomic data is still a challenge. Not only are the data volumes formidable, but the functional role of most variants is still unknown, he noted.

This knowledge gap should be addressed over the next several years as NHGRI and other organizations worldwide sequence “hundreds of thousands” of human genomes as part of large-scale research studies.

“We’re increasingly thinking about how to use that data to actually do clinical care, but I want to emphasize that the great majority of this data being generated will and should be part of research studies and not part of primary clinical care quite yet,” Green said.

Source:

http://www.genomeweb.com/sequencing/aacc-nhgris-green-lays-out-vision-genomic-medicine

Startup Aims to Translate Hopkins Team’s Cancer Genomics Expertise into Patient Care

May 16, 2012

Researchers at Johns Hopkins University who helped pioneer cancer genome sequencing have launched a commercial effort intended to translate their experience into clinical care.

Personal Genome Diagnostics, founded in 2010 by Victor Velculescu and Luis Diaz, aims to commercialize a number of cancer genome analysis methods that have been developed at Hopkins over the past several decades. Velculescu, chief scientific officer of PGDx, is director of cancer genetics at the Ludwig Center for Cancer Genetics and Therapeutics at Hopkins; while Diaz, chief medical officer of the company, is director of translational medicine at the Ludwig Center.

Other founders include Ludwig Center Director Bert Vogelstein as well as Hopkins researchers Ken Kinzler, Nick Papadopoulos, and Shibin Zhou. The team has led a number of seminal cancer sequencing projects, including the first effort to apply large-scale sequencing to cancer genomes, one of the first cancer exome sequencing studies, and the discovery of a number of cancer-related genes, including TP53, PIK3CA, APC, IDH1, and IDH2.

Velculescu told Clinical Sequencing News that the 10-person company, headquartered in the Science and Technology Park at Johns Hopkins in Baltimore, is a natural extension of the Hopkins group’s research activities.

Several years ago, “we began receiving requests from other researchers, other physicians, collaborators, and then actually patients, family members, and friends, wanting us to do these whole-exome analyses on cancer samples,” he said. “We realized that doing this in the laboratory wasn’t really the best place to do it, so for that reason we founded Personal Genome Diagnostics.”

The goal of the company, he said, “is to translate this history of our group’s experience of cancer genetics and our understanding of cancer biology, together with the technology that has now become available, and to ultimately perform these analyses for individual patients.”

The fledgling company has reached two commercial milestones in the last several weeks. First, it gained CLIA certification for cancer exome sequencing using the HiSeq 2000. In addition, it secured exclusive licensing rights from Hopkins for a technology called digital karyotyping, developed by Velculescu and colleagues to analyze copy number changes in cancer genomes.

PGDx offers a comprehensive cancer genome analysis service that combines exome sequencing with digital karyotyping, which isolates short sequence tags from specific genomic loci in order to identify chromosomal changes as well as amplifications and deletions.
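The counting logic behind a tag-based copy number approach can be illustrated with a toy example: tags are tallied per genomic locus and compared against a presumed-diploid baseline. This is a hypothetical sketch of the general idea only, not PGDx's licensed digital karyotyping method — the bin names, counts, and thresholds are invented for illustration.

```python
# Hypothetical sketch of tag-count-based copy number calling: tally sequence
# tags per genomic bin, normalize against a diploid baseline (the median bin,
# assuming most of the genome is copy-neutral), and flag outlier bins.
# Thresholds and bin labels are illustrative, not PGDx's actual parameters.
from statistics import median

def call_copy_number(tag_counts, gain=1.5, loss=0.5):
    """tag_counts: {locus: observed tag count}. Returns {locus: call}."""
    baseline = median(tag_counts.values())  # assume most loci are diploid
    calls = {}
    for locus, count in tag_counts.items():
        ratio = count / baseline
        if ratio >= gain:
            calls[locus] = "amplified"
        elif ratio <= loss:
            calls[locus] = "deleted"
        else:
            calls[locus] = "normal"
    return calls

# Invented counts: 8q24 reads as amplified, 9p21 as deleted.
counts = {"8q24": 310, "17q12": 95, "9p21": 20, "3p14": 100, "5q31": 105}
print(call_copy_number(counts))
```

Real tag-based methods must additionally correct for mappability, GC bias, and tumor purity before ratios can be interpreted; the point here is only the core count-and-normalize step.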

The company sequences tumor-normal pairs and promises a turnaround time of six to 10 weeks, though Velculescu said that ongoing improvements in sequencing technology and the team’s analysis methods promise to reduce that time “significantly.” It is currently seeing turnaround times of under a month.

To date, the company has focused solely on the research market. Customers have included pharmaceutical and biotech companies, individual clinicians and researchers, and contract research organizations, while the scale of these projects has ranged from individual patients to thousands of exomes for clinical trials.

While the company performs its own sequencing for smaller projects, it relies on third-party service providers for larger studies.

PGDx specializes in all aspects of cancer genome analyses, but has a particular focus on the front and back end of the workflow, Velculescu said, including “library construction, pathologic review of the samples, dissection of tumor samples to enrich tumor purity, next generation sequencing, identification of tumor-specific alterations, and linking of these data to clinical and biologic information about human cancer.”

The sequencing step in the middle, however, “is really almost becoming a commodity,” he noted. “Although we’ve done it in house, we typically do outsource it and that allows us to scale with the size of these projects.”

He said that PGDx typically works with “a number of very high-quality sequence partners to do that part of it,” but he declined to disclose these partners.

On the front end, PGDx has developed “a variety of techniques that we’ve licensed and optimized from Hopkins that have allowed us to improve extraction of DNA from both frozen tissue and [formalin-fixed, paraffin-embedded] tissue, even at very small quantities,” Diaz said. The team has also developed methods “to maximize our ability to construct libraries, capture, and then perform exomic sequencing with digital karyotyping.”

Once the sequence data is in hand, “we have a pipeline that takes that information and deciphers the changes that are most likely to be related to the cancer and its genetic make-up,” he said. “That’s not trivial. It requires inspection by an experienced cancer geneticist.”

While the firm is working on automating the analysis, “it’s not something that is entirely automatable at this time and therefore cannot be commoditized,” Diaz said.

The firm issues a report for its customers that “provides information not only on the actual sequence changes which are of high quality, but what these changes are likely to do,” Velculescu said, including “information about diagnosis, prognosis, therapeutic targeting [information] or predictive information about the therapy, and clinical trials.”

So far, the company has relied primarily on word of mouth to raise awareness of its offerings. “We’ve literally been swamped with requests from people who just know us,” Velculescu said. “I think one of the major reasons people have been coming to us for either these small or very large contracts is that people are getting this type of NGS data and they don’t know what to do with it — whether it’s a researcher who doesn’t have a lot of experience in cancer or a clinician who hasn’t seen this type of data before.”

While there’s currently “a wealth in the ability to get data, there’s an inadequacy in being able to understand and interpret the data,” he said.

Pricing for the company’s services is on a case-by-case basis, but Diaz estimated that retail costs are currently between $5,000 and $10,000 per tumor-normal pair for research purposes. Clinical cases are more costly because the depth of coverage is deeper and additional analyses are required, as well as a physician interpretation.

A Cautious Approach

While the company’s ultimate goal is to help oncologists use genomic information to inform treatment for their patients, PGDx is “proceeding cautiously” in that direction, Diaz said.

The firm has so far sequenced around 50 tumor-normal pairs for individual patients, but these have been for “informational purposes,” he said, stressing that the company believes the field of cancer genomics is still in the “discovery” phase.

“I think we’re really at the beginning of the genomic revolution in cancer,” Diaz said. “We are partnering with pharma, with researchers, and with certain clinicians to start bringing this forward — not only as a discovery tool but eventually as a clinical application.”

“We do think that rushing into this right now is too soon, but we are building the infrastructure — for example our recent CLIA approval for cancer genome analyses — to do that,” he added.

This cautious approach sets the firm apart from some competitors, including Foundation Medicine, which is about to launch a targeted sequencing test that it is marketing as a diagnostic aid to help physicians tailor therapy for their patients. Diagnostic firm Asuragen is also offering cancer sequencing services based on a targeted approach (CSN 1/12/12), as are a number of academic labs.

Diaz said that PGDx’s comprehensive approach also sets it apart from these groups. “We think there’s a lot of clinically actionable information in the genome … and we don’t want to limit ourselves by just looking at a set of genes and saying that these may or may not have importance.”

While the genes in targeted panels “may have some data surrounding them with regard to prognosis, or in relation to a therapy, that’s really only a small part of the story when it comes to the patient’s cancer,” Diaz said.

“That’s why we would like to remain the company that looks at the entire cancer genome in a comprehensive fashion, because we don’t know enough yet to break it down to a few genes,” he said.

The company’s proprietary use of digital karyotyping to find copy number alterations is another differentiator, Velculescu said, because many cancer-associated genes — such as p16, EGFR, MYC, and HER2/neu — are often altered by copy number changes rather than point mutations.

Ultimately, “we want to develop something that has value for the clinician,” Diaz said. “A clinician currently sees 20 to 30 patients a day and may have only a few minutes to look at a report. If [information from sequencing] doesn’t have immediate high-impact value, it’s going to be very hard to justify its use down the road.”

He added that the company is “thinking very hard about what we can squeeze out of the cancer genome to provide that high-impact clinical value — something that isn’t just going to improve the outcome of patients by a few months or weeks, but actually change the outlook of that patient substantially.”

Source:

http://www.genomeweb.com/sequencing/startup-aims-translate-hopkins-teams-cancer-genomics-expertise-patient-care

 
Bernadette Toner is editorial director for GenomeWeb’s premium content. E-mail her here or follow her GenomeWeb Twitter account at @GenomeWeb.

In Educational Symposium, Illumina to Sequence, Interpret Genomes of 50 Participants for $5K Each

June 27, 2012

This story was originally published June 25.

As part of a company-sponsored symposium this fall to “explore best practices for deploying next-generation sequencing in a clinical setting,” Illumina plans to sequence and analyze the genomes of around 50 participants for $5,000 each, Clinical Sequencing News has learned.

According to Matt Posard, senior vice president and general manager of Illumina’s translational and consumer genomics business, the event is part of a “multi-step process to engage experts in the field around whole-genome sequencing, and to support the conversation.”

The “Understand your Genome” symposium will take place Oct. 22-23 at Illumina’s headquarters in San Diego.

The company sent out invitations to the event over the last few months, targeting individuals with a professional interest in whole-genome sequencing, including medical geneticists, pathologists, academics, and industry or business leaders, Posard told CSN this week. To provide potential participants with more information about the symposium, Illumina also hosted a webinar this month that included a Q&A session.

Registration closed June 14 and has exceeded capacity — initially 50 spots, a number that may increase slightly, Posard said. Everyone else is currently waitlisted, and Illumina plans to host additional symposia next year.

“There has been quite a bit of unanticipated enthusiasm around this from people who are speaking at the event or planning to attend the event,” including postings on blogs and listservs, Posard said.

As part of their $5,000 registration fee, which does not include travel and lodging, participants will have their whole genome sequenced in Illumina’s CLIA-certified and CAP-accredited lab prior to the event. It is also possible to participate without having one’s genome sequenced, but only as a companion to a full registrant, according to Illumina’s website. The company prefers that participants submit their own sample, but they may submit a patient sample instead.

The general procedure is very similar to Illumina’s Individual Genome Sequencing, or IGS, service in that it requires a prescription from a physician, who also receives the results to review them with the participant. However, participants pay less than they would through IGS, where a single human genome currently costs $9,500.

Participants will also have a one-on-one session with an Illumina geneticist prior to being sequenced, and they can choose to not receive certain medical information as part of the genome interpretation.

Doctors will receive the results and review them with the participants sometime before the event. “There will be no surprises for these participants when they come to the symposium,” Posard said.

Results will include not only a list of variants but also a clinical interpretation of the data by Illumina geneticists. This is currently not part of IGS, which requires an interpretation of the data by a third party, but Illumina plans to start offering interpretation services for IGS before the symposium, Posard said.

“Our stated intent has always been that we want to fill in all of the pieces that the physicians require, so we are building a human resource, as well as an informatics team, to provide that clinical interpretation, and we are using that apparatus for the ‘Understand your Genome’ event,” Posard said.

The interpretation will include “a specified subset of genes relating to Mendelian conditions, drug response, and complex disease risks,” according to the website, which notes that “as with any clinical test, the patient and physician must discuss any medically significant results.”

The first day of the symposium will feature presentations on clinical, laboratory, ethical, legal, and social issues around whole-genome sequencing by experts in the field. Speakers include Eric Topol from the Scripps Translational Science Institute, Matthew Ferber from the Mayo Clinic, Robert Green from Brigham and Women’s Hospital and Harvard Medical School, Heidi Rehm from the Harvard Partners Center for Genetics and Genomics, Gregory Tsongalis from the Dartmouth Hitchcock Medical Center, Robert Best from the University of South Carolina School of Medicine, Kenneth Chahine from Ancestry.com, as well as Illumina’s CEO Jay Flatley and chief scientist David Bentley.

On the second day, participants will receive their genome data on an iPad and learn how to analyze their results using the iPad MyGenome application that Illumina launched in April.

The planned symposium stirred some controversy at the European Society of Human Genetics annual meeting in Nuremberg, Germany, this week. During a presentation in a session on the diagnostic use of next-generation sequencing, Gert Matthijs, head of the Laboratory for Molecular Diagnostics at the Center for Human Genetics in Leuven, Belgium, said he was upset because the invitation to Illumina’s event apparently not only reached selected individuals but also patient organizations.

“To me, personally, [the event] tells that some people are really exploring the limits of business, and business models, to get us to genome sequencing,” he said.

“We have to be very careful when we put next-generation sequencing direct to the consumer, or to patient testing, but it’s a free world,” he added later.

Posard said that Illumina welcomes questions about and criticism of the symposium. “This is another example of us being extremely responsible and transparent in how we’re handling this novel application that everybody acknowledges is the wave of the future,” he said. “We want to responsibly introduce that wave, and I believe we’re doing so, through such things as the ‘Understand your Genome’ event, but not limited to this event.”


Federal Court Rules Helicos Patent Invalid; Company Reaches Payment Agreement with Lenders

August 30, 2012

NEW YORK (GenomeWeb News) – A federal court has ruled in Illumina’s favor in a lawsuit filed by Helicos BioSciences that had alleged patent infringement.

In a decision dated Aug. 28, District Judge Sue Robinson of the US District Court for the District of Delaware granted Illumina’s motion for summary judgment declaring US Patent No 7,593,109 held by Helicos invalid for “lack of written description.”

Titled “Apparatus and methods for analyzing samples,” the patent relates to an apparatus, systems, and methods for biological sample analysis.

The ‘109 patent was the last of three patents that Helicos accused Illumina of infringing, following voluntary dismissal by Helicos earlier this year with prejudice of the other two patents. In October 2010 Helicos included Illumina and Life Technologies in a lawsuit that originally accused Pacific Biosciences of patent infringement.

Helicos dropped its lawsuit against Life Tech and settled with PacBio earlier this year, leaving Illumina as the sole defendant.

In seeking a motion for summary judgment, Illumina argued that the ‘109 patent does not disclose “a focusing light source operating with any one of the analytical light sources to focus said optical instrument on the sample.” Illumina’s expert witness further said that the patent “does not describe how focusing light source works” nor does it provide an illustration of such a system, according to court documents.

In handing down her decision, Robinson said, “In sum, and based on the record created by the parties, the court concludes that Illumina has demonstrated, by clear and convincing evidence, that the written description requirement has not been met.”

In a statement, Illumina President and CEO Jay Flatley said he was pleased with the court’s decision.

“The court’s ruling on the ‘109 patent, and Helicos’ voluntary dismissal of the other patents in the suit, vindicates our position that we do not infringe any valid Helicos patent,” he said. “While we respect valid and enforceable intellectual property rights of others, Illumina will continue to vigorously defend against unfounded claims of infringement.”

After the close of the market Wednesday, Helicos also disclosed that it had reached an agreement with lenders to waive defaults arising from Helicos’ failure to pay certain risk premium payments in connection with prior liquidity transactions. The transactions are part of a risk premium payment agreement Helicos entered into with funds affiliated with Atlas Venture and Flagship Ventures in November 2010.

The lenders have agreed to defer the risk premium payments “until [10] business days after receipt of a written notice from the lenders demanding the payment of such risk premium payments,” Helicos said in a document filed with the US Securities and Exchange Commission.

The Cambridge, Mass.-based firm also disclosed that Noubar Afeyan and Peter Barrett have resigned from its board.

Helicos said two weeks ago that its second-quarter revenues dipped 29 percent year over year to $577,000. In an SEC document, it also warned that existing funds were not sufficient to support its operations and related litigation expenses through the planned September trial date for its dispute with Illumina.

In Thursday trade on the OTC market, shares of Helicos closed down 20 percent at $0.04.

Source:

http://www.genomeweb.com/sequencing/federal-court-rules-helicos-patent-invalid-company-reaches-payment-agreement-len

State of the Science: Genomics and Cancer Research

April 2012
Basic research allows for a better understanding of cancer and, eventually, improved patient outcomes. Zhu Chen, China’s minister of health, and Shanghai Jiao Tong University’s Zhen-Yi Wang received the seventh annual Szent-Györgyi prize from the National Foundation for Cancer Research for their work on a treatment for acute promyelocytic leukemia. Genome Technology‘s Ciara Curtin spoke to Chen, Wang, and past prize winners about the state of cancer research.

Genome Technology: Doctors Wang and Chen, can you tell me a bit about the work you did that led to you receiving the Szent-Györgyi prize?

Zhen-Yi Wang: I am a physician. I am working in the clinic, so I have to serve the patients. … I know the genes very superficially, not very deeply, but the question raised to me is: There are so many genes, but how are [we] to judge what is the most important?

Zhu Chen: The work that is recognized by this year’s Szent-Györgyi Prize concerns … acute promyelocytic leukemia. Over the past few decades, we have been involved in developing new treatment strategies against this disease.

You have two [therapies — all-trans retinoic acid and arsenic trioxide] — that target the same protein but with slightly different mechanisms, so we call this synergistic targeting. When the two drugs combine together for the induction therapy, then we see very nice response in terms of the complete remission rate. But more importantly, we see that this synergistic targeting, together with the effect of the chemotherapy, can achieve a very high five-year disease-free survival — as high as 90 percent.

But we were more interested in the functional aspects of the genome, to understand what each gene does and also to particularly understand the network behavior of the genes.

GT: There are a number of consortiums looking at the genome sequences of many cancer types. What do you hope to see from such studies?

Webster Cavenee: This is a way that tumors are being sequenced in a rational kind of way. It would have been done anyway by labs individually, which would have taken a lot more money and taken a lot longer, too. The human genome sequence, everybody said, ‘Why are you going to do that?’ … But that now turns out to be a tremendous resource. … From the point of view of The Cancer Genome Atlas, having the catalog of all of the kinds of mutations which are present in tumors can be very useful because you can see patterns. For example, in the glioblastoma cancer genome project, they found an unexpected association of some mutations and combinations of mutations with drug sensitivity. Nobody would have thought that.

The problem, of course, is that when you are sequencing all these tumors, it’s a very static thing. You get one point in time and you sequence whatever comes out of this big lump of tissue. That big lump is made up of a lot of different kinds of pieces, so when you see a mutation, you can’t know where it came from and you don’t know whether it actually does anything. That then leads into what’s going to be the functionalizing of the genome. Because in the absence of knowing that it has a function, it’s not going to be of very much use to develop drugs or anything like that. And that’s a much bigger exercise because that involves a lot of experiments, not just stuffing stuff into a sequencer.

Peter Vogt: [The genome] has to be used primarily to determine function. Without function, there’s not much you can do with these mutations, because the distinction between a driver mutation and a passenger mutation can’t be made just on the basis of sequence.

Carlo Croce: After that, you have to be able to validate all of the genetic operations in model systems where you can reproduce the same changes and see whether there are the same consequences. Otherwise, without validation, to develop therapy doesn’t make much sense because maybe those so-called driver mutations will turn out to be something else.

GT: Will sequencing of patients’ tumors come to the clinic?

CC: It is inevitable. Naturally, there are a lot of bottlenecks. To do the sequencing is the, quote, trivial part and it is going to cost less and less. But then interpreting the data might be a little bit more cumbersome.

Sujuan Ba: Dr. Chen, there is an e-health card in China right now. Do you think some day gene sequencing will be stored in that card?

ZC: We are developing a digital healthcare in China. We started with electronic health records and now by providing the e-health card to the people, that will facilitate the individualized health management and also the supervision of our healthcare system. In terms of the use of genetic information for clinical purposes, as Professor Croce said, it’s going to happen.

GT: What do you think are the major questions in cancer research that still need to be addressed?

PV: There are increasingly two schools of thought on cancer. One is that it is all an engineering problem: We have all the information we need, we just need to engineer the right drugs. The other school says it’s still a basic knowledge problem. I think more and more people think it’s just an engineering problem — give us the money and we’ll do it all. A lot of things can be done, but we still don’t have complete knowledge.

Roundtable Participants
Sujuan Ba, National Foundation for Cancer Research
Webster Cavenee, University of California, San Diego
Zhu Chen, Ministry of Health, China
Carlo Croce, Ohio State University
Peter Vogt, Scripps Research Institute
Zhen-Yi Wang, Shanghai Jiao Tong University



Reporter: Aviva Lev-Ari, PhD, RN

MRI cortical thickness biomarker predicts AD-like CSF and cognitive decline in normal adults

Bradford C. Dickerson, MD and David A. Wolk, MD On behalf of the Alzheimer’s Disease Neuroimaging Initiative

Author Affiliations

From the Frontotemporal Dementia Unit, Department of Neurology, Massachusetts Alzheimer’s Disease Research Center, and Athinoula A. Martinos Center for Biomedical Imaging (B.C.D.), Massachusetts General Hospital and Harvard Medical School, Boston; and Department of Neurology, Alzheimer’s Disease Core Center, and Penn Memory Center (D.A.W.), University of Pennsylvania, Philadelphia.

Correspondence & reprint requests to Dr. Dickerson: bradd@nmr.mgh.harvard.edu

ABSTRACT

Objective: New preclinical Alzheimer disease (AD) diagnostic criteria have been developed using biomarkers in cognitively normal (CN) adults. We implemented these criteria using an MRI biomarker previously associated with AD dementia, testing the hypothesis that individuals at high risk for preclinical AD would be at elevated risk for cognitive decline.

Methods: The Alzheimer’s Disease Neuroimaging Initiative database was interrogated for CN individuals. MRI data were processed using a published set of a priori regions of interest to derive a single measure known as the AD signature (ADsig). Each individual was classified as ADsig-low (≥1 SD below the mean: high risk for preclinical AD), ADsig-average (within 1 SD of mean), or ADsig-high (≥1 SD above mean). A 3-year cognitive decline outcome was defined a priori using change in Clinical Dementia Rating sum of boxes and selected neuropsychological measures.
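The a priori classification rule described in the Methods can be sketched directly as a z-score cutoff. This is a minimal illustration with invented numbers — it does not reproduce the study's actual cortical-thickness measurement pipeline or the published ADsig regions of interest.

```python
# Minimal sketch of the ADsig risk-group assignment described above:
# classify a cognitively normal individual by how many standard deviations
# their ADsig measure falls from the group mean. The mean, SD, and thickness
# values below are illustrative, not the study's actual numbers.

def classify_adsig(value, mean, sd):
    """Assign an individual to an ADsig group by z-score."""
    z = (value - mean) / sd
    if z <= -1.0:
        return "ADsig-low (high risk for preclinical AD)"
    if z >= 1.0:
        return "ADsig-high"
    return "ADsig-average"

# Example: cortical thickness ≥1 SD below the hypothetical group mean.
print(classify_adsig(2.3, mean=2.6, sd=0.15))  # ADsig-low (high risk ...)
```

The Results then compare 3-year cognitive-decline rates across these three groups, which is why the cutoff must be fixed in advance rather than tuned to the outcome data.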

Results: Individuals at high risk for preclinical AD were more likely to experience cognitive decline, which developed in 21% compared with 7% of ADsig-average and 0% of ADsig-high groups (p = 0.03). Logistic regression demonstrated that every 1 SD of cortical thinning was associated with a nearly tripled risk of cognitive decline (p = 0.02). Of those for whom baseline CSF data were available, 60% of the high risk for preclinical AD group had CSF characteristics consistent with AD while 36% of the ADsig-average and 19% of the ADsig-high groups had such CSF characteristics (p = 0.1).

Conclusions: This approach to the detection of individuals at high risk for preclinical AD—identified in single CN individuals using this quantitative ADsig MRI biomarker—may provide investigators with a population enriched for AD pathobiology and with a relatively high likelihood of imminent cognitive decline consistent with prodromal AD.

 

Copyright © 2011 by AAN Enterprises, Inc.

http://www.neurology.org/content/early/2011/12/21/WNL.0b013e31823efc6c.abstract

 

 


Reporter: Prabodh Kandala, PhD.

Zebrafish, popular as aquarium fish, now have an important place in research labs as a model organism for studying human diseases.

At the 2012 International Zebrafish Development Conference, held June 20-24 in Madison, Wisconsin, numerous presentations highlighted the utility of the zebrafish for examining the basic biological mechanisms underlying human disorders and identifying potential treatment approaches for an impressive array of organ and systemic diseases.

Inflammatory Bowel Disease

Inflammatory bowel disease (IBD), while rarely fatal, can have a substantial negative impact on an individual’s quality of life due to abdominal pain, diarrhea, vomiting, bleeding, and severe cramps. The causes of this chronic inflammatory disorder are largely unknown and existing treatments, usually anti-inflammatory drugs, are often not effective. In addition, IBD is often associated with increased risk of developing intestinal cancer.

Researchers from the University of Pittsburgh are using zebrafish to study the biological mechanisms that lead to intestinal inflammation, as often seen in IBD, providing additional understanding that may allow development of better therapies. Prakash Thakur, a research associate working with Nathan Bahary, M.D., Ph.D., described a mutant zebrafish strain that shows many pathological characteristics similar to IBD, including inflammation, abnormal villous architecture, disorganized epithelial cells, increased bacterial growth, and high numbers of dying cells in the intestine. “Most of the hallmark features of the disease are seen in this mutant. We are utilizing this fish as a tool to unravel fundamental mechanisms of intestinal pathologies that may contribute to intestinal inflammatory disorders,” Mr. Thakur said.

The fish have a genetic mutation that disrupts de novo synthesis of an important signaling molecule called phosphatidylinositol (PI). The lack of de novo PI synthesis, Mr. Thakur and his colleagues found, leads to chronic levels of cellular stress, particularly endoplasmic reticulum stress, and, ultimately, inflammation. Drugs or other interventions targeting the cellular stress response pathway, rather than just inflammation, helped restore a healthy intestinal structure and increase cell survival in the fish intestine, suggesting this mechanism as a potential therapeutic target for patients with inflammatory disorders, including IBD.

Doxorubicin-Induced Heart Failure

Doxorubicin is a potent chemotherapy drug used to treat many types of cancer, including leukemia, lymphoma, carcinoma, soft tissue sarcoma, and bladder, breast, lung, stomach and ovarian cancers. Unfortunately, drug-induced cardiomyopathy is a common side effect and can lead to heart failure in cancer patients, not only during treatment, but months or years later.

“We hope to identify some drug which only blocks the side effect of doxorubicin but preserves the therapeutic effect,” said Yan Liu, Ph.D., a postdoctoral researcher working in Dr. Randall Peterson’s lab at the Massachusetts General Hospital.

Dr. Liu developed a zebrafish model of doxorubicin-induced cardiomyopathy. The fish experience heart failure within two days of treatment with symptoms similar to those seen in humans, including fewer heart muscle cells, ventricular collapse, and ineffective heartbeats.

The researchers used the model to screen through thousands of potential drug compounds and identified two — visnagin and diphenylurea — that both improved cardiac function and reduced doxorubicin-induced cell death in the heart. Importantly, both compounds specifically protected heart tissue, but not tumor cells, from the toxic effects of doxorubicin. Both seem to act through the suppression of a particular signaling pathway, the c-Jun N-terminal kinase pathway, in the heart cells but not tumor cells.

Dr. Liu also reported promising preliminary results with mice showing reduced cell death and improved cardiac function, indicating that these compounds may also be active in mammals and giving hope for therapies that specifically treat doxorubicin’s side effects without negating its anti-tumor activity.

Spinal Muscular Atrophy

Spinal muscular atrophy (SMA) is a group of progressive neurodegenerative diseases that affect the nerves in the spinal cord that control muscles, leading to weakness, movement difficulties, poor posture, and trouble breathing and eating.

SMA is linked to mutations in the survival motor neuron gene, SMN1. Though mouse studies have reported immature and ineffective synaptic connections between motor neurons and muscles, little is known about the molecular mechanisms leading to those problems or how they might be fixed.

Graduate student Kelvin See, working with Associate Professor Christoph Winkler, Ph.D., at the National University of Singapore, used zebrafish with activity-sensitive fluorescence to provide a visual readout of motor neuron activation. They confirmed that low SMN1 levels are associated with low neuronal influx of calcium ions, which play a critical role in triggering neurotransmitter release and thus stimulating the muscles. With their zebrafish model, Mr. See and Dr. Winkler also identified another gene with a similar effect, neurexin, which is important in synaptic structure but had never been implicated in SMA.

In a surprise discovery, the researchers found they could use the same sensor to see activation of a neighboring cell type called Schwann cells. “This gives us the unique opportunity to look at the role of SMN1 not just in motor neurons but also in the surrounding tissue,” said Mr. See.

They saw reduced excitability in Schwann cells also, suggesting that a full understanding of SMA will require a broader view of the affected cell populations. Their results provide several new insights into the fundamental processes disrupted in SMA.

Acute T-cell Lymphoblastic Leukemia and Lymphoma (T-ALL/T-LBL)

Human acute T-cell lymphoblastic leukemias (ALL) and lymphomas (LBL) have high relapse rates in pediatric patients and high mortality rates in adults. Hui Feng, M.D., Ph.D., currently at the Pharmacology Department and Center for Cancer Research at Boston University School of Medicine, is using a zebrafish model of leukemia to search for promising targets for new molecular treatments for these diseases.

To date, studies have identified several biological pathways involved in ALL and LBL, all with a known oncogene in common called c-Myc. However, Myc is involved in regulating more than 15 percent of all genes, which makes it very hard to study.

“Because this is a huge list of downstream targets, it is very challenging to predict which genes in the pathway to target to treat Myc-related cancers,” said Dr. Feng.

In work performed in collaboration with Thomas Look, M.D., at the Dana-Farber Cancer Institute, Dr. Feng is combining the power of zebrafish genetics with human clinical studies to home in on potential genes of interest.

Using a fish strain that reliably develops T-cell lymphoma by two months of age, they identified a novel gene called DLST that is involved in metabolism and energy production in cells. Evidence from human cancer cell lines and patients indicates that abnormally high levels of the protein may be involved in the human disease as well.

Reducing DLST activity in the fish significantly delayed tumor progression and growth, suggesting it is a promising target for developing new therapies for ALL and LBL.

Ref:

http://www.sciencedaily.com/releases/2012/07/120706184348.htm

Read Full Post »

New England Journal of Medicine: An Interview with Allan M. Brandt, Ph.D.

N Engl J Med 2012; 366:1-7 January 5, 2012

http://www.nejm.org/doi/full/10.1056/NEJMp1112812

Reporter: Aviva Lev-Ari, PhD, RN

With this issue, the New England Journal of Medicine marks its 200th anniversary. In January 1812, as the first issue came off the handset letterpress, few of its founders could have predicted such continuity and success. (See Figure 1, “Illustration from ‘Cases of Organic Diseases of the Heart and Lungs,’ by John C. Warren,” from the April 1, 1812, issue of the Journal.) John Collins Warren, the renowned Boston surgeon, his colleague James Jackson, a founder of Massachusetts General Hospital, and the small group of distinguished colleagues who joined them in starting the New England Journal of Medicine and Surgery, and the Collateral Branches of Science expressed modest and largely local aspirations for the enterprise. Boston, a growing urban center, and the wider New England environs had no medical journal of their own, although much medical knowledge and practice was considered region-specific. Although the name and format of the Journal would vary until 1928, 7 years after its ownership passed to the Massachusetts Medical Society, it remains the longest continuously published medical periodical in the world. The prospectus for the Journal, a call for papers issued in late 1811, explained the goals of Warren and his collaborators: “The editors have been encouraged to attempt this publication by the opinion, that a taste for medical literature has greatly increased in New England within a few years past. New methods of practice, good old ones which are not sufficiently known, and occasional investigations of the modes in common use, when thus distributed among our medical brethren in the country, will promote a disposition for inquiry and reflection, which cannot fail to produce the most happy results.”1

At a time of intense debate and controversy regarding the causes of disease, the nature of therapeutics, and the basis of professional authority, the young Journal worked to steer a middle course. This was certainly advisable from a commercial point of view, since it could easily alienate diverse medical readers by endorsing a particular therapeutic system or theory. But this approach also established the ecumenical temper of the Journal, which based its early publications on a commitment to empirical observation and an outlook skeptical of conventional medical wisdoms. As the editors explained in 1837, “It has been a point of ambition with us . . . to make these pages the vehicle of useful intelligence, rather than the field of warfare. . . . The Journal is to all intents and purposes, designed to be a record of medical and surgical facts. It is the medium through which the profession may interchange sentiments and publish the results of their experience” (see Historical Journal Articles Cited).

THE ENDURING PROBLEM OF DISEASE

The observation and investigation of disease is perhaps the most salient consistent feature of the Journal. From the meticulous description of angina pectoris in the first issue to the early descriptions of AIDS in the early 1980s, there has been an ongoing recognition that therapeutic approaches must await the sharp articulation of symptoms. The first decades of the Journal‘s history reflected the intensive concern with the epidemics affecting New England and the new nation, and it was not unusual during the early years for authors to direct attention to the environment as a critical variable in the production of disease. John Gorham, an editor writing in 1828, offered a “Medical Report of the Weather and Prevalent diseases for the last Three months.” Such articles may appear both quaint and humorous from our contemporary scientific perch, but they reveal a serious commitment to understanding more fully the vagaries of epidemic disease that could devastate town and country in short order. Furthermore, they offer a complex notion of causality that characterized much 19th-century medicine, in which disease was seen as the result of interactions of the patient’s individual “constitution” with an ever-changing and often dangerous environment.2 By the late 20th century, many observers would renew concerns voiced more than a century earlier about the environment’s relationship to disease.

DOCUMENTING THERAPEUTIC INNOVATION

The Journal provides a powerful record of the course taken by medical science and its applications over a 200-year period. It quickly became a conduit for reporting new investigations and findings and for summarizing and disseminating evolving medical knowledge across the widest range of practice. After issuing favorable reports on bloodletting, herbal treatments, and other “heroic” practices of the early 19th century, the Journal began to reflect a growing skepticism toward such approaches. Authors increasingly pointed to the benefits of the healing powers of nature — vis medicatrix naturae — as physicians came to recognize some of the iatrogenic effects of their interventions that had previously been difficult to differentiate from the course of serious disease.3 Therapeutics based on ancient notions of humoral excess and depletion gave way to a renewed emphasis on empirical observation and experiment. The first demonstration of surgical anesthesia, conducted at Massachusetts General Hospital in 1846 in an amphitheater soon to be renamed the “Ether Dome,” was first reported in the Journal (Figure 2, “First Operation under Ether,” 1846, with related Journal report). Others quickly began using ether in their practices. One surgeon wrote in the Journal, “I performed the amputation of an arm, the second under the use of ether, while the patient was dreaming of her harvest labors in Ireland, and felt grating but not painful sensations, `as if a reaping-hook was in her arm’” (1850).

EDUCATION AND THE DISSEMINATION OF MEDICAL KNOWLEDGE

From the beginning, the Journal has critically covered essential debates about the character and quality of medical education. The editors considered one of their primary goals to be educating the profession, so assessment of medical school programs was in harmony with their mission; after all, these schools produced their readers. In the late 19th century, the Journal frequently noted the great inconsistencies in educational standards and quality. A decade before the publication of the Flexner reforms, prominent Boston physician Henry Bowditch anticipated many key aspects of the report when he called for linking medical education to universities, lengthening the course of study, and demanding deeper preparation in the sciences and wider domains of knowledge (1900). He argued for active learning to replace didactics, a theme that would echo through the debates about medical education. As late as 1900, when Bowditch proposed his reforms in the Journal, less than half the students at Harvard Medical School had completed a college education. After the publication of the Flexner Report in 1910 and the massive changes that followed, the Journal applauded the consolidation of medical education on a new scientific foundation.

TOO MUCH TO KNOW

With the radical expansion and shifting of the scientific basis of medicine at the turn of the 20th century, the Journal recorded growing interest in and concern about specialization. From a largely undifferentiated notion of medical training and expertise, many new and specific divisions of the medical profession developed.6 Whereas the Journal came to view specialization as the inevitable result of exploding medical knowledge, the creation of medical “specialism” was viewed with considerable skepticism and lamentation, if not outright hostility. Much ink was spilled in attempts to determine the relationship of general knowledge and practice to increasingly specific (and limited) areas of expertise. How would the “whole patient” be treated when specialties had divided the body into organ systems and medicine into categories of disease and authority over various technologies and techniques?

THE PERMEABLE BOUNDARIES OF SCIENCE AND MEDICINE

Despite the Journal‘s deep commitment to empirical reasoning and scientific rationality, cultural and political beliefs and values are ever apparent in its pages. In some instances, professional prerogatives and social assumptions are exposed. For example, when the introduction of women students at Harvard Medical School was debated in 1878, the Journal expressed concern: “It would . . . be impossible to avoid an indiscriminate mingling of the sexes in the dissecting or autopsy rooms, and in the amphitheatres, to witness exercises which justly have hitherto been thought of a character to be witnessed by one sex alone.” Harvard would ultimately admit women in 1945, when the war caused a shortage of male candidates. In the 1950s, the Journal expressed regret that some women physicians with children “have found it impossible to carry on their practices” (1954).

REFLECTIONS ON THE JOURNAL AT 200

While the Journal embraced new science and the critical apparatus of peer review, it rejected a narrow notion of specialism, continuing to cover the widest range of contributions to medical knowledge. In an increasingly atomized medical world, the commitment to publish on cross-cutting educational, professional, ethical, and policy issues pulled together diverse readers, including physicians and other health care providers, public health experts, and policymakers, around issues that were often beyond their immediate expertise. The radical growth of teaching hospitals, federal funding for basic science and clinical research, and academic medical centers (all developments reflected in the Journal) have been crucially linked to the Journal‘s growth, stability, and success.

During the Journal’s first 200 years of publication, medicine and health care moved from the social periphery to become dominant aspects of our science, culture, and economy. The Journal unquestionably owes its success and stability to this monumental shift in the status, authority, and impact of biomedicine. But the Journal has also played a critical role in these developments. By combining a commitment to publishing papers of scrupulous scientific merit across wide-ranging domains, with a recognition of the central questions and values uniting the profession, the Journal has remained true to its founders’ vision. It has recognized that advances in medical science can finally be assessed only in the context of delivery, care, and outcome. The Journal reflects today, as at its inception, a view that medical science and its applications are fundamentally tied to patient care and public health. It therefore continues to draw a range of readers wider than Warren could have imagined. Today, the ability to disseminate publications so widely through digital technologies promises to bring innovations in medical knowledge to a new set of global constituents. The first hundred issues of Warren’s journal were, of course, distributed on horseback.

HISTORICAL JOURNAL ARTICLES CITED.

New England Journal of Medicine and Surgery, and the Collateral Branches of Science

1812. Warren J. Remarks on Angina Pectoris. 1:1-11.

The Boston Medical and Surgical Journal

1828. Gorham J. Medical report of the weather and prevalent diseases for the last three months. 1:10-12.

1832. Editorials and Medical Intelligence. 6:401-2.

1837. Editorials and Medical Intelligence. 16:16-17.

1846. Bigelow HJ. Insensibility during surgical operations produced by inhalation. 35:309-17.

1850. Peirson AL. Anæsthetic agents. 42:229-32.

1871. Seaverns J. Recent advances in medicine and their influence on therapeutics. 85:113-20.

1878. Reports of Meetings. Female medical students at Harvard. 98:786-7.

1891. Ernst HC. Records for cases of tuberculosis treated with Koch’s parataloid. 124:75.

1900. Bowditch HP. The medical school of the future. 142:445-53.

1919. Editorial. Science and medical teaching. 180:108-9.

1923. Phippen WG. The relation of the specialist to the general practitioner. 189:204-6.

1924. Specialism versus Competence. 190:475-6.

1926. Editorial. The teaching of medicine. 195:1124-5.

1928. Appel KE. Medical education: the retrospect of a recent graduate. 197:1265-7.

The New England Journal of Medicine

1928. Lombard HL, Doering CR. Cancer studies in Massachusetts: habits, characteristics and environment of individuals with and without cancer. 198:481-7.

1928. Editorial. Sterilization of defectives. 199:1225-6.

1934. Editorial. Sterilization and its possible accomplishments. 211:379-80.

1935. Henderson LJ. Physician and patient as a social system. 212:819-23.

1939. Mallory TB. Richard Clarke Cabot and the clinicopathologic conference. 220:880.

1948. The Case Records of the Massachusetts General Hospital. 239:690.

1949. Alexander L. Medical science under dictatorship. 241:39-47.

1954. Editorial. Practice of medicine by married women. 250:486.

1966. Beecher HK. Ethics and clinical research. 274:1354-60.

1970. Swan HJC, Ganz W, Forrester J, et al. Catheterization of the heart in man with use of a flow-directed balloon-tipped catheter. 283:447-51.

1981. Gottlieb MS, Schroff R, Schanker HM, et al. Pneumocystis carinii pneumonia and mucosal candidiasis in previously healthy homosexual men. 305:1425-31.

1981. Masur H, Michelis MA, Greene JB, et al. An outbreak of community-acquired Pneumocystis carinii pneumonia. 305:1431-8.

1981. Siegal FP, Lopez C, Hammer GS, et al. Severe acquired immunodeficiency in male homosexuals, manifested by chronic perianal ulcerative herpes simplex lesions. 305:1439-44.

Special Anniversary Articles

We are publishing a series of engaging Review and Perspective articles from established authors who are preeminent in their fields. Each article explores a story of progress in medicine over the past 200 years. These articles will appear every other week during 2012 and be collected here. Check the News & Events section of this site for announcements about upcoming articles.
http://nejm200.nejm.org/explore/special-anniversary-articles/

Read Full Post »

Reporter: Prabodh Kandala, PhD

Word Cloud By Danielle Smolyar

A study from Massachusetts General Hospital (MGH) researchers suggests that specific populations of tumor cells have different roles in the process by which tumors make new copies of themselves and grow. In their report in the May 15 issue of Cancer Cell, researchers identify a tumor-propagating cell required for the growth of a pediatric muscle tumor in a zebrafish model and also show that another, more-differentiated tumor cell must first travel to sites of new tumor growth to prepare an environment that supports metastatic growth.

“Most investigators have thought that tumor-propagating cells — what are sometimes called cancer stem cells — must be the first colonizing cells that travel from the primary tumor to start the process of local invasion and metastasis, but in this model, this is simply not the case,” says David Langenau, PhD, of the MGH Department of Pathology and Center for Cancer Research, who led the study. “Instead, the colonizing cells lack the ability to divide and instead prime newly infiltrated regions for the eventual recruitment of slow-moving cancer stem cells. It will be important to test how broadly this phenomenon is found in a diversity of animal and human cancers.”

Langenau’s team has long been using zebrafish to study rhabdomyosarcoma (RMS), an aggressive pediatric cancer. In embryonic zebrafish, RMS can develop within 10 days, and since the tiny fish are transparent at that stage, fluorescent markers attached to particular cellular proteins can easily be imaged. The current study used these properties to monitor how specific populations of tumor cells develop and their role in initiating new tumor growth.

Previous research from the MGH team had discovered that RMS cells expressing marker proteins also seen on muscle progenitor cells had significantly more tumor-propagating potential than did other tumor cells. Fluorescently labeling proteins associated with different stages of cellular differentiation revealed distinct populations of RMS cells in the zebrafish model. Cells expressing the progenitor cell marker myf5 were labeled green, and those expressing myogenin, a marker of mature muscle cells, were labeled red.

In a series of experiments, the research team confirmed that myf5-expressing RMS cells had powerful tumor-propagating potential, but the ability to visualize how tumor cells move in living fish produced a surprising observation. While myf5-expressing cells largely remained within the primary tumor itself, myogenin-expressing RMS cells easily moved out from the tumor, entering the vascular system and passing through usually impenetrable layers of collagen. Only after the more-differentiated but non-proliferative myogenin-expressing cells had colonized an area did the myf5-expressing tumor-propagating cells appear and start the growth of a new tumor. Imaging the labeled tumor cells also revealed that different cellular populations tended to cluster in different areas of later-stage tumors.

“Our direct in-vivo imaging studies are the first to suggest such diverse cellular functions in solid tumors, based on differentiation and the propensity for self-renewal,” says Myron Ignatius, PhD, of MGH Pathology and Center for Cancer Research, the study’s first author. “I think we will find that this kind of division of labor is a common theme in cancer, especially given that the vast majority of cells within a tumor are not tumor-propagating cells. We suspect there will be molecularly defined populations that make niches for tumor-propagating cells, secrete factors to recruit vasculature and create boundaries to suppress immune cell invasion.”

Langenau adds, “Division of labor is a new and emerging concept in cancer research that we hope will lead to new targets for rationally designed therapies. In rhabdomyosarcoma it will be important to target both the tumor-propagating cells and the highly migratory colonizing cells for destruction — a major focus of ongoing studies in our group.” Langenau is an assistant professor of Genetics at Harvard Medical School and a principal faculty member at the Harvard Stem Cell Institute.

Additional co-authors of the Cancer Cell article are Eleanor Chen, Adam Fuller, Ines Tenente, Rayn Clagg, Sali Liu, and Jessica Blackburn, MGH Pathology and Center for Cancer Research; Andrew Rosenberg and Petur Neilsen, MGH Pathology; Natalie Elpek and Thorsten Mempel, MGH Center for Immunology and Inflammatory Diseases; and Corinne Linardic, Duke University Medical Center. The study was supported by grants from the National Institutes of Health, the Alex’s Lemonade Stand Foundation, the Sarcoma Foundation of America, the American Cancer Society, and the Harvard Stem Cell Institute.

http://www.sciencedaily.com/releases/2012/05/120515131756.htm

Read Full Post »

« Newer Posts