
Archive for the ‘CANCER BIOLOGY & Innovations in Cancer Therapy’ Category

Author and Reporter: Ritu Saxena, Ph.D.  


Word Cloud By Danielle Smolyar

Mitochondria are important cell organelles associated with several key cellular functions, such as energy production, anabolism, calcium homeostasis and programmed cell death; any abnormality in mitochondria can therefore alter normal cellular function.

A role for mitochondria in cancer has long been implicated. A post published on September 1, 2012 (http://pharmaceuticalintelligence.com/2012/09/01/mitochondria-and-cancer-an-overview/) presents a brief overview of the mechanisms by which mitochondrial defects could be associated with cancer. Studies on various types of cancer have tried to determine the mtDNA mutations and the mechanisms involved. An important aspect of cancer progression is cancer cell migration, and mitochondrial dysfunction has been observed to be involved in it; however, the molecular mechanism remains to be deciphered.

A group from Taiwan recently published findings in Biochimica et Biophysica Acta stating that enhanced β5-integrin expression, arising from mitochondrial dysfunction, promotes cell migration in a human gastric cancer cell line.

The authors used the human gastric cancer cell line SC-M1 for their studies. They first created mitochondrial dysfunction in SC-M1 cells using the oxidative phosphorylation inhibitors oligomycin (a Complex V inhibitor) and antimycin A (a Complex III inhibitor). The results indicated that impaired oxidative phosphorylation caused an increase in intracellular reactive oxygen species (ROS), which led to increased cell migration in SC-M1 cells.

Different types of integrin molecules have been implicated in cell migration. Hung et al. extracted RNA and protein from SC-M1 cells in order to study the different types of integrins, and observed that levels of β5-integrin were significantly upregulated in SC-M1 cells. Simultaneously, the surface expression of the αvβ5-integrin dimer was studied in cancer cells using FACS. The analysis revealed a higher surface expression of the dimer, corresponding to the higher protein and RNA levels of β5-integrin in SC-M1 cells with mitochondrial dysfunction. In fact, a subpopulation of SC-M1 cells with higher migration capability (SC-M1-3rd) was observed to harbor a higher level of β5-integrin expression, correlating β5-integrin expression with cell migration ability. Together, these experiments supported a role for β5-integrin in gastric cancer cell migration.

Finally, the authors confirmed the in vitro results in human gastric cancer samples. Immunohistochemical analysis revealed positive β5-integrin staining in around 73% of the cancer samples. Additionally, higher β5-integrin expression correlated with the invasive ability and more aggressive behavior of gastric cancer cells.

The authors stated: “our study pinpoints another aspect that links the induction of intracellular ROS level in mitochondrial dysfunction gastric cancer cells with the activation of αvβ5-integrin. Taken together, the induction of β5-integrin is important to gastric cancer metastasis, especially in cancer cells that exhibit mitochondrial dysfunction.”

Thus, blockade of αvβ5-integrin function by antibodies might be tested as a potential therapy for preventing or delaying gastric cancer metastasis, especially in gastric cancers harboring mitochondrial dysfunction.

Sources:

Research article: http://www.ncbi.nlm.nih.gov/pubmed?term=22561002

Related posts: http://pharmaceuticalintelligence.com/2012/09/01/mitochondria-and-cancer-an-overview/

http://pharmaceuticalintelligence.com/2012/09/06/clinical-genetics-personalized-medicine-molecular-diagnostics-consumer-targeted-dna-consumer-genetics-conference-cgc-october-3-5-2012-seaport-hotel-boston-ma/

http://pharmaceuticalintelligence.com/2012/08/14/detecting-potential-toxicity-in-mitochondria/

Read Full Post »

Comprehensive Genomic Characterization of Squamous Cell Lung Cancers

Reporter: Aviva Lev-Ari, PhD, RN

Nature (2012) doi:10.1038/nature11404

Received 09 March 2012 
Accepted 09 July 2012 
Published online 09 September 2012

The primary and processed data used to generate the analyses presented here can be downloaded by registered users from The Cancer Genome Atlas (https://tcga-data.nci.nih.gov/tcga/tcgaDownload.jsp, https://cghub.ucsc.edu/ and https://tcga-data.nci.nih.gov/docs/publications/lusc_2012/).

Lung squamous cell carcinoma is a common type of lung cancer, causing approximately 400,000 deaths per year worldwide. Genomic alterations in squamous cell lung cancers have not been comprehensively characterized, and no molecularly targeted agents have been specifically developed for its treatment. As part of The Cancer Genome Atlas, here we profile 178 lung squamous cell carcinomas to provide a comprehensive landscape of genomic and epigenomic alterations. We show that the tumour type is characterized by complex genomic alterations, with a mean of 360 exonic mutations, 165 genomic rearrangements, and 323 segments of copy number alteration per tumour. We find statistically recurrent mutations in 11 genes, including mutation of TP53 in nearly all specimens. Previously unreported loss-of-function mutations are seen in the HLA-A class I major histocompatibility gene. Significantly altered pathways included NFE2L2 and KEAP1 in 34%, squamous differentiation genes in 44%, phosphatidylinositol-3-OH kinase pathway genes in 47%, and CDKN2A and RB1 in 72% of tumours. We identified a potential therapeutic target in most tumours, offering new avenues of investigation for the treatment of squamous cell lung cancers.

Read Full Post »

Targeted Tumor-Penetrating siRNA Nanocomplexes for Credentialing the Ovarian Cancer Oncogene ID4

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Genome-scale studies of cancer samples have begun to provide a global depiction of genetic alterations in human cancers, but the complexity and volume of data that emerge from these efforts have made dissecting the underlying biology of cancer difficult, and little is known about the functions of most of the candidates that emerge. For example, in studies of 489 primary high-grade serous ovarian cancer genomes, 1825 genes were identified as targeted by recurrent amplification events. Systematic approaches to study the function of genes in cancer cell lines, such as genome-scale pooled short hairpin RNA (shRNA) screens, offer a means to assess the consequences of the genetic alterations found in such genome characterization efforts. The comprehensive characterization of a large number of cancer genomes will eventually lead to a compendium of genetic alterations in specific cancers. Unfortunately, the number and complexity of identified alterations complicate endeavors to identify biologically relevant mutations critical for tumor maintenance because many of these targets are not amenable to manipulation by small molecules or antibodies. RNA interference provides a direct way to study putative cancer targets; however, specific delivery of therapeutics to the tumor parenchyma remains an intractable problem.

Recently an shRNA-based approach was used to find genes that are both overexpressed in human primary tumors and essential for the proliferation of ovarian cancer cells. This approach identified 54 overexpressed and essential genes in ovarian cancer and 16 genes in non–small cell lung cancer that required further validation in vivo. Furthermore, many of these candidates represent targets that are not amenable to antibody-based therapeutics or traditional small molecule approaches. Thus, if one envisions a discovery pipeline that begins with cancer genomes and ends with novel therapeutics, there is a bottleneck at the point of in vivo validation of novel targets. Achieving silencing in the epithelial cells in the tumor parenchyma is especially critical to study the genetic alterations of interest. RNA interference (RNAi) is a potentially attractive means to silence the expression of candidate genes in vivo, particularly for undruggable gene products. However, systemic delivery of small interfering RNA (siRNA) to tumors has been challenging, owing to rapid clearance, susceptibility to serum nucleases, and endosomal entrapment of small RNAs, in addition to their inherent inadequate tumor penetration. Tumor penetration is also a problem for the delivery of siRNA and shRNA, among other cargos, and is characterized by limited transport into the extravascular tumor tissue beyond the perivascular region. This low penetration is thought to arise from the combination of dysfunctional blood vessels that are poorly perfused and a high interstitial pressure, especially in solid tumors, in part due to dysfunctional lymphatics. The leakiness of tumor vessels partially counteracts the poor penetration [the so-called enhanced permeability and retention (EPR) effect], but the size dependence and variability of this property can limit its usefulness. Desmoplastic stromal barriers can further impede transport of therapeutics through tumors. 
A new class of tumor-penetrating peptides has been described recently, which home to several types of tumors and leverage a consensus R/KXXR/K C-terminal peptide motif [the C-end rule (CendR)] to stimulate transvascular transport and rapidly deliver therapeutic cargo deep into the tumor parenchyma. These peptides are tumor-specific, unlike canonical cell-penetrating peptides (CPPs) that do not display cell- or tissue-type specificity, and are able to improve the delivery of small molecules, antibodies, and nanoparticles. Despite their promise, this class of peptides has not been successfully co-opted for siRNA delivery, in part owing to the additional challenges of delivering oligonucleotides across cell membranes, out of endosomes, and into the cytosol to achieve gene silencing. Here, an siRNA delivery vehicle has been designed that is tumor-penetrating and modular, so it can be easily assembled in a single step to accommodate different payloads to various genes of interest. It can be envisaged that such a technology would enable a platform wherein novel targets are identified by structural and functional genomics and subsequently rapidly credentialed both in vitro and in vivo. Follow-up studies could then identify the mechanism of action underlying the observations and establish (and ultimately prioritize) novel oncogenes as therapeutic targets. To achieve this goal, a systematic effort to identify genes that are both essential and genetically altered in human cancer cell lines and tumors was combined with the development and deployment of a novel tumor-specific and tissue-penetrating siRNA delivery platform.

Current genome characterization efforts will eventually provide insight into the genetic alterations that occur in most cancers and may define new therapeutic targets. However, most epithelial cancers harbor hundreds of genetic alterations as a consequence of genomic instability. For example, whereas recurrent somatic alterations occur in a small number of genes in high-grade ovarian cancers, ovarian cancer genomes are characterized by multiple regions of copy number gain and loss involving at least 1825 genes. This genomic chaos complicates efforts to identify biologically relevant mutations critical for tumor maintenance.

To isolate which recurrent genetic alterations are involved in cancer initiation, tumor maintenance, and/or metastasis, functional assays can be performed after systematic manipulation of the candidate oncogenes. Results from Project Achilles, a large-scale screening effort to identify genes essential for proliferation and survival in human cancer cell lines, were combined with genome characterization of high-grade ovarian cancers. Using this approach, an oncogene candidate, ID4, was identified that was amplified in 32% of high-grade serous ovarian cancers. ID4 is overexpressed in a large fraction of high-grade serous ovarian cancers, and ovarian cancer cell lines that overexpress ID4 are highly dependent on ID4 for survival and tumorigenicity. Expression of ID4 at levels corresponding to those observed in patient-derived samples induced transformation of immortalized ovarian and fallopian tube (FT) epithelial cells.

In summary, a targeted tumor-penetrating nanocomplex (TPN) was developed that is capable of precisely delivering siRNA into the tumor parenchyma, and this technology was combined with large-scale methods to credential ID4 as an oncogene target in ovarian cancer. As large-scale efforts to characterize all cancer genomes accelerate, this capability illustrates a path to identify genes that are altered in tumors, validate those that are critical to cancer initiation and maintenance, and rapidly evaluate in vivo the subset of such genes amenable to RNAi therapies and clinical translation. These observations not only credential ID4 as an oncogene in 32% of high-grade ovarian cancers but also provide a framework for the identification, validation, and understanding of potential therapeutic cancer targets.

Source References:

http://www.ncbi.nlm.nih.gov/pubmed/22896676

Read Full Post »

Imaging: seeing or imagining? (Part 1)

Author and Curator: Dror Nir, PhD

That is the question…

We are all used to clichés such as “seeing is believing”, “seeing is knowing” and “don’t be blind”. Of all our seven (natural and supernatural) senses, we tend to use and trust our eyes the most, especially when it comes to learning, accumulating experience and accepting information as correct. On the other hand, we are taught from childhood to beware of illusions and to judge not by looks but by substance. The problem is: does one recognise the substance inside an image? To answer this, a wide-ranging discipline of image interpretation was developed alongside imaging technology. In order not to fatigue the innocent reader, I’ll review the state of the art of imaging in medicine in subsequent posts, each dedicated to a specific modality. This post is dedicated to…

Current main trends in ultrasound imaging in cancer patients’ management;

The most widely used imaging modality in medicine is ultrasound. This is because it is noninvasive, practically harmless, relatively inexpensive and fairly accessible; i.e. everyone can operate it, even a layman! No formal training or certification is required!

Interestingly enough, ultrasound is labeled by the regulatory agencies, FDA and CE, as a diagnostic medical device! This is a real demonstration of the aforementioned tendency to believe our eyes, even if those eyes do not see well or the brain behind them lacks the experience required for ultrasound image interpretation.

Since “ultrasound imaging in medicine” is the subject of many textbooks and articles, I found it appropriate, for the sake of this post, simply to refer the reader to Wikipedia’s page on ultrasound in medicine (http://en.wikipedia.org/wiki/Medical_ultrasonography): “Diagnostic Sonography (ultrasonography) is an ultrasound-based diagnostic imaging technique used for visualizing subcutaneous body structures including tendons, muscles, joints, vessels and internal organs for possible pathology or lesions. Obstetric sonography is commonly used during pregnancy and is widely recognized by the public. In physics, the term “ultrasound” applies to all sound waves with a frequency above the audible range of normal human hearing, about 20 kHz. The frequencies used in diagnostic ultrasound are typically between 2 and 18 MHz.”
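As a side note on the physics quoted above, the frequency range translates directly into wavelength, which bounds the achievable spatial resolution. A minimal sketch of the relation λ = c/f, assuming the textbook speed of sound in soft tissue of about 1540 m/s (that figure is an assumption of this sketch, not something stated in the post):

```python
# Wavelength of diagnostic ultrasound in soft tissue: lambda = c / f.
# Assumes c ~ 1540 m/s, the textbook speed of sound in soft tissue
# (an assumption of this sketch, not a figure from the post).

C_TISSUE_M_PER_S = 1540.0

def wavelength_mm(freq_mhz: float) -> float:
    """Return the wavelength in millimetres for a frequency given in MHz."""
    return C_TISSUE_M_PER_S / (freq_mhz * 1e6) * 1e3

# The diagnostic range quoted above is 2-18 MHz:
for f_mhz in (2.0, 18.0):
    print(f"{f_mhz:>4} MHz -> {wavelength_mm(f_mhz):.3f} mm")
```

Roughly, 2 MHz corresponds to a wavelength of about 0.77 mm and 18 MHz to about 0.086 mm, which is why higher frequencies resolve finer detail but penetrate less deeply.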

When it comes to cancer patients’ management, ultrasound provides real-time imaging of body organs within a relatively cost-effective workflow. However, it suffers from a lack of sensitivity and specificity, especially when the investigator is fairly inexperienced. Therefore, no diagnosis is confirmed without biopsy of the suspected lesion discovered during the ultrasound scan. As mentioned in my previous post, identification of suspicious lesions in the prostate during TRUS is so inconclusive that, in order to reach a diagnosis, biopsies are taken randomly.

Did we hit the target?

To improve prostate cancer detection, various strategies to increase the diagnostic yield of prostate biopsy have been devised: sampling of visually abnormal areas; more lateral placement of biopsies; anterior biopsies; and obtaining an increased number of cores, with up to 45 biopsy cores [1-5].

In recent years, new features such as 3D and contrast-enhanced sonography, elastography and HistoScanning were added to the basic video image in order to improve the quality of ultrasound-based investigation of cancer patients.

3-D Sonography.

3-D ultrasound allows simultaneous biplanar imaging of the organ, with computer reconstructions providing a coronal plane as well as a rendered 3-D image. This promises to improve the detection and pre-clinical grading of cancer lesions. Still, the interpretation is very much “image quality” and “user experience” dependent.

3D imaging of breast using ABUS by Siemens; using the coronal view to better investigate a lesion.

3D imaging of breast using Voluson 730 by GE; three planes are presented for review by the radiologist.

Contrast-Enhanced Sonography.

Using intravenous micro-bubble agents in combination with color and power Doppler imaging increases the signal obtained in areas of increased vascularity. The underlying assumption is that vascularization in the tumor area will be more pronounced than in normal tissue. Hot off the press: the UK National Institute for Health and Clinical Excellence (NICE) has published guidance that supports the use of contrast-enhanced ultrasound with Bracco’s SonoVue ultrasound contrast agent for the diagnosis of liver cancer [6]. The main use of contrast-enhanced ultrasound is directing biopsies to the “most suspicious” areas, i.e. those which present higher vascularity. Nevertheless, in reported clinical studies [7], the sensitivity of biopsies targeted on contrast-enhanced ultrasound was only 68%.

Elastography.

Elastography is an imaging technique that evaluates the elasticity of tissue. The underlying assumption is that tumors present greater stiffness than normal tissue and will therefore be characterized by limited compressibility. The first to introduce this concept was Professor Jonathan Ophir, University of Texas, Houston [http://www.uth.tmc.edu/schools/med/rad/elasto/]:
Estimation of differences in lesions’ stiffness relies on computing the level of correlation between consecutive imaging frames while the imaged tissue is subjected to varying compression, usually applied by the sonographer manipulating the ultrasound probe. Since malignant and benign lesions can exhibit similar elasticity, elastography is not suitable for lesion characterisation; therefore, as in the previous example, its main use is identifying suspicious areas in which to take biopsies [8, 9].

Furthermore, users’ experiences with elastography reveal a lot of controversy. For example, according to Prof. Bruno Fornage of MD Anderson [http://www.auntminnie.com/index.aspx?sec=sup&sub=wom&pag=dis&ItemID=99028]: “current commercially available scanners are confounded by a lack of intraobserver reliability, so that it’s not unusual to produce an opposite result on repeat testing a few seconds later”. “There are very few evidence-based non-industry sponsored studies reporting substantial superiority [of elastography] over standard grayscale ultrasound,” he said. “In fact, a sensitivity of 82% in the diagnosis of breast cancer has been reported for elastography, versus 94% for conventional grayscale ultrasound. More disturbing is that even if the technology of elastography worked flawlessly, the huge overlap in breast pathology between very firm solid benign lesions and less firm malignancies gives this technology no practical place in the differential diagnosis of solid breast masses.”
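The correlation step described above can be sketched in a few lines. The following is an illustrative NumPy sketch of window-based displacement estimation between two frames of the same scan line; the window size, step and synthetic signal are hypothetical, not taken from any particular scanner:

```python
import numpy as np

def window_displacement(pre: np.ndarray, post: np.ndarray) -> int:
    """Estimate the shift (in samples) of `post` relative to `pre`
    from the peak of their full cross-correlation."""
    xcorr = np.correlate(post - post.mean(), pre - pre.mean(), mode="full")
    # In 'full' mode, zero lag sits at index len(pre) - 1.
    return int(np.argmax(xcorr)) - (len(pre) - 1)

def axial_strain(pre_line: np.ndarray, post_line: np.ndarray,
                 win: int = 128, step: int = 64) -> np.ndarray:
    """Per-window displacements along one scan line; the gradient of
    displacement with depth approximates the local axial strain."""
    disps = [window_displacement(pre_line[s:s + win], post_line[s:s + win])
             for s in range(0, len(pre_line) - win + 1, step)]
    return np.gradient(np.asarray(disps, dtype=float))

# Synthetic demo: a noisy scan line rigidly shifted by 3 samples
# between consecutive frames (constant displacement, zero strain).
rng = np.random.default_rng(0)
pre = rng.standard_normal(1024)
post = np.roll(pre, 3)
print(window_displacement(pre[:128], post[:128]))  # expected shift: 3
```

In a real elastogram, stiff regions deform less under compression, so their windows show a smaller displacement gradient; the sketch only illustrates the correlation-and-gradient idea, not a clinical pipeline.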

HistoScanning.

HistoScanning™ is a novel ultrasound-based software technology that utilizes advanced algorithms to address the clinical requirements for tissue characterization. It visualizes the position and extent of tissue suspected of being malignant in the target organ. In this respect its design is unique and superior to other ultrasound-based technologies [10, 11]. HistoScanning’s first clinically available application (since 2009) is in the management of prostate cancer patients.

HistoScanning indicating suspicious lesions superimposed on 3-D ultrasound of the prostate. The three imaging planes and a 3-D reconstruction of the segmented prostate are presented.

To conclude: if we want to improve the current state of the art in ultrasound-based cancer patients’ management, we should strive to introduce systems that enable medical practitioners to rule suspicious lesions in or out at imaging, before biopsying them. Using ultrasound merely as a tool for directing biopsies, as is done today, is not enough. Indeed, this requires ultrasound-based tissue characterisation in addition to detection of ultrasound-based abnormality (i.e. circumstantial evidence for cancer). To date, the only available system that bears the promise of such improvement is HistoScanning. Obviously, confidence in the negative predictive value of HistoScanning and future systems alike must become high enough to give the medical practitioner the reassurance that no significant cancer is being missed by not taking a biopsy. Such confidence can only be built by subjecting these systems to properly designed clinical studies and, no less important, by reporting the experience of early adopters who test them in controlled routine use.

References

  1. Flanigan RC, Catalona WJ, Richie JP, Ahmann FR, Hudson MA, Scardino PT, deKernion JB, Ratliff TL, Kavoussi LR, Dalkin BL: Accuracy of digital rectal examination and transrectal ultrasonography in localizing prostate cancer: results of a multicenter clinical trial of 6,630 men. J Urol 1994; 152: 1506–1509.
  2. Eichler K, Hempel S, Wilby J, Myers L, Bachmann LM, Kleijnen J: Diagnostic value of systematic biopsy methods in the investigation of prostate cancer: a systematic review. J Urol 2006; 175: 1605–1612.
  3. Delongchamps NB, de la Roza G, Jones R, Jumbelic M, Haas GP: Saturation biopsies on autopsied prostates for detecting and characterizing prostate cancer. BJU Int 2009; 10: 49–54.
  4. Rifkin MD, Dähnert W, Kurtz AB: State of the art: endorectal sonography of prostate gland. AJR Am J Roentgenol 1990; 154: 691–700.
  5. Chrouser KL, Lieber MM: Extended and saturation needle biopsy. Curr Urol Rep 2004; 5: 226–230.
  6. http://www.auntminnieeurope.com/index.aspx?sec=nws&sub=rad&pag=dis&ItemID=607068&wf=284
  7. Yi A, Kim JK, Park SH, Kim KW, Kim HS, Kim JH, Eun HW, Cho KS: Contrast-enhanced sonography for prostate cancer detection in patients with indeterminate clinical findings. AJR Am J Roentgenol 2006; 186: 1431–1435.
  8. König K, Scheipers U, Pesavento A, Lorenz A, Ermert H, Senge T: Initial experiences with real-time elastography guided biopsies of the prostate. J Urol 2005; 174: 115–117.
  9. Pallwein L, Mitterberger M, Struve P, Horninger W, Aigner F, Bartsch G, Gradl J, Schurich M, Pedross F, Frauscher F: Comparison of sonoelastography guided biopsy with systematic biopsy: impact on prostate cancer detection. Eur Radiol 2007; 17: 2278–2285.
  10. Salomon G, Spethmann J, Beckmann A, Autier P, Moore C, Durner L, Sandmann M, Hase A, Schlomm T, Michl U, Heinzer H, Graefen M, Steuber T: Accuracy of HistoScanning for the prediction of a negative surgical margin in patients undergoing radical prostatectomy. BJU Int, published online 09/08/2012.
  11. Simmons LAM, Autier P, Zatura F, Braeckman JG, Peltier A, Romics I, Stenzl A, Treurnicht K, Walker T, Nir D, Moore CM, Emberton M: Detection, localisation and characterisation of prostate cancer by prostate HistoScanning. BJU Int 2012; 110: 28–35.

 


Read Full Post »

Curator: Meg Baker, PhD, Reg Patent Agent

http://www.uphs.upenn.edu/news/News_Releases/2012/08/novartis/

Novartis provides funding for research on modified T-cell treatments. Successful application of the technique was demonstrated in a clinical study led by Dr. Carl June, a pathologist at the University of Pennsylvania, for chronic lymphocytic leukemia (CLL), one of the most common types of leukemia. The initial study was reported in August 2011: http://www.uphs.upenn.edu/news/News_Releases/2011/08/t-cells/

The concept of doctoring T-cells genetically was first developed in the 1980s by Dr. Zelig Eshhar at the Weizmann Institute of Science in Rehovot, Israel. It involves adding gene sequences from different sources to enable the T-cells to produce what researchers call chimeric antigen receptors, or CARs — protein complexes that transform the cells into, in Dr. June’s words, “serial killers.” See http://www.nytimes.com/2011/09/13/health/13gene.html?pagewanted=all

Dr. June describes the new therapy as “ultrapersonalized” because the treatments involve extracting a patient’s immune cells, using deactivated HIV-1 to deliver genes into the cells, and later infusing the re-educated cells back into the patient’s system. The treatment is characterized as “training the immune system” to attack cancer. It is thus hoped that the technology can be applied more broadly to cancer therapy.

Read more: Novartis backs UPenn’s pioneering cancer immunotherapies – FierceBiotech http://www.fiercebiotech.com/story/novartis-backs-upenns-pioneering-cancer-immunotherapies/2012-08-06#ixzz25tGYGQ9N

NYT article: http://www.nytimes.com/2012/08/06/business/novartis-and-penn-unite-on-anticancer-approach.html

Read Full Post »

 

Reporter: Aviva Lev-Ari, PhD, RN

 

ABOUT CGC

The Consumer Genetics Conference covers the key issues facing clinical genetics, personalized medicine, molecular diagnostics, and consumer-targeted DNA applications. It provides a unique outlet where all voices can be heard: pro & con, physician & consumer, research & clinical, academic & corporate, financial & regulatory. CGC is more than just another personalized medicine conference. Since the inaugural meeting in 2009, CGC has been the place where consumer companies learn genomics, and where genomics companies learn how to approach consumers. This year’s event is highlighted by keynote presentations from:

– Kenneth Chahine, Ph.D., J.D., ancestry.com
– Jay Flatley, President and CEO, Illumina
– Lee Silver, Ph.D., Princeton University

Spanning three days, the conference will focus on:
– Day 1: Technology
– Day 2: Business + Translation
– Day 3: Application

And 40+ Cutting-Edge Presentations on:
– Personal Genomics
– Third-Generation Sequencing
– Molecular Diagnostics
– Investment & Funding Opportunities
– Genome Interpretation
– The Future of Personalized Medicine
– Big Data
– Prenatal/Neonatal & Disease Diagnostics
– Empowering Patients
– Nutrition, Food Genetics & Cosmetics

SPEAKERS

Confirmed speakers to date include:

Sandy Aronson, Executive Director of IT, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM)

Arindam Bhattacharjee, Ph.D., CEO and Founder, Parabase Genomics

Diana Bianchi, M.D., Executive Director, Mother Infant Research Institute; Vice Chair for Research, Department of Pediatrics, Floating Hospital for Children, Tufts Medical Center

Cinnamon Bloss, Ph.D., Assistant Professor and Director, Social Sciences and Bioethics, Scripps Translational Science Institute

Alexis Borisy, Partner, Third Rock Ventures

John Boyce, President and CEO, GnuBio

Mike Cariaso, Founder, SNPedia; Author of Promethease

Kenneth Chahine, Ph.D., J.D., Senior Vice President and General Manager, DNA, ancestry.com

Michael Christman, CEO, Coriell Institute for Medical Research

Cindy Crowninshield, RD, LDN, Licensed Registered Dietitian, Body Therapeutics & Sodexo; Founder, Eat2BeWell & Eat4YourGenes; Conference Director, Cambridge Healthtech Institute

Kevin Davies, Ph.D., Editor-in-Chief, Bio-IT World

Chris Dwan, Principal Investigator and Director, Professional Services, BioTeam

Jay Flatley, President & CEO, Illumina

Andrew C. Fish, Executive Director, AdvaMedDx

Dennis Gilbert, Ph.D., Founder, President and CEO, VitaPath Genetics

Rosalynn Gill, Ph.D., Vice President, Clinical Affairs, Boston Heart Diagnostics

Steve Gullans, Managing Director, Excel Venture Management

Don Hardison, President & CEO, Good Start Genetics, Inc.

Richard Kellner, Founder and President, Genome Health Solutions, Inc.

Robert Klein, Ph.D., Chief Business Development Officer, Complete Genomics

Isaac S. Kohane, M.D., Ph.D., Henderson Professor of Health Sciences and Technology, Children’s Hospital and Harvard Medical School; Director, Countway Library of Medicine; Director, i2b2 National Center for Biomedical Computing; Co-Director, HMS Center for Biomedical Informatics

Stan Lapidus, President, CEO and Founder, SynapDx

Gholson Lyon, M.D., Ph.D., Assistant Professor in Human Genetics, Cold Spring Harbor Laboratory; Research Scientist, Utah Foundation for Biomedical Research

Daniel MacArthur, Ph.D., Assistant Professor, Massachusetts General Hospital; Co-founder, Genomes Unzipped

Craig Martin, Chief Executive Officer, Feinstein Kean Healthcare

James McCullough, CEO and Founder, Exosome Diagnostics

Kevin McKernan, CSO, Courtagen Life Sciences

Neil A. Miller, Director of Informatics, Center for Pediatric Genomic Medicine, Children’s Mercy Hospital

Paul Morrison, Ph.D., Laboratory Director, Molecular Biology Core Facilities, Dana-Farber Cancer Institute

Geert-Jan Mulder, M.D., General Partner, Forbion Capital

Steve Murphy, M.D., Managing Partner, Wellspring Total Health

Michael Murray, M.D., Clinical Chief, Genetics Division, Brigham and Women’s Hospital; Instructor, Harvard Medical School, The Harvard Clinical and Translational Science Center

Brian T. Naughton, Ph.D., Founding Scientist, 23andMe

Nathan Pearson, Ph.D., Director of Research, Knome, Inc.

Michael S. Phillips, Ph.D., Canada Research Chair in Translational Pharmacogenomics; Director, Molecular Diagnostic Laboratory, Montreal Heart Institute; Associate Professor, Université de Montréal

John Quackenbush, Ph.D., Professor, Biostatistics and Computational Biology, Cancer Biology Center for Cancer Computational Biology, Dana-Farber Cancer Institute

Martin G. Reese, President and CEO, Omicia

Heidi L. Rehm, Ph.D., FACMG, Chief Laboratory Director, Molecular Medicine, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM); Assistant Professor of Pathology, Harvard Medical School

Oliver Rinner, Ph.D., CEO, BiognoSYS AG

Meredith Salisbury, Senior Consultant, Bioscribe

Marc Salit, Group Leader, Biochemical Science and Multiplexed Biomolecular Science, National Institute of Standards and Technology

Lee Silver, Ph.D., Professor of Molecular Biology and Public Affairs; Faculty Associate, Science, Technology & Environmental Policy Program, Office of Population Research, and the Center for Health and Wellbeing, Woodrow Wilson School, Princeton University

Jamie Streator, Managing Director, Healthcare Investment Banking, Cowen & Company

Joseph V. Thakuria, M.D., MMSc, Attending Physician in Clinical and Biochemical Genetics, Medical Genetics, Massachusetts General Hospital; Medical Director, Personal Genome Project; Harvard Catalyst Translational Genetics and Bioinformatics Program, MGH Center for Human Genetics Research

Samuil R. Umansky, M.D., Ph.D., D.Sc., Co-founder, CSO, and President, DiamiR LLC

David A. Weitz, Ph.D., Mallinckrodt Professor of Physics and Applied Physics, Harvard School of Engineering and Applied Sciences

Speaker to be Announced, Barclays

DAY 1: TECHNOLOGY

WEDNESDAY, OCTOBER 3

7:30 am Conference Registration

8:30 Opening Remarks

John Boyce, President and CEO, GnuBIO and Meredith Salisbury, Senior Consultant, Bioscribe

 

OPENING PLENARY SESSION

 

» 8:45 KEYNOTE PRESENTATION

Self-Discovery in the Age of Personal Genomes

Lee Silver, Ph.D., Professor of Molecular Biology and Public Affairs; Faculty Associate, Science, Technology & Environmental Policy Program, Office of Population Research, and the Center for Health and Wellbeing, Woodrow Wilson School, Princeton University

With blinding speed, the biomedical research enterprise is advancing the technology to read personal genomes with greater accuracy, in less time, and at less expense. Meanwhile, consumer genetics has blossomed from infancy to adolescence with an array of innovative consumer-facing products. This unanticipated cottage industry is struggling with growing pains in a mix of conflicted regulators, restless innovators, and demanding consumers. Genetic information, like all information, “wants to be free,” but the commercialization environment is not yet optimized for personal freedom.

 

9:40 The Era of Clinical Sequencing and Personalized Medicine

Michael Christman, CEO, Coriell Institute for Medical Research

Advances in understanding genomic variation and associated clinical phenotypes continue to increase while the cost of full genome sequencing rapidly declines. Having access to your genomic information will become increasingly important as physicians are progressively receptive to incorporating genomics into routine clinical practice. When you need a new prescription, it will be necessary for your physician to quickly and securely access your genetic data to understand drug efficacy prior to dosing. Who will patients and medical professionals trust to store and interpret the data? Coriell is positioned to significantly contribute to the research needed to accelerate the adoption and routine use of genomics in medicine.

 

10:20 FEATURED PRESENTATION

Stan Lapidus, President, CEO and Founder, SynapDx

 

10:50 Coffee Break

 

BIG DATA/ANALYSIS

11:20 IT Infrastructure Required to Manage Patient Genetic Test Results

Sandy Aronson, Executive Director of IT, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM)

There are many challenges associated with getting the maximum value out of a genetic test. This talk will focus on information technology infrastructure that can help.

11:50 Issues in Genomics at Scale

Chris Dwan, Principal Investigator and Director, Professional Services, BioTeam

2012 marks, in many respects, the beginning of the second decade of high-throughput DNA sequencing. Robust, well understood solutions exist for many of the major technical challenges involved in operating a high-throughput genomics facility. Petabyte scale data storage, well suited to research computing in this space, provides a clean example. Certainly it still requires careful planning and thorough engineering to deploy such infrastructure. However, we can now purchase robust systems from multiple vendors rather than having to stitch together solutions in-house. Perhaps more importantly, we can rely on the experience of a community of peers who have been through the exercise before. By contrast, the legal, regulatory, ethical, and privacy concerns in this space have only begun to be explored. As we plan for the coming years, we must certainly plan for technical uncertainty. Technologists find themselves in the role of guessing at the future. As translational medicine, clinical genome sequencing, and other practices become the norm, we must assume extreme and occasionally capricious changes to the social ecosystem. This talk will explore these issues in the context of nearly a decade supporting research computing and genomics for a broad variety of institutions.

12:20 pm Sponsored Presentation (Opportunity Available)

12:50 Luncheon Presentation (Sponsorship Opportunity Available)
or Lunch on Your Own

 

MOLECULAR DIAGNOSTICS

2:05 Panel Discussion
Panelists will first give a brief presentation and then convene for a panel discussion.

Michael S. Phillips, Ph.D., Canada Research Chair in Translational Pharmacogenomics; Director, Molecular Diagnostic Laboratory, Montreal Heart Institute; Associate Professor, Université de Montréal (Moderator)

Molecular Diagnostics and the Patient/Consumer

Andrew C. Fish, Executive Director, AdvaMedDx

This presentation will envision a future in which molecular diagnostics are widely utilized not only for decision making by health professionals, but also for the development and use of a wide range of consumer products that include genetic tests themselves. The speaker will discuss various policy implications of this convergence of patient and consumer interests driven by the expanding availability of molecular diagnostics.

Bridging the Gap between Genetic Risk and Blood Diagnostics by Personalized Health Monitoring

Oliver Rinner, Ph.D., CEO, Biognosys AG

Biognosys has developed a solution to quantify and track protein levels over time from a drop of blood. With a novel mass spectrometric technology, we can record signals from thousands of proteins in a single instrument run and store such digital protein maps in a digital bio-bank that can be screened in silico for known and novel biomarkers. We will provide this technology as personalized health monitoring to patients and consumers who seek actionable information about their state of health.

Measuring Disease Treatment and Progression at the Molecular Level without Biopsy

James McCullough, CEO and Founder, Exosome Diagnostics

Exosome has developed a solution that has the ability to measure, at the molecular level without biopsy, the dynamic nature of both treatment and disease progression. The company has developed a means of isolating exosomes: exosomes are shed into all biofluids, including blood, urine, and CSF, forming a stable source of intact, disease-specific nucleic acids. From these, the company is able to develop predictive gene expression profiles to achieve high sensitivity for rare gene transcripts and the expression of genes responsible for cancers and other diseases. This technology obviates the need for biopsy, and provides a means for detection at a much earlier stage of treatment.

3:20 Refreshment Break

3:50 Sponsored Presentation (Opportunity Available)

 

SEQUENCING

4:20 Panel Discussion

Like a double helix, the future growth of consumer genetics is intimately entwined with technology advances in next-generation sequencing. While the industry excitedly awaits the commercial debut of potentially disruptive nanopore sequencing platforms, existing platforms continue to roll out new enhancements and sequencing strategies that bring us within striking distance of clinical-grade whole genome sequencing. This panel discussion brings together leaders from existing and emerging sequencing providers to present and debate a range of questions including the pros and cons of targeted versus whole-genome sequencing, the emergence of third-generation sequencing platforms, and the challenges of integrating genome sequencing into the clinic.

Paul Morrison, Ph.D., Laboratory Director, Molecular Biology Core Facilities, Dana-Farber Cancer Institute (Moderator)

Panelists:

John Boyce, President and CEO, GnuBIO
Robert Klein, Ph.D., Chief Business Development Officer, Complete Genomics Inc.
Speaker to be Announced, Life Technologies
Speaker to be Announced, Illumina

5:50-6:50 Welcome Reception in the Exhibit Hall with Poster Viewing

 

DAY 2: BUSINESS + TRANSLATION

THURSDAY, OCTOBER 4

7:45 am Morning Coffee

 

TRANSLATIONAL GENOMICS

8:15 Panel Discussion
Panelists will first give a brief presentation and then convene for a panel discussion.

Kevin Davies, Ph.D., Editor-in-Chief, Bio-IT World (Moderator)

All Genomes are Dysfunctional: The Challenges of Interpreting Whole-Genome Data from Healthy Individuals

Daniel MacArthur, Ph.D., Assistant in Genetics, Massachusetts General Hospital; Co-founder, Genomes Unzipped

Recent advances in DNA sequencing technology have made cheap, rapid interrogation of complete genome and exome sequences an almost mundane exercise, and have resulted in significant progress in the discovery of disease-causing sequence changes from the genomes of individuals with rare diseases or cancers. However, such successes do not necessarily translate into an improved ability to use genome-scale data to predict future disease probability for currently healthy individuals. In this presentation I will highlight some of the major technical and analytical challenges associated with developing predictive genomic medicine for the healthy majority.

Consumer Genomics: What do People do with Their Genomes?

Cinnamon Bloss, Ph.D., Assistant Professor and Director, Social Sciences and Bioethics, Scripps Translational Science Institute

Direct-to-consumer personalized genomic testing is controversial, and there are few empirical data to inform the debate regarding use and regulation. The Scripps Genomic Health Initiative is a large longitudinal cohort study of over 2,000 adults who have undergone testing with a commercially available genomic test. Findings from this initiative regarding the psychological, behavioral and clinical impacts of genomic testing on consumers will be presented.

Advances in Noninvasive Prenatal Genetic Testing: Does this Mean “Designer” Babies for All?

Diana Bianchi, M.D., Executive Director, Mother Infant Research Institute; Vice Chair for Research, Department of Pediatrics, Floating Hospital for Children, Tufts Medical Center

Noninvasive prenatal testing for Down syndrome and other chromosome disorders using massively parallel DNA sequencing techniques is now available on a clinical basis in the US. With expected advances in sequencing techniques it will soon be possible to take a blood sample from a pregnant woman and determine if her fetus has a chromosome abnormality or a single gene disorder. How much information do prospective couples want and how do these technical advances affect well-established algorithms for prenatal care?

Translating Genomics into Clinical Care

Heidi L. Rehm, Ph.D., FACMG, Chief Laboratory Director, Molecular Medicine, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM); Assistant Professor of Pathology, Harvard Medical School

This talk will focus on approaches to integrate clinical sequencing into genomic medicine. It will cover next generation sequencing test development from disease panels to whole genomes and the interpretation and reporting of genetic variants identified in patients.

Impact of Genomic Sequencing on Public Health and Preventive Medicine

Joseph V. Thakuria, M.D., MMSc, Attending Physician in Clinical and Biochemical Genetics and Medical Director, Personal Genome Project, Massachusetts General Hospital Center for Human Genetics Research

Early findings in the Personal Genome Project (established by George Church) suggest significant impact for public health and preventive medicine. Solutions to accelerate clinical adoption and address large molecular data challenges will be explored.

9:30 FEATURED PRESENTATION
Genome-in-a-Bottle: Reference Materials and Methods for Confidence in Whole Genome Sequencing

Marc Salit, Group Leader, Biochemical Science and Multiplexed Biomolecular Science, National Institute of Standards and Technology

Clinical application of ultra-high-throughput sequencing (UHTS), or “Next Generation Sequencing,” for hereditary genetic diseases and oncology is rapidly emerging. At present, there are no widely accepted genomic standards or quantitative performance metrics for confidence in variant calling. These are needed to achieve the confidence in measurement results expected for sound, reproducible research and regulated applications in the clinic. NIST has convened the “Genome-in-a-Bottle Consortium” to develop the reference materials, reference methods, and reference data needed to assess confidence in human whole genome variant calls. A principal motivation for this consortium is to develop an infrastructure of widely accepted reference materials and accompanying performance metrics to provide a strong scientific foundation for the development of regulations and professional standards for clinical sequencing.

10:00 Coffee Break in the Exhibit Hall with Poster Viewing

 

VENTURE CAPITAL & INVESTMENT BANKING

10:30 Panel Discussion

This “Funding to IPO” panel brings together some of the top venture capitalists and investment bankers in therapeutics, diagnostics, and consumer genetics. The series of presentations and follow-on panel discussion will take attendees through the financial cycle, from funding to IPO, with VCs and bankers highlighting the corporate criteria most important to them and the metrics by which they make their decisions.
Panelists:

Geert-Jan Mulder, M.D., General Partner, Forbion Capital

Alexis Borisy, Partner, Third Rock Ventures

Steve Gullans, Managing Director, Excel Venture Management

Jamie Streator, Managing Director, Healthcare Investment Banking, Cowen & Company

Speaker to be Announced, Barclays

12:15 pm Luncheon Presentation (Sponsorship Opportunity Available)
or Lunch on Your Own

 

GENOME DATA: THE PHYSICIAN’S PERSPECTIVE

1:45 Panel Discussion

While making the effort to deploy genomics and sequence data in preventative and clinical care is a noble cause, it is also one that requires pragmatic solutions. This panel discussion will address practical issues related to the day-to-day use of genomic technologies in the clinic — from hospital to private practice to academia.

Steve Murphy, M.D., Managing Partner, Wellspring Total Health (Moderator)
Panelists:

Michael Murray, M.D., Clinical Chief, Genetics Division, Brigham and Women’s Hospital; Instructor, Harvard Medical School, The Harvard Clinical and Translational Science Center

Isaac Samuel Kohane, M.D., Ph.D., Henderson Professor of Health Sciences and Technology, Children’s Hospital and Harvard Medical School; Director, Countway Library of Medicine; Director, i2b2 National Center for Biomedical Computing; Co-Director, HMS Center for Biomedical Informatics

3:00 Refreshment Break in the Exhibit Hall with Poster Viewing

 

GENOME INTERPRETATION

3:30 Omicia: Interpreting Genomes for Clinical Relevance

Martin G. Reese, President and CEO, Omicia

Automatic annotation of variants and integration of disparate data sources is just the first step in the eventual adoption of genomes into clinical practice. The next step is reducing this complexity to the few actionable, clinically relevant findings. We will show how we integrate such methods within an automated, comprehensive and easy-to-use platform for the interpretation of individual genome data. The system allows variants to be prioritized with respect to their potential clinical impact and is preloaded with clinical gene sets and proprietary annotations to enhance discovery and reporting of personal genes and variants. Furthermore, it is extensible and allows the integration of the user’s proprietary gene and variant sets. We will show several exome and genome analyses.

3:50 Personalized Genomic Interpretation with SNPedia and Promethease

Mike Cariaso, Founder, SNPedia; Author of Promethease

With whole genome prices falling and microarray genotyping accessible to ordinary people over the internet, the challenge is no longer in acquiring the raw data, but in interpreting and using it. In this talk, I will outline a freely available database of literature, organized by the relevant DNA position and phenotypic effects. A complementary analysis program reads raw genomic data and produces a hyperlinked and searchable report of known associations. It can also perform special processing of family trios (child, mother, father), make predictions about offspring, and identify shared ancestry.

4:10 GenoSpace: Creating an Information Ecosystem for 21st Century Genomic Medicine

John Quackenbush, Ph.D., Professor, Biostatistics and Computational Biology, Center for Cancer Computational Biology, Dana-Farber Cancer Institute

New sequencing technologies are driving the cost of genomic data generation to unprecedented lows, making sequencing available as a potentially valuable clinical and diagnostic tool. The challenge is solving “the last 100 yards” problem–delivering the data to those who need to access it in a manner in which they can use it effectively. GenoSpace has developed technology to connect the diverse consumers and producers of genomic data, creating an ecosystem in which we have the potential to advance genomic medicine.

 

VISIONS FOR PERSONALIZED MEDICINE

 

» 4:30 KEYNOTE PRESENTATION

The Big Picture: Visions for Personalized Medicine

Jay Flatley, President and CEO, Illumina

 

5:30 Social Event and Party

 

DAY 3: APPLICATIONS

FRIDAY, OCTOBER 5

8:00 am Morning Coffee

» 8:30 KEYNOTE PRESENTATION 

An Inside Look at How AncestryDNA Uses Population Genetics to Enrich Its Online Family History Experience

Kenneth Chahine, Ph.D., J.D., Senior Vice President & General Manager, DNA, ancestry.com

Ancestry.com is the world’s largest online resource for family history, with an extensive collection of over 10 billion historical records that have been digitized, indexed and made available online over the past 13 years. In May 2012, AncestryDNA launched a direct-to-consumer genealogical DNA test that delivers two results to customers. The first result predicts identity-by-descent and allows the customer to find genetic relatives within the AncestryDNA customer database. The second determines the customer’s admixture to provide a predicted genetic ethnicity using a state-of-the-art algorithm. The AncestryDNA team leverages pedigrees, documents, geographical information and its extensive biobank of worldwide DNA samples to conduct innovative research in population genetics and translates the complexities of genetic science into a simple, understandable, and meaningful user experience.

 

9:15 Past, Present and Future of Consumer Genetics, a Pioneer’s Perspective

Rosalynn Gill, Ph.D., Vice President, Clinical Affairs, Boston Heart Diagnostics

The first consumer genetics company, Sciona, founded by Rosalynn Gill, launched its services in April 2001 in the UK in what was either a breakthrough in innovation or an act of incredible naiveté. Twelve years later, many lessons have been learned, but the jury is still out on the appropriate regulatory framework, the necessary industry standards and what constitutes a sustainable business model.

9:45 Sponsored Presentation (Opportunity Available)

10:15 Coffee Break in the Exhibit Hall with Poster Viewing

 

PRENATAL/NEONATAL DIAGNOSTICS 

10:45 Panel Discussion

Panelists will first give a brief presentation and then convene for a panel discussion.

Meredith Salisbury, Senior Consultant, Bioscribe (Moderator)

Neonatal Genomic Medicine

Neil A. Miller, Director of Informatics, Center for Pediatric Genomic Medicine, Children’s Mercy Hospital

The causal gene is known for more than 3,500 monogenic diseases. Many of these can present in the neonatal period, causing up to 30% of neonatal intensive care unit admissions. In the last six months, we have started to offer very rapid diagnostic testing for these diseases at Children’s Mercy Hospital based on genome sequencing. The emerging indications and utility of neonatal genomic medicine will be discussed.

Screening Neonates by Targeted Next-Generation DNA Sequencing

Arindam Bhattacharjee, Ph.D., CEO and Founder, Parabase Genomics

We are developing a neonatal genome sequencing test that will allow screening and diagnosis primarily of newborns and infants affected with a disease or condition, enabling prompt treatment. The current approach of DNA-based genetic screening for symptomatic and high-risk individuals is not focused on neonates, so healthcare providers and parents are unable to understand the cause and treatment of a condition in the absence of clear symptoms. Our test is unique in that it simultaneously screens and/or diagnoses hundreds of these conditions at once from a single sample, providing more comprehensive information to families and their physicians. It is nonetheless affordable and provides access to the high-resolution sequence data.

Using NGS Sequencing to Improve the Standard of Care for Routine Genetic Carrier Screening

Don Hardison, President & CEO, Good Start Genetics, Inc.

11:45 Luncheon Presentation (Sponsorship Opportunity Available)
or Lunch on Your Own

 

NUTRITION, FOOD GENETICS & COSMETICS

1:00 The Importance of Genetic Testing-Directed Vitamin Use

Dennis Gilbert, Ph.D., Founder, President and CEO, VitaPath Genetics

VitaPath Genetics, Inc. has developed a platform for genomic-based tests that determine the need for vitamin therapy in medically actionable conditions. Using its platform, VitaPath can develop specific vitamin-remediated risk assays that help manage the $30 billion spent on supplements in the U.S. each year. The first test developed by VitaPath measures genetic risk factors associated with spina bifida to identify women who would benefit from low-risk, prescription-strength folic acid supplementation.

1:20 Using Weight Management Genetic Testing in Nutrition Counseling:
A Dietitian Weighs in on the Matter

Cindy Crowninshield, RD, LDN, Licensed Registered Dietitian, Body Therapeutics & Sodexo; Founder, Eat2BeWell & Eat4YourGenes; Conference Director, Cambridge Healthtech Institute

Between January and July 2012, 15 patients took a weight management genetic test to support their weight loss efforts. An individualized nutrition plan based on their eating and lifestyle habits and test results was created for each person. Data and several case studies will be presented to show how successful these patients were in achieving their weight loss goals. Challenges and opportunities will be discussed. Also presented will be tips and suggestions for genetic testing companies on how they can best work with a private practitioner’s office.

1:40 How Microfluidics is Changing the Landscape of Personalized Cosmetics

David A. Weitz, Ph.D., Mallinckrodt Professor of Physics and Applied Physics, Harvard School of Engineering and Applied Sciences

2:00 Refreshment Break in the Exhibit Hall with Poster Viewing

 

DISEASE DIAGNOSTICS

2:30 Clinical Sequencing and Mitochondrial Disease

Kevin McKernan, CSO, Courtagen Life Sciences

We describe the results from sequencing 64 patients’ mitochondrial genomes in conjunction with 1,100 nuclear genes. Complementing these data with multiplex ELISA assays to monitor protein levels in the blood can provide additional insight into variants of unknown significance and aid therapeutic decisions.

2:50 A Paradigm Shift: Universal Screening Test

Samuil R. Umansky, M.D., Ph.D., D.Sc., Co-founder, CSO, and President, DiamiR LLC

We will present a fundamentally new approach to the development of a screening test aimed at diseases of various organ systems, organs and tissues. The test is non-invasive and cost efficient. The data we will present demonstrate the potential of our approach for early detection of neurodegenerative diseases, cancer and inflammatory diseases of gastrointestinal and pulmonary systems.

 

THE EMPOWERED PATIENT

3:10 Genomes R Us – How Personalized Medicine is Reshaping the Role of Patients, and Why It Matters

Craig Martin, CEO, Feinstein Kean Healthcare

Much has been said about the advancements in science underlying the genomic revolution. We are beginning now to see the impact at the clinical level, and there’s more to come in the pipeline. But what does this shift in medicine do to change the role of the patient? This presentation provides insights into how best to engage with patient communities to expedite research, commercialization and market impact of innovative technologies, diagnostics and treatments, and to help validate the relative efficacy of such advancements in a value-driven world.

3:40 Consumer Empowerment in Health Care and Personal Genomics: Ethical, Societal and Regulatory Considerations

Gholson Lyon, M.D., Ph.D., Assistant Professor in Human Genetics, Cold Spring Harbor Laboratory; Research Scientist, Utah Foundation for Biomedical Research

The pace of exome and genome sequencing is accelerating with the identification of many new disease-causing mutations in research settings, and it is likely that whole exome or genome sequencing could have a major impact in the clinical arena in the relatively near future. However, the human genomics community is currently facing several challenges, including phenotyping, sample collection, sequencing strategies, bioinformatics analysis, biological validation of variant function, clinical interpretation and validity of variant data, and delivery of genomic information to various constituents. I will review these challenges, with an eye toward consumer genetics.

4:10 It Hurts Less If You Know More: An Empowered Patient’s Diagnostic Odyssey

Richard Kellner, Co-Founder and President, Genome Health Solutions, Inc.

For the early detection, diagnosis and treatment of cancer, there is a wide gap between current “standards of care” and what is possible through the use of advanced genomic technologies. Over the past two years I learned this lesson first hand through personal experiences involving myself, close friends and family members. My story is one of serendipity, frustration and then hope. I learned that, unfortunately, where you live and who you know can greatly influence your quality of care. I also learned that you can overcome these limitations by becoming an “empowered patient” who actively seeks out doctors who are willing to get outside of their comfort zones and practice “participatory medicine,” sometimes at the cutting edge of new precision diagnostics. I will present a new roadmap that both patients and doctors can follow toward a new era of personalized genomic medicine.

 

COMPANIES THAT EMPOWER THE PATIENT

4:40 23andMe’s DTC Exome

Brian T. Naughton, Ph.D., Founding Scientist, 23andMe

In October 2011, 23andMe launched a $999 direct-to-consumer exome product to a limited group of customers. This talk presents findings from this project, including the ubiquitous issue of variants of unknown significance.

5:10 Winding the Asklepian Wand: The Advent of Whole Genomes in Healthcare

Nathan Pearson, Ph.D., Director of Research, Knome, Inc.

With ever cheaper sequencing, richer reference data, and sharper interpretation methods, the clinical use of whole genomes is taking root in pediatrics, oncology, and beyond. Our genomes will ultimately join other cornerstones of clinical care, helping us stay healthier from birth to old age. But that prospect will require fast, robust pipelines that smartly interpret genomes, in the context of good phenotype data, and feed decisive insights back to patients and caregivers. Learn how Knome is making that happen.

5:40 Close of Conference

Source:

http://www.consumergeneticsconference.com/cgc_content.aspx?id=117407&libID=117355

 


Reporter: Aviva Lev-Ari, PhD, RN

Genomics and the State of Science Clarity

Projects supported by the US National Institutes of Health will have produced 68,000 total human genomes — around 18,000 of those whole human genomes — through the end of this year, National Human Genome Research Institute estimates indicate. And in his book, The Creative Destruction of Medicine, the Scripps Research Institute‘s Eric Topol projects that 1 million human genomes will have been sequenced by 2013 and 5 million by 2014.

“There’s a lot of inventory out there, and these things are being generated at a fiendish rate,” says Daniel MacArthur, a group leader in Massachusetts General Hospital‘s Analytic and Translational Genetics Unit. “From a capacity perspective … millions of genomes are not that far off. If you look at the rate that we’re scaling, we can certainly achieve that.”

The prospect of so many genomes has brought clinical interpretation into focus — and for good reason. Save for regulatory hurdles, it seems to be the single greatest barrier to the broad implementation of genomic medicine.

But there is an important distinction to be made between the interpretation of an apparently healthy person’s genome and that of an individual who is already affected by a disease, whether known or unknown.

In an April Science Translational Medicine paper, Johns Hopkins University School of Medicine‘s Nicholas Roberts and his colleagues reported that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals. The researchers then concluded that whole-genome sequencing was not likely to be clinically useful for that purpose. (See sidebar, story end.)

“The Roberts paper was really about the value of omniscient interpretation of whole-genome sequences in asymptomatic individuals and what were the likely theoretical limits,” says Isaac Kohane, chair of the informatics program at Children’s Hospital Boston. “That was certainly an important study, and it was important to establish what those limits of knowledge are in asymptomatic populations. But, in fact, the major and most important use cases [for whole-genome sequencing] may be in cases of disease.”

Still, targeted clinical interpretations are not cut and dried. “Even in cases of disease, it’s not clear that we know now how to look across multiple genes and figure out which are relevant, which are not,” Kohane adds.

While substantial progress has been made — in particular, for genetic diseases, including certain cancers — ambiguities have clouded even the most targeted interpretation efforts to date. Technological challenges, meager sample sizes, and a need for increased, fail-safe automation all have hampered researchers’ attempts to reliably interpret the clinical significance of genomic variation. But perhaps the greatest problem, experts say, is a lack of community-wide standards for the task.

Genes to genomes

When scientists analyzed James Watson’s genome — his was the first personal sequence, completed in 2007 and published in Nature in 2008 — they were surprised to find that he harbored two putative homozygous SNPs matching Human Gene Mutation Database entries that, were they truly homozygous, would have produced severe clinical phenotypes.

But Watson was not sick.

As researchers search more and more genomes, such inconsistencies are increasingly common.

“My take on what has happened is that the people who were doing the interpretation of the raw sequence largely were coming from a SNPs world, where they were thinking about sequence variants that have been observed before, or that have an appreciable frequency, and weren’t thinking very much about the singleton sequence variants,” says Sean Tavtigian, associate professor of oncology at the University of Utah.

“There is a qualitative difference between looking at whole-genome sequences and looking at single genes or, even more typically, small numbers of variants that have been previously implicated in a disease,” Boston’s Kohane adds.
“Previously, because of the cost and time limitations around sequencing and genotyping, we only looked at variants in genes for which we had a clinical indication. Now, since we can essentially see that in the near future we will be able to do a full genome sequence for essentially the same cost as just a focused set-of-variants test, all of a sudden we have to ask ourselves: What is the meaning of variants that fall outside where we would have ordinarily looked for a given disease or, in fact, if there is no disease at all?”

Mass General’s MacArthur says it has been difficult to pinpoint causal variants because they are enriched for both sequencing and annotation errors. “In the genome era, we can generate those false positives at an amazing rate, and we need to work hard to filter them back out,” he says.

“Clinical geneticists have been working on rare diseases for a long time, and have identified many genes, and are used to working in a world where there is sequence data available only from, say, one gene with a strong biological hypothesis. Suddenly, they’re in this world where they have data from patients on all 20,000 genes,” MacArthur adds. “There’s a fundamental mind-shift there, in shifting from one gene through to every gene. My impression is that the community as a whole hasn’t really internalized that shift; people still have a sense in their head that if you see a strongly damaging variant that segregates with the disease, and maybe there’s some sort of biological plausibility around it as well, that that’s probably the causal variant.”

Studies have shown that that’s not necessarily so. Because of this, “I do worry that in the next year or so we’ll see increasing numbers of mutations published that later prove to just be benign polymorphisms,” MacArthur adds.

“The meaning of whole-genome sequence I think is very much front-and-center of where genomics is going to go. What is the true, clinical meaning? What is the interpretation? And, there’s really a double-edged sword,” Kohane says. On one hand, “if you only focus on the genes that you believe are relevant to the condition you’re studying, then you might miss some important findings,” he says. Conversely, “if you look at everything, the likelihood of a false positive becomes very, very high. Because, if you look at enough things, invariably you will find something abnormal,” he adds.
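Kohane's multiple-comparisons point can be made with back-of-the-envelope arithmetic. The per-site error rate and the variant counts below are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope illustration: scanning a whole genome multiplies the
# chance of a spurious "abnormal" finding. The per-site error rate and the
# number of variants examined are illustrative assumptions.

def expected_false_findings(n_sites_examined, per_site_error_rate):
    """Expected number of erroneous 'abnormal' calls among the sites examined."""
    return n_sites_examined * per_site_error_rate

targeted = expected_false_findings(50, 1e-4)            # a focused variant panel
genome_wide = expected_false_findings(3_000_000, 1e-4)  # ~3M variants in a genome

print(targeted, genome_wide)
```

Even at an identical per-site error rate, moving from a 50-variant panel to a whole genome turns a negligible expected false-positive count into hundreds.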

False positives are but one of several challenges facing scientists working to analyze genomes in a clinical context.

Technical difficulties

That advances in sequencing technologies are far outstripping researchers’ abilities to analyze the data they produce has become a truism of the field. But current sequencing platforms are still far from perfect, making most analyses complicated and nuanced. Among other things, improvements in both read length and quality are needed to enable accurate and reproducible interpretations.

“The most promising thing is the rate at which the cost-per-base-pair of massively parallel sequencing has dropped,” Utah’s Tavtigian says. Still, the cost of clinical sequencing is not inconsequential. “The $1,000, $2,000, $3,000 whole-genome sequences that you can do right now do not come anywhere close to 99 percent probability to identify a singleton sequence variant, especially a biologically severe singleton sequence variant,” he says. “Right now, the real price of just the laboratory sequencing to reach that quality is at least $5,000, if not $10,000.”
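Tavtigian's point about sequencing quality can be illustrated with a simple Poisson coverage model, a common way to reason about singleton-variant detection. The depths, minimum-read threshold, and allele fraction below are illustrative assumptions, not his figures:

```python
# Hedged sketch: a Poisson coverage model for the probability of detecting a
# heterozygous singleton variant. Assumes reads land uniformly at random and
# that a caller needs some minimum number of variant-supporting reads.
import math

def p_detect(mean_depth, min_alt_reads=3, het_fraction=0.5):
    """P(at least min_alt_reads reads carry the variant allele)."""
    lam = mean_depth * het_fraction  # expected variant-supporting reads at the site
    p_below = sum(math.exp(-lam) * lam**k / math.factorial(k)
                  for k in range(min_alt_reads))
    return 1.0 - p_below

for depth in (10, 30, 60):
    print(depth, round(p_detect(depth), 4))
```

Under these toy assumptions, a 10x genome falls well short of 99 percent detection probability for a heterozygous site, while 30x clears it, which is one way to see why cheap low-coverage genomes are not yet clinical-grade.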

However, Tavtigian adds, “techniques for multiplexing many samples into a channel for sequencing have come along. They’re not perfect yet, but they’re going to improve over the next year or so.”

Using next-generation sequencing platforms, researchers have uncovered a variety of SNPs, copy-number variants, and small indels. But to MacArthur’s mind, current read lengths are not up to par when it comes to clinical-grade sequencing, and they have made supernumerary quality-control measures necessary.

“There’s no question that we’re already seeing huge improvements. … And as we add in to that changes in technology — for instance much, much longer sequencing reads, more accurate reads, possibly combining different platforms — I think these sorts of [quality-control] issues will begin to go away over the next couple of years,” MacArthur says. “But at this stage, there is still a substantial quality-control component in any sort of interpretation process. We don’t have perfect genomes.”

In a 2011 Nature Biotechnology paper, Stanford University’s Michael Snyder and his colleagues sought to examine the accuracy and completeness of single-nucleotide variant and indel calls from both the Illumina and Complete Genomics platforms by sequencing the genome of one individual using both technologies. Though the researchers found that more than 88 percent of the unique single-nucleotide variants they detected were concordant between the two platforms, only around one-quarter of the indel calls they generated matched up. Overall, the authors reported having found tens of thousands of platform-specific variant calls, around 60 percent of which they later validated by genotyping array.
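A comparison like the Snyder team's can be sketched as set operations over normalized variant records. The toy call sets below are invented, and a real comparison must also reconcile representation differences such as indel left-alignment:

```python
# Hypothetical sketch: measuring concordance between variant calls from two
# platforms. Call sets are modeled as sets of (chrom, pos, ref, alt) tuples.

def concordance(calls_a, calls_b):
    """Return (fraction of the union shared, A-only count, B-only count)."""
    shared = calls_a & calls_b
    union = calls_a | calls_b
    return len(shared) / len(union), len(calls_a - shared), len(calls_b - shared)

# Invented SNV calls standing in for two platforms' outputs:
platform_a = {("chr1", 1000, "A", "G"), ("chr2", 500, "C", "T"), ("chr3", 42, "G", "A")}
platform_b = {("chr1", 1000, "A", "G"), ("chr2", 500, "C", "T"), ("chr4", 7, "T", "C")}

frac, a_only, b_only = concordance(platform_a, platform_b)
print(f"concordant fraction: {frac:.2f}, platform-specific calls: {a_only} + {b_only}")
```

The platform-specific counts are exactly the calls that, in the Snyder study, had to be adjudicated by an orthogonal technology such as a genotyping array.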

For clinical sequencing to ever become widespread, “we’re going to have to be able to show the same reproducibility and test characteristic modification as we have for, let’s say, an LDL cholesterol level,” Boston’s Kohane says. “And if you measure it in one place, it should not be too different from another place. … Even before we can get to the clinical meaning of the genomes, we’re going to have to get some industry-wide standards around quality of sequencing.”
Scripps’ Topol adds that when it comes to detecting rare variants, “there still needs to be a big upgrade in accuracy.”

Analytical issues

Beyond sequencing, technological advances must also be made on the analysis end. “The next thing, of course, is once you have better accuracy … being able to do all of the analytical work,” Topol says. “We’re getting better at the exome, but everything outside of protein-coding elements, there’s still a tremendous challenge.”

Indeed, that challenge has inspired another — a friendly competition among bioinformaticians working to analyze pediatric genomes in a pedigree study.

With enrollment closed and all sequencing completed, participants in the Children’s Hospital Boston-sponsored CLARITY Challenge have rolled up their shirtsleeves and begun to dig into the data — de-identified clinical summaries and exome or whole-genome sequences generated by Complete Genomics and Life Technologies for three children affected by rare diseases of unknown genetic basis, and their parents. According to its organizers, the competition aims to help set standards for genomic analysis and interpretation in a clinical setting, and for returning actionable results to clinicians and patients.

“A bunch of teams have signed up to provide clinical-grade reports that will be checked by a blue-ribbon panel of judges later this year to compare and contrast the different forms of clinical reporting at the genome-wide level,” Kohane says. The winning team will be announced this fall and will receive a $25,000 prize, he adds.

While the competition covers all aspects of clinical sequencing — from readout to reporting — it is important to recognize that, more generally, there may not be one right answer and that the challenges are far-reaching, affecting even the most basic aspects of analysis.

“There is a lot of algorithm investment still to be made in order to get very good at identifying the very rare or singleton sequence variants from the massively parallel sequencing reads efficiently, accurately, [and with] sensitivity,” Utah’s Tavtigian says.

Picking up a variant that has been seen before is one thing, but detecting a potentially causal, though as-yet-unclassified variant is a beast of another nature.

“Novel mutations usually need extensive knowledge but also validation. That’s one of the challenges,” says Zhongming Zhao, associate professor of biomedical informatics at Vanderbilt University. “Validation in terms of a disease study is most challenging right now, because it is very time-consuming, and usually you need to find a good number of samples with similar disease to show this is not by chance.”

Search for significance

Much as sequencing a human genome has become far less laborious than it was in the early to mid-2000s, genome interpretation has also become increasingly automated.

Beyond standard quality-control checks, the process of moving from raw data to calling variants is now semiautomatic. “There’s essentially no manual intervention required there, apart from running our eyes over [the calls], making sure nothing has gone horribly wrong,” says Mass General’s MacArthur. “The step that requires manual intervention now is all about taking that list of variants that comes out of that and looking at all the available biological data that exists on the Web, [coming] up with a short-list of genes, and then all of us basically have a look at all sorts of online resources to see if any of them have some kind of intuitive biological profile that fits with the disease we’re thinking about.”
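The manual triage step MacArthur describes, narrowing a variant list to a biologically plausible shortlist, might be sketched as a series of filters. The field names, frequency threshold, and candidate gene panel below are hypothetical:

```python
# Hypothetical sketch of variant triage: keep rare, protein-altering variants
# in genes plausibly linked to the phenotype. Thresholds and fields are
# illustrative assumptions, not any group's actual pipeline.

def shortlist(variants, candidate_genes, max_pop_freq=0.001):
    """Filter a variant list down to plausible disease-causing candidates."""
    damaging = {"missense", "nonsense", "frameshift", "splice"}
    return [
        v for v in variants
        if v["pop_freq"] <= max_pop_freq   # rare in reference populations
        and v["effect"] in damaging        # predicted protein-altering
        and v["gene"] in candidate_genes   # biologically plausible gene
    ]

variants = [
    {"gene": "MYH7", "effect": "missense",   "pop_freq": 0.0002},
    {"gene": "MYH7", "effect": "synonymous", "pop_freq": 0.0001},
    {"gene": "TTN",  "effect": "missense",   "pop_freq": 0.04},
]
hits = shortlist(variants, candidate_genes={"MYH7", "TTN"})
print(hits)  # only the rare MYH7 missense variant survives all three filters
```

Note that each filter is also a potential source of the false positives and false negatives discussed above: a miscurated population frequency or an incomplete gene panel silently changes the shortlist.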

Of course, intuitive leads are not foolproof, nor are current mutation databases. (See sidebar, story end.) And so, MacArthur says, “we need to start replacing the sort of intuitive biological approach with a much more data-informed approach.”

Developing such an approach hinges in part on having more genomes. “If we get thousands — tens of thousands — of people sequenced with various different phenotypes that have been crisply identified, that’s going to be so important because it’s the coupling of the processing of the data with having rare variants, structural variants, all the other genomic variations to understand the relationship of whole-genome sequence of any particular phenotype and a sequence variant,” Scripps’ Topol says.

Vanderbilt’s Zhao says that sample size is still an issue. “Right now, the number of samples in each whole-genome sequencing-based publication is still very limited,” he says. At the same time, he adds, “when I read peers’ grant applications, they are proposing more and more whole-genome sequencing.”

When it comes to disease studies, sequencing a whole swath of apparently healthy people is not likely to ever be worthwhile. According to Utah’s Tavtigian, “the place where it is cost-effective is when you test cases and then, if something is found in the case, go on and test all of the first-degree relatives of the case — reflex testing for the first-degree relatives,” he says. “If there is something that’s pathogenic for heart disease or colon cancer or whatever is found in an index case, then there is a roughly 50 percent chance that the first-degree relatives are going to carry the same thing, whereas if you go and apply that same test to someone in the general population, the probability that they carry something of interest is a lot lower.”
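The arithmetic behind Tavtigian's reflex-testing argument is simple expected-value reasoning. The test cost and the population carrier frequency below are illustrative assumptions:

```python
# Illustrative arithmetic for reflex testing: the cost per pathogenic variant
# found is far lower among first-degree relatives of a positive index case
# (~50% prior for a dominant variant) than in unselected individuals.
# The $1,000 test cost and 0.1% population frequency are assumptions.

def cost_per_positive(test_cost, carrier_prob):
    """Expected testing spend to find one carrier at a given prior probability."""
    return test_cost / carrier_prob

relative_cost = cost_per_positive(1000, 0.5)      # first-degree relative of a case
population_cost = cost_per_positive(1000, 0.001)  # unselected individual

print(relative_cost, population_cost)
```

Under these assumptions the per-carrier cost differs by a factor of 500, which is the whole case for anchoring testing on index cases and cascading outward.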

But more genomes, even familial ones, are not the only missing elements. To fill in the functional blanks, researchers require multiple data types.

“We’ve been pretty much sequence-centric in our thinking for many years now because that was where the attention [was],” Topol says. “But that leaves the other ‘omes out there.”

From the transcriptome to the proteome, the metabolome, the microbiome, and beyond — Topol says that because all the ‘omes contribute to human health, they all merit review.

“The ability to integrate information about the other ‘omics will probably be a critical direction to understand the underpinnings of disease,” he says. “I call it the ‘panoromic’ view — that is really going to become a critical future direction once we can do those other ‘omics readily. We’re quite a ways off from that right now.”

Mass General’s MacArthur envisages “rolling in data from protein-protein interaction networks and tissue expression data — pulling all of these together into a model that predicts, given the phenotype, given the systems that appear to be disrupted by this variant, what are the most likely set of genes to be involved,” he says. From there, whittling that set down to putative causal variants would be simpler.

“And at the end of that, I think we’ll end up with a relatively small number of variants, each of which has a probability score associated with it, along with a whole host of additional information that a clinician can just drill down into in an intuitive way in making a diagnosis in that individual,” he adds.

According to MacArthur, “we’re already moving in this direction — in five years I think we will have made substantial progress toward that.” He adds, “I certainly think within five years we will be diagnosing the majority of severe genetic disease patients; the vast majority of those we’ll be able to assign a likely causal variant using this type of approach.”

Tavtigian, however, highlights a potential pitfall. While he says that “integration of those [multivariate] data helps a lot with assessing unclassified variants,” it is not enough to help clinicians ascertain causality. Functional assays, which can be both inconclusive and costly, will be needed for some unclassified variant hits, particularly those that are thought to be clinically meaningful.

“I don’t see how you’re going to do a functional assay for less than like $1,000,” he says. “That means that unless the cost of the sequencing test also includes a whole bunch of money for assessing the unclassified variants, a sequencing test is going to create more of a mess than it cleans up.”

Rare, common

Despite the challenges, there have been plenty of clinical sequencing success stories. Already, Scripps’ Topol says there have been “two big fronts in 2012: One is the unknown diseases [and] the other one, of course, is cancer.” Scientists say that whole-genome sequencing might also become clinically useful for asymptomatic individuals in the future.

Down the line, scientists have their sights set on sequencing asymptomatic individuals to predict disease risk. “The long-term goal is to have any person walk off the street, be able to take a look at their genome and, without even looking at them clinically, say: ‘This is a person who will almost certainly have phenotype X,'” MacArthur says. “That is a long way away. And, of course, there are many phenotypes that can’t be predicted from genetic data alone.”

Nearer term, Boston’s Kohane imagines that newborns might have their genomes screened for a number of neonatal or pediatric conditions.

Overall, he says, it’s tough to say exactly where all of the chips might fall. “It’s going to be an interesting few years where the sequencing companies will be aligning themselves with laboratory testing companies and with genome interpretation companies,” Kohane says.

Even if clinical sequencing does not show utility for cases other than genetic diseases, it could still become common practice.

“Worldwide, there are certainly millions of people with severe diseases that would benefit from whole-genome sequencing, so the demand is certainly there,” MacArthur says. “It’s just a question of whether we can develop the infrastructure that is required to turn the research-grade genomes that we’re generating at the moment into clinical-grade genomes. Given the demand and the practical benefit of having this information … I don’t think there is any question that we will continue to drive, pretty aggressively, towards large-scale genome sequencing.”

Kohane adds that “although rare diseases are rare, in aggregate they’re actually not — 5 percent of the population, or 1 in 20, is beginning to look common.”

Despite conflicting reports as to its clinical value, given the rapid declines in cost, Kohane says it’s possible that a whole-genome sequence could be less expensive than a CT scan in the next five years. Confident that many of the interpretation issues will be worked out by then, he adds, “this soon-to-be-very-inexpensive test will actually have a lot of clinical value in a variety of situations. I think it will become part of the decision procedure of most doctors.”


[Sidebar] ‘Predictive Capacity’ Challenged

In Science Translational Medicine in April, Johns Hopkins University School of Medicine’s Nicholas Roberts and his colleagues showed that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals and concluded that whole-genome sequencing was unlikely to be useful for that purpose.

As the Scripps Research Institute’s Eric Topol says, that Roberts and his colleagues examined the predictive capacity of personal genome sequencing “without any genome sequences” was but one flaw of their interpretation.

In a comment appearing in the same journal in May, Topol elaborated on this criticism, and noted that the Roberts et al. study essentially showed nothing new. “We cannot know the predictive capacity of whole-genome sequencing until we have sequenced a large number of individuals with like conditions,” Topol wrote.

Elsewhere in the journal, Tel Aviv University’s David Golan and Saharon Rosset noted that slightly tweaking the gene-environment parameters of the mathematical model used by Roberts et al. showed that the “predictive capacity of genomes may be higher than their maximal estimates.”

Colin Begg and Malcolm Pike from Memorial Sloan-Kettering Cancer Center also commented on the study in Science Translational Medicine, reporting their alternative calculation of the predictive capacity of personal sequencing and their analysis of cancer occurrence in the second breast of breast cancer patients, both of which, they wrote, “offer a more optimistic view of the predictive value of genetic data.”

In response to those comments, Bert Vogelstein — who co-authored the Roberts et al. study — and his colleagues wrote in Science Translational Medicine that their “group was the first to show that unbiased genome-wide sequencing could illuminate the basis for a hereditary disease,” adding that they are “acutely aware of its immense power to elucidate disease pathogenesis.” However, Vogelstein and his colleagues also said that recognizing the potential limitations of personal genome sequencing is important to “minimize false expectations and foster the most fruitful investigations.”


[Sidebar] ‘The Single Biggest Problem’

That there is currently no comprehensive, accurate, and openly accessible database of human disease-causing mutations “is the single greatest failure of modern human genetics,” Massachusetts General Hospital’s Daniel MacArthur says.

“We’ve invested so much effort and so much money in researching these Mendelian diseases, and yet we have never managed as a community to centralize all of those mutations in a single resource that’s actually useful,” MacArthur says. While he notes that several groups have produced enormously helpful resources and that others are developing more, currently “none covers anywhere close to the whole of the literature with the degree of detail that is required to make an accurate interpretation.”

Because of this, he adds, researchers are pouring time and resources into rehashing one another’s efforts and chasing down false leads.

“As anyone at the moment who is sequencing genomes can tell you, when you look at a person’s genome and you compare it to any of these databases, you find things that just shouldn’t be there — homozygous mutations that are predicted to be severe, recessive, disease-causing variants and dominant mutations all over the place, maybe a dozen or more, that they’ve seen in every genome,” MacArthur says. “Those things are clearly not what they claim to be, in the sense that a person isn’t sick.” Most often, he adds, the researchers who reported that variant as disease-causing were mistaken. Less commonly, the database moderators are at fault.

“The single biggest problem is that the literature contains a lot of noise. There are things that have been reported to be mutations that just aren’t. And, of course, a lot of the databases are missing a lot of mutations as well,” MacArthur adds. “Until we have a complete database of severe disease mutations that we can trust, genome interpretation will always be far more complicated than it should be.”

Tracy Vence is a senior editor of Genome Technology.

Source: 

http://www.genomeweb.com/node/1098636/

NIST Consortium Embarks on Developing ‘Meter Stick of the Genome’ for Clinical Sequencing

September 05, 2012

The National Institute of Standards and Technology has founded a consortium, called “Genome in a Bottle,” to develop reference materials and performance metrics for clinical human genome sequencing.

Following an initial workshop in April, consortium members – which include stakeholders from industry, academia, and the government – met at NIST last month to discuss details and timelines for the project.

The current aim is to have the first reference genome — consisting of genomic DNA for a specific human sample and whole-genome sequencing data with variant calls for that sample — available by the end of next year, and another, more complete version by mid-2014.

“At present, there are no widely accepted genomics standards or quantitative performance metrics for confidence in variant calling,” the consortium wrote in its work plan, which was discussed at the meeting. Its main motivation is “to develop widely accepted reference materials and accompanying performance metrics to provide a strong scientific foundation for the development of regulations and professional standards for clinical sequencing.”

“This is like the meter stick of the genome,” said Marc Salit, leader of the Multiplexed Biomolecular Science group in NIST’s Materials Measurement Laboratory and one of the consortium’s organizers. He and his colleagues were approached by several vendors of next-generation sequencing instrumentation about the possibility of generating standards for assessing the performance of next-gen sequencing in clinical laboratories. The project, he said, will focus on whole-genome sequencing but will also include targeted sequencing applications.

The consortium, which receives funding from NIST and the Food and Drug Administration, is open for anyone to participate. About 100 people, representing 40 to 50 organizations, attended last month’s meeting, among them representatives from Illumina, Life Technologies, Pacific Biosciences, Complete Genomics, the FDA, the Centers for Disease Control and Prevention, commercial and academic clinical laboratories, and a number of large-scale sequencing centers.

Four working groups will be responsible for different aspects of the project: a group led by Andrew Grupe at Celera will select and design the reference materials; a group headed by Elliott Margulies at Illumina will characterize the reference materials experimentally, using multiple sequencing platforms; Steve Sherry at the National Center for Biotechnology Information is heading a bioinformatics, data integration, and data representation group to analyze and represent the experimental data; and Justin Johnson from EdgeBio is in charge of a performance metrics and “figures of merit” group to help laboratories use the reference materials to characterize their own performance.

The reference materials will include both human genomic DNA and synthetic DNA that can be used as spike-in controls. Eventually, NIST plans to release the references as Standard Reference Materials that will be “internationally recognized as certified reference materials of higher order.”

According to Salit, there was some discussion at the meeting about what sample to select for a national reference genome. The initial plan was to use a HapMap sample – NA12878, a female from the CEPH pedigree from Utah – but it turned out that HapMap samples are consented for research use only and not for commercial use, for example in an in vitro diagnostic or for potential re-identification from sequence data.

The genome of NA12878 has already been extensively characterized, and the CDC is developing it as a reference for clinical laboratories doing targeted sequencing. “We were going to build on that momentum and make our first reference material the same genome,” Salit said. But because of the consent issues, NIST’s institutional review board and legal experts are currently evaluating whether the sample can be used.

In the meantime, consortium members have been “quite enthusiastic” about using samples from Harvard University’s Personal Genome Project, which are broadly consented, Salit said.

The reference material working group issued a recommendation to develop a set of genomes from eight ethnically diverse parent-child trios as references, he said. For cancer applications, the references may also potentially include a tumor-normal pair.

The consortium will characterize all reference materials on several sequencing platforms. Several instrument vendors, as well as a couple of academic labs, have offered to contribute to data production. According to Justin Zook, a biomedical engineer at NIST and another organizer of the consortium, the current plan is to use sequencing technology from Illumina, Life Technologies, Complete Genomics, and – at least for the first genome – PacBio. Some of the sequencing will be done internally at NIST, which has Life Tech’s 5500 and Ion Torrent PGM available. In addition, the consortium might consider fosmid sequencing, which would provide phasing information and lower the error rate, as well as optical mapping to gain structural information, Zook said.

He and his colleagues have developed new methods for calling consensus variants from different data sets already available for the NA12878 sample, which they are planning to submit for publication in the near future. A fraction of the genotype calls will be validated using other methods, such as microarrays and Sanger sequencing. Consensus genotypes with associated confidence levels will eventually be released publicly as NIST Reference Data.

An important part of NIST’s work on the data analysis will be to develop probabilistic confidence estimates for the variant calls. It will also be important to distinguish between homozygous reference genotypes and areas in the genome “where you’re not sure what the genotype is,” Zook said, adding that this will require new data formats.
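A minimal sketch of consensus calling with an explicit no-call state, in the spirit of the hom-ref versus no-call distinction Zook describes. The genotype encoding and the agreement threshold are assumptions for illustration, not NIST's actual method:

```python
# Hypothetical sketch: majority-vote consensus genotypes across platforms,
# reporting "./." (no-call) rather than homozygous reference when the data
# are insufficient. Genotypes: "0/0", "0/1", "1/1", or None (no call made).
from collections import Counter

def consensus(genotypes, min_agree=2):
    """Return (consensus genotype, crude confidence) or ("./.", 0.0)."""
    called = [g for g in genotypes if g is not None]
    if not called:
        return "./.", 0.0           # no data: uncertain, NOT homozygous reference
    gt, count = Counter(called).most_common(1)[0]
    if count < min_agree:
        return "./.", 0.0           # insufficient agreement: report a no-call
    return gt, count / len(called)  # confidence = fraction of platforms agreeing

print(consensus(["0/1", "0/1", "0/0"]))  # two of three platforms agree on "0/1"
print(consensus([None, "1/1"]))          # a single call is not enough to trust
```

The key design point is the third state: a site where nothing disagrees with the reference is not the same as a site where the platforms actually confirmed the reference genotype, which is exactly why Zook says new data formats are needed.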

Coming up with confidence estimates for the different types of variants will be challenging, Zook said, particularly for indels and structural variants. Also, representing complex variants has not been standardized yet.

Several meeting participants called for “reproducible research and transparency in the analysis,” Salit said, and there were discussions about how to implement that at the technical level, including data archives so anyone can re-analyze the reference data.

One of the challenges will be to establish the infrastructure for hosting the reference data, which will require help from the NCBI, Salit said. Also, analyzing the data collaboratively is “not a solved problem,” and the consortium is looking into cloud computing services for that.

The consortium will also develop methods that describe how to use the reference materials to assess the performance of a particular sequencing method, including both experimental protocols and open source software for comparing genotypes. “We could throw this over the fence and tell someone, ‘Here is the genome and here is the variant table,'” Salit said, but, he noted, the consortium would like to help clinical labs use those tools to understand their own performance.
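The genotype-comparison step Salit describes reduces to standard accuracy metrics computed against the reference call set. A minimal sketch, with invented call sets:

```python
# Hypothetical sketch of a "figures of merit" comparison: score a lab's
# variant calls against a trusted reference truth set. The metric definitions
# are standard; the call sets are invented for illustration.

def performance(lab_calls, truth_calls):
    """Sensitivity and precision of lab_calls relative to truth_calls."""
    tp = len(lab_calls & truth_calls)   # true positives: in both sets
    fp = len(lab_calls - truth_calls)   # false positives: lab-only calls
    fn = len(truth_calls - lab_calls)   # false negatives: missed truth calls
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return {"sensitivity": sensitivity, "precision": precision, "fp": fp, "fn": fn}

truth = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T"), ("chr2", 50, "G", "A")}
lab = {("chr1", 100, "A", "G"), ("chr2", 50, "G", "A"), ("chr3", 9, "T", "C")}

print(performance(lab, truth))
```

This is the "genome and variant table thrown over the fence" scenario Salit mentions; the consortium's aim is to wrap such comparisons in vetted protocols and open-source software rather than leaving each lab to improvise them.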

Edge Bio’s Johnson, who is chairing the working group in charge of this effort, is also involved in developing bioinformatic tools to judge the quality of genomes for the Archon Genomics X Prize (CSN 11/2/2011). Salit said that NIST is “leveraging some excellent work coming out of the X Prize” and is collaborating with a member of the X Prize team on the consensus genotype calling project.

By the end of 2013, the consortium wants to have its first “genome in a bottle” and reference data with SNV and maybe indel calls available, which will not yet include all confidence estimates. Another version, to be released in mid-2014, will include further analysis of error rates and uncertainties, as well as additional types of variants, such as structural variation.

Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb’s In Sequence and Clinical Sequencing News. E-mail her here or follow her GenomeWeb Twitter accounts at @InSequence and @ClinSeqNews.
Source:

At AACC, NHGRI’s Green Lays out Vision for Genomic Medicine

July 16, 2012

LOS ANGELES – The age of genomic medicine is within “striking distance,” Eric Green, director of the National Human Genome Research Institute, told attendees of the American Association of Clinical Chemistry’s annual meeting here on Sunday.

Speaking at the conference’s opening plenary session, Green discussed NHGRI’s roadmap for moving genomic findings into clinical practice. While this so-called “helix to healthcare” vision may take many years to fully materialize, “I predict absolutely that it’s coming,” he said.

Green noted that rapid advances in DNA sequencing have put genomics on a similar development path as clinical chemistry, which is also a technology-driven field. “If you look over the history of clinical chemistry, whenever there were technology advances, it became incredibly powerful and new opportunities sprouted up left and right,” he said.

Green likened next-gen sequencing to the autoanalyzers that “changed the face of clinical chemistry” by providing a generic platform that enabled a range of applications. In a similar fashion, low-cost sequencing is becoming a “general purpose technology” that can not only read out DNA sequence but can also provide information about RNA, epigenetic modifications, and other associated biology, he said.

The “low-hanging fruit” for genomic medicine is cancer, where molecular profiling is already being used alongside traditional histopathology to provide information on prognosis and to help guide treatment, he said.

Another area where Green said that genomic medicine is already bearing fruit is pharmacogenomics, where genomic data is proving useful in determining which patients will respond to specific drugs.

Nevertheless, while it’s clear that “sequencing is already altering the clinical landscape,” Green urged caution. “We have to manage expectations and realize it’s going to be many years from going from the most basic information about our genome sequence to actually changing medical care in any serious way,” he said.

In particular, he noted that the clinical interpretation of genomic data is still a challenge. Not only are the data volumes formidable, but the functional role of most variants is still unknown, he noted.

This knowledge gap should be addressed over the next several years as NHGRI and other organizations worldwide sequence “hundreds of thousands” of human genomes as part of large-scale research studies.

“We’re increasingly thinking about how to use that data to actually do clinical care, but I want to emphasize that the great majority of this data being generated will and should be part of research studies and not part of primary clinical care quite yet,” Green said.

Source:

http://www.genomeweb.com/sequencing/aacc-nhgris-green-lays-out-vision-genomic-medicine

Startup Aims to Translate Hopkins Team’s Cancer Genomics Expertise into Patient Care

May 16, 2012

Researchers at Johns Hopkins University who helped pioneer cancer genome sequencing have launched a commercial effort intended to translate their experience into clinical care.

Personal Genome Diagnostics, founded in 2010 by Victor Velculescu and Luis Diaz, aims to commercialize a number of cancer genome analysis methods that have been developed at Hopkins over the past several decades. Velculescu, chief scientific officer of PGDx, is director of cancer genetics at the Ludwig Center for Cancer Genetics and Therapeutics at Hopkins; while Diaz, chief medical officer of the company, is director of translational medicine at the Ludwig Center.

Other founders include Ludwig Center Director Bert Vogelstein as well as Hopkins researchers Ken Kinzler, Nick Papadopoulos, and Shibin Zhou. The team has led a number of seminal cancer sequencing projects, including the first effort to apply large-scale sequencing to cancer genomes, one of the first cancer exome sequencing studies, and the discovery of a number of cancer-related genes, including TP53, PIK3CA, APC, IDH1 and IDH2.

Velculescu told Clinical Sequencing News that the 10-person company, headquartered in the Science and Technology Park at Johns Hopkins in Baltimore, is a natural extension of the Hopkins group’s research activities.

Several years ago, “we began receiving requests from other researchers, other physicians, collaborators, and then actually patients, family members, and friends, wanting us to do these whole-exome analyses on cancer samples,” he said. “We realized that doing this in the laboratory wasn’t really the best place to do it, so for that reason we founded Personal Genome Diagnostics.”

The goal of the company, he said, “is to translate this history of our group’s experience of cancer genetics and our understanding of cancer biology, together with the technology that has now become available, and to ultimately perform these analyses for individual patients.”

The fledgling company has reached two commercial milestones in the last several weeks. First, it gained CLIA certification for cancer exome sequencing using the HiSeq 2000. In addition, it secured exclusive licensing rights from Hopkins for a technology called digital karyotyping, developed by Velculescu and colleagues to analyze copy number changes in cancer genomes.

PGDx offers a comprehensive cancer genome analysis service that combines exome sequencing with digital karyotyping, which isolates short sequence tags from specific genomic loci in order to identify chromosomal changes as well as amplifications and deletions.
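The article doesn't detail the arithmetic behind digital karyotyping, but the core idea — counting short sequence tags in fixed genomic windows and comparing tumor to normal tag density — can be sketched in a few lines. This is a simplified illustration, not the licensed Hopkins method; the function name, bin size, and data layout are all assumptions for the example.

```python
from collections import Counter

def copy_number_ratios(tumor_tags, normal_tags, bin_size=100_000):
    """Bin genomic tag positions and compare tumor vs. normal tag density.

    tumor_tags / normal_tags: iterables of (chromosome, position) tuples,
    one per sequenced tag. Returns {(chrom, bin_index): ratio}, where a
    ratio near 1.0 is copy-neutral, well above 1 suggests amplification,
    and well below 1 suggests deletion.
    """
    def bin_counts(tags):
        return Counter((chrom, pos // bin_size) for chrom, pos in tags)

    t, n = bin_counts(tumor_tags), bin_counts(normal_tags)
    # Normalize for different total tag yields between the two libraries.
    t_total, n_total = sum(t.values()), sum(n.values())
    ratios = {}
    for key, n_count in n.items():
        t_freq = t.get(key, 0) / t_total
        n_freq = n_count / n_total
        ratios[key] = t_freq / n_freq
    return ratios
```

In practice a real implementation would also correct for mappability and GC content and smooth across adjacent bins, but the tumor-versus-normal ratio per genomic window is the quantity that exposes the amplifications and deletions the article describes.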

The company sequences tumor-normal pairs with a stated turnaround time of six to 10 weeks, though Velculescu said that ongoing improvements in sequencing technology and the team's analysis methods should reduce that time "significantly." It is currently seeing turnaround times of under a month.

To date, the company has focused solely on the research market. Customers have included pharmaceutical and biotech companies, individual clinicians and researchers, and contract research organizations, while the scale of these projects has ranged from individual patients to thousands of exomes for clinical trials.

While the company performs its own sequencing for smaller projects, it relies on third-party service providers for larger studies.

PGDx specializes in all aspects of cancer genome analyses, but has a particular focus on the front and back end of the workflow, Velculescu said, including “library construction, pathologic review of the samples, dissection of tumor samples to enrich tumor purity, next generation sequencing, identification of tumor-specific alterations, and linking of these data to clinical and biologic information about human cancer.”

The sequencing step in the middle, however, “is really almost becoming a commodity,” he noted. “Although we’ve done it in house, we typically do outsource it and that allows us to scale with the size of these projects.”

He said that PGDx typically works with “a number of very high-quality sequence partners to do that part of it,” but he declined to disclose these partners.

On the front end, PGDx has developed “a variety of techniques that we’ve licensed and optimized from Hopkins that have allowed us to improve extraction of DNA from both frozen tissue and [formalin-fixed, paraffin-embedded] tissue, even at very small quantities,” Diaz said. The team has also developed methods “to maximize our ability to construct libraries, capture, and then perform exomic sequencing with digital karyotyping.”

Once the sequence data is in hand, “we have a pipeline that takes that information and deciphers the changes that are most likely to be related to the cancer and its genetic make-up,” he said. “That’s not trivial. It requires inspection by an experienced cancer geneticist.”
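The report doesn't spell out how tumor-specific alterations are separated from inherited variants, but the first pass of any tumor/normal pipeline is a subtraction step: keep variants well supported in the tumor and effectively absent in the matched normal. A minimal sketch follows; the thresholds, function name, and data structures are illustrative assumptions, not PGDx's actual pipeline.

```python
def somatic_candidates(tumor_variants, normal_variants,
                       min_tumor_vaf=0.10, max_normal_vaf=0.02):
    """Keep variants seen in the tumor but effectively absent in the
    matched normal — the basic tumor/normal subtraction step.

    Each dict maps a site key (chrom, pos, ref, alt) to its variant
    allele fraction (alt reads / total reads at that site).
    """
    candidates = {}
    for site, tumor_vaf in tumor_variants.items():
        if tumor_vaf < min_tumor_vaf:
            continue  # too few supporting reads; likely sequencing noise
        if normal_variants.get(site, 0.0) > max_normal_vaf:
            continue  # present in the germline; not tumor-specific
        candidates[site] = tumor_vaf
    return candidates
```

The surviving candidates are what then require the expert annotation Diaz describes — linking each alteration to what is known about its gene and pathway — which is the part that, as he notes, resists full automation.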

While the firm is working on automating the analysis, “it’s not something that is entirely automatable at this time and therefore cannot be commoditized,” Diaz said.

The firm issues a report for its customers that “provides information not only on the actual sequence changes which are of high quality, but what these changes are likely to do,” Velculescu said, including “information about diagnosis, prognosis, therapeutic targeting [information] or predictive information about the therapy, and clinical trials.”

So far, the company has relied primarily on word of mouth to raise awareness of its offerings. “We’ve literally been swamped with requests from people who just know us,” Velculescu said. “I think one of the major reasons people have been coming to us for either these small or very large contracts is that people are getting this type of NGS data and they don’t know what to do with it — whether it’s a researcher who doesn’t have a lot of experience in cancer or a clinician who hasn’t seen this type of data before.”

While there’s currently “a wealth in the ability to get data, there’s an inadequacy in being able to understand and interpret the data,” he said.

Pricing for the company’s services is on a case-by-case basis, but Diaz estimated that retail costs are currently between $5,000 and $10,000 per tumor-normal pair for research purposes. Clinical cases are more costly because the depth of coverage is deeper and additional analyses are required, as well as a physician interpretation.

A Cautious Approach

While the company’s ultimate goal is to help oncologists use genomic information to inform treatment for their patients, PGDx is “proceeding cautiously” in that direction, Diaz said.

The firm has so far sequenced around 50 tumor-normal pairs for individual patients, but these have been for “informational purposes,” he said, stressing that the company believes the field of cancer genomics is still in the “discovery” phase.

“I think we’re really at the beginning of the genomic revolution in cancer,” Diaz said. “We are partnering with pharma, with researchers, and with certain clinicians to start bringing this forward — not only as a discovery tool but eventually as a clinical application.”

“We do think that rushing into this right now is too soon, but we are building the infrastructure — for example our recent CLIA approval for cancer genome analyses — to do that,” he added.

This cautious approach sets the firm apart from some competitors, including Foundation Medicine, which is about to launch a targeted sequencing test that it is marketing as a diagnostic aid to help physicians tailor therapy for their patients. Diagnostic firm Asuragen is also offering cancer sequencing services based on a targeted approach (CSN 1/12/12), as are a number of academic labs.

Diaz said that PGDx’s comprehensive approach also sets it apart from these groups. “We think there’s a lot of clinically actionable information in the genome … and we don’t want to limit ourselves by just looking at a set of genes and saying that these may or may not have importance.”

While the genes in targeted panels “may have some data surrounding them with regard to prognosis, or in relation to a therapy, that’s really only a small part of the story when it comes to the patient’s cancer,” Diaz said.

“That’s why we would like to remain the company that looks at the entire cancer genome in a comprehensive fashion, because we don’t know enough yet to break it down to a few genes,” he said.

The company’s proprietary use of digital karyotyping to find copy number alterations is another differentiator, Velculescu said, because many cancer-associated genes — such as p16, EGFR, MYC, and HER2/neu — are only affected by copy number changes, not point mutations.

Ultimately, “we want to develop something that has value for the clinician,” Diaz said. “A clinician currently sees 20 to 30 patients a day and may have only a few minutes to look at a report. If [information from sequencing] doesn’t have immediate high-impact value, it’s going to be very hard to justify its use down the road.”

He added that the company is “thinking very hard about what we can squeeze out of the cancer genome to provide that high-impact clinical value — something that isn’t just going to improve the outcome of patients by a few months or weeks, but actually change the outlook of that patient substantially.”

Source:

http://www.genomeweb.com/sequencing/startup-aims-translate-hopkins-teams-cancer-genomics-expertise-patient-care

 
Bernadette Toner is editorial director for GenomeWeb’s premium content. E-mail her here or follow her GenomeWeb Twitter account at @GenomeWeb.

In Educational Symposium, Illumina to Sequence, Interpret Genomes of 50 Participants for $5K Each

June 27, 2012

This story was originally published June 25.

As part of a company-sponsored symposium this fall to “explore best practices for deploying next-generation sequencing in a clinical setting,” Illumina plans to sequence and analyze the genomes of around 50 participants for $5,000 each, Clinical Sequencing News has learned.

According to Matt Posard, senior vice president and general manager of Illumina’s translational and consumer genomics business, the event is part of a “multi-step process to engage experts in the field around whole-genome sequencing, and to support the conversation.”

The “Understand your Genome” symposium will take place Oct. 22-23 at Illumina’s headquarters in San Diego.

The company sent out invitations to the event over the last few months, targeting individuals with a professional interest in whole-genome sequencing, including medical geneticists, pathologists, academics, and industry or business leaders, Posard told CSN this week. To provide potential participants with more information about the symposium, Illumina also hosted a webinar this month that included a Q&A session.

Registration closed June 14 and has exceeded capacity — initially 50 spots, a number that may increase slightly, Posard said. Everyone else is currently waitlisted, and Illumina plans to host additional symposia next year.

“There has been quite a bit of unanticipated enthusiasm around this from people who are speaking at the event or planning to attend the event,” including postings on blogs and listservs, Posard said.

As part of their $5,000 registration fee, which does not include travel and lodging, participants will have their whole genome sequenced in Illumina’s CLIA-certified and CAP-accredited lab prior to the event. It is also possible to participate without having one’s genome sequenced, but only as a companion to a full registrant, according to Illumina’s website. The company prefers that participants submit their own sample, but as an alternative, they may submit a patient sample instead.

The general procedure is very similar to Illumina’s Individual Genome Sequencing, or IGS, service in that it requires a prescription from a physician, who also receives the results to review them with the participant. However, participants pay less than they would through IGS, where a single human genome currently costs $9,500.

Participants will also have a one-on-one session with an Illumina geneticist prior to being sequenced, and they can choose to not receive certain medical information as part of the genome interpretation.

Doctors will receive the results and review them with the participants sometime before the event. “There will be no surprises for these participants when they come to the symposium,” Posard said.

Results will include not only a list of variants but also a clinical interpretation of the data by Illumina geneticists. This is currently not part of IGS, which requires an interpretation of the data by a third party, but Illumina plans to start offering interpretation services for IGS before the symposium, Posard said.

“Our stated intent has always been that we want to fill in all of the pieces that the physicians require, so we are building a human resource, as well as an informatics team, to provide that clinical interpretation, and we are using that apparatus for the ‘Understand your Genome’ event,” Posard said.

The interpretation will include “a specified subset of genes relating to Mendelian conditions, drug response, and complex disease risks,” according to the website, which notes that “as with any clinical test, the patient and physician must discuss any medically significant results.”

The first day of the symposium will feature presentations on clinical, laboratory, ethical, legal, and social issues around whole-genome sequencing by experts in the field. Speakers include Eric Topol from the Scripps Translational Science Institute, Matthew Ferber from the Mayo Clinic, Robert Green from Brigham and Women’s Hospital and Harvard Medical School, Heidi Rehm from the Harvard Partners Center for Genetics and Genomics, Gregory Tsongalis from the Dartmouth Hitchcock Medical Center, Robert Best from the University of South Carolina School of Medicine, Kenneth Chahine from Ancestry.com, as well as Illumina’s CEO Jay Flatley and chief scientist David Bentley.

On the second day, participants will receive their genome data on an iPad and learn how to analyze their results using the iPad MyGenome application that Illumina launched in April.

The planned symposium stirred some controversy at the European Society of Human Genetics annual meeting in Nuremberg, Germany, this week. During a presentation in a session on the diagnostic use of next-generation sequencing, Gert Matthijs, head of the Laboratory for Molecular Diagnostics at the Center for Human Genetics in Leuven, Belgium, said he was upset because the invitation to Illumina’s event apparently not only reached selected individuals but also patient organizations.

“To me, personally, [the event] tells that some people are really exploring the limits of business, and business models, to get us to genome sequencing,” he said.

“We have to be very careful when we put next-generation sequencing direct to the consumer, or to patient testing, but it’s a free world,” he added later.

Posard said that Illumina welcomes questions about and criticism of the symposium. “This is another example of us being extremely responsible and transparent in how we’re handling this novel application that everybody acknowledges is the wave of the future,” he said. “We want to responsibly introduce that wave, and I believe we’re doing so, through such things as the ‘Understand your Genome’ event, but not limited to this event.”

Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb's In Sequence and Clinical Sequencing News. E-mail her here or follow her GenomeWeb Twitter accounts at @InSequence and @ClinSeqNews.
Source:

Federal Court Rules Helicos Patent Invalid; Company Reaches Payment Agreement with Lenders

August 30, 2012

NEW YORK (GenomeWeb News) – A federal court has ruled in Illumina’s favor in a lawsuit filed by Helicos BioSciences that had alleged patent infringement.

In a decision dated Aug. 28, District Judge Sue Robinson of the US District Court for the District of Delaware granted Illumina’s motion for summary judgment declaring US Patent No 7,593,109 held by Helicos invalid for “lack of written description.”

Titled “Apparatus and methods for analyzing samples,” the patent relates to an apparatus, systems, and methods for biological sample analysis.

The ‘109 patent was the last of three patents that Helicos accused Illumina of infringing, following voluntary dismissal by Helicos earlier this year with prejudice of the other two patents. In October 2010 Helicos included Illumina and Life Technologies in a lawsuit that originally accused Pacific Biosciences of patent infringement.

Helicos dropped its lawsuit against Life Tech and settled with PacBio earlier this year, leaving Illumina as the sole defendant.

In seeking a motion for summary judgment, Illumina argued that the ‘109 patent does not disclose “a focusing light source operating with any one of the analytical light sources to focus said optical instrument on the sample.” Illumina’s expert witness further said that the patent “does not describe how focusing light source works” nor does it provide an illustration of such a system, according to court documents.

In handing down her decision, Robinson said, “In sum, and based on the record created by the parties, the court concludes that Illumina has demonstrated, by clear and convincing evidence, that the written description requirement has not been met.”

In a statement, Illumina President and CEO Jay Flatley said he was pleased with the court’s decision.

“The court’s ruling on the ‘109 patent, and Helicos’ voluntary dismissal of the other patents in the suit, vindicates our position that we do not infringe any valid Helicos patent,” he said. “While we respect valid and enforceable intellectual property rights of others, Illumina will continue to vigorously defend against unfounded claims of infringement.”

After the close of the market Wednesday, Helicos also disclosed that it had reached an agreement with lenders to waive defaults arising from Helicos’ failure to pay certain risk premium payments in connection with prior liquidity transactions. The transactions are part of risk premium payment agreement Helicos entered into with funds affiliated with Atlas Venture and Flagship Ventures in November 2010.

The lenders have agreed to defer the risk premium payments “until [10] business days after receipt of a written notice from the lenders demanding the payment of such risk premium payments,” Helicos said in a document filed with the US Securities and Exchange Commission.

The Cambridge, Mass.-based firm also disclosed that Noubar Afeyan and Peter Barrett have resigned from its board.

Helicos said two weeks ago that its second-quarter revenues dipped 29 percent year over year to $577,000. In an SEC document, it also warned that existing funds were not sufficient to support its operations and related litigation expenses through the planned September trial date for its dispute with Illumina.

In Thursday trade on the OTC market, shares of Helicos closed down 20 percent at $0.04.

Source:

http://www.genomeweb.com/sequencing/federal-court-rules-helicos-patent-invalid-company-reaches-payment-agreement-len

State of the Science: Genomics and Cancer Research

April 2012
Basic research allows for a better understanding of cancer and, eventually, improved patient outcomes. Zhu Chen, China’s minister of health, and Shanghai Jiao Tong University’s Zhen-Yi Wang received the seventh annual Szent-Györgyi prize from the National Foundation for Cancer Research for their work on a treatment for acute promyelocytic leukemia. Genome Technology‘s Ciara Curtin spoke to Chen, Wang, and past prize winners about the state of cancer research.

Genome Technology: Doctors Wang and Chen, can you tell me a bit about the work you did that led to you receiving the Szent-Györgyi prize?

Zhen-Yi Wang: I am a physician. I am working in the clinic, so I have to serve the patients. … I know the genes very superficially, not very deeply, but the question raised to me is: There are so many genes, but how are [we] to judge what is the most important?

Zhu Chen: The work that is recognized by this year’s Szent-Györgyi Prize concerns … acute promyelocytic leukemia. Over the past few decades, we have been involved in developing new treatment strategies against this disease.

You have two [therapies — all-trans retinoic acid and arsenic trioxide] — that target the same protein but with slightly different mechanisms, so we call this synergistic targeting. When the two drugs combine together for the induction therapy, then we see very nice response in terms of the complete remission rate. But more importantly, we see that this synergistic targeting, together with the effect of the chemotherapy, can achieve a very high five-year disease-free survival — as high as 90 percent.

But we were more interested in the functional aspects of the genome, to understand what each gene does and also to particularly understand the network behavior of the genes.

GT: There are a number of consortiums looking at the genome sequences of many cancer types. What do you hope to see from such studies?

Webster Cavenee: This is a way that tumors are being sequenced in a rational kind of way. It would have been done anyway by labs individually, which would have taken a lot more money and taken a lot longer, too. The human genome sequence, everybody said, ‘Why are you going to do that?’ … But that now turns out to be a tremendous resource. … From the point of view of The Cancer Genome Atlas, having the catalog of all of the kinds of mutations which are present in tumors can be very useful because you can see patterns. For example, in the glioblastoma cancer genome project, they found an unexpected association of some mutations and combinations of mutations with drug sensitivity. Nobody would have thought that.

The problem, of course, is that when you are sequencing all these tumors, it's a very static thing. You get one point in time and you sequence whatever comes out of this big lump of tissue. That big lump is made up of a lot of different kinds of pieces, so when you see a mutation, you can't know where it came from and you don't know whether it actually does anything. That then leads into what's going to be the functionalizing of the genome. Because in the absence of knowing that it has a function, it's not going to be of very much use to develop drugs or anything like that. And that's a much bigger exercise because that involves a lot of experiments, not just stuffing stuff into a sequencer.

Peter Vogt: [The genome] has to be used primarily to determine function. Without function, there's not much you can do with these mutations, because the distinction between a driver mutation and a passenger mutation can't be made just on the basis of sequence.

Carlo Croce: After that, you have to be able to validate all of the genetic operations in model systems where you can reproduce the same changes and see whether there are the same consequences. Otherwise, without validation, to develop therapy doesn’t make much sense because maybe those so-called driver mutations will turn out to be something else.

GT: Will sequencing of patient’s tumors come to the clinic?

CC: It is inevitable. Naturally, there are a lot of bottlenecks. To do the sequencing is the, quote, trivial part and it is going to cost less and less. But then interpreting the data might be a little bit more cumbersome.

Sujuan Ba: Dr. Chen, there is an e-health card in China right now. Do you think some day gene sequencing will be stored in that card?

ZC: We are developing a digital healthcare in China. We started with electronic health records and now by providing the e-health card to the people, that will facilitate the individualized health management and also the supervision of our healthcare system. In terms of the use of genetic information for clinical purposes, as Professor Croce said, it’s going to happen.

GT: What do you think are the major questions in cancer research that still need to be addressed?

PV: There are increasingly two schools of thought on cancer. One is that it is all an engineering problem: We have all the information we need, we just need to engineer the right drugs. The other school says it’s still a basic knowledge problem. I think more and more people think it’s just an engineering problem — give us the money and we’ll do it all. A lot of things can be done, but we still don’t have complete knowledge.

Roundtable Participants
Sujuan Ba, National Foundation for Cancer Research
Webster Cavenee, University of California, San Diego
Zhu Chen, Ministry of Health, China
Carlo Croce, Ohio State University
Peter Vogt, Scripps Research Institute
Zhen-Yi Wang, Shanghai Jiao Tong University

Source:

Read Full Post »

Tumor cell galectins blocked by citrus pectins.

Curator: Meg Baker, PhD, Reg Patent Agent

Posted 13 Mar 2012 in Modified Citrus Pectin (MCP)

Inhibition of Cancer Cell Growth and Metastases

By Jim English and Ward Dean, MD

Modified citrus pectin (MCP) is a unique dietary fiber produced by processing natural citrus pectin: altering its pH and splitting the carbohydrate chains yields a low-molecular-weight, water-soluble fiber rich in galactose. Galectins (GAL, LGALS) are a family of lectins containing conserved carbohydrate-recognition domains (CRDs) of about 130 amino acids with specificity for β-galactosides found on both N- and O-linked glycans. Cancer cells are prone to express several of the 15 identified galactose-binding lectins, transport them to the plasma membrane, and release soluble forms into the extracellular space, allowing diverse interactions involved in cell migration, adhesion, and angiogenesis. Specific galectins have unique functions: GAL3 plays a role in regulating transcription factors and miRNAs (Ramasamy S, et al. Mol Cell 27(6):992-1004, 21 Sept 2007, http://www.cell.com/molecular-cell/retrieve/pii/S1097276507005631). Gal-9 interaction with TIM-3 was recently demonstrated to play a role in suppression of the immune response (http://www.ncbi.nlm.nih.gov/pubmed/20574007). English and Dean review data showing that MCP derived from the pulp and peel of citrus fruits can attach to cancer cells, and that oral dosing with MCP prevented metastatic spread of injected prostate cancer cells in animals. These data provide evidence that daily ingestion of plant pectins may be beneficial in preventing or suppressing metastatic cancer.

Read Full Post »

Reporter: Prabodh Kandala, PhD

A drug commonly used in Japan and Korea to treat asthma has been found to stop the spread of breast cancer cells traditionally resistant to chemotherapy, according to a new study led by St. Michael’s pathologist Dr. Gerald Prud’homme.

“Tranilast, a drug approved for use in Japan and South Korea, and not in use in Canada or the U.S., has been used for more than two decades to treat asthma and other allergic disorders including allergic rhinitis and atopic dermatitis,” Dr. Prud’homme says. “Now, our study is the first to discover it not only stops breast cancer from spreading but how the drug targets breast cancer cells.”

Researchers grew breast cancer stem cells, which give rise to other cancer cells, in culture. The cells were injected into two groups of mice, including one group, which was also treated with tranilast. Dr. Prud’homme and his colleagues found the drug reduced growth of the primary cancerous tumour by 50 per cent and prevented the spread of the cancer to the lungs. Researchers also identified a molecule in the cancer cell that binds to tranilast and appears to be responsible for this anti-cancer effect.

Tranilast binds to a molecule known as the aryl hydrocarbon receptor (AHR), which regulates cell growth and some aspects of immunity. This makes the drug beneficial in treating allergies, inflammatory diseases and cancer.

"For the first time, we were able to show that tranilast shows promise for breast cancer treatment at levels commonly well tolerated by patients who use the drug for other medical conditions," Dr. Prud'homme said. "These results are very encouraging and we are expanding our studies. Further studies are necessary to determine if the drug is effective against different types of breast and other cancers, and its interaction with anti-cancer drugs."

Ref:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0013831

http://www.sciencedaily.com/releases/2010/11/101103171449.htm

Read Full Post »

Reporter: Prabodh Kandala, PhD

A compound found in cannabis may prove to be effective at helping stop the spread of breast cancer cells throughout the body.

The study, by scientists at the California Pacific Medical Center Research Institute, is raising hope that CBD, a compound found in Cannabis sativa, could be the first non-toxic agent to show promise in treating metastatic forms of breast cancer.

“Right now we have a limited range of options in treating aggressive forms of cancer,” says Sean D. McAllister, Ph.D., a cancer researcher at CPMCRI and the lead author of the study. “Those treatments, such as chemotherapy, can be effective but they can also be extremely toxic and difficult for patients. This compound offers the hope of a non-toxic therapy that could achieve the same results without any of the painful side effects.”

The researchers used CBD to inhibit the activity of a gene called Id-1, which is believed to be responsible for the aggressive spread of cancer cells throughout the body, away from the original tumor site.

“We know that Id-1 is a key regulator of the spread of breast cancer,” says Pierre-Yves Desprez, Ph.D., a cancer researcher at CPMCRI and the senior author of the study. “We also know that Id-1 has also been found at higher levels in other forms of cancer. So what is exciting about this study is that if CBD can inhibit Id-1 in breast cancer cells, then it may also prove effective at stopping the spread of cancer cells in other forms of the disease, such as colon and brain or prostate cancer.”

However, the researchers point out that while their findings are promising they are not a recommendation for people with breast cancer to smoke marijuana. They say it is highly unlikely that effective concentrations of CBD could be reached by smoking cannabis. And while CBD is not psychoactive it is still considered a Schedule 1 drug.

Ref:

http://www.sciencedaily.com/releases/2007/11/071123211703.htm

 

Read Full Post »
