
Archive for the ‘Biological Networks, Gene Regulation and Evolution’ Category

MicroRNA in Serum as Biomarker for Cardiovascular Pathologies: acute myocardial infarction, viral myocarditis, diastolic dysfunction, and acute heart failure

Reporter: Aviva Lev-Ari, PhD, RN

Increased MicroRNA-1 and MicroRNA-133a Levels in Serum of Patients With Cardiovascular Disease Indicate Myocardial Damage

Yasuhide Kuwabara, MD, Koh Ono, MD, PhD, Takahiro Horie, MD, PhD, Hitoo Nishi, MD, PhD, Kazuya Nagao, MD, PhD, Minako Kinoshita, MD, PhD, Shin Watanabe, MD, PhD, Osamu Baba, MD, Yoji Kojima, MD, PhD, Satoshi Shizuta, MD, Masao Imai, MD, Toshihiro Tamura, MD, Toru Kita, MD, PhD and Takeshi Kimura, MD, PhD

Author Affiliations

From the Department of Cardiovascular Medicine, Graduate School of Medicine, Kyoto University, Kyoto, Japan (Y. Kuwabara, K.O., T.H., H.N., K.N., M.K., S.W., O.B., Y. Kojima, S.S., M.I., T.T., T. Kimura); and Kobe City Medical Center General Hospital, Kobe, Japan (T. Kita).

Correspondence to Koh Ono, MD, PhD, Department of Cardiovascular Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin-kawahara-cho, Sakyo-ku, Kyoto, Japan 606-8507. E-mail kohono@kuhp.kyoto-u.ac.jp

Abstract

Background—Recently, elevation of circulating muscle-specific microRNA (miRNA) levels has been reported in patients with acute myocardial infarction. However, it is still unclear from which part of the myocardium or under what conditions miRNAs are released into circulating blood. The purpose of this study was to identify the source of elevated levels of circulating miRNAs and their function in cardiovascular diseases.

Conclusions—These results suggest that elevated levels of circulating miRNA-133a in patients with cardiovascular diseases originate mainly from the injured myocardium. Circulating miR-133a can be used as a marker for cardiomyocyte death, and it may have functions in cardiovascular diseases.

SOURCE:

Circulation: Cardiovascular Genetics. 2011; 4: 446-454

Published online before print June 2, 2011,

doi: 10.1161/CIRCGENETICS.110.958975

 

Read Full Post »

Common Heart Failure: Clinical Considerations of Heritable Factors

Reporter: Aviva Lev-Ari, PhD, RN

 

Clinical Considerations of Heritable Factors in Common Heart Failure

Thomas P. Cappola, MD, ScM and Gerald W. Dorn II, MD

Author Affiliations

From the Department of Medicine, University of Pennsylvania, Philadelphia, PA (T.P.C.), and Center for Pharmacogenomics, Washington University School of Medicine, St Louis, MO (G.W.D.II.).

Correspondence to Gerald W. Dorn II, MD, Center for Pharmacogenomics, Washington University, 660 S Euclid Ave, Campus Box 8220, St Louis, MO 63110. E-mail gdorn@dom.wustl.edu

Introduction

Heart failure is a common condition responsible for at least 290 000 deaths each year in the United States alone.1 A small minority of heart failure cases are attributed to Mendelian or familial cardiomyopathies. The majority of systolic heart failure cases are not familial but represent the end result of 1 or many conditions that primarily injure the myocardium sufficiently to diminish cardiac output in the absence of compensatory mechanisms. Paradoxically, because they also injure the myocardium, it is the chronic actions of the compensatory mechanisms that in many instances contribute to the progression from simple cardiac injury to dilated cardiomyopathy and overt heart failure. Thus, the epidemiology of common heart failure appears to be just as sporadic as its major antecedent conditions (atherosclerosis, diabetes, hypertension, and viral myocarditis).

Familial trends in preclinical cardiac remodeling2 and risk of developing heart failure3 reveal an important role for genetic modifiers in addition to clinical and environmental factors. Candidate gene studies performed over the past 10 years have identified a few polymorphic gene variants that modify risk or progression of common heart failure.4 Whole-genome sequencing will lead to the discovery of other genetic modifiers that were not candidates.5 The imminent availability of individual whole-genome sequences at a cost competitive with available genetic tests for familial cardiomyopathy will no doubt further expand the list of putative genetic heart failure modifiers. Heart failure risk alleles along with traditional clinical factors will need to be considered by clinical cardiologists in their design of optimal disease surveillance and prevention programs and in individually tailoring heart failure management.

The use of individual genetic make-up is likely to have the earliest and greatest impact on managing patients with heart failure by tailoring available pharmacotherapeutics to optimize patient response and minimize adverse effects (ie, the area of pharmacogenetics). Modern heart failure management has been derived and directed by the results of large, randomized, multicenter clinical trials. When standard therapies are applied according to the selection criteria used in these trials, they prolong average survival across affected populations or decrease the incidence of heart failure in populations at risk.6 For this reason, standardized treatment guidelines prescribe heart failure therapies according to trial designs, aiming for the same target doses and general treatment approaches,7 and largely ignore individual characteristics. In this article, we review established and emerging knowledge of genetic influence on common heart failure and try to anticipate how these genetic factors may be best used to eschew the cookie-cutter approach to heart failure management and move toward implementing a personalized medicine approach for the treatment and prevention of this important and prevalent disease.

The Concept of Genotype-Directed Personal Medical Management in Heart Failure

Variation in clinical heart failure progression and therapeutic response (either benefits or side effects) supports the need for a more individualized approach to disease management. On the basis of clinical stratification (eg, by etiology of heart failure as ischemic versus nonischemic, functional status, comorbid disease), physicians try to match each patient’s specific heart failure syndrome with a therapeutic regime devised to provide the most benefit. Standard heart failure pharmacotherapy currently comprises a minimum of 3 medications (angiotensin-converting enzyme [ACE] inhibitors, β-blockers, and aldosterone antagonists), with consideration of additional medications (hydralazine/isosorbide, angiotensin receptor blockers) and diuretics. The recommended target dosages for these agents, derived from their respective clinical trials, is rarely achieved,8 partly because of untoward clinical side effects such as low blood pressure or renal dysfunction. Accordingly, the published guidelines most often are applied in each individual patient using ad hoc approaches derived from personal experience and the “art of medicine.”

Technological advances in human genomics promise a different approach and are bringing cardiology into an era of clinically applied pharmacogenetics9 (whether we want to or not). As sequencing costs decline, it is not hard to envision that patients will present having had their entire genome already sequenced. The imperative to apply genome information in clinical settings will increase, as demonstrated by recent proof-of-concept studies.10 Our field seems poorly prepared for this type of evolution in care; Roden et al9 identified 3 major barriers: First is the absence of rapidly available genotype information in the clinical workflow. This barrier is being overcome with whole-genome sequencing, which (with proper analysis) promises a permanent and largely immutable genetic roadmap for individual disease risk and drug response at a cost comparable to many other clinical tests.11 Second, we must have the knowledge to properly apply information on genetic variants for the diseases we are managing and the drugs we are using. As we describe, this knowledge is accumulating for heart failure and for other cardiac conditions, and the rate at which we are gaining additional information and developing further expertise appears to be accelerating.

The third and perhaps most formidable barrier is the lack of clinical evidence showing how real-time application of genetic information can best benefit patients. As has been broadly communicated to the medical community and lay public, common functional gene variants in CYP2C19 can impair the transformation of clopidogrel into its active metabolite, leading to increased risk of stent thrombosis after percutaneous coronary intervention.12 The relevant question thus becomes the following: If physicians had this information at the time of clinical care and reacted by adjusting the clopidogrel dose or substituting prasugrel, which is unaffected by CYP2C19 genotype,13 would there be any improvement in clinical outcome? It is also important to consider whether any observed benefits justify the additional costs of genetic testing and of the alternate drug. Studies are currently examining these questions, and similar clinical trials will prospectively examine whether a genotype-guided strategy of warfarin dosing will be superior to the standard genotype-blinded approach in reaching target anticoagulation goals. At this time, there are no similar prospective, randomized, blinded trials of genotype-guided care for common heart failure.

Emerging Variants

The variants described here are established, but new ones are emerging. Although findings in heart failure genome-wide association studies have been limited, we can expect additional common heart failure variants to emerge as sample sizes increase.65 The CHARGE (Cohorts for Heart and Aging Research in Genomic Epidemiology) consortium published a genome-wide association study of incident heart failure that tested for associations between >2.4 million HapMap-imputed polymorphisms in >20 000 subjects.7 They identified 2 loci associated with heart failure, rs10519210 (15q22, containing USP3 encoding a ubiquitin-specific protease) in subjects of European ancestry and rs11172782 (12q14, containing LRIG3 encoding a leucine-rich, immunoglobulin-like domain-containing protein of uncertain function) in subjects of African ancestry.66 In a companion study using the same population and genotyping results, mortality analysis of the subgroup of individuals who developed heart failure implicated an intronic SNP in CMTM7 (CKLF-like MARVEL transmembrane domain-containing 7).67 These genetic associations require independent replication and further study to identify the underlying biological mechanisms.

A recently published genome-wide association study by a European consortium on dilated cardiomyopathy identified common variants in BAG3 (BCL2-associated athanogene 3) associated with heart failure57 and identified rare BAG3 missense and truncation mutations that segregate with familial cardiomyopathy. These findings were consistent with an earlier exome-sequencing study that identified BAG3 as a familial dilated cardiomyopathy gene and showed recapitulation of cardiomyopathy with BAG3 morpholino knockdown in zebrafish.68 Together, these studies convincingly support variation in BAG3 as a genetic risk factor for cardiomyopathy and heart failure. It is noteworthy that both common and rare functional variations were identified at this locus. A unifying hypothesis for these findings, which needs to be formally tested, is that common variants in BAG3 serve as proxies for rare functional BAG3 mutations with large effects. In this situation, the underlying genetic lesion is a rare variant with a large functional effect. This has recently been described for common variants in MYH6 that correlated with rare functional MYH6 variants to cause sick sinus syndrome.69 It is premature to speculate on the clinical applications of these newer findings.

Moving Knowledge to Practice

A small number of genomic variants have been identified that modify heart failure by affecting well-understood physiological systems. The principal barrier preventing their adoption in practice may be lack of evidence showing how application of this information can best be used for clinical benefit. Trials testing genotype targeting of antiplatelet therapy and anticoagulation will be completed in the coming years. The findings from these studies will likely determine the level of enthusiasm for conducting genotype-guided trials of β-blockers and RAAS antagonists in heart failure. Given that the lifetime risk of heart failure in the United States is estimated at 1 in 5, even a small favorable effect on heart failure prevention or outcome through use of genome-guided therapy has the potential for a large public health impact. We therefore believe that a near-term goal should be to conduct pharmacogenomic trials in heart failure based on our current understanding of heart failure variants.

Looking ahead, unbiased approaches will continue to reveal a large number of heart failure-modifying variants (both common and rare). Based on experience in other complex phenotypes, such as height70 and plasma lipid levels,71 the underlying genetic mechanisms for many new heart failure variants will be completely unknown, and their sheer number will preclude detailed experimentation using murine models to figure them out. Leveraging these variants for clinical application is a challenge that we will be forced to confront.

As our ability to identify rare, disease-causing variants improves through personal genome sequencing, we will be faced with the additional problem of how best to estimate the disease risk conferred by a sequence variant for which there has been no biological validation. In probabilistic terms, because there are 3 billion nucleotides in the human genome and over twice that many humans on the planet, it is likely that a nucleotide substitution for every position is represented in someone. Obviously, it will be impossible to recombinantly express and functionally characterize every DNA variant that is going to be implicated in heart failure. Bioinformatics filters have been used to try to separate functionally significant from insignificant variants based on the likelihood of changing transcript expression or protein function. These tools are limited but will improve if we tailor their results to the known characteristics of each gene product. For example, current approaches to categorize amino acid substitutions as conservative or nonconservative based only on charge or side chains can be improved by molecular modeling that incorporates protein-specific structure-function information. This approach has been used to estimate the pathogenicity of myosin heavy chain (MHC) mutations in an effort to determine which mutations are likely to cause familial cardiomyopathy when linkage analysis is not feasible.72 In concept, this approach can be applied to any protein for which structure-function activities have been finely mapped to distinct domains.
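As a toy illustration of the simple charge/side-chain filters described above, the sketch below labels an amino acid substitution as conservative or nonconservative purely by side-chain property group. The groupings and function names here are illustrative assumptions of ours, not the actual tools used in the cited work; production filters use far richer structural and evolutionary features.

```python
# Naive side-chain property groups for the 20 standard amino acids.
# These groupings are a common textbook simplification, used here only
# to illustrate the idea of a "conservative vs. nonconservative" filter.
PROPERTY_GROUPS = {
    "nonpolar": set("AVLIMFWPG"),
    "polar": set("STCYNQ"),
    "positive": set("KRH"),
    "negative": set("DE"),
}

def classify_substitution(ref: str, alt: str) -> str:
    """Label a substitution by whether the side-chain property group changes."""
    ref_group = next(g for g, aas in PROPERTY_GROUPS.items() if ref in aas)
    alt_group = next(g for g, aas in PROPERTY_GROUPS.items() if alt in aas)
    return "conservative" if ref_group == alt_group else "nonconservative"

print(classify_substitution("D", "E"))  # Asp -> Glu: both negatively charged
print(classify_substitution("R", "W"))  # Arg -> Trp: positive -> nonpolar
```

As the article notes, such coarse filters are exactly what protein-specific structure-function modeling aims to improve on.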

A promising extension of this approach may be to use evolutionary genetics to infer disease causality. Again, using the MHC genes as examples, human genome data show a greater prevalence of nonsynonymous gene variants in MYH6, which encodes the minor cardiac α-MHC isoform, compared with the adjacent MYH7, which encodes the major β-MHC isoform. This disparity suggests a greater tolerance for protein changes in the α-MHC isoform and negative selection against these in β-MHC. We can infer, therefore, that amino acid changes are more likely to have adverse impacts in MYH7-encoded β-MHC. If this paradigm survives prospective testing, then the forthcoming explosion of individual genetic data not only will present a massive problem in interpretation, but also will provide the genetic information by which analyses of rare sequence variants across large unaffected populations can help to differentiate the tolerable variants from those that are more likely to alter disease risk.
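The evolutionary inference described above can be expressed as a back-of-envelope comparison of how much protein-altering variation each paralog tolerates. The variant counts below are entirely hypothetical, made up to show the shape of the calculation; a real analysis would use population sequencing data and length-normalized rates.

```python
# Crude tolerance comparison between two paralogs using the ratio of
# nonsynonymous (protein-altering) to synonymous variants observed in a
# population. All counts below are hypothetical placeholders.
def ns_ratio(nonsynonymous: int, synonymous: int) -> float:
    """Nonsynonymous-to-synonymous variant ratio for one gene."""
    return nonsynonymous / synonymous

myh6 = ns_ratio(nonsynonymous=120, synonymous=90)  # hypothetical counts
myh7 = ns_ratio(nonsynonymous=45, synonymous=95)   # hypothetical counts

# A lower ratio suggests stronger purifying selection against amino acid
# changes, i.e., substitutions there are more likely to be deleterious.
more_constrained = "MYH7" if myh7 < myh6 else "MYH6"
print(more_constrained)
```

Under these made-up counts the calculation reproduces the article's qualitative claim: the beta-MHC gene appears less tolerant of amino acid changes than its alpha-MHC neighbor.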

Each Reference above is found in:

http://circgenetics.ahajournals.org/content/4/6/701.full

SOURCE: 

Circulation: Cardiovascular Genetics. 2011; 4: 701-709

doi: 10.1161/CIRCGENETICS.110.959379

 

Read Full Post »

Not Lower Levels of Serotonin, but Damaged Brain Synapses as the Origin for Mental Depression

Reporter: Aviva Lev-Ari, PhD, RN

Israeli discovery matches right antidepressant for each patient

A genetic study suggests that depression may be caused not by a lack of serotonin, but by damage to brain synapses.

It all comes down to a simple blood test (illustrative). Photo by Dreamstime

By Ido Efrati
Published 01:00 09.12.13
A new discovery by Tel Aviv University researchers may make it possible to prescribe the most effective antidepressant based on a simple blood test, avoiding the long and often difficult process of medication adjustment that is currently done by trial and error.

The scientists were able to identify genes in blood cells that are linked to the creation of receptors in brain cells and that respond differently to antidepressants in different people. The study by Dr. David Gurwitz and Dr. Noam Shomron, which was recently published in the journal Translational Psychiatry, could change perceptions about the origins of depression and the mechanisms that trigger it.

“People suffering from depression are in great distress and find it very difficult to go through the process of treatment adjustments, which can take weeks or months,” said Shomron, who heads the Genome High-Throughput Sequencing Laboratory at TAU’s Sackler Faculty of Medicine. “We chose to focus on paroxetine, a very common drug for depression, which is sold in Israel under the trade names Seroxat, Paxxet, Paxil, Parotin and Paroxetine-Teva. We were looking for a faster, easier and more effective way to find out how [paroxetine] would affect a particular patient.”

Paroxetine belongs to the SSRI family of drugs that inhibit the re-absorption of serotonin in the brain, the best-known and most popular of which are Prozac and Cipralex. “These drugs do not help all those suffering from depression, and in many cases one must keep trying drugs from other families by trial and error. Meanwhile, the patients and their families suffer,” explained Gurwitz, who heads the National Laboratory for the Genetics of Israeli Populations at Sackler.

One of the interesting things about the research is that it did not involve people suffering from depression. Rather than examine the effect of the drug on patients, the researchers added paroxetine to 80 samples of cultured white blood cells taken from healthy volunteers.

The results showed that in some cases the drugs inhibited cell division in the cultures significantly, while in others the delay was relatively minor. The researchers then focused on those cases with the most extreme responses: the 10 cultures that were most affected by the addition of paroxetine, and those least affected. The aim was to see whether there were significant differences between the two extremes on the genetic and molecular levels. By using a genetic chip, the researchers were able to perform a comprehensive molecular profile of all the selected samples.
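The extreme-responder design described above can be sketched roughly as follows. All data here are simulated stand-ins; the actual study used expression microarrays and proper statistics rather than this toy comparison.

```python
# Sketch of an extreme-responder design: rank 80 cultures by drug-induced
# growth inhibition, take the 10 most- and 10 least-affected, then compare
# expression of a candidate gene between the two groups. Simulated data only.
import random
random.seed(0)

n_cultures = 80
inhibition = [random.random() for _ in range(n_cultures)]  # simulated response

ranked = sorted(range(n_cultures), key=lambda i: inhibition[i])
least_affected = ranked[:10]   # weakest response to the drug
most_affected = ranked[-10:]   # strongest response

# Simulated expression of one candidate gene (CHL1-like) per culture.
expression = [random.gauss(5.0, 1.0) for _ in range(n_cultures)]

def group_mean(indices):
    return sum(expression[i] for i in indices) / len(indices)

diff = group_mean(most_affected) - group_mean(least_affected)
print(f"mean expression difference between extremes: {diff:.2f}")
```

Comparing only the tails, as the researchers did, maximizes the contrast between responder phenotypes before profiling them molecularly.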

“The result surprised us so much that we started to check if we’d made some mistake,” said Shomron. “We discovered that the single biggest difference between the two groups was the level of expression of a gene known as CHL1. Until then, no one had ever linked that particular gene to depression.”

Dr. Gurwitz noted, however, that the protein encoded by the gene CHL1 is recognized in scientific literature as essential for creating synapses (connections between neurons) in the brain. “Our findings suggest that depression may be caused not by lack of serotonin, as is written today in medical books, but because of damage to the synapses, probably resulting from a lack of proteins that repair synapses damaged by stress,” he says.

Giving the researchers a boost is a large clinical study recently published in the United States involving some 1,400 patients treated with the antidepressant Citalopram. Those findings also suggest a link between the gene CHL1 and the response to depression treatment.

Since the 1990s, Gurwitz said, hundreds of genetic studies have dealt with antidepressants. “But almost all of them began with the assumption that the main cause of depression is a lack of serotonin in the brain.” The approach of the two Israelis was totally different, he said. “We chose to look at all the genes of the human genome, about 25,000 genes and see which are affected by antidepressants. We believed the genetic diversity between people would surely be reflected in their response to drugs, which can be measured in vitro.”

The two said that this new insight could lead to a new type of antidepressant, which, instead of boosting serotonin levels in the brain – which are associated with depression, but probably not the cause – could improve the process of repairing damaged synapses.

Read Full Post »

Saudi Human Genome Program, International Barcode of Life Project

Reporter: Aviva Lev-Ari, PhD, RN

Life Tech Becomes Partner in Saudi Human Genome Program, International Barcode of Life Project

December 09, 2013

NEW YORK, GenomeWeb − Life Technologies has become a partner in two research projects, the Saudi Human Genome Program and the International Barcode of Life (iBOL) project, the company said this week. The projects will employ the firm’s Ion Proton, capillary electrophoresis, and PGM sequencing technologies.

The goal of the Saudi Human Genome Project, led by Saudi Arabia’s national funding agency, the King Abdulaziz City for Science and Technology (KACST), is to study the genetic basis of disease in Saudi Arabia and the Middle East.

Over the next five years, the project aims to sequence 100,000 genomes from individuals in the region using Life Tech's Ion Proton technology. Sequencing will initially be conducted at 10 genome centers across Saudi Arabia, with five additional centers to be created in the future.

Life Tech will design and equip the centers, and provide “end-to-end solutions” and services for operations and informatics. Integrated Gulf Biosystems, Life Tech’s distributor in the Middle East, said it played a “pivotal role” in bringing Life Tech’s technology to KACST.

Results from the project will be used to build a Saudi-specific database, providing the basis for future personalized medicine in the Kingdom. Specifically, the information is expected to help with premarital and prenatal screening for rare genetic diseases, as well as for population studies.

SOURCE

http://www.genomeweb.com/node/1321146

 

Read Full Post »

Species-specific Genetic Barcodes generated by Life Tech’s Capillary Electrophoresis Sequencers

Reporter: Aviva Lev-Ari, PhD, RN

Life Tech said that it has also partnered with the Canadian Centre for DNA Barcoding for the iBOL project, a biodiversity study that aims to genetically catalog 500,000 species by late 2015 and 5 million in total.

Project researchers will use Life Tech’s capillary electrophoresis sequencers to generate species-specific genetic barcodes, which will be deposited in a reference library called Barcode of Life Data System. The partnership will focus on a project to study insects around the world and another one to study biodiversity patterns in Central and South America.

In addition, Life Tech and the center will work on developing metagenomic barcoding applications using the PGM sequencer.


SOURCE

http://www.genomeweb.com/node/1321146

Read Full Post »

Computationally designed “self”-peptide could be used to better target drugs to tumors, to ensure pacemakers are not rejected, and to enhance medical imaging technologies

Reporter: Aviva Lev-Ari, PhD, RN

Synthetic Peptide Fools Immune System

Researchers have created a molecule that helps nanoparticles evade immune attack and could improve drug delivery.

By Dan Cossins | February 21, 2013

 

A macrophage at work in a mouse, stretching itself to gobble up two smaller particles. FLICKR, MAGNARAM

A synthetic molecule attached to nanoparticles acts like a passport, convincing immune cells to let the particles pass unimpeded through the body, according to a study published today (February 21) in Science. The computationally designed “self”-peptide could be used to better target drugs to tumors, to ensure pacemakers are not rejected, and to enhance medical imaging technologies.

“It’s the first molecule that can be attached to anything to attenuate the innate immune system, which is currently limiting us from delivering therapeutic particles and implanting devices,” said Dennis Discher, a professor of biophysical engineering at the University of Pennsylvania and a coauthor of the study.

“This is really interesting work,” said Joseph DeSimone, a chemical engineer at the University of North Carolina, Chapel Hill, who was not involved in the research, in an e-mail to The Scientist. “[It] strongly validates the idea of using biological evasion strategies.”

Macrophages recognize, engulf, and clear out foreign invaders, whether they’re microbes entering through a wound or a drug-loaded nanoparticle injected to target disease. Previously, researchers have attempted to escape this response by coating nanoparticles with polymer “brushes” to physically block the adhesion of blood proteins that alert macrophages to the particles’ presence. But these brushes can only delay the macrophage-signaling proteins for so long, and they can hinder uptake by the diseased cells being targeted.

With that in mind, Discher and colleagues tried instead to find a way to convince macrophages that nanoparticles are part of the body. Their previous research had shown that a membrane protein called CD47, which binds to macrophages in humans, signals “self” to the immune system, so that particles with this protein are not attacked.

Examining the architecture of the bond between CD47 and its macrophage receptor, SIRPα, the researchers were able to design a synthetic self-peptide with a similarly snug fit. “This is the key, literally, to unlocking innate immune pacification,” said Discher.

The researchers chemically synthesized the 21-amino-acid self-peptide, attached it to nanobeads as small as viruses, and injected the beads into mice genetically engineered to have human-like SIRPα receptors. Beads with the self-peptide stayed in the blood for longer than beads with no peptide: 30 minutes after injection of equal numbers of each type, there were 4 times as many beads with the peptide attached as without. The results demonstrate that the synthetic molecule can reduce the rate at which phagocytes clear the beads from the body, said Discher.
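Assuming simple first-order clearance (our assumption for illustration, not a claim from the paper), the reported 4-fold ratio at 30 minutes pins down only the difference between the two clearance rate constants, not each constant individually:

```python
# Back-of-envelope clearance kinetics. If each bead type decays as
# N(t) = N0 * exp(-k * t) with equal starting numbers N0, then
# ratio = N_self(t) / N_plain(t) = exp((k_plain - k_self) * t).
import math

t = 30.0     # minutes after injection
ratio = 4.0  # self-peptide beads remaining / plain beads remaining

delta_k = math.log(ratio) / t  # difference in rate constants, per minute
print(f"k_plain - k_self = {delta_k:.4f} per minute")

# To get absolute half-lives one rate must be assumed. For example, if
# plain beads had a 10-minute half-life (a made-up value), then:
k_plain = math.log(2) / 10.0
k_self = k_plain - delta_k
print(f"implied self-bead half-life: {math.log(2) / k_self:.1f} minutes")
```

Under that made-up 10-minute baseline, the self-peptide would triple the beads' circulating half-life, which is the kind of "extra time in the body" the article describes.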

Then, in mice with human lung cancer, the researchers injected fluorescently dyed beads with and without the peptide, and saw that the “self”-beads got through the macrophage-filled spleen and liver and accumulated in greater numbers in the tumor, providing a brighter signal when imaged. In fact, the self-beads provided a signal from the tumor as strong as beads coated with human CD47.

Finally, to see whether the biological evasion strategy can be successfully combined with targeting, the researchers loaded an anticancer drug into self-beads also coated with antibodies that target cancer cells. Sure enough, these antibody-coated self-beads consistently shrank tumors more than antibody-coated beads lacking the peptide. This confirmed that when antibodies draw the attention of the macrophage, the self-peptides inhibit the macrophage’s response, acting as a “don’t-eat-me” signal, said Discher.

The results demonstrate that the synthetic peptide can provide therapeutic nanoparticles with extra time in the body—time that improves drug delivery. Furthermore, the relative simplicity of the peptide means it can be easily synthesized, making it an attractive component for use in a variety of future applications.

The findings are “compelling” and “the technology merits moving forward,” Omid Farokhzad, director of the Laboratory of Nanomedicine and Biomaterials at Brigham and Women’s Hospital, part of Harvard Medical School, said in an e-mail to The Scientist.

A crucial next step is to test the efficacy of synthetic self-peptides in humans, Farokhzad added. “The truly relevant test is looking at human pharmacokinetics to see circulating half-life advantages of nanoparticles and their effect on therapeutic outcome.”

P.L. Rodriguez et al., “Minimal ‘self’ peptides that inhibit phagocytic clearance and enhance delivery of nanoparticles,” Science, 339: 971-74, 2013.

SOURCE

 

Read Full Post »

The red tape challenge

reporter and curator: Dror Nir, PhD

A large part of the time and cost of developing a new medical device or a new drug is allocated to achieving regulatory compliance. While quality and safety are desired, having to continually spend additional time and money throughout the product’s life cycle just on proof of its quality and safety is painful for all, especially for the health systems that eventually have to pay for it.
On this issue, I bring you the following post:
It has almost become routine: under narratives of increased patient safety and improved efficiency, new regulatory requirements are developed, resulting in increased demands on the industry. The new European pharmacovigilance legislation and the upcoming European medical device regulatory updates are only two examples. As part of the industry, you have very limited impact on the regulations but have to comply with them anyway. That is, if you are to continue marketing your device or drug. Under certain circumstances, the cost of meeting legal requirements is so great that it may bring into question the viability of continuing certain business activities. This is especially the case for smaller companies or niche products.
It is clear, thus, that you have a huge incentive to try to achieve compliance with minimal effort. If we take a bird’s eye view on the challenge of reaching compliance, two major elements become evident:
  1. The quality system is, in itself, a high maintenance object which consumes ongoing resources:
    • It needs to be revisited often due to changes in the regulatory system or in the business environment.
    • Each change may affect many components of the system and a quick modification may cause inconsistency.
    • Each modification needs to be accepted, signed-off formally by several people and be disseminated via formally recorded training.
    • The organization should withstand audits and inspections in regards to the quality system.
  2. Living with the quality system: Each SOP and work instruction has to be followed, and typically forms need to be filled, signed and filed.
Information Overload

Young companies that are just embarking on the regulatory path often do not realize these two characteristics of the quality system. Quick fixes in the form of SOP texts copied from other organizations or generic templates are used to get the initial certification. However, as the organization evolves, it realizes that a quality system is not a one-time effort and cannot be glued on from external sources. It has to be streamlined and become part of the way the organization lives and does business. Companies enjoy the benefits of improved process design and automation on a large scale every day, in many areas. When did you last see a delivery person arrive at a pickup without a barcode reader, so that no form needs to be filled in manually? When was the last time a software package was released without an automatic consistency check? Your quality system and related processes can likewise be dramatically re-engineered to serve you better.

Better efficiency in quality compliance should thus be achieved through careful analysis and optimization of two types of processes:
  1. Maintaining the quality system: how do we make it easier to change the system, keep it consistent, train people in it, and so on?
  2. The SOPs and work instructions: SOPs cannot simply be imported from outside or drafted by a QA/RA consultant who does not know the organization well. They should be a true marriage between the legal and business requirements, and the result of careful consideration by all stakeholders. In my experience, the best SOPs are written by the process owner with the guidance of the regulatory expert. For example, the R&D manager should draft the design control SOP, with input from the regulatory expert. Such an SOP is much more likely to fit the business needs, and also more likely to be followed by the process owner.
Yes, I realize that this way of thinking is very often not how companies operate when they rush toward compliance. I insist that it is what has to be done to achieve sustainable compliance. The good news is that when companies do look at their quality system in this way, they see many opportunities for significant improvement. Some of those improvements are achieved through better IT tools, typically in the areas of document management and versioning, workflow automation, improved collaboration and electronic signatures. Like any other change, this requires a vision and a certain effort. However, the long-term business impact may be as significant as the difference between business success and failure.
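As a small illustration of what an "automatic consistency check" can mean for a document-controlled quality system, the toy sketch below flags SOPs that reference documents missing from the controlled set. The document IDs and record fields are invented for the example; a real document-management tool would also track versions, sign-offs and training records.

```python
# Toy consistency check over a set of SOP records: flag cross-references
# to documents that do not exist in the controlled set.
# All IDs and fields here are invented for illustration.

sops = {
    "SOP-001": {"version": 3, "references": ["SOP-002"]},
    "SOP-002": {"version": 1, "references": []},
    "SOP-003": {"version": 2, "references": ["SOP-009"]},  # dangling reference
}

def broken_references(docs):
    """Return (doc_id, missing_ref) pairs for references to unknown documents."""
    return [(doc_id, ref)
            for doc_id, rec in docs.items()
            for ref in rec["references"]
            if ref not in docs]

print(broken_references(sops))  # [('SOP-003', 'SOP-009')]
```

Run automatically on every change, a check like this catches the inconsistencies that a quick manual modification would otherwise introduce.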

Read Full Post »

Third Annual TCGC: The Clinical Genome Conference, San Francisco, June 10-12, 2014 by Bio-IT World and Cambridge Healthtech Institute

Reporter: Aviva Lev-Ari, PhD, RN

 

UPDATED on 5/1/2014

Register by May 2 for

Hotel Kabuki, San Francisco, CA

June 10 – 12, 2014

FINAL AGENDA

TCGC: The 3rd Annual Clinical Genome Conference

Mining the Genome for Medicine

ClinicalGenomeConference.com

The unstoppable march of genomics into clinical practice continues. In an ideal world, the expanding use of genomic tools will identify disease before the onset of clinical symptoms and determine individualized drug treatment leading to precision medicine. However, many challenges remain for the successful translation of genomic knowledge and technologies into health advances and actionable patient care. Join vital discussions of the applications, questions and solutions surrounding clinical genome analysis.

KEYNOTE SPEAKERS

Atul Butte, M.D., Ph.D.

Division Chief and Associate Professor, Stanford University School of Medicine; Director, Center for Pediatric Bioinformatics, Lucile Packard Children’s Hospital

David Galas, Ph.D.

Principal Scientist, Pacific Northwest Diabetes Research Institute

Gail P. Jarvik, M.D., Ph.D.

Head, Division of Medical Genetics, Arno G. Motulsky Endowed Chair in Medicine and Professor, Medicine and Genome Sciences, University of Washington Medical Center

John Pfeifer, M.D., Ph.D.

Vice Chair, Clinical Affairs, Pathology and Immunology; Professor, Pathology and Immunology, Washington University

John Quackenbush, Ph.D.

Professor, Dana-Farber Cancer Institute and Harvard School of Public Health; Co-Founder and CEO, GenoSpace

Topics Include:

• Working with the Payer Process

• Genome Variation and Clinical Utility

• NGS Is Guiding Therapies

• NGS Is Redefining Genomics

• Interpretation and Translation to the Client

• Integrating Genomic Data into the Clinic

ClinicalGenomeConference.com

Cambridge Healthtech Institute

250 First Avenue, Suite 300

Needham, MA 02494

www.healthtech.com

 

TUESDAY, JUNE 10

7:30 am Conference Registration and Morning Coffee

Working with the Payer Process

8:30 Chairperson’s Opening Remarks

»»KEYNOTE PRESENTATION

8:45 Case Study on Working through the Payer Process

John Pfeifer, M.D., Ph.D., Vice Chair, Clinical Affairs, Pathology; Professor, Pathology and Immunology; Professor, Obstetrics and Gynecology, Washington University School of Medicine

If next-generation sequencing (NGS) is to become a part of patient care in routine clinical practice (whether in the setting of oncology or in the setting of inherited genetic disorders), labs that perform clinical NGS must be reimbursed for the testing they provide. Genomics and Pathology Services at Washington University in St. Louis (GPS@WUSTL) will be used as a case study of a national reference lab that has been successful in achieving high levels of reimbursement for the clinical NGS testing it performs, including from private payers. The reasons for GPS’s success will be discussed, including NGS test design, clinical focus of testing, use of different models for reimbursement and payer education.

9:30 Implementation of Clinical Cancer Genomics within an Integrated Healthcare System

Lincoln D. Nadauld, M.D., Ph.D., Director, Cancer Genomics, Intermountain Healthcare

Precision cancer medicine involves the detection of tumor-specific DNA alterations followed by treatment with therapeutics that specifically target the actionable mutations. Significant advances in genomic technologies have now rendered extended genomic analyses of human malignancies technologically and financially feasible for clinical adoption. Intermountain Healthcare, an integrated healthcare delivery system, is taking advantage of these advances to programmatically implement genomics into the regular treatment of cancer patients to improve clinical outcomes and reduce treatment costs.

10:00 PANEL DISCUSSION:

Payer’s Dilemma: Evolution vs. Revolution

As falling genome sequencing costs help clinicians refine patient diagnoses and therapeutic approaches, new complexities arise over insurance coverage of such tests, classification by CPT codes and other reimbursement issues. Experts on this panel will discuss payer challenges and changes—both rapid and gradual—occurring alongside these advances in clinical genomics.

Moderator: Katherine Tynan, Ph.D., Business Development & Strategic Consulting for Diagnostics Companies, Tynan Consulting LLC

Panelists:

Tonya Dowd, MPH, Director, Reimbursement Policy and Market Access, Quorum Consulting

Mike M. Moradian, Ph.D., Director of Operations and Molecular Genetics Scientist, Kaiser Permanente Southern California Regional Genetics Laboratory

Rina Wolf, Vice President of Commercialization Strategies, Consulting and Industry Affairs, XIFIN

Additional Panelists to be Announced

10:45 Networking Coffee Break

11:15 Beyond Genomics: Preparing for the Avalanche of Post-Genomic Clinical Findings

Jimmy Lin, M.D., Ph.D., President, Rare Genomics Institute

Whole-genome and whole-exome sequencing applied clinically are revealing newly discovered genes and syndromes at an astonishing rate. While clinical databases and variant annotation continue to grow, much of the effort needed is functional analysis and clinical correlation. At RGI, we are building a comprehensive functional genomics platform that includes electronic health records, biobanking, data management, scientific idea crowdsourcing and contract research sourcing.

11:45 The MMRF CoMMpass Clinical Trial: A Longitudinal Observational Trial to Identify Genomic Predictors of Outcome in Multiple Myeloma

Jonathan J. Keats, Ph.D., Assistant Professor, Integrated Cancer Genomics Division, Translational Genomics Research Institute

12:15 pm Luncheon Presentation (Sponsored): Big Data & Little Data – From Patient Stratification to Precision Medicine

Colin Williams, Ph.D., Director, Product Strategy, Thomson Reuters

Molecular data has the power, when unlocked, to transform our understanding of disease to support drug discovery and patient care. The key to unlocking this potential is ‘humanising’ the data, through tools and techniques, to a level that supports interpretation by Life Science professionals. This talk will focus on strategies for extracting insight from ‘big data’ by shrinking it to ‘little data’, with a focus on applications to support patient stratification in drug discovery and for practising precision medicine in a clinical setting.

Genome Variation and Clinical Utility

1:45 Chairperson’s Remarks

»»KEYNOTE PRESENTATION

1:50 Lessons from the Clinical Sequencing Exploratory Research (CSER) Consortium: Genomic Medicine Implementation

Gail P. Jarvik, M.D., Ph.D., Head, Division of Medical Genetics, Arno G. Motulsky Endowed Chair in Medicine and Professor, Medicine and Genome Sciences, University of Washington Medical Center

Recent technologies have led to affordable genomic testing. However, implementation of genomic medicine faces many hurdles. The Clinical Sequencing Exploratory Research (CSER) Consortium, which includes nine genomic medicine projects, was formed to explore these challenges and opportunities. Dr. Jarvik is the PI of a CSER genomic medicine project and of the CSER coordinating center. She will focus on the frequency of exomic incidental findings, including those of the 56 genes recommended for incidental finding return by the ACMG. The CSER group has annotated the putatively pathogenic and novel variants of the Exome Variant Server (EVS) to estimate the rate of these in individuals of European and African ancestry. Experience with consenting and returning incidental findings will also be reviewed.

2:35 Decoding the Patient’s Genome: Clinical Use of Genome-Wide Sequencing Data

Elizabeth Worthey, Ph.D., Assistant Professor, Pediatrics & Bioinformatics Program, Human & Molecular Genetics Center, Medical College of Wisconsin

Despite significant advances in our understanding of the genetic basis of disease, genome-wide identification and subsequent interpretation of the molecular changes that lead to human disease represent the most significant challenges in modern human genetics.

Starting in 2009 at MCW, we have performed clinical WGS and WES to diagnose patients coming from across all clinical specialties. I will discuss findings, pros and cons in approach, challenges remaining and where we go next.

3:05 Analyzing Variants with a DTC Genetics Database

Brian Naughton, Ph.D., Founding Scientist, 23andMe, Inc.

Sequencing a genome turns up dozens of potentially disease-causing variants of uncertain significance (VUS). I will describe examples of using the 23andMe database, including rapid recontact of participants, to determine whether a variant is disease-causing.

3:35 Refreshment Break in the Exhibit Hall with Poster Viewing

 

Genome Interpretation Software Solutions: Software Spotlights

(Sponsorship Opportunities Available)

Obtaining clinical genome data is rapidly becoming a reality, but analyzing and interpreting the data remains a bottleneck. While there are many commercial software solutions and pipelines for managing raw genome sequence data, providing the medical interpretation and delivering a clinical diagnosis will be the critical step in fulfilling the promise of genomic medicine. This session will showcase how genome data analysis companies are streamlining the genomic diagnostic pipeline through:

• Transferring raw sequencing data

• Interpreting genetic variations

• Building new software and cloud-based analysis pipelines

• Investigating the genetic basis of disease or drug response

• Integrating with other clinical data systems

• Creating new medical-grade databases

• Reporting relevant clinical information in a physician-friendly manner

• Continuous learning feedback

4:15 Software Spotlight #1

4:30 Copy Number Variant Detection Using Next-Generation Sequencing: State of the Art (Sponsored)

Alexander Kaplun, Ph.D., Field Applications Scientist, BIOBASE

This talk will provide a short review of the current state of the art in detecting larger variants that play an important role in many diseases, including haplotypes, indels, repeats, copy number variants (CNVs), structural variants (SVs) and fusion genes, using NGS methods, with an outlook on their use for pharmacogenomic genotyping.

4:45 Software Spotlight #3

5:00 Software Spotlight #4

5:15 Software Spotlight #5

5:30 Pertinence Metric Enables Hypothesis-Independent Genome-Phenome Analysis in Seconds (Sponsored)

Michael M. Segal, M.D., Ph.D., Chief Scientist, SimulConsult

Genome-phenome analysis combines processing of a genomic variant table and comparison of the patient’s findings to those of known diseases (“phenome”). In a study of 20 trios, accuracy was 100% when using trios with family-aware calling, and close to that if only probands were used. The gene pertinence metric calculated in the analysis was 99.9% for the causal genes. The analysis took seconds and was hypothesis-independent as to form of inheritance or number of causal genes. Similar benefits were found in gene discovery situations.

6:00 Welcome Reception in the Exhibit Hall with Poster Viewing

7:00 Close of Day

WEDNESDAY, JUNE 11

7:30 am Breakfast Presentation (Sponsorship Opportunity Available) or Morning Coffee

NGS Is Guiding Therapies

8:30 Chairperson’s Opening Remarks

8:35 Next-Generation Sequencing Approaches for Identifying Patients Who May Benefit from PARP Inhibitor Therapy

Mitch Raponi, Ph.D., Senior Director and Head, Molecular Diagnostics, Clovis Oncology

The following questions will be addressed: What biomarkers should we be focusing on to identify appropriate patients who will likely benefit from PARP inhibitors? How can we apply next-generation sequencing technologies to identify all patients who will respond to the PARP inhibitor rucaparib? What regulatory challenges are we faced with for approval of NGS companion diagnostics?

9:05 Whole-Genome and Whole-Transcriptome Sequencing to Guide Therapy for Patients with Advanced Cancer

Glen J. Weiss, M.D., MBA, Director, Clinical Research, Cancer Treatment Centers of America

Treating advanced cancer with agents that target a single cell-surface receptor, an up-regulated or amplified gene product, or a mutated gene has met with some success; however, eventually the cancer progresses. We used next-generation sequencing (NGS) technologies, including whole-genome sequencing (WGS) and, where feasible, whole-transcriptome sequencing (WTS), to identify genomic events and associated expression changes in advanced cancer patients. While the initial effort was slower than anticipated due to a variety of issues, we demonstrated the feasibility of using NGS in advanced cancer patients so that treatments for patients with progressing tumors may be improved. This lecture will highlight some of these challenges and where we are today in bringing NGS to patients.

9:35 The SmartChip TE™ Target Enrichment System for Clinical Next-Gen Sequencing (Sponsored)

Gianluca Roma, MS MBA, Director, Product Management, WaferGen Biosystems

10:05 Coffee Break in the Exhibit Hall with Poster Viewing

Data Mining

»»KEYNOTE PRESENTATION

10:45 Translating a Trillion Points of Data into Therapies, Diagnostics and New Insights into Disease

Atul Butte, M.D., Ph.D., Division Chief and Associate Professor, Stanford University School of Medicine; Director, Center for Pediatric Bioinformatics, Lucile Packard Children’s Hospital; Co-Founder, Personalis and Numedii

There is an urgent need to translate genome-era discoveries into clinical utility, but the difficulties in making bench-to-bedside translations have been well described. The nascent field of translational bioinformatics may help. Dr. Butte’s lab at Stanford builds and applies tools that convert more than a trillion points of molecular, clinical and epidemiological data— measured by researchers and clinicians over the past decade—into diagnostics, therapeutics and new insights into disease. Dr. Butte, a bioinformatician and pediatric endocrinologist, will highlight his lab’s work on using publicly available molecular measurements to find new uses for drugs, including drug repositioning for inflammatory bowel disease, discovering new treatable inflammatory mechanisms of disease in type 2 diabetes and the evaluation of patients presenting with whole genomes sequenced.

11:30 DGIdb – Mining the Druggable Genome

Malachi Griffith, Ph.D., Research Faculty, Genetics, The Genome Institute, Washington University School of Medicine

In the era of high-throughput genomics, investigators are frequently presented with lists of mutated or otherwise altered genes implicated in human disease. Numerous resources exist to generate hypotheses about how such genomic events might be targeted therapeutically or prioritized for drug development. The Drug-Gene Interaction database (DGIdb) mines these resources and provides an interface for searching lists of genes against a compendium of drug-gene interactions and potentially druggable genes. DGIdb can be accessed at dgidb.org.
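A lookup like the one DGIdb provides can also be scripted against its web service. The sketch below only builds a query URL and flattens a response of the shape the v2 JSON API returned; the endpoint path and field names (`matchedTerms`, `geneName`, `drugName`) are assumptions modeled on DGIdb's published documentation, so check the current API at dgidb.org before relying on them.

```python
# Sketch of querying DGIdb programmatically. The endpoint path and
# response fields are assumptions based on the v2 JSON API; no network
# call is made here, so the response is a hand-written sample.
from urllib.parse import urlencode

BASE = "https://dgidb.org/api/v2/interactions.json"

def interactions_url(genes):
    """Build a query URL for a list of gene symbols."""
    return BASE + "?" + urlencode({"genes": ",".join(genes)})

def drug_gene_pairs(payload):
    """Flatten a v2-style response into (gene, drug) pairs."""
    return [(term["geneName"], hit["drugName"])
            for term in payload.get("matchedTerms", [])
            for hit in term.get("interactions", [])]

print(interactions_url(["BRAF", "EGFR"]))
# https://dgidb.org/api/v2/interactions.json?genes=BRAF%2CEGFR

sample = {"matchedTerms": [
    {"geneName": "BRAF",
     "interactions": [{"drugName": "VEMURAFENIB"}]}]}
print(drug_gene_pairs(sample))  # [('BRAF', 'VEMURAFENIB')]
```

In practice one would fetch `interactions_url(...)` over HTTPS and feed the parsed JSON to `drug_gene_pairs`.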

12:00 pm Sponsored Presentation (Opportunity Available)

12:30 Luncheon Presentation (Sponsorship Opportunity Available)

 

The unstoppable march of genomics into clinical practice continues. In an ideal world, the expanding use of genomic tools will identify disease before the onset of clinical symptoms and determine individualized drug treatment leading to precision medicine. However, many challenges remain for the successful translation of genomic knowledge and technologies into health advances and clinical practice.

Bio-IT World and Cambridge Healthtech Institute are again proud to host the Third Annual TCGC: The Clinical Genome Conference, inviting stakeholders from all arenas impacting clinical genomics to share new findings and solutions for advancing the application of clinical genome medicine.

TCGC brings together many constituencies for frank and vital discussion of the applications, questions and solutions surrounding clinical genome analysis, including scientists, physicians, diagnosticians, genetic counselors, bioinformaticists, ethicists, regulators, insurers, lawyers and administrators.

Topics addressing successful translation of genomic knowledge and technologies into advancement of clinical utility (medicines and diagnostics) include but are not limited to:

Scientific Investigation and Interpretation

  • Technologies/Platforms
  • WGS/Exome/Single-Cell Sequencing
  • Drug and Diagnostic Targets
  • Interpretation and Analysis Pipelines
  • Case Studies

Clinical Integration and Implementation

  • Mechanisms to Monitor Genomic Medicine
  • Determining Clinical Utility
  • Standardization/Regulation/Certification
  • Reimbursement
  • Data Management
  • Diagnostic Lab Infrastructure
  • HIT/Data Integration
  • Reporting Results to Patients/Physicians

Call for Speakers
For a limited time, we are inviting researchers and clinicians applying genome analysis tools in clinical settings, as well as regulators and administrators implementing genomics into the clinic, to submit proposals for platform presentations. Please note that due to limited speaking slots, preference is given to abstracts from those within pharmaceutical and biopharmaceutical companies, regulators and those from academic centers. Additionally, as per CHI policy, a select number of vendors/consultants who provide products and services to these genomic researchers are offered opportunities for podium presentation slots based on a variety of Corporate Sponsorships.

All proposals are subject to review by the organizers and Scientific Advisory Committee.

Please click here to submit a proposal.

Submission deadline for priority consideration: November 15, 2013

For more details on the conference, please contact:
Mary Ann Brown
Executive Director, Conferences
Cambridge Healthtech Institute
250 First Avenue, Suite 300
Needham, MA 02494
T:  781-972-5497
E:  mabrown@healthtech.com

For exhibit and sponsorship opportunities, please contact:
Jay Mulhern
Manager, Business Development, Conferences & Media
Cambridge Healthtech Institute
250 First Avenue, Suite 300
Needham, MA 02494
T: 781-972-1359
E: jmulhern@healthtech.com

SOURCE

http://www.clinicalgenomeconference.com/

 

Read Full Post »

Cardiology, Genomics and Individualized Heart Care: Framingham Heart Study (65 y-o study) & Jackson Heart Study (15 y-o study)

Cardiology, Genomics and Individualized Heart Care

Curator: Aviva Lev-Ari, PhD, RN

Article ID #90: Cardiology, Genomics and Individualized Heart Care: Framingham Heart Study (65 y-o study) & Jackson Heart Study (15 y-o study). Published on 12/1/2014

WordCloud Image Produced by Adam Tubman

 

The topic of Cardiology, Genomics and Individualized Heart Care is being developed in the following forthcoming e-Book on a related subject matter:

Curators: Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

This e-Book has the following Parts:

PART 1
Genomics and Medicine

Introduction to Volume Three
1.1: Genomics and Medicine: The Physician’s View
1.2: Ribozymes and RNA Machines – Work of Jennifer A. Doudna
1.3: Genomics and Medicine: The Geneticist’s View
1.4: Genomics in Medicine – Establishing a Patient-Centric View of Genomic Data

PART 2
Epigenetics- Modifiable Factors Causing Cardiovascular Diseases

2.1 Diseases Etiology

2.1.1 Environmental Contributors Implicated as Causing Cardiovascular Diseases
2.1.2 Diet: Solids and Fluid Intake
2.1.3 Physical Activity and Prevention of Cardiovascular Diseases
2.1.4 Psychological Stress and Mental Health: Risk for Cardiovascular Diseases
2.1.5 Correlation between Cancer and Cardiovascular Diseases
2.1.6 Medical Etiologies for Cardiovascular Diseases: Evidence-based Medicine – Leading DIAGNOSES of Cardiovascular Diseases, Risk Biomarkers and Therapies
2.1.7 Signaling Pathways
2.1.8 Proteomics and Metabolomics

2.2 Assessing Cardiovascular Disease with Biomarkers

2.2.1 Issues in Genomics of Cardiovascular Diseases
2.2.2 Endothelium, Angiogenesis, and Disordered Coagulation
2.2.3 Hypertension BioMarkers
2.2.4 Inflammatory, Atherosclerotic and Heart Failure Markers
2.2.5 Myocardial Markers

2.3  Therapeutic Implications: Focus on Ca(2+) signaling, platelets, endothelium

2.3.1 The Centrality of Ca(2+) Signaling and Cytoskeleton Involving Calmodulin Kinases and Ryanodine Receptors

2.3.2 Platelets in Translational Research – 2

2.3.3 The Final Considerations of the Role of Platelets and Platelet Endothelial Reactions in Atherosclerosis

2.3.4 Nitric Oxide Synthase Inhibitors (NOS-I)

2.3.5 Resistance to Receptor of Tyrosine Kinase

2.3.6 Oxidized Calcium Calmodulin Kinase and Atrial Fibrillation

2.3.7 Advanced Topics in Sepsis and the Cardiovascular System at its End Stage

2.4 Comorbidity of Diabetes and Aging

PART 3
Determinants of Cardiovascular Diseases
Genetics, Heredity and Genomics Discoveries

Introduction
3.1 Why cancer cells contain abnormal numbers of chromosomes (Aneuploidy)
3.2 Functional Characterization of Cardiovascular Genomics: Disease Case Studies @ 2013 ASHG
3.3 Leading DIAGNOSES of Cardiovascular Diseases covered in Circulation: Cardiovascular Genetics, 3/2010 – 3/2013
3.4  Commentary on Biomarkers for Genetics and Genomics of Cardiovascular Disease

PART 4
Individualized Medicine Guided by Genetics and Genomics Discoveries

4.1 Preventive Medicine: Cardiovascular Diseases
4.2 Gene-Therapy for Cardiovascular Diseases
4.3 Congenital Heart Disease/Defects
4.4 Pharmacogenomics for Cardiovascular Diseases

SOURCE

http://pharmaceuticalintelligence.com/biomed-e-books/series-a-e-books-on-cardiovascular-diseases/volume-three-etiologies-of-cardiovascular-diseases-epigenetics-genetics-genomics/

The Next Frontier in Heart Care

Research Aims to Personalize Treatment With Genetics

Nov. 25, 2013 7:18 p.m. ET

VIEW VIDEO

http://online.wsj.com/news/articles/SB10001424052702304281004579220373600912930#!

Two influential heart studies are joining forces to bring the power of genetics and other 21st century tools to battle against heart disease and stroke. Ron Winslow and study co-director Dr. Vasan Ramachandran explain. Photo: Shubhangi Ganeshrao Kene/Corbis.

Scientists from two landmark heart-disease studies are joining forces to wield the power of genetics in battling the leading cause of death in the U.S.

Cardiologists have struggled in recent years to score major advances against heart disease and stroke. Although death rates have been dropping steadily since the 1960s, progress combating the twin diseases has plateaued by other measures.

Genetics has had a profound impact on cancer treatment in recent years. Now, heart-disease specialists hope genetics will reveal fresh insight into the interaction between a

  • person’s biology,
  • living habits and
  • medications

that can better predict who is at risk of a heart attack or stroke.

“There’s a promise of new treatments with this research,” said Daniel Jones, chancellor of the University of Mississippi and former principal investigator of the 15-year-old Jackson Heart Study, a co-collaborator in the new genetics initiative.

Photo credits: Science Source/Photo Researchers Inc. (hearts); below, l-r: Boston University; Robert Jordan/Univ. of Miss.; Jay Ferchaud/Univ. of Miss. Medical Center

Prevention efforts also could improve with the help of genetics research, Dr. Jones said. For example, an estimated 75 million Americans currently have high blood pressure, or hypertension, but only about half of those are able to control it with medication. It can take months of trial-and-error for a doctor to get the right dose or combination of pills for a patient. Researchers hope genetic and other information might enable doctors to identify subgroups of hypertension that respond to specific treatments and target patients with an appropriate therapy.

Also collaborating on the genetics project is the 65-year-old Framingham Heart Study. Its breakthrough findings decades ago linked heart disease to such factors as smoking, high blood pressure and high cholesterol. Framingham findings have been a foundation of cardiovascular disease prevention policy for a half-century.

More than 15,000 people have participated in the Framingham study. The Jackson study, with more than 5,000 participants, was launched in 1998 to better understand risk factors in African-Americans, who were underrepresented in Framingham and who bear a higher burden of cardiovascular disease than the rest of the population. Both studies are funded by the National Heart, Lung, and Blood Institute, part of the National Institutes of Health.

Exactly how the collaboration, announced last week, will proceed hasn’t been determined. One promising area is the “biobank,” the collection of more than one million blood and other biological samples gathered during biennial checkups of Framingham study participants going back more than a half century.

The samples are stored in freezers in an underground earthquake-proof facility in Massachusetts, said Vasan Ramachandran, a Boston University scientist who takes over at the beginning of next year as principal investigator of the Framingham Heart Study. Another 40,000 samples from the Jackson study are kept in freezers in Vermont. By subjecting samples to DNA sequencing and other tests, researchers say they may be able to identify variations linked to progression of cardiovascular disease—or protection from it.

Each study is likely to enroll new participants as part of the collaboration to allow tracking of risk factors and diet and exercise habits, for instance, in real time instead of only during infrequent checkups.

Heart disease is linked to about 800,000 deaths a year in the U.S. In 2010, some 200,000 of those deaths could have been avoided, including more than 112,300 deaths among people younger than 65, according to a recent analysis by the Centers for Disease Control and Prevention. But those avoidable deaths reflected a 3.8% per year decline in mortality rates during the previous 10 years.

Now, widespread prevalence of obesity and diabetes threatens to undermine such gains. And a large gap remains between how white patients and minorities—especially African-Americans—benefit from effective strategies.

There have been few new transformative cardiovascular treatments since the mid-1980s to early 1990s, when a stream of large-scale trials of new agents ranging from clot-busters to treat heart attacks to the mega class of statins electrified the cardiology field with evidence of significant improvements in survival from the disease. One reason: Some of those remedies have proven tough to beat with new treatments.

What’s more, use of the current menu of medicines for reducing heart risk remains an imprecise art. Besides

  • blood pressure drugs,
  • cholesterol-lowering statins

also are widely prescribed. Drug-trial statistics show that to prevent a single first heart attack in otherwise healthy patients can require prescribing a statin to scores of patients, but no one knows for sure who actually benefits and who doesn’t.

“It would be great if we could make some more paradigm-shifting discoveries,” said Michael Lauer, director of cardiovascular sciences at the NHLBI, which is a part of the National Institutes of Health.

Finding new treatments isn’t the only aim of the new project. “You could use existing therapies smarter,” said Joseph Loscalzo, chairman of medicine at Brigham and Women’s Hospital in Boston.

The American Heart Association launched the initiative and has committed $30 million to it over the next five years. The AHA sees the project as critical to its goal to achieve a 20% improvement in cardiovascular health in the U.S. while also reducing deaths from heart disease and stroke by 20% for the decade ending in 2020, said Nancy Brown, the nonprofit organization’s chief executive.

The Jackson study has already identified characteristics of cardiovascular risk among African-American patients “that may have promise for new insights” in a collaborative effort, said Adolfo Correa, professor of medicine and pediatrics at University of Mississippi Medical Center and interim director of the Jackson study.

For instance, there is a higher prevalence of obesity among Jackson participants than seen in the Framingham cohorts. Obesity is associated with high blood pressure, diabetes and cardiovascular risk. Diabetes is also more prevalent among blacks than whites.

But African-Americans of normal weight appear to have higher rates of hypertension and diabetes than whites of normal weight. “The question is, should [measures] for defining diabetes be different or the same for the [different] populations and are they associated with the same risk of cardiovascular disease?” said Dr. Correa. The collaboration, he said, may provide better comparisons.

Researchers, who plan to use tools other than genetics, think more might be learned about blood pressure and heart and stroke risk by monitoring patients in real time using mobile devices rather than taking readings only in periodic office visits. For example, high blood pressure during sleep or spikes during exercise could indicate risks that don’t show up in a routine measurement in the doctors’ office.

A big challenge is making sense of the huge amounts of data involved in sequencing DNA and linking it to

  • medical records,
  • diet and
  • exercise habits and other variables that influence risk.

“The analytical methods for sorting out these complex relationships are still in evolution,” said Dr. Loscalzo, of Brigham and Women’s Hospital. “The cost of sequencing is getting cheaper and cheaper. The hard part is analyzing the data.”

Write to Ron Winslow at ron.winslow@wsj.com

SOURCE

http://online.wsj.com/news/articles/SB10001424052702304281004579220373600912930#!

The e-Reader is advised to review tightly related articles in

http://pharmaceuticalintelligence.com/biomed-e-books/series-a-e-books-on-cardiovascular-diseases/volume-three-etiologies-of-cardiovascular-diseases-epigenetics-genetics-genomics/

Read Full Post »

Searchable Genome for Drug Development

Reporter: Aviva Lev-Ari, PhD, RN

The Druggable Genome Is Now Googleable

By Aaron Krol

November 22, 2013 | Relationships between human genetic variation and drug responses are being documented at an accelerating rate, and have become some of the most promising avenues of research for understanding the molecular pathways of diseases and pharmaceuticals alike. Drug-gene interactions are a cornerstone of personalized medicine, and learning about the drugs that mediate gene expression can point the way toward new therapeutics with more targeted effects, or novel disease targets for existing drugs. So it may seem surprising that, until October of this year, a researcher interested in pharmacogenetics generally needed the help of a dedicated bioinformatician just to access the known background on a gene’s drug associations.

Obi and Malachi Griffith are particularly dedicated bioinformaticians who specialize in applying data analytics to cancer research, a rich field for drug-gene information. Like many professionals in their budding field, the Griffiths pursued doctoral research in bioinformatics applications at a time when this was not quite recognized as a distinct discipline, and quickly found their data-mining talents in hot demand. “We found ourselves answering the same questions over and over again,” says Malachi. “A clinician or researcher, who perhaps wasn’t a bioinformatician, would have a list of genes, and would ask, ‘Well, which of these genes are kinases? Which of these genes has a known drug or is potentially druggable?’ And we would spend time writing custom scripts and doing ad hoc analyses, and eventually decided that you really shouldn’t need a bioinformatics expert to answer this question for you.”

The Griffiths – identical twin brothers, though Malachi helpfully sports a beard – had by this time joined each other at one of the world’s premier genomic research centers, the Genome Institute at Washington University in St. Louis, and figured they had the resources to improve this state of affairs. The Genome Institute is generously funded by the NIH and was a major contributor to the Human Genome Project; the Griffiths had congregated there deliberately after completing post-doctoral fellowships at the Lawrence Berkeley National Laboratory in California (Obi) and the Michael Smith Genome Sciences Centre in Vancouver (Malachi). “When we finished our PhDs, we knew we would like to set up a lab together,” says Obi. At the Genome Institute, they pitched the idea of building a free, searchable online database of drug-gene associations, and soon the Drug Gene Interaction Database (DGIdb) was under development.

In Search of the Druggable Genome

Existing public databases, like DrugBank, the Therapeutic Target Database, and PharmGKB, were the first ports of call, where a wealth of information was waiting to be re-aggregated in a searchable format. “For their use cases [these databases] are quite powerful,” says Obi. “They were just missing that final component, which is user accessibility for the non-informatics expert.” Getting all this data into DGIdb was and remains the most labor-intensive part of the project. At least two steps removed from the original sources establishing each interaction, the Griffiths felt they had to reexamine each data point, tracing it back to publication and scrutinizing its reliability. “It’s sort of become a rite of passage in our group,” says Malachi. “When new people join the lab, they have to really dig into this resource, learn what it’s all about, and then contribute some of their time toward manual curation.”

The website’s main innovation, however, is its user interface, which presents itself like Google but returns results a little more like a good medical records system. The homepage lets you enter a gene or panel of genes into a search box, and if desired, add a few basic filters. Entering search terms brings up a chart that quickly summarizes any known drug interactions, which can then be further filtered or tracked back to the original sources. The emphasis is not on a detailed breakdown of publications or molecular behavior, but on immediately viewing which drugs affect a given gene’s expression and how. “We did try to place quite a bit of emphasis on creating something that was intuitive and easy to use,” says Malachi. Beta testing involved watching unfamiliar users navigate the website and taking notes on how they interacted with the platform.

DGIdb went live in February of this year, followed by a publication in Nature Methods this October, and the database is now readily accessible at http://dgidb.org/. The code is open source and can be modified for any specific use case, using the Perl, Ruby, Shell, or Python programming languages, and the Genome Institute has also made available their internal API for users who want to run documents through the database automatically, or perform more sophisticated search functions. User response will be key to sustaining and expanding the project, and the Griffiths are looking forward to an update that draws on outside researchers’ knowledge. “A lot of this information [on drug-gene interactions] really resides in the minds of experts,” says Malachi, “and isn’t in a form that we can easily aggregate it from… We’re really motivated to have a crowdsourcing element, so that we can start to harness all of that information.” In the meantime, the bright orange “Feedback” button on every page of the site is being bombarded with requests to add specific interactions to the database.
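The API described above returns drug–gene interaction records that client code can consume automatically. As a rough illustration of the kind of query result a user might process (the field names below are illustrative assumptions modeled on typical REST JSON conventions, not the actual DGIdb response schema), a script could filter a gene panel down to genes with known drug interactions like this:

```python
import json

# Hypothetical JSON payload of the kind a drug-gene interaction API
# might return for a gene-panel query; the keys "matchedTerms",
# "geneName", "interactions", and "drugName" are assumptions for
# illustration, not the documented DGIdb schema.
response_text = json.dumps({
    "matchedTerms": [
        {"geneName": "EGFR",
         "interactions": [
             {"drugName": "ERLOTINIB", "interactionType": "inhibitor"},
             {"drugName": "GEFITINIB", "interactionType": "inhibitor"},
         ]},
        {"geneName": "TP53", "interactions": []},
    ]
})

def druggable_genes(payload: str) -> dict:
    """Map each gene to its interacting drugs, keeping only genes
    with at least one known drug interaction."""
    data = json.loads(payload)
    result = {}
    for term in data["matchedTerms"]:
        drugs = [i["drugName"] for i in term["interactions"]]
        if drugs:
            result[term["geneName"]] = drugs
    return result

print(druggable_genes(response_text))
# → {'EGFR': ['ERLOTINIB', 'GEFITINIB']}
```

This mirrors the website’s core use case — “which of these genes has a known drug?” — but in a form that can be chained into an automated analysis pipeline.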

Not all these interactions are easy to validate. “Another area that we’re really actively trying to pursue,” adds Malachi, “is getting information out of sources where text mining is required, where information is really not in a form where the interaction between genes and drugs is laid out quickly.” He cites the example of clinicaltrials.gov, where the results of all registered clinical trials in the United States are made available online. This surely includes untapped material on drug-gene interactions, but nowhere are those results neatly summarized. “You either have a huge manual curation problem on your hands – there’s literally hundreds of thousands of clinical trial records – or you have to come up with some kind of machine learning, text-mining approach.” So far, the Genome Institute has been limited to manual curation for this kind of scenario, but with a resource as large as the clinical trials registry, the Griffiths hope to bring their programming savvy to bear on a more efficient attack.
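A crude first pass at the text-mining approach Malachi describes is dictionary-based co-mention extraction: scan each sentence of a trial record for known drug and gene names and emit the pairs that occur together. The sketch below uses toy vocabularies and an invented trial snippet purely for illustration; a real pipeline would draw on curated drug and gene lexicons and far more careful entity recognition.

```python
import re

# Toy vocabularies; a real pipeline would use curated drug and gene
# dictionaries (e.g. drawn from DrugBank and HGNC), not short lists.
GENES = {"EGFR", "KRAS", "FLT3", "BRAF"}
DRUGS = {"erlotinib", "vemurafenib", "midostaurin"}

def comention_pairs(text: str) -> set:
    """Return (drug, gene) pairs co-mentioned in the same sentence --
    a naive baseline for mining trial descriptions for candidate
    drug-gene interactions."""
    pairs = set()
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        tokens = set(re.findall(r"[A-Za-z0-9]+", sentence))
        genes = {t.upper() for t in tokens if t.upper() in GENES}
        drugs = {t.lower() for t in tokens if t.lower() in DRUGS}
        pairs.update((d, g) for d in drugs for g in genes)
    return pairs

# Invented example record, not a real clinicaltrials.gov entry.
record = ("A phase II trial of erlotinib in patients with EGFR-mutant "
          "lung cancer. Patients with KRAS mutations were excluded.")
print(comention_pairs(record))
# → {('erlotinib', 'EGFR')}
```

Note that KRAS is correctly excluded here because no drug appears in its sentence — but co-mention alone cannot distinguish an interaction from an exclusion criterion, which is exactly why manual curation remains necessary.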

In the meantime, new resources are continuously being brought into the database, rising from eleven data sources on launch to sixteen now, with more in the curation pipeline. DGIdb is already regularly incorporated in the Genome Institute’s research. Every cancer patient sequenced at Washington University has her genetic data run first through an analytics pipeline to find genes with unusual variants or levels of expression, and then through DGIdb to see whether any of these genes are known to be druggable. This is an ideal use case for the database, which is presently biased toward cancer-related interactions, the Griffiths’ own area of research.
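The pipeline step described above — checking a patient’s variant gene list against a druggability catalog — reduces to a simple set operation. A minimal sketch (the catalog and gene names are illustrative placeholders, not the real DGIdb contents or actual patient data):

```python
# Hypothetical druggability catalog: gene -> known interacting drugs.
CATALOG = {
    "FLT3": ["midostaurin"],
    "BRAF": ["vemurafenib", "dabrafenib"],
    "EGFR": ["erlotinib"],
}

def annotate_variants(variant_genes: list):
    """Split a patient's list of genes with unusual variants or
    expression into druggable genes (with candidate drugs) and
    genes with no known drug."""
    druggable = {g: CATALOG[g] for g in variant_genes if g in CATALOG}
    undruggable = [g for g in variant_genes if g not in CATALOG]
    return druggable, undruggable

hits, misses = annotate_variants(["BRAF", "TTN", "FLT3"])
print(hits)    # → {'BRAF': ['vemurafenib', 'dabrafenib'], 'FLT3': ['midostaurin']}
print(misses)  # → ['TTN']
```

In practice the hard work is upstream — calling the variants and curating the catalog — but this final join is what turns a sequencing result into a list of actionable leads.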

The twins have a personal investment in advancing cancer therapeutics. Their mother died in her forties from an aggressive case of breast cancer, while Obi and Malachi were still in high school, and their family has continued to suffer disproportionately from cancer ever since. Says Obi, “We’ve had the opportunity to see [everything from] terrible, tragic outcomes… to the other end of the spectrum, where advances in the way cancer is treated were able to really make a huge difference to both our cousin and our brother,” both in remission after life-threatening cases of childhood leukemia and Ewing’s sarcoma, respectively. “Everyone can tell these stories,” Malachi adds, “but we’ve had a little more than our fair share.”

DGIdb can’t influence cancer care directly – most of the data available on drug-gene interactions is too tentative for clinical use – but it can spur research into more personalized treatments for genetically distinct cancers, and increasingly for other diseases as more information is brought inside. Meanwhile, companies like Foundation Medicine and MolecularHealth are drawing on similar drug-gene datasets, narrowed down to the most actionable information, to tailor clinical action to individual cancer patients. The Griffiths are cautiously optimistic that research like the Genome Institute’s is approaching the crucial tipping point where finely tuned clinical decisions could be made based on a patient’s genetic profile. “We’re still firmly on the academic research side,” says Malachi, but “we’re definitely at the stage where this idea needs to be pursued aggressively.”

SOURCE
