
Archive for the ‘Statistical Methods for Research Evaluation’ Category

The Role of Sibling Kinship, Sex, and Age of Ischemic Stroke Onset: The Familial Component

Reporter: Aviva Lev-Ari, PhD, RN

 

Familial Effects on Ischemic Stroke – The Role of Sibling Kinship, Sex, and Age of Onset

Katherine Kasiman, MSc, Cecilia Lundholm, MSc, Sven Sandin, MSc, Ninoa Malki, MSc, Pär Sparén, PhD and Erik Ingelsson, MD, PhD

Author Affiliations

From the Department of Medical Epidemiology and Biostatistics (K.K., C.L., S.S., N.M., P.S., E.I.), Karolinska Institutet, Stockholm, Sweden; Centre for Molecular Epidemiology (K.K.), Saw Swee Hock School of Public Health, National University of Singapore, Singapore.

Correspondence to Prof Erik Ingelsson, MD, PhD, FAHA, Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Box 281, Nobels väg 12A, SE-171 77 Stockholm, Sweden. E-mail erik.ingelsson@ki.se

Abstract

Background—Previous studies on familial risk of ischemic stroke have supported genetic influence on the disease incidence. This study aimed to characterize these familial effects in a nationwide population-based study by taking into account sibling relations, sex of siblings, and age of onset, with respect to ischemic stroke incidence.

Methods and Results—Incident ischemic stroke cases identified from the Swedish Hospital Discharge and Cause of Death Registers between 1987 and 2007 were linked to their stroke-free siblings (study participants), forming an exposed sib-pair. Each exposed sib-pair was matched up to 5 unexposed sib-pairs from the Multi-Generation Registry by birth and calendar years. Incident ischemic stroke risk was assessed using hazard estimates obtained from stratified Cox regression analyses. A total of 30 735 exposed and 152 391 unexposed study participants were included in the analyses. The overall risk of incident ischemic stroke when exposed was significantly increased (relative risk, 1.61; 95% confidence interval, 1.48–1.75; P<0.001). Familial risk was higher in full (relative risk, 1.64; 95% confidence interval, 1.50–1.81; P<0.001) than in half (relative risk, 1.41; 95% confidence interval, 1.10–1.82; P=0.007) siblings. Familial risk of early ischemic stroke almost doubled when exposed to early ischemic stroke (relative risk, 1.94; 95% confidence interval, 1.41–2.67; P<0.001).
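The hazard estimates above are reported as relative risks with 95% confidence intervals. As a rough illustration of how these quantities relate on the log scale, the sketch below back-calculates the standard error of the log relative risk from the reported interval. This is a standard approximation, not a computation from the study data, and the helper name `se_from_ci` is ours:

```python
import math

# Back-calculate the log-scale standard error from a reported
# relative risk and its 95% confidence interval: the CI spans
# roughly +/- 1.96 SE on the log scale.
def se_from_ci(lower, upper, z=1.96):
    return (math.log(upper) - math.log(lower)) / (2 * z)

rr, lo, hi = 1.61, 1.48, 1.75   # overall estimate from the abstract
se = se_from_ci(lo, hi)
z = math.log(rr) / se           # Wald z-statistic
print(round(se, 4), round(z, 2))
```

A z-statistic this large is consistent with the P<0.001 reported in the abstract.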

Conclusions—There was a 60% increased risk for ischemic stroke in individuals having a sibling with prior stroke. The familial effect was even higher for full-sibling relations. Familial effects were observed in both male and female individuals, and no differential effects depending on the sex of either of the siblings were found.

Published online before print March 8, 2012,

doi: 10.1161/CIRCGENETICS.111.962191

http://circgenetics.ahajournals.org/content/5/2/226.abstract?sid=5201ab39-7f11-4007-b159-2d1e15663cd5

 

 


Genomics of Incident Ischemic Stroke Events, Stroke and Cardiovascular Disease

Reporter: Aviva Lev-Ari, PhD, RN

 

Associations Between Incident Ischemic Stroke Events and Stroke and Cardiovascular Disease-Related Genome-Wide Association Studies Single Nucleotide Polymorphisms in the Population Architecture Using Genomics and Epidemiology Study

Cara L. Carty, PhD, Petra Bůžková, PhD, Myriam Fornage, PhD, Nora Franceschini, MD, Shelley Cole, PhD, Gerardo Heiss, MD, PhD, Lucia A. Hindorff, PhD, MPH, Barbara V. Howard, PhD, Sue Mann, MPH, Lisa W. Martin, MD, Ying Zhang, PhD, Tara C. Matise, PhD, Ross Prentice, PhD, Alexander P. Reiner, MD, MS and Charles Kooperberg, PhD

Author Affiliations

From the Public Health Sciences, Fred Hutchinson Cancer Research Center (C.L.C., S.M., R.P., C.K.); Department of Biostatistics, University of Washington, Seattle, WA (P.B.); Institute of Molecular Medicine, University of Texas Health Sciences Center at Houston, Houston, TX (M.F.); Division of Epidemiology, School of Public Health, University of Texas Health Sciences Center, Houston, TX (M.F.); Department of Epidemiology, University of North Carolina, Chapel Hill, NC (N.F., G.H.); Department of Genetics, Texas Biomedical Research Institute, San Antonio, TX (S.C.); Office of Population Genomics, National Human Genome Research Institute, Bethesda, MD (L.A.H.); Medstar Health Research Institute, Washington, DC (B.V.H.); George Washington University School of Medicine, Washington, DC (B.V.H., L.W.M.); University of Oklahoma Health Sciences Center, Oklahoma City, OK (Y.Z.); Department of Genetics, Rutgers University, Piscataway, NJ (T.C.M.); Department of Epidemiology, University of Washington, Seattle, WA (A.P.R.).

Correspondence to Dr Cara L. Carty, Fred Hutchinson Cancer Research Center, 1100 Fairview Ave N./M3-A410, Seattle, WA 98109. E-mail ccarty@fhcrc.org

Abstract

Background—Genome-wide association studies (GWAS) have identified loci associated with ischemic stroke (IS) and cardiovascular disease (CVD) in European-descent individuals, but their replication in different populations has been largely unexplored.

Methods and Results—Nine single nucleotide polymorphisms (SNPs) selected from GWAS and meta-analyses of stroke, and 86 SNPs previously associated with myocardial infarction and CVD risk factors, including blood lipids (high density lipoprotein [HDL], low density lipoprotein [LDL], and triglycerides), type 2 diabetes, and body mass index (BMI), were investigated for associations with incident IS in European Americans (EA) N=26 276, African-Americans (AA) N=8970, and American Indians (AI) N=3570 from the Population Architecture using Genomics and Epidemiology Study. Ancestry-specific fixed effects meta-analysis with inverse variance weighting was used to combine study-specific log hazard ratios from Cox proportional hazards models. Two of 9 stroke SNPs (rs783396 and rs1804689) were associated with increased IS hazard in AA; none were significant in this large EA cohort. Of 73 CVD risk factor SNPs tested in EA, 2 (HDL and triglycerides SNPs) were associated with IS. In AA, SNPs associated with LDL, HDL, and BMI were significantly associated with IS (3 of 86 SNPs tested). Out of 58 SNPs tested in AI, 1 LDL SNP was significantly associated with IS.
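The combining step named here, fixed-effects meta-analysis with inverse-variance weighting of study-specific log hazard ratios, can be sketched in a few lines. The cohort estimates and standard errors below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Fixed-effects meta-analysis with inverse-variance weighting:
# each study's log hazard ratio is weighted by 1/SE^2, and the
# pooled SE is the square root of the inverse of the summed weights.
def fixed_effects_meta(log_hrs, ses):
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical cohorts contributing to one ancestry-specific estimate
log_hrs = [0.18, 0.25, 0.10]
ses = [0.08, 0.12, 0.10]
beta, se = fixed_effects_meta(log_hrs, ses)
print(f"pooled HR = {math.exp(beta):.3f} (SE of log HR = {se:.3f})")
```

Precise cohorts (small SE) dominate the pooled estimate, which is the point of the weighting.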

Conclusions—Our analyses showing lack of replication in spite of reasonable power for many stroke SNPs and differing results by ancestry highlight the need to follow up on GWAS findings and conduct genetic association studies in diverse populations. We found modest IS associations with BMI and lipids SNPs, though these findings require confirmation.

SOURCE:

Circulation: Cardiovascular Genetics. 2012; 5: 210-216

 


New Functional Apolipoprotein B Variant Influencing Oxidized Low-Density Lipoprotein Levels But Not Cardiovascular Events: Genome-Wide Association Study

Reporter: Aviva Lev-Ari, PhD, RN

 

Genome-Wide Association Study Pinpoints a New Functional Apolipoprotein B Variant Influencing Oxidized Low-Density Lipoprotein Levels But Not Cardiovascular Events

AtheroRemo Consortium

Kari-Matti Mäkelä, BM, BSc, Ilkka Seppälä, MSc, Jussi A. Hernesniemi, MD, PhD, Leo-Pekka Lyytikäinen, MD, Niku Oksala, MD, PhD, DSc, Marcus E. Kleber, PhD, Hubert Scharnagl, PhD, Tanja B. Grammer, MD, Jens Baumert, PhD, Barbara Thorand, PhD, Antti Jula, MD, PhD, Nina Hutri-Kähönen, MD, PhD, Markus Juonala, MD, PhD, Tomi Laitinen, MD, PhD, Reijo Laaksonen, MD, PhD, Pekka J. Karhunen, MD, PhD, Kjell C. Nikus, MD, PhD, Tuomo Nieminen, MD, PhD, MSc, Jari Laurikka, MD, PhD, Pekka Kuukasjärvi, MD, PhD, Matti Tarkka, MD, PhD, Jari Viik, PhD, Norman Klopp, PhD, Thomas Illig, PhD, Johannes Kettunen, PhD, Markku Ahotupa, PhD, Jorma S.A. Viikari, MD, PhD, Mika Kähönen, MD, PhD, Olli T. Raitakari, MD, PhD, Mahir Karakas, MD, Wolfgang Koenig, MD, PhD, Bernhard O. Boehm, MD, Bernhard R. Winkelmann, MD, Winfried März, MD and Terho Lehtimäki, MD, PhD

Correspondence to Kari-Matti Mäkelä, Department of Clinical Chemistry, Finn-Medi 2, PO Box 2000, FI-33521 Tampere, Finland. E-mail kari-matti.makela@uta.fi

Abstract

Background—Oxidized low-density lipoprotein may be a key factor in the development of atherosclerosis. We performed a genome-wide association study on oxidized low-density lipoprotein and tested the impact of associated single-nucleotide polymorphisms (SNPs) on the risk factors of atherosclerosis and cardiovascular events.

Methods and Results—A discovery genome-wide association study was performed on a population of young healthy white individuals (N=2080), and the SNPs associated at P<5×10–8 were replicated in 2 independent samples (A: N=2912; B: N=1326). Associations with cardiovascular endpoints were also assessed with 2 additional clinical cohorts (C: N=1118; and D: N=808). We found 328 SNPs associated with oxidized low-density lipoprotein. The genetic variant rs676210 (Pro2739Leu) in apolipoprotein B was the proxy SNP behind all associations (P=4.3×10–136, effect size=13.2 U/L per allele). This association was replicated in the 2 independent samples (A and B, P=2.5×10–47 and 1.1×10–11, effect sizes=10.3 U/L and 7.8 U/L, respectively). In the meta-analyses of cohorts A, C, and D (excluding cohort B without angiographic data), the top SNP did not associate significantly with the age of onset of angiographically verified coronary artery disease (hazard ratio=1.00 [0.94–1.06] per allele), 3-vessel coronary artery disease (hazard ratio=1.03 [0.94–1.13]), or myocardial infarction (hazard ratio=1.04 [0.96–1.12]).

Conclusions—This novel genetic marker is an important factor regulating oxidized low-density lipoprotein levels but not a major genetic factor for the studied cardiovascular endpoints.

SOURCE:

Circulation: Cardiovascular Genetics. 2013; 6: 73-81

Published online before print December 17, 2012,

doi: 10.1161/CIRCGENETICS.112.964965

 


LDL, HDL, TG, ApoA1 and ApoB: Genetic Loci Associated With Plasma Concentration of these Biomarkers – A Genome-Wide Analysis With Replication

Reporter: Aviva Lev-Ari, PhD, RN

Genetic Loci Associated With Plasma Concentration of Low-Density Lipoprotein Cholesterol, High-Density Lipoprotein Cholesterol, Triglycerides, Apolipoprotein A1, and Apolipoprotein B Among 6382 White Women in Genome-Wide Analysis With Replication

Daniel I. Chasman, PhD,* Guillaume Paré, MD, MS,* Robert Y.L. Zee, PhD, MPH, Alex N. Parker, PhD, Nancy R. Cook, ScD, Julie E. Buring, ScD, David J. Kwiatkowski, MD, PhD, Lynda M. Rose, MS, Joshua D. Smith, BS, Paul T. Williams, PhD, Mark J. Rieder, PhD, Jerome I. Rotter, MD, Deborah A. Nickerson, PhD, Ronald M. Krauss, MD, Joseph P. Miletich, MD and Paul M. Ridker, MD, MPH

Author Affiliations

From the Center for Cardiovascular Disease Prevention (D.I.C., G.P., R.Y.L.Z., N.R.C., J.E.B., L.M.R., P.M.R.) and Donald W. Reynolds Center for Cardiovascular Research (D.I.C., G.P., R.Y.L.Z., N.R.C., D.J.K., P.M.R.), Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass; Amgen, Inc, Cambridge, Mass (A.N.P., J.M.P.); Department of Genome Sciences, University of Washington, Seattle, Wash (J.D.S., M.J.R., D.A.N.); Life Science Division, Lawrence Berkeley National Laboratory, Berkeley, Calif (P.T.W., R.M.K.); Medical Genetics Institute, Cedars-Sinai Medical Center, Los Angeles, Calif (J.I.R.); and Children’s Hospital Oakland Research Institute, Oakland, Calif (R.M.K.).

Correspondence to Daniel I. Chasman, Center for Cardiovascular Disease Prevention, Brigham and Women’s Hospital, 900 Commonwealth Ave E, Boston, MA 02215. E-mail dchasman@rics.bwh.harvard.edu

Abstract

Background— Genome-wide genetic association analysis represents an opportunity for a comprehensive survey of the genes governing lipid metabolism, potentially revealing new insights or even therapeutic strategies for cardiovascular disease and related metabolic disorders.

Methods and Results— We have performed large-scale, genome-wide genetic analysis among 6382 white women with replication in 2 cohorts of 970 additional white men and women for associations between common single-nucleotide polymorphisms and low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, triglycerides, apolipoprotein (Apo) A1, and ApoB. Genome-wide associations (P<5×10−8) were found at the PCSK9 gene, the APOB gene, the LPL gene, the APOA1-APOA5 locus, the LIPC gene, the CETP gene, the LDLR gene, and the APOE locus. In addition, genome-wide associations with triglycerides at the GCKR gene confirm and extend emerging links between glucose and lipid metabolism. Still other genome-wide associations at the 1p13.3 locus are consistent with emerging biological properties for a region of the genome, possibly related to the SORT1 gene. Below genome-wide significance, our study provides confirmatory evidence for associations at 5 novel loci with low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, or triglycerides reported recently in separate genome-wide association studies. The total proportion of variance explained by common variation at the genome-wide candidate loci ranges from 4.3% for triglycerides to 12.6% for ApoB.
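The "proportion of variance explained" figures quoted at the end of this paragraph are conventionally obtained by summing per-SNP contributions under an additive model with Hardy-Weinberg genotype frequencies. A minimal sketch, with hypothetical allele frequencies and per-allele effects (not values from the study):

```python
# Under an additive model, an independent SNP with risk-allele
# frequency p and per-allele effect beta (on a trait standardized
# to unit variance) explains 2*p*(1-p)*beta^2 of the trait variance;
# summing over independent loci gives the total.
def variance_explained(freqs_betas):
    return sum(2 * p * (1 - p) * b**2 for p, b in freqs_betas)

snps = [(0.30, 0.25), (0.10, 0.40), (0.45, 0.15)]  # (allele freq, beta)
print(f"{variance_explained(snps):.3f}")  # fraction of trait variance
```

Note the additivity assumes the loci are independent (not in linkage disequilibrium); correlated SNPs would require a joint model.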

Conclusion— Genome-wide associations at the GCKR gene and near the SORT1 gene, as well as confirmatory associations at 5 additional novel loci, suggest emerging biological pathways for lipid metabolism among white women.

SOURCE:

Circulation: Cardiovascular Genetics. 2008; 1: 21-30

doi: 10.1161/CIRCGENETICS.108.773168


North Americans With Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy: Genomics of Ventricular arrhythmias, A-Fib, Right Ventricular Dysplasia, Cardiomyopathy – Comprehensive Desmosome Mutation Analysis

Reporter: Aviva Lev-Ari, PhD, RN

Genomics of Ventricular arrhythmias, A-Fib, Right Ventricular Dysplasia, Cardiomyopathy – Comprehensive Desmosome Mutation Analysis in North Americans With Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy

A. Dénise den Haan, MD, Boon Yew Tan, MBChB, Michelle N. Zikusoka, MD, Laura Ibañez Lladó, MS, Rahul Jain, MD, Amy Daly, MS, Crystal Tichnell, MGC, Cynthia James, PhD, Nuria Amat-Alarcon, MS, Theodore Abraham, MD, Stuart D. Russell, MD, David A. Bluemke, MD, PhD, Hugh Calkins, MD, Darshan Dalal, MD, PhD and Daniel P. Judge, MD

Author Affiliations

From the Department of Medicine/Cardiology (A.D.d.H., B.Y.T., M.N.Z., L.I.L., R.J., A.D., C.T., C.J., N.A.-A., T.A., S.D.R., H.C., D.D., D.P.J.), Johns Hopkins University School of Medicine, Baltimore, Md; Department of Cardiology, Division of Heart and Lungs (A.D.d.H.), University Medical Center Utrecht, Utrecht, The Netherlands; and National Institutes of Health, Radiology and Imaging Sciences (D.A.B.), Bethesda, Md.

Correspondence to Daniel P. Judge, MD, Johns Hopkins University, Division of Cardiology, Ross 1049; 720 Rutland Avenue, Baltimore, MD 21205. E-mail djudge@jhmi.edu

Abstract

Background— Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is an inherited disorder typically caused by mutations in components of the cardiac desmosome. The prevalence and significance of desmosome mutations among patients with ARVD/C in North America have not been described previously. We report comprehensive desmosome genetic analysis for 100 North Americans with clinically confirmed or suspected ARVD/C.

Methods and Results— In 82 individuals with ARVD/C and 18 people with suspected ARVD/C, DNA sequence analysis was performed on PKP2, DSG2, DSP, DSC2, and JUP. In those with ARVD/C, 52% harbored a desmosome mutation. A majority of these mutations occurred in PKP2. Notably, 3 of the individuals studied have a mutation in more than 1 gene. Patients with a desmosome mutation were more likely to have experienced ventricular tachycardia (73% versus 44%), and they presented at a younger age (33 versus 41 years) compared with those without a desmosome mutation. Men with ARVD/C were more likely than women to carry a desmosome mutation (63% versus 38%). A mutation was identified in 5 of 18 patients (28%) with suspected ARVD. In this smaller subgroup, there were no significant phenotypic differences identified between individuals with a desmosome mutation compared with those without a mutation.

Conclusions— Our study shows that in 52% of North Americans with ARVD/C a mutation in one of the cardiac desmosome genes can be identified. Compared with those without a desmosome gene mutation, individuals with a desmosome gene mutation had earlier-onset ARVD/C and were more likely to have ventricular tachycardia.

SOURCE:

Circulation: Cardiovascular Genetics. 2009; 2: 428-435

Published online before print June 3, 2009,

doi: 10.1161/CIRCGENETICS.109.858217

 


Heart and Aging Research in Genomic Epidemiology: 1700 MIs and 2300 coronary heart disease events among about 29 000 eligible patients: Design of Prospective Meta-Analyses of Genome-Wide Association Studies From 5 Cohorts

Reporter: Aviva Lev-Ari, PhD, RN

 

Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium

Heart and Aging Research in Genomic Epidemiology: 1700 MIs and 2300 coronary heart disease events among about 29 000 eligible patients: Design of Prospective Meta-Analyses of Genome-Wide Association Studies From 5 Cohorts

Bruce M. Psaty, MD, PhD, Christopher J. O’Donnell, MD, MPH, Vilmundur Gudnason, MD, PhD, Kathryn L. Lunetta, PhD, Aaron R. Folsom, MD, Jerome I. Rotter, MD, André G. Uitterlinden, PhD, Tamara B. Harris, MD, Jacqueline C.M. Witteman, PhD, Eric Boerwinkle, PhD and on Behalf of the CHARGE Consortium

Author Affiliations

From the Cardiovascular Health Research Unit, Departments of Medicine, Epidemiology, and Health Services (B.M.P.), University of Washington; Center for Health Studies, Group Health (B.M.P.), Seattle, Wash; the National Heart, Lung and Blood Institute and the Framingham Heart Study (C.J.O.D.), Framingham, Mass; Icelandic Heart Association and the Department of Cardiovascular Genetics (V.G.), University of Iceland, Reykjavik, Iceland; Department of Biostatistics (K.L.), Boston University School of Public Health, Mass; Division of Epidemiology and Community Health (A.R.F.), University of Minnesota, Minneapolis; Medical Genetics Institute (J.I.R.), Cedars-Sinai Medical Center, Los Angeles, Calif; Departments of Internal Medicine (A.G.U.) and Epidemiology (A.G.U., J.C.M.W.), Erasmus Medical Center, Rotterdam, The Netherlands; Laboratory of Epidemiology, Demography, and Biometry (T.B.H.), Intramural Research Program, National Institute on Aging, Bethesda, Md; and Human Genetics Center and Division of Epidemiology (E.B.), University of Texas, Houston.

Guest editor for this article was Elizabeth R. Hauser, PhD.

Abstract

Background— The primary aim of genome-wide association studies is to identify novel genetic loci associated with interindividual variation in the levels of risk factors, the degree of subclinical disease, or the risk of clinical disease. The requirement for large sample sizes and the importance of replication have served as powerful incentives for scientific collaboration.

Methods— The Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium was formed to facilitate genome-wide association study meta-analyses and replication opportunities among multiple large population-based cohort studies, which collect data in a standardized fashion and represent the preferred method for estimating disease incidence. The design of the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium includes 5 prospective cohort studies from the United States and Europe: the Age, Gene/Environment Susceptibility—Reykjavik Study, the Atherosclerosis Risk in Communities Study, the Cardiovascular Health Study, the Framingham Heart Study, and the Rotterdam Study. With genome-wide data on a total of about 38 000 individuals, these cohort studies have a large number of health-related phenotypes measured in similar ways. For each harmonized trait, within-cohort genome-wide association study analyses are combined by meta-analysis. A prospective meta-analysis of data from all 5 cohorts, with a properly selected level of genome-wide statistical significance, is a powerful approach to finding genuine phenotypic associations with novel genetic loci.

Conclusions— The Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium and collaborating non-member studies or consortia provide an excellent framework for the identification of the genetic determinants of risk factors, subclinical-disease measures, and clinical events.

Example of Coronary Heart Disease

The cohort-study methods papers provide detail about many of the phenotypes listed in Table 2. For coronary heart disease, investigators knowledgeable about the phenotype in each study decided to focus on fatal and nonfatal myocardial infarction (MI) as the primary outcome because the MI criteria differed in only trivial ways among the studies. There were some minor differences in the definition of the composite outcome of MI, fatal coronary heart disease, and sudden death, which became the secondary outcome. Only subjects at risk for an incident event were included in the analysis. MI survivors whose DNA was drawn after the event were not eligible. The primary analysis was restricted to Europeans or European Americans. Patients entered the analysis at the time of the DNA blood draw, and were followed until an event, death, loss to follow-up, or the last visit. The main recommendations of the Analysis Committee were adopted, and a threshold of 5×10−8 was selected for genome-wide statistical significance. Analyses in progress include about 1700 MIs and 2300 coronary heart disease events among about 29 000 eligible patients. Each cohort conducted its own analysis, and results were uploaded to a secure share site for the fixed-effects meta-analysis. Even with this number of events (Supplemental Figure 2), power is good only for relatively high minor allele frequencies (>0.25) and large relative risks (>1.3).
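The 5×10−8 threshold adopted here for genome-wide significance is commonly motivated as a Bonferroni correction of a 0.05 family-wise error rate over roughly one million independent common-variant tests. A minimal check of that arithmetic (the 2.5 million figure below is a hypothetical SNP count for illustration):

```python
# Genome-wide significance as a Bonferroni correction: a 0.05
# family-wise error rate spread over ~1 million independent tests.
alpha = 0.05
n_independent_tests = 1_000_000
threshold = alpha / n_independent_tests
print(threshold)  # ~5e-08

# At this threshold, the expected number of chance hits across,
# say, 2.5 million tested SNPs stays well below one.
expected_false_positives = 2_500_000 * threshold
print(round(expected_false_positives, 3))
```

The same threshold applied jointly to all cohorts is what makes the prospective meta-analysis, rather than a two-stage discovery/replication split, the more powerful design discussed below.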

The authors had full access to and take full responsibility for the integrity of the data. All authors have read and agree to the manuscript as written.

Discussion

In thousands of published papers, the 5 CHARGE cohort studies and many of the collaborating studies have already characterized the risk factors for and the incidence and prognosis of a variety of aging-related and cardiovascular conditions. The analysis of the incident MI, for instance, is free from the survival bias typically associated with cross-sectional or case-control studies. The methodologic advantages of the prospective population-based cohort design, the similarity of phenotypes across 5 studies, the availability of genome-wide genotyping data in each cohort, and the need for large sample sizes to provide reliable estimates of genotype-phenotype associations have served as the primary incentives for the formation of the CHARGE consortium, which includes GWAS data on about 38 000 individuals. The consortium effort relies on collaborative methods that are similar to those used by the individual contributing cohorts.

Phenotype experts who know the studies and the data well are responsible for phenotype standardization across cohorts. The coordinated, prospectively planned meta-analyses of CHARGE provide results that are virtually identical to a cohort-adjusted pooled analysis of individual-level data. This approach, a within-study analysis followed by a between-study meta-analysis, avoids the human subjects issues associated with individual-level data sharing.

Editors, reviewers, and readers expect replication as the standard in science.6 The finding of a genetic association in one population with evidence for replication in multiple independent populations provides moderate assurance against false-positive reports and helps to establish the validity of the original finding. In a single experiment, the discovery-replication structure is traditionally embodied in a 2-stage design. The CHARGE consortium includes up to 5 independent replicate samples as well as additional collaborating studies for some phenotype working groups, so that it would have been possible to set up analysis plans within CHARGE to mimic the traditional 2-stage design for replication. For instance, the 2 largest cohorts could have served as the discovery set and the others as the replication set. However, attaining the extremely small probability values expected in GWAS requires large sample sizes. For any phenotype, a prospective meta-analysis of all participating cohorts, with a properly selected level of genome-wide statistical significance to minimize the chance of false-positives, is the most powerful approach to finding new genuine associations for genetic loci.25 When findings narrowly miss the prespecified significance threshold, genotyping individuals in other independent populations provides additional evidence about the association. For findings that substantially exceed pre-established significance thresholds, the results of a CHARGE meta-analysis effectively provide evidence of a multistudy replication.

The effort to assemble and manage the CHARGE consortium has provided some interesting and unanticipated challenges. Participating cohorts often had relationships with outside study groups that predated the formation of CHARGE. Timelines for genotyping and imputation have shifted. Purchases of new computer systems for the volume of work were sometimes necessary. Each cohort came to the consortium with their own traditions for methods of analysis, organization, and authorship policies that, while appropriate for their own work, were not always optimal for collaboration with multiple external groups. Within each cohort, the investigators had often formed working groups that divided up the large number of available phenotypes in ways that made sense locally but did not necessarily match the configuration that had been adopted by other cohorts. The Research Steering Committee has attempted to create a set of CHARGE working groups that accommodate the needs and the conventions of the various cohorts. Transparency, disclosure, and professional collaborative behavior by all participating investigators have been essential to the process.

Resource limitations are another challenge. Grant applications that funded the original single-study genome-wide genotyping effort typically imagined a much simpler design. The CHS whole-genome study had as its primary aim, for instance, the analysis of data on 3 endpoints, coronary disease, stroke and heart failure. With a score of active phenotype working groups, the CHARGE collaboration broadened the scope of the short-term work well beyond initial expectations for all the participating cohorts.

One of the premier challenges has been communications among scores of investigators at a dozen sites. CHS and ARIC are themselves multi-site studies. To be successful, the CHARGE collaboration has required effective communications: (1) within each cohort; (2) between cohorts; (3) within the CHARGE working groups; and (4) among the major CHARGE committees. In addition to the traditional methods of conference calls and email, the CHARGE “wiki,” set up by Dr J. Bis (Seattle, Wash), has provided a crucial and highly functional user-driven website for calendars, minutes, guidelines, working group analysis plans, manuscript proposals, and other documents. In the end, there is no substitute for face-to-face meetings, especially at the beginning of the collaboration, and this complex meta-organization has benefited from several CHARGE-wide meetings.

The major emerging opportunity is the collaboration with other studies and consortia. Many working groups have already incorporated nonmember studies into their efforts. Several working groups have coordinated submissions of initial manuscripts with the parallel submission of manuscripts from other studies or consortia. Several working groups have embarked on plans for joint meta-analyses between CHARGE and other consortia. CHARGE has tried to acknowledge and reward the efforts of champions, who assume leadership responsibility for moving these large complex projects forward and who are often hard-working young investigators, the key to the future success of population science.

The CHARGE Consortium represents an innovative model of collaborative research conducted by research teams that know well the strengths, the limitations, and the data from 5 prospective population-based cohort studies. By leveraging the dense genotyping, deep phenotyping and the diverse expertise, prospective meta-analyses are underway to identify and replicate the major common genetic determinants of risk factors, measures of subclinical disease, and clinical events for cardiovascular disease and aging.

SOURCE:

Circulation: Cardiovascular Genetics. 2009; 2: 73-80

doi: 10.1161/CIRCGENETICS.108.829747

 


Species-specific Genetic Barcodes generated by Life Tech’s Capillary Electrophoresis Sequencers

Reporter: Aviva Lev-Ari, PhD, RN

Life Tech said that it has also partnered with the Canadian Centre for DNA Barcoding for the iBOL project, a biodiversity study that aims to genetically catalog 500,000 species by late 2015 and 5 million in total.

Project researchers will use Life Tech’s capillary electrophoresis sequencers to generate species-specific genetic barcodes, which will be deposited in a reference library called Barcode of Life Data System. The partnership will focus on a project to study insects around the world and another one to study biodiversity patterns in Central and South America.

In addition, Life Tech and the center will work on developing metagenomic barcoding applications using the PGM sequencer.


SOURCE

http://www.genomeweb.com//node/1321146


Cardiology, Genomics and Individualized Heart Care: Framingham Heart Study (65 y-o study) & Jackson Heart Study (15 y-o study)

Cardiology, Genomics and Individualized Heart Care

Curator: Aviva Lev-Ari, PhD, RN

Article ID #90: Cardiology, Genomics and Individualized Heart Care: Framingham Heart Study (65 y-o study) & Jackson Heart Study (15 y-o study). Published on 12/1/2014

WordCloud Image Produced by Adam Tubman

 

The topic of Cardiology, Genomics and Individualized Heart Care is being developed in the following forthcoming e-Book on a related subject matter:

Curators: Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

This e-Book has the following Parts:

PART 1
Genomics and Medicine

Introduction to Volume Three
1.1: Genomics and Medicine: The Physician’s View
1.2: Ribozymes and RNA Machines – Work of Jennifer A. Doudna
1.3: Genomics and Medicine: The Geneticist’s View
1.4: Genomics in Medicine – Establishing a Patient-Centric View of Genomic Data

PART 2
Epigenetics- Modifiable Factors Causing Cardiovascular Diseases

2.1 Diseases Etiology

2.1.1 Environmental Contributors Implicated as Causing Cardiovascular Diseases
2.1.2 Diet: Solids and Fluid Intake
2.1.3 Physical Activity and Prevention of Cardiovascular Diseases
2.1.4 Psychological Stress and Mental Health: Risk for Cardiovascular Diseases
2.1.5 Correlation between Cancer and Cardiovascular Diseases
2.1.6 Medical Etiologies for Cardiovascular Diseases: Evidence-based Medicine – Leading DIAGNOSES of Cardiovascular Diseases, Risk Biomarkers and Therapies
2.1.7 Signaling Pathways
2.1.8 Proteomics and Metabolomics

2.2 Assessing Cardiovascular Disease with Biomarkers

2.2.1 Issues in Genomics of Cardiovascular Diseases
2.2.2 Endothelium, Angiogenesis, and Disordered Coagulation
2.2.3 Hypertension BioMarkers
2.2.4 Inflammatory, Atherosclerotic and Heart Failure Markers
2.2.5 Myocardial Markers

2.3  Therapeutic Implications: Focus on Ca(2+) signaling, platelets, endothelium

2.3.1 The Centrality of Ca(2+) Signaling and Cytoskeleton Involving Calmodulin Kinases and Ryanodine Receptors

2.3.2 Platelets in Translational Research – 2

2.3.3 The Final Considerations of the Role of Platelets and Platelet Endothelial Reactions in Atherosclerosis

2.3.4 Nitric Oxide Synthase Inhibitors (NOS-I)

2.3.5 Resistance to Receptor of Tyrosine Kinase

2.3.6 Oxidized Calcium Calmodulin Kinase and Atrial Fibrillation

2.3.7 Advanced Topics in Sepsis and the Cardiovascular System at its End Stage

2.4 Comorbidity of Diabetes and Aging

PART 3
Determinants of Cardiovascular Diseases
Genetics, Heredity and Genomics Discoveries

Introduction
3.1 Why cancer cells contain abnormal numbers of chromosomes (Aneuploidy)
3.2 Functional Characterization of Cardiovascular Genomics: Disease Case Studies @ 2013 ASHG
3.3 Leading DIAGNOSES of Cardiovascular Diseases covered in Circulation: Cardiovascular Genetics, 3/2010 – 3/2013
3.4  Commentary on Biomarkers for Genetics and Genomics of Cardiovascular Disease

PART 4
Individualized Medicine Guided by Genetics and Genomics Discoveries

4.1 Preventive Medicine: Cardiovascular Diseases
4.2 Gene-Therapy for Cardiovascular Diseases
4.3 Congenital Heart Disease/Defects
4.4 Pharmacogenomics for Cardiovascular Diseases

SOURCE

http://pharmaceuticalintelligence.com/biomed-e-books/series-a-e-books-on-cardiovascular-diseases/volume-three-etiologies-of-cardiovascular-diseases-epigenetics-genetics-genomics/

The Next Frontier in Heart Care

Research Aims to Personalize Treatment With Genetics

Nov. 25, 2013 7:18 p.m. ET

VIEW VIDEO

http://online.wsj.com/news/articles/SB10001424052702304281004579220373600912930#!

Two influential heart studies are joining forces to bring the power of genetics and other 21st century tools to battle against heart disease and stroke. Ron Winslow and study co-director Dr. Vasan Ramachandran explain. Photo: Shubhangi Ganeshrao Kene/Corbis.

Scientists from two landmark heart-disease studies are joining forces to wield the power of genetics in battling the leading cause of death in the U.S.

Cardiologists have struggled in recent years to score major advances against heart disease and stroke. Although death rates have been dropping steadily since the 1960s, progress combating the twin diseases has plateaued by other measures.

Genetics has had a profound impact on cancer treatment in recent years. Now, heart-disease specialists hope genetics will reveal fresh insight into the interaction between a

  • person’s biology,
  • living habits and
  • medications

that can better predict who is at risk of a heart attack or stroke.

“There’s a promise of new treatments with this research,” said Daniel Jones, chancellor of the University of Mississippi and former principal investigator of the 15-year-old Jackson Heart Study, a collaborator in the new genetics initiative.


Prevention efforts also could improve with the help of genetics research, Dr. Jones said. For example, an estimated 75 million Americans currently have high blood pressure, or hypertension, but only about half of those are able to control it with medication. It can take months of trial-and-error for a doctor to get the right dose or combination of pills for a patient. Researchers hope genetic and other information might enable doctors to identify subgroups of hypertension that respond to specific treatments and target patients with an appropriate therapy.

Also collaborating on the genetics project is the 65-year-old Framingham Heart Study. Its breakthrough findings decades ago linked heart disease to such factors as smoking, high blood pressure and high cholesterol. Framingham findings have been a foundation of cardiovascular disease prevention policy for a half-century.

More than 15,000 people have participated in the Framingham study. The Jackson study, with more than 5,000 participants, was launched in 1998 to better understand risk factors in African-Americans, who were underrepresented in Framingham and who bear a higher burden of cardiovascular disease than the rest of the population. Both studies are funded by the National Heart, Lung, and Blood Institute, part of the National Institutes of Health.

Exactly how the collaboration, announced last week, will proceed hasn’t been determined. One promising area is the “biobank,” the collection of more than one million blood and other biological samples gathered during biennial checkups of Framingham study participants going back more than a half century.

The samples are stored in freezers in an underground earthquake-proof facility in Massachusetts, said Vasan Ramachandran, a Boston University scientist who takes over at the beginning of next year as principal investigator of the Framingham Heart Study. Another 40,000 samples from the Jackson study are kept in freezers in Vermont. By subjecting samples to DNA sequencing and other tests, researchers say they may be able to identify variations linked to progression of cardiovascular disease—or protection from it.

Each study is likely to enroll new participants as part of the collaboration to allow tracking of risk factors and diet and exercise habits, for instance, in real time instead of only during infrequent checkups.

Heart disease is linked to about 800,000 deaths a year in the U.S. In 2010, some 200,000 of those deaths could have been avoided, including more than 112,300 deaths among people younger than 65, according to a recent analysis by the Centers for Disease Control and Prevention. But those avoidable deaths reflected a 3.8% per year decline in mortality rates during the previous 10 years.

Now, widespread prevalence of obesity and diabetes threatens to undermine such gains. And a large gap remains between how white patients and minorities—especially African-Americans—benefit from effective strategies.

There have been few new transformative cardiovascular treatments since the mid-1980s to early 1990s, when a stream of large-scale trials of new agents ranging from clot-busters to treat heart attacks to the mega class of statins electrified the cardiology field with evidence of significant improvements in survival from the disease. One reason: Some of those remedies have proven tough to beat with new treatments.

What’s more, use of the current menu of medicines for reducing heart risk remains an imprecise art. Besides

  • blood pressure drugs,
  • cholesterol-lowering statins

also are widely prescribed. Drug-trial statistics show that preventing a single first heart attack in otherwise healthy patients can require prescribing a statin to scores of patients, but no one knows for sure who actually benefits and who doesn’t.

“It would be great if we could make some more paradigm-shifting discoveries,” said Michael Lauer, director of cardiovascular sciences at the NHLBI, which is a part of the National Institutes of Health.

Finding new treatments isn’t the only aim of the new project. “You could use existing therapies smarter,” said Joseph Loscalzo, chairman of medicine at Brigham and Women’s Hospital in Boston.

The American Heart Association launched the initiative and has committed $30 million to it over the next five years. The AHA sees the project as critical to its goal to achieve a 20% improvement in cardiovascular health in the U.S. while also reducing deaths from heart disease and stroke by 20% for the decade ending in 2020, said Nancy Brown, the nonprofit organization’s chief executive.

The Jackson study has already identified characteristics of cardiovascular risk among African-American patients “that may have promise for new insights” in a collaborative effort, said Adolfo Correa, professor of medicine and pediatrics at University of Mississippi Medical Center and interim director of the Jackson study.

For instance, there is a higher prevalence of obesity among Jackson participants than seen in the Framingham cohorts. Obesity is associated with high blood pressure, diabetes and cardiovascular risk. Diabetes is also more prevalent among blacks than whites.

But African-Americans of normal weight appear to have higher rates of hypertension and diabetes than whites of normal weight. “The question is, should [measures] for defining diabetes be different or the same for the [different] populations and are they associated with the same risk of cardiovascular disease?” said Dr. Correa. The collaboration, he said, may provide better comparisons.

Researchers, who plan to use tools other than genetics, think more might be learned about blood pressure and heart and stroke risk by monitoring patients in real time using mobile devices rather than taking readings only in periodic office visits. For example, high blood pressure during sleep or spikes during exercise could indicate risks that don’t show up in a routine measurement in the doctors’ office.

A big challenge is making sense of the huge amounts of data involved in sequencing DNA and linking it to

  • medical records,
  • diet and
  • exercise habits and other variables that influence risk.

“The analytical methods for sorting out these complex relationships are still in evolution,” said Dr. Loscalzo, of Brigham and Women’s Hospital. “The cost of sequencing is getting cheaper and cheaper. The hard part is analyzing the data.”

Write to Ron Winslow at ron.winslow@wsj.com

SOURCE

http://online.wsj.com/news/articles/SB10001424052702304281004579220373600912930#!

The e-Reader is advised to review closely related articles in

http://pharmaceuticalintelligence.com/biomed-e-books/series-a-e-books-on-cardiovascular-diseases/volume-three-etiologies-of-cardiovascular-diseases-epigenetics-genetics-genomics/

Read Full Post »

Risk of Bias in Translational Science

Author: Larry H. Bernstein, MD, FCAP

and

Curator: Aviva Lev-Ari, PhD, RN

 

Assessment of risk of bias in translational science

Andre Barkhordarian1, Peter Pellionisz2, Mona Dousti1, Vivian Lam1, Lauren Gleason1, Mahsa Dousti1, Josemar Moura3 and Francesco Chiappelli1,4*

1Oral Biology & Medicine, School of Dentistry, UCLA, Evidence-Based Decisions Practice-Based Research Network, Los Angeles, USA

2Pre-medical program, UCLA, Los Angeles, CA

3School of Medicine, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

4Evidence-Based Decisions Practice-Based Research Network, UCLA School of Dentistry, Los Angeles, CA

Journal of Translational Medicine 2013, 11:184   http://dx.doi.org/10.1186/1479-5876-11-184
http://www.translational-medicine.com/content/11/1/184

This is an Open Access article distributed under the terms of the Creative Commons Attribution License 
http://creativecommons.org/licenses/by/2.0

Abstract

Risk of bias in translational medicine may take one of three forms:

  1. a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),
  2. a systematic defect of design that leads to estimates of experimental and control groups, and of effect sizes that substantially deviate from true values (e.g., information bias), and
  3. a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).

Risk of bias can seriously adulterate the internal and the external validity of a clinical study, and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

Background

As recently discussed in this journal [1], translational medicine is a rapidly evolving field. In its most recent conceptualization, it consists of two primary domains:

  • translational research proper and
  • translational effectiveness.

This distinction arises from a cogent articulation of the fundamental construct of translational medicine in particular, and of translational health care in general.

The Institute of Medicine’s Clinical Research Roundtable conceptualized the field as being composed of two fundamental “blocks”:

  • one translational “block” (T1) was defined as “…the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention and their first testing in humans…”, and
  • the second translational “block” (T2) was described as “…the translation of results from clinical studies into everyday clinical practice and health decision making…” [2].

These are clearly two distinct facets of one meta-construct, as outlined in Figure 1. As signaled by others, “…Referring to T1 and T2 by the same name—translational research—has become a source of some confusion. The 2 spheres are alike in name only. Their goals, settings, study designs, and investigators differ…” [3].

1479-5876-11-184-1  Fig 1. TM construct

Figure 1. Schematic representation of the meta-construct of translational health care in general, and translational medicine in particular, which consists of two fundamental constructs: the T1 “block” (as per Institute of Medicine’s Clinical Research Roundtable nomenclature), which represents the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention as well as their first testing in humans, and the T2 “block”, which pertains to translation of results from clinical studies into everyday clinical practice and health decision making [3]. The two “blocks” are inextricably intertwined because they jointly strive toward patient-centered outcomes research (PCOR) through the process of comparative effectiveness and efficacy research/review and analysis for clinical practice (CEERAP). The domain of each construct is distinct, since the “block” T1 is set in the context of a laboratory infrastructure within a nurturing academic institution, whereas the setting of “block” T2 is typically community-based (e.g., patient-centered medical/dental home/neighborhoods [4]; “communities of practice” [5]).

For the last five years at least, the Federal responsibilities for “block” T1 and T2 have been clearly delineated. The National Institutes of Health (NIH) predominantly concerns itself with translational research proper – the bench-to-bedside enterprise (T1); the Agency for Healthcare Research and Quality (AHRQ) focuses on the result-translation enterprise (T2). Specifically: “…the ultimate goal [of AHRQ] is research translation—that is, making sure that findings from AHRQ research are widely disseminated and ready to be used in everyday health care decision-making…” [6]. The terminology of translational effectiveness has emerged as a means of distinguishing the T2 block from T1.

Therefore, the bench-to-bedside enterprise pertains to translational research, and the result-translation enterprise describes translational effectiveness. The meta-construct of translational health care (viz., translational medicine) thus consists of these two fundamental constructs:

  • translational research and
  • translational effectiveness,

which have distinct purposes, protocols and products, while both converging on the same goal of new and improved means of

  • individualized patient-centered diagnostic and prognostic care.

It is important to note that the U.S. Patient Protection and Affordable Care Act (PPACA, 23 March 2010) has created an environment that facilitates the pursuit of translational health care because it emphasizes patient-centered outcomes research (PCOR). That is to say, it fosters the transaction between translational research (i.e., “block” T1)(TR) and translational effectiveness (i.e., “block” T2)(TE), and favors the establishment of communities of practice-research interaction. The latter, now recognized as practice-based research networks, incorporate three or more clinical practices in the community into

  • a community of practices network coordinated by an academic center of research.

Practice-based research networks may be a third “block” (T3)(PBTN) in translational health care: they could be conceptualized as a stepping-stone, a go-between linking bench-to-bedside translational research and result-translation translational effectiveness [7]. Alternatively, practice-based research networks represent the practical entities where the transaction between

  • translational research and translational effectiveness can most optimally be undertaken.

It is within the context of the practice-based research network that the process of bench-to-bedside can best seamlessly proceed, and it is within the framework of the practice-based research network that

  • the best evidence of results can be most efficiently translated into practice and
  • be utilized in evidence-based clinical decision-making, viz. translational effectiveness.

Translational effectiveness

As noted, translational effectiveness represents the translation of the best available evidence in the clinical practice to ensure its utilization in clinical decisions. Translational effectiveness fosters evidence-based revisions of clinical practice guidelines. It also encourages

  • effectiveness-focused,
  • patient-centered and
  • evidence-based clinical decision-making.

Translational effectiveness rests not only on the expertise of the clinical staff and the empowerment of patients, caregivers and stakeholders, but also, and

  • most importantly on the best available evidence [8].

The pursuit of the best available evidence is the foundation of

  • translational effectiveness and more generally of
  • translational medicine in evidence-based health care.

The best available evidence is obtained through a systematic process driven by

  • a research question/hypothesis articulated around clearly stated criteria that pertain to
  • the patient (P), the intervention (I) and its comparison (C) under consideration, the sought clinical outcome (O), within a given timeline (T) and clinical setting (S).

PICOTS is tested on the appropriate bibliometric sample, with tools of measurement designed to establish the level (e.g., CONSORT) and the quality of the evidence. Statistical and meta-analytical inferences, often enhanced by analyses of clinical relevance [9], converge into the formulation of the consensus of the best available evidence. Its dissemination to all stakeholders is key to increasing their health literacy, in order to ensure their full participation

  • in the utilization of the best available evidence in clinical decisions, viz., translational effectiveness.

To be clear, translational effectiveness – and, in the perspective discussed above, translational health care – is anchored on obtaining the best available evidence,

  • which emerges from highest quality research.
  • which is obtained when errors are minimized.

In an early conceptualization [10], errors in research were presented as

  • those situations that threaten the internal and the external validity of a research study –

that is, conditions that impede either the study’s reproducibility, or its generalization. In point of fact, threats to internal and external validity [10] represent specific aspects of systematic errors (i.e., bias) in the

  • research design,
  • methodology and
  • data analysis.

Thence emerged a branch of science that seeks to

  • understand,
  • control and
  • reduce risk of bias in research.

Risk of bias and the best available evidence

It follows that the best available evidence comes from research with the fewest threats to internal and to external validity – that is to say, the fewest systematic errors: the lowest risk of bias. Quality of research, as defined in the field of research synthesis [11], has become synonymous with

  • low bias and contained risk of bias [12-15].

Several years ago, the Cochrane group embarked on a new strategy for assessing the quality of research studies by examining potential sources of bias. Three original areas of potential bias in research were identified, which pertain to

(a) the sampling and the sample allocation process, to measurement, and to other related sources of errors (reliability of testing),

(b) design issues, including blinding, selection and drop-out, and design-specific caveats, and

(c) analysis-related biases.

A Risk of Bias tool was created (Cochrane Risk of Bias), which covered six specific domains:

1. selection bias,

2. performance bias,

3. detection bias,

4. attrition bias,

5. reporting bias, and

6. other research protocol-related biases.

Assessments were made within each domain by one or more items specific to certain aspects of the domain. Each item was scored in two distinct steps:

1. the support for judgment was intended to provide a succinct free-text description of the domain being queried;

2. each item was scored high, low, or unclear risk of material bias (defined here as “…bias of sufficient magnitude to have a notable effect on the results or conclusions…” [16]).

It was advocated that assessments across items in the tool should be critically summarized for each outcome within each report. These critical summaries were to inform the investigator so that the primary meta-analysis could be performed either

  • only on studies at low risk of bias, or for
  • the studies stratified according to risk of bias [16].

This is a form of acceptable sampling analysis designed to yield increased homogeneity of meta-analytical outcomes [17]. Alternatively, the homogeneity of the meta-analysis can be further enhanced by means of the more direct quality-effects meta-analysis inferential model [18].
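The two options described above, restricting the primary meta-analysis to studies at low risk of bias or stratifying the studies by their risk-of-bias rating, can be sketched with a simple inverse-variance fixed-effect pooling routine. The study data below are hypothetical, and fixed-effect pooling is only one of the available inferential models (the quality-effects model instead weights studies by their quality scores):

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooling of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical bibliome: (effect size, variance, assessed risk of bias)
studies = [
    (0.42, 0.04, "low"),
    (0.55, 0.09, "high"),
    (0.38, 0.02, "low"),
    (0.71, 0.05, "unclear"),
]

# Option 1: restrict the primary meta-analysis to low-risk studies
low = [(e, v) for e, v, rob in studies if rob == "low"]
pooled, se = pool_fixed_effect([e for e, _ in low], [v for _, v in low])

# Option 2: stratify the analysis by risk-of-bias rating
strata = {}
for e, v, rob in studies:
    strata.setdefault(rob, []).append((e, v))
pooled_by_stratum = {
    rob: pool_fixed_effect([e for e, _ in grp], [v for _, v in grp])
    for rob, grp in strata.items()
}
```

Either way, the risk-of-bias assessment, not the effect size, decides which studies enter (or dominate) the pooled estimate.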

Clearly, one among the major drawbacks of the Cochrane Risk of Bias tool is

  • the subjective nature of its assessment protocol.

In an effort to correct for this inherent weakness of the instrument, the Cochrane group produced

  • detailed criteria for making judgments about the risk of bias from each individual item [16], and
  • a requirement that judgments be made independently by at least two people, with any discrepancies resolved by discussion [16].

This approach to increase the reliability of measurement in research synthesis protocols

  • is akin to that described by us [19,20] and by AHRQ [21].

In an effort to aid clinicians and patients in making effective health care related decisions, AHRQ developed an alternative Risk of Bias instrument for enabling systematic evaluation of evidence reporting [22]. The AHRQ Risk of Bias instrument was created to monitor four primary domains:

1. risk of bias: design, methodology, analysis (scored low, medium, or high)

2. consistency: extent of similarity in effect sizes across studies within a bibliome (scored consistent, inconsistent, or unknown)

3. directness: unidirectional link between the interventions of interest and the sought outcome, as opposed to multiple links in a causal chain (scored direct or indirect)

4. precision: extent of certainty for the estimate of effect with respect to the outcome (scored precise or imprecise)

In addition, four secondary domains were identified:

a. Dose response association: pattern of a larger effect with greater exposure (Present/Not Present/Not Applicable or Not Tested)

b. Confounders: consideration of confounding variables (Present/Absent)

c. Strength of association: likelihood that the observed effect is large enough that it cannot have occurred solely as a result of bias from potential confounding factors (Strong/Weak)

d. Publication bias

The AHRQ Risk of Bias instrument is also designed to yield an overall grade of the estimated risk of bias in quality reporting:

•Strength of Evidence Grades (scored as high – moderate – low – insufficient)

This global assessment, in addition to incorporating the assessments above, also rates:

–major benefit

–major harm

–jointly benefits and harms

–outcomes most relevant to patients, clinicians, and stakeholders
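As a concrete illustration, the domains and scoring levels of the AHRQ instrument listed above can be captured in a small data structure. The Python names are ours, and the two levels given for publication bias are an assumption (the text leaves that domain unscored):

```python
# Illustrative encoding of the AHRQ Risk of Bias instrument's domains
# and allowed scores, as described in the text.
AHRQ_PRIMARY = {
    "risk_of_bias": ("low", "medium", "high"),
    "consistency": ("consistent", "inconsistent", "unknown"),
    "directness": ("direct", "indirect"),
    "precision": ("precise", "imprecise"),
}
AHRQ_SECONDARY = {
    "dose_response": ("present", "not present", "not applicable/not tested"),
    "confounders": ("present", "absent"),
    "strength_of_association": ("strong", "weak"),
    "publication_bias": ("suspected", "undetected"),  # assumed levels
}
STRENGTH_OF_EVIDENCE = ("high", "moderate", "low", "insufficient")

def validate_assessment(assessment):
    """Check that a report's ratings use only the instrument's levels."""
    for domain, score in assessment.items():
        allowed = AHRQ_PRIMARY.get(domain) or AHRQ_SECONDARY.get(domain)
        if allowed is None or score not in allowed:
            raise ValueError(f"invalid rating {score!r} for {domain!r}")
    return True
```

A structure like this makes explicit that the instrument's output is categorical, which is exactly the property the revision described below addresses by rescaling the questions numerically.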

The AHRQ Risk of Bias instrument suffers from the same two major limitations as the Cochrane tool:

1. lack of formal psychometric validation, like most other tools in the field [21], and

2. providing a subjective and not quantifiable assessment.

To begin the process of engaging in a systematic dialectic of the two instruments in terms of their respective construct and content validity, it is necessary

  • to validate each for reliability and validity either by means of the classic psychometric theory or generalizability (G) theory, which allows
  • the simultaneous estimation of multiple sources of measurement error variance (i.e., facets)
  • while generalizing the main findings across the different study facets.

G theory is particularly useful in clinical care analysis of this type, because it permits the assessment of the reliability of clinical assessment protocols.

  • the reliability and minimal detectable changes across varied combinations of these facets are then simply calculated [23], but
  • it is recommended that G theory determination follow classic theory psychometric assessment.

Therefore, we have commenced a process of revising the AHRQ Risk of Bias instrument by rendering the questions in the primary domains quantifiable (scaled 1–4),

  • which established the intra-rater reliability (r = 0.94, p < 0.05), and
  • the criterion validity (r = 0.96, p < 0.05) for this instrument (Figure 2).


Figure 2. Proportion of shared variance in criterion validity (A) and inter-rater reliability (B) in the AHRQ Risk of Bias instrument revised as described.
Two raters were trained and standardized [20] with the revised AHRQ Risk of Bias instrument and with the R-Wong instrument, which has been previously validated [24]. Each rater independently produced ratings on a sample of research reports with both instruments on two separate occasions, 1–2 months apart. The Pearson correlation coefficient was used to compute the respective associations. The figure shows Venn diagrams to illustrate the intersection between each two sets of data used in the correlations. The overlap between the sets in each panel represents the proportion of shared variance for that correlation. The percent of unexplained variance is given in the insert of each panel.
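The Venn-diagram reading of Figure 2 follows directly from the Pearson coefficient: the proportion of shared variance is r squared, and the unexplained variance is its complement. A minimal sketch using the coefficients reported above (the underlying rating data are not reproduced here):

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def shared_variance(r):
    """Proportion of variance two measures share: r squared."""
    return r * r

# Coefficients reported for the revised AHRQ instrument
for label, r in [("reliability", 0.94), ("criterion validity", 0.96)]:
    print(f"{label}: shared variance {shared_variance(r):.2%}, "
          f"unexplained {1 - shared_variance(r):.2%}")
```

So r = 0.94 corresponds to about 88% shared variance and r = 0.96 to about 92%, which is what the overlap of each Venn diagram depicts.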

A similar revision of the Cochrane Risk of Bias tool may also yield promising validation data. G theory validation of both tools will follow. Together, these results will enable a critical and systematic dialectical comparison of the Cochrane and the AHRQ Risk of Bias measures.

Discussion

The critical evaluation of the best available evidence is essential to patient-centered care, because biased research findings are fundamentally invalid and potentially harmful to the patient. Depending upon the tool of measurement, the validity of an instrument in a study is obtained by means of criterion validity through correlation coefficients. Criterion validity refers to the extent to which one measure predicts the value of another measure or quality based on a previously well-established criterion. Other domains of validity, such as construct validity and content validity, are rather more descriptive than quantitative. Reliability, in contrast, describes the consistency of a measure, the extent to which a measurement is repeatable; it is commonly assessed quantitatively by correlation coefficients. Inter-rater reliability is rendered as a Pearson correlation coefficient between two independent readers, and establishes the equivalence of ratings produced by independent observers or readers. Intra-rater reliability is determined by repeated measurement performed by the same subject (rater/reader) at two different points in time, to assess the correlation or strength of association of the two sets of scores.

To establish the reliability of research quality assessment tools it is necessary, as we previously noted [20]:

•a) to train multiple readers in sharing a common view for the cognitive interpretation of each item. Readers must possess declarative knowledge (a factual form of information, static in nature): a certain depth of knowledge and understanding of the facts about which they are reviewing the literature. They must also have procedural knowledge (imperative knowledge that can be directly applied to a task): in this case, a clear understanding of the fundamental concepts of research methodology, design, analysis and inference.

•b) to train the readers to read and evaluate the quality of a set of papers independently and blindly. They must also be trained to self-monitor and self-assess their skills for the purpose of ensuring quality control.

•c) to refine the process until the inter-rater correlation coefficient and the Cohen coefficient of agreement are about 0.9 (over 81% shared variance). This establishes that the degree of attained agreement among well-trained readers is beyond chance.

•d) to obtain independent and blind reading assessments from readers on reports under study.

•e) to compute the mean and standard deviation of the scores for each question across the reports, and to repeat the process if the coefficient of variation is greater than 5% (i.e., to require less than 5% error among the readers across each question).
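Step e) lends itself to a short screening routine: compute the coefficient of variation of the readers' scores for each question, and flag any question whose disagreement exceeds the 5% threshold. A minimal sketch; the ratings, the question labels, and the function names below are illustrative, not from the article:

```python
import statistics

def coefficient_of_variation(scores):
    """CV for one question: standard deviation over mean of the scores
    that the independent readers assigned to that question."""
    return statistics.stdev(scores) / statistics.mean(scores)

def questions_to_rescore(ratings_by_question, threshold=0.05):
    """Flag the questions whose readers disagree by more than the
    threshold, i.e., the items for which the process is repeated."""
    return [q for q, scores in ratings_by_question.items()
            if coefficient_of_variation(scores) > threshold]

# Hypothetical ratings: three trained readers scoring each item 1-4
ratings = {
    "Q1": [4.0, 4.0, 4.0],  # perfect agreement, CV = 0
    "Q2": [3.0, 3.1, 2.9],  # about 3% disagreement, acceptable
    "Q3": [2.0, 3.0, 4.0],  # large disagreement, needs re-review
}
flagged = questions_to_rescore(ratings)
```

Only "Q3" is returned for re-review here; "Q2" passes because its CV (about 3.3%) stays under the 5% criterion.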

The quantification provided by instruments validated in this manner allows the quality of the research evidence, and its relative freedom from bias, to be analyzed by means of an acceptable sampling protocol. Acceptance sampling is a statistical procedure for determining whether a given lot (in this case, the evidence gathered from an identified set of published reports) should be accepted or rejected [12,25]. Acceptable sampling of the best available evidence can be obtained by:

•convention: accept the top 10 percentile of papers based on the score of the quality of the evidence (e.g., low Risk of Bias);

•confidence interval (CI95): accept the papers whose scores fall at or beyond the upper confidence limit at 95%, obtained with the mean and variance of the scores of the entire bibliome;

•statistical analysis: accept the papers that sustain sequential repeated Friedman analysis.
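Of the three acceptance rules, the confidence-interval criterion is the most mechanical. A sketch with hypothetical quality scores (higher score = lower risk of bias); the scores and the function name are illustrative assumptions:

```python
import statistics

def accept_by_ci95(scores):
    """Accept the reports whose quality score falls at or beyond the
    upper 95% confidence limit computed from the mean and variance of
    the scores of the entire bibliome."""
    values = list(scores.values())
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / len(values) ** 0.5
    upper = mean + 1.96 * sem  # upper limit of the CI95 around the mean
    return {paper for paper, s in scores.items() if s >= upper}, upper

# Hypothetical quality scores for a small bibliome
scores = {"A": 62, "B": 71, "C": 55, "D": 90, "E": 58, "F": 88, "G": 60}
accepted, cutoff = accept_by_ci95(scores)
```

With these numbers the cutoff lands near 80, so only reports "D" and "F" clear it; the percentile convention would instead simply rank the scores and keep the top decile.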

To be clear, the Friedman test is a non-parametric equivalent of the analysis of variance for repeated-measures designs. The screening follows the 4-E process outlined below:

•establishing a significant Friedman outcome, which indicates significant differences in scores among the individual reports being tested for quality;

•examining marginal means and standard deviations to identify inconsistencies, and to identify the uniformly strong reports across all the domains tested by the quality instrument

•excluding those reports that show quality weakness or bias

•executing the Friedman analysis again, and repeating the 4-E process as many times as necessary, in a statistical process akin to hierarchical regression, to eliminate the evidence reports that exhibit egregious weakness, based on the analysis of the marginal values, and to retain only the group of reports that harbors homogeneously strong evidence.
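The 4-E loop can be sketched with SciPy's Friedman test. The quality scores below are hypothetical, and the exclusion rule (drop the report with the lowest marginal mean, one at a time) is a simplification of the marginal-value inspection the text describes:

```python
from scipy.stats import friedmanchisquare

def friedman_4e(scores, alpha=0.05):
    """Iterative 4-E screening sketch: Establish a Friedman outcome
    across the reports, Examine marginal means, Exclude the weakest
    report, and Execute the analysis again, until the remaining
    reports no longer differ significantly in quality."""
    kept = dict(scores)
    while len(kept) > 2:  # the Friedman test needs at least 3 groups
        _, p = friedmanchisquare(*kept.values())
        if p >= alpha:  # no significant differences left: stop
            break
        weakest = min(kept, key=lambda r: sum(kept[r]) / len(kept[r]))
        del kept[weakest]
    return kept

# Hypothetical quality scores (one row per report, one column per
# domain of the quality instrument, scaled 1-4)
scores = {
    "report1": [4, 4, 3, 4, 4, 3],
    "report2": [4, 3, 4, 4, 3, 4],
    "report3": [1, 2, 1, 2, 1, 1],  # uniformly weak report
    "report4": [3, 4, 4, 3, 4, 4],
}
retained = friedman_4e(scores)
```

On this toy bibliome the first Friedman pass is significant, "report3" is excluded, and the second pass finds the three remaining reports homogeneously strong, so the loop stops.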

Taken together, and considering the domain and the structure of both tools, expectations are that these analyses will confirm that these instruments are two related entities, each measuring distinct aspects of bias. We anticipate that future research will establish that both tools assess complementary sub-constructs of one and the same archetype meta-construct of research quality.

References

  1. Jiang F, Zhang J, Wang X, Shen X: Important steps to improve translation from medical research to health policy. J Transl Med 2013, 11:33.

  2. Sung NS, Crowley WF Jr, Genel M, Salber P, Sandy L, Sherwood LM, Johnson SB, Catanese V, Tilson H, Getz K, Larson EL, Scheinberg D, Reece EA, Slavkin H, Dobs A, Grebb J, Martinez RA, Korn A, Rimoin D: Central challenges facing the national clinical research enterprise. JAMA 2003, 289:1278-1287.

  3. Woolf SH: The meaning of translational research and why it matters. JAMA 2008, 299(2):211-213.

  4. Chiappelli F: From translational research to translational effectiveness: the “patient-centered dental home” model. Dental Hypotheses 2011, 2:105-112.

  5. Maida C: Building communities of practice in comparative effectiveness research. Chapter 1. In Comparative effectiveness and efficacy research and analysis for practice (CEERAP): applications for treatment options in health care. Edited by Chiappelli F, Brant X, Cajulis C. Heidelberg: Springer-Verlag; 2012.

  6. Agency for Healthcare Research and Quality: Budget estimates for appropriations committees, fiscal year (FY) 2008: performance budget submission for congressional justification. Performance budget overview 2008. http://www.ahrq.gov/about/cj2008/cjweb08a.htm#Statement. Accessed 11 May 2013.

  7. Westfall JM, Mold J, Fagnan L: Practice-based research—“blue highways” on the NIH roadmap. JAMA 2007, 297:403-406.

  8. Chiappelli F, Brant X, Cajulis C: Comparative effectiveness and efficacy research and analysis for practice (CEERAP): applications for treatment options in health care. Heidelberg: Springer-Verlag; 2012.

  9. Dousti M, Ramchandani MH, Chiappelli F: Evidence-based clinical significance in health care: toward an inferential analysis of clinical relevance. Dental Hypotheses 2011, 2:165-177.

  10. Campbell D, Stanley J: Experimental and quasi-experimental designs for research. Chicago, IL: Rand-McNally; 1963.

  11. Littell JH, Corcoran J, Pillai V: Research synthesis reports and meta-analysis. New York, NY: Oxford University Press; 2008.

  12. Chiappelli F: The science of research synthesis: a manual of evidence-based research for the health sciences. Hauppauge, NY: NovaScience Publishers, Inc; 2008.

  13. Higgins JPT, Green S: Cochrane handbook for systematic reviews of interventions, version 5.0.1. Chichester, West Sussex, UK: John Wiley & Sons, The Cochrane Collaboration; 2008.

  14. Centre for Reviews and Dissemination: Systematic reviews: CRD’s guidance for undertaking reviews in health care. National Institute for Health Research (NIHR). York, UK: Centre for Reviews and Dissemination, University of York; 2009.

  15. McDonald KM, Chang C, Schultz E: Closing the quality gap: revisiting the state of the science. Summary report. AHRQ publication No. 12(13)-E017. Rockville, MD: Agency for Healthcare Research and Quality, U.S. Department of Health & Human Services; 2013.


Read Full Post »

Gene Expression: Algorithms for Protein Dynamics

Reporter:  Aviva Lev-Ari, PhD, RN

Stanford-developed algorithm reveals complex protein dynamics behind gene expression

BY KRISTA CONGER

Michael Snyder

In yet another coup for a research concept known as “big data,” researchers at the Stanford University School of Medicine have developed a computerized algorithm to understand the complex and rapid choreography of hundreds of proteins that interact in mind-boggling combinations to govern how genes are flipped on and off within a cell.

To do so, they coupled findings from 238 DNA-protein-binding experiments performed by the ENCODE project — a massive, multiyear international effort to identify the functional elements of the human genome — with a laboratory-based technique to identify binding patterns among the proteins themselves.

The analysis is sensitive enough to have identified many previously unsuspected, multipartner trysts. It can also be performed quickly and repeatedly to track how a cell responds to environmental changes or crucial developmental signals.

“At a very basic level, we are learning who likes to work with whom to regulate around 20,000 human genes,” said Michael Snyder, PhD, professor and chair of genetics at Stanford. “If you had to look through all possible interactions pair-wise, it would be ridiculously impossible. Here we can look at thousands of combinations in an unbiased manner and pull out important and powerful information. It gives us an unprecedented level of understanding.”

Snyder is the senior author of a paper describing the research published Oct. 24 in Cell. The lead authors are postdoctoral scholars Dan Xie, PhD, Alan Boyle, PhD, and Linfeng Wu, PhD.

Proteins control gene expression by either binding to specific regions of DNA, or by interacting with other DNA-bound proteins to modulate their function. Previously, researchers could only analyze two to three proteins and DNA sequences at a time, and were unable to see the true complexities of the interactions among proteins and DNA that occur in living cells.

The challenge resembled trying to figure out interactions in a crowded mosh pit by studying a few waltzing couples in an otherwise empty ballroom, and it has severely limited what could be learned about the dynamics of gene expression.

The ENCODE (Encyclopedia of DNA Elements) project was a five-year collaboration of more than 440 scientists in 32 labs around the world to reveal the complex interplay among regulatory regions, proteins and RNA molecules that governs when and how genes are expressed. The project has been generating a treasure trove of data for researchers to analyze for the last eight years.

In this study, the researchers combined data from genomics (a field devoted to the study of genes) and proteomics (which focuses on proteins and their interactions). They studied 128 proteins, called trans-acting factors, which are known to regulate gene expression by binding to regulatory regions within the genome. Some of the regions control the expression of nearby genes; others affect the expression of genes great distances away.

The researchers used 238 data sets generated by the ENCODE project to study the specific DNA sequences bound by each of the 128 trans-acting factors. But these factors aren’t monogamous; they bind many different sequences in a variety of protein-DNA combinations. Xie, Boyle and Snyder designed a machine-learning algorithm to analyze all the data and identify which trans-acting factors tend to be seen together and which DNA sequences they prefer.
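The idea of scanning all pairings at once, rather than pair by pair, can be illustrated with a toy co-occurrence count over binding sites. This is a simplified stand-in, not the machine-learning algorithm from the paper; the data layout (each factor mapped to the set of genomic regions it binds) is an assumption for illustration.

```python
from collections import Counter
from itertools import combinations

def cobinding_counts(binding):
    """Count, for every pair of trans-acting factors, how many genomic
    regions both factors bind (binding: {factor: set of region ids})."""
    counts = Counter()
    for a, b in combinations(sorted(binding), 2):
        shared = len(binding[a] & binding[b])
        if shared:
            counts[(a, b)] = shared
    return counts
```

In a toy input where FOS co-binds regions with both JUN and another factor, the counter surfaces every partnership at once, echoing the article's point that FOS has more partners than the canonical FOS-JUN pairing.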

Wu then performed immunoprecipitation experiments, which use antibodies to identify protein interactions in the cell nucleus. In this way, they were able to tell which proteins interacted directly with one another, and which were seen together because their preferred DNA binding sites were adjoining.

“Before our work, only the combination of two or three regulatory proteins were studied, which oversimplified how gene regulators collaborate to find their targets,” Xie said. “With our method we are able to study the combination of more than 100 regulators and see a much more complex structure of collaboration. For example, it had been believed that a key regulator of cell proliferation called FOS typically only works with JUN protein family members. We show, in addition to JUN, FOS has different partners under different circumstances. In fact, we found almost all the canonical combinations of two or three trans-acting factors have many more partners than we previously thought.”

To broaden their analysis, the researchers included data from other sources that explored protein-binding patterns in five cell types. They found that patterns of co-localization among proteins, in which several proteins are found clustered closely on the DNA to govern gene expression, vary according to cell type and the conditions under which the cells are grown. They also found that many of these clusters can be explained through interactions among proteins, and that not every protein bound to DNA directly.

“We’d like to understand how these interactions work together to make different cell types and how they gain their unique identities in development,” Snyder said. “Furthermore, diseased cells will have a very different type of wiring diagram. We hope to understand how these cells go astray.”

Other Stanford co-authors include life science research assistant Jie Zhai and life science research associate Trupti Kawli, PhD.

The research was supported by the National Human Genome Research Institute (grants U54HG004558 and U54HG006996).

Information about Stanford’s Department of Genetics, which also supported the work, is available at http://genetics.stanford.edu.

PRINT MEDIA CONTACT
Krista Conger | Tel (650) 725-5371
kristac@stanford.edu
BROADCAST MEDIA CONTACT
M.A. Malone | Tel (650) 723-6912
mamalone@stanford.edu

Stanford Medicine integrates research, medical education and patient care at its three institutions: Stanford University School of Medicine, Stanford Hospital & Clinics and Lucile Packard Children’s Hospital. For more information, please visit the Office of Communication & Public Affairs site at http://mednews.stanford.edu/.

Source: http://med.stanford.edu/ism/2013/october/snyder.html

 

Dynamic trans-Acting Factor Colocalization in Human Cells

Cell, Volume 155, Issue 3, 713-724, 24 October 2013
Copyright © 2013 Elsevier Inc. All rights reserved.
10.1016/j.cell.2013.09.043

    Highlights

    • Colocalization patterns of 128 TFs in human cells
    • An application of SOMs to study high-dimensional TF colocalization patterns
    • Colocalization patterns are dynamic through stimulation and across cell types
    • Many TF colocalizations can be explained by protein-protein interaction

    Summary

    Different trans-acting factors (TFs) collaborate and act in concert at distinct loci to perform accurate regulation of their target genes. To date, the cobinding of TF pairs has been investigated in a limited context both in terms of the number of factors within a cell type and across cell types and the extent of combinatorial colocalizations. Here, we use an approach to analyze TF colocalization within a cell type and across multiple cell lines at an unprecedented level. We extend this approach with large-scale mass spectrometry analysis of immunoprecipitations of 50 TFs. Our combined approach reveals large numbers of interesting TF-TF associations. We observe extensive change in TF colocalizations both within a cell type exposed to different conditions and across multiple cell types. We show distinct functional annotations and properties of different TF cobinding patterns and provide insights into the complex regulatory landscape of the cell.

    http://www.cell.com/abstract/S0092-8674%2813%2901217-8#!
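The highlights mention self-organizing maps (SOMs) for exploring high-dimensional colocalization patterns. A minimal one-dimensional SOM can be sketched as follows; the unit count, learning schedule, and binary colocalization vectors are illustrative assumptions, not the parameters used in the paper.

```python
import math
import random

def train_som(data, n_units=4, epochs=60, lr=0.5, seed=0):
    """Train a tiny 1-D self-organizing map: each input vector pulls its
    best-matching unit (and, early on, that unit's neighbors) toward itself."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        frac = epoch / epochs
        rate = lr * (1 - frac)                         # decaying learning rate
        radius = max(0.5, (n_units / 2) * (1 - frac))  # shrinking neighborhood
        for x in data:
            bmu = best_unit(weights, x)
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    weights[u][d] += rate * h * (x[d] - weights[u][d])
    return weights

def best_unit(weights, x):
    """Index of the map unit closest to x (squared Euclidean distance)."""
    return min(range(len(weights)),
               key=lambda u: sum((w - v) ** 2 for w, v in zip(weights[u], x)))
```

Training on two distinct colocalization profiles drives them to different map units, which is how similar binding patterns end up clustered together on the map.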

    Personalized medicine aims to assess medical risks, monitor, diagnose and treat patients according to their specific genetic composition and molecular phenotype. The advent of genome sequencing and the analysis of physiological states has proven to be powerful (Cancer Genome Atlas Research Network, 2011). However, its implementation for the analysis of otherwise healthy individuals for estimation of disease risk and medical interpretation is less clear. Much of the genome is difficult to interpret and many complex diseases, such as diabetes, neurological disorders and cancer, likely involve a large number of different genes and biological pathways (Ashley et al., 2010, Grayson et al., 2011, Li et al., 2011), as well as environmental contributors that can be difficult to assess. As such, the combination of genomic information along with a detailed molecular analysis of samples will be important for predicting, diagnosing and treating diseases as well as for understanding the onset, progression, and prevalence of disease states (Snyder et al., 2009).

    Presently, healthy and diseased states are typically followed using a limited number of assays that analyze a small number of markers of distinct types. With the advancement of many new technologies, it is now possible to analyze upward of 10^5 molecular constituents. For example, DNA microarrays have allowed the subcategorization of lymphomas and gliomas (Mischel et al., 2003), and RNA sequencing (RNA-Seq) has identified breast cancer transcript isoforms (Li et al., 2011, van der Werf et al., 2007, Wu et al., 2010, Lapuk et al., 2010). Although transcriptome and RNA splicing profiling are powerful and convenient, they provide a partial portrait of an organism’s physiological state. Transcriptomic data, when combined with genomic, proteomic, and metabolomic data, are expected to provide a much deeper understanding of normal and diseased states (Snyder et al., 2010). To date, comprehensive integrative omics profiles have been limited and have not been applied to the analysis of generally healthy individuals.

    To obtain a better understanding of: (1) how to generate an integrative personal omics profile (iPOP) and examine as many biological components as possible, (2) how these components change during healthy and diseased states, and (3) how this information can be combined with genomic information to estimate disease risk and gain new insights into diseased states, we performed extensive omics profiling of blood components from a generally healthy individual over a 14 month period (24 months total when including time points with other molecular analyses). We determined the whole-genome sequence (WGS) of the subject, and together with transcriptomic, proteomic, metabolomic, and autoantibody profiles, used this information to generate an iPOP. We analyzed the iPOP of the individual over the course of healthy states and two viral infections (Figure 1A). Our results indicate that disease risk can be estimated by a whole-genome sequence and by regularly monitoring health states with iPOP disease onset may also be observed. The wealth of information provided by detailed longitudinal iPOP revealed unexpected molecular complexity, which exhibited dynamic changes during healthy and diseased states, and provided insight into multiple biological processes. Detailed omics profiling coupled with genome sequencing can provide molecular and physiological information of medical significance. This approach can be generalized for personalized health monitoring and medicine.

     

    Read Full Post »
