
Archive for the ‘Statistical Methods for Research Evaluation’ Category

Coronary Reperfusion Therapies: CABG vs PCI – Mayo Clinic preprocedure Risk Score (MCRS) for Prediction of in-Hospital Mortality after CABG or PCI

Author and Curator: Larry H. Bernstein, MD, FCAP 

and

Curator: Aviva Lev-Ari, PhD, RN

 

Published on Mar 27, 2012

Mayo Clinic cardiologist Charanjit Rihal, M.D. discusses a recent study conducted by Mayo Clinic that focuses on predicting operator outcomes in coronary angioplasty procedures.

“We’ve been interested in prediction of outcomes after coronary angioplasty and stent procedures for some time,” says Dr. Rihal. “Almost ten years ago, we published a paper called ‘The Mayo Clinic Risk Score for Prediction of Adverse Events following Coronary Angioplasty and Stent Procedures’. We’ve since refined it into the ‘New Mayo Clinic Risk Score’, which includes seven key variables that predict bad outcomes following PCI procedures.”

The study, presented at the 2012 ACC Annual Scientific Session & Expo, describes a novel application of the Mayo Clinic Risk Score to predict operator-specific outcomes in coronary angioplasty procedures.

“We looked at the outcomes of over 8000 procedures performed by 21 Mayo Clinic interventional cardiologists as predicted by the Mayo Clinic Risk Score,” says Dr. Rihal. “On an individual basis, we were able to calculate the expected mortality and adverse event rate and compare that to the actual observed mortality and adverse event rate. We were able to show that in our clinical practice of PCI, this risk score was very useful as a performance measure.”

In a pleasant surprise, the study also identified an outlier whose adverse event rates were much better than expected. “We don’t know exactly why this operator has such good results,” remarks Dr. Rihal, “but that will be the next phase of this analysis. We can compare procedural, pre-procedural, and post-procedural practices of this operator and see if there are things that are translatable to the rest of us.”
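The observed-versus-expected comparison described above, and the outlier detection it enables, can be illustrated with a short calculation. A minimal sketch, assuming hypothetical per-operator data (the operator labels, case counts, and expected rates below are illustrative, not the study's):

```python
# Sketch: compare each operator's observed event rate with the rate
# expected from the Mayo Clinic Risk Score (MCRS).
# All numbers are hypothetical; the study's per-operator data are not shown here.
from scipy.stats import binomtest

operators = [
    # (operator id, procedures, observed events, MCRS-expected event rate)
    ("A", 400, 8, 0.020),
    ("B", 350, 9, 0.021),
    ("C", 500, 3, 0.022),   # a possible favorable outlier, as in the study
]

for op, n, events, expected_rate in operators:
    oe_ratio = (events / n) / expected_rate
    # Exact two-sided test of the observed count against the expected rate
    p = binomtest(events, n, expected_rate).pvalue
    print(f"Operator {op}: O/E = {oe_ratio:.2f}, p = {p:.3f}")
```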


Singh M, Gersh BJ, Li S, Rumsfeld JS, Spertus JA, O’Brien SM, Suri RM, Peterson ED.
Circulation. 2008 Jan 22;117(3):356-62. Epub 2008 Jan 2. PMID: 18172033. http://dx.doi.org/10.1161/CIRCULATIONAHA.107.711523
BACKGROUND:  Current risk models predict in-hospital mortality after either coronary artery bypass graft surgery or percutaneous coronary interventions. The overlap of models suggests that the same variables can define the risks of alternative coronary reperfusion therapies. We sought  a preprocedure risk model that can predict in-hospital mortality after either percutaneous coronary intervention or coronary artery bypass graft surgery.
METHODS AND RESULTS:  We tested the ability of the recently validated, integer-based Mayo Clinic Risk Score (MCRS) for percutaneous coronary intervention, which is based solely on preprocedure variables:
  • age,
  • creatinine,
  • ejection fraction,
  • myocardial infarction ≤24 hours,
  • shock,
  • congestive heart failure,
  • peripheral vascular disease
to predict in-hospital mortality among 370,793 patients in the Society of Thoracic Surgeons (STS) database undergoing isolated coronary artery bypass graft surgery from 2004 to 2006. The median age of the STS database patients was 66 years (quartiles 1 to 3, 57 to 74 years), with 37.2% of patients ≥70 years old. The high prevalence of comorbid conditions included
  • diabetes mellitus (37.1%)
  • hypertension (80.5%)
  • peripheral vascular disease (15.3%)
  • renal disease (creatinine ≥1.4 mg/dL; 11.8%).
A strong association existed between the MCRS and the observed mortality in the STS database. The in-hospital mortality ranged between 0.3% (95% confidence interval 0.3% to 0.4%) with a score of 0 on the MCRS and 33.8% (95% confidence interval 27.3% to 40.3%) with an MCRS score of 20 to 24. The discriminatory ability of the MCRS was moderate, as measured by the area under the receiver operating characteristic curve (C-statistic = 0.715 to 0.784 among various subgroups); performance was inferior to the STS model for most categories tested.
CONCLUSIONS:  This model is based on the 7 preprocedure risk variables listed above. However, it may be useful for providing patients with individualized, evidence-based estimates of procedural risk as part of the informed consent process before percutaneous or surgical revascularization.
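The C-statistic reported above is the area under the receiver operating characteristic curve: the probability that a randomly chosen patient who died carried a higher risk score than a randomly chosen survivor. A minimal sketch of the computation, using synthetic scores and outcomes rather than study data:

```python
# Sketch: C-statistic (area under the ROC curve) for an integer risk score
# such as the MCRS. Scores and outcomes below are synthetic illustrations.
from sklearn.metrics import roc_auc_score

mcrs_score = [0, 2, 3, 5, 7, 8, 10, 13, 16, 20]   # integer MCRS values
died       = [0, 0, 0, 0, 0, 0,  1,  0,  1,  1]   # in-hospital mortality
print(f"C-statistic = {roc_auc_score(died, mcrs_score):.3f}")
```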
It appears to this reviewer that the model might provide a better AUC if it were reconstructed as follows:
  1. age
  2. estimated creatinine clearance (which has been improved substantially by the Mayo Clinic)
  3. EF
  4. AMI < 24 hrs
  5. Decompensated CHF or shock
  6. PVD, or carotid artery disease, or PAD
  7. MAP
Mean arterial pressure (MAP) can be estimated from an ordinary blood-pressure reading (in mm Hg) as MAP = [(2 × diastolic) + systolic] / 3. (Calculator: http://www.globalrph.com/map.htm)
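A one-function implementation of this estimate (the function name is ours):

```python
def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """Estimate MAP in mm Hg as (2 x diastolic + systolic) / 3."""
    return (2 * diastolic + systolic) / 3

print(round(mean_arterial_pressure(120, 80), 1))  # 93.3 mm Hg
```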
There is another question that this reviewer has about the approach to prediction of post-procedural survival from pre-procedural information.
  • Age falls into interval classes that would suffice for use as classification variables.
  • Creatinine is a measurement that is a continuous variable, but I call attention to the fact that eGFR would be preferred, as physicians tend to interpret the creatinine in relation to age, gender, and body size or BMI (see the sketch after this list).
  • The laboratory’s contribution, as a source of powerful information, is underutilized.
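As an illustration of the eGFR point in the bullet above, here is the familiar Cockcroft-Gault creatinine-clearance estimate. Note this is a stand-in for discussion, not the improved Mayo Clinic (quadratic) equation mentioned earlier:

```python
def cockcroft_gault(age_yr: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) by Cockcroft-Gault."""
    crcl = ((140 - age_yr) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

# A 70-year-old, 80 kg man with a serum creatinine of 1.4 mg/dL:
print(f"{cockcroft_gault(70, 80, 1.4, female=False):.0f} mL/min")  # ~56
```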
On the one hand, CHF is important, but how is the distinction made between
  • stable CHF and
  • decompensated CHF, or degrees in between?
This is where the amino-terminal pro-B-type natriuretic peptide (NT-proBNP), or the BNP, has been used in isolation, but not in a multivariate model such as the one described.  There is a difference between them, but whether the difference makes a difference is unproved.
BNP, derived from the propeptide, is made by the myocardium as a hormonal mediator of natriuresis (sodium excretion).  BNP is degraded by the vascular endothelium, so its half-time of disappearance would not reflect renal dysfunction; that is not the case for NT-proBNP, which is cleared by the kidney.  This observation has no bearing on the medical use of BNP itself.
Related articles

Other related articles were published on this Open Access Online Scientific Journal, including:

Survivals Comparison of Coronary Artery Bypass Graft (CABG) and Percutaneous Coronary Intervention (PCI) / Coronary Angioplasty

Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

http://pharmaceuticalintelligence.com/2013/06/23/comparison-of-cardiothoracic-bypass-and-percutaneous-interventional-catheterization-survivals/

Competition in the Ecosystem of Medical Devices in Cardiac and Vascular Repair: Heart Valves, Stents, Catheterization Tools and Kits for Open Heart and Minimally Invasive Surgery (MIS) (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2012/06/22/competition-in-the-ecosystem-of-medical-devices-in-cardiac-and-vascular-repair-heart-valves-stents-catheterization-tools-and-kits-for-open-heart-and-minimally-invasive-surgery-mis/

Bioabsorbable Drug Coating Scaffolds, Stents and Dual Antiplatelet Therapy (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/29/bioabsorbable-drug-coating-scaffolds-stents-and-dual-antiplatelet-therapy/

Vascular Repair: Stents and Biologically Active Implants (larryhbern)
http://pharmaceuticalintelligence.com/2013/05/04/stents-biologically-active-implants-and-vascular-repair/

Drug Eluting Stents: On MIT’s Edelman Lab’s Contributions to Vascular Biology and its Pioneering Research on DES (larryhbern)

http://pharmaceuticalintelligence.com/2013/04/25/contributions-to-vascular-biology/
Coronary Artery Disease – Medical Devices Solutions: From First-In-Man Stent Implantation, via Medical Ethical Dilemmas to Drug Eluting Stents (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2012/08/13/coronary-artery-disease-medical-devices-solutions-from-first-in-man-stent-implantation-via-medical-ethical-dilemmas-to-drug-eluting-stents/

Trans-apical Transcatheter Aortic Valve Replacement in a Patient with Severe and Complex Left Main Coronary Artery Disease (LMCAD) (larryhbern)
http://pharmaceuticalintelligence.com/2013/06/17/management-of-difficult-trans-apical-transcatheter-aortic-valve-replacement-in-a-patient-with-severe-and-complex-arterial-disease/
Transcatheter Aortic Valve Replacement (TAVR): Postdilatation to Reduce Paravalvular Regurgitation During TAVR with a Balloon-expandable Valve (larryhbern)
http://pharmaceuticalintelligence.com/2013/06/17/postdilatation-to-reduce-paravalvular-regurgitation-during-transcatheter-aortic-valve-replacement/

Svelte Medical Systems’ Drug-Eluting Stent: 0% Clinically-Driven Events Through 12-Months in First-In-Man Study (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/28/svelte-medical-systems-drug-eluting-stent-0-clinically-driven-events-through-12-months-in-first-in-man-study/

Acute and Chronic Myocardial Infarction: Quantification of Myocardial Perfusion Viability – FDG-PET/MRI vs. MRI or PET alone (Justin Pearlman, Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/22/acute-and-chronic-myocardial-infarction-quantification-of-myocardial-viability-fdg-petmri-vs-mri-or-pet-alone/

Biomaterials Technology: Models of Tissue Engineering for Reperfusion and Implantable Devices for Revascularization (larryhbern)
http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/
Revascularization: PCI, Prior History of PCI vs CABG (A Lev-Ari)
http://pharmaceuticalintelligence.com/2013/04/25/revascularization-pci-prior-history-of-pci-vs-cabg/
Accurate Identification and Treatment of Emergent Cardiac Events (larryhbern)
http://pharmaceuticalintelligence.com/2013/03/15/accurate-identification-and-treatment-of-emergent-cardiac-events/
FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology (A Lev-Ari)
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/
The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX (A Lev-Ari)
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/
CABG or PCI: Patients with Diabetes – CABG Rein Supreme (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/11/05/cabg-or-pci-patients-with-diabetes-cabg-rein-supreme/
New Drug-Eluting Stent Works Well in STEMI (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/08/22/new-drug-eluting-stent-works-well-in-stemi/

Three coronary artery bypass grafts, a LIMA to LAD and two saphenous vein grafts – one to the right coronary artery (RCA) system and one to the obtuse marginal (OM) system. (Photo credit: Wikipedia)

Forrester classification for grading congestive heart failure (acute heart failure). (Photo credit: Wikipedia)


Effect of Hospital Characteristics on Outcomes of Endovascular Repair of Descending Aortic Aneurysms in US Medicare Population

Writer and Curator: Larry H. Bernstein, MD, FCAP 

and

Curator: Aviva Lev-Ari, PhD, RN 

Impact of hospital volume and type on outcomes of open and endovascular repair of descending thoracic aneurysms in the United States Medicare population.

Patel VI, Mukhopadhyay S, Ergul E, Aranson N, …, Cambria RP.
J Vasc Surg 2013.    http://dx.doi.org/10.1016/j.jvs.2013.01.035

 

Open surgery for thoracic aortic aneurysm has had success, but it carries complication risks.  In 2005, a much less invasive procedure, thoracic endovascular aortic repair (TEVAR), became commercially available. It eliminated the need for open surgery in many patients, but not all were suitable candidates.  The advances in endovascular technology and the procedural breakthroughs since its introduction have contributed to a dramatic transformation of the specialty of thoracic aortic surgery. The decision of which patients require open surgery is necessarily determined by the limitations of the procedure and the condition of the patient.
Thoracic endovascular aortic repair (TEVAR) is a minimally invasive alternative to conventional open surgical reconstruction for the treatment of thoracic aortic aneurysm. TEVAR procedures can be challenging and, at times, extraordinarily difficult.  Meticulous assessment of anatomy and preoperative procedure planning are paramount to produce optimal outcomes. The rapidly increasing use of TEVAR reflects favorable outcomes of TEVAR compared with open repair for descending thoracic aortic aneurysms (DTAs).  But the success of these procedures depends on requisite skills, and on following guidelines intended for use in quality-improvement programs that assess the standard of care expected from all physicians who perform TEVAR procedures.
Currently, there is a diverse array of endografts that are commercially available to treat the thoracic aorta. Multiple studies have demonstrated excellent outcomes of thoracic endovascular aortic repair for the treatment of thoracic aortic aneurysms, with less reported perioperative morbidity and mortality in comparison with conventional open repair. Additionally, similar outcomes have been demonstrated for the treatment of type B dissections. However, the technology remains relatively novel, and larger studies with longer term outcomes are necessary to more fully evaluate the role of endovascular therapy for the treatment of thoracic aortic disease.
The MGH/Partners vascular surgeons evaluated the effect of case volume and hospital teaching status on clinical outcomes of intact DTA repair, to gain insight into whether DTA outcomes varied with hospital size, patient mix, number of procedures, staff characteristics, and teaching status.  This study was needed to establish the type of procedure most suited to the type of patient, and to obtain the most accurate analysis of cost requirements based on resource allocation for reimbursement purposes.
The Medicare Provider Analysis and Review (MEDPAR) data set (2004 to 2007) was queried to identify open repair or TEVAR for DTA. Hospitals were stratified by DTA volume into high volume (HV; ≥8 cases/y) or low volume (LV; <8 cases/y) and teaching or nonteaching. The effect of hospital variables on the primary study end point of 30-day mortality and secondary end points of 30-day complications and long-term survival after open repair and TEVAR DTA repair were studied using univariate testing, multivariable regression modeling, Kaplan-Meier survival analysis, and Cox proportional hazards regression modeling.
They identified 763 hospitals performing 3554 open repairs and 3517 TEVARs. Overall DTA repair increased (P < .01) from 1375 procedures in 2004 to 1987 in 2007. The proportion of hospitals performing open repair significantly decreased from 95% in 2004 to 57% in 2007 (P < .01), whereas the proportion performing TEVAR increased from 24% to 76% (P < .01). Overall repair type shifted from open (74% in 2004, the year before initial commercial availability of TEVAR) to TEVAR (39% open in 2007; P < .01). At LV hospitals, the fraction of open repairs decreased from 56% in 2004 to 44% in 2007 (P < .01), whereas TEVAR increased from 24% in 2004 to 51% in 2007 (P < .01).
Overall mortality during the study interval for open repair was 15% at LV hospitals vs 11% at HV hospitals (P < .01), whereas TEVAR mortality was similar, at 3.9% in LV vs 5.5% in HV hospitals (P = .43).
LV was independently associated with increased mortality after open repair (odds ratio, 1.4; 95% confidence interval, 1.1-1.8; P < .01) but not after TEVAR. There was no independent effect of hospital teaching status on mortality or complications after open repair or TEVAR repair.
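The adjusted odds ratio above comes from a multivariable model; a crude (unadjusted) version of the same quantity can be computed directly from a 2x2 table. A minimal sketch, with counts hypothetically apportioned from the reported mortality rates (15% vs 11%) across the total open repairs:

```python
# Sketch: crude odds ratio and 95% CI for open-repair mortality, LV vs HV.
# Counts are hypothetical; the paper's OR of 1.4 (1.1-1.8) is model-adjusted.
import math

deaths_lv, n_lv = 240, 1600   # ~15% mortality at low-volume hospitals
deaths_hv, n_hv = 215, 1954   # ~11% mortality at high-volume hospitals

a, b = deaths_lv, n_lv - deaths_lv
c, d = deaths_hv, n_hv - deaths_hv
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~1.43 (1.17-1.74)
```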
The total number of DTA repairs significantly increased after the introduction of TEVAR for DTA. Operative mortality for TEVAR is independent of hospital volume and type, whereas mortality after open surgery is lower at HV hospitals. While TEVAR mortality is significantly less than that of open surgery, open-surgery mortality is higher still at LV hospitals.  The data suggest that TEVAR can be safely performed across a spectrum of hospitals, whereas open surgery should be performed only at HV hospitals.
  1. Fanelli F, Dake MD. Standard of Practice for the Endovascular Treatment of Thoracic Aortic Aneurysms and Type B Dissections. Cardiovasc Intervent Radiol. 2009 Sep;32(5):849–860.  http://dx.doi.org/10.1007/s00270-009-9668-6  PMCID: PMC2744786
  2. Baril DT, Cho JS, Chaer RA, Makaroun MS. Thoracic aortic aneurysms and dissections: endovascular treatment. Mt Sinai J Med. 2010 May-Jun;77(3):256–269.  http://dx.doi.org/10.1002/msj.20178

Related Articles in Pharmaceutical Intelligence

Abdominal Aortic Aneurysms (AAA): Albert Einstein’s Operation by Dr. Nissen   (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/06/11/abdominal-aortic-aneurysms-aaa-albert-einsteins-operation-by-dr-nissen/

The Heart Surgery Specialty: heart transplant, lung transplant, heart-lung transplantation, aortic valve surgery, bypass surgery, minimally invasive cardiac surgery, heart valve surgery, removal of cardiac tumors, reoperation valve surgery  (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/?p=14092&preview=true

No Early Symptoms – An Aortic Aneurysm Before It Ruptures – Is There A Way To Know If I Have it?
(Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/06/10/no-early-symptoms-an-aortic-aneurysm-before-it-ruptures-is-there-a-way-to-know-if-i-have-it/

First-of-Its-Kind FDA Approval for ‘AUI’ Device with Endurant II AAA Stent Graft: Medtronic Expands in Endovascular Aortic Repair in the United States   (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/30/first-of-its-kind-fda-approval-for-aui-device-with-endurant-ii-aaa-stent-graft-medtronic-expands-in-endovascular-aortic-repair-in-the-united-states/

Abdominal Aortic Aneurysm: Endovascular repair and open repair resulted in similar long-term survival
(Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2012/12/03/abdominal-aortic-aneurysm-endovascular-repair-and-open-repair-resulted-in-similar-long-term-survival/

EUROPCR 2013, Paris 5/21-5/24, 2013 Conference for Cardiolovascular Intervention and Interventional Medicine  (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/29/europcr-2013-paris-521-524-2013-conference-for-cardiolovascular-intervention-and-interventional-medicine/

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013  (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/03/07/genomics-genetics-of-cardiovascular-disease-diagnoses-a-literature-survey-of-ahas-circulation-cardiovascular-genetics-32010-32013/

Competition in the Ecosystem of Medical Devices in Cardiac and Vascular Repair: Heart Valves, Stents, Catheterization Tools and Kits for Open Heart and Minimally Invasive Surgery (MIS)  (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2012/06/22/competition-in-the-ecosystem-of-medical-devices-in-cardiac-and-vascular-repair-heart-valves-stents-catheterization-tools-and-kits-for-open-heart-and-minimally-invasive-surgery-mis/

Bioabsorbable Drug Coating Scaffolds, Stents and Dual Antiplatelet Therapy (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/29/bioabsorbable-drug-coating-scaffolds-stents-and-dual-antiplatelet-therapy/

Vascular Repair: Stents and Biologically Active Implants (larryhbern)
http://pharmaceuticalintelligence.com/2013/05/04/stents-biologically-active-implants-and-vascular-repair/

Drug Eluting Stents: On MIT’s Edelman Lab’s Contributions to Vascular Biology and its Pioneering Research on DES  (larryhbern)
http://pharmaceuticalintelligence.com/2013/04/25/contributions-to-vascular-biology/

Coronary Artery Disease – Medical Devices Solutions: From First-In-Man Stent Implantation, via Medical Ethical Dilemmas to Drug Eluting Stents  (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2012/08/13/coronary-artery-disease-medical-devices-solutions-from-first-in-man-stent-implantation-via-medical-ethical-dilemmas-to-drug-eluting-stents/

Survivals Comparison of Coronary Artery Bypass Graft (CABG) and Percutaneous Coronary Intervention (PCI) / Coronary Angioplasty (larryhbern)
http://pharmaceuticalintelligence.com/2013/06/23/comparison-of-cardiothoracic-bypass-and-percutaneous-interventional-catheterization-survivals/

Trans-apical Transcatheter Aortic Valve Replacement in a Patient with Severe and Complex Left Main Coronary Artery Disease (LMCAD) (larryhbern)
http://pharmaceuticalintelligence.com/2013/06/17/management-of-difficult-trans-apical-transcatheter-aortic-valve-replacement-in-a-patient-with-severe-and-complex-arterial-disease/

Transcatheter Aortic Valve Replacement (TAVR): Postdilatation to Reduce Paravalvular Regurgitation During TAVR with a Balloon-expandable Valve  (larryhbern)
http://pharmaceuticalintelligence.com/2013/06/17/postdilatation-to-reduce-paravalvular-regurgitation-during-transcatheter-aortic-valve-replacement/

Svelte Medical Systems’ Drug-Eluting Stent: 0% Clinically-Driven Events Through 12-Months in First-In-Man Study  (Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/28/svelte-medical-systems-drug-eluting-stent-0-clinically-driven-events-through-12-months-in-first-in-man-study/

Acute and Chronic Myocardial Infarction: Quantification of Myocardial Perfusion Viability – FDG-PET/MRI vs. MRI or PET alone  (Justin Pearlman, Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2013/05/22/acute-and-chronic-myocardial-infarction-quantification-of-myocardial-viability-fdg-petmri-vs-mri-or-pet-alone/

Biomaterials Technology: Models of Tissue Engineering for Reperfusion and Implantable Devices for Revascularization (larryhbern)
http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

Revascularization: PCI, Prior History of PCI vs CABG  (A Lev-Ari)
http://pharmaceuticalintelligence.com/2013/04/25/revascularization-pci-prior-history-of-pci-vs-cabg/

Accurate Identification and Treatment of Emergent Cardiac Events (larryhbern)
http://pharmaceuticalintelligence.com/2013/03/15/accurate-identification-and-treatment-of-emergent-cardiac-events/

FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology (A Lev-Ari)
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX  (A Lev-Ari)
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

Nitric Oxide and its impact on Cardiothoracic Surgery  (tildabarliya)
http://pharmaceuticalintelligence.com/2012/12/15/nitric-oxide-and-its-impact-on-cardiothoracic-surgery/

CABG or PCI: Patients with Diabetes – CABG Rein Supreme (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/11/05/cabg-or-pci-patients-with-diabetes-cabg-rein-supreme/

To Stent or Not? A Critical Decision (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/10/23/to-stent-or-not-a-critical-decision/

Endothelin Receptors in Cardiovascular Diseases: The Role of eNOS Stimulation (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/10/04/endothelin-receptors-in-cardiovascular-diseases-the-role-of-enos-stimulation/

Absorb™ Bioresorbable Vascular Scaffold: An International Launch by Abbott Laboratories
(Aviva Lev-Ari)
http://pharmaceuticalintelligence.com/2012/09/29/absorb-bioresorbable-vascular-scaffold-an-international-launch-by-abbott-laboratories/

Carotid Stenting: Vascular surgeons have pointed to more minor strokes in the stenting group and cardiologists to more myocardial infarctions in the CEA cohort. (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/09/21/carotid-stenting-vascular-surgeons-have-pointed-to-more-minor-strokes-in-the-stenting-group-and-cardiologists-to-more-myocardial-infarctions-in-the-cea-cohort/

New Drug-Eluting Stent Works Well in STEMI (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/08/22/new-drug-eluting-stent-works-well-in-stemi/

Global Supplier Strategy for Market Penetration & Partnership Options (Niche Suppliers vs. National Leaders) in the Massachusetts Cardiology & Vascular Surgery Tools and Devices Market for Cardiac Operating Rooms and Angioplasty Suites (A Lev-Ari)
http://pharmaceuticalintelligence.com/2012/06/22/global-supplier-strategy-for-market-penetration-partnership-options-niche-suppliers-vs-national-leaders-in-the-massachusetts-cardiology-vascular-surgery-tools-and-devices-market-for-car/

Histopathological image of dissecting aneurysm of thoracic aorta in a patient without evidence of Marfan syndrome. The damaged aorta was surgically removed and replaced by artificial vessel. Victoria blue & HE stain. (Photo credit: Wikipedia)

Diagram of aortic aneurysm: Figure A shows a normal aorta. Figure B shows a thoracic aortic aneurysm (which is located behind the heart). Figure C shows an abdominal aortic aneurysm located below the arteries that supply blood to the kidneys. (Photo credit: Wikipedia)

Thoracic aorta (Photo credit: Wikipedia)

Open Heart Surgery (Photo credit: Wikipedia)


FREE DOWNLOAD – Business Intelligence Application for Pharmaceutical and Biotech Professionals

Reporter: Aviva Lev-Ari, PhD, RN

 

 

Business Intelligence Application for Pharmaceutical and Biotech Professionals

Submitted by

Dr Stephen Breslin

Chief Executive | Glasgow Science Centre

50 Pacific Quay | Glasgow | G51 1EA

Download the FREE Software Application 

The Sophie Pharma & Biotech App is a powerful personal business and technology intelligence tool to increase your productivity.  Sophie will work on your iPad, iPhone, Android tablet or smartphone.

You can download it directly from the Apple and Google stores. For more information visit http://www.sophieknowledge.com


Early Detection of Prostate Cancer: American Urological Association (AUA) Guideline

Author-Writer: Dror Nir, PhD

When reviewing the DETECTION OF PROSTATE CANCER section on the AUA website, the first thing that catches one’s attention is the image below, clearly showing two “guys” exploring with interest what could be a CT or MRI image…

 fig 1

But if you bother to read the review underneath this image, EARLY DETECTION OF PROSTATE CANCER: AUA GUIDELINE, produced by an independent group that was commissioned by the AUA to conduct a systematic review and meta-analysis of the published literature on prostate cancer detection and screening (panel members: H. Ballentine Carter, Peter C. Albertsen, Michael J. Barry, Ruth Etzioni, Stephen J. Freedland, Kirsten Lynn Greene, Lars Holmberg, Philip Kantoff, Badrinath R. Konety, Mohammad Hassan Murad, David F. Penson and Anthony L. Zietman), you are bound to be left with a strong feeling that something is wrong!

The above-mentioned literature review was done using a rigorous approach.

“The AUA commissioned an independent group to conduct a systematic review and meta-analysis of the published literature on prostate cancer detection and screening. The protocol of the systematic review was developed a priori by the expert panel. The search strategy was developed and executed by reference librarians and methodologists and spanned multiple databases including Ovid Medline In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid EMBASE, Ovid Cochrane Database of Systematic Reviews, Ovid Cochrane Central Register of Controlled Trials and Scopus. Controlled vocabulary supplemented with keywords was used to search for the relevant concepts of prostate cancer, screening and detection. The search focused on DRE, serum biomarkers (PSA, PSA isoforms, PSA kinetics, free PSA, complexed PSA, proPSA, prostate health index, PSA velocity, PSA doubling time), urine biomarkers (PCA3, TMPRSS2:ERG fusion), imaging (TRUS, MRI, MRS, MR-TRUS fusion), genetics (SNPs), shared decision-making and prostate biopsy. The expert panel manually identified additional references that met the same search criteria.”

While reading through the document, I was looking for the findings related to the role of imaging in prostate cancer screening; see the highlighted terms above. The only thing I found: “With the exception of prostate-specific antigen (PSA)-based prostate cancer screening, there was minimal evidence to assess the outcomes of interest for other tests.”

This must mean that, notwithstanding hundreds of man-years and tens of millions of dollars invested in studies aiming to assess the contribution of imaging to prostate cancer management, no convincing evidence to include imaging in the screening process was found by a group of top experts in a thorough and rigorously managed literature survey! And it actually led the AUA to declare that there has been “nothing new in the last 20 years”…

My interpretation of this: it says it all about the quality of the clinical studies that were conducted during these years, aiming to develop an improved, imaging-based prostate cancer workflow. I hope that whoever reads this post will agree that this is a point worth considering!

For those who do not want to bother reading the whole AUA guidelines document, here is a peer-reviewed summary:

Early Detection of Prostate Cancer: AUA Guideline. Carter HB, Albertsen PC, Barry MJ, Etzioni R, Freedland SJ, Greene KL, Holmberg L, Kantoff P, Konety BR, Murad MH, Penson DF, Zietman AL. Journal of Urology (May 2013).

It says:

“A systematic review was conducted and summarized evidence derived from over 300 studies that addressed the predefined outcomes of interest (prostate cancer incidence/mortality, quality of life, diagnostic accuracy and harms of testing). In addition to the quality of evidence, the panel considered values and preferences expressed in a clinical setting (patient-physician dyad) rather than having a public health perspective. Guideline statements were organized by age group in years (age<40; 40 to 54; 55 to 69; ≥70).

RESULTS: With the exception of prostate-specific antigen (PSA)-based prostate cancer screening, there was minimal evidence to assess the outcomes of interest for other tests. The quality of evidence for the benefits of screening was moderate, and evidence for harm was high for men age 55 to 69 years. For men outside this age range, evidence was lacking for benefit, but the harms of screening, including overdiagnosis and overtreatment, remained. Modeled data suggested that a screening interval of two years or more may be preferred to reduce the harms of screening.

CONCLUSIONS: The Panel recommended shared decision-making for men age 55 to 69 years considering PSA-based screening, a target age group for whom benefits may outweigh harms. Outside this age range, PSA-based screening as a routine could not be recommended based on the available evidence. The entire guideline is available at www.AUAnet.org/education/guidelines/prostate-cancer-detection.cfm.”

 

Other research papers related to the management of Prostate cancer were published on this Scientific Web site:

From AUA2013: “Histoscanning”- aided template biopsies for patients with previous negative TRUS biopsies

Imaging-biomarkers is Imaging-based tissue characterization

On the road to improve prostate biopsy

State of the art in oncologic imaging of Prostate

Imaging agent to detect Prostate cancer-now a reality

Scientists use natural agents for prostate cancer bone metastasis treatment

Today’s fundamental challenge in Prostate cancer screening

ROLE OF VIRAL INFECTION IN PROSTATE CANCER

Men With Prostate Cancer More Likely to Die from Other Causes

New Prostate Cancer Screening Guidelines Face a Tough Sell, Study Suggests

New clinical results supports Imaging-guidance for targeted prostate biopsy

Prostate Cancer: Androgen-driven “Pathomechanism” in Early-onset Forms of the Disease

Prostate Cancer and Nanotechnology

Prostate Cancer Cells: Histone Deacetylase Inhibitors Induce Epithelial-to-Mesenchymal Transition

Prostate Cancers Plunged After USPSTF Guidance, Will It Happen Again?


Reporter: Aviva Lev-Ari, PhD, RN

Dr. Lev-Ari spent 100 hours at the Coronary Care Unit at the Brigham and Women’s Hospital in Boston in 2006, and four months at the ICU at Faulkner Hospital, Boston, in 2007.

A Randomized Trial of Nighttime Physician Staffing in an Intensive Care Unit

Meeta Prasad Kerlin, M.D., M.S.C.E., Dylan S. Small, Ph.D., Elizabeth Cooney, M.P.H., Barry D. Fuchs, M.D., Lisa M. Bellini, M.D., Mark E. Mikkelsen, M.D., M.S.C.E., William D. Schweickert, M.D., Rita N. Bakhru, M.D., Nicole B. Gabler, Ph.D., M.H.A., Michael O. Harhay, M.P.H., John Hansen-Flaschen, M.D., and Scott D. Halpern, M.D., Ph.D.

May 20, 2013. DOI: 10.1056/NEJMoa1302854

BACKGROUND

Increasing numbers of intensive care units (ICUs) are adopting the practice of nighttime intensivist staffing despite the lack of experimental evidence of its effectiveness.


METHODS

We conducted a 1-year randomized trial in an academic medical ICU of the effects of nighttime staffing with in-hospital intensivists (intervention) as compared with nighttime coverage by daytime intensivists who were available for consultation by telephone (control). We randomly assigned blocks of 7 consecutive nights to the intervention or the control strategy. The primary outcome was patients’ length of stay in the ICU. Secondary outcomes were patients’ length of stay in the hospital, ICU and in-hospital mortality, discharge disposition, and rates of readmission to the ICU. For length-of-stay outcomes, we performed time-to-event analyses, with data censored at the time of a patient’s death or transfer to another ICU.


RESULTS

A total of 1598 patients were included in the analyses. The median Acute Physiology and Chronic Health Evaluation (APACHE) III score (in which scores range from 0 to 299, with higher scores indicating more severe illness) was 67 (interquartile range, 47 to 91), the median length of stay in the ICU was 52.7 hours (interquartile range, 29.0 to 113.4), and mortality in the ICU was 18%. Patients who were admitted on intervention days were exposed to nighttime intensivists on more nights than were patients admitted on control days (median, 100% of nights [interquartile range, 67 to 100] vs. median, 0% [interquartile range, 0 to 33]; P<0.001). Nonetheless, intensivist staffing on the night of admission did not have a significant effect on the length of stay in the ICU (rate ratio for the time to ICU discharge, 0.98; 95% confidence interval [CI], 0.88 to 1.09; P=0.72), ICU mortality (relative risk, 1.07; 95% CI, 0.90 to 1.28), or any other end point. Analyses restricted to patients who were admitted at night showed similar results, as did sensitivity analyses that used different definitions of exposure and outcome. 

CONCLUSIONS

In an academic medical ICU in the United States, nighttime in-hospital intensivist staffing did not improve patient outcomes. (Funded by University of Pennsylvania Health System and others; ClinicalTrials.gov number, NCT01434823.)

RESULTS

The study period included 352 nights, 175 of which (50%) were randomly assigned to the intervention; nighttime intensivists staffed 166 (95%) of the intervention nights. A total of 1598 patients were included in the analyses (Figure 1), of whom 970 (61%) were admitted at night (Table 1, and Table S1 in the Supplementary Appendix). The median APACHE III score (with scores ranging from 0 to 299 and higher scores indicating more severe illness) was 67 (interquartile range, 47 to 91), and the median length of stay in the ICU was 52.7 hours (interquartile range, 29.0 to 113.4). A total of 381 patients (24%) died in the hospital, including 293 (18%) who died in the ICU.

Nighttime intensivists were generally younger than the daytime intensivists (Table S2 in the Supplementary Appendix), although most (82%) also worked as daytime intensivists during the study period. Nighttime intensivists completed post-shift questionnaires on 116 intervention nights (66%) and reported evaluating a median of 4 (interquartile range, 3 to 5) new patients and 2 (interquartile range, 1 to 3) previously admitted patients per night (Table S3 in the Supplementary Appendix). During the control nights, the at-home intensivists received a median of 2 (interquartile range, 1 to 3) calls each night, and the critical care fellows received a median of 2 (interquartile range, 0 to 3) calls each night (Table S4 in the Supplementary Appendix).

Patients who were admitted on intervention days had greater cumulative exposure to nighttime intensivists than did patients who were admitted on control days (median, 100% of nights [interquartile range, 67 to 100] vs. median, 0% [interquartile range, 0 to 33]; P<0.001). Staffing with nighttime intensivists did not have a significant effect on the length of stay in the ICU (rate ratio for ICU discharge, 0.98; 95% confidence interval [CI], 0.88 to 1.09; P=0.72) (Figure 2A). In this study, the rate ratio refers to the instantaneous rate of discharge from the ICU in the intervention group divided by the instantaneous rate of discharge from the ICU in the control group, such that a rate ratio greater than 1 would indicate that the intervention shortened the time to ICU discharge. The findings were similar in analyses that were restricted to patients admitted at night (hazard ratio, 0.98; 95% CI, 0.84 to 1.13; P=0.74) (Figure 2B), and in several sensitivity analyses (Table 2). The results were also similar in the rank-based analysis of length of stay in the ICU, in which deaths were coded as the longest possible length of stay (P=0.51).
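The rate ratio defined above is, in effect, a hazard ratio from a time-to-discharge model. A minimal sketch of such an analysis with the lifelines package; the data frame below is synthetic and, as in the trial, death or transfer censors the discharge time:

```python
# Sketch: rate ratio for time to ICU discharge, intervention vs control.
# Synthetic data only; event = discharge, with death/transfer as censoring.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1598
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),      # nighttime intensivist block?
    "hours_in_icu": rng.exponential(53.0, n),   # time to discharge or censoring
    "discharged":   rng.random(n) < 0.82,       # False = died or transferred
})

cph = CoxPHFitter()
cph.fit(df, duration_col="hours_in_icu", event_col="discharged")
# exp(coef) for `intervention` is the rate ratio; a value above 1 would mean
# faster ICU discharge with nighttime intensivists.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```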

Nighttime intensivist staffing also had no significant effect on the length of stay in the hospital (median, 174 hours [interquartile range, 91 to 361] in the intervention group vs. 166 hours [interquartile range, 84 to 328] in the control group; rate ratio, 0.91; 95% CI, 0.82 to 1.02; P=0.12) or on ICU mortality, hospital mortality, readmission to the ICU among ICU survivors, or discharge to home (Table 3). Analyses that were restricted to patients admitted at night also showed no significant effects of nighttime intensivist staffing.

The patients’ APACHE III score did not modify the effect of the intervention on the length of stay in the ICU (P=0.28 for interaction) (Table S5 in the Supplementary Appendix). The effects of the intervention on the length of stay in the ICU were also similar during periods in which residents were more experienced and those in which residents were less experienced (Table S6 in the Supplementary Appendix). There was significant heterogeneity in the effect of the intervention on readmission to the ICU during the two periods (P=0.03 for interaction). However, the intervention was not associated with significantly fewer readmissions during the inexperienced-resident period (relative risk for readmission, 0.58; 95% CI, 0.10 to 3.39), and the heterogeneity was due, in part, to a nonsignificantly higher readmission rate with the intervention during the experienced-resident period (relative risk for readmission, 1.94; 95% CI, 0.87 to 4.30).

The Web-based surveys were completed by 41 of 91 eligible residents (45%). A majority of residents reported that the presence of nighttime intensivists improved the quality of care as perceived by the resident, provided support to residents, permitted appropriate resident autonomy, and improved the educational experience (Table S7 in the Supplementary Appendix).

DISCUSSION

In this single-center randomized trial of in-hospital nighttime intensivist staffing in an academic medical center in the United States, we found no evidence that this staffing model, as compared with nighttime telephone availability of the daytime intensivist, had a significant effect on length of stay in the ICU or hospital, ICU or in-hospital mortality, readmission to the ICU, or the probability of discharge to home. We also observed no significant benefits of the intervention in subgroups of patients for whom we had hypothesized the greatest effects: patients admitted at night, those with the most severe illness at the time of ICU admission, and those admitted during the period when the residents were least experienced. These findings are consistent with a multicenter observational study that suggested that in hospitals with high-intensity daytime intensivist staffing, the presence of nighttime intensivists did not reduce mortality.14 The current trial extends this work by removing the potential for ICU-level and patient-level confounding and by documenting the lack of significant effects on a broad range of outcomes.

There are several possible explanations for the lack of significant benefit of nighttime intensivists in this study. First, there may be limited room for improvement in ICUs that have daytime intensivist staffing, particularly if the benefits of daytime intensivist staffing derive from better ICU-wide processes of care.9,28 Second, nighttime intensivist staffing may be associated with discontinuity of care for some patients, offsetting benefits for other patients. Third, in the staffing model and setting that we studied, bedside intensivists may not add to the quality of care provided by well-trained resident physicians who have telephone access to intensivists. Finally, nighttime intensivists may truly have an effect on mortality in a small number of patients, but such patients may be so few in number that detecting these benefits would require a much larger study. Future research that investigates these and other potential explanations could inform broader debates about the best ways to use a limited intensivist workforce.29,30

We also found that most residents believed that nighttime intensivists improved their educational experience and provided desirable support for decision making. These findings are tempered by the positive framing of the questions in our survey and the modest response rate. Nonetheless, academic centers may wish to consider residents’ perspectives in choosing to adopt or retain this staffing model.

A strength of this randomized trial is that it took place in an ICU in which 61% of admissions occurred at night. If nighttime intensivists were effective, it is likely that they would be particularly effective in an ICU with such a large nighttime workload. In addition, by randomly assigning weeks rather than individual nights, we ensured that our contrast would meaningfully represent the presence or absence of year-round nighttime intensivist staffing.

An important limitation of this study is that it was performed in a single, academic medical ICU in the United States that had round-the-clock coverage by reasonably well-trained residents. Our results may be generalizable to U.S. academic ICUs with high-intensity daytime staffing, which have been among the early adopters of nighttime intensivist staffing in the United States. However, our study does not address whether nighttime intensivist staffing may provide benefits in community ICUs, ICUs without high-intensity daytime staffing, ICUs with fewer or less well-trained residents, or ICUs in other countries.

Second, our nighttime workforce may differ with respect to age, frequency of shifts, or other characteristics from workforces that are employed or considered elsewhere. It is uncertain whether different nighttime staffing models would affect patient outcomes.

Third, although we evaluated several outcomes, the presence of nighttime intensivists may affect other outcomes such as physician burn-out, staff satisfaction, patient and family experiences, objectively measured educational outcomes, and the incidence of malpractice claims. In addition, outcomes external to the ICU were not measured, such as the possible role of nighttime intensivists in helping hospitals meet quality benchmarks.15

In summary, this randomized trial in an ICU with high-intensity staffing during the day failed to identify benefits to adding intensivists at night. The compelling face validity of nighttime intensivist staffing has probably spurred the widespread adoption of this staffing model.8,10 However, nighttime intensivist staffing may also be one of several expensive medical practices that have been adopted without a supportive evidence base.31 Because the adoption of nighttime intensivist staffing by hospitals with plentiful resources may siphon intensivists away from hospitals with fewer resources,17,18 rigorous evaluation of the model is needed in settings that were not evaluated in this study.

Supported by the University of Pennsylvania Health System and by pilot grants (to Dr. Halpern) from the Roybal Center for Behavioral Economics and Health, National Institute on Aging, National Institutes of Health (P30AG034546), and the Department of Medical Ethics and Health Policy, University of Pennsylvania.

Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.

This article was published on May 20, 2013 at NEJM.org.

http://www.nejm.org/doi/full/10.1056/NEJMoa1302854?query=OF#t=articleDiscussion


Dealing with the Use of the High Sensitivity Troponin (hs cTn) Assays: Preparing the United States for High-Sensitivity Cardiac Troponin Assays

Author and Curator: Larry H Bernstein, MD, FCAP
Author and Curator: Aviva Lev-Ari, PhD, RN

In this article we shall address the two following papers:
  1. Acute Chest Pain/ER Admission: Three Emerging Alternatives to Angiography and PCI – Corus CAD, hs cTn, CCTA
  2. Preparing the United States for High-Sensitivity Cardiac Troponin Assays. Frederick K. Korley, MD, and Allan S. Jaffe, MD. J Am Coll Cardiol. 2013;61(17):1753-1758.

In a previous posting I commented on the problem of hs cTn use and the on-site ED performance of a cardiac treadmill test (done in Europe) prior to a decision on CT scanning (not done in the US).

Acute Chest Pain/ER Admission: Three Emerging Alternatives to Angiography and PCI – Corus CAD, hs cTn, CCTA

We examine the emergence of alternatives to angiography and PCI as the most common strategy for ER admissions with a listed cause of acute chest pain. The goal is to use methods that improve the triage process, so that only patients for whom PCI is a must are identified for an interventional procedure.

Alternative #1: Corus®  CAD

Alternative #2: High-Sensitivity Cardiac Troponins in Acute Cardiac Care

Alternative #3: Coronary CT Angiography for Acute Chest Pain
After presenting the three alternatives, the editorial by R.F. Redberg, Division of Cardiology, UCSF, will be analyzed.
  • Alternative #1:  First-Line Test to Help Clinicians Exclude Obstructive CAD as a Cause of the Patient’s Symptoms

Corus® CAD, a blood-based gene expression test, demonstrated high accuracy, with both a high negative predictive value (96 percent) and high sensitivity (89 percent), for assessing obstructive coronary artery disease (CAD) in a population of patients referred for stress testing with myocardial perfusion imaging (MPI).

COMPASS enrolled stable patients with symptoms suggestive of CAD who had been referred for MPI at 19 U.S. sites.  A blood sample was obtained in all 431 patients prior to MPI, and Corus CAD gene expression testing was performed with study investigators blinded to Corus CAD test results. Following MPI, patients underwent either invasive coronary angiography or coronary CT angiography, gold-standard anatomical tests for the diagnosis of coronary artery disease.
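The sensitivity and negative predictive value quoted for Corus CAD follow directly from a 2x2 confusion table. A minimal sketch with hypothetical counts chosen only to reproduce the quoted percentages (COMPASS's actual cell counts are not given here):

```python
# Sketch: sensitivity and negative predictive value (NPV) from 2x2 counts.
# tp/fn/fp/tn are hypothetical, picked to land near 89% sensitivity / 96% NPV.
tp, fn = 25, 3     # obstructive CAD: test positive / test negative
fp, tn = 130, 75   # no obstructive CAD: test positive / test negative

sensitivity = tp / (tp + fn)   # fraction of diseased patients detected
npv = tn / (tn + fn)           # fraction of negative tests truly disease-free
print(f"sensitivity = {sensitivity:.0%}, NPV = {npv:.0%}")  # 89%, 96%
```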

A Blood Based Gene Expression Test for Obstructive Coronary Artery Disease Tested in Symptomatic Non-Diabetic Patients Referred for Myocardial Perfusion Imaging: The COMPASS Study

http://pharmaceuticalintelligence.com/2012/08/14/obstructive-coronary-artery-disease-diagnosed-by-rna-levels-of-23-genes-cardiodx-heart-disease-test-wins-medicare-coverage/

  • Alternative #2: High-Sensitivity Cardiac Troponins in Acute Cardiac Care

Recommendations for the use of cardiac troponin (cTn) measurement in acute cardiac care have recently been published.[1] Subsequently, a high-sensitivity (hs) cTn T assay was introduced into routine clinical practice.[2] This assay, like others called highly sensitive, permits measurement of cTn concentrations in significant numbers of apparently illness-free individuals. These assays can measure cTn in the single-digit range of nanograms per litre (= picograms per millilitre), and some research assays even allow detection of concentrations <1 ng/L.[2–4] Thus, they provide a more precise calculation of the 99th percentile of cTn concentration in reference subjects (the recommended upper reference limit [URL]). These assays measure the URL with a coefficient of variation (CV) <10%.[2–4] The high precision of hs-cTn assays increases their ability to determine small differences in cTn over time. Many assays currently in use have a CV >10% at the 99th percentile URL, limiting that ability.[5–7] However, the less precise cTn assays do not cause clinically relevant false-positive diagnoses of acute myocardial infarction (AMI), and a CV <20% at the 99th percentile URL is still considered acceptable.[8]

We believe that hs-cTn assays, if used appropriately, will improve clinical care. We propose criteria for the clinical interpretation of test results based on the limited evidence available at this time.
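Both quantities discussed above, the 99th-percentile URL and the CV at that concentration, are simple summary statistics. A minimal sketch of how they would be derived from a healthy reference population (all values simulated):

```python
# Sketch: 99th-percentile upper reference limit (URL) for an hs-cTn assay,
# and the coefficient of variation (CV) near that concentration.
import numpy as np

rng = np.random.default_rng(1)
reference_ng_l = rng.lognormal(mean=1.5, sigma=0.6, size=5000)  # healthy cohort
url_99 = np.percentile(reference_ng_l, 99)

# CV from replicate measurements of a pool near the URL (8% noise assumed)
replicates = rng.normal(loc=url_99, scale=0.08 * url_99, size=20)
cv = replicates.std(ddof=1) / replicates.mean()
print(f"99th-percentile URL = {url_99:.1f} ng/L, CV at URL = {cv:.1%}")
# Criterion above: CV < 10% at the URL for an hs assay; CV < 20% still acceptable.
```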

References

1. Thygesen K, Mair J, Katus H, Plebani M, Venge P, Collinson P, Lindahl B, Giannitsis E, Hasin Y, Galvani M, Tubaro M, Alpert JS, Biasucci LM, Koenig W, Mueller C, Huber K, Hamm C, Jaffe AS; Study Group on Biomarkers in Cardiology  of the ESC Working Group on Acute Cardiac Care. Recommendations  for the use of cardiac troponin measurement in acute cardiac care. Eur Heart J 2010;31:2197–2204.

2. Saenger AK, Beyrau R, Braun S, Cooray R, Dolci A, Freidank H, Giannitsis E, Gustafson S, Handy B, Katus H, Melanson SE, Panteghini M, Venge P, Zorn M, Jarolim P, Bruton D, Jarausch J, Jaffe AS. Multicenter analytical evaluation of a high sensitivity troponin T assay. Clin Chim Acta 2011;412:748–754.

3. Zaninotto M, Mion MM, Novello E, Moretti M, Delprete E, Rocchi MB, Sisti D, Plebani M. Precision performance at low levels and 99th percentile concentration of the Access AccuTnI assay on two different platforms. Clin Chem Lab Med 2009; 47:367–371.

4. Todd J, Freese B, Lu A, Held D, Morey J, Livingston R, Goix P. Ultrasensitive flow based immunoassays using single-molecule counting. Clin Chem 2007; 53:1990–1995.

5. van de Kerkhof D, Peters B, Scharnhorst V. Performance of Advia Centaur second-generation troponin assay TnI-Ultra compared with the first-generation cTnI assay. Ann Clin Biochem 2008; 45:316–317.

6. Lam Q, Black M, Youdell O, Spilsbury H, Schneider HG. Performance evaluation and subsequent clinical experience with the Abbott automated Architect STAT Troponin-I assay. Clin Chem 2006; 52:298–300.

7. Tate JR, Ferguson W, Bais R, Kostner K, Marwick T, Carter A. The determination of the 99th percentile level for troponin assays in an Australian reference population. Ann Clin Biochem 2008; 45:275–288.

8. Jaffe AS, Apple FS, Morrow DA, Lindahl B, Katus HA. Being rational about (im)-precision: a statement from the Biochemistry Subcommittee of the Joint European Society of Cardiology/American College of Cardiology Foundation/American Heart Association/World Heart Federation Task Force for the definition of myocardial infarction. Clin Chem 2010; 56:921–943.

To the Editor:

Hoffmann et al. (July 26 issue)1 conclude that, among patients with low-to-intermediate-risk acute coronary syndromes, the incorporation of coronary computed tomographic angiography (CCTA) improves the standard evaluation strategy.2 However, it may be difficult to generalize their results, owing to different situations on the two sides of the Atlantic and the availability of high-sensitivity troponin T assays in Europe. In the United States, the Food and Drug Administration has still not approved a high-sensitivity troponin test, and patients in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT-II) trial only underwent testing with the conventional troponin T test. As we found in the biomarker substudy in the ROMICAT-I trial, a single high-sensitivity troponin T test at the time of CCTA accurately ruled out acute myocardial infarction (negative predictive value, 100%) (Table 1).3 In addition, patients with acute myocardial infarction can be reliably identified, with up to 100% sensitivity, with the use of two high-sensitivity measurements of troponin T within 3 hours after admission.4,5

It seems plausible to assume that the incorporation of high-sensitivity troponin T assays in this trial would have outperformed CCTA. Therefore, it is important to assess the performance of such testing and compare it with routine CCTA testing in terms of length of stay in the hospital and secondary end points, especially cumulative costs and major adverse coronary events at 28 days.

Mahir Karakas, M.D.
Wolfgang Koenig, M.D.
University of Ulm Medical Center, Ulm, Germany
wolfgang.koenig@uniklinik-ulm.de

References

  1. Hoffmann U, Truong QA, Schoenfeld DA, et al. Coronary CT angiography versus standard evaluation in acute chest pain. N Engl J Med 2012;367:299-308

  2. Redberg RF. Coronary CT angiography for acute chest pain. N Engl J Med 2012;367:375-376

  3. Januzzi JL Jr, Bamberg F, Lee H, et al. High-sensitivity troponin T concentrations in acute chest pain patients evaluated with cardiac computed tomography. Circulation 2010;121:1227-1234

  4. Keller T, Zeller T, Ojeda F, et al. Serial changes in highly sensitive troponin I assay and early diagnosis of myocardial infarction. JAMA 2011;306:2684-2693

  5. Thygesen K, Mair J, Giannitsis E, et al. How to use high-sensitivity cardiac troponins in acute cardiac care. Eur Heart J 2012;33:2252-2257

Author/Editor Response

In response to Karakas and Koenig: we agree that high-sensitivity troponin T assays may permit more efficient care of low-risk patients presenting to the emergency department with acute chest pain1 and may also have the potential to identify patients with unstable angina because cardiac troponin T levels are associated with the degree and severity of coronary artery disease.2 Hence, high-sensitivity troponin T assays performed early may constitute an efficient and safe gatekeeper for imaging. CCTA, however, may be useful for ruling out coronary artery disease in patients who have cardiac troponin T levels above the 99th percentile but below levels that are diagnostic for myocardial infarction. The hypothesis that high-sensitivity troponin T testing followed by CCTA, as compared with other strategies, may enable safe and more efficient treatment of patients in the emergency department who are at low-to-moderate risk warrants further assessment. The generalizability of our data to clinical settings outside the United States may also be limited because of differences in the risk profile of emergency-department populations and the use of nuclear stress imaging.3

Udo Hoffmann, M.D., M.P.H.
Massachusetts General Hospital, Boston, MA
uhoffmann@partners.org

W. Frank Peacock, M.D.
Baylor College of Medicine, Houston, TX

James E. Udelson, M.D.
Tufts Medical Center, Boston, MA

Since publication of their article, the authors report no further potential conflict of interest.

References

  1. Than M, Cullen L, Reid CM, et al. A 2-h diagnostic protocol to assess patients with chest pain symptoms in the Asia-Pacific region (ASPECT): a prospective observational validation study. Lancet 2011;377:1077-1084

  2. Januzzi JL Jr, Bamberg F, Lee H, et al. High-sensitivity troponin T concentrations in acute chest pain patients evaluated with cardiac computed tomography. Circulation 2010;121:1227-1234

  3. Peacock WF. The value of nothing: the consequence of a negative troponin test. J Am Coll Cardiol 2011;58:1340-1342

  • Alternative #3: Coronary CT Angiography for Acute Chest Pain

The Study concluded:

There was increased diagnostic testing and higher radiation exposure in the CCTA group, with no overall reduction in the cost of care. 

Coronary CT Angiography versus Standard Evaluation in Acute Chest Pain

Udo Hoffmann, M.D., M.P.H., Quynh A. Truong, M.D., M.P.H., David A. Schoenfeld, Ph.D., Eric T. Chou, M.D., Pamela K. Woodard, M.D., John T. Nagurney, M.D., M.P.H., J. Hector Pope, M.D., Thomas H. Hauser, M.D., M.P.H., Charles S. White, M.D., Scott G. Weiner, M.D., M.P.H., Shant Kalanjian, M.D., Michael E. Mullins, M.D., Issam Mikati, M.D., W. Frank Peacock, M.D., Pearl Zakroysky, B.A., Douglas Hayden, Ph.D., Alexander Goehler, M.D., Ph.D., Hang Lee, Ph.D., G. Scott Gazelle, M.D., M.P.H., Ph.D., Stephen D. Wiviott, M.D., Jerome L. Fleg, M.D., and James E. Udelson, M.D. for the ROMICAT-II Investigators

N Engl J Med 2012; 367:299-308 July 26, 2012  http://dx.doi.org/10.1056/NEJMoa1201161

BACKGROUND

It is unclear whether an evaluation incorporating coronary computed tomographic angiography (CCTA) is more effective than standard evaluation in the emergency department in patients with symptoms suggestive of acute coronary syndromes.

METHODS

In this multicenter trial, we randomly assigned patients 40 to 74 years of age with symptoms suggestive of acute coronary syndromes but without ischemic electrocardiographic changes or an initial positive troponin test to early CCTA or to standard evaluation in the emergency department on weekdays during daylight hours between April 2010 and January 2012. The primary end point was length of stay in the hospital. Secondary end points included rates of discharge from the emergency department, major adverse cardiovascular events at 28 days, and cumulative costs. Safety end points were undetected acute coronary syndromes.

RESULTS

The rate of acute coronary syndromes among 1000 patients with a mean (±SD) age of 54±8 years (47% women) was 8%. After early CCTA, as compared with standard evaluation, the mean length of stay in the hospital was reduced by 7.6 hours (P<0.001) and more patients were discharged directly from the emergency department (47% vs. 12%, P<0.001). There were no undetected acute coronary syndromes and no significant differences in major adverse cardiovascular events at 28 days. After CCTA, there was more downstream testing and higher radiation exposure. The cumulative mean cost of care was similar in the CCTA group and the standard-evaluation group ($4,289 and $4,060, respectively; P=0.65).

CONCLUSIONS

In patients in the emergency department with symptoms suggestive of acute coronary syndromes, incorporating CCTA into a triage strategy improved the efficiency of clinical decision making, as compared with a standard evaluation in the emergency department, but it resulted in an increase in downstream testing and radiation exposure with no decrease in the overall costs of care. (Funded by the National Heart, Lung, and Blood Institute; ROMICAT-II ClinicalTrials.gov number, NCT01084239.)

http://www.nejm.org/doi/full/10.1056/NEJMoa1201161#t=abstract

REFERENCES

  1. Roe MT, Harrington RA, Prosper DM, et al. Clinical and therapeutic profile of patients presenting with acute coronary syndromes who do not have significant coronary artery disease. Circulation 2000;102:1101-1106

  2. Miller JM, Rochitte CE, Dewey M, et al. Diagnostic performance of coronary angiography by 64-row CT. N Engl J Med 2008;359:2324-2336

  3. Budoff MJ, Dowe D, Jollis JG, et al. Diagnostic performance of 64-multidetector row coronary computed tomographic angiography for evaluation of coronary artery stenosis in individuals without known coronary artery disease: results from the prospective multicenter ACCURACY (Assessment by Coronary Computed Tomographic Angiography of Individuals Undergoing Invasive Coronary Angiography) trial. J Am Coll Cardiol 2008;52:1724-1732

  4. Marano R, De Cobelli F, Floriani I, et al. Italian multicenter, prospective study to evaluate the negative predictive value of 16- and 64-slice MDCT imaging in patients scheduled for coronary angiography (NIMISCAD-Non Invasive Multicenter Italian Study for Coronary Artery Disease). Eur Radiol 2009;19:1114-1123
  5. Meijboom WB, Meijs MF, Schuijf JD, et al. Diagnostic accuracy of 64-slice computed tomography coronary angiography: a prospective, multicenter, multivendor study. J Am Coll Cardiol 2008;52:2135-2144
  6. Hoffmann U, Bamberg F, Chae CU, et al. Coronary computed tomography angiography for early triage of patients with acute chest pain: the ROMICAT (Rule Out Myocardial Infarction using Computer Assisted Tomography) trial. J Am Coll Cardiol 2009;53:1642-1650

  7. Hollander JE, Chang AM, Shofer FS, et al. One-year outcomes following coronary computerized tomographic angiography for evaluation of emergency department patients with potential acute coronary syndrome. Acad Emerg Med 2009;16:693-698

  8. Rubinshtein R, Halon DA, Gaspar T, et al. Usefulness of 64-slice cardiac computed tomographic angiography for diagnosing acute coronary syndromes and predicting clinical outcome in emergency department patients with chest pain of uncertain origin. Circulation 2007;115:1762-1768

  9. Schlett CL, Banerji D, Siegel E, et al. Prognostic value of CT angiography for major adverse cardiac events in patients with acute chest pain from the emergency department: 2-year outcomes of the ROMICAT trial. JACC Cardiovasc Imaging 2011;4:481-491

  10. Goldstein JA, Chinnaiyan KM, Abidov A, et al. The CT-STAT (Coronary Computed Tomographic Angiography for Systematic Triage of Acute Chest Pain Patients to Treatment) trial. J Am Coll Cardiol 2011;58:1414-1422

  11. Litt HI, Gatsonis C, Snyder B, et al. CT angiography for safe discharge of patients with possible acute coronary syndromes. N Engl J Med 2012;366:1393-1403

  12. Shreibati JB, Baker LC, Hlatky MA. Association of coronary CT angiography or stress testing with subsequent utilization and spending among Medicare beneficiaries. JAMA 2011;306:2128-2136

  13. Hoffmann U, Truong QA, Fleg JL, et al. Design of the Rule Out Myocardial Ischemia/Infarction Using Computer Assisted Tomography: a multicenter randomized comparative effectiveness trial of cardiac computed tomography versus alternative triage strategies in patients with acute chest pain in the emergency department. Am Heart J 2012;163:330-338

  14. Abbara S, Arbab-Zadeh A, Callister TQ, et al. SCCT guidelines for performance of coronary computed tomographic angiography: a report of the Society of Cardiovascular Computed Tomography Guidelines Committee. J Cardiovasc Comput Tomogr 2009;3:190-204

  15. Gerber TC, Carr JJ, Arai AE, et al. Ionizing radiation in cardiac imaging: a science advisory from the American Heart Association Committee on Cardiac Imaging of the Council on Clinical Cardiology and Committee on Cardiovascular Imaging and Intervention of the Council on Cardiovascular Radiology and Intervention. Circulation 2009;119:1056-1065

  16. von Ballmoos MW, Haring B, Juillerat P, Alkadhi H. Meta-analysis: diagnostic performance of low-radiation-dose coronary computed tomography angiography. Ann Intern Med 2011;154:413-420. [Erratum, Ann Intern Med 2011;154:848.]

  17. Achenbach S, Marwan M, Ropers D, et al. Coronary computed tomography angiography with a consistent dose below 1 mSv using prospectively electrocardiogram-triggered high-pitch spiral acquisition. Eur Heart J 2010;31:340-346

  18. Than M, Cullen L, Reid CM, et al. A 2-h diagnostic protocol to assess patients with chest pain symptoms in the Asia-Pacific region (ASPECT): a prospective observational validation study. Lancet 2011;377:1077-1084

In the editorial "Coronary CT Angiography for Acute Chest Pain" (N Engl J Med 2012;367:375-376), Dr. Redberg, Cardiology Division, UCSF, made the following points:

  • Six million people present to the ER annually with acute chest pain; most have conditions other than heart disease.
  • Current diagnostic methods lead to hospital admissions, unnecessary stays, and over-treatment; improvement of outcomes is needed.
  • In the Rule Out Myocardial Infarction Using Computer Assisted Tomography II (ROMICAT-II) trial, 1000 patients were randomly assigned to a CCTA group or a standard-evaluation group in the ER, the latter involving a stress test in 74%.

CRITIQUE and Study FLAWS in MGH Study:

  • ROMICAT-II enrolled patients only during weekday daytime hours, not on weekends or nights, when costs are higher.
  • The assumption that a diagnostic test must be done before discharge for low-to-intermediate-risk patients is unproven and probably unwarranted. There is no evidence that the tests performed led to improved outcomes.
  • Event rates for patients who underwent CCTA, stress testing, or no testing at all were below 1% for MI, and no one died. Thus, it is impossible to assign a benefit to the CCTA group. Similarly low rates were observed in other studies.
  • CCTA patients were exposed to a substantial dose of radiation and to contrast dye.
  • Patients had already undergone ECG with negative troponin results; there is no evidence that additional testing further reduced risk.
  • The average patient age was 54 years, and 47% were women: demographic characteristics with a low incidence of CAD (N Engl J Med 1979;300:1350-8).
  • The risk of cancer from radiation is higher in younger patients, and likewise in women.
  • In Hoffmann's study the radiation burden was clinically significant: 4.7±8.4 mSv in the standard-evaluation group versus 13.9±10.4 mSv with CCTA; an exposure of 10 mSv has been projected to lead to 1 death from cancer per 2000 persons (Arch Intern Med 2009;169:2071-7).
  • Middle-aged women have an increased risk of breast cancer from radiation (Arch Intern Med 2012 Jun 11, Epub ahead of print).
  • In ROMICAT-II, fewer than 10% of patients had a discharge diagnosis of acute coronary syndrome.
  • The CCTA group had more tests, more radiation, and more interventions than the standard-evaluation group.
  • Choosing Wisely campaign: order a test only when the benefit will exceed the risks.

Dr. Redberg advocates ECG and troponin testing; if both are normal, no further testing.

Epicrisis on Part 1

Redberg’s conclusions are correct for the initial screening. The issue has been whether to do further testing for low or intermediate risk patients.

The most intriguing finding, which is not at all surprising, is that CCTA added very little in the suspect group at small or moderate risk. My original studies using a receiver operating characteristic (ROC) curve were very good, although some patients with CRF or ESRD had extremely high values. The ultra-sensitive troponin threw the area under the ROC curve out the window, under the assumption that a perfect assay would exclude AMI, or any injury to the heart. The improved assay does pick up minor elevations of troponin in the absence of MI as a result of plaque rupture. It is possible that 50% of these elevations need medical attention, but then the question is whether to make an out-of-hospital referral or to admit for further workup. I have discussed this at some length on several occasions with Dr. Jaffe at Mayo Clinic.
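For readers who want to see how the area under the ROC curve is actually computed, here is a minimal Python sketch with simulated marker values (the distributions and sample sizes are invented for illustration, not data from any study discussed here):

```python
# Minimal ROC/AUC sketch with simulated data (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)

# Simulated troponin-like values (ng/L): non-AMI vs AMI groups.
non_ami = rng.lognormal(mean=1.5, sigma=0.6, size=500)   # mostly low values
ami = rng.lognormal(mean=3.0, sigma=0.8, size=50)        # shifted upward

values = np.concatenate([non_ami, ami])
labels = np.concatenate([np.zeros(500), np.ones(50)])    # 1 = AMI

# AUC: the probability that a random AMI case has a higher
# marker value than a random non-AMI case.
auc = roc_auc_score(labels, values)
print(f"AUC = {auc:.3f}")

# Full curve: sensitivity (TPR) vs 1 - specificity (FPR) at every cutoff.
fpr, tpr, cutoffs = roc_curve(labels, values)
```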

Many of those with minor or intermediate elevations have significant renal insufficiency, but they might also be in CKD stage 3 rather than stage 1 or 2. The coexistence of type 2 diabetes would go into the standard assessment, but it is not mentioned in the study with respect to immediate admission or outcome at 28 days after discharge.

The hs-troponin I assay has been in daily use on the Ortho J&J (formerly Kodak) platform for about 2 years, and the QC standards are very high. I expected the Roche hs-TnT assay to be in use in the US as well, but there may have been delays. Januzzi, Jaffe, and Fred Apple would be involved in the evaluation in the US, while Paul Collinson in the UK, Katus and Mair in Germany, and other European centers certainly have been using the Roche assay.

The biggest problem in these studies is, as my mentor called to my attention, that the frontrunners aren't going to support a head-to-head, up-front study. Given that a diagnosis requires more information at minimal cost, especially when cardiac diagnoses that are not MI have to be evaluated as well, it is incomprehensible to me that such information as

  1. mean arterial blood pressure,
  2. natriuretic peptides, and
  3. the calculated eGFR

is not used in the evaluation.

It is quite impossible to clear the deck when you have patients without

  1. ST elevation,
  2. ST depression, or
  3. T-wave inversion

(not to mention long-QT abnormalities) who are seen for vague complaints such as

  • precordial tightness or shortness of breath
  • pain that resembles gallbladder disease.

Is this an indication of the obsolescence of the RCT?

A Retrospective Quality- and Cost-Driven Audit on the Effect of an hs-cTn Assay with On-Site CT Follow-up (No Treadmill Availability)

A retrospective multisite study showed that performing the hs-cTn assay followed by on-site CT was a good choice for the US.

I also considered the selective release of low- to moderate-risk patients to cardiology follow-up in a timely manner.

This report by Korley and Jaffe in Medscape is an excellent analysis of my point, and it resolves a discussion of several years that I have had with Dr. Jaffe at Mayo Clinic. He pointed out the importance of the distinction between type 1 and type 2 AMI in a discussion with Dr. Fred Apple at a meeting of the American Association for Clinical Chemistry, and he fully elaborates on it here. It is really a refinement of other proposals that are being discussed. It is also timely, because hs-cTnI is already being used widely in the US, while there might be a holdup on hs-cTnT.

Highlights

  1. Need for a Universally Accepted Nomenclature
  2. Defining Uniform Criteria for Reference Populations
  3. Discriminating Between Acute and Nonacute Causes of hs-cTn Elevations
  4. Distinguishing Between Type 1 and Type 2 AMI
  5. Analytical Imprecision in Cardiac Troponin Assays
  6. Ruling Out AMI
  7. Investigating the Causes of Positive Troponin Values in Non-AMI Patients
  8. Risk Stratifying Patients With Nonacute Coronary Syndrome Conditions
  9. Conclusions

Abstract

It is only a matter of time before the use of high-sensitivity cardiac troponin assays (hs-cTn) becomes common throughout the United States. In preparation for this inevitability, this article raises a number of important issues regarding these assays that deserve consideration.

These include:

  • the need for the adoption of a universal nomenclature;
  • the importance of defining uniform criteria for reference populations;
  • the challenge of discriminating between acute and nonacute causes of hs-cTn elevations, and between type 1 and type 2 acute myocardial infarction (AMI);
  • factors influencing the analytical precision of hs-cTn;
  • ascertaining the optimal duration of the rule-out period for AMI;
  • the need for further evaluation to determine the causes of a positive hs-cTn in non-AMI patients; and
  • the use of hs-cTn to risk-stratify patients with disease conditions other than AMI.

This review elaborates on these critical issues as a means of educating clinicians and researchers about them.

Introduction

Recently, clinicians have begun to use the recommended cut-off value for current-generation cardiac troponin (cTn) assays:

  • the 99th percentile upper reference limit (URL).

Previously, there was reluctance to use this cut-off value because of cTn elevations from non-acute ischemic heart disease conditions. Thus, there was a tendency to use cut-off values for troponin that equated with

  • the prior gold-standard diagnosis developed with less sensitive markers, namely the creatine kinase-MB isoenzyme (CK-MB), or
  • the lowest value at which the assay achieved a 10% coefficient of variation (CV),

which would reduce false-positive elevations (those without plaque rupture).

The use of the 99th percentile URL increases the ability of these assays to detect both

  •   acute myocardial infarction (AMI) and
  •   structural cardiac morbidities.[1]

This change in practice should not be confused with

  •   newer-generation high-sensitivity assays.

Improvements in the analytic performance of cTn assays have resulted in

  •   superior sensitivity and precision.

Improved sensitivity occurs because of

  •   more sensitive antigen binding and detection antibodies,
  •   increases in the concentration of the detection probes on the tag antibodies,
  •   increases in sample volume, and buffer optimization.[2]

Assays now are able to measure

  •   10-fold lower concentrations with high precision

(a CV <10% at the 99th percentile  of the URL).

The high-sensitivity cardiac troponin T (hs-cTnT) assay is already in clinical use throughout most of the world. It is only a matter of time before high-sensitivity assays are approved for use in the United States. In preparation for this, as well as for using the 99th percentile URL with contemporary assays, there are a number of important issues that deserve consideration. Key concepts are included in Table 1.

Table 1. Key Concepts

  • There is a need to develop a universal nomenclature for troponin assays.
  • There is a need for uniform criteria for selecting reference populations.
  • The optimal delta criteria for distinguishing between acute and chronic cardiac injury remain unclear and are likely to be assay-specific.
  • Distinguishing between type 1 and type 2 AMI is challenging, and more type 2 AMIs will be detected with hs-cTn assays.
  • Factors affecting the analytical precision of troponin assays (including how samples are collected) will become more important with the use of hs-cTn assays.
  • The optimal duration for ruling out AMI remains unclear; novel approaches to this issue are being developed.
  • Elevated hs-cTn, regardless of the cause, has important prognostic implications and deserves additional evaluation; many cases of chronic elevation can be evaluated in an outpatient setting.
  • Hs-cTn can be used to risk-stratify patients with non-ACS cardiovascular comorbidities.

Need for a Universally Accepted Nomenclature

The literature is replete with terms used to refer to cTn assays.
We advocate the use of the term "high-sensitivity cardiac troponin assay" (hs-cTn) for

  • cTn assays that measure cardiac troponin values in at least 50% of a reference population.[2,3]

This policy has now been embraced by the journal Clinical Chemistry. High-sensitivity assays can be further categorized (Table 2) with respect to assay generation.

Table 2. Classification of High-Sensitivity Cardiac Troponin Assays

Category               Description
First generation       Able to measure cTn in 50%-75% of a reference population
Second generation      Able to measure cTn in 75%-95% of a reference population
Third generation       Able to measure cTn in >95% of a reference population

Adapted from Apple and Collinson.[3]
  • Ideally, assays should have a CV of <10% at the 99th percentile value.

Assays that do not achieve this level of precision are less sensitive, which protects against false-positive results, and they can still be used.[4]

Defining Uniform Criteria for Reference Populations
There is a lack of consistency in the types and numbers of subjects that constitute a reference population.[2] Often, participants are included after simple screening by checklist but without a

  • physical examination,
  • electrocardiogram, or
  • laboratory testing.

At other times, a

  • normal creatinine and/or a normal natriuretic peptide value is required.
  • Imaging to detect structural heart disease is rarely used. 

Because it is known that

  • gender,
  • age,
  • race,
  • renal function,
  • heart failure, and
  • structural heart disease, including
  • increased left ventricular (LV) mass

are associated with increased cTn concentrations,[5,6,7] an assay's 99th percentile value depends on the composition of the reference group. Thus, the more criteria used, the lower the reference values (Figure 1).[5] A small simulation after the screening criteria below illustrates this effect.

http://img.medscape.com/article/803/159/803159-fig1.jpg

One screening definition required only, based on questionnaire, that subjects

  • have no history of vascular disease or diabetes, and
  • not be taking cardioactive drugs.

A stricter definition of "normal" required individuals who had

  • no history of vascular or cardiovascular disease, diabetes mellitus, hypertension, or heavy alcohol intake,
  • were receiving no cardiac medication, AND
  • had blood pressure ≤140/90 mmHg,
  • fasting glucose <110 mg/dL,
  • eGFR >60 mL/min,
  • LVEF >50%, normal lung function, and
  • no significant valvular heart disease, LVH, diastolic HF, or regional wall-motion abnormalities on ECHO.
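To make the point about reference-population composition concrete, the following minimal Python sketch (with invented distributions; real reference data would be needed) shows how excluding a subclinical-disease subset lowers the computed 99th percentile URL:

```python
# Illustrative sketch: stricter screening of a reference population
# lowers the 99th-percentile upper reference limit (URL).
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical hs-cTn values (ng/L) in a loosely screened population:
# a healthy majority plus a subset with subclinical disease and higher values.
healthy = rng.lognormal(mean=1.0, sigma=0.5, size=int(n * 0.9))
subclinical = rng.lognormal(mean=2.0, sigma=0.6, size=int(n * 0.1))
loose_population = np.concatenate([healthy, subclinical])

url_loose = np.percentile(loose_population, 99)   # questionnaire-only screening
url_strict = np.percentile(healthy, 99)           # imaging/laboratory-screened subset

print(f"99th percentile URL, loose screening:  {url_loose:.1f} ng/L")
print(f"99th percentile URL, strict screening: {url_strict:.1f} ng/L")
```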

The appropriate reference value to use clinically also is far from a settled issue.
It might be argued that

  • using a higher 99th percentile value for the elderly
  • allows comparison of the patient to his or her peers, but

in raising the cut-off value, if the increases are caused by comorbidities,

  • those who are particularly healthy will be disadvantaged.[8]

Gender and ethnicity are not comorbidities, and we would urge that those should be taken into account.
Regardless of the assay, there will need to be 99th percentile values for men that are different from those for women.[2]

The reference population for assay validation studies should ideally be based on demographic characteristics that mirror the U.S. population and include subjects whose

  • blood pressure,
  • serum glucose, and
  • creatinine and
  • natriuretic peptide values are
  • within the normal reference range and
  • who take no cardiac  medications.

These subjects should be

  • free from structural heart disease,
  • documented by echocardiography,
  • cardiac magnetic resonance imaging (MRI) or
  • computed tomography (CT) angiography.

Meeting these criteria will be a major challenge, especially for older individuals.
A conjoint pool of samples, collected with manufacturers' support so that all methods derived their reference ranges from an identical patient population, would be ideal. (However, issues of collection method and possible freeze-thaw effects are unavoidable.) One large national effort might be advantageous over multiple efforts.

 Discriminating Between Acute and Nonacute Causes of hs-cTn Elevations

With the ability to precisely measure small concentrations of cTn,

  • clinicians will be faced with the challenge of distinguishing patients
    • who have acute problems from those with chronic elevations from other causes.

Using the fourth-generation cTnT assay, approximately 0.7% of patients in
the general population have modest elevations >99th percentile URL.[11]

In the same population, this number was 2% with the hs-cTnT assay.[6]  Only

  • half of them had documentation (even with imaging) of cardiac abnormalities.

If the prevalence of a positive cTnT is 2% in the general population,

  • it will likely be 10% or 20% in the emergency department (ED)
  • and even higher in hospitalized patients, as
  • these patients often have cardiac comorbidities.

Measurement of changes in hs-cTn over time (δ hs-cTn)

  • improves the specificity of hs-cTn for the diagnosis of acute cardiac injury.[12,13]

However, it does so at the cost of sensitivity. With contemporary assays, analytical variation has been used to define an increasing pattern. At elevated values, the CV for most assays is in the range of 5% to 7%, so a change of 20% ensures that a given change is not caused by analytical variation alone[10] (a worked check follows below).
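The 20% criterion can be reproduced from the standard least-significant-change formula of laboratory medicine (a worked check, assuming two serial measurements, 95% confidence with z = 1.96, analytical variation only, and CV_A = 7%, the upper end of the range quoted above):

\[
\Delta_{\text{crit}} = z \cdot \sqrt{2} \cdot CV_A = 1.96 \times \sqrt{2} \times 7\% \approx 19.4\% \approx 20\%
\]

When within-subject biological variation (CV_I) is included, this generalizes to the reference change value,

\[
\mathrm{RCV} = z \cdot \sqrt{2} \cdot \sqrt{CV_A^2 + CV_I^2},
\]

which is why the conjoint reference change values quoted later in this review (35% to 85%) are larger than the purely analytical 20%.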

At values near the 99th percentile URL, higher change values are necessary.[13]  The situation with hs-cTn assays is much more complex, as follows:

1. Change criteria are unique for each assay.
2. It will be easy to misclassify patients with coronary artery disease who may present with a noncardiac cause of chest pain but have elevated values.

They could be having unstable ischemia or elevations caused by structural cardiac abnormalities and noncardiac discomfort.

If hs-cTn is rising significantly, the issue is easy but

  • if the values are not rising, a diagnosis of AMI still might be made.
  • If so, some patients may be included as having AMI without a changing pattern.
  • This occurred in 14% of the patients studied by Hammarsten et al.[14]

If patients with elevated hs-cTn without a changing pattern are not called AMI,

  • should they be called patients with “unstable angina and cardiac injury” or patients with structural heart disease and noncardiac chest pain?

Perhaps both exist?

3. The release of biomarkers is flow-dependent. Thus, there may not always be rapid access to the circulation. An area of injury distal to a totally occluded vessel (when collateral channels close) may differ, in terms of the dynamics of hs-cTn change, from an intermittently occluded coronary artery.
4. Conjoint biological and analytical variation can be measured. It is assay-dependent, and the reference change values range from 35% to 85%.[2]

The use of criteria less than that (which may be what is needed clinically) will thus
likely include individuals with changes caused by

  • conjoint biological and analytical variation alone.

This has been shown to be the case in

  • many patients with nonacute cardiovascular diagnoses.[14,15]
5. Most evaluations have attempted to define the optimal delta, often with receiver operating characteristic (ROC) curve analysis. Such an approach is based on the concept that sensitivity and specificity deserve equivalent weight. However, higher deltas improve specificity and lower ones improve sensitivity, and it is not clear that all physicians want the same tradeoff. ED physicians often prefer high sensitivity so that their miss rate is low (<1%),[16] whereas hospital clinicians want increased specificity. This tension will need to be addressed in defining the optimal delta.
6. The delta associated with AMI may be different from that associated with other cardiac injury.[14] In addition, women have less marked elevations of cTn in response to coronary artery disease[17] and, in earlier studies, were less apt to have elevated values.[18] Given that their pathology is at times different, different metrics may be necessary based on gender.
7. Some groups have assumed that if a change is of a given magnitude over 6 hours, it can be divided by 6 and the 1-h values can be used.

  • This approach is not data driven, and biomarker release is more likely to be discontinuous rather than continuous.[19]

In addition, the values obtained with this approach are too small to be distinguished from a lack of change with most assays.

These issues pose a major challenge even for defining the ideal delta change value and provide the reasons why

  • the use of this approach will reduce sensitivity[20,21] (Figure 2).

http://img.medscape.com/article/803/159/803159-fig2.jpg

Defining the Optimal Delta: Tension Between Sensitivity and Specificity

There is a reciprocal relationship between sensitivity and specificity. With marked percentage changes,

  • specificity is improved at the expense of sensitivity, and
  • at lower values, the opposite occurs (a small simulation below makes this explicit).
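The simulation referenced above, a hedged Python sketch with invented delta distributions (not data from the cited studies):

```python
# Sketch: sweeping the delta-cTn threshold trades sensitivity for specificity.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical percent changes between serial measurements.
delta_ami = rng.normal(loc=120, scale=60, size=300)      # AMI: large rises
delta_non_ami = rng.normal(loc=10, scale=15, size=3000)  # non-AMI: noise-level changes

for threshold in (10, 20, 50, 100):  # candidate delta criteria, in percent
    sensitivity = np.mean(delta_ami >= threshold)
    specificity = np.mean(delta_non_ami < threshold)
    print(f"delta >= {threshold:>3}%: sensitivity {sensitivity:.2f}, "
          f"specificity {specificity:.2f}")
# Larger deltas improve specificity at the expense of sensitivity.
```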

In addition, there is controversy in regard to the metrics that should be used with high-sensitivity assays.
The Australian-New Zealand group proposed

  • a 50% change for hs-cTnT at values below 53 ng/L and
  • a 20% change above that value[22] (a minimal sketch of this two-tier rule follows below).
  • The proposed 20% change is much less than the conjoint biological and analytical variation.
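The sketch, assuming the thresholds quoted above; the function name and structure are illustrative only, not a validated clinical tool:

```python
# Illustrative two-tier delta rule for hs-cTnT (thresholds from the
# Australian-New Zealand proposal quoted above; not a validated tool).
def significant_delta(baseline_ng_l: float, repeat_ng_l: float) -> bool:
    """Return True if the serial change meets the proposed delta criterion."""
    if baseline_ng_l <= 0:
        raise ValueError("baseline must be positive")
    percent_change = abs(repeat_ng_l - baseline_ng_l) / baseline_ng_l * 100
    required = 50.0 if baseline_ng_l < 53.0 else 20.0
    return percent_change >= required

# Examples: a 40 -> 55 ng/L rise (37.5%) fails the 50% rule at low values,
# while a 60 -> 75 ng/L rise (25%) meets the 20% rule at higher values.
print(significant_delta(40, 55))  # False
print(significant_delta(60, 75))  # True
```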

A number of publications have suggested the superiority of

  • absolute δ cTn compared to relative δ cTn in discriminating between AMI and non-AMI causes of elevated cTn.[23,24,25]
  • The utility of the absolute or relative δ cTn appears to depend on the initial cTn concentration, and
  • the major benefit may be at higher values.[23]

A recent publication by Apple et al.[26] calculates deltas in several different ways with a contemporary assay and provides a template for how to do such studies optimally.

If all studies were carried out in a similar fashion, it would help immensely. In the long run, institutions will need to
define the approach they wish to take. We believe this discussion is a critical one and should include

  • laboratory,
  • ED, and
  • cardiology professionals.

Distinguishing Between Type 1 and Type 2 AMI

Although δ cTn is helpful in distinguishing between AMI and nonacute causes of Tn release,

  • it may or may not be useful in discerning type 1 from type 2 AMI.

As assay sensitivity increases, it appears that the frequency of type 2 AMI increases.
Making this distinction is not easy.

Type 1 AMI is caused by a primary coronary event, usually plaque rupture.

  • It is managed acutely with aggressive anticoagulation and
  • revascularization (percutaneous coronary intervention or coronary artery bypass).[10]

Type 2 AMI typically evolves secondary to ischemia from an oxygen demand/supply mismatch, such as

  • severe tachycardia or
  • hypo- or hypertension and the like,
  • with or without a coronary abnormality.

These events usually are treated by addressing the underlying abnormalities.

They are particularly common in patients who are

  • critically ill and those who
  • are postoperative.[27]

However, autopsy studies from patients with postoperative AMI often manifest plaque rupture.[28]
Thus, the more important events, even if less common, may be type 1 AMIs. Type 2 events
seem more common in women,  who tend to have

  • more endothelial dysfunction,
  • more plaque erosion, and
  • less fixed coronary artery disease.[28-30]

Additional studies are needed to determine how best to make this clinical distinction.
For now, clinical judgment is recommended.

Analytical Imprecision in Cardiac Troponin Assays

All analytical problems will be more critical with hs-cTn assays. Cardiac troponin I (cTnI) and cardiac troponin T (cTnT) are measured using enzyme-linked immunosorbent assays.

  •   Quantification of hs-cTn can be influenced by interference by reagent antibodies to the analyte (cTn), leading to false-positive or false-negative results.[31]
  •   Autoantibodies to cTnI or cTnT are found in 5% to 20% of individuals and can reduce detection of cTn.[32,33]
  •   Additionally, fetal cTn isoforms can be re-expressed in diseased skeletal muscle and detected by the cTnT assays, resulting in false-positive values.[34]

Several strategies, including the use of

  •   blocking reagents,
  •   assay redesign, and
  •   antibody fragments,

have been used to reduce interference.[35,36]

There are differences in measured cTn values based on specimen type (serum versus heparinized plasma versus EDTA plasma). In addition, hemolysis may affect the accuracy of cTn measurement,[37] and so may blood draws from peripheral IV lines, which are common in the ICU.

Ruling Out AMI

Studies evaluating the diagnostic performance of hs-cTn assays for the early diagnosis of AMI usually define AMI on

  • the basis of a rising and/or falling pattern of current generation cTn values.[21,38]

However, defining AMI on the basis of the less sensitive current-generation assay results in an underestimation of the true prevalence of AMI and

  • an overestimation of the negative predictive value of the experimental assay; it also
  • artificially shortens the apparent time needed to rule in all the AMIs and to definitively exclude AMI, because it
  • ignores the additional AMIs detected more sensitively by the hs-cTn assay.

Thus, in the study by Hammarsten et al.,[14]

  • the time to exclude all AMIs was 8.5 hours when all of the AMIs detected
    with the high-sensitivity assay were included, whereas
  • others that do not include these additional events report this can be done
    in 3 to 4 hours.[21,29,38]

In our view, Hammarsten is correct.

This does not mean that hs-cTn cannot help in excluding AMI. Body et al.[39] reported that patients who present with undetectable values (less than the limit of blank, LoB, of the hs-cTnT assay) were unlikely to have adverse events during follow-up. If that group of patients is added to those who present later than 6 hours after symptom onset, then perhaps a significant proportion of patients

  • with possible acute coronary syndrome (ACS) could
  • have that diagnosis excluded with the initial value[40] (a minimal decision sketch follows below).
    • Studies need to continue to evaluate cTn values for at least 6 hours to define the frequency of additional AMIs detected in that manner.
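The decision sketch referenced above; both cutoff values are hypothetical placeholders for illustration, and this is not a clinical algorithm:

```python
# Illustrative rule-out sketch based on the text above: an undetectable
# initial hs-cTnT argues against AMI, and a normal value in a patient
# presenting >6 h after symptom onset has had time to rise if AMI occurred.
# Both cutoff values are assumptions, not validated numbers.
HS_CTNT_LOB_NG_L = 3.0        # assumed limit of blank (LoB)
HS_CTNT_URL_99TH_NG_L = 14.0  # assumed 99th-percentile URL

def initial_rule_out(hs_ctnt_ng_l: float, hours_since_onset: float) -> bool:
    """True if AMI could plausibly be excluded on the initial value alone."""
    undetectable = hs_ctnt_ng_l < HS_CTNT_LOB_NG_L
    late_and_normal = (hours_since_onset > 6.0
                       and hs_ctnt_ng_l < HS_CTNT_URL_99TH_NG_L)
    return undetectable or late_and_normal

# Example: undetectable value at 2 h -> rule out; borderline value at 1 h -> keep.
print(initial_rule_out(2.0, 2.0))   # True
print(initial_rule_out(10.0, 1.0))  # False (serial testing still needed)
```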

Studies using follow-up evaluations of patients with small event rates, who are likely to have additional care during the follow-up period, are likely to be underpowered.

It may be that better initial risk stratification may help with this, as recently reported.[16,41]
Low-risk patients who have good follow-up after an ED visit

  • may be a group that can be released as early as 2 h after presentation.[16]

Investigating the Causes of Positive Troponin Values in Non-AMI Patients

Elevated Tn values (including those obtained with high-sensitivity assays) are associated with

  • a 2-fold higher risk of longer-term all-cause mortality and
  • cardiovascular death, compared with negative troponin values.[6,42-44]

This association is dose-dependent.

  • If values are rising, they are indicative of acute cardiac injury.

Those patients should be admitted because the risk is often short-term. However,

  • if the values are stable, assuming the timing of any acute event would
    allow detection of a changing pattern,
  • the risk, although substantive, in our view, often plays out in the longer term.[44]
  • Many of these individuals, assuming they are doing well clinically, can be
    evaluated outside of the hospital, in our view.
  • However, because such elevations are an indicator of a subclinical
    cardiovascular injury,  such evaluations should be early and aggressive.

Data from several studies suggest that there may well be risk far below the 99th percentile URL value.
Thus, it may evolve that patients in the upper ranges of the normal range also require some degree of cardiovascular evaluation.

Risk Stratifying Patients With Nonacute Coronary Syndrome Conditions

Patients who have a rising pattern of values have a higher risk of mortality than those with negative values regardless of the cause.
Investigations are ongoing to determine how well results from hs-cTn testing help to risk-stratify patients with

  • pulmonary embolism,[45]
  • congestive heart failure,[46]
  • sepsis,[47]
  • hypertensive emergency,[48] and
  • chronic obstructive pulmonary disease.[49]

Presently, the studies suggest that cTn values classify patients into clinically relevant  risk subgroups. Studies are needed

  • to evaluate the incremental prognostic benefit of hs-cTn.

Conclusions

Routine use of hs-cTn assays in the United States is inevitable. These assays hold
the promise of

  • improving the sensitivity of AMI diagnoses,
  • shortening the duration of AMI evaluation and
  • improving the risk stratification of other noncardiac diagnoses.

However, to fully realize their potential, additional studies are needed to address the knowledge gaps we have identified. In the interim, clinicians need to

  • learn how to use the 99th percentile URL and
  • the concept of changing values.

John Adan, MD, FACC

In 2008, CMS commissioned Yale University to analyze 30-day mortality after myocardial infarction in U.S. hospitals.

The study has been based on review of medical records. Consensus criteria for diagnosis of myocardial infarction include

  • clinical symptoms,
  • EKG,
  • troponins,
  • CK MB,
  • ECHO,
  • cath,
  • histopathology, etc.

How the reviewed hospitals performed diagnostic coding is unknown. In clinical practice we are bombarded by consults

  • for elevated troponins due to causes other than myocardial infarction, like
    • pneumonia,
    • accelerated hypertension,
    • arrhythmias,
    • renal failure, etc.

The metric started out over 19%. Now it is below 15%, on average.
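The metric Dr. Adan describes is, at its core, an observed-versus-expected comparison: each hospital's actual 30-day deaths are compared with the deaths a risk model predicts for its case mix. A minimal sketch of that arithmetic (the patient data and predicted risks are invented; the actual CMS/Yale measure uses hierarchical regression for risk standardization):

```python
# Sketch of an observed-vs-expected (O/E) 30-day mortality comparison.
# Predicted risks would come from a risk model; here they are invented.
observed_deaths = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]         # 1 = died within 30 days
predicted_risk = [0.05, 0.30, 0.10, 0.08, 0.25,
                  0.04, 0.12, 0.06, 0.40, 0.07]          # model-based probabilities

observed = sum(observed_deaths)
expected = sum(predicted_risk)
oe_ratio = observed / expected

print(f"Observed deaths: {observed}")
print(f"Expected deaths: {expected:.2f}")
print(f"O/E ratio: {oe_ratio:.2f}  (>1 = worse than expected)")
```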

CT Angiography (CCTA) Reduced Medical Resource Utilization compared to Standard Care reported in JACC
Aviva Lev-Ari, PhD, RN
http://pharmaceuticalintelligence.com/2013/05/16/ct-angiography-ccta-reduced-medical-resource-utilization-compared-to-standard-care-reported-in-jacc/

Typical changes in CK-MB and cardiac troponin in acute myocardial infarction (Photo credit: Wikipedia)

Phosphotungstic acid-haematoxylin staining demonstrating contraction band necrosis in an individual that had a myocardial infarction (heart attack). (Photo credit: Wikipedia)

Troponin (SVG version) (Photo credit: Wikipedia)


Observations on Finding the Genetic Links in Common Disease: Whole Genomic Sequencing Studies

Author: Larry H Bernstein, MD, FCAP

In this article I will address the following article by Dr. SJ Williams.

Finding the Genetic Links in Common Disease:  Caveats of Whole Genome Sequencing Studies

 

In the November 23, 2012 issue of Science, Jocelyn Kaiser reports ("Genetic Influences on Disease Remain Hidden," News and Analysis) on the difficulties that many genomic studies are encountering in correlating genetic variants with high risk of type 2 diabetes and heart disease. At the American Society of Human Genetics 2012 annual meeting, DNA sequencing studies reported results on genetic variants and their links to high-risk type 2 diabetes and heart disease, part of an international effort to determine the genetic events contributing to complex, common diseases like diabetes.
The key point is that these disease links are challenged by the identification of genetic determinants that do not follow Mendelian genetics. There are many disease-associated gene variants, and they have not been deleted as a result of natural selection. In the case of type 2 diabetes, the genetic risk is as low as 26%.

Genome-wide association studies (GWAS) have identified single nucleotide polymorphisms (SNPs) with associations for common diseases, but most of these individually carry only 20-40% of risk. This is not sufficient for prediction and use in personalized treatment.

What is the implication of this? Researchers have gone to exome sequencing and to whole-genome sequencing for answers. SNP genotyping can be done easily by microarray, even in a clinic setting. GWAS is difficult, has inherent complexity, and has had a high cost of use, but the cost of the technology has been dropping precipitously. Technology is being redesigned for more rapid diagnosis and for use in clinical research and personalized medicine. It appears that this is not yet a game changer.

My own thinking is that the answer doesn't fully lie in genome sequencing; rather, it must turn on the very large weight of importance of the regulatory function in the genome, that which was once "considered" dark matter. In the regulatory function you have a variety of interactions and adaptive changes to the proximate environment, and this is a key to the nascent study of metabolomics.

Three projects highlighted are:
1.  National Heart, Lung and Blood Institute Exome Sequencing Project (ESP)[2]: heart, lung, blood

  • A majority of variants linked to any disease are rare
  • Groups of variants in the same gene confirmed a link between
    APOC3 and risk for early-onset heart attack

2.  T2D-GENES Consortium
3.  GoT2D

  • SNP and PAX4 gene association for type 2 diabetes in East Asians
  • No new rare variants above 1.5% frequency for diabetes

http://www.phgfoundation.org/news/5164/

The unsupported conclusions from this have been:

  1. the common disease-common variant hypothesis, which predicts that common disease-causing genetic variants exist in all human populations, but (common unexplained complexity?) each individual variant will necessarily have only a small effect on disease susceptibility (i.e., a low associated relative risk);
  2. the common disease, many rare variants hypothesis, which postulates that disease is caused by multiple strong-effect variants (an alternative complexity situation?). Dickson et al. (2010) PLoS Biol 8(1):e1000294

The reality is that it has been difficult to associate any variant with prediction of risk; alternative approaches appear to be intron sequencing and recovering the missing information on gene-gene interactions.

Jocelyn Kaiser’s Science article notes this in a brief interview with Harry Dietz of Johns Hopkins University where he suspects that “much of the missing heritability lies in gene-gene interactions”.

Oliver Harismendy, Kelly Frazer, and colleagues' recent publication in Genome Biology (http://genomebiology.com/content/11/11/R118) supports this notion. The authors used targeted resequencing of two endocannabinoid metabolic enzyme genes (fatty-acid amide hydrolase (FAAH) and monoglyceride lipase (MGLL)) in 147 normal-weight and 142 extremely obese patients.

The human genome, categorized by function of each gene product, given both as number of genes and as percentage of all genes. (Photo credit: Wikipedia)


Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging

Curator: Aviva Lev-Ari, PhD, RN

Article ID #52: Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging. Published on 5/17/2013

WordCloud Image Produced by Adam Tubman

UPDATED on 7/12/2021

  • Abstract. Synthetic biology is a field of scientific research that applies engineering principles to living organisms and living systems.
  • Introduction. This article is intended as a perspective on the field of synthetic biology. …
  • Genetic Manipulation—Plasmids. …
  • Genetic Manipulations—Genome. …
  • An Early Example of Synthetic Biology. …

UPDATED on 11/6/2018

Which biological systems should be engineered?

To solve real-world problems using emerging abilities in synthetic biology, research must focus on a few ambitious goals, argues Dan Fletcher, Professor of bioengineering and biophysics, and chair of the Department of Bioengineering at the University of California, Berkeley, USA. He is also a Chan Zuckerberg Biohub Investigator.
Start Quote

Artificial blood cells. Blood transfusions are crucial in treatments for everything from transplant surgery and cardiovascular procedures to car accidents, pregnancy-related complications and childhood malaria (see go.nature.com/2ozbfwt). In the United States alone, 36,000 units of red blood cells and 7,000 units of platelets are needed every day (see go.nature.com/2ycr2wo).

But maintaining an adequate supply of blood from voluntary donors can be challenging, especially in low- and middle-income countries. To complicate matters, blood from donors must be checked extensively to prevent the spread of infectious diseases, and can be kept for only a limited time — 42 days or 5 days for platelets alone. What if blood cells could be assembled from purified or synthesized components on demand?

In principle, cell-like compartments could be made that have the oxygen-carrying capacity of red blood cells or the clotting ability of platelets. The compartments would need to be built with molecules on their surfaces to protect the compartments from the immune system, resembling those on a normal blood cell. Other surface molecules would be needed to detect signals and trigger a response.

In the case of artificial platelets, that signal might be the protein collagen, to which circulating platelets are exposed when a blood vessel ruptures5. Such compartments would also need to be able to release certain molecules, such as factor V or the von Willebrand clotting factor. This could happen by building in a rudimentary form of exocytosis, for example, whereby a membrane-bound sac containing the molecule would be released by fusing with the compartment’s outer membrane.

It is already possible to encapsulate cytoplasmic components from living cells in membrane compartments6,7. Now a major challenge is developing ways to insert desired protein receptors into the lipid membrane8, along with reconstituting receptor signalling.

Red blood cells and platelets are good candidates for the first functionally useful synthetic cellular system because they lack nuclei. Complex functions such as nuclear transport, protein synthesis and protein trafficking wouldn’t have to be replicated. If successful, we might look back with horror on the current practice of bleeding one person to treat another.

Micrograph of red blood cells, 3 T-lymphocytes, and activated platelets: human blood as viewed under a scanning electron microscope. Credit: Dennis Kunkel Microscopy/SPL

Designer immune cells. Immunotherapy is currently offering new hope for people with cancer by shaping how the immune system responds to tumours. Cancer cells often turn off the immune response that would otherwise destroy them. The use of therapeutic antibodies to stop this process has drastically increased survival rates for people with multiple cancers, including those of the skin, blood and lung9. Similarly successful is the technique of adoptive T-cell transfer. In this, a patient’s T cells or those of a donor are engineered to express a receptor that targets a protein (antigen) on the surface of tumour cells, resulting in the T cells killing the cancerous cells (called CAR-T therapies)10. All of this has opened the door to cleverly rewiring the downstream signalling that results in the destruction of tumour cells by white blood cells11.

What if researchers went a step further and tried to create synthetic cells capable of moving towards, binding to and eliminating tumour cells?

In principle, untethered from evolutionary pressures, such cells could be designed to accomplish all sorts of tasks — from killing specific tumour cells and pathogens to removing brain amyloid plaques or cholesterol deposits. If mass production of artificial immune cells were possible, it might even lessen the need to tailor treatments to individuals — cutting costs and increasing accessibility.

To ensure that healthy cells are not targeted for destruction, engineers would also need to design complex signal-processing systems and safeguards. The designer immune cells would need to be capable of detecting and moving towards a chemical signal or tumour. (Reconstituting the complex process of cell motility is itself a major challenge, from the delivery of energy-generating ATP molecules to the assembly of actin and myosin motors that enable movement.)

Researchers have already made cell-like compartments that can change shape12, and have installed signalling circuits within them13. These could eventually be used to control movement and mediate responses to external signals.

Smart delivery vehicles. The relative ease of exposing cells in the lab to drugs, as well as introducing new proteins and engineering genomes, belies how hard it is to deliver molecules to specific locations inside living organisms. One of the biggest challenges in most therapies is getting molecules to the right place in the right cell at the right time.

Harnessing the natural proclivity of viruses to deliver DNA and RNA molecules into cells has been successful14. But virus size limits cargo size, and viruses don’t necessarily infect the cell types researchers and clinicians are aiming at. Antibody-targeted synthetic vesicles have improved the delivery of drugs to some tumours. But getting the drug close to the tumour generally depends on the vesicles leaking from the patient’s circulatory system, so results have been mixed.

Could ‘smart’ delivery vehicles containing therapeutic cargo be designed to sense where they are in the body and move the cargo to where it needs to go, such as across the blood–brain barrier?

This has long been a dream of those in drug delivery. The challenges are similar to those of constructing artificial blood and immune cells: encapsulating defined components in a membrane, incorporating receptors into that membrane, and designing signal-processing systems to control movement and trigger release of the vehicle’s contents.

The development of immune-cell ‘backpacks’ is an exciting step in the right direction. In this, particles containing therapeutic molecules are tethered to immune cells, exploiting the motility and targeting ability of the cells to carry the molecules to particular locations15.

A minimal chassis for expression. In each of the previous examples, the engineered cell-like system could conceivably be built to function over hours or days, without the need for additional protein production and regulation through gene expression. For many other tasks, however, such as the continuous production of insulin in the body, it will be crucial to have the ability to express proteins, upregulate or downregulate certain genes, and carry out functions for longer periods.

Engineering a ‘minimal chassis’ that is capable of sustained gene expression and functional homeostasis would be an invaluable starting point for building synthetic cells that produce proteins, form tissues and remain viable for months to years. This would require detailed understanding and incorporation of metabolic pathways, trafficking systems and nuclear import and export — an admittedly tall order.

It is already possible to synthesize DNA in the lab, whether through chemically reacting bases or using biological enzymes or large-scale assembly in a cell16. But we do not yet know how to ‘boot up’ DNA and turn a synthetic genome into a functional system in the absence of a live cell.

Since the early 2000s, biologists have achieved gene expression in synthetic compartments loaded with cytoplasmic extract17. And genetic circuits of increasing complexity (in which the expression of one protein results in the production or degradation of another) are now the subject of extensive research. Still to be accomplished are: long-lived gene expression, basic protein trafficking and energy production reminiscent of live cells.

End Quote

SOURCE

https://www.nature.com/articles/d41586-018-07291-3

UPDATED on 10/14/2013

Genetics of Atherosclerotic Plaque in Patients with Chronic Coronary Artery Disease

372/3:15 Genetic influence on LpPLA2 activity at baseline as evaluated in the exome chip-enriched GWAS study among ~13600 patients with chronic coronary artery disease in the STABILITY (STabilisation of Atherosclerotic plaque By Initiation of darapLadIb TherapY) trial. L. Warren, L. Li, D. Fraser, J. Aponte, A. Yeo, R. Davies, C. Macphee, L. Hegg, L. Tarka, C. Held, R. Stewart, L. Wallentin, H. White, M. Nelson, D. Waterworth.

Genetic influence on LpPLA2 activity at baseline as evaluated in the exome chip-enriched GWAS study among ~13600 patients with chronic coronary artery disease in the STABILITY (STabilisation of Atherosclerotic plaque By Initiation of darapLadIb TherapY) trial.

L. Warren(1), L. Li(1), D. Fraser(1), J. Aponte(1), A. Yeo(2), R. Davies(3), C. Macphee(3), L. Hegg(3), L. Tarka(3), C. Held(4), R. Stewart(5), L. Wallentin(4), H. White(5), M. Nelson(1), D. Waterworth(3).

1) GlaxoSmithKline, Res Triangle Park, NC; 2) GlaxoSmithKline, Stevenage, UK; 3) GlaxoSmithKline, Upper Merion, Pennsylvania, USA; 4) Uppsala Clinical Research Center, Department of Medical Sciences, Uppsala University, Uppsala, Sweden; 5) Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand.

STABILITY is an ongoing phase III cardiovascular outcomes study that compares the effects of darapladib enteric-coated (EC) tablets, 160 mg, versus placebo, when added to the standard of care, on the incidence of major adverse cardiovascular events (MACE) in subjects with chronic coronary heart disease (CHD). Blood samples for determination of the LpPLA2 activity level in plasma and for extraction of DNA were obtained at randomization. To identify genetic variants that may predict response to darapladib, we genotyped ~900K common and low-frequency coding variations using the Illumina OmniExpress GWAS plus exome chip in advance of study completion. Among the 15828 intent-to-treat recruited subjects, 13674 (86%) provided informed consent for genetic analysis. Our pharmacogenetic (PGx) analysis group is composed of subjects from 39 countries on five continents, including 10139 Whites of European heritage, 1682 Asians of East Asian or Japanese heritage, 414 Asians of Central/South Asian heritage, 268 Blacks, 1027 Hispanics, and 144 others. Here we report association analysis of baseline levels of LpPLA2 to support future PGx analysis of drug response after trial completion. Among the 911375 variants genotyped, 213540 (23%) were rare (MAF < 0.5%).

Our analyses were focused on the drug target, LpPLA2 enzyme activity measured at baseline. GWAS analysis of LpPLA2 activity, adjusting for age, gender, and the top 20 principal component scores, identified 58 variants surpassing the GWAS significance threshold (5e-08).

Genome-wide stepwise regression analyses identified multiple independent associations from PLA2G7, CELSR2, APOB, KIF6, and APOE, reflecting the dependency of LpPLA2 on LDL-cholesterol levels. Most notably, several low frequency and rare coding variants in PLA2G7 were identified to be strongly associated with LpPLA2 activity. They are V279F (MAF=1.0%, P= 1.7e-108), a previously known association, and four novel associations due to I1317N (MAF=0.05%, P=4.9e-8), Q287X (MAF=0.05%, P=1.6e-7), T278M (MAF=0.02%, P=7.6e-5) and L389S (MAF=0.04%, P=4.3e-4).

All of these variants had enzyme-activity-lowering effects, and each appeared to be specific to a certain ethnicity. Our comprehensive PGx analyses of baseline data have already provided great insight into common and rare coding genetic variants associated with the drug target and related traits, and this knowledge will be invaluable in facilitating future PGx investigation of darapladib response. (A minimal sketch of a single-variant association test of this kind follows after the source link below.)

SOURCE

http://www.ashg.org/2013meeting/pdf/46025_Platform_bookmark%20for%20Web%20Final%20from%20AGS.pdf
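As noted above, the baseline association analysis is, at each variant, a regression of LpPLA2 activity on genotype dosage with covariate adjustment. A hedged single-variant sketch in Python (all data are simulated; the real analysis adjusted for age, gender, and the top 20 principal components across ~900K variants):

```python
# Single-variant association sketch: regress baseline LpPLA2 activity
# on genotype dosage (0/1/2 copies of the minor allele) with covariates.
# All data here are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000

dosage = rng.binomial(2, 0.01, size=n)           # rare variant, MAF ~1%
age = rng.normal(65, 8, size=n)
male = rng.integers(0, 2, size=n)
pc1 = rng.normal(0, 1, size=n)                   # stand-in for the 20 PCs

# Simulated activity-lowering effect of the variant (as reported for V279F).
activity = 180 - 40 * dosage + 0.2 * age + 5 * male + rng.normal(0, 25, n)

X = sm.add_constant(np.column_stack([dosage, age, male, pc1]))
fit = sm.OLS(activity, X).fit()
print(fit.params[1], fit.pvalues[1])  # per-allele effect and its P-value
```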

Synthetic Biology: On Advanced Genome Interpretation for

  • Gene Variants and
  • Pathways,
  • Inversion Polymorphism,
  • Passenger Deletions,
  • De Novo Mutations,
  • Whole Genome Sequencing w/Linkage Analysis

What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging?

In a recent publication by my colleague, Stephen J. Williams, Ph.D. on  5/15/2013 titled

Finding the Genetic Links in Common Disease:  Caveats of Whole Genome Sequencing Studies

http://pharmaceuticalintelligence.com/2013/05/15/finding-the-genetic-links-in-common-disease-caveats-of-whole-genome-sequencing-studies/

we learned that:

  • Groups of variants in the same gene confirmed a link between APOC3 and higher risk for early-onset heart attack.
  • No other significant gene variants were linked with heart disease.

APOC3 – apolipoprotein C-III – Potential Relevance to the Human Aging Process

Main reason for selection
Entry selected based on indirect or inconclusive evidence linking the gene product to ageing in humans or in one or more model systems
Description
APOC3 is involved in fat metabolism and may delay the catabolism of triglyceride-rich particles. Changes in APOC3 expression levels have been reported in aged mice [1754]. Results from mice suggest that FOXO1 may regulate the expression of APOC3 [1743]. Polymorphisms in the human APOC3 gene and promoter have been associated with lipoprotein profile, cardiovascular health, insulin (INS) sensitivity, and longevity [1756]. Therefore, APOC3 may impact on some age-related diseases, though its exact role in human ageing remains to be determined.

Cytogenetic information

Cytogenetic band
11q23.1-q2
Location
116,205,833 bp to 116,208,997 bp
Orientation
Plus strand

Display region using the UCSC Genome Browser

Protein information

Gene Ontology
Process: GO:0006869; lipid transport
GO:0016042; lipid catabolic process
GO:0042157; lipoprotein metabolic process
Function: GO:0005319; lipid transporter activity
Cellular component: GO:0005576; extracellular region
GO:0042627; chylomicron

Protein interactions and network

No interactions in records.

Retrieve sequences for APOC3

Promoter
ORF
CDS

Homologues in model organisms

Bos taurus
APOC3_BOVI
Mus musculus
Apoc3
Pan troglodytes
APOC3

In other databases

AnAge
This species has an entry in AnAge

Selected references

  • [2125] Pollin et al. (2008) A null mutation in human APOC3 confers a favorable plasma lipid profile and apparent cardioprotection. PubMed
  • [1756] Atzmon et al. (2006) Lipoprotein genotype and conserved pathway for exceptional longevity in humans. PubMed
  • [1755] Araki and Goto (2004) Dietary restriction in aged mice can partially restore impaired metabolism of apolipoprotein A-IV and C-III. PubMed
  • [1743] Altomonte et al. (2004) Foxo1 mediates insulin action on apoC-III and triglyceride metabolism. PubMed
  • [1754] Araki et al. (2004) Impaired lipid metabolism in aged mice as revealed by fasting-induced expression of apolipoprotein mRNAs in the liver and changes in serum lipids. PubMed
  • [1753] Panza et al. (2004) Vascular genetic factors and human longevity. PubMed
  • [1752] Anisimov et al. (2001) Age-associated accumulation of the apolipoprotein C-III gene T-455C polymorphism C

http://genomics.senescence.info/genes/entry.php?hgnc=APOC3

Apolipoprotein C-III is a protein component of very low density lipoprotein (VLDL). APOC3 inhibits lipoprotein lipase and hepatic lipase; it is thought to inhibit hepatic uptake[1] of triglyceride-rich particles. The APOA1, APOC3 and APOA4 genes are closely linked in both rat and human genomes. The A-I and A-IV genes are transcribed from the same strand, while the A-I and C-III genes are convergently transcribed. An increase in apoC-III levels induces the development of hypertriglyceridemia.

Clinical significance

Two novel susceptibility haplotypes (specifically, P2-S2-X1 and P1-S2-X1) have been discovered in the ApoAI-CIII-AIV gene cluster on chromosome 11q23; these confer approximately threefold higher risk of coronary heart disease in normal individuals[2] as well as in non-insulin-dependent diabetes mellitus.[3] Apo-CIII delays the catabolism of triglyceride-rich particles. Elevations of Apo-CIII found in genetic variation studies may predispose patients to non-alcoholic fatty liver disease.

  1. ^ Mendivil CO, Zheng C, Furtado J, Lel J, Sacks FM (2009). “Metabolism of VLDL and LDL containing apolipoprotein C-III and not other small apolipoproteins – R2”. Arteriosclerosis, Thrombosis and Vascular Biology 30 (2): 239–45. doi:10.1161/ATVBAHA.109.197830. PMC 2818784. PMID 19910636.
  2. ^ Singh PP, Singh M, Kaur TP, Grewal SS (2007). “A novel haplotype in ApoAI-CIII-AIV gene region is detrimental to Northwest Indians with coronary heart disease”. Int J Cardiol 130 (3): e93–5. doi:10.1016/j.ijcard.2007.07.029. PMID 17825930.
  3. ^ Singh PP, Singh M, Gaur S, Grewal SS (2007). “The ApoAI-CIII-AIV gene cluster and its relation to lipid levels in type 2 diabetes mellitus and coronary heart disease: determination of a novel susceptible haplotype”. Diab Vasc Dis Res 4 (2): 124–29. doi:10.3132/dvdr.2007.030. PMID 17654446.

In 2013 we reported on the discovery of

Genetic Associations with Valvular Calcification and Aortic Stenosis

N Engl J Med 2013; 368:503-512

February 7, 2013. DOI: 10.1056/NEJMoa1109034

METHODS

We determined genomewide associations with the presence of aortic-valve calcification (among 6942 participants) and mitral annular calcification (among 3795 participants), as detected by computed tomographic (CT) scanning; the study population for this analysis included persons of white European ancestry from three cohorts participating in the Cohorts for Heart and Aging Research in Genomic Epidemiology consortium (discovery population). Findings were replicated in independent cohorts of persons with either CT-detected valvular calcification or clinical aortic stenosis.

CONCLUSIONS

Genetic variation in the LPA locus, mediated by Lp(a) levels, is associated with aortic-valve calcification across multiple ethnic groups and with incident clinical aortic stenosis. (Funded by the National Heart, Lung, and Blood Institute and others.)

SOURCE:

N Engl J Med 2013; 368:503-512

Related Research by Author & Curator of this article:

Atherogenesis: Predictor of CVD – the Smaller and Denser LDL Particles

Cardiovascular Biomarkers

Genetics of Conduction Disease: Atrioventricular (AV) Conduction Disease (block): Gene Mutations – Transcription, Excitability, and Energy Homeostasis

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013

Hypertriglyceridemia concurrent Hyperlipidemia: Vertical Density Gradient Ultracentrifugation a Better Test to Prevent Undertreatment of High-Risk Cardiac Patients

Hypertension and Vascular Compliance: 2013 Thought Frontier – An Arterial Elasticity Focus

Personalized Cardiovascular Genetic Medicine at Partners HealthCare and Harvard Medical School

Genomics Orientations for Individualized Medicine Volume One

Market Readiness Pulse for Advanced Genome Interpretation and Individualized Medicine

We present below the MARKET LEADER in interpretation of genomic computation results in the emerging new era of medicine, Genomic Medicine: Knome.com and its home-grown software powerhouse.

A second case study in advanced genome interpretation and individualized medicine, presented after the Market Leader, is the Genome-Phenome Analyzer by SimulConsult (“A Simultaneous Consult On Your Patient’s Diagnosis”), Chestnut Hill, MA.

 

2012: The Year When Genomic Medicine Started Paying Off

Luke Timmerman

An excerpt of an interesting article mentioning Knome [emphasis ours]…

Remember a couple of years ago when people commemorated the 10-year anniversary of the first draft human genome sequencing? The storyline then, in 2010, was that we all went off to genome camp and only came home with a lousy T-shirt. Society, we were told, invested huge scientific resources in deciphering the code of life, and there wasn’t much of a payoff in the form of customized, personalized medicine.

That was an easy conclusion to reach then, when personalized medicine advocates could only point to a couple of effective targeted cancer drugs—Genentech’s Herceptin and Novartis’ Gleevec—and a couple of diagnostics. But that’s changing. My inbox the past week has been full of analyst reports from medical meetings, which mostly alerted readers to mere “incremental” advances with a number of genomic-based medicines and diagnostics. But that’s a matter of focusing on the trees, not the forest. This past year, we witnessed some really impressive progress from the early days of “clinical genomics” or “medical genomics.” The investment in deep understanding of genomics and biology is starting to look visionary.

The movement toward clinical genomics gathered steam back in June at the American Society of Clinical Oncology annual meeting. One of the hidden gem stories from ASCO was about little companies like Cambridge, MA-based Foundation Medicine and Cambridge, MA-based Knome that started seeing a surprising surge in demand from physicians for their services to help turn genomic data into medical information. The New York Times wrote a great story a month later about a young genomics researcher at Washington University in St. Louis who got cancer, had access to incredibly rich information about his tumors, and—after some wrestling with his insurance company—ended up getting a targeted drug nobody would have thought to prescribe without that information. And last month, I checked back on Stanford University researcher Mike Snyder, who made headlines this year using a smorgasbord of “omics” tools to correctly diagnose himself early with Type 2 diabetes, and then monitor his progress back into a healthy state – read the entire article.

http://www.knome.com/knome-blog/2012-the-year-when-genomic-medicine-started-paying-off/

Knome and Real Time Genomics Ink Deal to Integrate and Sell the RTG Variant Platform on knoSYS™100 System

Partnership to bring accurate and fast genome analysis to translational researchers

CAMBRIDGE, MA –  May 6, 2013 – Knome Inc., the genome interpretation company, and Real Time Genomics, Inc., the genome analytics company, today announced that the Real Time Genomics (RTG) Variant platform will be integrated into every shipment of the knoSYS™100 interpretation system. The agreement enables customers to easily purchase the RTG analytics engine as an upgrade to the system. The product will combine two world-class commercial platforms to deliver end-to-end genome analytics and interpretation with superior accuracy and speed. Financial terms of the agreement were not disclosed.

“In the past year demand for genome interpretation has surged as translational researchers and clinicians adopt sequencing for human disease discovery and diagnosis,” said Wolfgang Daum, CEO of Knome. “Concomitant with that demand is the need for accurate and easy-to-use industrial grade analysis that meets expectations of clinical accuracy. The RTG platform is both incredibly fast and truly differentiating to customers doing family studies, and we are excited to add such a powerful platform to the knoSYS ecosystem.”

The partnership simplifies the purchasing process by allowing knoSYS customers to purchase the RTG platform directly from Knome sales representatives.

“The Knome system is a perfect complementary channel to further expand our commercial effort to bring the RTG platform to market,” said Steve Lombardi, CEO of Real Time Genomics. “Knome has built a recognizable brand around human clinical genome interpretation, and by delivering the RTG platform within their system, both companies are simplifying genomics to help customers understand human disease and guide clinical actions.”

About Knome

Knome Inc. (www.knome.com) is a leading provider of human genome interpretation systems and services. We help clients in two dozen countries identify the genetic basis of disease, tumor growth, and drug response. Designed to accelerate and industrialize the process of interpreting whole genomes, Knome’s big data technologies are helping to pave the healthcare industry’s transition to molecular-based, precision medicine.

About Real Time Genomics

Real Time Genomics (www.realtimegenomics.com) has a passion for genomics.  The company offers software tools and applications for the extraction of unique value from genomes.  Its competency lies in applying the combination of its patented core technology and deep computational expertise in algorithms to solve problems in next generation genomic analysis.  Real Time Genomics is a private San Francisco based company backed by investment from Catamount Ventures, Lightspeed Venture Partners, and GeneValue Ltd.

http://www.knome.com/knome-blog/knome-and-real-time-genomics-ink-deal-to-integrate-and-sell-the-rtg-variant-platform-on-knosys100-system/

Direct-to-Consumer Genomics Reinvents Itself

Malorye Allison

An excerpt of an interesting article mentioning Knome [emphasis ours]:

Cambridge, Massachusetts–based Knome made one of the splashiest entries into the field, but has now turned entirely to contract research. The company began providing DTC whole-genome sequencing to independently wealthy individuals at a time when the price was still sky high. The company’s first client, Dan Stoicescu, was a former biotech entrepreneur who paid $350,000 to have his genome sequenced in 2008 so he could review it “like a stock portfolio” as new genetic discoveries unfolded. About a year later, the company was auctioning off a genome, with such frills as a dinner with renowned Harvard genomics researcher George Church, at a starting price of $68,000; at the time, a full-genome sequence came at the price of $99,000, indicating that the cost of genome sequencing had been plummeting steadily.

Now, the company’s model is very different. “We stopped working with the ‘wealthy healthy’ in 2010,” says Jonas Lee, Knome’s chief marketing officer. “The model changed as sequencing changed.” The new emphasis, he says, is now on using Knome’s technology and technical expertise for genome interpretation. Knome’s customers are researchers, pharmaceutical companies and medical institutions, such as Johns Hopkins University School of Medicine in Baltimore, which in January signed the company up to interpret 1,000 genomes for a study of genetic variants underlying asthma in African American and African Caribbean populations.

Knome is trying to advance the clinical use of genomics, working with groups that “want to be prepared for what’s ahead,” Lee says. “We work with at least 50 academic institutions and 20 pharmaceutical companies looking at variants and drug response.” Cancer and idiopathic genetic diseases are the first sweet spots for genomic sequencing, he says. Although cancer genomics has been hot for a while, a recent string of discoveries of Mendelian diseases made by whole-genome sequencing has lit up that field, too. Lee is also confident, however, that “chronic diseases like heart disease are right behind those.” The company also provides software tools. The price for its KnomeDiscovery sequencing and analysis service starts at about $12,000 per sample – read the entire article here.

http://www.knome.com/knome-blog/direct-to-consumer-genomics-reinvents-itself/

Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves

VIEW VIDEO

http://www.colbertnation.com/the-colbert-report-videos/419824/october-04-2012/george-church

 

Knome Software Makes Sense of the Genome

The startup’s software takes raw genome data and creates a usable report for doctors.

DNA decoder: Knome’s software can tease out medically relevant changes in DNA that could disrupt individual gene function or even a whole molecular pathway, as is highlighted here—certain mutations in the BRCA2 gene, which affects the function of many other genes, can be associated with an increased risk of breast cancer.

A genome analysis company called Knome is introducing software that could help doctors and other medical professionals identify genetic variations within a patient’s genome that are linked to diseases or drug response. This new product, available for now only to select medical institutions, is a patient-focused spin on Knome’s existing products aimed at researchers and pharmaceutical companies. The Knome software turns a patient’s raw genome sequence into a medically relevant report on disease risks and drug metabolism. The software can be run within a clinic’s own network—rather than in the cloud, as is the case with some genome-interpretation services—which keeps the information private.

Advances in DNA sequencing technology have sharply reduced the amount of time and money required to identify all three billion base pairs of DNA in a person’s genome. But the use of genomic information for medical decisions is still limited because the process creates such large volumes of data. Less than five years ago, Knome, based in Cambridge, Massachusetts, made headlines by offering what seemed then like a low price—$350,000—for a genome sequencing and profiling package. The same service now costs just a few thousand dollars.

Today, genome profiling has two main uses in the clinic. It’s part of the search for the cause of rare genetic diseases, and it generates tumor-specific profiles to help doctors discover the weaknesses of a patient’s particular cancer. But within a few years, the technique could move beyond rare diseases and cancer. The information gleaned from a patient’s genome could explain the origin of a specific disease, could help save costs by allowing doctors to pre-treat future diseases, or could improve the effectiveness and safety of medications by allowing doctors to prescribe drugs that are tuned to a person’s ability to metabolize them.

But teasing out the relevant genetic information from a patient’s genome is not trivial. To find the particular genetic variant that causes a specific disease or drug response can require expertise from many disciplines—from genetics to statistics to software engineering—and a lot of time. In any given patient’s genome, millions of places in that genome will differ from the standard of reference. The vast majority of these differences, or variants, will be unrelated to a patient’s medical condition, but determining that can take between 20 minutes and two hours for each variant, says Heidi Rehm, a clinical geneticist who directs the Laboratory for Molecular Medicine at Partners Healthcare Center for Personalized Genetic Medicine in Boston, and who will soon serve on the clinical advisory board of Knome. “If you scale that to … millions of variants, it becomes impossible.”

A software package like Knome’s can help whittle down the list based on factors such as disease type, the pattern of inheritance in a family, and the effects of given mutations on genes. Other companies have introduced Web- or cloud-based services to perform such an analysis, but Knome’s software suite can operate within a hospital’s network, which is critically important for privacy-concerned hospitals.
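
The arithmetic behind “impossible” is stark: at even 20 minutes per variant, 3 million variants would consume on the order of a million hours, more than 100 person-years of review. Below is a hedged sketch of the automated whittling such software performs; the variant records and field names are invented illustrations, not Knome’s schema:

```python
# Toy variant-triage filter: keep rare, protein-altering variants in
# genes already implicated in the phenotype. Records are invented.
DAMAGING = {"nonsense", "frameshift", "splice-site", "missense"}

variants = [
    {"gene": "BRCA2", "maf": 0.0001, "effect": "nonsense",   "disease_gene": True},
    {"gene": "TTN",   "maf": 0.0400, "effect": "missense",   "disease_gene": False},
    {"gene": "OR4F5", "maf": 0.2500, "effect": "synonymous", "disease_gene": False},
]

def triage(records, max_maf=0.01):
    """Apply rarity, functional-class, and candidate-gene filters."""
    return [v for v in records
            if v["maf"] <= max_maf
            and v["effect"] in DAMAGING
            and v["disease_gene"]]

print(triage(variants))  # only the rare BRCA2 nonsense variant survives
```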

The greatest benefit of the widespread adoption of genomics in the clinic will come from the “clinical intelligence” doctors gain from networks of patient data, says Martin Tolar, CEO of Knome. Information about the association between certain genetic variants and disease or drug response could be anonymized—that is, no specific patient could be tied to the data—and shared among large hospital networks. Knome’s software will make it easy to share that kind of information, says Tolar.

“In the future, you could be in the situation where your physician will be able to pull the most appropriate information for your specific case that actually leads to recommendations about drugs and so forth,” he says.

http://www.technologyreview.com/news/428179/knome-software-makes-sense-of-the-genome/

An End-to-end Human Genome Interpretation System

The knoSYS™100 seamlessly integrates an interpretation application (knoSOFT) and informatics engine (kGAP) with a high-performance grid computer. Designed for whole genome, exome, and targeted NGS data, the knoSYS™100 helps labs quickly go “from reads to reports.”

Advanced Interpretation and Reporting Software

The knoSYS™100 ships with knoSOFT, an advanced application for managing sequence data through the informatics pipeline, filtering variants, running gene panels, classifying/interpreting variants, and reporting results.

knoSOFT has powerful and scalable multi-sample comparison features, capable of performing family studies, tumor/normal studies, and large case-control comparisons of hundreds of whole genomes.

Multiple simultaneous users (10) are supported, including technicians running sequence data through the informatics pipeline, developers creating next-generation gene panels, geneticists researching causal variants, and production staff processing gene panels.

http://www.knome.com/knosys-100-overview/

Publications

View our collection of journal articles and genome research papers written by Knome employees, Knome board members, and other industry experts.

Publications by Knome employees and board members

The Top Two Axes of Variation of the Combined Dataset (MS, BD, PD, and IBD)

21 Aug 2012

Discerning the Ancestry of European Americans in Genetic Association Studies

Co-authored by Dr. David Goldstein, Clinical and Scientific board member for Knome

Author summary: Genetic association studies analyze both phenotypes (such as disease status) and genotypes (at sites of DNA variation) of a given set of individuals. … more

Pedigree and genetic risk prediction workflow

20 Aug 2012

Phased Whole-Genome Genetic Risk in a Family Quartet Using a Major Allele Reference Sequence

Co-authored by Dr. George Church and Dr. Heidi Rehm, Clinical and Scientific Board Members for Knome

Author summary: An individual’s genetic profile plays an important role in determining risk for disease and response to medical therapy. The development of technologies that facilitate rapid whole-genome sequencing will provide unprecedented power in the estimation of disease risk. Here we develop methods to characterize genetic determinants of disease risk and … more

20 Aug 2012

A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia

Co-authored by Dr. David Goldstein, Clinical and Scientific board member for Knome

Author summary: Schizophrenia is a highly heritable disease. While the drugs commonly used to treat schizophrenia offer important relief from some symptoms, other symptoms are not well treated, and the drugs cause serious adverse effects in many individuals. This has fueled intense interest over the years in identifying genetic contributors to … more


20 Aug 2012

Whole-Genome Sequencing of a Single Proband Together with Linkage Analysis Identifies a Mendelian Disease Gene

Co-authored by Dr. David Goldstein, Clinical and Scientific board member for Knome

Author summary: Metachondromatosis (MC) is an autosomal dominant condition characterized by exostoses (osteochondromas), commonly of the hands and feet, and enchondromas of long bone metaphyses and iliac crests. MC exostoses may regress or even resolve over time, and short stature … more

19 Aug 2012

Exploring Concordance and Discordance for Return of Incidental Findings from Clinical Sequencing Co-authored by Dr. Heidi Rehm, Clinical and Scientific board member for Knome

Introduction: There is an increasing consensus that whole-exome sequencing (WES) and whole-genome sequencing (WGS) will continue to improve in accuracy and decline in price and that the use of these technologies will eventually become an integral part of clinical medicine.1–7 … more

Publications by industry experts and thought-leaders

22 Aug 2012

Rate of De Novo Mutations and the Importance of Father’s Age to Disease Risk

Augustine Kong, Michael L. Frigge, Gisli Masson, Soren Besenbacher, Patrick Sulem, Gisli Magnusson, Sigurjon A. Gudjonsson, Asgeir Sigurdsson, Aslaug Jonasdottir, Adalbjorg Jonasdottir, Wendy S. W. Wong, Gunnar Sigurdsson, G. Bragi Walters, Stacy Steinberg, Hannes Helgason, Gudmar Thorleifsson, Daniel F. Gudbjartsson, Agnar Helgason, Olafur Th. Magnusson, Unnur Thorsteinsdottir, & Kari Stefansson

Abstract: Mutations generate sequence diversity and provide a substrate for selection. The rate of de novo mutations is therefore of major importance to evolution. Here we conduct a study of genome-wide mutation rates by sequencing the entire genomes of 78 … more

15 Aug 2012

Passenger Deletions Generate Therapeutic Vulnerabilities in Cancer

Florian L. Muller, Simona Colla, Elisa Aquilanti, Veronica E. Manzo, Giannicola Genovese, Jaclyn Lee, Daniel Eisenson, Rujuta Narurkar, Pingna Deng, Luigi Nezi, Michelle A. Lee, Baoli Hu, Jian Hu, Ergun Sahin, Derrick Ong, Eliot Fletcher-Sananikone, Dennis Ho, Lawrence Kwong, Cameron Brennan, Y. Alan Wang, Lynda Chin, & Ronald A. DePinho

Abstract: Inactivation of tumour-suppressor genes by homozygous deletion is a prototypic event in the cancer genome, yet such deletions often encompass neighbouring genes. We propose that homozygous deletions in such passenger genes can expose cancer-specific therapeutic vulnerabilities when the collaterally … more

1 Jul 2012

Structural Diversity and African Origin of the 17q21.31 Inversion Polymorphism

Karyn Meltz Steinberg, Francesca Antonacci, Peter H Sudmant, Jeffrey M Kidd, Catarina D Campbell, Laura Vives, Maika Malig, Laura Scheinfeldt, William Beggs, Muntaser Ibrahim, Godfrey Lema, Thomas B Nyambo, Sabah A Omar, Jean-Marie Bodo, Alain Froment, Michael P Donnelly, Kenneth K Kidd, Sarah A Tishkoff, & Evan E Eichler

Abstract: The 17q21.31 inversion polymorphism exists either as direct (H1) or inverted (H2) haplotypes with differential predispositions to disease and selection. We investigated its genetic diversity in 2,700 individuals, with an emphasis on African populations. We characterize eight structural haplotypes … more

http://www.knome.com/publications/

Knome’s Systems & Software

Technical specifications

Connections and communications

Two networks: 40-Gigabit InfiniBand QDR via a Mellanox switch for storage traffic and an HP ProCurve switch for network traffic

High performance computing cluster

Four nodes, each with two 8-core/16-thread, 2.4GHz, 64-bit Intel® Xeon® E5-2660 processors with 20MB cache, 128GB of DDR3 ECC 1600 memory, and 2x2TB SATA drives (7,200RPM)

Metadata server

2x2TB 3.5″ drives with 6Gb/s SATA, RAID 1, and 2x300GB SSD (RAID 1)

Object storage server

Lustre array: Two arrays of 12x4TB 3.5″ drives with 6Gb/s SATA channels, each OSS powered by a 6-core, 64-bit Intel Xeon processor running at 2.0GHz with 32GB RAM.

knoSYS_server

96TB total, 64TB usable storage (redundancy for failure tolerance); expandable to 384TB total.

Data sources

Reference genome GRCh37 (HG19)

dbSNP, v137

Condel (SIFT and PolyPhen-2)

HPO

OMIM

Exome Variant Server, with allelisms and allele frequencies

1000 Genomes, with allelisms and allele frequencies

Human Gene Mutation db (HGMD)

PhastCons 46-way mammalian conservation

PhyloP

Input/output formats

Input formats: kGAP accepts Illumina FASTQ and VCF 4.1 files as inputs

Output formats: annotated VCF files
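
As a small, hedged illustration of the listed input/output formats (not kGAP’s actual parser), a VCF 4.1 data line is tab-delimited and its INFO annotations are simple key=value pairs; the record and INFO keys below are invented:

```python
# Unpack one annotated VCF 4.1 record; the line and INFO keys are hypothetical.
line = "11\t116205833\trs0000\tC\tT\t50\tPASS\tAF=0.004;GENE=APOC3"

chrom, pos, vid, ref, alt, qual, flt, info = line.split("\t")
annotations = dict(field.split("=") for field in info.split(";"))
print(f"chr{chrom}:{pos} {ref}>{alt}  AF={annotations['AF']}  gene={annotations['GENE']}")
```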

Electrical and operating requirements

Line voltage: 110V to 120V AC, 200-240V (single phase)

Frequency: 50Hz to 60Hz

Current: 30A, RoHS compliant

Connection: NEMA L5-30

Operating temperature: 50° to 95° F

UPS included

Maximum operating altitude: 10,000 feet

Power consumption: 2,800 VA (peak)

Size and weight

Height 49.2 Inches (1250 mm)
Width 30.7 Inches (780 mm)
Depth 47.6 Inches (1210 mm)
Weight 394 lbs (179 kg)

Noise generation and heat dissipation

Enclosure provides 28dB of acoustic noise reduction; system suitable for placing in working lab environment

7,200W of active heat dissipation

Included in the package

knoSYS™100 hardware

Knome software: knoSOFT, kGAP

Operating system: Linux (CentOS 6.3)

http://www.knome.com/knosys-100-specifications/

Our research services group uses a set of advanced software tools designed for whole genome and exome interpretation. These tools are also available to our clients through our knomeBASE informatics service. In addition to various scripts, libraries, and conversion utilities, these tools include knomeVARIANTS and knomePATHWAYS.

knomeVARIANTS


knomeVARIANTS is a query kit that lets users search for candidate causal variants in studied genomes. It includes a query interface, scripting libraries, and data conversion utilities.

Users select cases and controls, input a putative inheritance mode, and add sensible filter criteria (variant functional class, rarity/novelty, location in prior candidate regions, etc.) to automatically generate a sorted short-list of leading candidates. The application includes a SQL query interface to let users query the database as they wish, including by complex or novel sets of criteria.
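
A sketch of what such an ad hoc SQL query might look like follows; the schema, rows, and cutoffs are hypothetical illustrations, not the knomeVARIANTS database:

```python
# Hypothetical variant table queried as described: rare, functional
# variants carried by cases but by no controls.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE variants
               (gene TEXT, effect TEXT, maf REAL,
                case_carriers INT, control_carriers INT)""")
con.executemany("INSERT INTO variants VALUES (?,?,?,?,?)", [
    ("APOC3",  "nonsense",    0.002, 3, 0),
    ("APOB",   "missense",    0.150, 5, 5),
    ("PLA2G7", "splice-site", 0.004, 2, 0),
])
rows = con.execute("""SELECT gene, effect, maf FROM variants
                      WHERE maf < 0.01
                        AND effect IN ('nonsense','frameshift','splice-site')
                        AND case_carriers > 0 AND control_carriers = 0
                      ORDER BY maf""").fetchall()
print(rows)  # [('APOC3', 'nonsense', 0.002), ('PLA2G7', 'splice-site', 0.004)]
```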

In addition to querying, the application lets users export subsets of the database for viewing in MS Excel. Subsets can be output that target common research foci, including the following:

  • Sites implicated in phenotypes, regardless of subject genotypes
  • Sites where at least one studied genome mismatches the reference
  • Sites where a particular set of one or more genomes, but no other genomes, show a novel variant
  • Sites in phenotype-implicated genes
  • Sites with nonsense, frameshift, splice-site, or read-through variants, relative to reference
  • Sites where some but not all subject genomes were called

knomePATHWAYS


knomePATHWAYS is a visualization tool that overlays variants found in each sample genome onto known gene-interaction networks, to help spot functional interactions between variants in distinct genes, as well as pathways enriched for variants in cases versus controls, differential drug-responder groups, etc.

knomePATHWAYS integrates reference data from many sources, including GO, HPRD, and MSigDB (which includes KEGG and Reactome data). The application is particularly helpful in addressing higher-order questions, such as finding candidate genes and protein pathways, that are not readily addressed from tabular annotation data alone.
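
A standard way to quantify “pathways enriched for variants in cases versus controls” is a hypergeometric over-representation test. The sketch below illustrates that general technique with invented gene counts; it is not a description of knomePATHWAYS internals:

```python
# Hypergeometric over-representation test for one pathway (invented counts).
from scipy.stats import hypergeom

universe = 20000    # background genes
in_pathway = 150    # genes annotated to the pathway (e.g., a KEGG set)
hits = 300          # genes carrying qualifying variants in cases
overlap = 12        # hit genes that fall inside the pathway

# P(overlap >= 12) when 300 genes are drawn at random from 20,000
p = hypergeom.sf(overlap - 1, universe, in_pathway, hits)
print(f"enrichment P = {p:.3g}")
```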

http://www.knome.com/interpretation-toolkit/

Genome-Phenome Analyzer by SimulConsult

A Simultaneous Consult On Your Patient’s Diagnosis

Clinicians can get a “simultaneous consult” about their patient’s diagnosis using SimulConsult’s diagnostic decision support software.

Using the free “phenome” version, medical professionals can enter patient findings into the software and get an initial differential diagnosis and suggestions about other useful findings, including tests.  The database used by the software has > 4,000 diagnoses, most complete for genetics and neurology.  It includes all genes in GeneTests and all diseases in GeneReviews.  The information about diseases is entered by clinicians, referenced to the literature and peer-reviewed by experts.  The software takes into account pertinent negatives, temporal information, and cost of tests, information ignored in other diagnostic approaches.  It transforms medical diagnosis by lowering costs, reducing errors and eliminating the medical diagnostic odysseys experienced by far too many patients and their families.

http://www.simulconsult.com/index.html

Using the “genome-phenome analyzer” version, a lab can combine a genome variant table with the phenotypic data entered by the referring clinician, thereby using the full power of genome + phenome to arrive at a diagnosis in seconds.  An innovative measure of pertinence of genes focuses attention on the genes accounting for the clinical picture, even if more than one gene is involved.  The referring clinician can use the results in the free phenome version of the software, for example adding information from confirmatory tests or adding new findings that develop over time.  For details, click here.
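
Conceptually, combining genome and phenome evidence can be framed as Bayesian updating over candidate diagnoses. The toy sketch below is emphatically not SimulConsult’s algorithm; the diseases, likelihoods, and priors are invented solely to show the arithmetic:

```python
# Toy Bayesian combination of phenotype and genotype evidence.
# All numbers are invented placeholders.
priors = {"Disease A": 1e-5, "Disease B": 1e-4}           # population prevalence
phenotype_lik = {"Disease A": 0.30, "Disease B": 0.02}    # P(findings | disease)
genotype_lik = {"Disease A": 0.90, "Disease B": 0.05}     # P(variant table | disease)

scores = {d: priors[d] * phenotype_lik[d] * genotype_lik[d] for d in priors}
total = sum(scores.values())
posteriors = {d: s / total for d, s in scores.items()}
print(posteriors)  # joint genome + phenome evidence favors Disease A
```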

http://www.simulconsult.com/genome/index.html

Michael M. Segal MD, PhD, Founder, Chairman, and Chief Scientist. Dr. Segal did his undergraduate work at Harvard and his MD and PhD at Columbia, where his thesis project outlined rules for the types of chemical synapses that will form in a nervous system. After his residency in pediatric neurology at Columbia, he moved to Harvard Medical School, where he joined the faculty and developed the microisland system for studying small numbers of brain neurons in culture. Using this system, he developed a simplified model of epilepsy, work that won him national and international young-investigator awards and set the stage for later work on the molecular mechanism of attention deficit disorder. Dr. Segal has a long history of interest in computers, and patterned the SimulConsult software after the way that experienced clinicians actually think about diagnosis. He is on the Electronic Communication Committee of the Child Neurology Society and the Scientific Program Committee of the American Medical Informatics Association.

http://www.simulconsult.com/company/management.html

Read Full Post »

Treatment, Prevention and Cost of Cardiovascular Disease: Current & Predicted Cost of Care and the Potential for Improved Individualized Care Using Clinical Decision Support Systems

Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC

Author and Curator: Larry H Bernstein, MD, FACP

and

Curator: Aviva Lev-Ari, PhD, RN

This article has the following FIVE parts:

1. Forecasting the Impact of Heart Failure in the United States : A Policy Statement From the American Heart Association

2. A Case Study from the Genetic Connections — In The Family: Heart Disease: Seeking Clues to Heart Disease in the DNA of an Unlucky Family

3. Arterial Stiffness and Cardiovascular Events : The Framingham Heart Study

4. Arterial Elasticity in Quest for a Drug Stabilizer: Isolated Systolic Hypertension caused by Arterial Stiffening Ineffectively Treated by Vasodilatation Antihypertensives

5. Clinical Decision Support Systems: Realtime Clinical Expert Support — Biomarkers of Cardiovascular Disease : Molecular Basis and Practical Considerations

 

1. Forecasting the Impact of Heart Failure in the United States : A Policy Statement From the American Heart Association

PA Heidenreich, NM Albert, LA Allen, DA Bluemke, J Butler, et al. Circulation: Heart Failure 2013;6.
Print ISSN: 1941-3289, Online ISSN: 1941-3297.

Heart failure (HF) poses a major burden on productivity and on national healthcare expenditures;

  • among older Americans, more are hospitalized for HF than for any other medical condition.

As the population ages, the prevalence of HF is expected to increase.

The purpose of this report is to

  • provide an in-depth look at how the changing demographics in the United States will impact the prevalence and cost of care for HF for different US populations.

Projections of HF Prevalence

Prevalence estimates for HF were projected as shown in the table below.

Projections of the US Population With HF From 2010 to 2030 for Different Age Groups

Year    All ages     18–44 y    45–64 y     65–79 y     ≥80 y
2012    5,813,262    396,578    1,907,141   2,192,233   1,317,310
2015    6,190,606    402,926    1,949,669   2,483,853   1,354,158
2020    6,859,623    417,600    1,974,585   3,004,002   1,463,436
2025    7,644,674    434,635    1,969,852   3,526,347   1,713,840
2030    8,489,428    450,275    2,000,896   3,857,729   2,180,528

Future Costs of HF

The future costs of HF were estimated by methods developed by the American Heart Association to

  • project the prevalence and costs of HF from 2012 to 2030, and
  • factor out the costs attributable to comorbid conditions.

The model does this by assuming that

(1) HF prevalence percentages will remain constant by age, sex, and race/ethnicity;

(2) the costs of technological innovation will rise at the current rate.

HF prevalence and costs (direct and indirect) were projected using the following steps:

1. HF prevalence and average cost per person were estimated by age group (18–44, 45–64, 65–79, ≥80 years), gender (male, female), and race/ethnicity (white non-Hispanic, white Hispanic, black, other) [32]. The initial HF cost per person and the rate of increase in cost were determined for each demographic group, as a percentage of total healthcare expenditures.

2. Inflation is separately addressed by correcting dollar values from Medical Expenditure Panel Survey (MEPS) to 2010 dollars.

3. Nursing home spending triggered an adjustment. The estimates project the incremental cost of care attributable to heart failure (HF).

4. Total HF population prevalence and costs were projected by multiplying the US Census–projected population of each demographic group by the percentage prevalence and average cost

5. The total work loss and home productivity loss costs were generated by multiplying per capita work days lost attributable to HF by (1) prevalence of HF, (2) the probability of employment given HF (for work loss costs only), (3) mean per capita daily earnings, and (4) US Census population projection counts.
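
A numeric sketch of steps 4 and 5 for a single hypothetical demographic cell follows; the census count, prevalence, cost, and work-loss inputs are invented placeholders, not the AHA model’s actual parameters:

```python
# Steps 4-5 in miniature for one demographic group (invented inputs).
census_2030 = 14_000_000     # projected head count for the group
hf_prevalence = 0.08         # fraction of the group with HF
cost_per_person = 9_500.0    # incremental direct cost, 2010 dollars/year

hf_count = census_2030 * hf_prevalence
direct_cost = hf_count * cost_per_person
print(f"{hf_count:,.0f} HF patients, ${direct_cost / 1e9:.1f}B direct cost")

# Step 5: work-loss cost = days lost x P(employed) x daily earnings x count.
days_lost, p_employed, daily_earnings = 4.0, 0.12, 180.0
work_loss = hf_count * days_lost * p_employed * daily_earnings
print(f"${work_loss / 1e6:.0f}M annual work-loss cost")
```

Summing such cells across all age, sex, and race/ethnicity groups yields the national totals tabulated below.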

Projections of Indirect Costs

Indirect costs of lost productivity from morbidity and premature mortality were estimated as detailed below.
Morbidity costs represent the value of lost earnings attributable to HF and include loss of work among

  • currently employed individuals and those too sick to work, as well as
  • home productivity loss, which is the value of household services performed by household members who do not receive pay for the services.

Total Costs Attributable to Heart Failure (HF)

Projections of Total Cost of Care ($ Billions) for HF for Different Age Groups of the US Population

Year  Cost category          All     18–44   45–64   65–79   ≥80
2012  Medical                20.9    0.33    3.67    8.46    8.42
2012  Indirect: Morbidity     5.42   0.52    1.92    2.05    0.93
2012  Indirect: Mortality     4.35   0.66    2.53    0.98    0.18
2012  Total                  30.7    1.51    8.12   11.5     9.53
2020  Medical                31.1    0.43    4.58   14.2    11.8
2020  Indirect: Morbidity     7.09   0.66    2.20    3.11    1.12
2020  Indirect: Mortality     5.39   0.79    2.89    1.49    0.22
2020  Total                  43.6    1.88    9.67   18.8    13.2
2030  Medical                53.1    0.59    5.86   23.3    23.4
2030  Indirect: Morbidity     9.80   0.91    2.54    4.48    1.87
2030  Indirect: Mortality     6.84   0.98    3.32    2.16    0.37
2030  Total                  69.7    2.48   11.7    29.9    25.6

Excludes HF care costs that have been attributed to comorbid conditions.

Cost of Care

Total medical costs are projected to increase from $20.9 billion in 2012 to $53.1 billion in 2030, a 2.5-fold increase. Assuming continuation of current hospitalization practices, the majority (80%) of these costs stem from

  • hospitalization.

The majority of the increase is in direct costs. Indirect costs are expected to rise as well, but at a lower rate, from $9.8 billion to $16.6 billion, an increase of 69%.

Direct costs (the cost of medical care) are expected to increase at a faster rate than indirect costs (the value of lost productivity and premature death).

The total cost of HF (direct and indirect costs) is expected to increase in 2030 from the current $30.7 billion to at least $69.8 billion. This will amount to $244 for every US adult in 2030.

Thus the burden of HF for the US healthcare system will grow substantially during the next 18 years if current trends continue.

It is estimated that

  • by 2030, the prevalence of HF in the United States will increase by 25%, to 3.0%.
  • >8 million people in the US (1 in every 33) will have HF by 2030.
  • the projected total direct medical costs of HF between 2012 and 2030 (in 2010 dollars) will increase from $21 billion to $53 billion.
  • Total costs, including indirect costs for HF, are estimated to increase from $31 billion in 2012 to $70 billion in 2030.
  • If one assumes all costs of cardiac care for HF patients are attributable to HF
    (no cost attribution to comorbid conditions), the 2030 projected cost estimates of treating patients with HF will be 3-fold higher ($160 billion in direct costs).

Projections can be lowered if action is taken to reduce the health and economic burden of HF. Strategies, plans, and implementation to prevent HF and improve the efficiency of care are needed.

Causes and Stages of HF

If the projections for accelerating HF costs are to be avoided, attention to the different causes of HF and their risk factors is warranted.
HF is a clinical syndrome that results from a variety of cardiac disorders

  1. idiopathic dilated cardiomyopathy
  2. cardiac valvular disease
  3. pericarditis or pericardial effusion
  4. ischemic heart disease
  5. primary or secondary hypertension
  6. renovascular disease
  7. advanced liver disease with decreased venous return
  8. pulmonary hypertension
  9. prolonged hypoalbuminemia with generalized interstitial edema
  10. diabetic nephropathy
  11. heart muscle infiltration disease such as primary or secondary amyloidosis
  12. myocarditis
  13. rhythm disorders
  14. congenital diseases
  15. accidental trauma (war, chest trauma)
  16. toxicities (methamphetamine, cocaine, heavy metals, chemotherapy)

HF generally causes symptoms:

  • shortness of breath
  • fatigue
  • swelling (edema)
  • inability to lie flat (orthopnea, paroxysmal nocturnal dyspnea)
  • possibly cough, wheezing

In the Western world the predominant causes of HF are:

  • coronary artery disease
  • valvular disease
  • hypertension
  • viral, alcohol, methamphetamine, or other drug toxicity cardiomyopathy
  • stress (catechol toxicity, takotsubo “broken heart” cardiomyopathy)
  • atrial fibrillation/rapid heart rates
  • thyroid disease

In 2001, the American College of Cardiology and AHA practice guidelines for chronic HF promoted a classification system that encompasses 4 stages of HF.

  • Stage A: Patients at high risk for developing HF in the future but no functional or structural heart disorder.
  • Stage B: a structural heart disorder but no symptoms.
  • Stage C: previous or current symptoms of heart failure, manageable with medical treatment.
  • Stage D: advanced disease requiring hospital-based support, a heart transplant or palliative care.

Stages A and B are considered precursors to the clinical HF and are meant

  1. to alert healthcare providers to known risk factors for HF and
  2. the available therapies aimed at mitigating disease progression.

Stage A patients have risk factors for HF such as hypertension, atherosclerotic heart disease, and/or diabetes mellitus.

Patients with stage B are asymptomatic patients who have developed structural heart disease from a variety of potential insults to the heart muscle, such as myocardial infarction or valvular heart disease.

Stages C and D represent the symptomatic phases of HF, with stage C manageable and stage D failing medical management, resulting in marked symptoms at rest or with minimal activity despite optimal medical therapy.

Therapeutic interventions include:

  • dietary salt restriction and diuretics
  • medications known to prolong survival (beta blockers, ACE inhibitors, aldosterone inhibitors)
  • implantable devices such as pacemakers and defibrillators
  • stoppage of tobacco, toxic drugs, excess alcohol

Classic demographic risk factors for the development of HF include

  • older age, male gender, ethnicity, and low socioeconomic status.
  • comorbid disease states contribute to the development of HF
    • Ischemic heart disease
    • Hypertension

Diabetes mellitus, insulin resistance, and obesity are also linked to HF development,

  • with diabetes mellitus increasing the risk of HF by ≈2-fold in men and up to 5-fold in women.

Smoking remains the single largest preventable cause of disease and premature death in the United States.

Translation of Scientific Evidence into Clinical Practice

In multiple studies, failures to apply evidence-based management strategies have been blamed for avoidable hospitalizations and/or deaths from HF.

Improved implementation of guidelines can delay, mitigate or prevent the onset of HF, and improve survival. Performance improvement programs have facilitated the implementation of evidence-based therapies in both hospital and ambulatory care settings.

Care transition programs by hospitals have become more widespread

  • in an effort to reduce avoidable readmissions.

The interventions used by these programs include

  • initiating discharge planning early in the course of hospital care,
  • actively involving patients and families or caregivers in the plan of care,
  • providing new processes and systems that ensure patient understanding of the plan of care before discharge from the hospital, and
  • improving quality of care by continually monitoring adherence to national evidence-based guidelines with appropriate adaptations for individual differences in needs and responses.

In multiple studies, adherence to the HF plan of care was associated with reduced all-cause mortality as well as reduced HF hospitalization.

It is anticipated that care transition programs may increase appropriate admissions while decreasing inappropriate admissions.

This would have a potentially beneficial impact on the 30-day all-cause readmission rate that has become

  • a focus of public reporting in pay for performance.

More than a quarter of Medicare spending occurs in the last year of life, and

  • the costs of care during the last 6 months for a patient with HF have been increasing (11% from 2000 to 2007).

Improving end-of-life care cost effectiveness for patients with stage D HF will require ongoing

  • improved prediction of outcomes
  • integration of multiple aspects of care
  • educated examination of alternatives and priorities
  • improved decision-making
  • unbiased allocation of resources and coverage for this process rather than unbalanced coverage favoring catastrophic care

Palliative care, including formal hospice care, is increasingly advocated for patients with advanced HF.
Offering palliative care to patients with HF may lead to

  • more conservative (and less expensive) treatment
  • consistent with many patients’ goals for care

The use of hospice services is growing among the HF population,

  • HF now the second most common reason for entering hospice
  • but hospice declaration may trigger automated restrictions on care that can be an impediment to electing hospice

A recent study of patients in hospice care found that

  • patients with HF were more likely than patients with cancer to use hospice services longer than 6 months or to be discharged from hospice care alive.

Highlights:

1. Increasing incidence and costs of care for heart failure projected from 2012 to 2030

2. Direct costs rising at greater rate than indirect costs

3. American Heart Association has defined 4 stages of HF, the last 2 of which are advanced

4. Stages C & D are clinically overt and contribute to rehospitalization

5. Stage D accounts for a significant use of end-of-life hospice care

6. There are evidence-based guidelines for the provision of coordinated care that are not widely applied at present

Basic questions raised:

1. If stages A & B are under the radar, then what measures can best trigger the use of evidence-based guidelines for care?
2. Why are evidence-based guidelines commonly not deployed?

  • Flaws in the “evidence” due to bias, design errors, or limited ability to extrapolate to the patients it should address
  • Delays in education, convincing of caretakers, and deployment
  • Inadequate resources
  • Financial or other disincentives

The arguments for introducing coordinated care and for evidence-based guidelines are strong.

Arguments AGAINST slavish imposition of evidence-based medicine include genetic individuality (what is best on average is not necessarily best for each genetically and behaviorally distinct individual). Strict adherence to evidence-based guidelines also stifles innovative exploration. Nonetheless, deviations from evidence-based plans should be cautious, well-documented, and well-informed, not due to misaligned incentives, ignorance, carelessness, or error.

The question of when and how to intervene most cost effectively is unanswered. If some patients are salt-sensitive as a contribution to the prevalence of hypertension and heart failure, should EVERYONE be salt restricted or should there be a more concerted effort to define who is salt sensitive? What if it proved more cost-effective to restrict salt intake for everyone, even though many might be fine with high sodium intake, and some might even benefit from or require high sodium intake? Is it reasonable to impose costs, hurdles, even possible harm on some as a cheaper way to achieve “greater good”?
These issues are highly relevant to the proposed emphasis on holistic solutions.

2. A Case Study from the Genetic Connections — In The Family: Heart Disease: Seeking Clues to Heart Disease in the DNA of an Unlucky Family

By Gina Kolata, 2013.05.13, New York Times

Scientists are studying the genetic makeup of the Del Sontro family for

  • telltale mutations or aberrations in the DNA.

Robin Ashwood, one of Mr. Del Sontro’s sisters, found out she had extensive heart disease even though her electrocardiogram was normal. Six of her seven siblings also have heart disease, despite not having any of the traditional risk factors. Then, after a sister, just 47 years old, found out she had advanced heart disease, Mr. Del Sontro, then 43, went to a cardiologist. An X-ray of his arteries revealed the truth. Like his grandfather, his mother, his four brothers, and his two sisters, he had heart disease.

Now he and his extended family have joined an extraordinary federal research project that is using genetic sequencing to find factors that increase the risk of heart disease beyond the usual suspects — high cholesterol, high blood pressure, smoking and diabetes. “We don’t know yet how many pathways there are to heart disease,” said Dr. Leslie Biesecker, who directs the study Mr. Del Sontro joined. “That’s the power of genetics. To try and dissect that.”

“I had bought the dream: if you just do the right things and eat the right things, you will be O.K.,” said Mr. Del Sontro, whose cholesterol and blood pressure are reassuringly low.

3. Arterial Stiffness and Cardiovascular Events : The Framingham Heart Study

GF Mitchell, Shih-Jen Hwang, RS Vasan, MG Larson.

Circulation. 2010;121:505-511.  http://circ.ahajournals.org/content/121/4/505
http://dx.doi.org/10.1161/CIRCULATIONAHA.109.886655

Various measures of arterial stiffness and wave reflection have been proposed as cardiovascular risk markers.
Prior studies have not assessed relations of a comprehensive panel of stiffness measures to prognosis.
First-onset major cardiovascular disease events in relation to arterial stiffness

  • pulse wave velocity (PWV)
  • wave reflection (augmentation index, carotid–brachial pressure amplification)
  • central pulse pressure

were analyzed  in 2232 participants (mean age, 63 years; 58% women) in the Framingham Heart Study by a proportional hazards model. During median follow-up of 7.8 (range, 0.2 to 8.9) years,

  • 151 of 2232 participants (6.8%) experienced an event.

In multivariable models adjusted for

  • age
  • sex
  • systolic blood pressure
  • use of antihypertensive therapy
  • total and high-density lipoprotein cholesterol concentrations
  • smoking
  • presence of diabetes mellitus

higher aortic PWV was associated with a 48% increase in cardiovascular disease risk (95% confidence interval, 1.16 to 1.91 per SD; P=0.002).

After PWV was added to a standard risk factor model, integrated discrimination improvement was 0.7% (95% confidence interval, 0.05% to 1.3%; P=0.05).

In contrast,

  • augmentation index,
  • central pulse pressure, and
  • pulse pressure amplification

were not related to cardiovascular disease outcomes in multivariable models.

Higher aortic stiffness assessed by PWV

  • is associated with increased risk for a first cardiovascular event.

Aortic PWV improves risk prediction when added to standard risk factors and may represent

  • a valuable biomarker of cardiovascular disease risk
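
For readers who want to see the shape of such an analysis, here is a minimal proportional-hazards sketch on simulated data. This is not the Framingham dataset; the covariates and the planted effect (ln 1.48 ≈ 0.39 per SD) are invented to echo the reported result:

```python
# Cox proportional-hazards sketch: standardized aortic PWV vs. event times.
# Simulated data; requires numpy, pandas, and lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2232
pwv = rng.normal(10, 3, n)                     # aortic PWV, m/s
age = rng.normal(63, 10, n)
pwv_sd = (pwv - pwv.mean()) / pwv.std()        # per-SD scaling
log_hr = 0.39 * pwv_sd + 0.03 * (age - 63)     # planted log-hazard effects
event_time = rng.exponential(40 * np.exp(-log_hr))
censor_time = rng.uniform(0.2, 8.9, n)         # follow-up ends by 8.9 years

df = pd.DataFrame({
    "pwv_sd": pwv_sd,
    "age": age,
    "time": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_["pwv_sd"])            # recovers ~1.4-1.5 per SD of PWV
```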

We shall here visit a recent article by Justin D. Pearlman and Aviva Lev-Ari, PhD, RN, on

Pros and Cons of Drug Stabilizers for Arterial  Elasticity as an Alternative or Adjunct to Diuretics and Vasodilators in the Management of Hypertension, titled

4. Hypertension and Vascular Compliance: 2013 Thought Frontier – An Arterial Elasticity Focus

http://pharmaceuticalintelligence.com/2013/05/11/arterial-elasticity-in-quest-for-a-drug-stabilizer-isolated-systolic-hypertension-caused-by-arterial-stiffening-ineffectively-treated-by-vasodilatation-antihypertensives/

Speaking at the 2013 International Conference on Prehypertension and Cardiometabolic Syndrome, meeting cochair Dr Reuven Zimlichman (Tel Aviv University, Israel) argued that there is a growing number of patients for whom the conventional methods are inappropriate for

  • the definitions of hypertension
  • the risk-factor tables used to guide treatment

Most antihypertensives today work by producing vasodilation or decreasing blood volume, which may be

  • ineffective in patients whose hypertension is not caused by average arterial diameter or circulating volume; treating these as targets of therapy may even promote decompensation

In the future, he predicts, “we will have to start looking for a totally different medication that will aim to

  • improve or at least to stabilize arterial elasticity: medication that might affect factors that determine the stiffness of the arteries, like collagen, like fibroblasts.

Those are not the aim of any group of antihypertensive medications today.”

Zimlichman believes existing databases could be used to develop algorithms that focus on

  • inelasticity as a mechanism of hypertensive disease

He also points out that

  • ambulatory blood-pressure-monitoring devices can measure elasticity

http://www.theheart.org/article/1502067.do

A related article was published on the relationship between arterial stiffening and primary hypertension.

Arterial stiffening provides sufficient explanation for primary hypertension.

KH Pettersen, SM Bugenhagen, J Nauman, DA Beard, SW Omholt.

By use of empirically well-constrained computer models describing the coupled function of the baroreceptor reflex and mechanics of the circulatory system, we demonstrate quantitatively that

  • arterial stiffening seems sufficient to explain the age-related emergence of hypertension.

Specifically, the models reproduce

  • the empirically observed chronic changes in pulse pressure with age, and
  • the impaired capacity of hypertensive individuals to regulate short-term changes in blood pressure.

The results suggest that a major target for treating chronic hypertension in the elderly  may include

  • the reestablishment of a proper baroreflex response.

http://arxiv.org/abs/1305.0727v2
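
As a toy illustration of that claim, a two-element Windkessel model (arterial compliance C, peripheral resistance R) shows that with inflow, heart rate, and resistance held fixed, halving compliance roughly doubles pulse pressure. The parameter values below are illustrative only, not those of the cited models:

```python
# Two-element Windkessel: dP/dt = (Q_in(t) - P/R) / C.
# Units: P mmHg, Q mL/s, R mmHg*s/mL, C mL/mmHg. Values are illustrative.

def pulse_pressure(C, R=1.0, hr=75, stroke_volume=70.0, beats=30, dt=1e-3):
    period = 60.0 / hr
    t_ej = 0.3 * period                       # systolic ejection window
    q_peak = 2 * stroke_volume / t_ej         # triangular inflow pulse
    P, trace = 90.0, []
    for step in range(int(beats * period / dt)):
        t = (step * dt) % period
        q_in = q_peak * (1 - t / t_ej) if t < t_ej else 0.0
        P += dt * (q_in - P / R) / C          # Euler integration step
        if step * dt > (beats - 1) * period:  # record the final beat only
            trace.append(P)
    return max(trace) - min(trace)

print(f"compliant aorta: PP = {pulse_pressure(C=1.5):.0f} mmHg")
print(f"stiffened aorta: PP = {pulse_pressure(C=0.75):.0f} mmHg")
```

Pulse pressure scales roughly as stroke volume divided by compliance, which is the intuition behind isolated systolic hypertension in stiffened arteries.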

5. Clinical Decision Support Systems: Realtime Clinical Expert Support: Biomarkers of Cardiovascular Disease — Molecular Basis and Practical Considerations

RS Vasan.  Circulation. 2006;113:2335-2362

http://dx.doi.org/10.1161/CIRCULATIONAHA.104.482570

http://circ.ahajournals.org/content/113/19/2335

Substantial data indicate that CVD is a life course disease that begins with the evolution of risk factors that contribute to

  • subclinical atherosclerosis.

Subclinical disease culminates in overt CVD. The onset of CVD itself portends an adverse prognosis with greater

  • risks of recurrent adverse cardiovascular events, morbidity, and mortality.

Clinical assessment alone has limitations. Clinicians have used additional tools to aid clinical assessment and to enhance their ability to identify the “vulnerable” patient at risk for CVD, as suggested by a recent National Institutes of Health (NIH) panel.

Biomarkers are one such tool to better identify high-risk individuals, to diagnose disease conditions promptly, and to guide prognosis and treatment.

Biological marker (biomarker): A laboratory test value that is objectively measured and evaluated as an indicator of

  1. normal biological processes,
  2. pathogenic processes, or
  3. pharmacological responses to a therapeutic intervention.

Type 0 biomarker: A marker of the natural history of a disease

  • Type 0 correlates longitudinally with known clinical indices/predicts outcomes.

Type I biomarker: A marker that captures the effects of a therapeutic intervention

  • Type I assesses an aspect of treatment mechanism of action.

Type 2 biomarker (surrogate end point):  A marker intended to predict outcomes on the basis of

  • epidemiologic
  • therapeutic
  • pathophysiologic or
  • other scientific evidence.

With biomarkers monitoring disease progression or response to therapy, the patient can serve as his or her own control (follow-up values may be compared to baseline values).

Costs may be less important for prognostic markers when testing is largely restricted to people with disease (total cost = cost per person × number to be tested, plus downstream costs). Some biomarkers (e.g., an exercise stress test) may be used for both diagnostic and prognostic purposes.
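
A worked instance of that parenthetical formula, with invented figures, makes the diagnostic-versus-prognostic cost asymmetry concrete:

```python
# total cost = cost per person x number tested + downstream costs.
# All figures are invented placeholders.
cost_per_test = 40.0
downstream = 5_000_000.0   # confirmatory workups triggered by positive results

n_screened = 1_000_000     # diagnostic use: screen a whole population
n_patients = 20_000        # prognostic use: only people with the disease

diagnostic_total = cost_per_test * n_screened + downstream   # $45.0M
prognostic_total = cost_per_test * n_patients + downstream   # $5.8M
print(f"diagnostic: ${diagnostic_total/1e6:.1f}M, prognostic: ${prognostic_total/1e6:.1f}M")
```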

Generally there are cost differences in establishing a prognostic value versus diagnostic value of a biomarker:

  • prognostic utility typically requires a large sample and a prospective design, whereas
  • diagnostic value often can be determined with a smaller sample in a cross-sectional design

Regardless of the intended use, it is important to remember that biomarkers that do not change disease management

  • cannot affect patient outcome and therefore
  • are unlikely to be cost-effective (judged in terms of quality-adjusted life-years gained).

Typically, for a biomarker to change management, it is important to have evidence that risk reduction strategies should vary with biomarker levels, and/or biomarker-guided management achieves advantages over a management scheme that ignores the biomarker levels.

Typically it means that biomarker levels should be modifiable by therapy.

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that

  • provides empirical medical reference and
  • suggests quantitative diagnostics options.

The current design of the Electronic Medical Record (EMR) is a
linear presentation of portions of the record

  • by services
  • by diagnostic method, and
  • by date

to cite examples.

This allows perusal through a graphical user interface (GUI) that

  • partitions the information or necessary reports on a workstation, entered by keying to icons, and
  • presents decision support.

Examples of data partitions include:

  • history
  • medications
  • laboratory reports
  • imaging
  • EKGs

The introduction of a DASHBOARD adds presentation of

  • drug reactions
  • allergies
  • primary and secondary diagnoses, and
  • critical information

about any patient whose record the caregiver needs to access.

A basic issue for such a tool is what information is presented and how it is displayed.

A determinant of the success of this endeavor is whether it

  • facilitates workflow,
  • facilitates the decision-making process, and
  • reduces medical error.

Work is in progress to extend these capabilities with model datasets and sufficient data, on the assumption that computer extraction of data from disparate sources will, in the long run, further improve this process.

For instance, there is synergistic value in finding coincidence of:

  • ST shift on EKG
  • elevated cardiac biomarker (troponin)
  • in the absence of substantially reduced renal function.
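
As a sketch of how such a conjunction could be encoded as a decision-support rule, the fragment below flags the coincidence just described. The field names and cutoffs are illustrative assumptions, not the system's actual logic:

```python
def flag_acute_coronary_pattern(st_shift: bool, troponin_ng_l: float,
                                egfr_ml_min: float) -> bool:
    """Flag the synergistic coincidence described above: ST shift on EKG
    plus an elevated cardiac troponin, in the absence of substantially
    reduced renal function (which can raise troponin nonspecifically).
    The cutoffs are illustrative placeholders, not validated values."""
    troponin_elevated = troponin_ng_l > 14.0   # assumed 99th-percentile cutoff
    renal_function_ok = egfr_ml_min > 30.0     # assumed eGFR threshold
    return st_shift and troponin_elevated and renal_function_ok

print(flag_acute_coronary_pattern(True, 52.0, 78.0))   # True: all three coincide
print(flag_acute_coronary_pattern(True, 52.0, 18.0))   # False: renal failure confounds
```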

Similarly, the conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates

  • morphologic review of a peripheral smear
  • descriptive statistics

While the hemogram has undergone progressive modification of its measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the

  • production
  • release
  • or suppression

of the formed elements from the blood-forming organ into the circulation. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of

  • size
  • density, and
  • concentration

resulting in many characteristic features of classification. In the diagnosis of hematological disorders, the key findings include:

  • proliferation of marrow precursors,
  • domination of a cell line, and
  • suppression of hematopoiesis.

Other dimensions are created by considering

  • the maturity and size of the circulating cells.

The application of rules-based, automated problem solving should provide a valid approach to

  • the classification and interpretation of the data used to determine a knowledge-based clinical opinion.

The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been part of traditional clinical problem solving.

As the complexity of statistical models has increased

  • the dependencies have become less clear to the individual.

Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can substantially interpret the data at hand and

  • convert it in real time to a “knowledge-based opinion”

could improve clinical decision-making by incorporating into the model

  • multiple complex clinical features, as well as onset and duration.

An example of a difficult area for clinical problem solving is found in the diagnosis of Systemic Inflammatory Response Syndrome (SIRS) and associated sepsis. SIRS is a costly diagnosis in hospitalized patients, and failure to diagnose it in a timely manner increases both the financial and the safety hazard. The early diagnosis of SIRS/sepsis is made by the clinician's application of defined criteria (conventionally, two or more of the following):

  • temperature
  • heart rate
  • respiratory rate, and
  • WBC count

The application of those clinical criteria, however, defines the condition after it has developed, leaving unanswered the hope for

  • a reliable method for earlier diagnosis of SIRS.
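
For reference, the consensus rule (two or more of the four criteria) can be written down directly; a minimal sketch using the widely cited thresholds, which, as noted above, confirms SIRS only once it has developed:

```python
def sirs_criteria_met(temp_c: float, heart_rate: float, resp_rate: float,
                      wbc_k_per_ul: float, bands_pct: float = 0.0) -> bool:
    """Conventional SIRS screen: two or more of the four consensus criteria."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                      # temperature
        heart_rate > 90,                                     # heart rate
        resp_rate > 20,                                      # respiratory rate
        wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0            # WBC count
            or bands_pct > 10.0,                             # or left shift
    ]
    return sum(criteria) >= 2

print(sirs_criteria_met(38.6, 104, 24, 13.5))  # True: meets all four criteria
```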

The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including

  • transthyretin
  • C-reactive protein, and
  • procalcitonin,

together with hemodynamic measures such as mean arterial pressure.

Immature granulocyte (IG) measurement has been proposed as a

  • readily available indicator of the presence of granulocyte precursors (left shift).

The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, provides

  • promising support for early, accurate decision making.

Such a system aims to reduce medical error by utilizing

  • the conjoined syndromic features of disparate data elements.

How we frame our expectations is important. It determines

  • the data we collect to examine the process.

In the absence of data to support an assumed benefit, there is no proof of validity, whatever the cost.

Potential arenas of benefit include:

  • hospital operations
  • nonhospital laboratory studies
  • companies in the diagnostic business
  • planners of health systems

The problem was stated by LL Weed in “Idols of the Mind” (Dec 13, 2006):
“a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require.” Hospital information technology (HIT) use has been focused on information retrieval, leaving

  • the unaided mind burdened with information processing.

We deal with problems in the interpretation of data presented to the physician, and how the situation could be improved through better

  • design of the software that presents data.

The computer architecture that the physician uses to view the results is more often than not presented

  • as the designer would prefer, and not as the end-user would like.

In order to optimize the interface for the physician, the system could have a “front-to-back” design, with the call-up for any patient organized around two principles:

  • a dashboard design that presents, in an easily accessible manner, the crucial information that the physician would likely act on, and
  • each item used being closely related to a corresponding criterion needed for a decision.

Feature Extraction.

Eugene Rypka contributed greatly to clarifying the extraction of features in a series of articles, which

  • set the groundwork for the methods used today in clinical microbiology.

The method he describes is termed S-clustering, and

  • will have a significant bearing on how we can view laboratory data.

He describes S-clustering as extracting features from endogenous data that

  • amplify or maximize structural information to create distinctive classes.

The method classifies by taking the number of features with sufficient variety to generate maps.

The mapping is done by

  • a truth table N×N of messages and choices, in which
  • each variable is scaled to assign values for each message choice.

For example, the message for an antibody titer would be converted from 0, +, ++, +++ to 0, 1, 2, 3.

Although there may be a large number of measured values, this compression reduces the variety, even though the compressed scale may carry less information.
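
A toy sketch of this ordinal compression and the resulting truth table (the scale follows the titer example above; the two variables are invented for illustration, and Rypka's actual S-clustering procedure is more elaborate):

```python
import itertools

# Ordinal compression: map semi-quantitative messages onto a small integer scale.
TITER_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}

def encode(messages):
    """Convert a tuple of semi-quantitative results into an integer vector."""
    return tuple(TITER_SCALE[m] for m in messages)

# Truth table of all message combinations for two (hypothetical) variables;
# each row would then be assigned to a diagnostic class.
variables = ["antibody_titer_A", "antibody_titer_B"]   # invented names
table = list(itertools.product(TITER_SCALE, repeat=len(variables)))
for row in table[:5]:
    print(row, "->", encode(row))
```

The compression step is what reduces variety: sixteen possible message pairs remain, regardless of how finely the underlying assay was reported.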

The main issue is

  • how a combination of variables falls into a table to convey meaningful information.

We are concerned with

  • accurate assignment into uniquely variable groups by information in test relationships.

One determines the effectiveness of each variable by its contribution to information gain in the system. The reference or null set is the class having no information.  Uncertainty in assigning to a classification can be countered by providing sufficient information.

The possibility of realizing a good model for approximating the effects of factors supported by the data used for inference owes much to

  • the discovery of the Kullback-Leibler distance, or “information”, and
  • Akaike's finding of a simple relationship between K-L information and Fisher's maximized log-likelihood function.

In the last 60 years, entropy (comparable to the entropy of physics) has been

  • developed for information, noise, and signal processing by Shannon, Kullback, and others, and
  • integrated with modern statistics,
  • as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and work by Coifman.

Akaike pioneered the recognition that the choice of model influences results in a measurable manner. In particular, a larger number of variables promotes further explanation of variance, so a model selection criterion that penalizes for the number of variables is important when success is measured by explanation of variance.
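
In standard notation (a sketch of the textbook relationship, not a formula taken from the cited work), the K-L information of a model g with parameters θ relative to the true distribution f, and Akaike's criterion derived from it, are:

```latex
% Kullback-Leibler information of model g_theta relative to the truth f
D_{\mathrm{KL}}(f \,\|\, g_{\theta}) = \int f(x)\,\log\frac{f(x)}{g_{\theta}(x)}\,dx

% Akaike: the maximized log-likelihood, corrected by the number of estimated
% parameters k, estimates the expected relative K-L information, giving
\mathrm{AIC} = -2\,\log L(\hat{\theta}) + 2k
```

The 2k term is exactly the penalty for the number of variables described above: added parameters always increase the maximized likelihood, so candidate models are compared on fit minus complexity.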

Gil David et al. introduced AUTOMATED processing of the data available to the ordering physician; one can anticipate an enormous impact on the diagnosis and treatment of perhaps half of the top 20 most common causes of hospital admission, which carry a high cost and morbidity.

For example:

  1. anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome);
  2. pneumonia;
  3. systemic inflammatory response syndrome (SIRS) with or without bacteremia;
  4. multiple organ failure and hemodynamic shock;
  5. electrolyte/acid-base balance disorders;
  6. acute and chronic liver disease;
  7. acute and chronic renal disease;
  8. diabetes mellitus;
  9. protein-energy malnutrition;
  10. acute respiratory distress of the newborn;
  11. acute coronary syndrome;
  12. congestive heart failure;
  13. hypertension;
  14. disordered bone mineral metabolism;
  15. hemostatic disorders;
  16. leukemia and lymphoma;
  17. malabsorption syndromes;
  18. cancer(s) [breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid];
  19. endocrine disorders; and
  20. prenatal and perinatal diseases.

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of
myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH (Chairman). Prealbumin in Nutritional Care Consensus Group.

Measurement of visceral protein status in assessing protein and energy
malnutrition: standard of care. Nutrition 1995; 11:169-171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction:
integration of serum markers and clinical descriptors using information theory.
Yale J Biol Med 1999; 72: 5-13.

Kaplan LA, Chapman JF, Bock JL, Santa Maria E, Clejan S, Huddleston DJ, Reed RG, Bernstein LH, Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project. Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method
(GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory
data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with
chest pain. Yale J Biol Med 2002; 75, pp. 183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering, 33(7):2170–2174, July 1994.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression.
C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

Realtime Clinical Expert Support and validation System

We have developed a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options. The primary purpose is to gather medical information, generate metrics, analyze them in realtime, and provide a differential diagnosis, meeting the highest standard of accuracy. The system builds a unique characterization of each patient and provides a list of other patients who share this unique profile, thereby

  • utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community.
  • The main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
  • in which anomalous subprofiles are extracted and compared to potentially relevant cases.

As the model grows and its knowledge database is extended, the diagnostic and prognostic assessments become more accurate and precise.
We anticipate that implementing this diagnostic amplifier would result in

  • higher physician productivity at a time of great human resource limitations,
  • safer prescribing practices,
  • rapid identification of unusual patients,
  • better assignment of patients to observation, inpatient beds,
    intensive care, or referral to clinic,
  • shortened length of patients' ICU and bed days.

The main benefit is a

  1. real time assessment as well as
  2. diagnostic options based on comparable cases,
  3. flags for risk and potential problems

as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.

Graphical presentation of patient status

The patient was treated for SIRS and the blood tests were repeated during the following week. The full combined record of our system’s assessment of the patient, as derived from the further hematology tests, is illustrated below. The yellow line shows the diagnosis that corresponds to the first blood test (as also shown in the image above). The red line shows the next diagnosis that was performed a week later.

Progression changes in patient ICU stay with SIRS

The MISSIVE(c) system, by Justin Pearlman, is an alternative approach that includes not only automated data retrieval and reformatting of data for decision support, but also an integrated set of tools to speed up analysis, structured for quality and error reduction, coupled to facilitated report generation, incorporation of just-in-time knowledge and group expertise, standards of care, evidence-based planning, and both physician and patient instruction.

See also in Pharmaceutical Intelligence:

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief. January 2010. www.chrt.org

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.


W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.

S De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels, and G Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030-1048.

Landauer, T. K., Ross, B. H., & Didner, R. S. (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories.

Lewandowsky, S. (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

Sheila Nirenberg/Cornell and Chethan Pandarinath/Stanford, “Retinal prosthetic strategy with the capacity to restore normal vision,” Proceedings of the National Academy of Sciences.

Other related articles published in this Open Access Online Scientific Journal include the following:

http://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/

http://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-
have-travelled-and-where-is-journeys-end/

http://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-
informatics-to-healthcare-is-more-than-currently-estimated/

http://pharmaceuticalintelligence.com/2013/05/04/cardiovascular-diseases-decision-support-systems-for-disease-management-decision-making/

http://pharmaceuticalintelligence.com/2012/08/13/demonstration-of-a-diagnostic-clinical-
laboratory-neural-network-agent-applied-to-three-laboratory-data-conditioning-problems/

http://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/

http://pharmaceuticalintelligence.com/2013/02/13/cracking-the-code-of-human-life-
the-birth-of-bioinformatics-and-computational-genomics/

http://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-
atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-
and-energy-homeostasis/

http://pharmaceuticalintelligence.com/2012/12/10/identification-of-biomarkers-that-
are-relatedto-the-actin-cytoskeleton/

http://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-
for-comparison-and-classification-of-predictor-variables/

http://pharmaceuticalintelligence.com/2012/08/02/diagnostic-evaluation-of-sirs-by-
immature-granulocytes/

http://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-
of-sirs-sepsis-septic-shock/

http://pharmaceuticalintelligence.com/2012/08/12/1815/

http://pharmaceuticalintelligence.com/2012/08/15/1946/

http://pharmaceuticalintelligence.com/2013/05/13/vinod-khosla-20-doctor-included-speculations-
musings-of-a-technology-optimist-or-technology-will-replace-80-of-what-doctors-do/

http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN
Aviva Lev-Ari, PhD, RN 2/28/2013
http://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-
based-pharmacological-therapy-including-thymosin/

FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology
Aviva Lev-Ari, PhD, RN 1/28/2013
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-
cardiovascular-imaging-technology/

PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not
Platelet Reactivity    Aviva Lev-Ari, PhD, RN 1/10/2013
http://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-
associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX,
and Clinical SYNTAX   Aviva Lev-Ari, PhD, RN 1/3/2013
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-
established-risk-scores-timi-grace-syntax-and-clinical-syntax/

Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by
Serum Protein Profiles    Aviva Lev-Ari, PhD, RN 12/29/2012
http://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-
patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles/

New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia
Aviva Lev-Ari, PhD, RN 8/27/2012
http://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-
fractional-flow-reserve-ffrct-for-tagging-ischemia/


Read Full Post »

The Binding of Oligonucleotides in DNA and 3-D Lattice Structures

Curator: Larry H Bernstein, MD, FCAP

 

This article is a renewal of a previous discussion on the role of genomics in discovery of therapeutic targets which focused on:

  •  key drivers of cellular proliferation,
  •  stepwise mutational changes coinciding with cancer progression, and
  •  potential therapeutic targets for reversal of the process.

“The Birth of BioInformatics & Computational Genomics” lays out the manifold multivariate systems-analytical tools that have moved the science forward to a ground that ensures clinical application. There is a web-like connectivity between interconnected scientific discoveries, as significant findings have led to novel hypotheses and have driven our understanding of biological and medical processes at an exponential pace, owing to insights into the chemical structure of DNA,

  • the basic building blocks of DNA  and proteins,
  • of nucleotide and protein-protein interactions,
  • protein folding, allostericity, genomic structure,
  • DNA replication,
  • nuclear polyribosome interaction, and
  • metabolic control.

In addition, the emergence of methods for

  • copying,
  • removal and insertion, and
  • improvements in structural analysis as well as
  • developments in applied mathematics have transformed the research framework.

Three-Dimensional Folding and Functional Organization Principles of the Drosophila Genome. Sexton T, Yaffe E, Kenigsberg E, Bantignies F, …, Cavalli G. Institut de Génétique Humaine and Montpellier GenomiX (France), and Weizmann Institute (Israel). Cell 2012; 148(3): 458-472. http://dx.doi.org/10.1016/j.cell.2012.01.010 http://www.ncbi.nlm.nih.gov/pubmed/22265598

Chromosomes are the physical realization of genetic information and thus form the basis for its

  •   readout and propagation.

Here we present a high-resolution chromosomal contact map derived from a modified genome-wide chromosome conformation capture approach applied to Drosophila embryonic nuclei. The entire genome is linearly partitioned into well-demarcated physical domains that overlap extensively with

  •   active and repressive epigenetic marks.

Chromosomal contacts are hierarchically organized between domains. Global modeling of contact density and clustering of domains show that

  •   inactive domains are condensed and confined to their chromosomal territories, whereas
  •  active domains reach out of the territory to form remote intra- and interchromosomal contacts.
  •  we systematically identify specific long-range intrachromosomal contacts between Polycomb-repressed domains

Together, these observations allow for quantitative prediction of the Drosophila chromosomal contact map, laying the foundation for detailed studies of

  • chromosome structure and function in a genetically tractable system.

“Mr. President; The Genome is Fractal!” Eric Lander (Science Adviser to the President and Director of the Broad Institute) et al. delivered the message on the cover of Science Magazine (Oct. 9, 2009), and it generated interest at a September meeting of the International HoloGenomics Society.

First, it may seem trivial to rectify the statement in “About the cover” of Science Magazine by AAAS. The statement

  • “the Hilbert curve is a one-dimensional fractal trajectory” needs mathematical clarification.

While the paper itself does not make this statement, the new Editorship of the AAAS Magazine might be even more advanced if the previous Editorship had not rejected (without review)

  • a manuscript by 20+ founders of the (formerly) International PostGenetics Society in December 2006.

Second, it may not be sufficiently clear for the reader that the reasonable requirement for the

  • DNA polymerase to crawl along a “knot-free” (or “low knot”) structure does not need fractals.

A “knot-free” structure could be spooled by an ordinary “knitting globule” (such that the DNA polymerase does not bump into a “knot” when duplicating the strand; just like someone knitting can go through the entire thread without encountering an annoying knot):

  • Just to be “knot-free” you don’t need fractals.

Note, however, that the “strand” can be accessed only at its beginning – it is impossible, for example,

  • to pluck a segment from deep inside the “globulus”.

This is where certain fractals provide a major advantage – that could be the “Eureka” moment. For instance, the mentioned Hilbert-curve is not only “knot free” – but provides an easy access to

  • “linearly remote” segments of the strand.

If the Hilbert curve starts from the lower right corner and ends at the lower left corner, for instance,

  • the path shows the very easy access to what would be the mid-point if the Hilbert-curve
  • is measured by the Euclidean distance along the zig-zagged path.

Likewise, even the path from the beginning of the Hilbert-curve is about equally easy to access – easier than reaching from the origin a point that is about 2/3 down the path. The Hilbert-curve provides easy access between two points within the “spooled thread”; a point at about 1/5 of the overall length and one at about 3/5 are also in a “close neighborhood”. This marvelous fractal structure is illustrated by the 3D rendering of the Hilbert-curve. Once you observe such a fractal structure,

  • you’ll never again think of a chromosome as a “brillo mess”, would you?
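
To make the locality claim concrete, the classic index-to-coordinates construction for the 2-D Hilbert curve is sketched below (a standard bit-twiddling algorithm, shown for illustration; it is not drawn from the cited papers). Consecutive positions along the curve map to adjacent grid cells, which is the “easy access” property described above:

```python
def d2xy(order: int, d: int):
    """Map distance d along a Hilbert curve filling a 2**order x 2**order
    grid to (x, y) cell coordinates (classic bit-twiddling construction)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive curve positions land in adjacent cells of the 4x4 grid:
print([d2xy(2, d) for d in range(16)])
```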

It will dawn on you that the genome is orders of magnitude more finessed than we ever thought. Those embarking on a somewhat complex review of some historical aspects of the power of fractals may wish to consult the oeuvre of Mandelbrot (also, to celebrate his 85th birthday). For the more sophisticated readers, even the fairly simple Hilbert-curve (a representative of the Peano class) becomes even more stunningly brilliant than just some “see-through density”.

Those who are familiar with the classic “Traveling Salesman Problem” know that finding “the shortest path along which every one of n given locations can be visited once, and only once” requires fairly sophisticated algorithms (and a tremendous amount of computation if n > 10, or much more). Some readers will be amazed, therefore, that for n = 9 the underlying Hilbert-curve helps to provide an empirical solution (refer to pellionisz@junkdna.com).

Briefly, the significance of the above realization, that the (recursive) fractal Hilbert curve is intimately connected to the (recursive) solution of the Traveling Salesman Problem, a core concept of artificial neural networks, can be summarized as follows. The accomplished physicist John Hopfield (already a member of the National Academy of Sciences) aroused great excitement in 1982 with his (recursive) design of artificial neural networks and learning algorithms that were able to find solutions to combinatorial problems such as the Traveling Salesman Problem. (Book review by Clark Jeffries, 1991; see J Anderson, E Rosenfeld, and A Pellionisz (eds.), Neurocomputing 2: Directions for research, MIT Press, Cambridge, MA, 1990): “Perceptrons were modeled chiefly with neural connections in a ‘forward’ direction: A -> B -> C -> D. The analysis of networks with strong backward coupling proved intractable. All our interesting results arise as consequences of the strong back-coupling” (Hopfield, 1982).

The Principle of Recursive Genome Function surpassed obsolete axioms that blocked, for half a century, the entry of recursive algorithms into interpretation of the structure and function of the (holo)genome. This breakthrough,

  • by uniting the two largely separate fields of Neural Networks and Genome Informatics,

is particularly important for those who focused on Biological (actually occurring) Neural Networks (rather than  abstract algorithms that may not, or because of their core-axioms, simply could not represent neural networks under the governance of DNA information). If biophysicist Andras Pellionisz is correct, genetic science may be on the verge of yielding its third — and by far biggest — surprise. With a doctorate in physics, Pellionisz is the holder of Ph.D.’s in computer sciences and experimental biology from the prestigious Budapest Technical University and the Hungarian National Academy of Sciences. A biophysicist by training, the 59-year-old is a former research associate professor of physiology and biophysics at New York University, author of numerous papers in respected scientific journals and textbooks, a past winner of the prestigious Humboldt Prize for scientific research, a former consultant to NASA and holder of a patent on the world’s first artificial cerebellum, a technology that has already been integrated into research on advanced avionics systems. Because of his background, the Hungarian-born brain researcher might also become one of the first people to successfully launch a new company by

  • using the Internet to gather momentum for a novel scientific idea.

The genes we know about today, Pellionisz says, can be thought of as something similar to machines that make bricks (proteins, in the case of genes), with certain junk-DNA sections providing a blueprint for the different ways those proteins are assembled. The notion that at least certain parts of junk DNA might have a purpose is reflected, for example, in the far less derogatory term by which many researchers

  • now refer to such sections: introns.

In a provisional patent application filed July 31, Pellionisz claims to have

  • unlocked a key to the hidden role junk DNA

plays in growth — and in life itself. His patent application covers all attempts to

  • count,
  • measure and
  • compare

the fractal properties of introns for diagnostic and therapeutic purposes.

The FractoGene Decade, from Inception in 2002 to Proofs of Concept and Impending Clinical Applications by 2012:

  • Junk DNA Revisited (SF Gate, 2002)
  • The Future of Life, 50th Anniversary of DNA (Monterey, 2003)
  • Mandelbrot and Pellionisz (Stanford, 2004)
  • Morphogenesis, Physiology and Biophysics (Simons, Pellionisz 2005)
  • PostGenetics; Genetics beyond Genes (Budapest, 2006)
  • ENCODE conclusion (Collins, 2007)
  • The Principle of Recursive Genome Function (paper, YouTube, 2008)
  • YouTube Cold Spring Harbor presentation of FractoGene (Cold Spring Harbor, 2009)
  • Mr. President, the Genome is Fractal! (2009)
  • HolGenTech, Inc. founded (2010)
  • Pellionisz on the Board of Advisers in the USA and India (2011)
  • ENCODE – final admission (2012)
  • Recursive Genome Function is Clogged by Fractal Defects in Hilbert-Curve (2012)
  • Geometric Unification of Neuroscience and Genomics (2012)
  • US Patent Office issues FractoGene 8,280,641 to Pellionisz (2012)

http://www.junkdna.com/the_fractogene_decade.pdf

The Hidden Fractal Language of Intron DNA

To fully understand Pellionisz’ idea, one must first know what a fractal is. Fractals are a way that nature organizes matter. Fractal patterns can be found in anything that has a nonsmooth surface (unlike a billiard ball), such as

  • coastal seashores,
  • the branches of a tree or
  • the contours of a neuron (a nerve cell in the brain).

Some, but not all, fractals are self-similar and stop repeating their patterns at some stage;

  • the branches of a tree, for example, can get only so small.

Because they are geometric, meaning they have a shape, fractals can be described in mathematical terms. It’s similar to the way a circle can be described by using a number to represent its radius (the distance from its center to its outer edge). When that number is known, it’s possible to draw the circle it represents without ever having seen it before. Although the math is much more complicated, the same is true of fractals. If one has the formula for a given fractal, it’s possible to use that formula to construct, or reconstruct, an image of whatever structure it represents, no matter how complicated. The mysteriously repetitive but not identical strands of genetic material are in reality

  • building instructions organized in a special type of pattern known as a fractal.

It’s this pattern of fractal instructions, he says, that tells genes what they must do in order to form living tissue, everything from the wings of a fly to the entire body of a full-grown human. In a move sure to alienate some scientists, Pellionisz chose the unorthodox route of making his initial disclosures online on his own Web site. He picked that strategy, he says, because it is the fastest way he can document his claims and find scientific collaborators and investors. Most mainstream scientists usually blanch at such approaches, preferring more traditionally credible methods, such as publishing articles in peer-reviewed journals. Pellionisz’ idea is that a fractal set of building instructions in the DNA plays a role in organizing life itself; decode the language, and in theory it could be reverse-engineered, just as knowing the radius of a circle lets one create that circle. The fractal-based formula

  • would allow us to understand how a heart, or disease-fighting antibodies, are created.

The idea is to encourage new collaborations across the boundaries that separate the intertwined

  • disciplines of biology, mathematics and computer sciences.

Hal Plotkin, Special to SF Gate. Thursday, November 21, 2002.
http://www.junkdna.com/
http://www.junkdna.com/the_fractogene_decade.pdf
http://www.sciencentral.com/articles/view.php3?article_id=218392305
http://www.news-medical.net/health/Junk-DNA-What-is-Junk-DNA.aspx
http://www.kurzweilai.net/junk-dna-plays-active-role-in-cancer-progression-researchers-find
http://marginalrevolution.com/marginalrevolution/2013/05/the-battle-over-junk-dna
http://profiles.nlm.nih.gov/SC/B/B/F/T/_/scbbft.pdf

Human Genome is Multifractal

The human genome: a multifractal analysis. Moreno PA, Vélez PE, Martínez E, et al. BMC Genomics 2011; 12:506. http://www.biomedcentral.com/1471-2164/12/506

Several studies have shown that genomes can be studied via a multifractal formalism. These researchers used a multifractal approach to study the genetic information content of the Caenorhabditis elegans genome, and investigated the possibility that the human genome shows a similar behavior to that observed in the nematode. They report

  • multifractality in the human genome sequence.

This behavior correlates strongly with the presence of Alu elements and, to a lesser extent, with CpG islands and (G+C) content.

  1. Gene function,
  2. clusters of orthologous genes,
  3. metabolic pathways, and
  4. exons

  • tended to increase their frequencies with ranges of multifractality, and
  • large gene families were located in genomic regions with varied multifractality;
  • a multifractal map and classification for human chromosomes are proposed.

They propose a descriptive non-linear model for the structure of the human genome. This model reveals a multifractal regionalization where many regions coexist that are

  • far from equilibrium and this non-linear organization has significant
  • molecular and medical genetic implications for understanding the role of Alu elements in genome stability and structure of the human genome.

Given the role of Alu sequences in

  • gene regulation
  • genetic diseases
  • human genetic diversity
  • adaptation and phylogenetic analyses

these quantifications are especially useful.
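
As an illustrative sketch of the general idea, the fragment below estimates a (mono-)fractal box-counting dimension from a chaos game representation of a sequence. This is a simplification for illustration; the authors applied a full multifractal formalism, which characterizes a spectrum of dimensions rather than a single value:

```python
import numpy as np

def chaos_game_representation(seq: str, size: int = 512) -> np.ndarray:
    """Map a DNA sequence onto a 2-D grid: each base pulls the current
    point halfway toward its assigned corner (the chaos game)."""
    corners = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}
    grid = np.zeros((size, size))
    x, y = 0.5, 0.5
    for base in seq:
        if base not in corners:        # skip ambiguous bases such as N
            continue
        cx, cy = corners[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        grid[int(y * (size - 1)), int(x * (size - 1))] += 1
    return grid

def box_counting_dimension(grid: np.ndarray,
                           scales=(2, 4, 8, 16, 32, 64)) -> float:
    """Slope of log(occupied boxes) versus log(boxes per side)."""
    size = grid.shape[0]
    counts = []
    for s in scales:
        b = size // s                  # box side length in grid cells
        occupied = sum(grid[i:i + b, j:j + b].any()
                       for i in range(0, size, b)
                       for j in range(0, size, b))
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), 100_000))
cgr = chaos_game_representation(seq)
print(f"box-counting dimension: {box_counting_dimension(cgr):.2f}")
```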

MiIP: The Monomer Identification and Isolation Program

Bun C, Ziccardi W, Doering J and Putonti C. Evolutionary Bioinformatics 2012; 8: 293-300. http://dx.doi.org/10.4137/EBO.S9248

Repetitive elements within genomic DNA are both functionally and evolutionarily informative. Discovering these sequences ab initio is computationally challenging, compounded by the fact that sequence identity between repetitive elements can vary significantly. These investigators present a new application, the Monomer Identification and Isolation Program (MiIP),

  • which provides functionality to both search for a particular repeat as well as
  • discover repetitive elements within a larger genomic sequence.

To compare MiIP’s performance with other repeat detection tools, analysis was conducted for synthetic sequences as well as several a21-II clones and HC21 BAC sequences. The main benefit of MiIP is

  • it is a single tool capable of both searching for known monomeric sequences and
  • discovering the occurrence of repeats ab initio.

Triplex DNA: A third strand for DNA

The DNA double helix can under certain conditions accommodate

  • a third strand in its major groove.

Researchers in the UK presented a complete set of four variant nucleotides that makes it

  • possible to use this phenomenon in gene regulation and mutagenesis.

Natural DNA only forms a triplex if the targeted strand is rich in purines – guanine (G) and adenine (A) – which, in addition to the bonds of the Watson-Crick base pairing,

  • can form two further hydrogen bonds, while
  • the ‘third strand’ oligonucleotide has the matching sequence of pyrimidines – cytosine (C) and thymine (T).
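
The base-matching rule for this pyrimidine motif is simple enough to state in code (a toy sketch: it maps G to C and A to T for the Hoogsteen face, ignoring orientation and chemistry details such as the C protonation discussed below):

```python
def hoogsteen_third_strand(purine_target: str) -> str:
    """Pyrimidine third strand for a homopurine duplex strand:
    C recognizes G and T recognizes A via Hoogsteen hydrogen bonds."""
    pairing = {"G": "C", "A": "T"}
    return "".join(pairing[base] for base in purine_target)

print(hoogsteen_third_strand("GGAAGGGA"))  # -> CCTTCCCT
```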

Any Cs or Ts in the target strand of the duplex will only bind very weakly, as

  • they contribute just one hydrogen bond.

Moreover, the recognition of G requires the C in the probe strand to be protonated, so

  • triplex formation will only work at low pH.

To overcome all these problems, the groups of Tom Brown and Keith Fox at the University of Southampton have developed modified building blocks, and have now completed

  • a set of four new nucleotides, each of which will bind to one DNA nucleotide from the major groove of the double helix.

They tested the binding of a 19-mer of these designer nucleotides to a double helix target sequence in comparison with the corresponding triplex-forming oligonucleotide made from natural DNA bases. Using fluorescence-monitored thermal melting and DNase I footprinting, the researchers showed that

  • their construct forms stable triplex even at neutral pH. 

Tests with mutated versions of the target sequence showed that

  • three of the novel nucleotides are highly selective for their target base pair,
  • while the ‘S’ nucleotide, designed to bind to T, also tolerates C.

References

DA Rusling et al. Nucleic Acids Res 2005; 33: 3025.
KM Vasquez et al. Science 2000; 290: 530.
Frank-Kamenetskii MD, Mirkin SM. Annu Rev Biochem 1995; 64: 69-95.

Since the pioneering work of Felsenfeld, Davies, & Rich, double-stranded polynucleotides containing purines in one strand and pyrimidines in the other strand [such as poly(A)/poly(U), poly(dA)/poly(dT), or poly(dAG)/poly(dCT)] have been known to be able to undergo a stoichiometric transition, forming a triple-stranded structure that contains one polypurine and two polypyrimidine strands. Early on, it was assumed that the third strand was located in the major groove and associated with the duplex via non-Watson-Crick interactions now

  • known as Hoogsteen pairing.

Triple helices consisting of one pyrimidine and two purine strands were also proposed. However, notwithstanding the fact that single-base triads in tRNA structures were well-documented, triple-helical DNA escaped wide attention until the mid-1980s. The interest in DNA triplexes arose from two partially independent developments.

  1. Homopurine-homopyrimidine stretches in supercoiled plasmids were found to adopt an unusual DNA structure, called H-DNA, which includes a triplex.
  2. Several groups demonstrated that homopyrimidine and some purine-rich oligonucleotides
  • can form stable and sequence-specific complexes with
  • corresponding homopurine-homopyrimidine sites on duplex DNA.

These complexes were shown to be triplex structures rather than D-loops, where

  • the oligonucleotide invades the double helix and displaces one strand.

A characteristic feature of all these triplexes is that the two

  • chemically homologous strands (both pyrimidine or both purine) are antiparallel.

These findings led to explosive growth in triplex studies. One can easily imagine numerous “geometrical” ways to form a triplex, and several of these have been studied experimentally. The canonical intermolecular triplex consists of either

  • three independent
  • oligonucleotide chains or of a
  • long DNA duplex carrying homopurine-homopyrimidine insert
    • and the corresponding oligonucleotide.

Triplex formation strongly depends on the oligonucleotide(s) concentration. A single DNA

  • chain may also fold into a triplex connected by two loops.

To comply with the sequence and polarity requirements for triplex formation, such a DNA strand must have a peculiar sequence: It contains a mirror repeat

  1. (homopyrimidine for YR*Y triplexes and homopurine for YR*R triplexes)
  2. flanked by a sequence complementary to one half of this repeat.

Such DNA sequences fold into triplex configuration much more readily than do the corresponding intermolecular triplexes, because all triplex forming segments are brought together within the same molecule. It has become clear that both

  • sequence requirements and chain polarity rules for triplex formation

can be met by DNA target sequences built of clusters of purines and pyrimidines. The third strand consists of adjacent homopurine and homopyrimidine blocks forming Hoogsteen hydrogen bonds with purines on alternate strands of the target duplex, and

  • this strand switch preserves the proper chain polarity.

These structures, called alternate-strand triplexes, have been experimentally observed as both intra- and inter-molecular triplexes. These results increase the number of potential targets for triplex formation in natural DNAs somewhat by adding sequences composed of purine and pyrimidine clusters, although arbitrary sequences are still not targetable because

  • strand switching is energetically unfavorable.

References:
Lyamichev VI, Mirkin SM, Frank-Kamenetskii MD. J Biomol Struct Dyn 1986; 3: 667-669.
Filippov SA, Frank-Kamenetskii MD. Nature 1987; 330: 495-497.
Demidov V, Frank-Kamenetskii MD, Egholm M, Buchardt O, Nielsen PE. Nucleic Acids Res 1993; 21: 2103-2107.
Mirkin SM, Frank-Kamenetskii MD. Annu Rev Biophys Biomol Struct 1994; 23: 541-576.
Hoogsteen K. Acta Crystallogr 1963; 16: 907-916.
Malkov VA, Voloshin ON, Veselkov AG, Rostapshov VM, Jansen I, et al. Nucleic Acids Res 1993; 21: 105-111.
Malkov VA, Voloshin ON, Soyfer VN, Frank-Kamenetskii MD. Nucleic Acids Res 1993; 21: 585-591.
Cherny DY, Belotserkovskii BP, Frank-Kamenetskii MD, Egholm M, Buchardt O, et al. Proc Natl Acad Sci USA 1993; 90: 1667-1670.

Triplex forming oligonucleotides

Triplex forming oligonucleotides: sequence-specific tools for genetic targeting. Knauert MP, Glazer PM. Human Molec Genetics 2001; 10(20): 2243-2251.

Triplex forming oligonucleotides (TFOs) bind in the major groove of duplex DNA with a

  • high specificity and affinity.

Because of these characteristics, TFOs have been proposed as

  • homing devices for genetic manipulation in vivo.

These investigators review work demonstrating the ability of TFOs and related molecules

  • to alter gene expression and mediate gene modification in mammalian cells.

TFOs can mediate targeted gene knock out in mice, providing a foundation for potential

  • application of these molecules in human gene therapy.

The Triplex Genetic Code

Novagon DNA. John Allen Berger, founder of Novagon DNA and The Triplex Genetic Code.

Over the past 12+ years, Novagon DNA has amassed a vast array of empirical findings which

  • challenge the “validity” of the “central dogma theory”, especially the current five nucleotide
  • Watson-Crick DNA and RNA genetic codes. DNA = A1T1G1C1, RNA =A2U1G2C2.

We propose that our new Novagon DNA 6 nucleotide Triplex Genetic Code has more validity than

  • the existing 5 nucleotide (A1T1U1G1C1) Watson-Crick genetic codes.

Our goal is to conduct a “world class” validation study to replicate and extend our findings.

Methods for Examining Genomic and Proteomic Interactions.

An Integrated Statistical Approach to Compare Transcriptomics Data Across Experiments: A Case Study on the Identification of Candidate Target Genes of the Transcription Factor PPARα. Ullah MO, Müller M and Hooiveld GJEJ. Bioinformatics and Biology Insights 2012; 6: 145–154. http://dx.doi.org/10.4137/BBI.S9529 http://www.ncbi.nlm.nih.gov/pubmed/22783064 http://edepot.wur.nl/213859 Corresponding author email: guido.hooiveld@wur.nl

An effective strategy to elucidate the signal transduction cascades activated by a transcription factor

  • is to compare the transcriptional profiles of wild type and transcription factor knockout models.

Many statistical tests have been proposed for analyzing gene expression data, but

  • most tests are based on pair-wise comparisons.

Since the analysis of microarrays involves the testing of multiple hypotheses within one study,

  • it is generally accepted to control for false positives by the false discovery rate (FDR).

However, this may be an inappropriate metric for

    • comparing data across different experiments.
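
For reference, the standard Benjamini-Hochberg step-up procedure behind FDR control can be sketched in a few lines (an illustrative implementation, not the authors' code):

```python
import numpy as np

def benjamini_hochberg(pvals, q: float = 0.05) -> np.ndarray:
    """Boolean mask of discoveries at FDR level q (BH step-up rule)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # largest rank k with p_(k) <= q * k / m; reject the k smallest p-values
    below = ranked <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pvals))  # the first two survive at q = 0.05
```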

Here we propose the simultaneous testing and integration of

  • the three hypotheses (contrasts) using the cell means ANOVA model.

These three contrasts test for the effect of a treatment in

  1. wild type,
  2. gene knockout, and
  3. globally over all experimental groups
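
A hedged sketch of this cell-means formulation for a single gene, with the three contrasts tested as t-tests on the fitted cell means (the data, sample sizes, and effect sizes below are invented for illustration; the authors' analysis was applied genome-wide):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented one-gene data set: 2 genotypes x 2 treatments, 5 replicates per cell.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "genotype": np.repeat(["WT", "KO"], 10),
    "treatment": np.tile(np.repeat(["ctrl", "trt"], 5), 2),
})
df["expr"] = rng.normal(8.0, 0.3, len(df)) + np.where(
    (df.genotype == "WT") & (df.treatment == "trt"), 1.0, 0.0)

# Cell-means design: one indicator per genotype-by-treatment cell and no
# intercept, so each coefficient is simply that cell's mean.
X = pd.get_dummies(df.genotype + "_" + df.treatment).astype(float)
fit = sm.OLS(df["expr"], X).fit()

def test_contrast(weights: dict):
    """t-test of a linear contrast of the fitted cell means."""
    c = np.zeros(X.shape[1])
    for cell, w in weights.items():
        c[list(X.columns).index(cell)] = w
    return fit.t_test(c)

print(test_contrast({"WT_trt": 1, "WT_ctrl": -1}))            # effect in wild type
print(test_contrast({"KO_trt": 1, "KO_ctrl": -1}))            # effect in knockout
print(test_contrast({"WT_trt": 0.5, "KO_trt": 0.5,
                     "WT_ctrl": -0.5, "KO_ctrl": -0.5}))      # global effect
```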

We compare differential expression of genes across experiments while

  • controlling for multiple hypothesis testing.

Managing biological complexity across orthologs with a visual knowledgebase of documented biomolecular interactions

Vincent Van Buren & Hailin Chen. Scientific Reports 2012; 2, Article number: 1011. http://dx.doi.org/10.1038/srep01011

The complexity of biomolecular interactions and influences is a major obstacle

  • to their comprehension and elucidation.

Visualizing knowledge of biomolecular interactions increases

  • comprehension and facilitates the development of new hypotheses.

The rapidly changing landscape of high-content experimental results also presents a challenge

  • for the maintenance of comprehensive knowledgebases.

Distributing the responsibility for maintenance of a knowledgebase to a community of

  • experts is an effective strategy for large, complex and rapidly changing knowledgebases.

Cognoscente serves these needs

  • by building visualizations for queries of biomolecular interactions on demand,
  • by managing the complexity of those visualizations, and
  • by crowdsourcing to promote the incorporation of current knowledge from the literature.

Imputing functional associations

  • between biomolecules and imputing directionality of regulation
  • for those predictions each require a corpus of existing knowledge as a framework.

Comprehension of the complexity of this corpus of knowledge will be facilitated by effective

  • visualizations of the corresponding biomolecular interaction networks.

Cognoscente (http://vanburenlab.medicine.tamhsc.edu/cognoscente.html) was designed and implemented to serve these roles as a knowledgebase and as

  • an effective visualization tool for systems biology research and education.

Cognoscente currently contains over 413,000 documented interactions, with coverage across multiple species. Perl, HTML, GraphViz, and a MySQL database were used in the development of Cognoscente. Cognoscente was motivated by the need to update the knowledgebase of

  • biomolecular interactions at the user level, and
  • flexibly visualize multi-molecule query results for
    • heterogeneous interaction types across different orthologs.

Satisfying these needs provides a strong foundation for developing new hypotheses about

  • regulatory and metabolic pathway topologies.

Several existing tools provide functions that are similar to Cognoscente.

[Figures (Photo credits: Wikipedia): Hilbert 3D curve, iteration 3; 3-dimensional Hilbert cube; 1st and 2nd iterations of the Hilbert curve in 3D; the first 8 steps of the construction of the Hilbert curve.]

Read Full Post »
