


A Future for Plasma Metabolomics in Cardiovascular Disease Assessment

Curator: Larry H Bernstein, MD, FCAP

 

 

Plasma metabolomics reveals a potential panel of biomarkers for early diagnosis
in acute coronary syndrome  

CM. Laborde, L Mourino-Alvarez, M Posada-Ayala,
G Alvarez-Llamas, MG Serranillos-Reus, et al.
Metabolomics – manuscript draft

In this study, analyses of peripheral plasma from non-ST-segment elevation
acute coronary syndrome (NSTEACS) patients and healthy controls by gas
chromatography–mass spectrometry permitted the identification of 15 metabolites
with statistically significant differences (p<0.05) between experimental groups.
In our study, 6 amino acids were found to be decreased in NSTEACS patients
compared with the healthy control group, suggesting either a decrease in the
anabolic activity of these metabolites or an increase in their catabolic pathways.
Of the two possibilities, the increased catabolism of the amino acids can be
explained by considering the glycogenic and ketogenic capacity of these amino
acids together with the gradual hypoxic conditions to which cardiac muscle cells
have been exposed.

Additionally, validation by gas chromatography–mass spectrometry and liquid
chromatography–mass spectrometry permitted us to identify a potential panel
of biomarkers formed by 5-OH-tryptophan, 2-OH-butyric acid and 3-OH-butyric
acid. Oxidative stress conditions dramatically increase the rate of hepatic
synthesis of glutathione, which is synthesized from the amino acids cysteine,
glutamic acid and glycine. Under these conditions of metabolic stress, the
supply of cysteine for glutathione synthesis becomes limiting, and homocysteine
is used to form cystathionine, which is cleaved to cysteine and 2-OH-butyric
acid. Thus, elevated plasma levels of 2-OH-butyric acid can be a good biomarker
of cellular oxidative stress for the early diagnosis of ACS. Another altered
metabolite of similar structure was 3-OH-butyric acid, a ketone body, along
with acetoacetate and acetone. Elevated levels of ketone bodies in blood and
urine occur mainly in diabetic ketoacidosis. Type 1 diabetes mellitus (DM1)
patients have decreased levels of insulin in the blood, which prevents glucose
from entering cells, so these cells use the catabolism of fats as an energy
source, producing ketones as final products.
This panel of biomarkers reflects the oxidative stress and the hypoxic state
that disrupt the myocardial cells and consequently constitutes a metabolomic
signature that could be used for early diagnosis of acute coronary syndrome.
We hypothesize that the hypoxic situation comes to “mimic” the physiological
situation that occurs in DM1: the low energy yield of glucose metabolism
“forces” these cells to use fat as an energy source (through catabolism
independent of aerobic/anaerobic conditions), yielding ketones as final
products. In our experiment, 3-OH-butyric acid was strongly elevated in
NSTEACS patients.

 

Current Methods Used in the Protein Carbonyl Assay
Nicoleta Carmen Purdel, Denisa Margina and Mihaela Ilie.
Ann Res & Rev in Biol 2014; 4(12): 2015-2026.
http://www.sciencedomain.org/download.php?f=Purdel4122013ARRB8763-1

The attack of reactive oxygen species on proteins and the formation of
protein carbonyls have been investigated only in recent years. Taking into
account that protein carbonyls may play an important role in the early
diagnosis of pathologies associated with reactive oxygen species
overproduction, a robust and reliable method to quantify the protein
carbonyls in complex biological samples is also required. Oxidative
stress represents the aggression produced at the molecular level by
the imbalance between pro-oxidant and antioxidant agents, in favor of
pro-oxidants, with severe functional consequences in all organs and
tissues. An overproduction of ROS results in oxidative damage,
especially to proteins (the main target of ROS), as well as to lipids or
DNA. Glycation and oxidative stress are closely linked, and both
phenomena are referred to as ‘‘glycoxidation’’. All steps of glycoxidation
generate oxygen-free radical production, some of them being common
with lipidic peroxidation pathways.
The initial glycation reaction is followed by a cascade of chemical
reactions resulting in the formation of intermediate products (Schiff base,
Amadori and Maillard products) and finally to a variety of derivatives
named advanced glycation end products (AGEs). In hyperglycemic
environments and in natural aging, AGEs are generated in increased
concentrations; their levels can be evaluated in plasma due to the fact
that they are fluorescent compounds. Specific biomarkers of oxidative
stress are currently investigated in order to evaluate the oxidative status
of a biological system and/or its regenerative power. Generally, malondialdehyde,
4-hydroxy-nonenal (known together as thiobarbituric acid
reactive substances – TBARS), 2-propenal and F2-isoprostanes are
investigated as markers of lipid peroxidation, while the measurement
of protein thiols, as well as S-glutathionylated protein are assessed
as markers of oxidative damage of proteins. In most cases,
8-hydroxy-2′-deoxyguanosine (8-OHdG) serves as the marker of
oxidative damage to DNA. The oxidative degradation of proteins plays an
important role in the early diagnosis of pathologies associated with
ROS overproduction. Oxidative modification of the protein structure
may take a variety of forms, including the nitration of tyrosine residues,
carbonylation, oxidation of methionine, or thiol groups, etc.

The carbonylation of protein represents the introduction of carbonyl
groups (aldehyde or ketone) in the protein structure, through several
mechanisms: by direct oxidation of lysine, arginine,
proline and threonine residues in the protein chain; by interaction
with lipid peroxidation products bearing aldehyde groups (such as 4-
hydroxy-2-nonenal, malondialdehyde, 2-propenal); or by
interaction with carbonyl-bearing compounds resulting from
lipid degradation or glycoxidation. All of these
molecular changes occur under oxidative stress conditions.
There is a pattern of carbonylation, meaning that only certain
proteins can undergo this process and protein structure determines
the preferential sites of carbonylation. The most investigated
carbonyl derivatives are gamma-glutamic
semialdehyde (GGS), generated from the degradation of arginine
residues, and α-aminoadipic semialdehyde (AAS), derived from lysine.

A number of studies have shown that the generation of protein
carbonyl groups is associated with normal cellular phenomena such as
apoptosis and cell differentiation, and is dependent on age, species
and habits (e.g. smoking) or exposure to severe conditions (such as
starvation or stress). The formation and accumulation of protein
carbonyls is increased in various human diseases, including
diabetes and cardiovascular disease.

Recently, Nystrom [7] suggested that the carbonylation process
is associated with the physiological rather than the chronological
age of the organism and that carbonylation may be one of the causes
of aging and cell senescence; therefore it can be used as a marker
of these processes. Jha and Rizvi [15] proposed the quantification of
protein carbonyls in the erythrocyte membrane as a biomarker of aging.

PanelomiX: A threshold-based algorithm to create panels of
biomarkers

X Robin, N Turck, A Hainard, N Tiberti, F Lisacek. 
Translational Proteomics 2013; 1: 57–64.
http://dx.doi.org/10.1016/j.trprot.2013.04.003

The computational toolbox we present here – PanelomiX – uses
the iterative combination of biomarkers and thresholds (ICBT) method.
This method combines biomarkers and clinical scores by selecting
thresholds that provide optimal classification performance. To speed
up the calculation for a large number of biomarkers, PanelomiX selects
a subset of thresholds and parameters based on the random forest method.
The panels’ robustness and performance are analysed by cross-validation
(CV) and receiver operating characteristic (ROC) analysis.

Using 8 biomarkers, we compared this method against classic
combination procedures in the determination of outcome for 113 patients
with an aneurysmal subarachnoid hemorrhage. The panel classified the
patients better than the best single biomarker (p < 0.005) and compared
favourably with other off-the-shelf classification methods.

In conclusion, the PanelomiX toolbox combines biomarkers and evaluates
the performance of panels to classify patients better than single markers
or other classifiers. The ICBT algorithm proved to be an efficient classifier,
the results of which can easily be interpreted. 
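To make the threshold-combination idea concrete, here is a minimal sketch of an ICBT-style panel: each biomarker gets a candidate threshold, a sample is called positive when enough markers exceed their thresholds, and threshold combinations are searched exhaustively. The marker names, the "at least 2 of 3 positive" voting rule, and the 90% specificity floor are illustrative assumptions, not the PanelomiX implementation.

```python
# Minimal sketch of a threshold-based biomarker panel in the spirit of ICBT.
# Marker names, thresholds, and the ">= min_positive markers positive" voting
# rule are illustrative assumptions, not the PanelomiX algorithm itself.

from itertools import product

def panel_predict(sample, thresholds, min_positive=2):
    """Classify a sample as positive if at least `min_positive` markers
    meet or exceed their thresholds."""
    votes = sum(sample[m] >= t for m, t in thresholds.items())
    return votes >= min_positive

def grid_search_panel(samples, labels, candidate_thresholds, min_positive=2):
    """Exhaustively combine candidate thresholds (one per marker) and keep
    the combination with the best sensitivity at a fixed specificity floor."""
    markers = list(candidate_thresholds)
    best = None
    for combo in product(*(candidate_thresholds[m] for m in markers)):
        th = dict(zip(markers, combo))
        preds = [panel_predict(s, th, min_positive) for s in samples]
        tp = sum(p and y for p, y in zip(preds, labels))
        tn = sum((not p) and (not y) for p, y in zip(preds, labels))
        sens = tp / max(sum(labels), 1)
        spec = tn / max(len(labels) - sum(labels), 1)
        if spec >= 0.90 and (best is None or sens > best[0]):
            best = (sens, spec, th)
    return best

# Toy usage with two hypothetical samples
samples = [{"markerA": 5.1, "markerB": 0.8, "markerC": 2.4},
           {"markerA": 1.2, "markerB": 0.2, "markerC": 0.9}]
labels = [True, False]
candidates = {"markerA": [2.0, 4.0], "markerB": [0.5, 1.0], "markerC": [1.5, 2.0]}
print(grid_search_panel(samples, labels, candidates))
```

In practice such a panel would be tuned and evaluated with cross-validation and ROC analysis, as the authors describe, rather than on a single toy split.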

Multiparametric diagnostics of cardiomyopathies by microRNA
signatures.
CS. Siegismund, M Rohde, U Kühl,  D  Lassner.
Microchim Acta 2014 Mar.
http://dx.doi.org/10.1007/s00604-014-1249-y

MicroRNAs (miRNAs) represent a new group of stable biomarkers
that are detectable both in tissue and body fluids. Such miRNAs
may serve as cardiological biomarkers to characterize inflammatory
processes and to differentiate various forms of infection. The predictive
power of single miRNAs for diagnosis of complex diseases may be further
increased if several distinctly deregulated candidates are combined to
form a specific miRNA signature. Diagnostic systems that generate
disease related miRNA profiles are based on microarrays, bead-based
oligo sorbent assays, or on assays based on real-time polymerase
chain reactions and placed on microfluidic cards or nanowell plates.
Multiparametric diagnostic systems that can measure differentially
expressed miRNAs may become the diagnostic tool of the future due
to their predictive value with respect to clinical course, therapeutic
decisions, and therapy monitoring.

Nutritional lipidomics: Molecular metabolism, analytics, and
diagnostics
JT. Smilowitz, AM. Zivkovic, Yu-Jui Y Wan, SM. Watkins, et al.
Mol. Nutr. Food Res 2013; 00: 1–17.
http://dx.doi.org/10.1002/mnfr.201200808

The term lipidomics is quite new, first appearing in 2001. Its definition
is still being debated, from “the comprehensive analysis of all lipid
components in a biological sample” to “the full characterization of
lipid molecular species and their biological roles with respect to the
genes that encode proteins that regulate lipid metabolism”. In principle,
lipidomics is a field taking advantage of the innovations in the separation
sciences and MS together with bioinformatics to characterize the lipid
compositions of biological samples (biofluids, cells, tissues, organisms)
compositionally and quantitatively.

Biochemical pathways of lipid metabolism remain incomplete and the
tools to map lipid compositional data to pathways are still being assembled.
Biology itself is dauntingly complex and simply separating biological
structures remains a key challenge to lipidomics. Nonetheless, the
strategy of combining tandem analytical methods to perform the sensitive,
high-throughput, quantitative, and comprehensive analysis of lipid
metabolites of very large numbers of molecules is poised to drive
the field forward rapidly. Among the next steps for nutrition to understand
the changes in structures, compositions, and function of lipid biomolecules
in response to diet is to describe their distribution within discrete functional
compartments such as lipoproteins. Additionally, lipidomics must tackle the task
of assigning the functions of lipids as signaling molecules, nutrient sensors,
and intermediates of metabolic pathways.




Larry H Bernstein, MD, Curator

Leaders in Pharmaceutical Intelligence

 

Natriuretic Peptides (BNP and Amino-terminal proBNP)

Author: Larry Bernstein, M.D.,
(see Reviewers/Authors page)
Revised: 12 December 2010, last major update December 2010
Copyright: (c) 2003-2010, PathologyOutlines.com, Inc.
PathologyOutlines.com (cardiac section)

General
=========================================================================

  • Brain natriuretic peptide (BNP), now known as B-type natriuretic peptide (also BNP),
    is a 32 amino acid polypeptide secreted by the cardiac ventricles in response to
    excessive stretching of cardiomyocytes (Wikipedia)
  • BNP was originally identified in extracts of porcine brain, although in humans
    it is produced mainly in the cardiac ventricles
  • BNP is co-secreted with a 76 amino acid N-terminal fragment (NT-proBNP),
    which is biologically inactive

Indications
=========================================================================

  • Evaluation of dyspneic patient with suspected congestive heart failure,
    regardless of renal function (J Am Coll Cardiol 2006;47:91)
  • B-type natriuretic peptide levels are higher in patients with congestive heart
    failure than in dyspnea from other causes (J Am Coll Cardiol 2002;39:202,
    N Engl J Med 2004;350:647)
  • NT-proBNP measurement is a valuable addition to standard clinical
    assessment for the identification and exclusion of acute CHF in the
    emergency department setting (Am J Cardiol 2005;95:9480)

Clinical features
=========================================================================

  • Reduces misdiagnosis of congestive heart failure, which occurs
    50% to 75% of the time
  • NT-proBNP is superior to BNP for predicting mortality and morbidity for heart
    failure (Clin Chem 2006;52:1528), and coexisting renal disease and heart failure
    (Clin Chem 2007;53:1511)

Reference ranges
=========================================================================

  • BNP levels below 100 pg/mL indicate no heart failure

Limitations
=========================================================================

  • Determination of endogenous BNP with the AxSYM assay using frozen
    plasma samples may not be valid after 1 day, but NT-proBNP as
    measured by the Elecsys assay may be stored at -20 degrees C for
    at least four months without a relevant loss of the immunoreactive
    analyte (Clin Chem Lab Med 2004;42:942)

Additional references
=========================================================================

  • Clin Chem 2007;53:1928, Am J Kidney Dis 2005;46:610,
    Hypertension 2005;46:118, Hypertension 2006;47:874,
    Eur J Heart Fail 2004;6:269

Natriuretic peptides for risk stratification of patients with acute coronary syndromes
M Galvani, D Ferrini, F Ottani. Eur J Heart Fail 2004; 6: 327–333.
http://eurjhf.oxfordjournals.org

Both BNP and NT-proBNP possess several characteristics of the ideal biomarker,
showing independent and incremental prognostic value above traditional clinical,
electrocardiographic, and biochemical (particularly troponin) risk indicators. Specifically,
in ACS patients, BNP and NT-proBNP have powerful prognostic value both in patients
without a history of previous heart failure and in those without clinical or instrumental
signs of left ventricular dysfunction on admission or during the hospital stay.

Our results show that the prognostic value of natriuretic peptides is similar:
(1) both at short- and long-term;
(2) when natriuretic peptides are measured at first patient contact or during hospital stay;
(3) for BNP or NT-proBNP; and
(4) in patients with ST elevation myocardial infarction or no ST elevation ACS.

 

Steady-State Levels of Troponin and Brain Natriuretic Peptide for Prediction of Long-Term
Outcome after Acute Heart Failure with or without Stage 3 to 4 Chronic Kidney Disease

Y Endo, S Kohsaka, T Nagai, K Koide, M Takahashi, et al.
Br J Med Med Res 2012; 2(4): 490-500.
http://dx.doi.org:/10.9734/BJMMR/2012/1384

The population was predominantly male (69.3%), and the mean age was 66.6±15.3 years.
Patients with higher BNP levels or detectable TnT had a worse prognosis (BNP 45.0% vs.
18.8%, p<0.001; TnT 43.8% vs. 25.1%, p=0.002, respectively). The primary event rate
was additively worse among patients with both increased BNP levels and detectable TnT
compared to those with increased levels of BNP or detectable TnT alone (log-rank p<0.001).
A similar trend was observed in the subgroup of patients with CKD stage III–V (n=172).

The Effect of Correction of Mild Anemia in Severe, Resistant Congestive Heart Failure
Using Subcutaneous Erythropoietin and Intravenous Iron: A Randomized Controlled Study

DS. Silverberg, D Wexler, D Sheps, M Blum, G Keren, et al.  JACC 2001; 37(7).
PII S0735-1097(01)01248-7  http://www.ncbi.nlm.nih.gov/pubmed/11401110

When anemia in CHF is treated with EPO and IV iron, a marked improvement in
cardiac and patient function is seen, associated with less hospitalization and renal
impairment and less need for diuretics. (J Am Coll Cardiol 2001;37:1775– 80)

 

 

 

[Figure: Hemoglobin on NT-proBNP]

What is the best approximation of reference normal for NT-proBNP?
Clinical levels for enhanced assessment of NT-proBNP (CLEAN)

Larry H. Bernstein1*, Michael Y. Zions1,4, Mohammed E. Alam1,5, Salman A. Haq1,
John F. Heitner1, Stuart Zarich2, Bette Seamonds3 and Stanley Berger3
1New York Methodist Hospital, Brooklyn, NY; 2Bridgeport Hospital, Bridgeport, CT;
3Mercy Catholic Medical Center, Darby, Phila, PA;  4Touro College, &  5Medgar
Evers College, Brooklyn, NY
Journal of Medical Laboratory and Diagnosis 04/2011; 2:16-21.
http://www.academicjournals.org/jmld

The natriuretic peptides, B-type natriuretic peptide (BNP) and NT-proBNP, that
have emerged as tools for diagnosing congestive heart failure (CHF) are affected
by age and renal insufficiency (RI). NT-proBNP is used in rejecting CHF and as a
marker of risk for patients with acute coronary syndromes. This observational study
was undertaken to evaluate the reference value for interpreting NT-proBNP
concentrations. The hypothesis is that increasing concentrations of NT-proBNP
are associated with the effects of multiple co-morbidities, not merely CHF,
resulting in altered volume status or myocardial filling pressures.

NT-proBNP was measured in a population with normal trans-thoracic echocardiograms
(TTE) and free of anemia or renal impairment. Exclusion conditions were the following
co-morbidities:

  • anemia as defined by WHO,
  • atrial fibrillation (AF),
  • elevated troponin T exceeding 0.070 ng/ml,
  • systolic or diastolic blood pressure exceeding 140 and 90 mm Hg respectively,
  • ejection fraction less than 45%,
  • left ventricular hypertrophy (LVH),
  • left ventricular wall relaxation impairment, and
  • renal insufficiency (RI) defined by creatinine clearance < 60 ml/min using
    the MDRD formula.

Study participants were seen in acute care for symptoms of shortness of breath
suspicious for CHF requiring evaluation with a cardiac NT-proBNP assay. The median
NT-proBNP for patients under 50 years was 60.5 pg/ml with an upper limit of 462 pg/ml,
and for patients over 50 years the median was 272.8 pg/ml with an upper limit of
998.2 pg/ml.
We suggest that NT-proBNP levels can be more accurately interpreted only after
removal of the major co-morbidities that drive an increase in this peptide in serum.
The PRIDE study guidelines should be applied until the presence or absence of
comorbidities is diagnosed. With no comorbidities, the upper reference limit for
patients over 50 years of age remains steady at ~1000 pg/ml. The effect shown in
previous papers is likely due to increasing concurrent comorbidity with age.

NT-proBNP profile of combined population taken from 3 sites and donors.

Age (years)               Under 50          50-69             70 and over
n                         209               126               82
NT-proBNP (pg/ml):
  Mean                    35.9              182.4             611.7
  95% CI of mean          29.8-43.3         132.1-251.9       425.2-880.1
  Median                  27.6              142.3             564.2
  95% CI of median        24.8-33.6         92.3-219.0        419.7-1007.7
  2.5-97.5 percentile     5.0-1364          10.8-11604        28.8-14242
  25-75 percentile        14.9-55.8         42.1-565          210.2-2062

 

We observe the following changes with respect to NTproBNP and age:

(i) Sharp increase in NT-proBNP at over age 50

(ii) Increase in NT-proBNP at 7% per decade over 50

(iii) Decrease in eGFR at 4% per decade over 50

(iv) Slope of NT-proBNP increase with age is related to proportion of patients with
eGFR less than 90

(v) NT-proBNP increase can be delayed or accelerated based on disease
comorbidities
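As a worked illustration of observations (ii) and (iii), the sketch below projects NT-proBNP and eGFR per decade after age 50 at the stated rates; the age-50 baseline values are illustrative assumptions, not study estimates.

```python
# Worked illustration of the stated per-decade trends (7%/decade rise in
# NT-proBNP and 4%/decade fall in eGFR after age 50). The age-50 baseline
# values below are illustrative assumptions, not study estimates.

def project(baseline, pct_per_decade, decades):
    return baseline * (1 + pct_per_decade / 100) ** decades

baseline_ntprobnp = 272.8   # pg/ml, illustrative (median reported for >50 y)
baseline_egfr = 90.0        # ml/min/1.73 m2, illustrative

for age in (50, 60, 70, 80):
    d = (age - 50) / 10
    print(f"age {age}: NT-proBNP ~{project(baseline_ntprobnp, 7, d):.0f} pg/ml, "
          f"eGFR ~{project(baseline_egfr, -4, d):.0f} ml/min/1.73 m2")
```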

Figure 1. Plot of NT-proBNP sensitivity and specificity with RI prevalence.
GFRe scale: 0, > 120; 1, 90-119; 2, 60-89; 3, 40-59; 4, 15-39; 5, under 15 ml/min.

[Figure 2: NKF staging by GFRe interval and NT-proBNP (CHF removed)]

 

Figure 2 plots the mean and 95% CI of NT-proBNP (CHF removed) by the National Kidney Foundation
staging for eGFR interval (eGFR scale: 0, > 120; 1, 90 to 119; 2, 60 to 89; 3, 40 to 59; 4, 15 to 39; 5,
under 15 ml/min). We created a new variable to minimize the effects of age and eGFR variability by
correcting these large effects in the whole sample population.

Adjustment of NT-proBNP for eGFR and for age-over-50 differences. We have
carried out a normalization to adjust for both eGFR and for age over 50
(a minimal code sketch of these steps appears below):

(i) Take the log of NT-proBNP and multiply by 1000

(ii) Divide the result by eGFR (using the MDRD or Cockcroft-Gault formula)

(iii) Compare results for age under 50, 50-70, and over 70 years

(iv) Adjust to age under 50 years by multiplying by 0.66 and 0.56, respectively.

The equation does not require weight because the results are reported normalized
to 1.73 m2 body surface area, which is an accepted average adult surface area.
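Below is a minimal sketch of this adjustment, assuming the four-variable MDRD equation for eGFR, a base-10 logarithm (the base is not stated above), and that the 0.66 and 0.56 factors apply to the 50-70 and over-70 groups, respectively; the function names and example values are illustrative only.

```python
# Minimal sketch of the eGFR- and age-adjusted NT-proBNP variable described
# above. Assumptions not stated explicitly in the source: base-10 logarithm,
# the IDMS-traceable four-variable MDRD equation, and mapping of the
# 0.66 / 0.56 factors to the 50-69 and >=70 year groups, respectively.

import math

def egfr_mdrd(creatinine_mg_dl, age, female, black=False):
    """Four-variable MDRD study equation, ml/min/1.73 m2."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def adjusted_ntprobnp(ntprobnp_pg_ml, egfr, age):
    """1000 * log10(NT-proBNP) / eGFR, then scaled toward the under-50 group."""
    value = 1000.0 * math.log10(ntprobnp_pg_ml) / egfr
    if 50 <= age < 70:
        value *= 0.66   # assumed mapping of the 0.66 factor
    elif age >= 70:
        value *= 0.56   # assumed mapping of the 0.56 factor
    return value

# Illustrative usage
egfr = egfr_mdrd(creatinine_mg_dl=1.0, age=72, female=False)
print(round(adjusted_ntprobnp(564.2, egfr, age=72), 1))
```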

 

Figure 3. Plot of 1000*log(NT-proBNP)/eGFR vs age at eGFR over 90 and 60 ml/min.

Figure 4. Superimposed scatterplot and regression line with centroid and
confidence interval for 1000*log(NT-proBNP)/eGFR vs age (anemia removed)
at eGFR over 40 and 90 ml/min. (Black: eGFR > 90; Blue: eGFR > 40)

 

[Figure: Reference range for NT-proBNP before and after adjusting]

 

Amino-Terminal Pro-Brain Natriuretic Peptide, Renal Function, and
Outcomes in Acute Heart Failure
RRJ. van Kimmenade, JL. Januzzi Jr, AL. Baggish, et al. JACC 2006; 48(8): 1621-7.

We sought to study the individual and integrative role of amino-terminal pro-brain natriuretic
peptide (NT-proBNP) and parameters of renal function for prognosis in heart failure. The
combination of NT-proBNP with measures of renal function better predicts short-term outcome
in acute heart failure than either parameter alone. Among heart failure patients, the objective
parameter of NT-proBNP seems more useful to delineate the “cardiorenal syndrome” than the
previous criteria of a clinical diagnosis of heart failure.

 

NT-proBNP testing for diagnosis and short-term prognosis in acute destabilized
heart failure: an international pooled analysis of 1256 patients The International
Collaborative of NT-proBNP Study
Januzzi, R van Kimmenade, J Lainchbury, A Bayes-Genis, J Ordonez-Llanos, et al.
Eur Heart J 2006; 27: 330–337. http://dx.doi.org/10.1093/eurheartj/ehi631

Differences in NT-proBNP levels among 1256 patients with and without acute HF and the relationship
between NT-proBNP levels and HF symptoms were examined. Optimal cut-points for diagnosis and
prognosis were identified and verified using bootstrapping and multivariable logistic regression techniques.

Seven hundred and twenty subjects (57.3%) had acute HF, whose median NT-proBNP was considerably
higher than those without (4639 vs. 108 pg/mL, P < 0.001), and levels of NT-proBNP correlated with HF
symptom severity (P < 0.008). An optimal strategy to identify acute HF was to use age-related cut-points
of 450, 900, and 1800 pg/mL for ages < 50, 50–75, and > 75, which yielded 90% sensitivity and 84% specificity
for acute HF. An age-independent cut-point of 300 pg/mL had 98% negative predictive value to exclude acute
HF. Among those with acute HF, a presenting NT-proBNP concentration > 5180 pg/mL was strongly predictive
of death by 76 days [odds ratio = 5.2, 95% confidence interval (CI) = 2.2–8.1, P < 0.001].
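A minimal sketch of the age-stratified decision logic described above (rule-out at 300 pg/mL; rule-in at 450, 900, or 1800 pg/mL for ages < 50, 50-75, and > 75): the function name, the three-way labels, and the use of ">=" at the cut-points are illustrative assumptions rather than the study's exact algorithm.

```python
# Minimal sketch of the age-stratified NT-proBNP rule-in / rule-out logic
# described above. Whether the cut-points are ">" or ">=" is an assumption,
# as are the function name and result labels.

def classify_acute_hf(ntprobnp_pg_ml, age_years):
    if ntprobnp_pg_ml < 300:
        return "acute HF unlikely (rule-out)"
    if age_years < 50:
        cutpoint = 450
    elif age_years <= 75:
        cutpoint = 900
    else:
        cutpoint = 1800
    if ntprobnp_pg_ml >= cutpoint:
        return "acute HF likely (rule-in)"
    return "indeterminate"

print(classify_acute_hf(1200, 68))   # -> acute HF likely (rule-in)
print(classify_acute_hf(250, 82))    # -> acute HF unlikely (rule-out)
```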

Effect of B-type natriuretic peptide-guided treatment of chronic heart failure on total mortality
and hospitalization: an individual patient meta-analysis
RW. Troughton, CM. Frampton, HP Brunner-La Rocca, M Pfisterer, LW.M. Eurlings, et al.
Eur Heart J Mar 2014; 35, 1559–1567.
http://dx.doi.org/10.1093/eurheartj/ehu090

We sought to perform an individual patient data meta-analysis to evaluate the effect of NP-guided treatment
of heart failure on all-cause mortality.  The survival benefit from NP-guided therapy was seen in younger (< 75
years) patients [0.62 (0.45–0.85); P = 0.004] but not older (≥75 years) patients [0.98 (0.75–1.27); P = 0.96].
Hospitalization due to heart failure [0.80 (0.67–0.94); P = 0.009] or cardiovascular disease [0.82 (0.67–0.99);
P = 0.048] was significantly lower in NP-guided patients with no heterogeneity between studies and no interaction
with age or LVEF.

 

Diagnostic and prognostic evaluation of left ventricular systolic heart failure by plasma N-terminal
pro-brain natriuretic peptide concentrations in a large sample of the general population

BA Groenning, I Raymond, PR Hildebrandt, JC Nilsson, M Baumann, F Pedersen.
Heart 2004; 90: 297–303. http://dx.doi.org/10.1136/hrt.2003.026021

Value of NT-proBNP in evaluating patients with symptoms of heart failure and impaired left ventricular (LV) systolic
function; prognostic value of NT-proBNP for mortality and hospital admissions. In 38 (5.6%) participants, LV ejection
fraction (LVEF) was ≤ 40%. NT-proBNP identified patients with symptoms of heart failure and LVEF ≤ 40% with a
sensitivity of 0.92, a specificity of 0.86, and an AUC of 0.94. NT-proBNP was the strongest independent predictor of mortality
(hazard ratio (HR) = 5.70, p < 0.0001), hospital admissions for heart failure (HR = 13.83, p < 0.0001), and other cardiac
admissions (HR = 3.69, p < 0.0001). Mortality (26 v 6, p = 0.0003), heart failure admissions (18 v 2, p = 0.0002), and
admissions for other cardiac causes (44 v 13, p < 0.0001) were significantly higher in patients with NT-proBNP above the
study median (32.5 pmol/l).

 

Testing for BNP and NT-proBNP in the Diagnosis and Prognosis of Heart Failure
Evidence Report/Technology Assessment – Number 142. Agency for Healthcare Research and Quality.
Prepared by: McMaster University Evidence-based Practice Center, Hamilton, ON, Canada
C Balion, PL. Santaguida, S Hill, A Worster, M McQueen, et al.
http://archive.ahrq.gov/downloads/pub/evidence/pdf/bnp/bnp.pdf

Question 1: What are the determinants of both BNP and NT-proBNP?
Question 2a: What are the clinical performance characteristics of both BNP and NTproBNP
measurement in patients with symptoms suggestive of HF or with known HF?
Question 2b: Does measurement of BNP or NT-proBNP add independent diagnostic information
to the traditional diagnostic measures of HF in patients with suggestive HF?
Question 3a: Do BNP or NT-proBNP levels predict cardiac events in populations at risk of CAD,
with diagnosed CAD and HF?
Question 3b: What are the screening performance characteristics of BNP or NT-proBNP in
general asymptomatic populations?
Question 4: Can BNP or NT-proBNP measurement be used to monitor response to therapy?        

Diagnosis: In all settings both BNP and NT-proBNP show good diagnostic properties as a rule out test for HF.
Prognosis: BNP and NT-proBNP are consistent independent predictors of mortality and other cardiac composite
endpoints for populations with risk of CAD, diagnosed CAD, and diagnosed HF. There is insufficient evidence to
determine the value of B-type natriuretic peptides for screening of HF.
Monitoring Treatment: There is insufficient evidence to demonstrate that BNP and NT-proBNP levels
show change in response to therapies to manage stable chronic HF patients.

Guide-IT Trial

Biomarker-Guided HF Therapy: Is It Cost-Effective
www.medscape.org/viewarticle/764686_transcript

Jan 29, 2013 – Uploaded by DCLRI
Michael Felker, MD, MHS
Associate Professor in the Division of Cardiology
Duke University Medical Center
www.youtube.com/watch?v=AW0480EE2kw

GUIDE-IT will last five years and involve approximately 45 trial sites in the United States. The first group of
patients will be enrolled by the end of 2012.

The trial tests NT-proBNP-guided therapy, with a companion diagnostic biomarker used to optimize already
available and effective therapies for heart failure. It may identify patients who will benefit from intensified therapy
and who would not have been identified using only signs and symptoms of heart failure, as is currently the practice.
The NT-proBNP biomarker would enable doctors to create personalized treatment plans for patients to substantially
reduce mortality and morbidity.

 Risk stratification in acute heart failure: Rationale and design of the
STRATIFY and DECIDE studies 

SP. Collins, CJ. Lindsell, CA. Jenkins, FE. Harrell, et al.
Am Heart J 2012;164:825-34.
http://dx.doi.org/10.1016/j.ahj.2012.07.033

Two studies (STRATIFY and DECIDE) have been funded by the National Heart Lung and Blood Institute with
the goal of developing prediction rules to facilitate early decision making in AHF. Using prospectively gathered
evaluation and treatment data from the acute setting (STRATIFY) and early inpatient stay (DECIDE), rules will
be generated to predict risk for death and serious complications.
A rigorous analysis plan has been developed to construct the prediction rules that will maximally extract both the
statistical and clinical properties of every data element. Upon completion of this study we will subsequently externally
test the prediction rules in a heterogeneous patient cohort.

N-terminal pro-B-type natriuretic peptide and the prediction of primary cardiovascular
events: results from 15-year follow-up of WOSCOPS

P Welsh, O Doolin, P Willeit, C Packard, P Macfarlane, S Cobbe, et al.
Eur Heart J Aug 2012.
http://dx.doi.org/10.1093/eurheartj/ehs239

To test whether N-terminal pro-B-type natriuretic peptide (NT-proBNP) was independently associated with, and
improved the prediction of, cardiovascular disease (CVD) in a primary prevention cohort. N-terminal pro-B-type
natriuretic peptide predicts CVD events in men without clinical evidence of CHD, angina, or history of stroke,
and appears related more strongly to the risk for fatal events.
NT-proBNP was associated with an increased risk of all CVD [HR: 1.17 (95% CI: 1.11–1.23) per standard deviation
increase in log NT-proBNP] after adjustment for classical and clinical cardiovascular risk factors plus C-reactive protein.
N-terminal pro-B-type natriuretic peptide was more strongly related to the risk of fatal [HR: 1.34 (95% CI: 1.19–1.52)]
than non-fatal CVD [HR: 1.17 (95% CI: 1.10–1.24)] (P = 0.022). The addition of NT-proBNP to traditional risk factors
improved the C-index (+0.013; P = 0.001). The continuous net reclassification index improved with the addition of NT-
proBNP by 19.8% (95% CI: 13.6–25.9%) compared with 9.8% (95% CI: 4.2–15.6%) with the addition of C-reactive protein.
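For readers unfamiliar with the continuous net reclassification index quoted above, the sketch below shows how the category-free NRI is computed from paired risk estimates with and without the added marker; the toy data and function name are illustrative, and this is not the WOSCOPS analysis code.

```python
# Minimal sketch of the continuous (category-free) net reclassification
# improvement (NRI): for each subject, ask whether adding the new marker moves
# the predicted risk in the "right" direction (up for events, down for
# non-events). Toy data; not the WOSCOPS analysis.

def continuous_nri(old_risk, new_risk, events):
    up_e = down_e = up_ne = down_ne = 0
    n_events = sum(events)
    n_nonevents = len(events) - n_events
    for old, new, event in zip(old_risk, new_risk, events):
        moved_up = new > old
        moved_down = new < old
        if event:
            up_e += moved_up
            down_e += moved_down
        else:
            up_ne += moved_up
            down_ne += moved_down
    nri_events = (up_e - down_e) / n_events
    nri_nonevents = (down_ne - up_ne) / n_nonevents
    return nri_events + nri_nonevents

# Toy example: predicted risk without and with the added marker
old = [0.10, 0.20, 0.05, 0.30]
new = [0.15, 0.18, 0.04, 0.35]
events = [True, False, False, True]
print(continuous_nri(old, new, events))  # positive value = net improvement
```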

 

Utility of B-Natriuretic Peptide in Detecting Diastolic Dysfunction: Comparison With
Doppler Velocity Recordings
E Lubien, A DeMaria, P Krishnaswamy, P Clopton, J Koon…A Maisel.
http://circ.ahajournals.org/content/105/5/595
Circulation. 2002;105:595-601
http://dx.doi.org/10.1161/hc0502.103010

Although Doppler echocardiography has been used to identify abnormal left ventricular (LV) diastolic filling dynamics,
inherent limitations suggest the need for additional measures of diastolic dysfunction. Because data suggest that B-natriuretic
peptide (BNP) partially reflects ventricular pressure, we hypothesized that BNP levels could predict diastolic abnormalities
in patients with normal systolic function. A rapid assay for BNP can reliably detect the presence of diastolic abnormalities
on echocardiography. In patients with normal systolic function, elevated BNP levels and diastolic filling abnormalities might
help to reinforce the diagnosis of diastolic dysfunction.

Association of common variants in NPPA and NPPB with circulating natriuretic
peptides and blood pressure.
C Newton-Cheh, MG Larson, RS Vasan, D Levy, KD Bloch, et al.
Nat Genet. 2009 Mar; 41(3): 348–353.
http://dx.doi.org/10.1038/ng.328

We examined the association of common variants at the NPPA-NPPB locus with circulating concentrations of the
natriuretic peptides, which have blood pressure–lowering properties. In 29,717 individuals, the alleles of rs5068
and rs198358 that showed association with increased circulating natriuretic peptide concentrations were also found
to be associated with lower systolic (P = 2 × 10^-6 and 6 × 10^-5, respectively) and diastolic blood pressure (P = 1 × 10^-6
and 5 × 10^-5), as well as reduced odds of hypertension (OR = 0.85, 95% CI = 0.79–0.92, P = 4 × 10^-5; OR = 0.90, 95%
CI = 0.85–0.95, P = 2 × 10^-4, respectively).

2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk
DC. Goff, Jr, DM. Lloyd-Jones, G Bennett, S Coady, RB. D’Agostino, Sr, et al.
Circulation. 2013;  http://circ.ahajournals.org/content/early/2013/11/11/01.cir.0000437741.48606.98.citation
http://dx.doi.org/10.1161/01.cir.0000437741.48606.98

The ACC and AHA have collaborated with the National Heart, Lung, and Blood Institute (NHLBI) and stakeholder
and professional organizations to develop clinical practice guidelines for assessment of CV risk, lifestyle modifications
to reduce CV risk, and management of blood cholesterol, overweight and obesity in adults.
Although the Task Force led the final development of these prevention guidelines, they differ from other ACC/AHA
guidelines. First, as opposed to an extensive compendium of clinical information, these documents are significantly
more limited in scope and focus on selected CQs in each topic, based on the highest quality evidence available.
Recommendations were derived from randomized trials, meta-analyses, and observational studies evaluated for quality,
and were not formulated when sufficient evidence was not available. Second, the text accompanying each recommendation
is succinct, summarizing the evidence for each question. Third, the format of the recommendations differs from other
ACC/AHA guidelines. Each recommendation has been mapped from the NHLBI grading format to the ACC/AHA Class
of Recommendation/Level of Evidence (COR/LOE) construct (Table 1) and is expressed in both formats.

 


Why did this occur? The matter of Individual Actions Undermining Trust, The Patent Dilemma and The Value of a Clinical Trial

Reporter and Curator: Larry H. Bernstein, MD, FCAP

 

The large amount of funding tied to continued research and support of postdoctoral fellows leads one to ask how following the money can lead to discredited work in the elite scientific community.

Moreover, the pressure to publish in prestigious journals with high impact factors is a road to academic promotion.  In the last twenty years, it has become unusual to find submissions for review with fewer than 6-8 authors, with the statement that all contributed to the work.  These factors can’t be discounted outright, but it is easy for work to fall through the cracks when a key investigator has over 200 publications and holds tenure in a great research environment.  But that is where we find ourselves today.

There is another issue that comes up, which is also related to the issue of carrying out research and then protecting the work for commercialization.  It is more complicated in the sense that it is necessary to determine whether there is prior art, and then there is the possibility that, after the cost of filing a patent and a six-year delay in obtaining protection, there is an equally great cost in bringing the patented work to final production.

I.  Individual actions undermining trust.

II. The patent dilemma.

III. The value of a clinical trial.

IV. The value contributions of RAP physicians
(radiologists, anesthesiologists, and pathologists – the last for discussion)
Those who maintain and inform the integrity of medical and surgical decisions

 

I. Top heart lab comes under fire

Kelly Servick

Science 18 July 2014: Vol. 345 no. 6194 p. 254 DOI: 10.1126/science.345.6194.25

 

In the study of cardiac regeneration, Piero Anversa is among the heavy hitters. His research into the heart’s repair mechanisms helped kick-start the field of cardiac cell therapy (see main story). After more than 4 decades of research and 350 papers, he heads a lab at Harvard Medical School’s Brigham and Women’s Hospital (BWH) in Boston that has more than $6 million in active grant funding from the National Institutes of Health (NIH). He is also an outspoken voice in a field full of disagreement.

So when an ongoing BWH investigation of the lab came to light earlier this year, Anversa’s colleagues were transfixed. “Reactions in the field run the gamut from disbelief to vindication,” says Mark Sussman, a cardiovascular researcher at San Diego State University in California who has collaborated with Anversa. By Sussman’s account, Anversa’s reputation for “pushing the envelope” and “challenging existing dogma” has generated some criticism. Others, however, say that the disputes run deeper—to doubts about a cell therapy his lab has developed and about the group’s scientific integrity. Anversa told Science he was unable to comment during the investigation.

“People are talking about this all the time—at every scientific meeting I go to,” says Charles Murry, a cardiovascular pathologist at the University of Washington, Seattle. “It’s of grave concern to people in the field, but it’s been frustrating,” because no information is available about BWH’s investigation. BWH would not comment for this article, other than to say that it addresses concerns about its researchers confidentially.

In April, however, the journal Circulation agreed to Harvard’s request to retract a 2012 paper on which Anversa is a corresponding author, citing “compromised” data. The Lancet also issued an “Expression of Concern” about a 2011 paper reporting results from a clinical trial, known as SCIPIO, on which Anversa collaborated. According to a notice from the journal, two supplemental figures are at issue.

For some, Anversa’s status has earned him the benefit of the doubt. “Obviously, this is very disconcerting,” says Timothy Kamp, a cardiologist at the University of Wisconsin, Madison, but “I would be surprised if it was an implication of a whole career of research.”

Throughout that career, Anversa has argued that the heart is a prolific, lifelong factory for new muscle cells. Most now accept the view that the adult heart can regenerate muscle, but many have sparred with Anversa over his high estimates for the rate of this turnover, which he maintained in the retracted Circulation paper.

Anversa’s group also pioneered a method of separating cells with potential regenerative abilities from other cardiac tissue based on the presence of a protein called c-kit. After publishing evidence that these cardiac c-kit+cells spur new muscle growth in rodent hearts, the group collaborated in the SCIPIO trial to inject them into patients with heart failure. In The Lancet, the scientists reported that the therapy was safe and showed modest ability to strengthen the heart—evidence that many found intriguing and provocative. Roberto Bolli, the cardiologist whose group at the University of Louisville in Kentucky ran the SCIPIO trial, plans to test c-kit+ cells in further clinical trials as part of the NIH-funded Cardiovascular Cell Therapy Research Network.

But others have been unable to reproduce the dramatic effects Anversa saw in animals, and some have questioned whether these cells really have stem cell–like properties. In May, a group led by Jeffery Molkentin, a molecular biologist at Cincinnati Children’s Hospital Medical Center in Ohio, published a paper in Nature tracing the genetic lineage of c-kit+ cells that reside in the heart. He concluded that although they did make new muscle cells, the number is “astonishingly low” and likely not enough to contribute to the repair of damaged hearts. Still, Molkentin says that he “believe[s] in their therapeutic potential” and that he and Anversa have discussed collaborating.

Now, an anonymous blogger claims that problems in the Anversa lab go beyond controversial findings. In a letter published on the blog Retraction Watch on 30 May, a former research fellow in the Anversa lab described a lab culture focused on protecting the c-kit+ cell hypothesis: “[A]ll data that did not point to the ‘truth’ of the hypothesis were considered wrong,” the person wrote. But another former lab member offers a different perspective. “I had a great experience,” says Federica Limana, a cardiovascular disease researcher at IRCCS San Raffaele Pisana in Rome who spent 2 years of her Ph.D. work with the group in 1999 and 2000, as it was beginning to investigate c-kit+ cells. “In that period, there was no such pressure” to produce any particular result, she says.

Accusations about the lab’s integrity, combined with continued silence from BWH, are deeply troubling for scientists who have staked their research on theories that Anversa helped pioneer. Some have criticized BWH for requesting retractions in the midst of an investigation. “Scientific reputations and careers hang in the balance,” Sussman says, “so everyone should wait until all facts are clearly and fully disclosed.”

 

II.  Trolling Along: Recent Commotion About Patent Trolls

July 17, 2014

PriceWaterhouseCoopers recently released a study about 2014 Patent Litigation. PwC’s ultimate conclusion was that case volume increased vastly and damages continue a general decline, but what’s making headlines everywhere is that “patent trolls” now account for 67% of all new patent lawsuits (see, e.g., Washington Post and Fast Company).

Surprisingly, looking at PwC’s study, the word “troll” is not to be found. So, with regard to patent trolls, what does this study really mean for companies, patent owners and casual onlookers?

First of all, who are these trolls?

“Patent Troll” is a label applied to patent owners who do not make or manufacture a product, or offer a service. Patent trolls live (and die) by suing others for allegedly practicing an invention that is claimed by their patents.

The politically correct term is Non-practicing Entity (NPE). PwC solely uses the term NPE, which it defines as an entity that does not have the capability to design, manufacture, or distribute products with features protected by the patent.

So, what’s so bad about them?

The common impression of an NPE is a business venture looking to collect and monetize assets (i.e., patents). In the most basic strategy, an NPE typically buys patents with broad claims that cover a wide variety of technologies and markets, and then sues a large group of alleged patent infringers in the hope of collecting a licensing royalty or a settlement. NPEs typically don’t want to spend money on a trial unless they have to, and one tactic uses settlements with smaller businesses to build a “war chest” for potential suits with larger companies.

NPEs initiating a lawsuit can be viewed positively, such as a just defense of the lowly inventor who sold his patent to someone (with deeper pockets) who could fund the litigation to protect the inventor’s hard work against a mega-conglomerate who ripped off his idea.

Or NPE litigation can be seen negatively, such as an attorney’s demand letter on behalf of an anonymous shell corporation to shake down dozens of five-figure settlements from all the local small businesses that have ever used a fax machine.

NPEs can waste a company’s valuable time and resources with lawsuits, yet also bring value to their patent portfolios by energizing a patent sales and licensing market. There are unscrupulous NPEs, but it’s hardly the black and white situation that some media outlets are depicting.

What did PwC say about trolls?

Well, the PwC study looked at the success rates and awards of patent litigation decisions. One conclusion is that damages awards for NPEs averaged more than triple those for practicing entities over the last four years. We’ll come back to this statistic.

Another key observation is that NPEs have been successful 25% of the time overall, versus 35% for practicing entities. This makes sense because of the burden of proof the NPEs carry as a plaintiff at trial and the relative lack of success for NPEs at summary judgment. However, PwC’s report states that both types of entities win about two-thirds of their trials.

But what about this “67% of all patent trials are initiated by trolls” discussion?

The 67% number comes from the RPX Corporation’s litigation report (produced January 2014) that quantified the percentage of NPE cases filed in 2013 as 67%, compared to 64% in 2012, 47% in 2011, 30% in 2010 and 28% in 2009.

PwC refers to the RPX statistics to accentuate that this new study indicates that only 20% of decisions in 2013 involved NPE-filed cases, so the general conclusion would be that NPE cases tend to settle or be dismissed prior to a court’s decision. Admittedly, this is indicative of the prevalent “spray and pray” strategy where NPEs prefer to collect many settlement checks from several “targets” and avoid the courtroom.

In this study, who else is an NPE?

If someone were looking to dramatize the role of “trolls,” the name can be thrown around liberally (and hurtfully) to anyone who owns and asserts a patent without offering a product or a service. For instance, colleges and universities fall under the NPE umbrella as their research and development often ends with a series of published papers rather than a marketable product on an assembly line.

In fact, PwC distinguishes universities and non-profits from companies and individuals within their NPE analysis, with only about 5% of the NPE cases from 1995 to 2013 being attributed to universities and non-profits. Almost 50% of the NPE cases are attributed to an “individual,” who could be the listed inventor for the patent or a third-party assignee.

The word “troll” is obviously a derogatory term used to connote greed and hiding (under a bridge), but the term has adopted a newer, meme-like status as trolls are currently depicted as lacking any contribution to society and merely living off of others’ misfortunes and fears. [Three Billy Goats Gruff]. This is not always the truth with NPEs (e.g., universities).

No one wants to be called a troll—especially in front of a jury—so we’ve even recently seen courts bar defendants from referring to NPEs as such colorful terms as a “corporate shell,” “bounty hunter,” “privateer,” or someone “playing the lawsuit lottery.” [Judge Koh Bans Use Of Term ” Patent Troll” In Apple Jury Trial]

Regardless of the portrayal of an NPE, most people in the patent world distinguish the “trolls” by the strength of the patent, merits of the alleged infringement and their behavior upon notification. Often these are expressed as “frivolity” of the case and “gamesmanship” of the attorneys. Courts are able to punish plaintiffs who bring frivolous claims against a party and state bar associations are tasked with monitoring the ethics of attorneys. The USPTO is tasked with working to strengthen the quality of patents.

What’s the take-away from this study regarding NPEs?

The study focuses on patent litigation that produced a decision; therefore the most important and relevant conclusion is that, over the last four years, average damages awards for NPEs are more than triple the damages for practicing entities. Everything else in these articles, such as the initiation of litigation by NPEs, settlement percentages, and the general behavior of patent trolls, is pure inference beyond the scope of the study.

This may sound sympathetic to trolls, but keep in mind that the study highlights that NPEs have more than triple the damages on average compared to practicing entities and it is meant to shock the reader a bit. One explanation for this is that NPEs are in the best position to choose the patents they want to assert and choose the targets they wish to sue—especially when the NPE is willing to ride that patent all the way to the end of a long, expensive trial. Sometimes settling is not an option. Chart 2b indicates that the disparity in the damages awarded to NPEs relative to practicing entities has always been big (since 2000), but perhaps going from two-fold from 2000 – 2009 to three times as much in the past 4 years indicates that NPEs are improving at finding patents and/or picking battles to take all the way to a court decision. More than anything, this seems to reflect the growth in the concept of patents as a business asset.

The PwC report is chock full of interesting patterns and trends of litigation results, so it’s a shame that the 67% number makes the headlines—far more interesting are the charts comparing success rates by 4-year periods (Chart 6b) or success rates for NPEs and practicing entities in front of a jury versus in front of a bench (Chart 6c), as well as other tables that reveal statistics for specific districts of the federal courts. Even the stats that look at the success rates of each type of NPE are telling because the reader sees that universities and non-profits have a higher success rate than non-practicing companies or individuals.

What do we do about the trolls?

The White House has recently called for Congress to do something about the trolls as horror stories of scams and shake-downs are shared. A bill was gaining momentum in the Senate, when Senator Leahy took it off the agenda in early July. That bill had miraculously passed 325-91 in the House and President Obama was willing to sign it if the Senate were to pass it. The bill was opposed by trial attorneys, universities, and bio-pharmaceutical businesses who felt as though the law would severely inhibit everyone’s access to the courts in order to hinder just the trolls. Regardless, most people think that the sitting Congressmen merely wanted a “win” prior to the mid-term elections and that patent reform is unlikely to reappear until next term.

In the meantime, the Supreme Court has recently reiterated rules concerning attorney fee-shifting on frivolous patent cases, as well as clarifying the validity of software patents. Time will tell if these changes have any effects on the damages awards that PwC’s study examined or even if they cause a chilling of the number of patent lawsuit filings.

Furthermore, new ways to challenge the validity of asserted patents have been initiated via the America Invents Act. For example, the Inter Partes Review (IPR) has yielded frightening preliminary statistics as to slowing, if not killing, patents that have been asserted in a suit. While these administrative trials are not cheap, many view these new tools at the Patent Trial and Appeals Board as anti-troll measures. It will be interesting to watch how the USPTO implements these procedures in the near future, especially while former Google counsel, Acting Director Michelle K. Lee, oversees the office.

In the private sector, Silicon Valley has recently seen a handful of tech companies come together as the License on Transfer Network, a group hoping to disarm the “Patent Assertion Entities.” Joining the LOT Network comes via an agreement that creates a license for use of a patent by anyone in the LOT network once that patent is sold. The thought is that the NPEs who consider purchasing patents from companies in the LOT Network will have fewer companies to sue since the license to the other active LOT participants will have triggered upon the transfer and, thus, the NPE will not be as inclined to “troll.” For instance, if a member-company such as Google were to sell a patent to a non-member company and an NPE bought that patent, the NPE would not be able to sue any members of the LOT Network with that patent.

Other notes

NPEs are only as evil as the people who run them—that being said, there are plenty of horror stories of small businesses receiving phantom demand letters that threaten a patent infringement suit without identifying themselves or the patent. This is an out-and-out scam and a plague on society that results in wasted time and resources, and inevitably higher prices on the consumer end.

It is a sin and a shame that patent rights can be misused in scams and shake-downs of businesses around us, but there is a reason that U.S. courts are so often used to defend patent rights. The PwC study, at minimum, reflects the high stakes of the patent market and perhaps the fragility. Nevertheless, merely monitoring the courts may not keep the trolls at bay.

I’d love to hear your thoughts.

*This is provided for informational purposes only, and does not constitute legal or financial advice. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney or patent agent.

 

III. Large-scale analysis finds majority of clinical trials don’t provide meaningful evidence


04 May 2012

DURHAM, N.C.— The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing the high-quality evidence needed to guide medical decision-making. The analysis, published today in JAMA, found that the majority of clinical trials are small, and that there are significant differences among methodological approaches, including randomizing, blinding and the use of data monitoring committees.

“Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high quality evidence to ensure confidence in guideline recommendations,” said Robert Califf, M.D., first author of the paper, vice chancellor for clinical research at Duke University Medical Center, and director of the Duke Translational Medicine Institute.

The analysis was conducted by the Clinical Trials Transformation Initiative (CTTI), a public-private partnership founded by the Food and Drug Administration (FDA) and Duke. It extends the usability of the data in ClinicalTrials.gov for research by placing the data through September 27, 2010 into a database structured to facilitate aggregate analysis. This publicly accessible database facilitates the assessment of the clinical trials enterprise in a more comprehensive manner than ever before and enables the identification of trends by study type.

 

The National Library of Medicine (NLM), a part of the National Institutes of Health, developed and manages ClinicalTrials.gov. This site maintains a registry of past, current, and planned clinical research studies.

“Since 2007, the Food and Drug Administration Amendment Act has required registration of clinical trials, and the expanded scope and rigor of trial registration policies internationally is producing more complete data from around the world,” stated Deborah Zarin, MD, director, ClinicalTrials.gov, and assistant director for clinical research projects, NLM. “We have amassed over 120,000 registered clinical trials. This rich repository of data has a lot to say about the national and international research portfolio.”

This CTTI project was a collaborative effort by informaticians, statisticians and project managers from NLM, FDA and Duke. CTTI comprises more than 60 member organizations with the goal of identifying practices that will improve the quality and efficiency of clinical trials.

“Since the ClinicalTrials.gov registry contains studies sponsored by multiple entities, including government, industry, foundations and universities, CTTI leaders recognized that it might be a valuable source for benchmarking the state of the clinical trials enterprise,” stated Judith Kramer, MD, executive director of CTTI.

The project goal was to produce an easily accessible database incorporating advances in informatics to permit a detailed characterization of the body of clinical research and facilitate analysis of groups of studies by therapeutic areas, by type of sponsor, by number of participants and by many other parameters.

“Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others,” says Califf. “For example, 96% of clinical trials have ≤1000 participants, and 62% have ≤ 100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors and consumers about the choices they must make to prevent and treat disease.”

The analysis showed heterogeneity in median trial size, with cardiovascular trials tending to be twice as large as those in oncology and trials in mental health falling in the middle. It also showed major differences in the use of randomization, blinding, and data monitoring committees, critical issues often used to judge the quality of evidence for medical decisions in clinical practice guidelines and systematic overviews.
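As a rough illustration of the kind of aggregate summary quoted above (96% of trials with ≤1,000 participants, 62% with ≤100), the sketch below computes enrollment shares from a flat export of registered trials. This is a hedged sketch only: the column names ("study_type", "enrollment") and the CSV layout are assumptions for illustration, not the schema of the CTTI database described in the article.

```python
# Minimal sketch: summarizing enrollment in a ClinicalTrials.gov-style export.
# Column names and file layout are hypothetical; the actual CTTI database is a
# structured relational database, not a single CSV.
import pandas as pd

def enrollment_summary(csv_path: str) -> dict:
    """Return the share of interventional trials at or below two enrollment cutoffs."""
    df = pd.read_csv(csv_path)
    trials = df[df["study_type"] == "Interventional"].dropna(subset=["enrollment"])
    n = len(trials)
    return {
        "share_le_1000": (trials["enrollment"] <= 1000).sum() / n,
        "share_le_100": (trials["enrollment"] <= 100).sum() / n,
    }

# Example usage (hypothetical file):
# print(enrollment_summary("registered_trials.csv"))
```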

“These results reinforce the importance of exploration, analysis and inspection of our clinical trials enterprise,” said Rachel Behrman Sherman, MD, associate director for the Office of Medical Policy at the FDA’s Center for Drug Evaluation and Research. “Generation of this evidence will contribute to our understanding of the number of studies in different phases of research, the therapeutic areas, and ways we can improve data collection about clinical trials, eventually improving the quality of clinical trials.”


IV.  Lawmakers urge CMS to extend MU hardship exemption for pathologists

 

Eighty-nine members of Congress have asked the Centers for Medicare & Medicaid Services to give pathologists a break and extend the hardship exemption they currently enjoy for all of Stage 3 of the Meaningful Use program. In the letter, dated July 10 and addressed to CMS Administrator Marilyn Tavenner, the lawmakers point out that CMS had recognized in its 2012 final rule implementing Stage 2 of the program that it was difficult for pathologists to meet the Meaningful Use requirements and granted a one-year exception for 2015, the first year that penalties will be imposed. They now are asking that the exception be expanded to include the full five-year maximum allowed under the American Recovery and Reinvestment Act.

“Pathologists have limited direct contact with patients and do not operate in EHRs,” the letter states. “Instead, pathologists use sophisticated computerized laboratory information systems (LISs) to support the work of analyzing patient specimens and generating test results. These LISs exchange laboratory and pathology data with EHRs.”

Interestingly, the lawmakers’ exemption request is only on behalf of pathologists, even though CMS had granted the one-year hardship exception to pathologists, radiologists and anesthesiologists.

Rep. Tom Price (R-Ga.), one of the members spearheading the letter, had also introduced a bill (H.R. 1309) in March 2013 that would exclude pathologists from the incentives and penalties of the Meaningful Use program. The bill, which has 31 cosponsors, is currently sitting in committee. That bill also does not include relief for radiologists or anesthesiologists.

CMS has provided some flexibility about the hardship exceptions in the past, most recently by allowing providers to apply for one due to EHR vendor delays in upgrading to Stage 2 of the program.

However, CMS also noted in the 2012 rule granting the one-year exception that it was granting the exception in large part because of the then-current lack of health information exchange and that “physicians in these three specialties should not expect that this exception will continue indefinitely, nor should they expect that we will grant the exception for the full 5-year period permitted by statute.”

To learn more:
– read the letter (.pdf)


USPTO Guidance On Patentable Subject Matter

Curator and Reporter: Larry H Bernstein, MD, FCAP

LH Bernstein

Revised 4 July, 2014

https://pharmaceuticalintelligence.com/2014/07/03/uspto-guidance-on-patentable-subject-matter

 

I came across a few recent articles on the subject of US Patent Office guidance on patentability, as well as on Supreme Court rulings on claims. I filed several patents on clinical laboratory methods early in my career on the recommendation of my brother-in-law, now deceased. Years later, with both my brother-in-law and my patent attorney gone, I look back and ask what I have learned, more than $100,000 later, with many trips to the USPTO, opportunities not taken, and a one-year provisional patent application behind me.

My conclusion is

(1) that patents are for the protection of the innovator, who might realize legal protection, but the cost and the time investment can well exceed the cost of founding and building a small startup enterprise, which would be the next step.

(2) The other thing to consider is the capability of the lawyer or firm that represents you.  A patent that is well done can be expected to take 5-7 years to go through with due diligence.   I would not expect it to be done well by a university with many other competing demands. I might be wrong in this respect, as the climate has changed, and research universities have sprouted engines for change.  Experienced and productive faculty are encouraged or allowed to form their own such entities.

(3) The emergence of Big Data, computational biology, and very large data warehouses for data use and integration has changed the landscape. The resources required for an individual to pursue research along these lines are quite beyond an individual's sole capacity to carry forward successfully without outside funding. In addition, the changed requirement regarding first to publish has muddied the waters.

Of course, one can propose without anything published in the public domain. That makes it possible for corporate entities to file thousands of patents, whether or not there is actual validation at the time of filing. It would be quite a trying experience for anyone to pursue in the USPTO without some litigation over ownership of patent rights. At this stage of technology development, I have come to realize that the organization of research, peer review, and archiving of data is still at a stage where some of the best systems available for storing and accessing data still come considerably short of what is needed for the most complex tasks, even though improvements have come at an exponential pace.

I shall not comment on the contested views held by physicists, chemists, biologists, and economists over the completeness of guiding theories strongly held.  Only history will tell.  Beliefs can hold a strong sway, and have many times held us back.

I am not an expert on legal matters, but it is incomprehensible to me that issues concerning technology innovation can be adjudicated in the Supreme Court, as has occurred in recent years. I have postgraduate degrees in Medicine and Developmental Anatomy, post-medical training in pathology and laboratory medicine, and experience in analytical and research biochemistry. It is beyond the competencies expected of the Supreme Court, or even the Federal District Courts, to hear these types of cases, yet we see them with increasing frequency, as has occurred with respect to the development and application of the human genome.

I am not sure that these developments can be resolved for the public good without fuller development of an open-access system of publishing. Below I present some recent publications about, or published by, the USPTO.

Dr. Anthony Melvin Crasto – Organic Chemistry and New Drug Development

USPTO Guidance On Patentable Subject Matter: Impediment to Biotech Innovation

Joanna T. Brougher and David A. Fazzolare. J Commercial Biotechnology 2014; 20(3).


Abstract: In June 2013, the U.S. Supreme Court issued a unanimous decision upending more than three decades' worth of established patent practice when it ruled that isolated gene sequences are no longer patentable subject matter under 35 U.S.C. Section 101. While many practitioners in the field believed that the USPTO would interpret the decision narrowly, the USPTO actually expanded the scope of the decision when it issued its guidelines for determining whether an invention satisfies Section 101.

The guidelines were met with intense backlash, with many arguing that they unnecessarily expanded the scope of the Supreme Court cases in a way that could unduly restrict the scope of patentable subject matter, weaken the U.S. patent system, and create a disincentive to innovation. By undermining patentable subject matter in this way, the guidelines may end up harming not only the companies that patent medical innovations, but also the patients who need medical care. This article examines the guidelines and their impact on various technologies.

Keywords:   patent, patentable subject matter, Myriad, Mayo, USPTO guidelines

Full Text: PDF

References

35 U.S.C. Section 101 states “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Prometheus Laboratories, Inc. v. Mayo Collaborative Services, 566 U.S. ___ (2012)

Association for Molecular Pathology et al., v. Myriad Genetics, Inc., 569 U.S. ___ (2013).

Parke-Davis & Co. v. H.K. Mulford Co., 189 F. 95, 103 (C.C.S.D.N.Y. 1911)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.

http://www.uspto.gov/patents/law/exam/myriad-mayo_guidance.pdf

Funk Brothers Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 131 (1948)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.

http://www.uspto.gov/patents/law/exam/myriad-mayo_guidance.pdf

Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/

Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/

DOI: http://dx.doi.org/10.5912/jcb664

 

Science 4 July 2014; 345 (6192): pp. 14-15  DOI: http://dx.doi.org/10.1126/science.345.6192.14
  • IN DEPTH

INTELLECTUAL PROPERTY

Biotech feels a chill from changing U.S. patent rules

A 2013 Supreme Court decision that barred human gene patents is scrambling patenting policies.


A year after the U.S. Supreme Court issued a landmark ruling that human genes cannot be patented, the biotech industry is struggling to adapt to a landscape in which inventions derived from nature are increasingly hard to patent. It is also pushing back against follow-on policies proposed by the U.S. Patent and Trademark Office (USPTO) to guide examiners deciding whether an invention is too close to a natural product to deserve patent protection. Those policies reach far beyond what the high court intended, biotech representatives say.

“Everything we took for granted a few years ago is now changing, and it’s generating a bit of a scramble,” says patent attorney Damian Kotsis of Harness Dickey in Troy, Michigan, one of more than 15,000 people who gathered here last week for the Biotechnology Industry Organization’s (BIO’s) International Convention.

At the meeting, attorneys and executives fretted over the fate of patent applications for inventions involving naturally occurring products—including chemical compounds, antibodies, seeds, and vaccines—and traded stories of recent, unexpected rejections by USPTO. Industry leaders warned that the uncertainty could chill efforts to commercialize scientific discoveries made at universities and companies. Some plan to appeal the rejections in federal court.

USPTO officials, meanwhile, implored attendees to send them suggestions on how to clarify and improve its new policies on patenting natural products, and even announced that they were extending the deadline for public comment by a month. “Each and every one of you in this room has a moral duty … to provide written comments to the PTO,” patent lawyer and former USPTO Deputy Director Teresa Stanek Rea told one audience.

At the heart of the shake-up are two Supreme Court decisions: the ruling last year in Association for Molecular Pathology v. Myriad Genetics Inc. that human genes cannot be patented because they occur naturally (Science, 21 June 2013, p. 1387); and the 2012 Mayo v. Prometheus decision, which invalidated a patent on a method of measuring blood metabolites to determine drug doses because it relied on a “law of nature” (Science, 12 July 2013, p. 137).

Myriad and Mayo are already having a noticeable impact on patent decisions, according to a study released here. It examined about 1000 patent applications that included claims linked to natural products or laws of nature that USPTO reviewed between April 2011 and March 2014. Overall, examiners rejected about 40%; Myriad was the basis for rejecting about 23% of the applications, and Mayo about 35%, with some overlap, the authors concluded. That rejection rate would have been in the single digits just 5 years ago, asserted Hans Sauer, BIO’s intellectual property counsel, at a press conference. (There are no historical numbers for comparison.) The study was conducted by the news service Bloomberg BNA and the law firm Robins, Kaplan, Miller & Ciresi in Minneapolis, Minnesota.
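If the three percentages in the study described above are read as shares of the same set of examined applications, and every rejection cited Myriad, Mayo, or both, then the size of the overlap follows from inclusion-exclusion. The assumption that no rejections rested on other grounds is mine; the study itself says only that there was "some overlap."

```python
# Back-of-envelope check, assuming all three figures share the same denominator
# (the ~1,000 examined applications) and every rejection cited at least one decision.
rejected_total = 0.40   # share of applications rejected
cited_myriad = 0.23     # share rejected citing Myriad
cited_mayo = 0.35       # share rejected citing Mayo

overlap = cited_myriad + cited_mayo - rejected_total  # inclusion-exclusion
print(f"Implied share citing both decisions: {overlap:.0%}")  # about 18%
```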

USPTO is extending the decisions far beyond diagnostics and DNA?

The numbers suggest USPTO is extending the decisions far beyond diagnostics and DNA, attorneys say. Harness Dickey’s Kotsis, for example, says a client recently tried to patent a plant extract with therapeutic properties; it was different from anything in nature, Kotsis argued, because the inventor had altered the relative concentrations of key compounds to enhance its effect. Nope, decided USPTO, too close to nature.

In March, USPTO released draft guidance designed to help its examiners decide such questions, setting out 12 factors for them to weigh. For example, if an examiner deems a product “markedly different in structure” from anything in nature, that counts in its favor. But if it has a “high level of generality,” it gets dinged.

The draft has drawn extensive criticism. “I don’t think I’ve ever seen anything as complicated as this,” says Kevin Bastian, a patent attorney at Kilpatrick Townsend & Stockton in San Francisco, California. “I just can’t believe that this will be the standard.”

USPTO officials appear eager to fine-tune the draft guidance, but patent experts fear the Supreme Court decisions have made it hard to draw clear lines. “The Myriad decision is hopelessly contradictory and completely incoherent,” says Dan Burk, a law professor at the University of California, Irvine. “We know you can’t patent genetic sequences,” he adds, but “we don’t really know why.”

Get creative in using Draft Guidelines!

For now, Kotsis says, applicants will have to get creative to reduce the chance of rejection. Rather than claim protection for a plant extract itself, for instance, an inventor could instead patent the steps for using it to treat patients. Other biotech attorneys may try to narrow their patent claims. But there’s a downside to that strategy, they note: Narrower patents can be harder to protect from infringement, making them less attractive to investors. Others plan to wait out the storm, predicting USPTO will ultimately rethink its guidance and ease the way for new patents.

 

Public comment period extended

USPTO has extended the deadline for public comment to 31 July, with no schedule for issuing final language. Regardless of the outcome, however, Stanek Rea warned a crowd of riled-up attorneys that, in the world of biopatents, “the easy days are gone.”

 

United States Patent and Trademark Office

Today we published and made electronically available a new edition of the Manual of Patent Examining Procedure (MPEP).

Manual of Patent Examining Procedure – uspto.gov: http://www.uspto.gov/web/offices/pac/mpep/index.html

Summary of Changes

PDF – Title Page
PDF – Foreword
PDF – Introduction
PDF – Table of Contents
PDF – Chapter 600 – Parts, Form, and Content of Application
PDF – Chapter 700 – Examination of Applications
PDF – Chapter 800 – Restriction in Applications Filed Under 35 U.S.C. 111; Double Patenting
PDF – Chapter 900 – Prior Art, Classification, and Search
PDF – Chapter 1000 – Matters Decided by Various U.S. Patent and Trademark Office Officials
PDF – Chapter 1100 – Statutory Invention Registration (SIR); Pre-Grant Publication (PGPub) and Preissuance Submissions
PDF – Chapter 1200 – Appeal
PDF – Chapter 1300 – Allowance and Issue
PDF – Appendix L – Patent Laws
PDF – Appendix R – Patent Rules
PDF – Appendix P – Paris Convention
PDF – Subject Matter Index
PDF – Zipped version of the MPEP current revision in the PDF format

Manual of Patent Examining Procedure (MPEP), Ninth Edition, March 2014

The USPTO continues to offer an online discussion tool for commenting on selected chapters of the Manual. To participate in the discussion and to contribute your ideas go to:
http://uspto-mpep.ideascale.com.


Note: For current fees, refer to the Current USPTO Fee Schedule.
Consolidated Laws – The patent laws in effect as of May 15, 2014.
Consolidated Rules – The patent rules in effect as of May 15, 2014.
MPEP Archives (1948 – 2012)
Current MPEP: Searchable MPEP

The documents updated in the Ninth Edition of the MPEP, dated March 2014, include changes that became effective in November 2013 or earlier.
All of the documents have been updated for the Ninth Edition except Chapters 800, 900, 1000, 1300, 1700, 1800, 1900, 2000, 2300, 2400, 2500, and Appendix P.
More information about the changes and updates is available from the “Blue Page – Introduction” of the Searchable MPEP or from the “Summary of Changes” link to the HTML and PDF versions provided below.

Discuss the Manual of Patent Examining Procedure (MPEP)

Welcome to the MPEP discussion tool!

We have received many thoughtful ideas on Chapters 100-600 and 1800 of the MPEP as well as on how to improve the discussion site. Each and every idea submitted by you, the participants in this conversation, has been carefully reviewed by the Office, and many of these ideas have been implemented in the August 2012 revision of the MPEP and many will be implemented in future revisions of the MPEP. The August 2012 revision is the first version provided to the public in a web based searchable format. The new search tool is available at http://mpep.uspto.gov. We would like to thank everyone for participating in the discussion of the MPEP.

We have some great news! Chapters 1300, 1500, 1600 and 2400 of the MPEP are now available for discussion. Please submit any ideas and comments you may have on these chapters. Also, don’t forget to vote on ideas and comments submitted by other users. As before, our editorial staff will periodically be posting proposed new material for you to respond to, and in some cases will post responses to some of the submitted ideas and comments.

Recently, we have received several comments concerning the Leahy-Smith America Invents Act (AIA). Please note that comments regarding the implementation of the AIA should be submitted to the USPTO via email to aia_implementation@uspto.gov or via postal mail, as indicated at the America Invents Act Web site. Additional information regarding the AIA is available at www.uspto.gov/americainventsact. We have also received several comments suggesting policy changes, which have been routed to the appropriate offices for consideration. We really appreciate your thinking and recommendations!

FDA Guidance for Industry: Electronic Source Data in Clinical Investigations


The FDA published its new Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” in September 2013.
The Guidance defines the expectations of the FDA concerning electronic source data generated in the context of clinical trials. Find out more about this Guidance.
http://www.gmp-compliance.org/enews_4288_FDA%20Guidance%20for%20Industry%3A%20Electronic%20Source%20Data%20in%20Clinical%20Investigations_8534,8457,8366,8308,Z-COVM_n.html

After more than 5 years and two draft versions, the final version of the Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” was published in September 2013. This new FDA Guidance defines the FDA's expectations for sponsors, CROs, investigators and other persons involved in the capture, review and retention of electronic source data generated in the context of FDA-regulated clinical trials.

In an effort to encourage the modernization and increased efficiency of processes in clinical trials, the FDA clearly supports the capture of electronic source data and emphasizes the agency's intention to support activities aimed at ensuring the reliability, quality, integrity and traceability of this source data, from its electronic source to the electronic submission of the data in the context of an authorization procedure. The Guidance addresses aspects such as data capture, data review and record retention. When the computerized systems used in clinical trials are described, the FDA recommends that the description not only focus on the intended use of the system, but also on data protection measures and the flow of data across system components and interfaces. In practice, the pharmaceutical industry needs to meet significant requirements regarding organisation, planning, specification and verification of computerized systems in the field of clinical trials. The FDA also mentions in the Guidance that it does not intend to apply 21 CFR Part 11 to electronic health records (EHR).

Author: Oliver Herrmann, Q-Infiity
Source: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf
Webinar: https://collaboration.fda.gov/p89r92dh8wc
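The guidance's emphasis on traceability (who originated a data element, when it was captured, and whether it was reviewed) lends itself to a simple record structure. The sketch below is purely illustrative: the field names are my own and are not terminology defined by the FDA guidance or by 21 CFR Part 11.

```python
# Illustrative only: one way to attach traceability metadata to an electronically
# captured data element, reflecting the general themes of the guidance (capture,
# review, retention). Field names are the author's invention, not FDA terminology.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class SourceDataElement:
    subject_id: str               # pseudonymized trial subject
    element_name: str             # e.g. "systolic_blood_pressure"
    value: str
    originator: str               # person, device, or system that first recorded the value
    captured_at: datetime         # capture timestamp (UTC)
    reviewed_by: Optional[str] = None   # investigator review, if performed

record = SourceDataElement(
    subject_id="SUBJ-001",
    element_name="systolic_blood_pressure",
    value="128 mmHg",
    originator="clinic_device_07",
    captured_at=datetime.now(timezone.utc),
)
print(record)
```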

 



Larry H Bernstein, MD, FCAP, Author and Curator

Proteomics – The Pathway to Understanding and Decision-making in Medicine
https://pharmaceuticalintelligence.com/2014/06/22/

This dialogue is a series of discussions introducing several perspectives on proteomics discovery, an emerging scientific enterprise in the -OMICS- family of disciplines that aims to clarify many of the challenges in understanding disease, aiding diagnosis, and guiding treatment decisions. Beyond that focus, it will contribute to personalized medical treatment by facilitating the identification of treatment targets for the pharmaceutical industry. Despite enormous advances in genomics research over the last two decades, there is still a problem in reaching the anticipated goals for introducing new targeted treatments: repeated failures in phase III clinical trials, and even when success has been achieved, it is often temporary. The other problem has been the toxicity of agents widely used in chemotherapy. Even though the genomic approach brings relief from the toxicity issues found with organic chemistry derivative blocking reactions, specificity for the target cell without an effect on normal cells has been elusive.

This is not confined to cancer chemotherapy; it can also be seen in pain medication, and it has been a growing problem in antimicrobial therapy. The stumbling block has been the inability to manage a multiplicity of reactions that also have to be modulated in a changing environment, based on the 3-dimensional structure of proteins, pH changes, ionic balance, micro- and macrovascular circulation, and protein-protein and protein-membrane interactions. There is reason to consider that the present problems can be overcome through much better modification of target cellular metabolism as we peel away the confounding and blinding factors with multivariable control of these imbalances, like removing the skin of an onion.

This is the first of a series of articles, and for convenience we shall here emphasize only the progress in applying proteomics to cardiovascular disease.

growth in funding proteomics 1990-2010

Part I. Panomics: Decoding Biological Networks (Clinical OMICs 2014; 5)

Technological advances such as high-throughput sequencing are transforming medicine from symptom-based diagnosis and treatment to personalized medicine as scientists employ novel rapid genomic methodologies to gain a broader comprehension of disease and disease progression. As next-generation sequencing becomes more rapid, researchers are turning toward large-scale pan-omics, the collective use of all omics such as genomics, epigenomics, transcriptomics, proteomics, metabolomics, lipidomics and lipoprotein proteomics, to better understand, identify, and treat complex disease.

Genomics has been a cornerstone in understanding disease, and the sequencing of the human genome has led to the identification of numerous disease biomarkers through genome-wide association studies (GWAS). The goal of these studies was that the biomarkers would serve to predict individual disease risk, enable early detection of disease, help make treatment decisions, and identify new therapeutic targets. In reality, however, only a few have gone on to become established in clinical practice. For example, in human GWAS for heart failure, at least 35 biomarkers have been identified, but only the natriuretic peptides have moved into clinical practice, where they are used primarily as a diagnostic tool.

Proteomics Advances Will Rival the Genetics Advances of the Last Ten Years

Seventy percent of the decisions made by physicians today are influenced by results of diagnostic tests, according to N. Leigh Anderson, founder of the Plasma Proteome Institute and CEO of SISCAPA Assay Technologies. Imagine the changes that will come about when future diagnostics tests are more accurate, more useful, more economical, and more accessible to healthcare practitioners. For Dr. Anderson, that’s the promise of proteomics, the study of the structure and function of proteins, the principal constituents of the protoplasm of all cells.

In explaining why proteomics is likely to have such a major impact, Dr. Anderson starts with a major difference between the genetic testing common today, and the proteomic testing that is fast coming on the scene. “Most genetic tests are aimed at measuring something that’s constant in a person over his or her entire lifetime. These tests provide information on the probability of something happening, and they can help us understand the basis of various diseases and their potential risks. What’s missing is, a genetic test is not going to tell you what’s happening to you right now.”

Mass Spec-Based Multiplexed Protein Biomarkers

Clinical proteomics applications rely on the translation of targeted protein quantitation technologies and methods to develop robust assays that can guide diagnostic, prognostic, and therapeutic decision-making. The development of a clinical proteomics-based test begins with the discovery of disease-relevant biomarkers, followed by validation of those biomarkers.

“In common practice, the discovery stage is performed on a MS-based platform for global unbiased sampling of the proteome, while biomarker qualification and clinical implementation generally involve the development of an antibody-based protocol, such as the commonly used enzyme linked ELISA assays,” state López et al. in Proteome Science (2012; 10: 35–45). “Although this process is potentially capable of delivering clinically important biomarkers, it is not the most efficient process as the latter is low-throughput, very costly, and time-consuming.”

Part II.  Proteomics for Clinical and Research Use: Combining Protein Chips, 2D Gels and Mass Spectrometry in 

The next Step: Exploring the Proteome: Translation and Beyond

N. Leigh Anderson, Ph.D., Chief Scientific Officer, Large Scale Proteomics Corporation

Three streams of technology will play major roles in quantitative (expression) proteomics over the coming decade. Two-dimensional electrophoresis and mass spectrometry represent well-established methods for, respectively, resolving and characterizing proteins, and both have now been automated to enable the high-throughput generation of data from large numbers of samples.

These methods can be powerfully applied to discover proteins of interest as diagnostics, small molecule therapeutic targets, and protein therapeutics. However, neither offers a simple, rapid, routine way to measure many proteins in common samples like blood or tissue homogenates.

Protein chips do offer this possibility, and thus complete the triumvirate of technologies that will deliver the benefits of proteomics to both research and clinical users. Integration of efforts in all three approaches is discussed, highlighting the application of the Human Protein Index® database as a source of protein leads.


N. Leigh Anderson, Ph.D., is Chief Scientific Officer of the Proteomics subsidiary of Large Scale Biology Corporation (LSBC). Dr. Anderson obtained his B.A. in Physics with honors from Yale and a Ph.D. in Molecular Biology from Cambridge University (England), where he worked with M. F. Perutz as a Churchill Fellow at the MRC Laboratory of Molecular Biology. Subsequently he co-founded the Molecular Anatomy Program at the Argonne National Laboratory (Chicago), where his work in the development of 2-D electrophoresis and molecular database technology earned him, among other distinctions, the American Association for Clinical Chemistry's Young Investigator Award for 1982, the 1983 Pittsburgh Analytical Chemistry Award, the 2008 AACC Outstanding Research Award, and the 2013 National Science Medal.

In 1985 Dr. Anderson co-founded LSBC in order to pursue commercial development and large-scale applications of 2-D electrophoretic protein mapping technology. This effort has resulted in a large-scale proteomics analytical facility supporting research work for LSBC and its pharmaceutical industry partners. Dr. Anderson's current primary interests are in the automation of proteomics technologies and the expansion of LSBC's proteomics databases describing drug effects and disease processes in vivo and in vitro. Large Scale Biology went public in August 2000.

Part II. Plasma Proteomics: Lessons in Biomarkers and Diagnostics

Exposome Workshop
N Leigh Anderson
Washington 8 Dec 2011

QUESTIONS AND LESSONS:

CLINICAL DIAGNOSTICS AS A MODEL FOR EXPOSOME INDICATORS
TECHNOLOGY OPTIONS FOR MEASURING PROTEIN RESPONSES TO EXPOSURES
SCALE OF THE PROBLEM: EXPOSURE SIGNALS VS POPULATION NOISE

The Clinical Plasma Proteome
• Plasma and serum are the dominant non-invasive clinical sample types – standard materials for in vitro diagnostics (IVD)
• Proteins measured in clinically-available tests in the US:
– 109 proteins via FDA-cleared or approved tests
• Clinical test costs range from $9 (albumin) to $122 (Her2)
• 90% of those ever approved are still in use
– 96 additional proteins via laboratory-developed tests (not FDA cleared or approved)
– Total: 205 proteins (≅ products of 211 genes, excluding Igs)
• Clinically applied proteins thus account for
– about 1% of the baseline human proteome (1 gene : 1 protein)
– about 10% of the 2,000+ proteins observed in deep discovery plasma proteome datasets
(a back-of-envelope check of these two ratios follows below)
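Taking roughly 20,000 protein-coding genes as the baseline "1 gene : 1 protein" proteome (an assumption on my part; the slide states only the ratios) gives about 1%, and the 2,000+ proteins seen in deep discovery plasma datasets give about 10%.

```python
# Rough arithmetic behind the slide's percentages; the 20,000-gene baseline is assumed.
clinically_used_proteins = 205
baseline_proteome = 20_000        # assumed 1 gene : 1 protein
deep_discovery_plasma = 2_000     # proteins observed in deep discovery plasma datasets

print(f"{clinically_used_proteins / baseline_proteome:.1%} of the baseline proteome")     # ~1.0%
print(f"{clinically_used_proteins / deep_discovery_plasma:.1%} of deep discovery sets")   # ~10.3%
```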

“New” Protein Diagnostics Are FDA-Cleared at a Rate of ~1.5/yr:
Insufficient to Meet Dx or Rx Development Needs

FDA clearance of protein diagnostics

A Major Technology Gulf Exists Between Discovery Proteomics and Routine Diagnostic Platforms

Two Streams of Proteomics

A. Problem / Technology
Basic biology: maximum proteome coverage (including PTMs, splices) to provide unbiased discovery of mechanistic information
• Critical: depth and breadth
• Not critical: cost, throughput, quantitative precision

B. Discovery proteomics
Specialized proteomics field, large groups, complex workflows and informatics

Part III.  Addressing the Clinical Proteome with Mass Spectrometric Assays

N. Leigh Anderson, PhD, SISCAPA Assay Technologies, Inc.

protein changes in biological mechanisms

No Increase in FDA Cleared Protein Tests in 20 yr

“New” Protein Tests in Plasma Are FDA-Cleared at a Rate of ~1.5/yr:
Insufficient to Meet Dx or Rx Development Needs

See figure above

An Explanation: the Biomarker Pipeline is Blocked at the Verification Step

Immunoassay Weaknesses Impact Biomarker Verification

1) Specificity: what actually forms the immunoassay sandwich – or prevents its formation – is not directly visualized

2) Cost: an assay developed to FDA-approvable quality costs $2-5M per protein

Major Plasma Proteins

Immunoassay vs Hybrid MS-based assays

MASS SPECTROMETRY: MRMs provide what is missing in IMMUNOASSAYS:

– SPECIFICITY
– INTERNAL STANDARDIZATION
– MULTIPLEXING
– RAPID CONFIGURATION, PROVIDED A PROTEIN CAN ACT LIKE A SMALL MOLECULE

MRM of Proteotypic Tryptic Peptides Provides Highly Specific Assays for Proteins > 1 µg/mL in Plasma

Peptide-Level MS Provides High Structural Specificity
Multiple Reaction Monitoring (MRM) Quantitation
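Picking candidate proteotypic peptides for an MRM assay usually starts with an in-silico tryptic digest of the target protein. The sketch below uses the simple cleavage rule "after K or R, not before P" and an invented demonstration sequence; it is a hedged illustration of the general idea, not the assay-design software used by any of the groups cited here, and real workflows also check peptide uniqueness against the whole proteome and empirical MS behavior.

```python
# Minimal sketch of in-silico tryptic digestion for candidate proteotypic peptides.
# The demo sequence is invented; real assay design adds uniqueness and MS-behavior checks.
import re

def tryptic_peptides(sequence: str, min_len: int = 6, max_len: int = 25) -> list[str]:
    """Split a protein sequence after K/R (not before P) and keep MRM-sized peptides."""
    peptides = re.split(r"(?<=[KR])(?!P)", sequence)
    return [p for p in peptides if min_len <= len(p) <= max_len]

demo_sequence = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGEEHFK"  # illustrative only
print(tryptic_peptides(demo_sequence))
```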

ADDRESSING MRM LIMITATIONS VIA SPECIFIC ENRICHMENT OF ANALYTE PEPTIDES: SISCAPA

– SENSITIVITY
– THROUGHPUT (LC-MS/MS CYCLE TIME)

SISCAPA combines best features of immuno and MS

SISCAPA Process Schematic Diagram
Stable Isotope-labeled Standards with Capture on Anti-Peptide Antibodies

An automated process for SISCAPA targeted protein quantitation utilizes high-affinity capture antibodies that are immobilized on magnetic beads.
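The quantitation step implied by this workflow compares the endogenous ("light") peptide to a known amount of spiked stable-isotope-labeled ("heavy") standard via the MRM peak-area ratio. The sketch below shows that ratio calculation only, with invented numbers; actual SISCAPA assays use calibration curves and quality checks beyond this single ratio.

```python
# Minimal sketch of light/heavy ratio quantitation; numbers are invented for illustration.
def peptide_concentration(light_area: float, heavy_area: float,
                          heavy_spike_fmol: float, sample_volume_ul: float) -> float:
    """Return the endogenous peptide concentration in fmol/µL from an MRM peak-area ratio."""
    ratio = light_area / heavy_area
    return ratio * heavy_spike_fmol / sample_volume_ul

# Example: 50 fmol heavy standard spiked into 10 µL of plasma digest
conc = peptide_concentration(light_area=2.4e5, heavy_area=1.2e5,
                             heavy_spike_fmol=50.0, sample_volume_ul=10.0)
print(f"{conc:.1f} fmol/µL endogenous peptide")  # 10.0 fmol/µL
```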

Antibodies sequence specific peptide binding

SISCAPA target enrichment

Multiple reaction monitoring (MRM) quantitation

Protein quantitation via signature peptides

First SISCAPA Assay – thyroglobulin

personalized reference range within population range

Glycemic control in DM

Part IV. National Heart, Lung, and Blood Institute Clinical Proteomics Working Group Report
Christopher B. Granger, MD; Jennifer E. Van Eyk, PhD; Stephen C. Mockrin, PhD;
N. Leigh Anderson, PhD; on behalf of the Working Group Members*
Circulation. 2004;109:1697-1703 doi: 10.1161/01.CIR.0000121563.47232.2A
http://circ.ahajournals.org/content/109/14/1697

Abstract—The National Heart, Lung, and Blood Institute (NHLBI) Clinical Proteomics Working Group was charged with identifying opportunities and challenges in clinical proteomics and using these as a basis for recommendations aimed at directly improving patient care. The group included representatives of clinical and translational research, proteomic technologies, laboratory medicine, bioinformatics, and 2 of the NHLBI Proteomics Centers, which form part of a program focused on innovative technology development.

This report represents the results from a one-and-a-half-day meeting on May 8 and 9, 2003. For the purposes of this report, clinical proteomics is defined as the systematic, comprehensive, large-scale identification of protein patterns (“fingerprints”) of disease and the application of this knowledge to improve patient care and public health through better assessment of disease susceptibility, prevention of disease, selection of therapy for the individual, and monitoring of treatment response. (Circulation. 2004;109:1697-1703.)

Key Words: proteins, diagnosis, prognosis, genetics, plasma

Part V.  Overview: The Maturing of Proteomics in Cardiovascular Research

Jennifer E. Van Eyk
Circ Res. 2011;108:490-498  doi: 10.1161/CIRCRESAHA.110.226894
http://circres.ahajournals.org/content/108/4/490

Abstract: Proteomic technologies are used to study the complexity of proteins, their roles, and biological functions. Proteomics is based on the premise that the diversity of proteins, comprising their isoforms and posttranslational modifications (PTMs), underlies biology.

Based on an annotated human cardiac protein database, 62% have at least one PTM (phosphorylation currently dominating), whereas 25% have more than one type of modification.

The field of proteomics strives to observe and quantify this protein diversity. It represents a broad group of technologies and methods arising from analytic protein biochemistry, analytic separation, mass spectrometry, and bioinformatics. Since the 1990s, proteomic analysis has been increasingly applied in cardiovascular research.

Prevalence of cardiovascular diseases in adults by age and sex, U.S., 2007-2010

Technology development and adaptation have been at the heart of this progress. Technology undergoes a maturation, becoming routine and ultimately obsolete, being replaced by newer methods. Because of extensive methodological improvements, many proteomic studies today observe 1,000 to 5,000 proteins.

Only 5 years ago, this was not feasible. Even so, there are still roadblocks. Nowadays, there is a focus on obtaining better characterization of protein isoforms and specific PTMs. Consequently, new techniques for identification and quantification of modified amino acid residues are required, as is the assessment of single-nucleotide polymorphisms, in addition to determination of the structural and functional consequences.

In this series, 4 articles provide concrete examples of how proteomics can be incorporated into cardiovascular research and address specific biological questions. They also illustrate how novel discoveries can be made and how proteomic technology has continued to evolve. (Circ Res. 2011;108:490-498.)

Key Words: proteomics, technology, protein isoform, posttranslational modification, polymorphism

Part VI.   The -omics era: Proteomics and lipidomics in vascular research

Athanasios Didangelos, Christin Stegemann, Manuel Mayr∗

King’s British Heart Foundation Centre, King’s College London, UK

Atherosclerosis 2012; 221: 12– 17     http://dx.doi.org/10.1016/j.atherosclerosis.2011.09.043

Abstract

A main limitation of the current approaches to atherosclerosis research is the focus on the investigation of individual factors, which are presumed to be involved in the pathophysiology and whose biological functions are, at least in part, understood. These molecules are investigated extensively while others are not studied at all. In comparison to our detailed knowledge about the role of inflammation in atherosclerosis, little is known about extracellular matrix remodelling and the retention of individual lipid species, rather than lipid classes, in early and advanced atherosclerotic lesions.

The recent development of mass spectrometry-based methods and advanced analytical tools is transforming our ability to profile extracellular proteins and lipid species in animal models and clinical specimens, with the goal of illuminating pathological processes and discovering new biomarkers.

Fig. 1. ECM in atherosclerosis. The bulk of the vascular ECM is synthesised by smooth muscle cells and composed primarily of collagens, proteoglycans and glycoproteins. During the early stages of atherosclerosis, LDL binds to the proteoglycans of the vessel wall, becomes modified, i.e. by oxidation (ox-LDL), and sustains a proinflammatory cascade that is proatherogenic.


Fig. 2. Lipidomics of atherosclerotic plaques. Lipids were separated by ultra performance reverse phase liquid chromatography on a Waters® ACQUITY UPLC® (HSS T3 Column, 100 mm × 2.1 mm i.d., 1.8 µm particle size, 55 °C, flow rate 400 µL/min, Waters, Milford MA, USA) and analyzed on a quadrupole time-of-flight mass spectrometer (Waters® SYNAPT™ HDMS™ system) in both positive (A) and negative ion mode (C). In positive MS mode, lysophosphatidylcholines (lPCs) and lysophosphatidylethanolamines (lPEs) eluted first, followed by phosphatidylcholines (PCs), sphingomyelins (SMs), phosphatidylethanolamines (PEs) and cholesteryl esters (CEs); diacylglycerols (DAGs) and triacylglycerols (TAGs) had the longest retention times. In negative MS mode, fatty acids (FAs) were followed by phosphatidylglycerols (PGs), phosphatidylinositols (PIs), phosphatidylserines (PSs) and PEs. The chromatographic peaks corresponding to the different classes were detected as retention time-mass to charge ratio (m/z) pairs and their areas were recorded. Principal component analyses on 629 variables from triplicate analysis (C1, 2, 3 = control 1, 2, 3; P1, 2, 3 = endarterectomy patient 1, 2, 3) demonstrated a clear separation of atherosclerotic plaques and control radial arteries in positive (B) and negative (D) ion mode. The clustering of the technical replicates and the central projection of the pooled sample within the scores plot confirm the reproducibility of the analyses, and the Goodness of Fit test returned a chi-squared of 0.4 and an R-squared value of 0.6.
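The principal component analysis described in the legend can be reproduced in outline with standard tools: scale the peak-area matrix (samples by lipid features) and project it onto two components to look for separation of plaque and control samples. The sketch below is a generic illustration with random data standing in for the 629 measured variables; it is not the software or preprocessing actually used in the study.

```python
# Generic PCA sketch for a lipidomics peak-area matrix; random data stands in for
# the real measurements, so any "separation" here is illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
peak_areas = rng.lognormal(mean=5.0, sigma=1.0, size=(6, 629))  # 6 samples x 629 features
labels = ["C1", "C2", "C3", "P1", "P2", "P3"]                    # controls vs patients

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(peak_areas))
for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label}: PC1={pc1:8.2f}  PC2={pc2:8.2f}")
```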

Challenges in mass spectrometry

Mass spectrometry is an evolving technology, and the technological advances facilitate the detection and quantification of scarce proteins. Nonetheless, the enrichment of specific subproteomes using differential solubility or isolation of cellular organelles will remain important to increase coverage and, at least partially, overcome the inhomogeneity of diseased tissue, one of the major factors affecting sample-to-sample variation.

Proteomics is also the method of choice for the identification of post-translational modifications, which play an essential role in protein function, i.e. enzymatic activation, binding ability and formation of ECM structures. Again, efficient enrichment is essential to increase the likelihood of identifying modified peptides in complex mixtures. Lipidomics faces similar challenges. While the extraction of lipids is more selective, new enrichment methods are needed for scarce lipids as well as labile lipid metabolites that may have important bioactivity. Another pressing issue in lipidomics is data analysis, in particular the lack of automated search engines that can analyze mass spectra obtained from instruments of different vendors. Efforts to overcome this issue are currently underway.

Conclusions

Proteomics and lipidomics offer an unbiased platform for the investigation of ECM and lipids within atherosclerosis. In combination, these innovative technologies will reveal key differences in proteolytic processes responsible for plaque rupture and advance our understanding of ECM-lipoprotein interactions in atherosclerosis.


proteome

active site of eNOS (PDB_1P6L) and nNOS (PDB_1P6H)

Table – metabolic targets

HK-II Phosphorylation

