Summary of Proteomics

Author and Curator: Larry H. Bernstein, MD, FCAP 

 

We have completed a series of discussions on proteomics, a scientific endeavor that is essentially 15 years old.  It is quite remarkable what has been accomplished in that time.  Interest has been abetted by an understanding of the limitations of the genomic venture that preceded it.  Our thorough, yet incomplete, knowledge of the genome has clarified its limits.  The genome codes for all that lives, but all that lives has evolved to meet a demanding and changing environment with respect to

  1. availability of nutrients
  2. salinity
  3. temperature
  4. radiation exposure
  5. toxicities in the air, water, and food
  6. stresses – both internal and external

We have seen how both transcription and translation of the code result in a protein, lipoprotein, or other complex that is more than the initial mRNA transcript would predict. What you see in the DNA is not what you get in the functioning cell, organ, or organism.  There are similarities as well as significant differences among plants, prokaryotes, and eukaryotes.  There is extensive variation.  The variation goes beyond genomic expression, and includes the functioning cell, organ type, and species.

Here, I return to the introductory discussion.  Proteomics is a goal-directed, sophisticated science that uses a combination of methods to find the answers to biological questions. Graves PR and Haystead TAJ.  Molecular Biologist’s Guide to Proteomics.
Microbiol Mol Biol Rev. Mar 2002; 66(1): 39–63.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC120780/

Peptide mass tag searching

Peptide mass tag searching. Shown is a schematic of how information from an unknown peptide (top) is matched to a peptide sequence in a database (bottom) for protein identification. The partial amino acid sequence or “tag” obtained by MS/MS is combined with the peptide mass (parent mass), the mass of the peptide at the start of the sequence (mass tag 1), and the mass of the peptide at the end of the sequence (mass tag 2). The specificity of the protease used (trypsin is shown) can also be included in the search.
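To make the matching logic in this figure concrete, here is a minimal, hypothetical sketch in Python (not the authors' software): it digests a database protein in silico with a simplified trypsin rule, then keeps candidate peptides whose monoisotopic mass matches the parent mass and that contain the partial sequence tag. Real search engines additionally use mass tag 1, mass tag 2, and fragment-level scoring.

```python
# Minimal sketch of peptide mass tag searching (illustrative only).
# Assumes monoisotopic residue masses and a simplified tryptic digest.

MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.01056  # mass of H2O added to the residue sum


def tryptic_peptides(protein):
    """Cleave after K or R (not before P) -- simplified trypsin rule."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    return peptides


def peptide_mass(peptide):
    return sum(MONO[aa] for aa in peptide) + WATER


def match_tag(protein, parent_mass, tag, tol=0.5):
    """Return tryptic peptides consistent with the parent mass and sequence tag."""
    return [p for p in tryptic_peptides(protein)
            if abs(peptide_mass(p) - parent_mass) <= tol and tag in p]


# Toy example with a made-up protein sequence
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSR"
candidates = match_tag(protein, parent_mass=peptide_mass("QISFVK"), tag="ISF")
print(candidates)  # -> ['QISFVK']
```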

ICAT method for measuring differential protein expression

The ICAT method for measuring differential protein expression. (A) Structure of the ICAT reagent. ICAT consists of a biotin affinity group, a linker region that can incorporate heavy (deuterium) or light (hydrogen) atoms, and a thiol-reactive end group for linkage to cysteines. (B) ICAT strategy. Proteins are harvested from two different cell states and labeled on cysteine residues with either the light or heavy form of the ICAT reagent. Following labeling, the two protein samples are mixed and digested with a protease such as trypsin. Peptides labeled with the ICAT reagent can be purified by virtue of the biotin tag by using avidin chromatography. Following purification, ICAT-labeled peptides can be analyzed by MS to quantitate the peak ratios and proteins can be identified by sequencing the peptides with MS/MS.
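The quantitation step of the ICAT strategy can be illustrated numerically. The sketch below is a toy example, not the published protocol: it pairs light and heavy signals in a peak list and reports their intensity ratio, assuming the roughly 8 Da per cysteine spacing of the original d0/d8 reagent and singly charged ions.

```python
# Hypothetical sketch of ICAT ratio calculation from a centroided MS peak list.
# Assumes the original d0/d8 reagent (heavy - light of about 8.05 Da per cysteine).

ICAT_DELTA = 8.05  # approximate mass difference per labeled cysteine


def icat_ratios(peaks, n_cys=1, tol=0.05):
    """peaks: list of (m/z, intensity) for singly charged ICAT-labeled peptides.
    Returns (light_mz, heavy_mz, heavy/light ratio) for every matched pair."""
    pairs = []
    shift = ICAT_DELTA * n_cys
    for mz_l, int_l in peaks:
        for mz_h, int_h in peaks:
            if abs((mz_h - mz_l) - shift) <= tol:
                pairs.append((mz_l, mz_h, int_h / int_l))
    return pairs


# Toy peak list: one light/heavy pair differing by ~8 Da
peaks = [(998.52, 1.0e5), (1006.57, 2.1e5), (1200.70, 5.0e4)]
for light, heavy, ratio in icat_ratios(peaks):
    print(f"light {light:.2f} / heavy {heavy:.2f} -> heavy:light = {ratio:.2f}")
```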

Strategies for determination of phosphorylation sites in proteins

Strategies for determination of phosphorylation sites in proteins. Proteins phosphorylated in vitro or in vivo can be isolated by protein electrophoresis and analyzed by MS. (A) Identification of phosphopeptides by peptide mass fingerprinting. In this method, phosphopeptides are identified by comparing the mass spectrum of an untreated sample to that of a sample treated with phosphatase. In the phosphatase-treated sample, potential phosphopeptides are identified by a decrease in mass due to loss of a phosphate group (80 Da). (B) Phosphorylation sites can be identified by peptide sequencing using MS/MS. (C) Edman degradation can be used to monitor the release of inorganic 32P to provide information about phosphorylation sites in peptides.
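A minimal sketch of the phosphatase-comparison idea in panel A (illustrative only, not the authors' code): flag peptide masses from the untreated spectrum that reappear roughly 80 Da lighter, or a multiple of 80 Da lighter, in the phosphatase-treated spectrum.

```python
# Sketch: identify candidate phosphopeptides by the ~80 Da mass decrease
# (loss of HPO3, 79.966 Da) after phosphatase treatment of the sample.

PHOSPHO = 79.966  # monoisotopic mass of HPO3


def candidate_phosphopeptides(untreated, treated, max_sites=3, tol=0.1):
    """untreated, treated: lists of peptide masses from the two spectra.
    Returns (untreated_mass, n_sites) pairs explained by loss of n phosphates."""
    hits = []
    for m_u in untreated:
        for n in range(1, max_sites + 1):
            if any(abs(m_u - n * PHOSPHO - m_t) <= tol for m_t in treated):
                hits.append((m_u, n))
    return hits


untreated = [1345.61, 1674.80, 2110.95]
treated = [1265.64, 1674.80, 1951.02]
print(candidate_phosphopeptides(untreated, treated))
# -> [(1345.61, 1), (2110.95, 2)]  (mono- and di-phosphorylated candidates)
```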

Proteome-mining strategy

Proteome-mining strategy. Proteins are isolated on affinity column arrays from a cell line, organ, or animal source and purified to remove nonspecific adherents. Then, compound libraries are passed over the array and the proteins eluted are analyzed by protein electrophoresis. Protein information obtained by MS or Edman degradation is then used to search DNA and protein databases. If a relevant target is identified, a sublibrary of compounds can be evaluated to refine the lead. From this method a protein target and a drug lead can be simultaneously identified.

Although the technology for the analysis of proteins is rapidly progressing, it is still not feasible to study proteins on a scale equivalent to that of the nucleic acids. Most of proteomics relies on methods, such as protein purification or PAGE, that are not high-throughput methods. Even performing MS can require considerable time in either data acquisition or analysis. Although hundreds of proteins can be analyzed quickly and in an automated fashion by a MALDI-TOF mass spectrometer, the quality of data is sacrificed and many proteins cannot be identified. Much higher quality data can be obtained for protein identification by MS/MS, but this method requires considerable time in data interpretation. In our opinion, new computer algorithms are needed to allow more accurate interpretation of mass spectra without operator intervention. In addition, to access unannotated DNA databases across species, these algorithms should be error tolerant to allow for sequencing errors, polymorphisms, and conservative substitutions. New technologies will have to emerge before protein analysis on a large scale (such as mapping the human proteome) becomes a reality.

Another major challenge for proteomics is the study of low-abundance proteins. In some eukaryotic cells, the amounts of the most abundant proteins can be 10^6-fold greater than those of the low-abundance proteins. Many important classes of proteins (that may be important drug targets) such as transcription factors, protein kinases, and regulatory proteins are low-copy proteins. These low-copy proteins will not be observed in the analysis of crude cell lysates without some purification. Therefore, new methods must be devised for subproteome isolation.

Tissue Proteomics for the Next Decade?  Towards a Molecular Dimension in Histology

R Longuespée, M Fléron, C Pottier, F Quesada-Calvo, Marie-Alice Meuwis, et al.
OMICS A Journal of Integrative Biology 2014; 18: 9.    http://dx.doi.org/10.1089/omi.2014.0033

The concept of tissues appeared more than 200 years ago, when textures and attendant differences were first described among the components of whole organisms. Instrumental developments in optics and biochemistry subsequently paved the way for the transition from classical to molecular histology, in order to decipher the molecular contexts associated with the physiological or pathological development or function of a tissue. In 1941, Coons and colleagues performed the first systematic integrated examination of classical histology and biochemistry when they localized pneumonia antigens in infected tissue sections. Most recently, in the early 21st century, mass spectrometry (MS) has progressively become one of the most valuable tools to analyze biomolecular compounds. Currently, sampling methods, biochemical procedures, and MS instrumentation allow scientists to perform "in depth" analysis of the protein content of any type of tissue of interest. This article reviews the salient issues in proteomics analysis of tissues. We first outline technical and analytical considerations for sampling and biochemical processing of tissues and subsequently the instrumental possibilities for proteomics analysis, such as shotgun proteomics in an anatomical context. Specific attention concerns formalin-fixed and paraffin-embedded (FFPE) tissues, which are potential "gold mines" for histopathological investigations. In all, matrix-assisted laser desorption/ionization (MALDI) MS imaging, which allows for differential mapping of hundreds of compounds on a tissue section, is currently the most striking evidence of the linkage and transition between "classical" and "molecular" histology. Tissue proteomics represents a veritable field of research and investment activity for modern biomarker discovery and development for the next decade.

Progressively, tissue analyses evolved towards the description of the whole molecular content of a given sample. Currently, mass spectrometry (MS) is the most versatile analytical tool for protein identification and has proven its great potential for biological and clinical applications. "Omics" fields, and especially proteomics, are of particular interest since they allow the analysis of a biomolecular picture associated with a given physiological or pathological state. Biochemical techniques were then adapted for optimal extraction of several classes of biocompounds from tissues of different types.

Laser capture microdissection (LCM) is used to select and isolate tissue areas of interest for further analysis. Developments in MS instrumentation have definitively transformed the scientific scene, pushing back detection and identification limits further and further. Over the past few decades, new analytical approaches have appeared that use tissue sections mounted on glass slides as starting material. Two types of analyses can then be applied to tissue sections: shotgun proteomics and the very promising MS imaging (MSI) using matrix-assisted laser desorption/ionization (MALDI) sources. Also known as "molecular histology," MSI is the most striking link between histology and molecular analysis. In practice, this method allows visualization of the spatial distribution of proteins, peptides, drugs, or other analytes directly on tissue sections. This technique has opened new avenues of research, especially in the field of histopathology, since the approach is complementary to conventional histology.

Tissue processing workflows for molecular analyses

Tissue processing workflows for molecular analyses. Tissues can either be processed in solution or directly on tissue sections. In-solution processing involves protein extraction from tissue pieces in order to perform 2D gel separation and identification of proteins, shotgun proteomics, or MALDI analyses. Extracts can also be obtained from selected tissue areas by protein extraction after laser capture microdissection or by on-tissue processing. Imaging techniques are dedicated to the morphological characterization or molecular mapping of tissue sections. Histology can either be conducted by hematoxylin/eosin staining or by molecular mapping using antibodies with IHC. Finally, mass spectrometry imaging allows the mapping of numerous compounds in a single analysis. This approach is a modern form of "molecular histology" as it grafts, with the use of mathematical calculations, a molecular dimension onto classical histology. (AR, antigen retrieval; FFPE, formalin fixed and paraffin embedded; fr/fr, fresh frozen; IHC, immunohistochemistry; LCM, laser capture microdissection; MALDI, matrix assisted laser desorption/ionization; MSI, mass spectrometry imaging; PTM, post translational modification.)

Analysis of tissue proteomes has greatly evolved with separation methods and mass spectrometry instrumentation. The choice of workflow strongly depends on whether a bottom-up or a top-down analysis is to be performed downstream. In-gel or off-gel processing is the principal distinction among proteomic workflows. The almost simultaneous discoveries of the MS ionization sources (Nobel Prize awarded) MALDI (Hillenkamp and Karas, 1990; Tanaka et al., 1988) and electrospray ionization (ESI) (Fenn et al., 1989) paved the way for analysis of intact proteins and peptides. Separation methods such as two-dimensional electrophoresis (2DE) (Fey and Larsen, 2001) and nanoscale reverse-phase liquid chromatography (nanoRP-LC) (Deterding et al., 1991) lead to efficient preparation of proteins for top-down and bottom-up strategies, respectively. A huge panel of developments was then achieved, mostly for LC-MS-based proteomics, in order to improve ion fragmentation approaches and peptide identification throughput relying on database interrogation. Moreover, approaches were developed to analyze post-translational modifications (PTMs) such as phosphorylation (Ficarro et al., 2002; Oda et al., 2001; Zhou et al., 2001) or glycosylation (Zhang et al., 2003), proposing as well different quantification procedures. Regarding instrumentation, the most cutting-edge improvements are the gain in mass accuracy for optimal detection of eluted peptides during LC-MS runs (Mann and Kelleher, 2008; Michalski et al., 2011) and the increase in scanning speed, for example with the use of Orbitrap analyzers (Hardman and Makarov, 2003; Makarov et al., 2006; Makarov et al., 2009; Olsen et al., 2009). Ion transfer efficiency was also drastically improved with the conception of ion funnels that homogenize ion transmission capacities across m/z ranges (Kelly et al., 2010; Kim et al., 2000; Page et al., 2006; Shaffer et al., 1998) or by performing electrospray ionization at low vacuum (Marginean et al., 2010; Page et al., 2008; Tang et al., 2011). Beside collision-induced dissociation (CID), which is proposed for many applications (Li et al., 2009; Wells and McLuckey, 2005), new fragmentation methods were investigated, such as higher-energy collisional dissociation (HCD), especially for phosphoproteomic applications (Nagaraj et al., 2010), and electron transfer dissociation (ETD) and electron capture dissociation (ECD), which are suited for phospho- and glycoproteomics (An et al., 2009; Boersema et al., 2009; Wiesner et al., 2008). Methods for data-independent MS2 analysis, based on peptide fragmentation in given m/z windows without precursor selection or prior knowledge, also improve identification throughput (Panchaud et al., 2009; Venable et al., 2004), especially with the use of MS instruments with high resolution and high mass accuracy (Panchaud et al., 2011). Gas-phase fractionation methods such as ion mobility (IM) can also be used as a supplementary separation dimension that enables more efficient peptide identification (Masselon et al., 2000; Shvartsburg et al., 2013; Shvartsburg et al., 2011).

Microdissection relies on a laser ablation principle. The tissue section is mounted on a plastic membrane covering a glass slide. The preparation is then placed into a microscope equipped with a laser. A highly focused beam is guided by the user along the external limit of the area of interest. This area, composed of the plastic membrane and the tissue section, is then ejected from the glass slide and collected into a tube cap for further processing. This mode of microdissection is the most widely used owing to its ease of handling and the large panel of devices offered by manufacturers. Indeed, Leica Microsystems offers the Leica LMD system (Kolble, 2000); Molecular Machines and Industries, the MMI laser microdissection system Microcut, which has been used in combination with IHC (Buckanovich et al., 2006); Applied Biosystems developed the Arcturus microdissection system; and Carl Zeiss patented the P.A.L.M. MicroBeam technology (Braakman et al., 2011; Espina et al., 2006a; Espina et al., 2006b; Liu et al., 2012; Micke et al., 2005). LCM represents a very adequate link between classical histology and sampling methods for molecular analyses, as it is essentially a customized microscope. Indeed, optical lenses of different magnifications can be used and the method is compatible with classical IHC (Buckanovich et al., 2006). Only the laser and the tube holder need to be added to the instrumentation.

After microdissection, the tissue pieces can be processed for analysis using different available MS devices and strategies. The simplest consists of direct analysis of protein profiles by MALDI-TOF-MS (MALDI time-of-flight MS). The microdissected tissues are deposited on a MALDI target and directly covered with MALDI matrix (Palmer-Toy et al., 2000; Xu et al., 2002). This approach has already been used to classify breast cancer tumor types (Sanders et al., 2008), identify intestinal neoplasia protein biomarkers (Xu et al., 2009), and determine differential profiles in glomerulosclerosis (Xu et al., 2005).

Currently the most common proteomic approach for LCM tissue analysis is LC-MS/MS. Label-free LC-MS approaches have been used to study several cancers, such as head and neck squamous cell carcinomas (Baker et al., 2005), esophageal cancer (Hatakeyama et al., 2006), dysplastic cervical cells (Gu et al., 2007), breast carcinoma tumors (Hill et al., 2011; Johann et al., 2009), tamoxifen-resistant breast cancer cells (Umar et al., 2009), ER+/– breast cancer cells (Rezaul et al., 2010), Barrett's esophagus (Stingl et al., 2011), and ovarian endometrioid cancer (Alkhas et al., 2011). Different isotope labeling methods have been used in order to compare protein expression. ICAT was first used to investigate proteomes of hepatocellular carcinoma (Li et al., 2004; 2008). 16O/18O isotopic labeling was then used for proteomic analysis of ductal carcinoma of the breast (Zang et al., 2004).

Currently, the lowest number of collected cells for a relevant single analysis using fr/fr breast cancer tissues is 3000–4000 (Braakman et al., 2012; Liu et al., 2012; Umar et al., 2007). With a Q-Exactive (Thermo, Waltham) mass spectrometer coupled to LC, Braakman was able to identify up to 1800 proteins from 4000 cells. Processing of FFPE microdissected tissues of limited size remains an issue, which is being addressed by our team.

Among direct tissue analysis modes, two categories of investigation can be carried out. MALDI profiling consists of the study of the molecular localization of compounds and can be combined with parallel shotgun proteomic methods. Imaging methods give less detailed molecular information but are more focused on the accurate mapping of the detected compounds across the tissue area. In 2007, a concept of direct tissue proteomics (DTP) was proposed for high-throughput examination of tissue microarray samples. However, contrary to the classical workflow, the tissue section chemical treatment involved a first step of scraping each FFPE tissue spot from the glass slide with a razor blade. The tissues were then transferred into a tube, processed with RIPA buffer, and finally submitted to boiling as an AR step (Hwang et al., 2007). Afterward, several teams proved that it was possible to perform the AR directly on tissue sections. These applications were mainly dedicated to MALDI imaging analyses (Bonnel et al., 2011; Casadonte and Caprioli, 2011; Gustafsson et al., 2010). More recently, however, Longuespée used citric acid antigen retrieval (CAAR) before shotgun proteomics associated with global profiling proteomics (Longuespee et al., 2013).

MALDI imaging workflow

MALDI imaging workflow. For MALDI imaging experiments, tissue sections are mounted on conductive glass slides. Sample preparations are then adapted depending on the nature of the tissue sample (FFPE or fr/fr). Matrix is then uniformly deposited on the tissue section using dedicated devices. A laser beam subsequently irradiates the preparation following a given step length, and a MALDI spectrum is acquired for each position. Using adapted software, the different detected ions are then mapped across the tissue section as a function of their differential intensities. The resulting "molecular maps" are called images. (FFPE, formalin fixed and paraffin embedded; fr/fr, fresh frozen; MALDI, matrix assisted laser desorption ionization.)
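To make the mapping step concrete, the following is a minimal, hypothetical sketch of how one ion image is assembled from per-position MALDI spectra: for each raster coordinate, the intensity around a chosen m/z is extracted and placed into a 2D array, which imaging software then renders as a "molecular map". The grid size, m/z values, and spectra here are invented.

```python
import numpy as np

# Sketch: build one ion image from per-pixel MALDI spectra (illustrative only).
# spectra[(x, y)] = (mz_array, intensity_array) for the spectrum acquired there.

def ion_image(spectra, shape, target_mz, tol=0.25):
    """Return a 2D array of summed intensity around target_mz for each pixel."""
    image = np.zeros(shape)
    for (x, y), (mz, inten) in spectra.items():
        window = (mz >= target_mz - tol) & (mz <= target_mz + tol)
        image[y, x] = inten[window].sum()
    return image


# Toy 2 x 2 acquisition grid with made-up spectra
rng = np.random.default_rng(0)
spectra = {(x, y): (np.linspace(600.0, 1600.0, 2000), rng.random(2000))
           for x in range(2) for y in range(2)}
img = ion_image(spectra, shape=(2, 2), target_mz=1046.5)
print(img)  # one value per raster position; MSI software renders this as a heat map
```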

Proteomics instrumentation, specific biochemical preparations, and sampling methods such as LCM together allow for the deep exploration and comparison of different proteomes between regions of interest in tissues, with up to 10^4 detected proteins. MALDI MS imaging, which allows for differential mapping of hundreds of compounds on a tissue section, is currently the most striking illustration of the association between "classical" and "molecular" histology.

Novel serum protein biomarker panel revealed by mass spectrometry and its prognostic value in breast cancer

L Chung, K Moore, L Phillips, FM Boyle, DJ Marsh and RC Baxter*  Breast Cancer Research 2014, 16:R63
http://breast-cancer-research.com/content/16/3/R63

Introduction: Serum profiling using proteomic techniques has great potential to detect biomarkers that might improve diagnosis and predict outcome for breast cancer patients (BC). This study used surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) mass spectrometry (MS) to identify differentially expressed proteins in sera from BC and healthy volunteers (HV), with the goal of developing a new prognostic biomarker panel.
Methods: Training set serum samples from 99 BC and 51 HV subjects were applied to four adsorptive chip surfaces (anion-exchange, cation-exchange, hydrophobic, and metal affinity) and analyzed by time-of-flight MS. For validation, 100 independent BC serum samples and 70 HV samples were analyzed similarly. Cluster analysis of protein spectra was performed to identify protein patterns related to BC and HV groups. Univariate and multivariate statistical analyses were used to develop a protein panel to distinguish breast cancer sera from healthy sera, and its prognostic potential was evaluated.
Results: From 51 protein peaks that were significantly up- or downregulated in BC patients by univariate analysis, binary logistic regression yielded five protein peaks that together classified BC and HV with a receiver operating characteristic (ROC) area-under-the-curve value of 0.961. Validation on an independent patient cohort confirmed
the five-protein parameter (ROC value 0.939). The five-protein parameter showed positive association with large tumor size (P = 0.018) and lymph node involvement (P = 0.016). By matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS, immunoprecipitation and western blotting the proteins were identified as a fragment
of apolipoprotein H (ApoH), ApoCI, complement C3a, transthyretin, and ApoAI. Kaplan-Meier analysis on 181 subjects after median follow-up of >5 years demonstrated that the panel significantly predicted disease-free survival (P = 0.005), its efficacy apparently greater in women with estrogen receptor (ER)-negative tumors (n = 50, P = 0.003) compared to ER-positive (n = 131, P = 0.161), although the influence of ER status needs to be confirmed after longer follow-up.
Conclusions: Protein mass profiling by MS has revealed five serum proteins which, in combination, can distinguish between serum from women with breast cancer and healthy control subjects with high sensitivity and specificity. The five-protein panel significantly predicts recurrence-free survival in women with ER-negative tumors and may have value in the management of these patients.
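The panel-building step described in the Results (binary logistic regression over candidate peaks, followed by ROC analysis) can be sketched generically as below. This uses simulated data and scikit-learn and is only an illustration of the statistical workflow, not the authors' pipeline or their actual peak intensities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated data standing in for SELDI-TOF peak intensities:
# rows = sera, columns = candidate protein peaks, y = 1 for BC, 0 for HV.
rng = np.random.default_rng(42)
n_samples, n_peaks = 150, 51
X = rng.normal(size=(n_samples, n_peaks))
true_weights = np.zeros(n_peaks)
true_weights[:5] = [1.5, -1.2, 1.0, 0.8, -0.9]   # pretend 5 peaks are informative
y = (X @ true_weights + rng.normal(scale=1.0, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a binary logistic regression on a reduced peak set (here the first five
# columns, mimicking a panel chosen by prior univariate screening).
panel = [0, 1, 2, 3, 4]
clf = LogisticRegression(max_iter=1000).fit(X_train[:, panel], y_train)

# Evaluate discrimination on held-out samples with the ROC area under the curve.
scores = clf.predict_proba(X_test[:, panel])[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```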

Cellular prion protein is required for neuritogenesis: fine-tuning of multiple signaling pathways involved in focal adhesions and actin cytoskeleton dynamics

Aurélie Alleaume-Butaux, et al.   Cell Health and Cytoskeleton 2013:5 1–12

Neuritogenesis is a dynamic phenomenon associated with neuronal differentiation that allows a rather spherical neuronal stem cell to develop dendrites and axon, a prerequisite for the integration and transmission of signals. The acquisition of neuronal polarity occurs in three steps:

(1) neurite sprouting, which consists of the formation of buds emerging from the postmitotic neuronal soma;

(2) neurite outgrowth, which represents the conversion of buds into neurites, their elongation and evolution into axon or dendrites; and

(3) the stability and plasticity of neuronal polarity.

In neuronal stem cells, remodeling and activation of focal adhesions (FAs)

  • associated with deep modifications of the actin cytoskeleton is
  • a prerequisite for neurite sprouting and subsequent neurite outgrowth.

A multiple set of growth factors and interactors located in

  • the extracellular matrix and the plasma membrane orchestrate neuritogenesis
  • by acting on intracellular signaling effectors, notably small G proteins such as RhoA, Rac, and Cdc42,
  • which are involved in actin turnover and the dynamics of FAs.

The cellular prion protein (PrPC), a glycosylphosphatidylinositol (GPI)-anchored membrane protein

  • mainly known for its role in a group of fatal neurodegenerative diseases,
  • has emerged as a central player in neuritogenesis.

Here, we review the contribution of PrPC to neuronal polarization and

  • detail the current knowledge on the signaling pathways fine-tuned
  • by PrPC to promote neurite sprouting, outgrowth, and maintenance.

We emphasize that PrPC-dependent neurite sprouting is a process in which

  • PrPC governs the dynamics of FAs and the actin cytoskeleton via β1 integrin signaling.

The presence of PrPC is necessary to render neuronal stem cells

  • competent to respond to neuronal inducers and to develop neurites.

In differentiating neurons, PrPC exerts a facilitator role towards neurite elongation.

This function relies on the interaction of PrPC with a set of diverse partners such as

  1. elements of the extracellular matrix,
  2. plasma membrane receptors,
  3. adhesion molecules, and
  4. soluble factors that control actin cytoskeleton turnover
  • through Rho-GTPase signaling.

Once neurons have reached their terminal stage of differentiation and

  • acquired their polarized morphology,
  • PrPC also takes part in the maintenance of neurites.

By acting on tissue nonspecific alkaline phosphatase, or matrix metalloproteinase type 9,

  • PrPC stabilizes interactions between neurites and the extracellular matrix.

Fusion-pore expansion during syncytium formation is restricted by an actin network

Andrew Chen et al., Journal of Cell Science 121, 3619-3628. http://dx.doi.org/10.1242/jcs.032169

Cell-cell fusion in animal development and in pathophysiology

  • involves expansion of nascent fusion pores formed by protein fusogens
  • to yield an open lumen of cell-size diameter.

Here we explored the enlargement of micron-scale pores in syncytium formation,

  • which was initiated by a well-characterized fusogen baculovirus gp64.

Radial expansion of a single or, more often, of multiple fusion pores

  • proceeds without loss of membrane material in the tight contact zone.

Pore growth requires cell metabolism and is

  • accompanied by a local disassembly of the actin cortex under the pores.

Effects of actin-modifying agents indicate that

  • the actin cortex slows down pore expansion.

We propose that the growth of the strongly bent fusion-pore rim

  1. is restricted by a dynamic resistance of the actin network and
  2. driven by membrane-bending proteins that are involved in
  3. the generation of highly curved intracellular membrane compartments.

Pak1 Is Required to Maintain Ventricular Ca2+ Homeostasis and Electrophysiological Stability Through SERCA2a Regulation in Mice

Yanwen Wang, et al.  Circ Arrhythm Electrophysiol. 2014;7:00-00.

Impaired sarcoplasmic reticular Ca2+ uptake resulting from

  • decreased sarcoplasmic reticulum Ca2+-ATPase type 2a (SERCA2a) expression or activity
  • is a characteristic of heart failure with its associated ventricular arrhythmias.

Recent attempts at gene therapy of these conditions explored strategies

  • enhancing SERCA2a expression and activity as novel approaches to heart failure management.

We here explore the role of Pak1 in maintaining ventricular Ca2+ homeostasis and electrophysiological stability

  • under both normal physiological and acute and chronic β-adrenergic stress conditions.

Methods and Results—Mice with a cardiomyocyte-specific Pak1 deletion (Pak1cko), but not controls (Pak1f/f), showed

  • high incidences of ventricular arrhythmias and electrophysiological instability
  • during either acute β-adrenergic or chronic β-adrenergic stress leading to hypertrophy,
  • induced by isoproterenol.

Isolated Pak1cko ventricular myocytes correspondingly showed

  • aberrant cellular Ca2+ homeostasis.

Pak1cko hearts showed an associated impairment of SERCA2a function and

  • downregulation of SERCA2a mRNA and protein expression.

Further explorations of the mechanisms underlying the altered transcriptional regulation

  • demonstrated that exposure to control Ad-shC2 virus infection
  • increased SERCA2a protein and mRNA levels after
  • phenylephrine stress in cultured neonatal rat cardiomyocytes.

This was abolished by the

  • Pak1-knockdown in Ad-shPak1–infected neonatal rat cardiomyocytes and
  • increased by constitutive overexpression of active Pak1 (Ad-CAPak1).

We then implicated activation of serum response factor, a transcriptional factor well known for

  • its vital role in the regulation of cardiogenesis genes in the Pak1-dependent regulation of SERCA2a.

Conclusions—These findings indicate that

Pak1 is required to maintain ventricular Ca2+ homeostasis and electrophysiological stability

  • and implicate Pak1 as a novel regulator of cardiac SERCA2a through
  • a transcriptional mechanism

Role of forkhead box protein A3 in age-associated metabolic decline

Xinran Ma, Lingyan Xu, Oksana Gavrilova, and Elisabetta Mueller
PNAS Sep 30, 2014; 111(39): 14289–14294.  http://pnas.org/cgi/doi/10.1073/pnas.1407640111

Significance
This paper reports that the transcription factor forkhead box protein A3 (Foxa3) is

  • directly involved in the development of age-associated obesity and insulin resistance.

Mice that lack the Foxa3 gene

  1. remodel their fat tissues,
  2. store less fat, and
  3. burn more energy as they age.

These mice also live significantly longer.

We show that Foxa3 suppresses a key metabolic cofactor, PGC1α,

  • which is involved in the gene programs that turn on energy expenditure in adipose tissues.

Overall, these findings suggest that Foxa3 contributes to the increased adiposity observed during aging,

  • and that it can be a possible target for the treatment of metabolic disorders.

Aging is associated with increased adiposity and diminished thermogenesis, but

  • the critical transcription factors influencing these metabolic changes late in life are poorly understood.

We recently demonstrated that the winged helix factor forkhead box protein A3 (Foxa3)

  • regulates the expansion of visceral adipose tissue in high-fat diet regimens; however,
  • whether Foxa3 also contributes to the increase in adiposity and the decrease in brown fat activity
  • observed during the normal aging process is currently unknown.

Here we report that during aging, levels of Foxa3 are significantly and selectively

  • up-regulated in brown and inguinal white fat depots, and that
  • midage Foxa3-null mice have increased white fat browning and thermogenic capacity,
  1. decreased adipose tissue expansion,
  2. improved insulin sensitivity, and
  3. increased longevity.

Foxa3 gain-of-function and loss-of-function studies in inguinal adipose depots demonstrated

  • a cell-autonomous function for Foxa3 in white fat tissue browning.

The mechanisms of Foxa3 modulation of brown fat gene programs involve

  • the suppression of peroxisome proliferator activated receptor γ coactivator 1 α (PGC1α) levels
  • through interference with cAMP responsive element binding protein 1-mediated
  • transcriptional regulation of the PGC1α promoter.

Our data demonstrate a role for Foxa3 in energy expenditure and in age-associated metabolic disorders.

Control of Mitochondrial pH by Uncoupling Protein 4 in Astrocytes Promotes Neuronal Survival

HP Lambert, M Zenger, G Azarias, Jean-Yves Chatton, PJ Magistretti, S Lengacher
JBC (in press) M114.570879  http://www.jbc.org/cgi/doi/10.1074/jbc.M114.570879

Background: Role of uncoupling proteins (UCP) in the brain is unclear.
Results: UCPs, present in astrocytes, mediate intra-mitochondrial acidification, leading to a decrease in mitochondrial ATP production.
Conclusion: Astrocyte pH regulation promotes ATP synthesis by glycolysis whose final product, lactate, increases neuronal survival.
Significance: We describe a new role for a brain uncoupling protein.

Brain activity is energetically costly and requires a steady and

  • highly regulated flow of energy equivalents between neural cells.

It is believed that a substantial share of cerebral glucose, the major source of energy of the brain,

  • will preferentially be metabolized in astrocytes via aerobic glycolysis.

The aim of this study was to evaluate whether uncoupling proteins (UCPs),

  • located in the inner membrane of mitochondria,
  • play a role in setting up the metabolic response pattern of astrocytes.

UCPs are believed to mediate the transmembrane transfer of protons

  • resulting in the uncoupling of oxidative phosphorylation from ATP production.

UCPs are therefore potentially important regulators of energy fluxes. The main UCP isoforms

  • expressed in the brain are UCP2, UCP4, and UCP5.

We examined in particular the role of UCP4 in neuron-astrocyte metabolic coupling

  • and measured a range of functional metabolic parameters
  • including mitochondrial electrical potential and pH,
  1. reactive oxygen species production,
  2. NAD/NADH ratio,
  3. ATP/ADP ratio,
  4. CO2 and lactate production, and
  5. oxygen consumption rate (OCR).

In brief, we found that UCP4 regulates the intra-mitochondrial pH of astrocytes

  • which acidifies as a consequence of glutamate uptake,
  • with the main consequence of reducing efficiency of mitochondrial ATP production.
  • the diminished ATP production is effectively compensated by enhancement of glycolysis.
  • this non-oxidative production of energy is not associated with deleterious H2O2 production.

We show that astrocytes expressing more UCP4 produced more lactate,

  • used as energy source by neurons, and had the ability to enhance neuronal survival.

Jose Eduardo des Salles Roselino

The problem with genomics is that it was set up as an explanation for everything. When something is genetic in nature, genomic reasoning works fine. But this means that only when an inborn error is found can genomic knowledge indicate what is wrong; it is not a license to turn biology upside down by reading everything, genetic as well as non-genetic problems, from the DNA.


Metabolomic analysis of two leukemia cell lines. I.

Larry H. Bernstein, MD, FCAP, Reviewer and Curator

Leaders in Pharmaceutical Intelligence

 

I have just posted a review of metabolomics.  In the last few weeks, the Human Metabolome was published.  I hope I have chosen the right path to prepare my readers adequately, provided they have read the articles that preceded this one.  I pondered how I would present this massive piece of work, a study using two leukemia cell lines that maps the features and differences driving their carcinogenesis pathways and identifies key metabolic signatures in these differentiated cell types and subtypes.  It is the culmination of a large collaborative effort that required cell culture, enzymatic assays, and mass spectrometry, the full measure of which I need not present here, and a very superb validation of the model with a description of method limitations and conflicts.  This is a beautiful piece of work carried out by a small group by today’s standards.

I shall begin by asking a few questions that will be addressed in the article, which I need to break up into parts to draw readers in more effectively.

Q1. What metabolic pathways do you expect to have the largest role in the study about to be presented?

Q2. What are the largest metabolic differences that one expects to see in comparing the two lymphoblastic cell lines?

Q3. What methods would be used to extract the information based on external metabolites, enzymes, substrates, etc., to create the model for the cell internal metabolome?

Abstract

Metabolic models can provide a mechanistic framework to analyze information-rich omics data sets, and are increasingly being used

  • to investigate metabolic alterations in human diseases.

An expression of the altered metabolic pathway utilization is

  • the selection of metabolites consumed and released by cells.

However, methods for the inference of intracellular metabolic states from extracellular measurements in the context of metabolic models

  • remain underdeveloped compared to methods for other omics data.

Herein, we describe a workflow for such an integrative analysis

  • extracting the information from extracellular metabolomics data.

We demonstrate, using the lymphoblastic leukemia cell lines Molt-4 and CCRF-CEM, how

  • our methods can reveal differences in cell metabolism.

Our models explain metabolite uptake and secretion by

  • predicting a more glycolytic phenotype for the CCRF-CEM model and
  • a more oxidative phenotype for the Molt-4 model, which
  • was supported by our experimental data.

Gene expression analysis revealed altered expression of gene products at

  • key regulatory steps in those central metabolic pathways,

and literature query emphasized

  • the role of these genes in cancer metabolism.

Moreover, in silico gene knock-outs identified

  • unique control points for each cell line model, e.g., phosphoglycerate dehydrogenase for the Molt-4 model.

Thus, our workflow is well suited to the characterization of cellular metabolic traits based on

  • extracellular metabolomic data, and
  • it allows the integration of multiple omics data sets into a cohesive picture based on a defined model context.

Keywords: Constraint-based modeling; Metabolomics; Multi-omics; Metabolic network; Transcriptomics

 

Reviewer Summary:

  1. A model is introduced to demonstrate a lymphocytic integrated data set using two cell lines.
  2. The method is required to integrate data sets extracted from extracellular metabolites into an intracellular picture of cellular metabolism for each cell line.
  3. The method predicts a more glycolytic or a more oxidative metabolic framework for one or the other cell line.
  4. The genetic phenotypes differ, with a unique control point for each cell line.
  5. The model presents an integration of omics data sets into a cohesive picture based on the model context.

Without having seen the full presentation –

  1. Is the method a snapshot of the neoplastic processes described?
  2. Does the model give insight into the cellular metabolism of an initial cell state for either one or both cell lines?
  3. Would one be able to predict a therapeutic strategy based on the model for either or both cell lines?

Before proceeding further into the study, I would conjecture that there is no way of knowing the initial state (consistent with what is described by Ilya Prigogine for a self-organizing system), because the model is based on the study of cultured cells that had an unknown metabolic control profile in a host proliferating bone marrow that is likely of B-cell origin.  So this is a snapshot of a stable state of two incubated cell lines.  The question then raised is not only whether there is a genetic-phenotypic relationship between the cells in culture and the external metabolites produced, but also whether differences can be discerned between the internal metabolic constructions that would fit into a family tree.

 

Introduction

Modern high-throughput techniques

  • have increased the pace of biological data generation.

Also referred to as the "omics avalanche", this wealth of data

  • provides great opportunities for metabolic discovery.

Omics data sets contain a snapshot of almost the entire repertoire of

  • mRNA, protein, or metabolites at a given time point or
  • under a particular set of experimental conditions.

Because of the high complexity of the data sets,

  • computational modeling is essential for their integrative analysis.

Currently, such data analysis

  • is a bottleneck in the research process and
  • methods are needed to facilitate the use of these data sets, e.g.,
  1. through meta-analysis of data available in public databases
    [e.g., the human protein atlas (Uhlen et al. 2010)
    or the gene expression omnibus (Barrett et al. 2011)], and
  2. to increase the accessibility of valuable information
    for the biomedical research community.

Constraint-based modeling and analysis (COBRA) is

  • a computational approach that has been successfully used
  • to investigate and engineer microbial metabolism through
    the prediction of steady-states (Durot et al. 2009).

The basis of COBRA is network reconstruction: networks are assembled

  1. in a bottom-up fashion based on genomic data and
  2. extensive organism-specific information from the literature.

Metabolic reconstructions

  1. capture information on the known biochemical transformations
    taking place in a target organism
  2. to generate a biochemical, genetic and genomic knowledge base
    (Reed et al. 2006).

Once assembled, a metabolic reconstruction

  • can be converted into a mathematical model
    (Thiele and Palsson 2010), and
  • model properties can be interrogated using a great variety of methods
    (Schellenberger et al. 2011).

The ability of COBRA models to represent

  • genotype–phenotype and environment–phenotype relationships
  • arises through the imposition of constraints,
  • which limit the system to a subset of possible network states
    (Lewis et al. 2012).

Currently, COBRA models exist for more than 100 organisms, including humans
(Duarte et al. 2007; Thiele et al. 2013).
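The core computation behind constraint-based analysis is a linear program: choose a flux vector v that maximizes an objective (biomass, ATP, or a reaction of interest) subject to steady-state mass balance S·v = 0 and lower/upper bounds on each reaction, the "constraints" referred to above. The sketch below is a three-reaction toy network invented for illustration (it is not Recon 1), solved with scipy.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize v2 subject to S @ v = 0 and bounds.
# Reactions: v0 = uptake of metabolite A, v1 = conversion A -> B, v2 = export of B.
# Metabolites (rows of S): A, B.
S = np.array([
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by export
])
bounds = [(0, 10),    # uptake capped at 10 units (an imposed environmental constraint)
          (0, 1000),
          (0, 1000)]

# linprog minimizes, so put -1 on the reaction we want to maximize (v2, export of B).
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)      # -> [10. 10. 10.]
print("maximal export:", -res.fun)   # limited by the uptake constraint
```

Tightening or relaxing the bounds is exactly how environment-, condition-, or cell-type-specific states are imposed on a reconstruction.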

Since the first human metabolic reconstruction was described
[Recon 1 (Duarte et al. 2007)],

  • biomedical applications of COBRA have increased
    (Bordbar and Palsson 2012).

One way to contextualize networks is to

  • define their system boundaries
  • according to the metabolic states of the system,
    e.g., disease or dietary regimes.

The consequences of the applied constraints

  • can then be assessed for the entire network
    (Sahoo and Thiele 2013).

Additionally, omics data sets have frequently been used

  • to generate cell-type or condition-specific metabolic models.

Models exist for specific cell types, such as

  • enterocytes (Sahoo and Thiele 2013),
  • macrophages (Bordbar et al. 2010), and
  • adipocytes (Mardinoglu et al. 2013), and
  • even multi-cell assemblies that represent
    the interactions of brain cells (Lewis et al. 2010).

All of these cell type specific models,

  • except the enterocyte reconstruction
  • were generated based on omics data sets.

Cell-type-specific models have been used

  • to study diverse human disease conditions.

For example, an adipocyte model was generated using

  • transcriptomic,
  • proteomic, and
  • metabolomics data.

This model was subsequently used to investigate

  • metabolic alterations in adipocytes
  • that would allow for the stratification of obese patients
    (Mardinoglu et al. 2013).

One highly active field within the biomedical applications of COBRA is

  • cancer metabolism (Jerby and Ruppin, 2012).

Omics-driven large-scale models have been used

  • to predict drug targets (Folger et al. 2011; Jerby et al. 2012).

A cancer model was generated using

  • multiple gene expression data sets and
  • subsequently used to predict synthetic lethal gene pairs
  • as potential drug targets selective for the cancer model,
  • but non-toxic to the global model (Recon 1),
  • a consequence of the reduced redundancy in the
    cancer specific model (Folger et al. 2011).

In a follow-up study, lethal synergy between

  • FH and enzymes of the heme metabolic pathway
    was experimentally validated, and this
  • resolved the mechanism by which FH-deficient cells,
    e.g., renal-cell cancer cells,
  • survive a non-functional TCA cycle (Frezza et al. 2011).

Contextualized models, which contain only 

  • the subset of reactions active in 
  • a particular tissue (or cell-) type,
  • can be generated in different ways
    (Becker and Palsson, 2008; Jerby et al. 2010).

However, the existing algorithms mainly consider

  • gene expression and proteomic data to define the reaction sets
  • that comprise the contextualized metabolic models.

These subsets of reactions are usually defined based on

  • the expression or absence of expression of the genes or proteins
    (present and absent calls), or
  • inferred from expression values or differential gene expression.

Comprehensive reviews of the methods are available
(Blazier and Papin, 2012; Hyduke et al. 2013).

Only the compilation of a large set of omics data sets

  • can result in a tissue (or cell-type) specific metabolic model, whereas

the representation of one particular experimental condition is achieved through

  • the integration of an omics data set generated from one experiment only
    (condition-specific cell line model).

Recently, metabolomic data sets

  • have become more comprehensive, and using these data sets allows
  • direct determination of the metabolic network components (the metabolites).

Additionally, metabolomics has proven to be

  1. stable,
  2. relatively inexpensive, and
  3. highly reproducible
    (Antonucci et al. 2012).

These factors make metabolomic data sets

  •  particularly valuable for interrogation of metabolic phenotypes. 

Thus, the integration of these data sets is now an active field of research
(Li et al. 2013; Mo et al. 2009; Paglia et al. 2012b; Schmidt et al. 2013).

Generally, metabolomic data can be incorporated into metabolic networks as

  1. qualitative,
  2. quantitative, and
  3. thermodynamic constraints
    (Fleming et al. 2009; Mo et al. 2009).

Mo et al. used metabolites detected in the spent medium
of yeast cells to determine

  • intracellular flux states through a sampling analysis (Mo et al. 2009),
  • which allowed unbiased interrogation of the possible network states
    (Schellenberger and Palsson 2009)
  • and prediction of internal pathway use.

Such analyses have also been used

  • to reveal the effects of enzymopathies on red blood cells (Price et al. 2004),
  • to study effects of diet on diabetes (Thiele et al. 2005) and
  • to define macrophage metabolic states (Bordbar et al. 2010).

This type of analysis is available as a function in the COBRA toolbox
(Schellenberger et al. 2011).
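In practice, analyses of this kind are run with the COBRA Toolbox in MATLAB or the cobrapy package in Python. The fragment below is a hypothetical sketch of the spent-medium idea: exchange-reaction bounds are set from measured uptake and secretion rates, and the feasible flux space is then sampled. The model file name, reaction identifiers, and rates are placeholders, and the sampling call assumes a recent cobrapy version.

```python
# Hypothetical cobrapy sketch: constrain exchange fluxes from spent-medium data,
# then sample the feasible flux space (cf. the COBRA Toolbox sampling function).
import cobra
from cobra.sampling import sample

model = cobra.io.read_sbml_model("cell_line_model.xml")  # placeholder model file

# Measured uptake (negative) and secretion (positive) rates, mmol/gDW/h (invented numbers)
measured_exchanges = {
    "EX_glc__D_e": (-0.8, -0.5),   # glucose uptake between 0.5 and 0.8
    "EX_lac__L_e": (1.0, 1.6),     # lactate secretion between 1.0 and 1.6
}
for rxn_id, (lb, ub) in measured_exchanges.items():
    rxn = model.reactions.get_by_id(rxn_id)
    rxn.lower_bound, rxn.upper_bound = lb, ub

# Sample candidate steady-state flux distributions consistent with the constraints
flux_samples = sample(model, n=1000)           # pandas DataFrame, one row per sample
print(flux_samples[["PGI", "CS"]].describe())  # e.g., a glycolytic vs a TCA reaction
```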

In this study, we established a workflow for the generation and analysis of

  • condition-specific metabolic cell line models that
  • can facilitate the interpretation of metabolomic data.

Our modeling yields meaningful predictions regarding

  • metabolic differences between two lymphoblastic leukemia cell lines
    (Fig. 1A).
Differences in the use of the TCA cycle by the CCRF-CEM

http://link.springer.com/static-content/images/404/art%253A10.1007%252Fs11306-014-0721-3/MediaObjects/11306_2014_721_Fig1_HTML.gif

Fig. 1

A  Combined experimental and computational pipeline to study human metabolism.
Experimental work and omics data analysis steps precede computational modeling. Model

  • predictions are validated based on targeted experimental data.

Metabolomic and transcriptomic data are used for

  • model refinement and submodel extraction.

Functional analysis methods are used to characterize

  • the metabolism of the cell-line models and compare it to additional experimental
    data.

The validated models are subsequently 

  • used for the prediction of drug targets.

B Uptake and secretion pattern of model.
All metabolite uptakes and secretions that were mapped during model
generation are shown.
Metabolite uptakes are depicted on the left, and

  • secreted metabolites are shown on the right.

A number of metabolite exchanges mapped to the model

  • were unique to one cell line.

Differences between cell lines were used to set

  • quantitative constraints for the sampling analysis.

C Statistics about the cell line-specific network generation.

 Quantitative constraints.
For the sampling analysis, an additional

  • set of constraints was imposed on the cell line specific models,
  • emphasizing the differences in metabolite uptake and secretion between cell lines.

Higher uptake of a metabolite was allowed in the model of the cell line

  • that consumed more of the metabolite in vitro, whereas
  • the supply was restricted for the model with lower in vitro uptake.

This was done by establishing the same ratio between the models' bounds as detected in vitro.
X denotes the factor (slope ratio) that

  1. distinguishes the bounds, and
  2. was individual for each metabolite.
  • (a) The uptake of a metabolite could be x times higher in CCRF-CEM cells,
    (b) the metabolite uptake could be x times higher in Molt-4,
    (c) metabolite secretion could be x times higher in CCRF-CEM, or
    (d) metabolite secretion could be x times higher in Molt-4 cells. LOD limit of detection.

The consequence of the adjustment was, in case of uptake, that  one model

  1. was constrained to a lower metabolite uptake (A, B), and the difference
  2. depended on the ratio detected in vitro.

In case of secretion,

  • one model had to secrete more of the metabolite, and again

the difference depended on

  • the experimental difference detected between the cell lines.
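A small worked example of this bound-scaling rule, with invented numbers: if CCRF-CEM consumed a metabolite x = 2 times faster than Molt-4 in vitro, the CCRF-CEM model is allowed the full measured uptake while the Molt-4 model's supply is capped at half of it, so the ratio between the two models' bounds matches the in vitro ratio.

```python
# Sketch of imposing the in vitro uptake ratio x between the two cell line models.
# Bounds follow the COBRA sign convention: uptake is a negative exchange flux.

def ratio_bounds(max_uptake, x):
    """Return (CCRF-CEM lower bound, Molt-4 lower bound) for one metabolite
    when CCRF-CEM uptake was x times higher than Molt-4 uptake in vitro."""
    ccrf_lb = -max_uptake          # this model is allowed the full measured uptake
    molt4_lb = -max_uptake / x     # supply restricted to preserve the ratio
    return ccrf_lb, molt4_lb


# Example: maximal uptake 1.0 mmol/gDW/h, slope ratio x = 2
print(ratio_bounds(1.0, 2.0))   # -> (-1.0, -0.5)
```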

Q5. What is your expectation that this type of integrative approach could be used for facilitating medical data interpretations?

The most inventive approach was made years ago by a pioneer in medical record development, using data constructions from the medical literature, but the technology was not what it is today, and the cost of data input was high.  Moreover, data acquisition would not be uniform across institutions, except for those that belong to a consolidated network with all of the data in the cloud, and the calculations would be carried out with a separate engine.  In any case, uniform capture of the massive amount of data needed is not possible in the foreseeable future.  There is no accurate way of assessing the system cost and predicting the benefits.  In carrying this model forward, the amount of missing data has to be minimal.  Developments in the regulatory sphere have created a high barrier.

This concludes a first portion of this presentation.

 


Larry H Bernstein, MD, FCAP, Author and Curator

http://pharmaceuticalintelligence.com/2014/06/22/Proteomics – The Pathway to Understanding and Decision-making in Medicine

This dialogue is a series of discussions introducing several perspectives on proteomics discovery, an emerging scientific enterprise in the -OMICS- family of disciplines that aims to clarify many of the challenges in understanding disease and to aid in diagnosis as well as guide treatment decisions. Beyond that focus, it will contribute to personalized medical treatment by facilitating the identification of treatment targets for the pharmaceutical industry. Despite enormous advances in genomics research over the last two decades, there is still a problem in reaching the anticipated goals for introducing new targeted treatments, which has seen repeated failures in phase III clinical trials, and even when success has been achieved, it has been temporary.  The other problem has been the toxicity of agents widely used in chemotherapy.  Even though the genomic approach brings relief to the issues of toxicity found in organic chemistry derivative blocking reactions, specificity for the target cell without an effect on normal cells has been elusive.

This is not confined to cancer chemotherapy, but can also be seen in pain medication, and has been a growing problem in antimicrobial therapy.  The stumbling block has been the inability to manage a multiplicity of reactions that also have to be modulated in a changing environment based on the 3-dimensional structure of proteins, pH changes, ionic balance, micro- and macrovascular circulation, and protein-protein and protein-membrane interactions. There is reason to believe that the present problems can be overcome through much better modification of target cellular metabolism as we peel away the confounding and blinding factors with multivariable control of these imbalances, like removing the skin of an onion.

This is the first of a series of articles, and for convenience we shall here only emphasize the progress of the application of proteomics to cardiovascular disease.

growth in funding proteomics 1990-2010

Part I.

Panomics: Decoding Biological Networks  (Clinical OMICs 2014; 5)

Technological advances such as high-throughput sequencing are transforming medicine from symptom-based diagnosis and treatment to personalized medicine as scientists employ novel rapid genomic methodologies to gain a broader comprehension of disease and disease progression. As next-generation sequencing becomes more rapid, researchers are turning toward large-scale pan-omics, the collective use of all omics such as genomics, epigenomics, transcriptomics, proteomics, metabolomics, lipidomics and lipoprotein proteomics, to better understand, identify, and treat complex disease.

Genomics has been a cornerstone in understanding disease, and the sequencing of the human genome has led to the identification of numerous disease biomarkers through genome-wide association studies (GWAS). It was the goal of these studies that these biomarkers would serve to predict individual disease risk, enable early detection of disease, help make treatment decisions, and identify new therapeutic targets. In reality, however, only a few have gone on to become established in clinical practice. For example in human GWAS studies for heart failure at least 35 biomarkers have been identified but only natriuretic peptides have moved into clinical practice, where they are limited primarily for use as a diagnostic tool.

Proteomics Advances Will Rival the Genetics Advances of the Last Ten Years

Seventy percent of the decisions made by physicians today are influenced by results of diagnostic tests, according to N. Leigh Anderson, founder of the Plasma Proteome Institute and CEO of SISCAPA Assay Technologies. Imagine the changes that will come about when future diagnostics tests are more accurate, more useful, more economical, and more accessible to healthcare practitioners. For Dr. Anderson, that’s the promise of proteomics, the study of the structure and function of proteins, the principal constituents of the protoplasm of all cells.

In explaining why proteomics is likely to have such a major impact, Dr. Anderson starts with a major difference between the genetic testing common today, and the proteomic testing that is fast coming on the scene. “Most genetic tests are aimed at measuring something that’s constant in a person over his or her entire lifetime. These tests provide information on the probability of something happening, and they can help us understand the basis of various diseases and their potential risks. What’s missing is, a genetic test is not going to tell you what’s happening to you right now.”

Mass Spec-Based Multiplexed Protein Biomarkers

Clinical proteomics applications rely on the translation of targeted protein quantitation technologies and methods to develop robust assays that can guide diagnostic, prognostic, and therapeutic decision-making. The development of a clinical proteomics-based test begins with the discovery of disease-relevant biomarkers, followed by validation of those biomarkers.

“In common practice, the discovery stage is performed on a MS-based platform for global unbiased sampling of the proteome, while biomarker qualification and clinical implementation generally involve the development of an antibody-based protocol, such as the commonly used enzyme linked ELISA assays,” state López et al. in Proteome Science (2012; 10: 35–45). “Although this process is potentially capable of delivering clinically important biomarkers, it is not the most efficient process as the latter is low-throughput, very costly, and time-consuming.”

Part II.  Proteomics for Clinical and Research Use: Combining Protein Chips, 2D Gels and Mass Spectrometry in The Next Step: Exploring the Proteome – Translation and Beyond

N. Leigh Anderson, Ph.D., Chief Scientific Officer, Large Scale Proteomics Corporation

Three streams of technology will play major roles in quantitative (expression) proteomics over the coming decade. Two-dimensional electrophoresis and mass spectrometry represent well-established methods for, respectively, resolving and characterizing proteins, and both have now been automated to enable the high-throughput generation of data from large numbers of samples.

These methods can be powerfully applied to discover proteins of interest as diagnostics, small molecule therapeutic targets, and protein therapeutics. However, neither offers a simple, rapid, routine way to measure many proteins in common samples like blood or tissue homogenates.

Protein chips do offer this possibility, and thus complete the triumvirate of technologies that will deliver the benefits of proteomics to both research and clinical users. Integration of efforts in all three approaches is discussed, highlighting the application of the Human Protein Index® database as a source of protein leads.

leighAnderson

leighAnderson

N. Leigh Anderson, Ph.D., is Chief Scientific Officer of the Proteomics subsidiary of Large Scale Biology Corporation (LSBC). Dr. Anderson obtained his B.A. in Physics with honors from Yale and a Ph.D. in Molecular Biology from Cambridge University (England), where he worked with M. F. Perutz as a Churchill Fellow at the MRC Laboratory of Molecular Biology. Subsequently he co-founded the Molecular Anatomy Program at the Argonne National Laboratory (Chicago), where his work in the development of 2D electrophoresis and molecular database technology earned him, among other distinctions, the American Association for Clinical Chemistry's Young Investigator Award for 1982, the 1983 Pittsburgh Analytical Chemistry Award, the 2008 AACC Outstanding Research Award, and the 2013 National Science Medal.

In 1985 Dr. Anderson co-founded LSBC in order to pursue commercial development and large-scale applications of 2-D electrophoretic protein mapping technology. This effort has resulted in a large-scale proteomics analytical facility supporting research work for LSBC and its pharmaceutical industry partners. Dr. Anderson's current primary interests are in the automation of proteomics technologies and the expansion of LSBC's proteomics databases describing drug effects and disease processes in vivo and in vitro. Large Scale Biology went public in August 2000.

Part II. Plasma Proteomics: Lessons in Biomarkers and Diagnostics

Exposome Workshop
N Leigh Anderson
Washington 8 Dec 2011

QUESTIONS AND LESSONS:

CLINICAL DIAGNOSTICS AS A MODEL FOR EXPOSOME INDICATORS
TECHNOLOGY OPTIONS FOR MEASURING PROTEIN RESPONSES TO EXPOSURES
SCALE OF THE PROBLEM: EXPOSURE SIGNALS VS POPULATION NOISE

The Clinical Plasma Proteome
• Plasma and serum are the dominant non-invasive clinical sample types
– standard materials for in vitro diagnostics (IVD)
• Proteins measured in clinically available tests in the US
– 109 proteins via FDA-cleared or approved tests
• Clinical test costs range from $9 (albumin) to $122 (Her2)
• 90% of those ever approved are still in use
– 96 additional proteins via laboratory-developed tests (not FDA cleared or approved)
– Total 205 proteins (≅ products of 211 genes, excluding Igs)
• Clinically applied proteins thus account for
– About 1% of the baseline human proteome (1 gene : 1 protein)
– About 10% of the 2,000+ proteins observed in deep-discovery plasma proteome datasets

“New” Protein Diagnostics Are FDA-Cleared at a Rate of ~1.5/yr:
Insufficient to Meet Dx or Rx Development Needs

FDA clearance of protein diagnostics

A Major Technology Gulf Exists Between Discovery Proteomics and Routine Diagnostic Platforms

Two Streams of Proteomics
A.  Problem Technology
Basic biology: maximum proteome coverage (including PTMs, splice variants) to provide unbiased discovery of mechanistic information
• Critical: depth and breadth
• Not critical: cost, throughput, quantitative precision

B.  Discovery proteomics
A specialized proteomics field, large groups, complex workflows and informatics

Part III.  Addressing the Clinical Proteome with Mass Spectrometric Assays

N. Leigh Anderson, PhD, SISCAPA Assay Technologies, Inc.

protein changes in biological mechanisms

No Increase in FDA Cleared Protein Tests in 20 yr

“New” Protein Tests in Plasma Are FDA-Cleared at a Rate of ~1.5/yr:
Insufficient to Meet Dx or Rx Development Needs

See figure above

An Explanation: the Biomarker Pipeline is Blocked at the Verification Step

Immunoassay Weaknesses Impact Biomarker Verification

1) Specificity: what actually forms the immunoassay sandwich – or prevents its formation – is not directly visualized

2) Cost: an assay developed to FDA-approvable quality costs $2-5M per protein

Major Plasma Proteins

Immunoassay vs Hybrid MS-based assays

MASS SPECTROMETRY: MRMs provide what is missing in IMMUNOASSAYS:

– SPECIFICITY
– INTERNAL STANDARDIZATION
– MULTIPLEXING
– RAPID CONFIGURATION, PROVIDED A PROTEIN CAN ACT LIKE A SMALL MOLECULE

MRM of Proteotypic Tryptic Peptides Provides Highly Specific Assays for Proteins > 1 µg/mL in Plasma

Peptide-Level MS Provides High Structural Specificity
Multiple Reaction Monitoring (MRM) Quantitation
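
The slides above hinge on choosing proteotypic tryptic peptides as surrogates for a target protein. As a minimal illustration of that first step, the Python sketch below performs an in silico tryptic digest (cleaving after K or R, except before P) and computes doubly protonated precursor m/z values for peptides in an MRM-friendly length range. The residue masses are standard monoisotopic values; the protein sequence and length window are illustrative choices, not taken from the slides.

# Sketch: in silico tryptic digestion and precursor m/z calculation for
# MRM/SRM assay design. The example sequence is a placeholder, not a real
# clinical target.

RESIDUE_MASS = {  # monoisotopic masses of amino acid residues (Da)
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276, 'V': 99.06841,
    'T': 101.04768, 'C': 103.00919, 'L': 113.08406, 'I': 113.08406,
    'N': 114.04293, 'D': 115.02694, 'Q': 128.05858, 'K': 128.09496,
    'E': 129.04259, 'M': 131.04049, 'H': 137.05891, 'F': 147.06841,
    'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.01056   # mass added for the free N- and C-termini
PROTON = 1.00728   # proton mass, for charge-state calculation

def tryptic_peptides(sequence):
    """Cleave after K or R, but not when the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in 'KR' and (i + 1 == len(sequence) or sequence[i + 1] != 'P'):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def peptide_mz(peptide, charge=2):
    """Monoisotopic m/z of a peptide at the given charge state."""
    mass = sum(RESIDUE_MASS[aa] for aa in peptide) + WATER
    return (mass + charge * PROTON) / charge

if __name__ == '__main__':
    toy_protein = 'MKWVTFISLLFLFSSAYSRGVFRRDAHK'  # placeholder sequence
    for pep in tryptic_peptides(toy_protein):
        if 7 <= len(pep) <= 25:                   # typical MRM-friendly length
            print(f'{pep:25s}  [M+2H]2+ = {peptide_mz(pep):9.4f}')

In a real workflow, candidate peptides would further be screened for uniqueness in the proteome and for freedom from variable modifications before transitions are configured.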

ADDRESSING MRM LIMITATIONS VIA SPECIFIC ENRICHMENT OF ANALYTE  PEPTIDES: SISCAPA

– SENSITIVITY
– THROUGHPUT (LC-MS/MS CYCLE TIME)

SISCAPA combines best features of immuno and MS

SISCAPA Process Schematic Diagram
Stable Isotope-labeled Standards with Capture on Anti-Peptide Antibodies

An automated process for SISCAPA targeted protein quantitation utilizes high-affinity capture antibodies that are immobilized on magnetic beads
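
To make the readout concrete, here is a minimal sketch of the isotope-dilution arithmetic behind SISCAPA-style MRM quantitation: the endogenous ("light") peptide is measured against a known amount of spiked, stable-isotope-labeled ("heavy") standard captured on the same anti-peptide antibody. The peak areas and spike amount below are invented for illustration, not values from the figures.

# Sketch of stable-isotope-dilution quantitation as used in SISCAPA-style
# workflows: the endogenous ("light") peptide is quantified against a known
# amount of spiked, isotopically "heavy" standard. All numbers are illustrative.

def quantify_light(light_area, heavy_area, heavy_spike_fmol):
    """Endogenous peptide amount from the light/heavy MRM peak-area ratio."""
    ratio = light_area / heavy_area
    return ratio * heavy_spike_fmol

# Illustrative MRM transition peak areas for one target peptide
light_area = 8.4e5        # endogenous peptide signal
heavy_area = 2.1e5        # spiked heavy-labeled standard signal
spike = 50.0              # fmol of heavy standard added to the digest

endogenous_fmol = quantify_light(light_area, heavy_area, spike)
print(f'light/heavy ratio = {light_area / heavy_area:.2f}')
print(f'endogenous peptide ≈ {endogenous_fmol:.0f} fmol on column')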

Antibodies sequence specific peptide binding

SISCAPA target enrichment

Multiple reaction monitoring (MRM) quantitation

Protein quantitation via signature peptides

First SISCAPA Assay – thyroglobulin

personalized reference range within population range

Glycemic control in DM

Part IV.  National Heart, Lung, and Blood Institute Clinical Proteomics Working Group Report
Christopher B. Granger, MD; Jennifer E. Van Eyk, PhD; Stephen C. Mockrin, PhD;
N. Leigh Anderson, PhD; on behalf of the Working Group Members*
Circulation. 2004;109:1697-1703 doi: 10.1161/01.CIR.0000121563.47232.2A
http://circ.ahajournals.org/content/109/14/1697

Abstract—The National Heart, Lung, and Blood Institute (NHLBI) Clinical Proteomics Working Group was charged with identifying opportunities and challenges in clinical proteomics and using these as a basis for recommendations aimed at directly improving patient care. The group included representatives of clinical and translational research, proteomic technologies, laboratory medicine, bioinformatics, and 2 of the NHLBI Proteomics Centers, which form part of a program focused on innovative technology development.

This report represents the results from a one-and-a-half-day meeting on May 8 and 9, 2003. For the purposes of this report, clinical proteomics is defined as the systematic, comprehensive, large-scale identification of protein patterns (“fingerprints”) of disease and the application of this knowledge to improve patient care and public health through better assessment of disease susceptibility, prevention of disease, selection of therapy for the individual, and monitoring of treatment response. (Circulation. 2004;109:1697-1703.)
Key Words: proteins • diagnosis • prognosis • genetics • plasma

Part V.  Overview: The Maturing of Proteomics in Cardiovascular Research

Jennifer E. Van Eyk
Circ Res. 2011;108:490-498  doi: 10.1161/CIRCRESAHA.110.226894
http://circres.ahajournals.org/content/108/4/490

Abstract: Proteomic technologies are used to study the complexity of proteins, their roles, and their biological functions. The field is based on the premise that the diversity of proteins, comprising their isoforms and posttranslational modifications (PTMs), underlies biology.

Based on an annotated human cardiac protein database, 62% of proteins have at least one PTM (phosphorylation currently dominating), whereas 25% have more than one type of modification.

The field of proteomics strives to observe and quantify this protein diversity. It represents a broad group of technologies and methods arising from analytical protein biochemistry, analytical separation, mass spectrometry, and bioinformatics. Since the 1990s, proteomic analysis has been applied increasingly in cardiovascular research.

Prevalence of cardiovascular diseases in adults by age and sex, U.S., 2007-2010

Technology development and adaptation have been at the heart of this progress. Technology undergoes a maturation, becoming routine and ultimately obsolete, being replaced by newer methods. Because of extensive methodological improvements, many proteomic studies today observe 1000 to 5000 proteins. Only 5 years ago, this was not feasible. Even so, there are still roadblocks. Nowadays, there is a focus on obtaining better characterization of protein isoforms and specific PTMs. Consequently, new techniques for identification and quantification of modified amino acid residues are required, as is the assessment of single-nucleotide polymorphisms, in addition to determination of the structural and functional consequences.

In this series, 4 articles provide concrete examples of how proteomics can be incorporated into cardiovascular research and address specific biological questions. They also illustrate how novel discoveries can be made and how proteomic technology has continued to evolve. (Circ Res. 2011;108:490-498.)
Key Words: proteomics • technology • protein isoform • posttranslational modification • polymorphism

Part VI.   The -omics era: Proteomics and lipidomics in vascular research

Athanasios Didangelos, Christin Stegemann, Manuel Mayr∗

King’s British Heart Foundation Centre, King’s College London, UK

Atherosclerosis 2012; 221: 12– 17     http://dx.doi.org/10.1016/j.atherosclerosis.2011.09.043

Abstract

A main limitation of the current approaches to atherosclerosis research is the focus on the investigation of individual factors, which are presumed to be involved in the pathophysiology and whose biological functions are, at least in part, understood. These molecules are investigated extensively while others are not studied at all. In comparison to our detailed knowledge about the role of inflammation in atherosclerosis, little is known about extracellular matrix remodelling and the retention of individual lipid species, rather than lipid classes, in early and advanced atherosclerotic lesions. The recent development of mass spectrometry-based methods and advanced analytical tools are transforming our ability to profile extracellular proteins and lipid species in animal models and clinical specimens, with the goal of illuminating pathological processes and discovering new biomarkers.

Fig. 1. ECM in atherosclerosis. The bulk of the vascular ECM is synthesised by smooth muscle cells and composed primarily of collagens, proteoglycans and glycoproteins. During the early stages of atherosclerosis, LDL binds to the proteoglycans of the vessel wall, becomes modified, i.e., by oxidation (ox-LDL), and sustains a proinflammatory cascade that is proatherogenic.

Lipidomics of atherosclerotic plaques

Fig. 2. Lipidomics of atherosclerotic plaques. Lipids were separated by ultra-performance reverse-phase liquid chromatography on a Waters® ACQUITY UPLC® (HSS T3 column, 100 mm × 2.1 mm i.d., 1.8 µm particle size, 55 °C, flow rate 400 µL/min, Waters, Milford MA, USA) and analyzed on a quadrupole time-of-flight mass spectrometer (Waters® SYNAPT™ HDMS™ system) in both positive (A) and negative ion mode (C). In positive MS mode, lysophosphatidylcholines (lPCs) and lysophosphatidylethanolamines (lPEs) eluted first, followed by phosphatidylcholines (PCs), sphingomyelins (SMs), phosphatidylethanolamines (PEs) and cholesteryl esters (CEs); diacylglycerols (DAGs) and triacylglycerols (TAGs) had the longest retention times. In negative MS mode, fatty acids (FAs) were followed by phosphatidylglycerols (PGs), phosphatidylinositols (PIs), phosphatidylserines (PSs) and PEs. The chromatographic peaks corresponding to the different classes were detected as retention time-mass to charge ratio (m/z) pairs and their areas were recorded. Principal component analyses on 629 variables from triplicate analysis (C1, 2, 3 = control 1, 2, 3; P1, 2, 3 = endarterectomy patient 1, 2, 3) demonstrated a clear separation of atherosclerotic plaques and control radial arteries in positive (B) and negative (D) ion mode. The clustering of the technical replicates and the central projection of the pooled sample within the scores plot confirm the reproducibility of the analyses, and the goodness-of-fit test returned a chi-squared of 0.4 and an R-squared value of 0.6.

Challenges in mass spectrometry

Mass spectrometry is an evolving technology, and the technological advances facilitate the detection and quantification of scarce proteins. Nonetheless, the enrichment of specific subproteomes using differential solubility or isolation of cellular organelles will remain important to increase coverage and, at least partially, overcome the inhomogeneity of diseased tissue, one of the major factors affecting sample-to-sample variation.

Proteomics is also the method of choice for the identification of post-translational modifications, which play an essential role in protein function, i.e., enzymatic activation, binding ability and formation of ECM structures. Again, efficient enrichment is essential to increase the likelihood of identifying modified peptides in complex mixtures. Lipidomics faces similar challenges. While the extraction of lipids is more selective, new enrichment methods are needed for scarce lipids as well as labile lipid metabolites that may have important bioactivity. Another pressing issue in lipidomics is data analysis, in particular the lack of automated search engines that can analyze mass spectra obtained from instruments of different vendors. Efforts to overcome this issue are currently underway.

Conclusions

Proteomics and lipidomics offer an unbiased platform for the investigation of ECM and lipids within atherosclerosis. In
combination, these innovative technologies will reveal key differences in proteolytic processes responsible for plaque rupture
and advance our understanding of ECM – lipoprotein interactions in atherosclerosis.


proteome

active site of eNOS (PDB_1P6L) and nNOS (PDB_1P6H)

Table – metabolic targets

HK-II Phosphorylation

Read Full Post »

Introduction to Translational Medicine (TM) – Part 1: Translational Medicine

Author and Curator: Larry H Bernstein, MD, FCAP

and

Curator: Aviva Lev-Ari, PhD, RN 

Article ID #134: Introduction to Translational Medicine (TM) – Part 1: Translational Medicine. Published on 4/25/2014

WordCloud Image Produced by Adam Tubman

 

This document, in the Series A: Cardiovascular Diseases e-Series, Volume 4: Translational and Regenerative Medicine, traces the postgenomic and proteomic advances from the laboratory to the practice of clinical medicine.  The Chapters are preceded by several videos by prominent figures in the emergence of this transformative change.  When I was a medical student, a large body of the current language and technology that has extended the practice of medicine did not exist, but a new foundation, predicated on the principles of modern medical education set forth by Abraham Flexner, was sprouting.  The highlights of this evolution were:

  • Requirement for premedical education in biology, organic chemistry, physics, and genetics.
  • Medical education included two years of basic science education in anatomy, physiology, pharmacology, and pathology prior to introduction into the clinical course sequence of the last two years.
  • Postgraduate medical education was an internship year followed by residency in pediatrics, OB/GYN, internal medicine, general surgery, psychiatry, neurology, neurosurgery, pathology, radiology, anesthesiology, and emergency medicine.
  • Academic teaching centers were developing subspecialty centers in ophthalmology, ENT and head and neck surgery, cardiology and cardiothoracic surgery, hematology and hematology/oncology, and neurology.
  • The expansion of postgraduate medical programs included significant funding by the National Institutes of Health, which also supported faculty development through a system of peer-reviewed research grant programs in the medical and allied sciences.

The period after the late 1980s saw a rapid expansion of research in genomics and drug development to treat emerging threats of infectious disease, as the US had extensive worldwide involvement after the end of the Vietnam War and drug resistance was increasingly encountered (malaria, tick-borne diseases, salmonellosis, Pseudomonas aeruginosa, Staphylococcus aureus, etc.).

Moreover, the post-millennium period found a large but dwindling population of veterans who had served in WWII and Vietnam, in whom cardiovascular disease, musculoskeletal disorders, dementias, and cancer were now more common.  The Human Genome Project was undertaken to realign the existing knowledge of gene structure and genetic regulation with the needs of drug development, which was languishing amid development failures due to unexpected toxicities.

A substantial disconnect existed between diagnostics and pharmaceutical development, which had been over-reliant on modification of known organic structures to increase potency and reduce toxicity.  This was about to change with changes in medical curricula, changes in residency programs and physicians cross-training in disciplines, and the emergence of bio-pharma based on the emerging knowledge of cell function; at the same time, the medical profession was developing an evidence base for therapeutics, and more pressure was placed on informed decision-making.

The great improvement in proteomics came from GC/LC-MS/MS and is described in the video interview with Dr. Gyorgy Marko-Varga, Sweden, in video 1 of 3 (Advancing Translational Medicine).  This discussion focuses on the role of functional proteomics in future diagnostics and therapy, involving a greater degree of accuracy in mass spectrometry (MS) than can be obtained by antibody-ligand binding, and is illustrated below; the last figure emphasizes the importance of information technology and predictive analytics.

Thermo Scientific

Immunoassays and LC–MS/MS have emerged as the two main approaches for quantifying peptides and proteins in biological samples. ELISA kits are available for quantification, but inherently lack the discriminative power to resolve isoforms and PTMs.

To address this issue we have developed and applied a mass spectrometry immunoassay–selected reaction monitoring (Thermo Scientific™ MSIA™ SRM technology) research method to quantify PCSK9 (and PTMs), a key player in the regulation of circulating low density lipoprotein cholesterol (LDL-C).

A Day in the (Future) Life of a Predictive Analytics Scientist

 

By Lars Rinnan, CEO, NextBridge   April 22, 2014

A look into a normal day in the near future, where predictive analytics is everywhere, incorporated in everything from household appliances to wearable computing devices.

During the test drive (of an automobile), the extreme acceleration makes your heart beat so fast that your personal health data sensor triggers an alarm. The health data sensor is integrated into the strap of your wrist watch. This data is transferred to your health insurance company, so you say a prayer that their data scientists are clever enough to exclude these abnormal values from your otherwise impressive health data. Based on such data, your health insurance company’s consulting unit regularly gives you advice about diet, exercise, and sleep. You have followed their advice in the past, and your performance has increased, which automatically reduced your insurance premiums. Win-win, you think to yourself, as you park the car, and decide to buy it.

In the clinical presentation at Harlan Krumholz's Yale Symposium, Prof. Robert Califf, Director of the Duke University Translational Medicine Clinical Research Institute, defines translational medicine as effective translation of science to clinical medicine in two segments:

  1. Adherence to current standards
  2. Improving the enterprise by translating knowledge

He says that the gap between outcomes and medical science will be bridged in translation by traversing two parallel systems:

  1. Physician-health organization
  2. Personalized medicine

He emphasizes that the new basis for physician standards will be legitimized in the following:

  1. Comparative effectiveness (Krumholz)
  2. Accountability

Some of these points are repeated below:

WATCH VIDEOS ON YOUTUBE

https://www.youtube.com/watch?v=JFdJRh9ZPps#t=678  Harlan Krumholz

https://www.youtube.com/watch?v=JFdJRh9ZPps#t=678  complexity

https://www.youtube.com/watch?v=JFdJRh9ZPps#t=678  integration map

https://www.youtube.com/watch?v=JFdJRh9ZPps#t=678  progression

https://www.youtube.com/watch?v=JFdJRh9ZPps#t=678  informatics

An interesting sidebar to the scientific medical advances is the huge shift in pressure on an insurance system that has coexisted with the public programs of Medicare and Medicaid; health insurance was initially introduced by industry for worker benefits (Kaiser, IBM, Rockefeller), and we are now undertaking a formidable change with the ACA.

The current reality is that, actuarially, the twin system that has existed was unsustainable in the long term, because a very large pool of the population is necessary to spread the costs; in addition, the cost of pharmaceutical development has driven consolidation in the industry and has relied on successes from publicly and privately funded research.

https://www.youtube.com/watch?v=X6J_7PvWoMw#t=57  Corbett Report Nov 2013

E. R. Brown, Rockefeller Medicine Men (UC Press, 1979)

https://www.youtube.com/watch?v=X6J_7PvWoMw#t=57   Liz Fowler VP of Wellpoint (designed ACA)

I shall digress for a moment and insert a video history of DNA, which hits the high points very well and is quite explanatory of the genomic revolution in medical science, biology, infectious disease and microbial antibiotic resistance, virology, stem cell biology, and the undeniability of evolution.

DNA History

https://www.youtube.com/watch?v=UUDzN4w8mKI&list=UUoHRSQ0ahscV14hlmPabkVQ

As I have noted above, genomics is necessary but not sufficient.  The story began with replication of the genetic code, which accounted for variation, but accounting for regulation of the cell and for metabolic processes was, and remains, in the domain of an essential library of proteins. Moreover, the functional activity of proteins, at least but not only if they are catalytic, shows structural variants characterized by small differences in some amino acids that allow for separation by net charge and have an effect on protein-protein and other interactions.

Protein chemistry is so different from DNA chemistry that it is quite safe to consider that the DNA nucleotide sequence does no more than establish the order of amino acids in proteins. On the other hand, proteins, about whose function and regulation we know so little, do everything that matters, including determining what is read from the DNA and when.

Jose Eduardo de Salles Roselino

Chapters 2, 3, and 4 sequentially examine:

  • The causes and etiologies of cardiovascular diseases
  • The diagnosis, prognosis and risks determined by – biomarkers in serum, circulating cells, and solid tissue by contrast radiography
  • Treatment of cardiovascular diseases by translation of science from bench to bedside, including interventional cardiology and surgical repair

These are systematically examined within a framework of:

  • Genomics
  • Proteomics
  • Cardiac and Vascular Signaling
  • Platelet and Endothelial Signaling
  • Cell-protein interactions
  • Protein-protein interactions
  • Post-Translational Modifications (PTMs)
  • Epigenetics
  • Noncoding RNAs and regulatory considerations
  • Metabolomics (the metabolome)
  • Mitochondria and oxidative stress

 

Read Full Post »

World of Metabolites: Lawrence Berkeley National Laboratory Develops an Imaging Technique for Capturing Them

Reporter: Aviva Lev-Ari, PhD, RN

 

UPDATED on 9/27/2017

From: Dr. Larry Bernstein <larry.bernstein@gmail.com>, Tuesday, September 26, 2017, to Aviva Lev-Ari <AvivaLev-Ari@alum.berkeley.edu>

Precision or personalized medicine seeks to provide the right drug to the right patient at the right time. Hence the significance of the principal omics disciplines – genomics, proteomics, and, last but not least, metabolomics – as diagnostic enablers.

Primacy among the ‘omics is debatable, but the notion that metabolomics reflects the most accurate picture of disease states has gained significant momentum. “Almost every factor affecting health exerts its influence by altering metabolite levels,” says Mike Milburn, Ph.D., Chief Scientific Officer at Metabolon (Morrisville, North Carolina, USA).

Where clinical chemistry blood tests typically quantify individual species, for example glucose or cholesterol, metabolomics measures hundreds or even thousands of metabolites to provide a nuanced view of disease states.

Metabolon employs standard liquid chromatography-mass spectrometry (LC-MS) for metabolomic studies. Its proprietary informatics and processing platform, Precision Metabolomics™, overcomes the “big data” challenge, a natural consequence of measuring hundreds or thousands of small-molecule entities with widely differing concentrations in a single sample. Precision Metabolomics enables “n of 1” studies — meaningful clinical trials on a single patient, Milburn adds:

Diagnostic metabolomics resembles other medical testing, where results are compared against readings from healthy individuals or a reference population. Many metabolites serve that purpose, but none on its own is sufficiently specific for a diagnosis — otherwise it would constitute a standalone test. Hence the reliance on metabolite panels or networks, which together may provide a clearer view of disease states than any single diagnostic molecule.

 

Imaging technique captures ever-changing world of metabolites

Thu, 06/13/2013 – 7:38am

The kinetic world of metabolites comes to life in this merged overlay of mass spectrometry images. It shows new versus pre-existing metabolites in a tumor section (yellow and red indicate newer metabolites). Image: Lawrence Berkeley National Laboratory

What would you do with a camera that can take a picture of something and tell you how new it is? If you’re Lawrence Berkeley National Laboratory scientists Katherine Louie, Ben Bowen, Jian-Hua Mao and Trent Northen, you use it to gain a better understanding of the ever-changing world of metabolites, the molecules that drive life-sustaining chemical transformations within cells.

They’re part of a team of researchers that developed a mass spectrometry imaging technique that not only maps the whereabouts of individual metabolites in a biological sample, but how new the metabolites are too.

That’s a big milestone, because metabolites are constantly in flux. They’re synthesized on-demand in order to sustain an organism’s energy requirements. When you eat lunch, metabolites momentarily fire up in various cell populations throughout your body to fuel your day. But they also have a dark side. Cancer cells tap metabolites to drive tumor development.

Unfortunately, the current ways to clinically analyze metabolites don’t capture their kinetics. Microscopy maps the cells and biomarkers in a tumor section. And traditional mass spectrometry reveals the abundance and spatial distribution of molecules such as metabolites.

But these images are static snapshots of a highly dynamic process. They’re blind to how recently the metabolites were synthesized, which is a key piece of information. The metabolic status of a cell population is a good indicator of what the cells were up to when the sample was taken.

To image the ebb and flow of metabolites, the scientists paired mass spectrometry with a clinically accepted way to label tissue that uses a hydrogen isotope called deuterium.

As reported in Nature Scientific Reports, they administered deuterium to mice with tumors. Newly synthesized lipids (a hallmark of metabolic activity) became labeled with deuterium, while pre-existing lipids remained unlabeled. The scientists then removed tumor sections and analyzed them with a type of mass spectrometry.

The resulting images look like freeze-frames of a slow-motion fireworks show. They reveal when and where metabolic turnover occurs in a tumor section, with the brighter colors depicting newly synthesized lipids.
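
As a rough illustration of how such kinetic images can be summarized, the sketch below computes a per-pixel "fraction newly synthesized" map from two ion images, one for the deuterium-labeled (new) lipid and one for the unlabeled (pre-existing) lipid. This is not the Berkeley Lab pipeline; the arrays are synthetic stand-ins for real ion images.

# Sketch: a per-pixel "fraction newly synthesized" map from two ion images,
# one for the deuterium-labeled (new) lipid and one for the unlabeled
# (pre-existing) lipid. The arrays below are synthetic stand-ins for real data.
import numpy as np

rng = np.random.default_rng(0)
labeled = rng.random((64, 64)) * 100     # intensity of D-labeled lipid per pixel
unlabeled = rng.random((64, 64)) * 300   # intensity of unlabeled lipid per pixel

total = labeled + unlabeled
fraction_new = np.where(total > 0, labeled / total, 0.0)  # 0..1 turnover map

print('mean fraction newly synthesized: %.2f' % fraction_new.mean())
print('pixels with >50%% new lipid: %d' % int((fraction_new > 0.5).sum()))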

The scientists also found that regions with new lipids had a higher tumor grade, which is a good predictor of how quickly a tumor is likely to grow.

“Our approach, called kinetic mass spectrometry imaging, could provide clinicians with quantifiable information they can use,” says Bowen.

The scientists are now applying their imaging technique to study metabolic flux in other biological systems, such as microbial communities.

Source: Lawrence Berkeley National Laboratory

http://www.rdmag.com/news/2013/06/imaging-technique-captures-ever-changing-world-metabolites?et_cid=3310531&et_rid=461755519&location=top

 

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

 

02/01/2013
Ashley Yeager

For the first time, scientists have identified all the proteins in the mitochondrial matrix, which is where the cell’s energy is generated. How? Find out…

 

Between the maze-like inner membranes of the mitochondria, there’s a thick, sticky region called the matrix. This region serves an important role in the generation of the cell’s energy, but the proteins that actually make up this matrix have remained a mystery. But now, researchers at the Massachusetts Institute of Technology (MIT) have catalogued all the proteins in the mitochondrial matrix, identifying 31 proteins not previously associated with mitochondria. They did this by combining the strengths of two methods: microscopy and mass spectrometry.

 

Electron microscopy of human embryonic kidney cells expressing mito-APEX. Credit: Alice Ting, MIT

“This method is really a new paradigm for doing mass spec proteomics because we’re recording proteomics in living cells,” said Alice Ting, a chemist at MIT and author of a paper published online yesterday in Science that describes the technique (1).

Microscopy and mass spectrometry are valuable for studying proteins, but each has its drawbacks. While microscopy can show where a protein is located within a cell, it can only do so for a small number of a cell’s roughly 20,000 proteins at once. Meanwhile, mass spectrometry can identify all the proteins within a cell, but destroys the cell membrane in the process of releasing the cell’s contents, resulting in a mixture of proteins from different cell regions and organelles.

To overcome these limitations, Ting’s group genetically engineered the mitochondrial matrix to express a newly designed peroxidase called APEX. When biotin-phenol was added to these cells, APEX stripped an electron and a proton from the biotin molecule, creating highly reactive biotin-phenoxyl radicals. These radicals quickly bound to nearby proteins to stabilize themselves, effectively tagging the proteins in the matrix.

The scientists then identified these tagged proteins with fluorescent imaging, dissolved the cell membrane, and isolated the proteins from the mitochondrial matrix. Using mass spectrometry, the team then identified 495 proteins in the mitochondrial matrix, 31 of which had not been previously linked to the mitochondrial region.

One of the biggest surprises was the discovery that the enzyme PPOX is in the matrix. PPOX helps synthesize heme, the pigment in red blood cells and a cofactor of the protein hemoglobin. Previously, biologists believed that PPOX was located within the space between the outer and inner membranes of the mitochondria, but Ting’s team found that it was actually within the matrix, which the team said is an example of how locally precise their biotin-tagging technique is.

Now, Ting and her team are looking at proteins in the mitochondrial intermembrane space. In addition, the researchers are tweaking their labeling system to map proteins in the cell membrane and to detect specific protein-protein interactions.

Reference

1. Rhee, H.-W., P. Zou, N. D. Udeshi, J. D. Martell, V. K. Mootha, S. A. Carr, and A. Y. Ting. 2013. Proteomic mapping of mitochondria in living cells via spatially restricted enzymatic tagging. Science (January).

SOURCE:

http://www.biotechniques.com/news/biotechniquesNews/biotechniques-339645.html#.UQ37hRxiB0w

 

Read Full Post »

SDS-PAGE with Taq DNA Polymerase. SDS-PAGE is a useful technique to separate proteins according to their electrophoretic mobility. (Photo credit: Wikipedia)

Proteomics and Biomarker Discovery

Reporter: Larry H. Bernstein, MD, FCAP

 

 

Advanced Proteomic Technologies for Cancer Biomarker Discovery

Sze Chuen Cesar Wong; Charles Ming Lok Chan; Brigette Buig Yue Ma; Money Yan Yee Lam; Gigi Ching Gee Choi; Thomas Chi Chuen Au; Andrew Sai Kit Chan; Anthony Tak Cheung Chan

Published: 06/10/2009

This report is extracted from the article above, edited and shortened as much as possible for the reader, and updated from LCGC North America, Aug 12, 2012; 8.
www.chromatographyonline.com

Part I

Abstract

This review will focus on four state-of-the-art proteomic technologies, namely 2D difference gel electrophoresis, MALDI imaging mass spectrometry, electron transfer dissociation mass spectrometry and reverse-phase protein array. The major advancements these techniques have brought to biomarker discovery are presented in this review. The wide dynamic range of protein abundance, the standardization of protocols and the validation of cancer biomarkers are discussed, along with a 5-year view of potential solutions to such problems.

English: Public domain image from cancer.gov http://visualsonline.cancer.gov/details.cf?imageid=3483. TECAN Genesis 2000 robot preparing Ciphergen SELDI-TOF protein chips for proteomic analysis. (Photo credit: Wikipedia)

Introduction

A common approach to isolating and identifying cancer biomarkers involves protein identification in serum or tissue. Unfortunately, currently used tumor markers have low sensitivities and specificities.[2] Therefore, the development of novel tumor markers might be helpful in improving cancer diagnosis, prognosis and treatment.

The rapid development of proteomic technologies during the past 10 years has brought about a massive increase in the discovery of novel cancer biomarkers. Such biomarkers may have broad applications, such as for the detection of the presence of a disease, monitoring of disease clearance and/or progression, monitoring of treatment response and demonstration of drug targeting of a particular pathway and/or target. In general, proteomic approaches begin with the collection of biological specimens representing two different physiological conditions, cancer patients and reference subjects. Proteins or peptides are extracted and separated, and the protein or peptide profiles are compared against each other in order to detect differentially expressed proteins. Commonly, quantitative proteomics is mainly performed by protein separation using either 2DE- or liquid chromatography (LC)-based methods coupled with protein identification using mass spectrometry (MS). Limitations include inability to obtain protein profiles directly from tissue sections for correlation with tissue morphology, limited ability to analyze post-translational modifications (PTMs) and low capacity for high-throughput validation of identified markers. Progress in proteomic technologies has led to the development of 2D DIGE, MALDI imaging MS (IMS), electron transfer dissociation (ETD) MS, and reverse-phase protein array (RPA).

2D Difference Gel Electrophoresis

The 2DE method has been one of the mainstream technologies used for proteomic investigations.[3,4] In this method, proteins are separated in the first dimension according to charge by isoelectric focusing, followed by separation in the second dimension according to molecular weight using polyacrylamide gel electrophoresis. The gels are then stained to visualize the separated protein spots,[5] resolving up to 1000 protein spots in a single experiment; protein spots of interest are then excised and identified using mass spectrometry (MS).[6,7]

We previously used a 2DE approach to compare the proteomic profiles to identify differentially expressed proteins that may be involved in the development of nasopharyngeal cancer, [8]   as well as proteins that were responsive to treatment with the chemotherapeutic agent 5-fluorouracil (5FU) in the colorectal cancer SW480 cell line. Briefly, cell lysates from SW480 cells that were either treated with 5FU or were controls were separated using 2DE. After staining and analysis of the gels, differentially expressed protein spots were excised and identified using MS. The upregulation of heat-shock protein (Hsp)-27 and peroxiredoxin 6 and the downregulation of Hsp-70 were successfully validated by immunohistochemical (IHC) staining of SW480 cells.[9]

The 2D DIGE method improved the 2DE technique. Figure 1 shows how two different protein samples (e.g., control and disease) and, optionally, one reference sample (e.g., control and disease pooled together) are labeled with one of three spectrally different fluorophores: cyanine (Cy)2, 3 or 5. These dyes have the same charge, similar molecular weight and distinct fluorescent properties, allowing their discrimination during fluorometric scanning.[10-12] The minimal dye causes minimal change in the electrophoretic mobility pattern of the protein, whereas the saturation dye labels all available cysteine residues but causes a shift in the electrophoretic mobility of labeled proteins.[13] The same pooled reference sample, used on all gels within an experiment, serves as an internal reference for normalization and spot matching.[12] The gel is scanned at three different wavelengths, yielding images for each of the different samples; variation between gels is minimized and difficulties in correctly matching protein spots across different gels are reduced.[10,11] Significant advantages of the DIGE technology include a dynamic range of over four orders of magnitude and full compatibility with MS. However, careful validation of identified markers using alternative techniques is still needed.
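
The pooled internal standard is what makes cross-gel comparison possible, and the arithmetic is simple: each Cy3 or Cy5 spot volume is expressed as a log ratio to the Cy2 volume of the matched spot on the same gel. The sketch below illustrates this normalization with invented spot volumes; it is not any vendor's analysis software, just the underlying ratio calculation.

# Sketch of 2D DIGE normalization against the pooled Cy2 internal standard:
# each spot volume from the Cy3 (sample A) and Cy5 (sample B) channels is
# expressed as a log2 ratio to the Cy2 channel on the same gel, which is what
# allows matched spots to be compared across gels. Spot volumes are invented.
import math

# spot_id -> (Cy2 pooled standard, Cy3 sample A, Cy5 sample B) volumes on one gel
gel_spots = {
    'spot_001': (12000.0, 23000.0, 11500.0),
    'spot_002': ( 8000.0,  7900.0,  8400.0),
    'spot_003': (15000.0,  6100.0, 30500.0),
}

for spot, (cy2, cy3, cy5) in gel_spots.items():
    ratio_a = math.log2(cy3 / cy2)   # standardized abundance, sample A
    ratio_b = math.log2(cy5 / cy2)   # standardized abundance, sample B
    print(f'{spot}: log2(A/std) = {ratio_a:+.2f}, log2(B/std) = {ratio_b:+.2f}, '
          f'A vs B = {ratio_a - ratio_b:+.2f}')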

In a study that compared three commonly used DIGE analysis software packages, Kang et al. concluded that although the three packages performed satisfactorily with minimal user intervention, significant improvements in the accuracy of analysis could be achieved.[14] Moreover, it was suggested that results concerning the magnitude of differential expression between protein spots after statistical analysis by such software must be examined with care.[14]

Figure 1.  Procedures for performing a 2D DIGE experiment. Cy: cyanine; DIGE: difference in-gel electrophoresis.

The choice of appropriate statistical methods for the analysis of DIGE data has to be considered. Statistical methodological error can be addressed by the use of statistical methods that apply a false-discovery rate (FDR) for the determination of significance. In this method, q-values are calculated for all protein spots. The q-value of each spot corresponds to the expected proportion of false positives incurred when a change in expression level of that protein spot is called significant.

Despite its ease of use and its allowing the researcher to select an appropriate FDR according to study requirements, this approach was found to be applicable only to DIGE experiments using a two-dye labeling scheme, as a three-dye labeling approach violated the assumption of data independence required for statistical analysis.[16] Other statistical tests that have been applied to the analysis of DIGE results include significance analysis of microarrays,[7] principal components analysis[17,18] and partial least squares discriminant analysis.[18,19] Detailed discussions of the different statistical approaches applicable to proteomic research are beyond the scope of this review, and readers may refer to [18,20] for further reading.
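
For readers who want to see what FDR control looks like operationally, here is a minimal sketch of the Benjamini-Hochberg procedure applied to a set of spot p-values; the q-value method referred to above differs in detail (it additionally estimates the proportion of true nulls), and the p-values here are invented.

# Sketch of false-discovery-rate control for DIGE spot statistics using the
# Benjamini-Hochberg procedure. The p-values are invented for illustration.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices of tests declared significant at FDR = alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    threshold_rank = -1
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            threshold_rank = rank
    return set(order[:threshold_rank]) if threshold_rank > 0 else set()

spot_pvalues = [0.0004, 0.012, 0.049, 0.21, 0.003, 0.8, 0.038, 0.0007]
significant = benjamini_hochberg(spot_pvalues, alpha=0.05)
for i, p in enumerate(spot_pvalues):
    flag = 'significant' if i in significant else 'not significant'
    print(f'spot {i}: p = {p:.4f} -> {flag} at FDR 0.05')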

Using 2D DIGE, Yu et al. successfully identified biomarkers that were associated with pancreatic cancer.[21] In the study, 24 upregulated and 17 downregulated proteins were identified by MS. Among those proteins, upregulation of apolipoprotein E, α-1-antichymotrypsin and inter-α-trypsin inhibitor was confirmed by western blot analysis. Furthermore, the association of those three proteins with pancreatic cancer was successfully validated in another series of 20 serum samples from pancreatic cancer patients. Using a similar approach, Huang et al. identified and confirmed the upregulation of transferrin in the sera of patients with breast cancer.[22] When Sun et al. compared the proteomic profiles between malignant and adjacent benign tissue samples from patients with hepatocellular carcinoma, they showed that 2D DIGE is not limited to serum or plasma samples.[23] In their study, overexpression of Hsp70/Hsp90-organizing protein and heterogeneous nuclear ribonucleoproteins C1 and C2 was identified by 2D DIGE coupled with MS analysis, and the findings were successfully validated by both western blotting and IHC staining. Next, Kondo et al. applied 2D DIGE to laser-microdissected cells from fresh patient tissues.[13] Using this protocol, a 1-mm area of an 8-12-µm-thick tissue section was shown to be sufficient. These examples demonstrate the high sensitivity and broad applicability of 2D DIGE for proteomic investigations using various types of patient samples and provide evidence that 2D DIGE is a powerful technique for biomarker discovery.

MALDI Imaging Mass Spectrometry

A deeper understanding of the complex biochemical processes occurring within tumor cells and tissues requires a knowledge of the spatial and temporal expression of individual proteins. Currently, such information is mainly obtained by IHC staining for specific proteins in patient tissues.[8,24,25] Nevertheless, IHC has limited use in high-throughput proteomic biomarker discovery because only a few proteins can be immunostained simultaneously. MALDI IMS allows researchers to analyze proteomic expression profiles directly from patient tissue sections.[26-28] The protocol begins with mounting a tissue section onto a sample plate (Figure 2). MALDI matrix is then applied onto the tissue sample, which is analyzed by MALDI MS in order to obtain mass spectra from predefined locations across the entire patient tissue section. The mass spectrum from each location is a complete proteomic profile for that particular area. All acquired mass spectra from the entire tissue are then compiled to create a 2D map for that tissue sample. This map can then be compared with those from other tissue samples to identify changes in protein or peptide expression, or maps from different areas within the same tissue section can be compared. Importantly, this technology allows the high-throughput discovery of novel protein markers. In addition, correlations between protein expression and tissue histology can also be studied easily.
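
Conceptually, each pixel of a MALDI IMS ion image is just the intensity of one m/z window taken from the spectrum acquired at that raster position. The sketch below assembles such an image from a synthetic data cube and applies total-ion-current normalization; real data would be read from a format such as imzML, and the target m/z is an arbitrary placeholder.

# Sketch: assembling a single-ion image from MALDI IMS data. Each raster
# position (x, y) has a mass spectrum; summing the intensity inside a narrow
# m/z window around a target peak gives one pixel of the ion image.
# The "spectra" here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
nx, ny, n_channels = 40, 30, 2000
mz_axis = np.linspace(2000, 20000, n_channels)        # common m/z axis
spectra = rng.random((nx, ny, n_channels))            # stand-in spectra

target_mz, half_window = 8451.0, 5.0                  # placeholder candidate peak
window = (mz_axis >= target_mz - half_window) & (mz_axis <= target_mz + half_window)

ion_image = spectra[:, :, window].sum(axis=2)         # one intensity per pixel
ion_image /= spectra.sum(axis=2)                      # TIC normalization

print('ion image shape:', ion_image.shape)
print('max-intensity pixel:', np.unravel_index(ion_image.argmax(), ion_image.shape))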

Most studies using MALDI IMS have been performed on frozen tissue sections ranging from 5 to 20 µm in thickness.[26,27,29] After sectioning, a MALDI matrix is applied either by automated spraying or spotting. The matrix of choice is usually α-cyano-4-hydroxy-cinnamic acid for peptides and sinapinic acid (3,5-dimethoxy-4-hydroxycinnamic acid) for proteins.

Figure 2.  Procedures for MALDI imaging. IMS: Imaging mass spectrometry; MS: Mass spectrometry.

Spotting allows the precise application of matrix to areas of interest and minimizes the diffusion of analyte material across the sample, although the imaging resolution achieved by spotting is lower (~150 µm). A laser beam is then fired towards the area of interest on the tissue section to generate protein ions for analysis by a mass analyzer.[29] Among the different mass analyzers, TOF analyzers are the most commonly used owing to their high sensitivity, broad mass range and suitability for detection of ions generated by MALDI. Use of other mass analyzers such as TOF-TOF, quadrupole TOF (QTOF), ion traps (ITs) and Fourier transform-ion cyclotron resonance (FT-ICR) has also been reported in other studies.[30-33]

After obtaining the mass spectra, statistical analysis needs to be performed to identify statistically significant features that could have potential use as biomarkers. But before such analyses can be applied, there has to be background-noise subtraction, spectral normalization and spectral alignment.[34,35] Statistical methods used to identify significant differences in peak intensity are symbolic discriminant analysis and principal component analysis. Symbolic discriminant analysis determines discriminatory features and builds functions based on such features for distinguishing samples according to their classification.[36,37] Using this approach, Lemaire et al. found a putative proteomic biomarker from ovarian cancer tissues by MALDI IMS that was later identified to be the Reg-α protein, a member of the proteasome activator 11S.[37] This result was later successfully validated by western blot (protein expression found in 88.8% of carcinoma cases vs 18.7% of benign disease) and IHC (protein expression found in 63.6% of carcinoma tissues vs 16.6% of benign tissues).[37] On the other hand, principal component analysis reduces data complexity by transforming data based on peak intensities to information based on data variance, termed ‘principal components’, resulting in a list of significant peaks (principal components) ordered by decreasing variance.[35,38,39] Neither symbolic discriminant analysis nor principal component analysis is capable of performing unsupervised classification. This aim requires the use of other methods such as hierarchical clustering.[39,40] In this method, identified peaks are clustered as nodes in a pair-wise manner according to similarity until a dendrogram is obtained, providing information as to the degree of association of all peak masses in a hierarchical fashion. Peaks that are capable of differentiating between different histological/pathological features could then be chosen for further validation of their value as tumor markers.[39]
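
A minimal sketch of the unsupervised step is given below: after preprocessing, the spectra-by-peaks intensity matrix is log-transformed, centered, and decomposed by SVD to obtain principal components, whose scores can then be inspected (or clustered) for group separation. The matrix is synthetic and the group structure is planted purely for illustration.

# Sketch: unsupervised analysis of a peak-intensity matrix (spectra x peaks)
# after preprocessing. PCA is done with a plain SVD on the centered,
# log-transformed matrix; the intensities are synthetic, not real IMS data.
import numpy as np

rng = np.random.default_rng(2)
n_spectra, n_peaks = 60, 150
intensities = rng.lognormal(mean=2.0, sigma=0.5, size=(n_spectra, n_peaks))
intensities[:30, :10] *= 3.0          # pretend 10 peaks are higher in one group

X = np.log2(intensities)
X -= X.mean(axis=0)                   # center each peak across spectra

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                        # spectra projected onto principal components
explained = s**2 / (s**2).sum()

print('variance explained by PC1, PC2: %.2f, %.2f' % (explained[0], explained[1]))
print('PC1 separates the two groups:',
      scores[:30, 0].mean().round(2), 'vs', scores[30:, 0].mean().round(2))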

In MALDI IMS, protein identification cannot be performed with confidence based solely on molecular weight. However, Groseclose et al. have developed a method using in situ digestion of proteins directly on the tissue section.[41] They first used MALDI IMS to obtain a map of the protein and peptide spectra, then spotted a consecutive section of the same tissue sample with trypsin for protein digestion, and then spotted matrix solution onto the digested spots; the resulting peptides were identified directly from the tissue by MS/MS. This modification increases the confidence in protein identification. The time required for MALDI IMS analysis per tissue section is as follows: tissue sectioning, mounting and matrix application: 4-8 h; MALDI image acquisition: 1-2 days; spectral analysis: 1-2 h.[33,39]

Recently, in situ enzymatic digestion has been successfully applied for improving the retrieval of peptides directly from formalin-fixed, paraffin-embedded (FFPE) tissue samples.[27] Such development has greatly facilitated the application of MALDI IMS in FFPE tissues.[26,42] In fact, Stauber et al. identified the downregulation of ubiquitin, transelongation factor 1, hexokinase and neurofilament M from FFPE brain tissues of rat models of Parkinson disease using this modified technique.[42] The success of performing proteomic profiling using MALDI IMS directly on FFPE tissues opens up great possibilities for using archival patient materials in high-throughput biomarker discovery. Novel cancer biomarkers identified using MALDI IMS still require validation by other techniques such as IHC.

Electron Transfer Dissociation MS

Post-translational modifications play important roles in the structure and function of proteins such as protein folding, protein localization, regulation of protein activity and mediation of protein-protein interaction. Two common forms of PTM that have been implicated in cancer development are phosphorylation and glycosylation. Previously, phosphoproteomic studies have led to the identification of novel tyrosine kinase substrates in breast cancer,[43] discovery of novel therapeutic targets for brain cancer[44] and increased understanding of signaling pathways involved in lung cancer formation.[45,46] Conversely, the identification of abnormally glycosylated proteins, such as mucins, has provided novel biomarkers and therapeutic targets for ovarian cancer.[47]

The study of PTM begins with digesting the target protein using enzymes such as trypsin, introducing the fragments into the MS for determination of the sites and types of modification and, at the same time, identification of the protein. The analysis is conventionally carried out using collision-induced dissociation (CID) MS, where peptides are collided with a neutral gas for cleavage of peptide bonds to produce b- and y-type ions (Figure 3). A complete series of peptides differing in length by one amino acid is produced, leading to identification of the protein by peptide-sequence determination. However, for phosphopeptides, the presence of phosphate groups would compete with the peptide backbone as the preferred cleavage site. The end result is a reduced set of peptide fragments, which hinders protein identification, and the exact location of the phosphate group on the peptide cannot be determined accurately when there is more than one possible phosphorylation site.[48,49]

Figure 3.  Peptide bond-cleavage site for a-, b-, c-, x-, y– and z-type ions.
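
To make the ion nomenclature in Figure 3 concrete, the sketch below computes singly charged b/y (CID-type) and c/z-dot (ETD-type) fragment masses for a short, made-up peptide carrying one phosphoserine; the +79.966 Da shift appears only in fragments containing the modified residue, which is the basis of site localization. Residue and constant masses are standard monoisotopic values; the peptide is not taken from the cited studies.

# Sketch: monoisotopic singly charged fragment ion series for a short peptide
# carrying one phosphoserine ("s"). CID is read out mainly as b/y ions, ETD as
# c/z-dot ions; the phospho mass shift appears only in fragments containing
# the modified residue. The peptide is a made-up example.

RESIDUE = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
           'V': 99.06841, 'L': 113.08406, 'D': 115.02694, 'K': 128.09496,
           'E': 129.04259, 'F': 147.06841, 'R': 156.10111,
           's': 87.03203 + 79.96633}   # lower-case s = phosphoserine

PROTON, H2O, NH3, H = 1.00728, 18.01056, 17.02655, 1.00783

def ion_series(peptide):
    """b/y (CID-type) and c/z-dot (ETD-type) singly charged fragment m/z values."""
    n = len(peptide)
    prefix = [sum(RESIDUE[aa] for aa in peptide[:i]) for i in range(1, n)]
    suffix = [sum(RESIDUE[aa] for aa in peptide[i:]) for i in range(1, n)]
    b = [m + PROTON for m in prefix]
    y = [m + H2O + PROTON for m in suffix]
    c = [m + NH3 + PROTON for m in prefix]
    z = [m + H2O + PROTON - NH3 + H for m in suffix]   # z-dot (radical) ions
    return b, y, c, z

if __name__ == '__main__':
    b, y, c, z = ion_series('GAsFLK')
    for i, (bi, yi, ci, zi) in enumerate(zip(b, y, c, z), start=1):
        print(f'b{i}={bi:9.4f}  y{6-i}={yi:9.4f}  c{i}={ci:9.4f}  z{6-i}={zi:9.4f}')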

Electron transfer dissociation is a recently developed dissociation technique for the analysis of peptides by MS, utilizing radiofrequency quadrupole ion traps such as 2D linear IT, spherical IT and Orbitrap™ (Thermo Fisher Scientific Inc., MA, USA) mass analyzers.[48,49] In this technology, peptides are fragmented by transfer of electrons from anions to induce cleavage of Cα-N bonds along the peptide backbone, hence producing c- and z-type ions (Figure 3). In contrast to CID, ETD preserves the localization of labile PTMs and also provides peptide-sequence information.[48] However, ETD fails to fragment peptide bonds adjacent to proline, which are readily cleaved by CID.[50] A study that compared the performance of CID with that of ETD found that only 12% of the identified peptides were commonly detected by the two techniques. It was also reported that CID successfully identified more peptides with charge states of +2 and below, whereas ETD was better at identifying peptide ions with charge states greater than +2.[51] Therefore, it is suggested that CID and ETD should be used together to complement each other.[52] Han et al. successfully differentiated the isobaric amino acids isoleucine and leucine from one another by performing CID on the resulting z-ions after ETD; the presence of an isoleucine residue was then confirmed by the detection of a specific 29-Da loss from the peptide.[53] A clear advantage of using ETD for the analysis of phosphopeptides is a near-complete series of c- and z-ions without loss of phosphoric acid,[48] greatly facilitating the determination of phosphorylation sites and the identification of phosphopeptides. Recently, an analysis of the yeast phosphoproteome using ETD successfully identified 1252 phosphorylation sites on 629 proteins, whose expression levels ranged from less than 50 to 1,200,000 copies per cell.[54] In another study using ETD, a total of 1435 phosphorylation sites were identified from human embryonic kidney 293T cells, of which 1141 (80%) were previously unidentified. Finally, a study by Molina et al. successfully identified 80% of the known phosphorylation sites in more than 1000 yeast phosphopeptides in one single study using a combination of ETD and CID.[55]

In addition, ETD can be applied to investigate other forms of PTM, such as N-linked glycosylation.[56,57] N-linked glycans contain a common core with branched structures. These can be processed by stepwise addition or removal of monosaccharide residues linked by glycosidic bonds, producing highly varied forms of N-linked glycan structures.[58-60] A weakness of analyzing glycopeptides using CID is that cleavage of glycosidic bonds occurs with little peptide backbone fragmentation, so that only the glycan structure is available.[61] Hogan et al. used CID and ETD together to overcome this problem, determining both the glycan structure and the glycosylation site.[61] CID was initially used for cleavage of glycosidic bonds, which allowed the entire glycan structure to be inferred from the CID spectrum alone. ETD was later performed to dissociate the same peptide, which resulted in a contiguous series of fragment ions with no loss of glycan molecules, allowing the identification of both the site of glycosylation and the identity of the glycoprotein.[61] Readers are strongly encouraged to refer to [49] and [62] for further reading. In a comprehensive comparison of CID versus ETD for the identification of peptides without PTMs, CID was found to identify 50% more peptides than ETD (3518 by CID vs 2235 by ETD), but ETD provided somewhat better sequence coverage (67% for CID vs 82% for ETD). It turns out that ETD produced more uniformly fragmented ions with intensities that were five- to ten-times lower than those produced by CID.[55] Finally, the best sequence coverage, of up to 92%, was achieved when consecutive CID and ETD were performed.[55]

This increase in sequence coverage using the combined approach is needed for studies requiring de novo peptide identifications. As such, this strategy is particularly suited for studies involved in the discovery, identification and characterization of novel peptides or proteins and their PTMs for biomarker use. A prerequisite of this technique is that the biological samples under investigation must undergo some form of fractionation before they are amenable to analysis by ETD or CID. This is achieved by the use of LC techniques, such as reverse-phase, strong cation exchange or strong anion exchange chromatography, and serves to reduce the complexity and wide dynamic range of protein-expression levels commonly found in biological specimens. Given the important roles of PTM in the function and activity of proteins, this technology paves the way for exploring the intricate cellular activities within a cancer cell.

References

  1. Duffy MJ, van Dalen A, Haglund C et al. Tumor markers in colorectal cancer: European Group on Tumor Markers (EGTM) guidelines for clinical use. Eur. J. Cancer 43(9),1348-1360 (2007).
  2. Duffy MJ. Role of tumor markers in patients with solid cancers: a critical review. Eur. J. Intern. Med. 18(3),175-184 (2007).
  3. Bertucci F, Birnbaum D, Goncalves A. Proteomics of breast cancer: principles and potential clinical applications. Mol. Cell. Proteomics 5(10),1772-1786 (2006).
  4. Feng JT, Shang S, Beretta L. Proteomics for the early detection and treatment of hepatocellular carcinoma. Oncogene 25(27),3810-3817 (2006).
  5. Miller I, Crawford J, Gianazza E. Protein stains for proteomic applications: which, when, why? Proteomics 6(20),5385-5408 (2006).
  6. Kumarathasan P, Mohottalage S, Goegan P, Vincent R. An optimized protein in-gel digest method for reliable proteome characterization by MALDI-TOF-MS analysis. Anal. Biochem. 346(1),85-89 (2005).
  7. Meunier B, Bouley J, Piec I, Bernard C, Picard B, Hocquette JF. Data analysis methods for detection of differential protein expression in two-dimensional gel electrophoresis. Anal. Biochem. 340(2),226-230 (2005).
  8. Chan CM, Wong SC, Lam MY et al. Proteomic comparison of nasopharyngeal cancer cell lines C666-1 and NP69 identifies down-regulation of annexin II and β2-tubulin for nasopharyngeal carcinoma. Arch. Pathol. Lab. Med. 132(4),675-683 (2008).
  9. Wong SC, Wong VW, Chan CM et al. Identification of 5-fluorouracil response proteins in colorectal carcinoma cell line SW480 by two-dimensional electrophoresis and MALDI-TOF mass spectrometry. Oncol. Rep. 20(1),89-98 (2008).
  10. Marouga R, David S, Hawkins E. The development of the DIGE system: 2D fluorescence difference gel analysis technology. Anal. BioAnal. Chem. 382(3),669-678 (2005).
  11. Timms JF, Cramer R. Difference gel electrophoresis. Proteomics 8(23-24),4886-4897 (2008).
  12. Minden J. Comparative proteomics and difference gel electrophoresis. Biotechniques 43(6),739-745 (2007).
  13. Kondo T, Hirohashi S. Application of highly sensitive fluorescent dyes (CyDye DIGE Fluor saturation dyes) to laser microdissection and two-dimensional difference gel electrophoresis (2D-DIGE) for cancer proteomics. Nat. Protoc. 1(6),2940-2986 (2007).
  14. Kang Y, Techanukul T, Mantalaris A, Nagy JM. Comparison of three commercially available DIGE analysis software packages: minimal user intervention in gel-based proteomics. J. Proteome Res. 8(2),1077-1084 (2009).
  15. Kreil DP, Karp NA, Lilley KS. DNA microarray normalization methods can remove bias from differential protein expression analysis of 2D difference gel electrophoresis results. Bioinformatics 20(13),2026-2034 (2004).
  16. Karp NA, McCormick PS, Russell MR, Lilley KS. Experimental and statistical considerations to avoid false conclusions in proteomics studies using differential in-gel electrophoresis. Mol. Cell. Proteomics 6(8),1354-1364 (2007).
  17. Kleno TG, Leonardsen LR, Kjeldal HØ, Laursen SM, Jensen ON, Baunsgaard D. Mechanisms of hydrazine toxicity in rat liver investigated by proteomics and multivariate data analysis. Proteomics 4(3),868-880 (2004).
  18. Smit S, Hoefsloot HCJ, Smilde AK. Statistical data processing in clinical proteomics. J. Chromatogr. B Analyt. Technol. Biomed. Life Sci. 866(1-2),77-88 (2008).
  19. Karp NA, Griffin JL, Lilley KS. Application of partial least squares discriminant analysis to two-dimensional difference gel studies in expression proteomics. Proteomics 5(1),81-90 (2005).
  20. Grove H, Jørgensen BM, Jessen F et al. Combination of statistical approaches for analysis of 2-DE data gives complementary results. J. Proteome Res. 7(12),5119-5124 (2008).
  21. Yu KH, Rustgi AK, Blair IA. Characterization of proteins in human pancreatic serum using differential gel electrophoresis and tandem mass spectrometry. J. Proteome Res. 4(5),1742-1751 (2005).
  22. Huang H-L, Stasyk T, Morandell S et al. Biomarker discovery in breast cancer serum using 2-D differential gel electrophoresis/MALDI-TOF/TOF and data validation by routine clinical assays. Electrophoresis 27(8),1641-1650 (2006).
  23. Sun W, Xing B, Sun Y et al. Proteome analysis of hepatocellular carcinoma by two-dimensional difference gel electrophoresis. Mol. Cell. Proteomics 6(10),1798-1808 (2007).
    •• Presents a very detailed account of the procedures for 2D difference gel electrophoresis analysis.
  24. Wong SC, Chan AT, Chan JK, Lo YM. Nuclear β-catenin and Ki-67 expression in choriocarcinoma and its pre-malignant form. J. Clin. Pathol. 59(4),387-392 (2006).
  25. Chan CM, Ma BB, Hui EP et al. Cyclooxygenase-2 expression in advanced nasopharyngeal carcinoma: a prognostic evaluation and correlation with hypoxia inducible factor 1 α and vascular endothelial growth factor. Oral Oncol. 43(4),373-378 (2007).
  26. Groseclose MR, Massion PP, Chaurand P, Caprioli RM. High throughput proteomic analysis of formalin-fixed paraffin embedded tissue microarrays using MALDI imaging mass spectrometry. Proteomics 8(18),3715-3724 (2008).
  27. Lemaire R, Desmons A, Tabet JC, Day R, Salzet M, Fournier I. Direct analysis and MALDI imaging of formalin-fixed, paraffin-embedded tissue sections. J. Proteome Res. 6(4),1295-1305 (2007).
    •• Provides a detailed account of procedures for the analysis of paraffin-embedded tissue sections using MALDI imaging mass spectrometry (MS).
  28. Meistermann H, Norris JL, Aerni HR et al. Biomarker discovery by imaging mass spectrometry: transthyretin is a biomarker for gentamicin-induced nephrotoxicity in rat. Mol. Cell Proteomics 5(10),1876-1886 (2006).
  29. Cornett DS, Reyzer ML, Chaurand P, Caprioli RM. MALDI imaging mass spectrometry: molecular snapshots of biochemical systems. Nat. Methods 4(10),828-833 (2007).
    •• Excellent review on the application of MALDI imaging MS for studying biological systems.
  30. Shimma S, Sugiura Y, Hayasaka T, Zaima N, Matsumoto M, Setou M. Mass imaging and identification of biomolecules with MALDI-QIT-TOF-based system. Anal. Chem. 80(3),878-885 (2008).
  31. Taban IM, Altelaar AFM, van der Burgt YEM et al. Imaging of peptides in the rat brain using MALDI-FTICR mass spectrometry. J. Am. Soc. Mass Spectrom. 18(1),145-151 (2007).
  32. Hsieh Y, Casale R, Fukuda E et al. Matrix-assisted laser desorption/ionization imaging mass spectrometry for direct measurement of clozapine in rat brain tissue. Rapid Commun. Mass Spectrom. 20(6),965-972 (2006).
  33. Goodwin RJA, Penington SR, Pitt AR. Protein and peptides in pictures: imaging with MALDI mass spectrometry. Proteomics 8(18),3785-3800 (2008).
  34. Chaurand P, Norris JL, Cornett DS, Mobley JA, Caprioli RM. New developments in profiling and imaging of proteins from tissue sections by MALDI mass spectrometry. J. Proteome Res. 5(11),2889-2900 (2006).
  35. Yao I, Sugiura Y, Matsumoto M, Setou M. In situ proteomics with imaging mass spectrometry and principal component analysis in the Scrapper-knockout mouse brain. Proteomics 8(18),3692-3701 (2008).
  36. Schwartz SA, Weil RJ, Thompson RC et al. Proteomic-based prognosis of brain tumor patients using direct-tissue matrix-assisted laser desorption ionization mass spectrometry. Cancer Res. 65(17),7674-7681 (2005).
  37. Lemaire R, Menguellet SA, Stauber J et al. Specific MALDI imaging and profiling for biomarker hunting and validation: fragment of the 11S proteasome activator complex, Reg α fragment, is a new potential ovary cancer biomarker. J. Proteome Res. 6(11),4127-4134 (2007).
  38. Walch A, Rauser S, Deininger SO, Höfler H. MALDI imaging mass spectrometry for direct tissue analysis: a new frontier for molecular histology. Histochem. Cell Biol. 130(3),421-434 (2008).
  39. Deininger SO, Ebert MP, Fütterer A, Gerhard M, Röcken C. MALDI imaging combined with hierarchical clustering as a new tool for the interpretation of complex human cancers. J. Proteome Res. 7(12),5230-5236 (2008).
  40. McCombie G, Staab D, Stoeckli M, Knochenmuss R. Spatial and spectral correlations in MALDI mass spectrometry images by clustering and multivariate analysis. Anal. Chem. 77(19),6118-6124 (2005).
  41. Groseclose MR, Andersson M, Hardesty WM et al. Identification of proteins directly from tissue: in situ tryptic digestions coupled with imaging mass spectrometry. J. Mass Spectrom. 42(2),254-262 (2007).
    • First report of protein identification performed directly from tissue sections.
  42. Stauber J, Lemaire R, Franck J et al. MALDI imaging of formalin-fixed paraffin-embedded tissues: application to model animals of Parkinson disease for biomarker hunting. J. Proteome Res. 7(3),969-978 (2008).
  43. Chen Y, Choong LY, Lin Q et al. Differential expression of novel tyrosine kinase substrates during breast cancer development. Mol. Cell Proteomics 6(12),2072-2087 (2007).
  44. Huang PY, Cavenee WK, Furnari FB, White FM. Uncovering therapeutic targets for glioblastoma: a systems biology approach. Cell Cycle 6(22),2750-2754 (2007).
  45. Guha U, Chaerkady R, Marimuthu A et al. Comparisons of tyrosine phosphorylated proteins in cells expressing lung cancer-specific alleles of EGFR and KRAS. Proc. Natl Acad. Sci. USA 105(37),14112-14117 (2008).
  46. Rikova K, Guo A, Zeng Q et al. Global survey of phosphotyrosine signaling identifies oncogenic kinases in lung cancer. Cell 131(6),1190-1203 (2007).
  47. Singh AP, Senapati S, Ponnusamy MP et al. Clinical potential of mucins in diagnosis, prognosis, and therapy of ovarian cancer. Lancet Oncol. 9(11),1076-1085 (2008).
  48. Syka JEP, Coon JJ, Schroeder MJ, Shabanowitz J, Hunt DF. Peptide and protein sequence analysis by electron transfer dissociation mass spectrometry. Proc. Natl Acad. Sci. USA 101(26),9528-9533 (2004).
    • First report of the application of electron transfer dissociation MS for the analysis of peptides and proteins.
  49. Wiesner J, Premsler T, Sickmann A. Application of electron transfer dissociation (ETD) for the analysis of posttranslational modifications. Proteomics 8(21),4466-4483 (2008).
  50. Hayakawa S, Hashimoto M, Matsubara H, Turecek F. Dissecting the proline effect: dissociations of proline radicals formed by electron transfer to protonated Pro-Gly and Gly-Pro dipeptides in the gas phase. J. Am. Chem. Soc. 129(25),7936-7949 (2007).
  51. Good DM, Wirtala M, McAlister GC, Coon JJ. Performance characteristics of electron transfer dissociation mass spectrometry. Mol. Cell Proteomics 6(11),1942-1951 (2007).
  52. Han H, Xia Y, Yang M, McLuckey SA. Rapidly alternating transmission mode electron-transfer dissociation and collisional activation for the characterization of polypeptide ions. Anal. Chem. 80(9),3492-3497 (2008).
  53. Han H, Xia Y, McLuckey SA. Ion trap collisional activation of c and z ions formed via gas-phase ion/ion electron-transfer dissociation. J. Proteome Res. 6(8),3062-3069 (2007).
  54. Chi A, Huttenhower C, Geer LY et al. Analysis of phosphorylation sites on proteins from Saccharomyces cerevisiae by electron transfer dissociation (ETD) mass spectrometry. Proc. Natl Acad. Sci. USA 104(7),2193-2198 (2007).
  55. Molina H, Matthiesen R, Kandasamy K, Pandey A. Comprehensive comparison of collision induced dissociation and electron transfer dissociation. Anal. Chem. 80(13),4825-4835 (2008).
    • Study comparing the characteristics of peptide fragmentation performed by collision-induced dissociation with that of electron transfer dissociation.
  56. Catalina MI, Koeleman CAM, Deelder AM, Wuhrer M. Electron transfer dissociation of N-glycopeptides: loss of the entire N-glycosylated asparagine side chain. Rapid Commun. Mass Spectrom. 21(6),1053-1061 (2007).
  57. Abbott KL, Aoki K, Lim JM et al. Targeted glycoproteomic identification of biomarkers for human breast carcinoma. J. Proteome Res. 7(4),1470-1480 (2008).
  58. Yan A, Lennarz WJ. Unraveling the mechanism of protein N-glycosylation. J. Biol. Chem. 280(5),3121-3124 (2005).
  59. Dube DH, Bertozzi CR. Glycans in cancer and inflammation – potential for therapeutics and diagnostics. Nat. Rev. Drug Discov. 4(6),477-488 (2005).
  60. Morelle W, Canis K, Chirat F, Faid V, Michalski J-C. The use of mass spectrometry for the proteomic analysis of glycosylation. Proteomics 6(14),3993-4015 (2006).
  61. Hogan JM, Pitteri SJ, Chrisman PA, McLuckey SA. Complementary structural information from a tryptic N-linked glycopeptide via electron transfer ion/ion reactions and collision induced dissociation. J. Proteome Res. 4(2),628-632 (2005).
  62. Mikesh LM, Ueberheide B, Chi A et al. The utility of ETD mass spectrometry in proteomic analysis. Biochim. Biophys. Acta 1764(12),1811-1822 (2006).

Advanced Proteomic Technologies for Cancer Biomarker Discovery

Part II

Reverse-phase Protein Array

One of the goals of proteomics is to identify protein changes associated with the development of diseases such as cancer.  Even with the rapid development of proteomic technologies during the past few years, analysis of patient samples is still a challenge. Difficulties arise from the fact that[63,64]:

  • Proteomic patterns differ among cell types;
  • Protein expression changes occur over time;
  • Proteins have a broad dynamic range of expression levels spanning several orders of magnitude;
  • Proteins can be present in multiple forms, such as polymorphisms and splice variants;
  • Traditional proteomic methods require relatively large amounts of protein;
  • Many proteomic technologies cannot be used to study protein-protein interactions.

The principle of the reverse-phase protein array (RPA) is simple and involves spotting patient samples in an array format onto a nitrocellulose support (Figure 4). Hundreds of patient specimens can be spotted onto a single array, allowing a large number of samples to be compared at once.[65] Each array is incubated with one particular antibody, generating a signal intensity proportional to the amount of analyte in each sample spot.[66] Signal detection is commonly performed by fluorescence, chemiluminescence or colorimetric methods. The results are quantified by scanning and analyzed with software such as P-SCAN and ProteinScan, which can be downloaded free of charge from [84].[67,68]

Figure 4.  Principle of reverse-phase protein array.

Main advantages of RPA technology include[69-71]:

  • Various types of biological samples can be used;
  • PTMs can be investigated;
  • Protein-protein interactions can be studied;
  • Labeling of patient samples with fluorescent dyes (e.g., 2D DIGE) or mass tags (e.g., isotope-coded affinity tag [ICAT]) is not required;
  • Samples spotted as a dilution series can be quantified within the linear range of detection (see the sketch after this list);
  • Quantitative measurement of any protein is possible by comparison with reference standards of known amounts on the same array.
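
As a rough illustration of how a dilution series and on-array reference standards support quantification within the linear range, the sketch below uses entirely hypothetical spot intensities and amounts; it is not code from, and does not reproduce, any of the cited RPA studies.

# Minimal sketch (hypothetical data): quantifying an analyte on a
# reverse-phase protein array by comparing a sample dilution series
# against reference standards of known amount spotted on the same array.
import numpy as np

# Reference standards: known amounts (pg/spot) and measured intensities.
std_amount = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
std_signal = np.array([120., 260., 495., 980., 1900.])   # hypothetical readout

# Fit the linear part of the standard curve: signal = slope * amount + offset.
slope, offset = np.polyfit(std_amount, std_signal, 1)

# Patient sample spotted as a 2-fold dilution series (hypothetical intensities).
dilution_factor = np.array([1, 2, 4, 8])
sample_signal = np.array([1500., 760., 370., 190.])

# Back-calculate amount per spot, correct for dilution, and trust only spots
# whose signal falls within the range covered by the standards.
in_range = (sample_signal >= std_signal.min()) & (sample_signal <= std_signal.max())
amount_per_spot = (sample_signal - offset) / slope
estimates = amount_per_spot[in_range] * dilution_factor[in_range]
print("estimated amount in undiluted sample (pg/spot):", estimates.round(1),
      "-> mean", estimates.mean().round(1))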

RPA is extremely sensitive, capable of detecting as little as zeptomole (10^-21 mole) levels of target proteins with less than 10% variance, allowing cell-signaling events to be analyzed from very small amounts of sample.[65,70,71] Assay sensitivity depends on antibody affinity, which varies between antigen-antibody pairs.[68] Of course, only known proteins with available antibodies can be detected; this method is therefore more suitable for biomarker screening or validation than for the discovery of novel proteins. To assist researchers in selecting suitable antibodies, two open antibody databases provide western blot results obtained with cell lysates.[72,73,85,86]

One application of RPA is the investigation of signaling pathways in human cancers. Zha et al. compared survival signaling events between Bcl-2-positive and -negative lymphomas and detected survival signals, independent of Bcl-2 expression, in follicular lymphoma, which were confirmed by validation with IHC.[71] In another study, patient-specific signaling pathways were identified in breast cancers using RPA: Bayesian clustering of a set of 54 subjects successfully separated normal subjects from cancer patients based on an epithelial signaling signature, and principal component analysis distinguished normal from cancer samples using a signature composed of a panel of kinase substrates.[69] Differences in cell signaling between patient-matched primary and metastatic lesions have also been found using RPA. In that study, six patient-matched primary and metastatic ovarian tumors were probed with antibodies against signaling proteins; the signaling profiles differed significantly between primary and metastatic tumors, and upregulation of phospho-c-kit distinguished five of the six metastatic tumors from the primary lesions.[70] These findings suggest that treatment strategies may need to target signaling events in disseminated tumor cells.

Reverse-phase protein array has also been used to validate mathematical models of cellular pathways. The p53-Mdm2 feedback loop is one of the best-studied cellular feedback mechanisms.[74] Normally, p53 activates the transcription and expression of Mdm2, which, in turn, suppresses p53 activity; this negative-feedback loop ensures low-level expression of p53 under normal conditions. Mathematical models have previously been used to investigate this negative-feedback loop.[67] Ramalingam et al. have shown, using RPA, that part of the mechanism of the p53-Mdm2 feedback loop can be explained by current mathematical models.[75]
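
For readers unfamiliar with such models, the sketch below integrates a generic two-variable negative-feedback loop in which p53 drives Mdm2 production and Mdm2 promotes p53 degradation. The equations and rate constants are simplified, hypothetical assumptions for illustration; they are not the specific published models cited above.

# Minimal sketch of a generic p53-Mdm2 negative-feedback model (hypothetical
# rate constants; not the models in refs [67,74,75]): p53 drives Mdm2
# production and Mdm2 promotes p53 degradation, giving damped oscillations.
def simulate(t_end=30.0, dt=0.01,
             k_p53=1.0,      # p53 production rate
             k_deg=0.8,      # Mdm2-dependent p53 degradation rate
             k_mdm2=0.6,     # p53-dependent Mdm2 production rate
             d_mdm2=0.4):    # Mdm2 turnover rate
    p53, mdm2 = 0.1, 0.1
    trace = []
    for i in range(int(t_end / dt)):
        dp53 = k_p53 - k_deg * mdm2 * p53           # repression of p53 by Mdm2
        dmdm2 = k_mdm2 * p53 - d_mdm2 * mdm2        # activation of Mdm2 by p53
        p53 += dp53 * dt
        mdm2 += dmdm2 * dt
        trace.append((i * dt, p53, mdm2))
    return trace

if __name__ == "__main__":
    for t, p, m in simulate()[::500]:               # print every 5 time units
        print(f"t={t:5.1f}  p53={p:6.3f}  Mdm2={m:6.3f}")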

Another important application of RPA is the identification of cancer-specific antigens. In one such study, fractionated cell lysates were arrayed and incubated with sera from 14 lung cancer patients, colon cancer patients and normal subjects; eight lysate fractions were recognized by the sera of four cancer patients, whereas none of the sera from normal individuals was positive.[76] This study demonstrates the diagnostic potential of using RPA to identify cancer antigens that induce an immune response in cancer patients.

Expert Commentary and Five-year View

The development of 2D DIGE in the past few years has provided researchers with a more accurate method for the relative quantification of proteins, substantially reducing the number of replicate 2D gels required and increasing its applicability to high-throughput biomarker discovery. MALDI MS has immensely facilitated the direct discovery of biomarkers from patient tissue. Although archival patient tissue samples are a potential source of material for tumor-marker research, high-throughput biomarker discovery using such samples has been problematic. With the development of MALDI IMS, investigators can now discover novel biomarkers directly from tissue sections and correlate their expression with the histopathological changes of tumors. Previously, investigation of the sites of protein PTMs has been difficult because MS dissociation techniques such as CID lead to preferential loss of the PTM; the use of ETD as a complementary peptide ion-dissociation method has allowed researchers to determine the precise location and structure of PTMs and to identify peptide sequences with higher confidence.

The rapid improvement of proteomic technologies will identify potential biomarkers for clinical use, but independent validation studies using clinical specimens must be performed before such markers can be applied clinically. In this regard, RPA offers the potential for high-throughput screening or validation of newly found markers: using this technique, researchers will be able to quantitatively measure and validate novel markers on hundreds of patient samples simultaneously.

A major problem for proteomic researchers is the dominance of a few highly abundant proteins in biological samples. This can be partially addressed by depleting the abundant proteins or by fractionating protein samples according to their characteristics. It is envisaged that, in the future, proteomic technologies will be developed to a stage capable of analyzing complex protein mixtures without preparatory fractionation. Such progress has recently been achieved in LC-MS, where the use of a high-field asymmetric waveform ion mobility spectrometry (FAIMS) device as an interface to an ion trap MS resulted in a more than fivefold increase in dynamic range without increasing the length of the LC-MS analysis.[77]

Another area that needs improvement is the standardization of protocols for patient-sample collection, because results have been found to be inconsistent among various MS studies,[78] and part of this inconsistency is attributable to differences in sample-collection and sample-handling procedures.[78,79] The Human Proteome Organization has published its findings on pre-analytical factors that affect plasma proteomic patterns and provided suggestions for sample handling.[80,81] Beyond the pre-analytical stages, consistent and strict adherence to predefined procedures or standards, from sample collection, sample processing and experimentation through to data analysis and result validation, is of utmost importance to minimize variation and achieve consistent, reproducible results.

Any newly identified potential biomarker must also be validated in an independent cohort of patients to establish its clinical value, but the translation of results from the laboratory to the clinic has been slow. It has therefore been suggested that quantitative MS could be used for the detection of such proteins.[82] The increasing availability of MS facilities to researchers worldwide will facilitate the detection, measurement and validation of protein biomarkers using quantitative MS techniques. Even after laboratory validation, diagnostic tests will need to be developed for each marker, and large-scale clinical trials will have to be performed to confirm the results. All of these efforts require the cooperation of personnel from various disciplines, including scientists, medical professionals, pharmaceutical companies and governments. Finally, it is hoped that an improved understanding of protein expression as cancer progresses will lead to the discovery and development of useful cancer biomarkers for patient diagnosis, prognosis, monitoring and treatment.

Key Issues

  • 2DE coupled with mass spectrometry has been the main workhorse for the proteomic discovery of novel biomarkers in the past 10 years, and the development of 2D difference gel electrophoresis has substantially improved the quantification accuracy of 2DE.
  • MALDI imaging mass spectrometry has allowed the identification of novel proteomic features directly from patient tissue sections for correlation with histopathological changes.
  • Electron transfer dissociation mass spectrometry has opened up the possibility of identifying both the structure and localization of post-translational modifications and the sequence of the underlying peptide/protein.
  • Reverse-phase protein array is a powerful tool for the high-throughput validation of novel biomarkers across hundreds of patient samples simultaneously.

References

63.  States DJ, Omenn GS, Blackwell TW et al. Challenges in deriving high-confidence protein identifications from data gathered by a HUPO plasma proteome collaborative study. Nat. Biotechnol. 24(3),333-338 (2006).

64. Wulfkuhle JD, Edmiston KH, Liotta LA, Petricoin EF 3rd. Technology insight: pharmacoproteomics for cancer – promises of patient-tailored medicine using protein microarrays. Nat. Clin. Pract. Oncol. 3(5),256-268 (2006).

•• Excellent review on the clinical application of reverse-phase protein array.

65. Tibes R, Qiu Y, Lu Y et al. Reverse phase protein array: validation of a novel proteomic technology and utility for analysis of primary leukemia specimens and hematopoietic stem cells. Mol. Cancer Ther. 5(10),2512-2521 (2006).

66. LaBaer J, Ramachandran N. Protein microarrays as tools for functional proteomics. Curr. Opin. Chem. Biol. 9(1),14-19 (2005).

67. Ramalingam S, Honkanen P, Young L et al. Quantitative assessment of the p53-Mdm2 feedback loop using protein lysate microarrays. Cancer Res. 67(13),6247-6252 (2007).

68. Nishizuka S, Ramalingam S, Spurrier B et al. Quantitative protein network monitoring in response to DNA damage. J. Proteome Res. 7(2),803-808 (2008).

69. Petricoin EF 3rd, Bichsel VE, Calvert VS et al. Mapping molecular networks using proteomics: a vision for patient-tailored combination therapy. J. Clin. Oncol. 23(15),3614-3621 (2005).

70. Sheehan KM, Calvert VS, Kay EW et al. Use of reverse-phase protein microarrays and reference standard development for molecular network analysis of metastatic ovarian carcinoma. Mol. Cell Proteomics 4(4),346-355 (2005).

71. Zha H, Raffeld M, Charboneau L et al. Similarities of prosurvival signals in Bcl 2-positive and Bcl 2-negative follicular lymphomas identified by reverse phase protein microarray. Lab. Invest. 84(2),235-244 (2004).

72. Major SM, Nishizuka S, Morita D et al. AbMiner: a bioinformatic resource on available monoclonal antibodies and corresponding gene identifiers for genomic, proteomic, and immunologic studies. BMC Bioinformatics 7,192 (2006).

73. Spurrier B, Washburn FL, Asin S, Ramalingam S, Nishizuka S. Antibody screening database for protein kinetic modeling. Proteomics 7(18),3259-3263 (2007).

74. Ciliberto A, Novak B, Tyson JJ. Steady states and oscillations in the p53/Mdm2 network. Cell Cycle 4(3),488-493 (2005).

75. Ma L, Wagner J, Rice JJ, Hu W, Levine AJ, Stolovitzky GA. A plausible model for the digital response of p53 to DNA damage. Proc. Natl Acad. Sci. USA 102(40),14266-14271 (2005).

76. Madoz-Gurpide J, Kuick R, Wang H, Misek DE, Hanash SM. Integral protein microarrays for the identification of lung cancer antigens in sera that induce a humoral immune response. Mol. Cell. Proteomics 7(2),268-281 (2007).

77. Canterbury JD, Yi X, Hoopmann MR, MacCoss MJ. Assessing the dynamic range and peak capacity of nanoflow LC-FAIMS-MS on an ion trap mass spectrometer for proteomics. Anal. Chem. 80(18),6888-6897 (2008).

78. Coombes KR, Morris JS, Hu J, Edmonson SR, Baggerly KA. Serum proteomics – a young technology begins to mature. Nat. Biotechnol. 23(3),291-292 (2005).

79. Hortin GL. Can mass spectrometric protein profiling meet desired standards of clinical laboratory practice? Clin. Chem. 51(1),3-5 (2005).

80. Omenn GS, States DJ, Adamski M et al. Overview of the HUPO plasma proteome project: results from the pilot phase with 35 collaborating laboratories and multiple analytical groups, generating a core dataset of 3020 proteins and a publicly-available database. Proteomics 5(13),3226-3245 (2005).

81. Rai AJ, Gelfand CA, Haywood BC et al. HUPO plasma proteome project specimen collection and handling: towards the standardization of parameters for plasma proteome samples. Proteomics 5(13),3262-3277 (2005).

• Concise report on several pre-analytical factors that impact the results of plasma proteomic profiling.

82. Mann M. Can proteomics retire the western blot? J. Proteome Res. 7(8),3065 (2008).

Update from LC/GC North America.

Solutions for Separation Scientists. Aug 2012; 30(8).

30 years of LCGC

www.chromatographyonline.com

The key advances in separation science are covered in five areas of the discipline:

  1. sample preparation
  2. gas chromatography (GC) columns
  3. GC instrumentation
  4. liquid chromatography (LC) columns
  5. LC instrumentation

In the first area, automated sample preparation is now available in kit form (QuEChERS). A short list of automated sample-preparation techniques includes supercritical fluid extraction (SFE), microwave extraction, automated solvent extraction (ASE) and solid-phase extraction (SPE). A panel of experts views SPE as the best basic method of extraction, along with solid-phase microextraction with direct immersion, static headspace extraction and liquid-liquid extraction.[2] In GC, incremental improvements have been made with ionic liquids, multidimensional GC and fast GC. LC has advanced dramatically with ultrahigh-pressure LC and superficially porous particles, and LC-MS has become standard equipment routinely used in many labs.[1]

Biomarkers have to be detected against a background of 10^4 to 10^6 other components of comparable concentration that also partition with the stationary phase, and the partition coefficients of many of these species are similar or identical to those of the biomarker target. The issue is how to select and resolve fewer than 100 biomarkers from a milieu of a million components in a complex mixture. The novel idea is to target structure instead of the general properties of molecules.[3] How might this work? A single substrate, metabolite, hormone or toxin is identified in milliseconds by specific protein receptors, and the combinatorial chemistry community has shown that synthetic polynucleotides (aptamers) can be selected and amplified with selectivities approaching those of antibodies. This approach has been known for years as affinity chromatography. A distinct problem remains the natural process of post-translational modification (PTM), which may create isoforms, differing by as little as a single phosphate ester, that must be found in the proverbial soup.
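
The trade-off behind "plates versus selectivity" can be seen from the fundamental resolution equation. The sketch below uses illustrative plate counts, selectivity factors and retention factors (assumed values, not taken from the cited articles) to show that a modest gain in selectivity buys more resolution than doubling the plate count.

# Minimal sketch of the "plates versus selectivity" point: the fundamental
# resolution equation Rs = (sqrt(N)/4) * ((a-1)/a) * (k/(1+k)) shows that
# doubling the plate count N gains far less resolution than a modest
# increase in selectivity a.  Numbers below are illustrative only.
from math import sqrt

def resolution(n_plates, alpha, k):
    """Chromatographic resolution from plate count, selectivity and retention."""
    return (sqrt(n_plates) / 4.0) * ((alpha - 1.0) / alpha) * (k / (1.0 + k))

base = resolution(10_000, 1.05, 5.0)
more_plates = resolution(20_000, 1.05, 5.0)     # brute force: double the plates
more_selective = resolution(10_000, 1.10, 5.0)  # chemistry: raise selectivity

print(f"baseline Rs      = {base:.2f}")
print(f"2x plates Rs     = {more_plates:.2f}")      # only ~1.4x improvement
print(f"higher alpha Rs  = {more_selective:.2f}")   # ~1.9x improvement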

1. Bush L. Separation Science: Past, Present and Future. LCGC NA 2012; 30(8):620.

2. McNally ME. Analysis of the State of the Art: Sample Preparation. LCGC NA 2012; 30(8):648-651.

3. Regnier FE. Plates vs Selectivity: An Emerging Issue with Complex Samples. LCGC NA 2012; 30(8):622.


A Protease for ‘Middle-down’ Proteomics

Author and Reporter: Ritu Saxena, Ph.D.

Neil Kelleher and his research team at Northwestern University have developed a method for the enzymatic generation of large peptides for mass spectrometry-based proteomics using the protease OmpT. The method was published in a recent issue of the journal Nature Methods. http://www.ncbi.nlm.nih.gov/pubmed/22706673

Proteomics is defined as the study of the structure and function of proteins. Proteomic technologies will play an important role in drug discovery, diagnostics and molecular medicine because they provide the link between genes, proteins and disease. As researchers study defective proteins that cause particular diseases, their findings will help develop new drugs that either alter the shape of a defective protein or mimic a missing one. http://www.ama-assn.org/ama/pub/physician-resources/medical-science/genetics-molecular-medicine/current-topics/proteomics.page Although proteomics broadly refers to the study of the structure and function of proteins, the term is often used specifically for protein purification and mass spectrometry-based analysis.

‘Bottom-up’ and ‘top-down’ are the two main strategies for proteomic studies using mass spectrometry. In bottom-up proteomics, the more common approach, proteins are broken down into smaller pieces by enzymatic digestion, and the resulting peptides are then analyzed by mass spectrometry to determine their amino acid sequences and post-translational modifications. By identifying and sequencing these smaller pieces, researchers can determine the identity of the protein they make up. Top-down proteomics, on the other hand, skips the proteolysis step and focuses on the complete characterization of intact proteins and their post-translational modifications (PTMs).

“Although both the top-down and bottom-up approaches continue to mature, they each have limitations. The tryptic peptides used in the bottom-up approach are the primary unit of measurement, but their relatively small size (typically ~8–25 residues long) leads to problems such as sample complexity, difficulties in assigning peptides to specific gene products rather than protein groups, and loss of single and combinatorial PTM information. The top-down approach handles these issues by characterizing intact proteins, but its success declines in the high-mass region. Therefore, a hybrid approach based on 2–20 kDa peptides could unite positive aspects of both bottom-up and top-down proteomics” says Kelleher et al. in the research article.

The hybrid approach, referred to as ‘middle-down’ proteomics, enables the analysis of complex mixtures pre-sorted by protein size. Previous research efforts in ‘middle-down’ proteomics explored restricted proteolysis with enzyme alternatives to trypsin and with chemical methods (such as microwave-assisted acid hydrolysis); however, these methods generated peptides that were only marginally longer than those produced by trypsin digestion. For the current study, Kelleher adds: “We established an OmpT-based middle-down platform to analyze complex mixtures pre-sorted by protein size. After integrating the data from the middle-down workflow that was applied to ~20–100-kDa proteins fractionated from the HeLa cell proteome, we identified 3,697 unique peptides (average size: 6.3 kDa) from 1,038 unique proteins (26% average sequence coverage) at an estimated 1% false discovery rate”.

OmpT, a protease derived from the outer membrane of Escherichia coli K12, belongs to the omptin protease family and is known to cleave between two consecutive basic amino acid residues (Lys/Arg-Lys/Arg). The authors developed OmpT into an efficient reagent for generating >2-kDa peptides for middle-down proteomics, thus using OmpT to achieve robust yet restricted proteolysis of a complex proteome. http://www.ncbi.nlm.nih.gov/pubmed/22706673
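
To illustrate the consequence of this cleavage specificity, the sketch below performs an in silico digestion of a made-up protein sequence, cutting between consecutive Lys/Arg residues (OmpT-like) versus after every Lys/Arg not followed by Pro (trypsin-like). The sequence and the simplified cleavage rules are assumptions for demonstration only; this is not the authors' pipeline.

# Minimal sketch (hypothetical test sequence): in silico digestion comparing
# OmpT-like cleavage between two consecutive basic residues (Lys/Arg-Lys/Arg)
# with trypsin-like cleavage after every Lys/Arg not followed by Pro, to
# illustrate why OmpT yields the larger peptides favored in middle-down work.
import re

def digest(sequence, site_pattern):
    """Cut the sequence after every match of site_pattern (a lookahead regex)."""
    cut_points = [m.end() for m in re.finditer(site_pattern, sequence)]
    starts = [0] + cut_points
    ends = cut_points + [len(sequence)]
    return [sequence[s:e] for s, e in zip(starts, ends) if e > s]

OMPT_SITE = r"[KR](?=[KR])"        # cleave between consecutive basic residues
TRYPSIN_SITE = r"[KR](?!P)"        # cleave after K/R unless followed by P

protein = ("MKTAYIAKKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKRAVQVKAL"
           "PDAQFEVVHSLAKWKRQTLGQHDFSAGEGLYTHMKALRPDEDRLSPLHSVYVDQWDWERVM")

for name, pattern in [("OmpT", OMPT_SITE), ("trypsin", TRYPSIN_SITE)]:
    peptides = digest(protein, pattern)
    lengths = [len(p) for p in peptides]
    print(f"{name}: {len(peptides)} peptides, "
          f"mean length {sum(lengths) / len(lengths):.1f} residues")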

Kelleher and his team made news earlier for their work on ‘top-down’ proteomics, when they developed a new method that could separate and identify thousands of protein molecules quickly. In the first large-scale demonstration of the top-down method, the researchers were able to identify more than 3,000 protein forms created from 1,043 genes in human HeLa cells. The study was published last year in the journal Nature. http://www.ncbi.nlm.nih.gov/pubmed?term=22037311

Thus, Kelleher and his group were able to demonstrate that the OmpT-based proteomic approach offers robust yet restricted proteolysis, making it an attractive option for mass spectrometry-based analysis of protein primary structure.


Metabolic Disturbances Associated with Systemic Lupus Erythematosus.

via Metabolic Disturbances Associated with Systemic Lupus Erythematosus.

