
Introduction to Proteomics

Author and Curator: Larry H. Bernstein, MD, FCAP  

 

We have had a considerably extended discussion of proteins and peptides, protein synthesis, amino acid incorporation into protein, and the metabolism of carbohydrates and lipids.  It is also clear that the historic practice of medicine, and the classification of biological systems, has for over a century depended heavily on observed phenotypic traits and on disturbances of normal function that could be measured along traditional metabolic pathways.

What did we gain from the genomic revolution?

  1. Traceability of protein expression to a basic coded message
  2. The possibility of tracing disturbed cellular function to mutation related loss-of-function
  3. The ability to trace generational traits over long periods of time
  4. The promise of regenerating the enterprise of pharmacology and pharmaceutical intervention based on the silencing of or readjustment of regulated metabolic pathways to bring an adaptive rebalancing favoring extended life

What can we expect as we progress further as a result of the last two decades?

  1. There is a huge amount of information, as well as missing information that is necessary for adequately tackling the mastery of the life processes.
  2. There is a complex web of knowledge that goes beyond the genome and the one-gene one-enzyme, and the DNA-RNA-protein hypotheses that can only be realized by more full disclosure of the many metabolic control circuits involved in cellular homeostasis and adaptive control.
  3. The ability to come to disclosure and understanding of this cellular balancing will require the comprehensive exploration of the proteome and the active role of proteins and peptides in the functioning of all cells, and the organism.
  4. Proteomics will open up the discovery of new approaches to diagnostics and pharmaceutical discovery.

What about proteins?  What can proteins do? What can’t they do!

  • Enzymes are proteins that make sure that chemical reactions in your body take place up to a million times faster than they would without enzymes.
  • Antibodies are proteins that help your immune system to fight disease.
  • When you get an injury, the bleeding stops because of blood clots, thanks to the proteins fibrinogen and thrombin.
  • Transport! Some proteins carry vitamins or hormones from one place to another, or form tunnels (pores) in cell membranes that will let only specific molecules (or ions) through. Hemoglobin, a protein in your blood, carries oxygen from your lungs to your cells.
  • Strength and support! Other proteins like collagen and keratin are strong and tough and make up your skin, hair, and fingernails. Collagen also supports your cells and organs so they don’t slosh around.
  • Motion! The proteins myosin and actin make up much of your muscle tissue. They work together so your muscles can move you around. Some bacteria have cilia and flagella made out of proteins. The bacteria can whip these around to move from place to place.

http://www.pslc.ws/macrog/kidsmac/protein.htm

Proteins are large biological molecules, or macromolecules, consisting of one or more long chains of amino acid residues.

Proteins perform a vast array of functions within living organisms, including

  1. catalyzing metabolic reactions,
  2. replicating DNA,
  3. responding to stimuli, and
  4. transporting molecules from one location to another.

Proteins differ from one another primarily in

  1. their sequence of amino acids,
  2. which is dictated by the nucleotide sequence of their genes, and
  3. which usually results in folding of the protein into a specific three-dimensional structure that determines its activity.

A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than about 20-30 residues, are rarely considered to be proteins and are commonly called peptides, or sometimes oligopeptides. The individual amino acid residues are bonded together by peptide bonds between adjacent residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code.

In general, the genetic code specifies 20 standard amino acids; however, in certain organisms the genetic code can include selenocysteine and, in certain archaea, pyrrolysine. Shortly after or even during synthesis,

  • the residues in a protein are often chemically modified by posttranslational modification,
  • which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins.

http://en.wikipedia.org/wiki/Protein

Posttranslational modification (PTM) is a step in protein biosynthesis. Proteins created by ribosomes translating mRNA into polypeptide chains may undergo PTM (such as folding, cutting and other processes) before becoming the mature protein product.  After translation, posttranslational modification of amino acids extends the range of functions of the protein by attaching to it other biochemical functional groups (such as acetate, phosphate, various lipids and carbohydrates), changing the chemical nature of an amino acid (e.g. citrullination), or making structural changes (e.g. formation of disulfide bridges).

Also, enzymes may remove amino acids from the amino end of the protein, or cut the peptide chain in the middle. For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds. Also, most nascent polypeptides start with the amino acid methionine because the “start” codon in mRNA also codes for this amino acid. This amino acid is usually taken off during posttranslational modification. Other modifications, like phosphorylation, are part of common mechanisms for controlling the behavior of a protein, for instance activating or inactivating an enzyme.
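
As a toy sketch of the flow just described, the following Python (using only a handful of codons from the standard genetic code; the mRNA sequence is invented) translates an mRNA and then models one posttranslational step, removal of the initiator methionine:

```python
# Toy model of translation followed by one posttranslational step:
# excision of the initiator methionine. Only a few codons of the
# standard genetic code are included; the mRNA sequence is invented.
CODON_TABLE = {
    "AUG": "M", "UUU": "F", "GGC": "G", "AAA": "K",
    "UAA": "*", "UAG": "*", "UGA": "*",   # stop codons
}

def translate(mrna: str) -> str:
    """Translate an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":                     # stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

def remove_initiator_met(protein: str) -> str:
    """Model the common PTM that removes the N-terminal methionine."""
    return protein[1:] if protein.startswith("M") else protein

nascent = translate("AUGUUUGGCAAAUAA")    # -> "MFGK"
mature = remove_initiator_met(nascent)    # -> "FGK"
```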

posttranslational modification of insulin

Posttranslational modification of insulin. At the top, the ribosome translates a mRNA sequence into a protein, insulin, and passes the protein through the endoplasmic reticulum, where it is cut, folded and held in shape by disulfide (-S-S-) bonds. Then the protein passes through the golgi apparatus, where it is packaged into a vesicle. In the vesicle, more parts are cut off, and it turns into mature insulin.

Genetic Code mapped

The genetic code diagram showing the amino acid residues as targets of modification.

PTMs involving addition of cofactors for enhanced enzymatic activity

http://en.wikipedia.org/wiki/Posttranslational_modification

Sometimes proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors.  Examples of cofactors include metal ions like iron and zinc. Proteins can also work together to achieve a particular function, and they often associate to form stable protein complexes.

cofactor-examples

Coenzymes are molecules that work at the active site of an enzyme and aid in recognizing, attracting, or repulsing a substrate or product. Many are derived from vitamins. The substrate is the molecule upon which an enzyme catalyzes a reaction, transforming A to B by removal or addition of a hydrogen, a hydroxyl group, a methyl group, and so forth. This is how an alcohol or an aldehyde is produced. Such a reaction is critical in carbohydrate metabolism for producing two 3-carbon sugars from a 6-carbon sugar. Coenzymes shuttle chemical groups from one enzyme to another enzyme. They may bind loosely to enzymes, while another group of cofactors do not.

Prosthetic groups are cofactors that bind tightly to proteins or enzymes. As if holding on for dear life, they are not easily removed. They can be organic or metal ions and are often attached to proteins by a covalent bond. The same cofactors can bind multiple different types of enzymes and may bind some enzymes loosely, as a coenzyme, and others tightly, as a prosthetic group. Some cofactors may always tightly bind their enzymes. It’s important to note, though, that these prosthetic groups can also bind to proteins other than enzymes.  A holoenzyme is an enzyme with any metal ions or coenzymes attached to it that is now ready to catalyze a reaction.

prosthetic-groups

http://education-portal.com/academy/lesson/coenzymes-cofactors-prosthetic-groups-function-and-interactions.html#lesson

Around the world, millions of people don’t get enough protein. Protein malnutrition leads to the condition known as kwashiorkor. Lack of protein can cause growth failure, loss of muscle mass, decreased immunity, weakening of the heart and respiratory system, and death.

All Protein Isn’t Alike

Protein is built from building blocks called amino acids. Our bodies make amino acids in two different ways: Either from scratch, or by modifying others. A few amino acids (known as the essential amino acids) must come from food.

  • Animal sources of protein tend to deliver all the amino acids we need.
  • Other protein sources, such as fruits, vegetables, grains, nuts and seeds, lack one or more essential amino acids.

Vegetarians need to be aware of this. People who don’t eat meat, fish, poultry, eggs, or dairy products need to eat a variety of protein-containing foods each day in order to get all the amino acids needed to make new protein.
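
The idea of combining foods to cover all nine essential amino acids can be sketched as a set computation. The food-to-amino-acid assignments below are simplified placeholders (grains tend to be low in lysine, legumes in methionine), not real nutritional data:

```python
# Sketch of amino acid complementation. ESSENTIAL lists the nine
# essential amino acids; the per-food sets are illustrative
# placeholders, not real nutritional data.
ESSENTIAL = {"His", "Ile", "Leu", "Lys", "Met", "Phe", "Thr", "Trp", "Val"}

foods = {
    "rice":    ESSENTIAL - {"Lys"},   # grains tend to be low in lysine
    "lentils": ESSENTIAL - {"Met"},   # legumes tend to be low in methionine
}

# Union of everything eaten today; anything left over is a gap to fill.
supplied = set().union(*foods.values())
missing = ESSENTIAL - supplied        # -> set(): together they cover all nine
```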

http://www.hsph.harvard.edu/nutritionsource/what-should-you-eat/protein/

Molecular Biologist's Guide to Proteomics

P. R. Graves and T. A. J. Haystead
Microbiol Mol Biol Rev. Mar 2002; 66(1): 39–63  PMC120780
http://dx.doi.org/10.1128/MMBR.66.1.39-63.2002

The emergence of proteomics, the large-scale analysis of proteins, has been inspired by the realization that

  • the final product of a gene is inherently more complex and
  • closer to function than the gene itself.

Shortfalls in the ability of bioinformatics to predict

  • both the existence and function of genes have also illustrated
  • the need for protein analysis.

Moreover, only through the study of proteins can posttranslational modifications be determined,

  • which can profoundly affect protein function.

Proteomics has been enabled by

  • the accumulation of both DNA and protein sequence databases,
  • improvements in mass spectrometry, and
  • the development of computer algorithms for database searching.

In this review, we describe why proteomics is important,

  • how it is conducted, and
  • how it can be applied to complement other existing technologies.

We conclude that currently, the most practical application of proteomics is

  • the analysis of target proteins as opposed to entire proteomes.

This type of proteomics, referred to as functional proteomics, is always

  • driven by a specific biological question.

In this way, protein identification and characterization has a meaningful outcome. We discuss some of the advantages

  • of a functional proteomics approach and

provide examples of how different methodologies can be utilized to address a wide variety of biological problems.

Entry of our laboratory into proteomics 5 years ago was driven by a need to define a complex mixture of proteins (∼36 proteins) we had affinity isolated that bound specifically to the catalytic subunit of protein phosphatase 1 (PP-1, a serine/threonine protein phosphatase that regulates multiple dephosphorylation events in cells). We were faced with the task of trying to understand the significance of these proteins, and the only obvious way to begin to do this was to identify them by sequencing. Since the majority of intact eukaryotic proteins are not immediately accessible to Edman sequencing

  • due to posttranslational N-terminal modifications,
  • we invented mixed-peptide sequencing.

This method enables internal peptide sequence information to be derived from proteins

  • electroblotted onto hydrophobic membranes.

Using the mixed-peptide sequencing strategy, we identified all 36 proteins in about a week. The mixture contained at least two known PP-1 regulatory subunits, but most were novel proteins of unknown function. Herein lies the lesson of proteomics: identifying long lists of potentially interesting proteins often generates more questions than answers.

Despite learning this obvious lesson, our early sequencing experiences were an epiphany that has subsequently altered our whole scientific strategy for probing protein function in cells. The sequencing of the 36 proteins has opened new avenues to further explore the functions of PP-1 in intact cells. Because of increased sensitivity, our approaches now routinely use state-of-the-art mass spectrometry (MS) techniques. However, rather than using proteomics to simply characterize large numbers of proteins in complex mixtures, we see the real application of this technology as a tool to enhance the power of existing approaches currently used by the modern molecular biologist such as classical yeast and mouse genetics, tissue culture, protein expression systems, and site-directed mutagenesis.

Importantly, the one message we would want the reader to take away from reading this review is that one should always let the biological question in mind drive the application of proteomics rather than simply engaging in an orgy of protein sequencing. From our experiences, we believe that if the appropriate controls are performed, proteomics is an extremely powerful approach for addressing important physiological questions. One should always design experiments to define a selected number of relevant proteins in the mixture of interest. Examples of such experiments that we routinely perform include defining early phosphorylation events in complex protein mixtures after hormone treatment of intact cells or comparing patterns of protein derived from a stimulated versus nonstimulated cell in an affinity pull-down experiment. Only the proteins that were specifically phosphorylated or bound in response to the stimulus are sequenced in the complex mixtures. Sequencing proteins that are regulated then has a meaningful outcome and directs all subsequent biological investigation.

The term “proteomics” was first coined in 1995 and was defined as the large-scale characterization of the entire protein complement of a cell line, tissue, or organism. Today, two definitions of proteomics are encountered. The first is the more classical definition, restricting the large-scale analysis of gene products to studies involving only proteins. The second and more inclusive definition combines protein studies with analyses that have a genetic readout such as mRNA analysis, genomics, and the yeast two-hybrid analysis. However, the goal of proteomics remains the same, i.e., to obtain a more global and integrated view of biology by studying all the proteins of a cell rather than each one individually.

Using the more inclusive definition of proteomics, many different areas of study are now grouped under the rubric of proteomics (Fig. 1). These include protein-protein interaction studies, protein modifications, protein function, and protein localization studies, to name a few. The aim of proteomics is not only to identify all the proteins in a cell but also to create a complete three-dimensional (3-D) map of the cell indicating where proteins are located. These ambitious goals will certainly require the involvement of a large number of different disciplines such as molecular biology, biochemistry, and bioinformatics. It is likely that in bioinformatics alone, more powerful computers will have to be devised to organize the immense amount of information generated from these endeavors.

Types of proteomics and their applications to biology

In the quest to characterize the proteome of a given cell or organism, it should be remembered that the proteome is dynamic. The proteome of a cell will reflect the immediate environment in which it is studied. In response to internal or external cues, proteins can be modified by posttranslational modifications, undergo translocations within the cell, or be synthesized or degraded. Thus, examination of the proteome of a cell is like taking a “snapshot” of the protein environment at any given time. Considering all the possibilities, it is likely that any given genome can potentially give rise to an infinite number of proteomes.

The first major technology to emerge for the identification of proteins was the sequencing of proteins by Edman degradation. A major breakthrough was the development of microsequencing techniques for electroblotted proteins. This technique was used for the identification of proteins from 2-D gels to create the first 2-D databases.  One of the most important developments in protein identification has been the development of MS technology. In the last decade, the sensitivity of analysis and accuracy of results for protein identification by MS have increased by several orders of magnitude. It is now estimated that proteins in the femtomolar range can be identified in gels. Because MS is more sensitive, can tolerate protein mixtures, and is amenable to high-throughput operations, it has essentially replaced Edman sequencing as the protein identification tool of choice.

The growth of proteomics is a direct result of advances made in large-scale nucleotide sequencing of expressed sequence tags and genomic DNA. Without this information, proteins could not be identified even with the improvements made in MS. Protein identification (by MS or Edman sequencing) relies on the presence of some form of database for the given organism. The majority of DNA and protein sequence information has accumulated within the last 5 to 10 years. In 1995, the first complete genome of an organism was sequenced, that of Haemophilus influenzae. At the time of this writing, the sequencing of the genomes of 45 microorganisms has been completed and that of 170 more is under way (http://www.tigr.org/tdb/mdb/mdbcomplete.html). To date, five eukaryotic genomes have been completed: Arabidopsis thaliana, Saccharomyces cerevisiae, Schizosaccharomyces pombe, Caenorhabditis elegans, and Drosophila melanogaster. In addition, the rice, mouse, and human genomes are near completion.

One of the first applications of proteomics will be to identify the total number of genes in a given genome. This “functional annotation” of a genome is necessary because

  • it is still difficult to predict genes accurately from genomic data. One problem is that
  • the exon-intron structure of most genes cannot be accurately predicted by bioinformatics.

To achieve this goal, genomic information will have to be integrated with

  • data obtained from protein studies to confirm the existence of a particular gene.

The analysis of mRNA is

  • not a direct reflection of the protein content in the cell.

Many studies have shown a poor correlation

  • between mRNA and protein expression levels.
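
As a toy illustration (all numbers invented) of why such correlations come out poor, one can rank genes by mRNA level and by protein level and count how many gene pairs are ordered the same way by both measures:

```python
# Invented expression values for four genes, measured two ways.
mrna    = {"geneA": 10.0, "geneB": 5.0, "geneC": 8.0, "geneD": 1.0}
protein = {"geneA": 2.0,  "geneB": 6.0, "geneC": 1.5, "geneD": 4.0}

def ranks(levels):
    """Map each gene to its rank (0 = highest level)."""
    ordered = sorted(levels, key=levels.get, reverse=True)
    return {gene: i for i, gene in enumerate(ordered)}

rm, rp = ranks(mrna), ranks(protein)
genes = list(mrna)
pairs = [(a, b) for i, a in enumerate(genes) for b in genes[i + 1:]]
# A pair is concordant when both measures order the two genes the same way.
concordant = sum((rm[a] - rm[b]) * (rp[a] - rp[b]) > 0 for a, b in pairs)
# Here only 2 of 6 pairs agree: mRNA rank is a poor predictor of protein rank.
```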

The formation of mRNA is only the first step in a long sequence of events resulting in the synthesis of a protein (Fig. 2).

  1. mRNA is subject to posttranscriptional control in the form of alternative splicing, polyadenylation, and mRNA editing. Many different protein isoforms can be generated from a single gene at this step.
  2. mRNA then can be subject to regulation at the level of protein translation. Proteins, having been formed, are subject to posttranslational modification. It is estimated that up to 200 different types of posttranslational protein modification exist. Proteins can also be regulated by proteolysis and compartmentalization. It is clear that the tenet of “one gene, one protein” is an oversimplification.
Mechanisms by which a single gene can give rise to multiple gene products. Multiple protein isoforms can be generated by RNA processing when RNA is alternatively spliced or edited to form mature mRNA. mRNA, in turn, can be regulated by its stability and translational efficiency.

One of the most important applications of proteomics will be the characterization of posttranslational protein modifications. Proteins are known to be modified posttranslationally in response to a variety of intracellular and extracellular signals. For example, protein phosphorylation is an important signaling mechanism, and dysregulation of protein kinases or phosphatases can result in oncogenesis. By using a proteomics approach, changes in the modifications of many proteins expressed by a cell can be analyzed simultaneously.

Of fundamental importance in biology is the understanding of protein-protein interactions. The processes of cell growth, programmed cell death, and the decision to proceed through the cell cycle are all regulated by signal transduction through protein complexes. Proteomics aims to develop a complete 3-D map of all protein interactions in the cell. One step toward this goal was recently completed for the microorganism Helicobacter pylori. Using the yeast two-hybrid method to detect protein interactions, 1,200 connections were identified between H. pylori proteins, covering 46.6% of the genome. A comprehensive two-hybrid analysis has also been performed on all the proteins from the yeast S. cerevisiae.

mixed peptide sequencing with MS


The process of mixed-peptide sequencing involves separation of a complex protein mixture by polyacrylamide gel electrophoresis (1-D or 2-D) and then transfer of the proteins to an inert membrane by electroblotting (Fig. 4). The proteins of interest are visualized on the membrane surface, excised, and fragmented chemically at methionine (by CNBr) or tryptophan (by skatole) into several large peptide fragments.
FASTF and FASTS search programs

The mixed-sequence data are fed into the FASTF or TFASTF algorithms, which sort and match the data against protein (FASTF) and DNA (TFASTF) databases to unambiguously identify the protein. The FASTF and TFASTF programs were written in collaboration with William Pearson (Department of Biochemistry, University of Virginia). Because minimal sample handling is involved, mixed-peptide sequencing can be a sensitive approach for identifying proteins in polyacrylamide gels at the 0.1- to 1-pmol level.  A recent variation of T/FASTF has been devised for MS (101) (Fig. 5B). The T/FASTF/S programs are available at http://fasta.bioch.virginia.edu/ (Table 1).
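
The underlying matching idea, though not the actual FASTF implementation, can be sketched in a few lines: each Edman cycle of a peptide mixture yields a set of candidate residues, and database sequences are scored by how many cycles they are consistent with. All sequences below are invented:

```python
# Each Edman cycle of the pooled peptides yields a *set* of possible
# residues; database entries are scored by consistency with those sets.
# Sequencer output and database sequences are invented.
cycles = [{"G", "A"}, {"L", "S"}, {"K"}]

database = {
    "prot1": "GLKVT",
    "prot2": "ASKQR",
    "prot3": "MWCDE",
}

def score(seq, cycles):
    """Count cycles whose residue set contains the residue at that position."""
    return sum(1 for aa, cyc in zip(seq, cycles) if aa in cyc)

# Best-scoring database entries are candidate identifications.
hits = sorted(database, key=lambda name: score(database[name], cycles),
              reverse=True)
# prot1 ("GLK...") and prot2 ("ASK...") fit all three cycles; prot3 fits none.
```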

triple quadrupole MS

Triple-quadrupole mass spectrometers are most commonly used to obtain amino acid sequences. In the first stage of analysis, the machine is operated in MS scan mode and all ions above a certain m/z ratio are transmitted to the third quadrupole for mass analysis (Fig. 6) (82, 173). In the second stage, the mass spectrometer is operated in MS/MS mode and a particular peptide ion is selectively passed into the collision chamber. Inside the collision chamber, peptide ions are fragmented by interactions with an inert gas through a process known as collision-induced dissociation or collisionally activated dissociation. The peptide ion fragments are then resolved on the basis of their m/z ratio by the third quadrupole (Fig. 6). Since two different mass spectra are obtained in this analysis, it is referred to as tandem mass spectrometry (MS/MS). MS/MS is used to obtain the amino acid sequence of peptides by generating a series of fragment ions that differ in mass by a single amino acid.
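
The principle behind this sequence readout can be illustrated with a short script: successive singly charged b-ions of a peptide differ in m/z by exactly one residue mass (monoisotopic residue masses from standard tables; the peptide "GASK" is hypothetical):

```python
# Monoisotopic residue masses (Da) for a few amino acids, and the proton
# mass used for singly charged ions. The peptide "GASK" is hypothetical.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "K": 128.09496}
PROTON = 1.00728

def b_ions(peptide):
    """m/z values of the singly charged b-ion series (N-terminal fragments)."""
    total, ions = 0.0, []
    for aa in peptide:
        total += RESIDUE_MASS[aa]
        ions.append(total + PROTON)
    return ions

ions = b_ions("GASK")
# Adjacent b-ions differ by exactly the mass of the residue added,
# which is how the sequence is read off an MS/MS spectrum:
deltas = [round(b - a, 5) for a, b in zip(ions, ions[1:])]
# deltas -> masses of A, S, K in turn
```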

The largest application of proteomics continues to be protein expression profiling. Through the use of two-dimensional gels or novel techniques such as ICAT, the expression levels of proteins or changes in their level of modification between two different samples can be compared and the proteins can be identified. This approach can facilitate the dissection of signaling mechanisms or identify disease-specific proteins.

Cancer cells are good candidates for proteomics studies because they can be compared to their non-transformed counterparts. Analysis of differentially expressed proteins in normal versus cancer cells can

(i) identify novel tumor cell biomarkers that can be used for diagnosis,

(ii) provide clues to mechanisms of cancer development, and

(iii) identify novel targets for therapeutic intervention. Protein expression profiling has been used in the study of breast, esophageal, bladder and prostate cancer. From these studies, tumor-specific proteins were identified and 2-D protein expression databases were generated. Many of these 2-D protein databases are now available on the World Wide Web.


Introduction to Metabolomics

Author: Larry H. Bernstein, MD, FCAP

 

This is the first volume of the Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases.  It is written for comprehension at the third-year medical student level, or as a reference for licensing board exams, but it is also written for the education of a first-time baccalaureate reader in the biological sciences.  Hopefully, it can be read with great interest by the undergraduate student who is undecided in the choice of a career.

In the Preface, I failed to disclose that the term metabolomics applies to plants, animals, and bacteria, to both prokaryotes and eukaryotes.  The metabolome of each organism is unique, but from an evolutionary perspective shares metabolic pathways in common, expressed in concert with the environment in which these living creatures exist. The metabolome of each accommodates adaptively, with suppression and activation of the pathways that are functional and necessary, in balance, for its existence.  Was it William Faulkner who said in his Nobel Prize acceptance that mankind shall not merely exist, but survive? That seems to be the overarching theme for all of life. If life cannot persist, a surviving “remnant” might continue. The history of life may well be etched into the genetic code, some of which is not expressed.

This work is apportioned into chapters in a sequence that is first directed at the major sources of the energy and structure of life, the carbohydrates, lipids, and fats, which are sourced from both plants and animals and which, depending on their balance, result in an equilibrium, or in a disequilibrium that we refer to as disease.  There is also a need to consider the nonorganic essentials, which are derived from the soil, from water, and from the energy of the sun and the air we breathe, or, in the case of water-bound metabolomes, dissolved gases.

In addition to the basic essential nutrients and their metabolic utilization, these pathways are under cellular metabolic regulation that is tied to signaling pathways.  In addition, the genetic expression of the organism is under regulatory control by the interaction of RNAs with the chromatin genetic framework, with exosomes, and with protein modulators. This is referred to as epigenetics, but there are also drivers of metabolism that are shaped by the interactions between enzymes and substrates and that are related to the tertiary structure of a protein.  The framework for diseases is presented in a separate chapter.  Pharmaceutical interventions that are designed to modulate specific metabolic targets are addressed as the pathways are unfolded. Nutraceuticals and plant-based nutrition are covered in Chapter 8.

Chapter 1: Metabolic Pathways

Chapter 2: Lipid Metabolism

Chapter 3: Cell Signaling

Chapter 4: Protein Synthesis and Degradation

Chapter 5: Sub-cellular Structure

Chapter 6: Proteomics

Chapter 7: Metabolomics

Chapter 8: Impairments in Pathological States: Endocrine Disorders; Stress Hypermetabolism and Cancer


Metabolomics Summary and Perspective

Author and Curator: Larry H Bernstein, MD, FCAP 

 

This is the final article in a robust series on metabolism, metabolomics, and the “-OMICS-” biological synthesis that is creating a more holistic and interoperable view of the natural sciences, including the biological disciplines, climate science, physics, chemistry, toxicology, pharmacology, and pathophysiology, with as yet unforeseen consequences.

There have been impressive advances already in research into developmental biology, plant sciences, microbiology, mycology, and human diseases, most notably cancer, metabolic, infectious, and neurodegenerative diseases.

Acknowledgements:

I write this article in honor of my first mentor, Harry Maisel, Professor and Emeritus Chairman of Anatomy, Wayne State University, Detroit, MI and to my stimulating mentors, students, fellows, and associates over many years:

Masahiro Chiga, MD, PhD, Averill A Liebow, MD, Nathan O Kaplan, PhD, Johannes Everse, PhD, Norio Shioura, PhD, Abraham Braude, MD, Percy J Russell, PhD, Debby Peters, Walter D Foster, PhD, Herschel Sidransky, MD, Sherman Bloom, MD, Matthew Grisham, PhD, Christos Tsokos, PhD, IJ Good, PhD, Distinguished Professor, Raool Banagale, MD, Gustavo Reynoso, MD, Gustave Davis, MD, Marguerite M Pinto, MD, Walter Pleban, MD, Marion Feietelson-Winkler, RD, PhD, John Adan, MD, Joseph Babb, MD, Stuart Zarich, MD, Inder Mayall, MD, A Qamar, MD, Yves Ingenbleek, MD, PhD, Emeritus Professor, Bette Seamonds, PhD, Larry Kaplan, PhD, Pauline Y Lau, PhD, Gil David, PhD, Ronald Coifman, PhD, Emeritus Professor, Linda Brugler, RD, MBA, James Rucinski, MD, Gitta Pancer, Ester Engelman, Farhana Hoque, Mohammed Alam, Michael Zions, William Fleischman, MD, Salman Haq, MD, Jerard Kneifati-Hayek, Madeleine Schleffer, John F Heitner, MD, Arun Devakonda, MD, Liziamma George, MD, Suhail Raoof, MD, Charles Oribabor, MD, Anthony Tortolani, MD, Prof and Chairman, JRDS Rosalino, PhD, Aviva Lev Ari, PhD, RN, Rosser Rudolph, MD, PhD, Eugene Rypka, PhD, Jay Magidson, PhD, Izaak Mayzlin, PhD, Maurice Bernstein, PhD, Richard Bing, Eli Kaplan, PhD.

This article has EIGHT parts, as follows:

Part 1

Metabolomics Continues Auspicious Climb

Part 2

Biologists Find ‘Missing Link’ in the Production of Protein Factories in Cells

Part 3

Neuroscience

Part 4

Cancer Research

Part 5

Metabolic Syndrome

Part 6

Biomarkers

Part 7

Epigenetics and Drug Metabolism

Part 8

Pictorial

genome cartoon

iron metabolism

personalized reference range within population range

Part 1.  Metabolomics Surge

metagraph _OMICS

Metabolomics Continues Auspicious Climb

Jeffery Herman, Ph.D.
GEN May 1, 2012 (Vol. 32, No. 9)

Aberrant biochemical and metabolite signaling plays an important role in

  • the development and progression of diseased tissue.

This concept has been studied by the science community for decades. However, with relatively

  1. recent advances in analytical technology and bioinformatics as well as
  2. the development of the Human Metabolome Database (HMDB),

metabolomics has become an invaluable field of research.

At the “International Conference and Exhibition on Metabolomics & Systems Biology” held recently in San Francisco, researchers and industry leaders discussed how

  • the underlying cellular biochemical/metabolite fingerprint in response to
  1. a specific disease state,
  2. toxin exposure, or
  3. pharmaceutical compound
  • is useful in clinical diagnosis and biomarker discovery and
  • in understanding disease development and progression.

Developed by BASF, MetaMap® Tox is

  • a database that helps identify in vivo systemic effects of a tested compound, including
  1. targeted organs,
  2. mechanism of action, and
  3. adverse events.

Based on 28-day systemic rat toxicity studies, MetaMap Tox is composed of

  • differential plasma metabolite profiles of rats
  • after exposure to a large variety of chemical toxins and pharmaceutical compounds.

“Using the reference data,

  • we have developed more than 110 patterns of metabolite changes, which are
  • specific and predictive for certain toxicological modes of action,”

said Hennicke Kamp, Ph.D., group leader, department of experimental toxicology and ecology at BASF.

With MetaMap Tox, a potential drug candidate

  • can be compared to a similar reference compound
  • using statistical correlation algorithms,
  • which allow for the creation of a toxicity and mechanism of action profile.
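
The kind of pattern matching described above, comparing a candidate's metabolite changes against reference profiles by statistical correlation, can be sketched in a few lines. The patterns, metabolite values, and names below are invented for illustration; MetaMap Tox's actual reference data and algorithms are proprietary.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length metabolite-change vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log-ratio changes (treated vs. control) for five plasma metabolites.
reference_patterns = {
    "hepatotoxicant_pattern": [1.2, -0.8, 0.4, 0.0, -1.5],
    "nephrotoxicant_pattern": [-0.3, 1.1, -0.9, 0.7, 0.2],
}
candidate = [1.0, -0.6, 0.5, 0.1, -1.2]

# Rank the reference patterns by similarity to the candidate's profile.
ranked = sorted(reference_patterns.items(),
                key=lambda kv: pearson(candidate, kv[1]), reverse=True)
best_match = ranked[0][0]
best_r = pearson(candidate, reference_patterns[best_match])
```

The highest-correlating reference pattern would then suggest a likely mode of action for the candidate compound.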

“MetaMap Tox, in the context of early pre-clinical safety enablement in pharmaceutical development,” continued Dr. Kamp, “has been independently validated

  • by an industry consortium (Drug Safety Executive Council) of 12 leading biopharmaceutical companies.”

Dr. Kamp added that this technology may prove invaluable during early and preclinical toxicological studies,

  • allowing for quick and accurate decisions on the safety and efficacy of compounds and
  • for high-throughput drug candidate screening;
  • by comparing a lead compound to a variety of molecular derivatives,
  • the identification of the molecular structure with the best efficacy and safety profile might be streamlined.
Dynamic Construct of the –Omics

Targeted Tandem Mass Spectrometry

Biocrates Life Sciences focuses on targeted metabolomics, an important approach for

  • the accurate quantification of known metabolites within a biological sample.

Originally used for the clinical screening of inherent metabolic disorders from dried blood-spots of newborn children, Biocrates has developed

  • a tandem mass spectrometry (MS/MS) platform, which allows for
  1. the identification,
  2. quantification, and
  3. mapping of more than 800 metabolites to specific cellular pathways.

It is based on flow injection analysis and high-performance liquid chromatography MS/MS.

Clarification of pathway-specific inhibition by Fourier transform ion cyclotron resonance mass spectrometry-based metabolic phenotyping studies

common drug targets

The MetaDisIDQ® Kit is a

  • “multiparametric” diagnostic assay designed for the “comprehensive assessment of a person’s metabolic state” and
  • the early determination of pathophysiological events with regard to a specific disease.

MetaDisIDQ is designed to quantify

  • a diverse range of 181 metabolites involved in major metabolic pathways
  • from a small amount of human serum (10 µL) using isotopically labeled internal standards.
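
The internal-standard principle behind such assays can be illustrated with a minimal isotope-dilution calculation. All peak areas, concentrations, and the unit response factor below are invented for illustration, not values from the MetaDisIDQ kit.

```python
# Minimal sketch of isotope-dilution quantification: a known amount of an
# isotopically labeled internal standard is spiked into the sample, and the
# analyte concentration follows from the ratio of the two MS/MS peak areas.

def quantify(analyte_area, istd_area, istd_conc_um, response_factor=1.0):
    """Concentration = (analyte area / internal-standard area) * IS conc / RF."""
    return (analyte_area / istd_area) * istd_conc_um / response_factor

# Hypothetical peak areas for a metabolite and its 13C-labeled standard.
conc = quantify(analyte_area=8.4e5, istd_area=2.1e5, istd_conc_um=25.0)
print(f"{conc:.1f} µM")  # → 100.0 µM
```

Because the labeled standard experiences the same matrix effects and instrument drift as the analyte, the ratio largely cancels them out, which is what makes quantification from only 10 µL of serum feasible.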

This kit has been demonstrated to detect changes in metabolites that are commonly associated with the development of

  • metabolic syndrome, type 2 diabetes, and diabetic nephropathy.

Dr. Dallman reports that data generated with the MetaDisIDQ kit correlate strongly with

  • routine chemical analyses of common metabolites, including glucose and creatinine.

Biocrates has also developed the MS/MS-based AbsoluteIDQ® kits, which are

  • an “easy-to-use” biomarker analysis tool for laboratory research.

The kit functions on MS machines from a variety of vendors, and allows for the quantification of 150-180 metabolites.

The SteroIDQ® kit is a high-throughput standardized MS/MS diagnostic assay,

  • validated in human serum, for the rapid and accurate clinical determination of 16 known steroids.

Initially focusing on the analysis of steroid ranges for use in hormone replacement therapy, the SteroIDQ Kit is expected to have a wide clinical application.

Hormone-Resistant Breast Cancer

Scientists at Georgetown University have shown that

  • breast cancer cells can functionally coordinate cell-survival and cell-proliferation mechanisms,
  • while maintaining a certain degree of cellular metabolism.

To grow, cells need energy, and energy is a product of cellular metabolism. For nearly a century, it was thought that

  1. the uncoupling of glycolysis from the mitochondria,
  2. leading to the inefficient but rapid metabolism of glucose and
  3. the formation of lactic acid (the Warburg effect), was

the major, and only, metabolic driving force for unchecked proliferation and tumorigenesis of cancer cells.

Other aspects of metabolism were often overlooked.

“.. we understand now that

  • cellular metabolism is a lot more than just metabolizing glucose,”

said Robert Clarke, Ph.D., professor of oncology and physiology and biophysics at Georgetown University. Dr. Clarke, in collaboration with the Waters Center for Innovation at Georgetown University (led by Albert J. Fornace, Jr., M.D.), obtained

  • the metabolomic profile of hormone-sensitive and -resistant breast cancer cells through the use of UPLC-MS.

They demonstrated that breast cancer cells, through a rather complex and not yet completely understood process,

  1. can functionally coordinate cell-survival and cell-proliferation mechanisms,
  2. while maintaining a certain degree of cellular metabolism.

This is at least partly accomplished through the upregulation of important pro-survival mechanisms, including

  • the unfolded protein response,
  • a regulator of endoplasmic reticulum stress and
  • initiator of autophagy.

Normally, during a stressful situation, a cell may

  • enter a state of quiescence and undergo autophagy,
  • a process by which a cell can recycle organelles
  • in order to maintain enough energy to survive during a stressful situation or,

if the stress is too great,

  • undergo apoptosis.

By integrating cell-survival mechanisms and cellular metabolism

  • advanced ER+ hormone-resistant breast cancer cells
  • can maintain a low level of autophagy
  • to adapt and resist hormone/chemotherapy treatment.

This adaptation allows cells

  • to reallocate important metabolites recovered from organelle degradation and
  • provide enough energy to also promote proliferation.

With further research, we can gain a better understanding of the underlying causes of hormone-resistant breast cancer, with

  • the overall goal of developing effective diagnostic, prognostic, and therapeutic tools.

NMR

Over the last two decades, NMR has established itself as a major tool for metabolomics analysis. It is especially adept at testing biological fluids. [Bruker BioSpin]

Historically, nuclear magnetic resonance spectroscopy (NMR) has been used for structural elucidation of pure molecular compounds. However, in the last two decades, NMR has established itself as a major tool for metabolomics analysis. Since

  • the integral of an NMR signal is directly proportional to
  • the molar concentration throughout the dynamic range of a sample,

“the simultaneous quantification of compounds is possible

  • without the need for specific reference standards or calibration curves,” according to Lea Heintz of Bruker BioSpin.
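
A minimal sketch of this calibration-free quantification: because each signal integral is proportional to molar concentration times the number of protons giving rise to the peak, relative concentrations fall out of per-proton-normalized integrals alone. The integrals below are invented; the proton counts are standard values for the methyl resonances named.

```python
# NMR quantification sketch: integral ∝ (molar concentration) × (proton count),
# so normalizing each integral by its proton count gives relative molarity
# without compound-specific reference standards or calibration curves.

def molar_ratio(integral_a, protons_a, integral_b, protons_b):
    """Ratio of molar concentrations from per-proton-normalized integrals."""
    return (integral_a / protons_a) / (integral_b / protons_b)

# Lactate methyl doublet (3 protons) vs. alanine methyl doublet (3 protons):
ratio = molar_ratio(integral_a=6.0, protons_a=3, integral_b=2.0, protons_b=3)
print(round(ratio, 6))  # → 3.0 (lactate at three times the molar concentration)
```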

NMR is adept at testing biological fluids because of

  1.  high reproducibility,
  2. standardized protocols,
  3. low sample manipulation, and
  4. the production of a large subset of data.

Bruker BioSpin is presently involved in a project for the screening of inborn errors of metabolism in newborn children from Turkey, based on their urine NMR profiles. More than 20 clinics are participating in the project, which is coordinated by INFAI, a specialist in the transfer of advanced analytical technology into medical diagnostics. Statistical models are being developed

  • for the detection of deviations from normality, as well as
  • automatic quantification methods for indicative metabolites

Bruker BioSpin recently installed high-resolution magic angle spinning NMR (HRMAS-NMR) systems that can rapidly analyze tissue biopsies. The main objective for HRMAS-NMR is to establish a rapid and effective clinical method to assess tumor grade and other important aspects of cancer during surgery.

Combined NMR and Mass Spec

There is increasing interest in combining NMR and MS, two of the main analytical assays in metabolomic research, as a means

  • to improve data sensitivity,
  • to fully elucidate the complex metabolome within a given biological sample, and
  • to realize the potential for cancer biomarker discovery in the realms of diagnosis, prognosis, and treatment.

Using combined NMR and MS to measure the levels of nearly 250 separate metabolites in the patient’s blood, Dr. Weljie and other researchers at the University of Calgary were able to rapidly determine the malignancy of a pancreatic lesion (in 10–15% of the cases, it is difficult to discern between benign and malignant), while avoiding unnecessary surgery in patients with benign lesions.

When performing NMR and MS on a single biological fluid, ultimately “we are,” noted Dr. Weljie,

  1. “splitting up information content, processing, and introducing a lot of background noise and error and
  2. then trying to reintegrate the data…
    It’s like taking a complex item, with multiple pieces, out of an IKEA box and trying to repackage it perfectly into another box.”

By improving the workflow between the initial splitting of the sample and endpoint data integration, they proved that

  • a streamlined approach to combined NMR/MS can be achieved,
  • leading to a very strong, robust and precise metabolomics toolset.

Metabolomics Research Picks Up Speed

Field Advances in Quest to Improve Disease Diagnosis and Predict Drug Response

John Morrow Jr., Ph.D.
GEN May 1, 2011 (Vol. 31, No. 9)

As an important discipline within systems biology, metabolomics is being explored by a number of laboratories for

  • its potential in pharmaceutical development.

Studying metabolites can offer insights into the relationships between genotype and phenotype, as well as between genotype and environment. In addition, there is plenty to work with—there are estimated to be some 2,900 detectable metabolites in the human body, of which

  1. 309 have been identified in cerebrospinal fluid,
  2. 1,122 in serum,
  3. 458 in urine, and
  4. roughly 300 in other compartments.

Guowang Xu, Ph.D., a researcher at the Dalian Institute of Chemical Physics, is investigating the causes of death in China,

  • and how they have been changing over the years as the country has become a more industrialized nation.
  • The incidence of metabolic disorders such as diabetes has grown to affect 9.7% of the Chinese population.

Dr. Xu,  collaborating with Rainer Lehman, Ph.D., of the University of Tübingen, Germany, compared urinary metabolites in samples from healthy individuals with samples taken from prediabetic, insulin-resistant subjects. Using mass spectrometry coupled with electrospray ionization in the positive mode, they observed striking dissimilarities in levels of various metabolites in the two groups.

“When we performed a comprehensive two-dimensional gas chromatography, time-of-flight mass spectrometry analysis of our samples, we observed several metabolites, including

  • 2-hydroxybutyric acid in plasma,
  •  as potential diabetes biomarkers,” Dr. Xu explains.

In other, unrelated studies, Dr. Xu and the German researchers used a metabolomics approach to investigate the changes in plasma metabolite profiles immediately after exercise and following a 3-hour and 24-hour period of recovery. They found that

  • medium-chain acylcarnitines were the most distinctive exercise biomarkers, and
  • they are released as intermediates of partial beta oxidation in human myotubes and mouse muscle tissue.

Dr. Xu says. “The traditional approach of assessment based on a singular biomarker is being superseded by the introduction of multiple marker profiles.”

Typical of the studies under way by Dr. Kaddurah-Daouk and her colleagues at Duke University

  • is a recently published investigation highlighting the role of an SNP variant in
  • the glycine dehydrogenase gene on individual response to antidepressants.
  • Patients who do not respond to the selective serotonin reuptake inhibitors citalopram and escitalopram
  • carried a particular single nucleotide polymorphism in the GD gene.

“These results allow us to pinpoint a possible

  • role for glycine in selective serotonin reuptake inhibitor response and
  • illustrate the use of pharmacometabolomics to inform pharmacogenomics.”

These discoveries give us the tools for prognostics and diagnostics so that

  • we can predict what conditions will respond to treatment.

“This approach to defining health or disease in terms of metabolic states opens a whole new paradigm.

By screening hundreds of thousands of molecules, we can understand

  • the relationship between human genetic variability and the metabolome.”

Dr. Kaddurah-Daouk talks about statins as a current

  • model of metabolomics investigations.

It is now known that the statins  have widespread effects, altering a range of metabolites. To sort out these changes and develop recommendations for which individuals should be receiving statins will require substantial investments of energy and resources into defining the complex web of biochemical changes that these drugs initiate.
Furthermore, Dr. Kaddurah-Daouk asserts that,

  • “genetics only encodes part of the phenotypic response.

One needs to take into account the

  • net environment contribution in order to determine
  • how both factors guide the changes in our metabolic state that determine the phenotype.”

Interactive Metabolomics

Researchers at the University of Nottingham use diffusion-edited nuclear magnetic resonance spectroscopy to assess the effects of a biological matrix on metabolites. Diffusion-edited NMR experiments provide a way to

  • separate the different compounds in a mixture
  • based on the differing translational diffusion coefficients (which reflect the size and shape of the molecule).

The measurements are carried out by observing

  • the attenuation of the NMR signals during a pulsed field gradient experiment.
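
The attenuation measured in such pulsed-field-gradient experiments is commonly described by the Stejskal-Tanner relation, S = S0·exp(−bD). The sketch below assumes that form; the gradient strength and timing values are illustrative only, and the diffusion coefficients are typical orders of magnitude for a small metabolite versus a protein.

```python
import math

GAMMA_H = 2.675e8  # 1H gyromagnetic ratio, rad s^-1 T^-1

def b_value(g, delta, Delta):
    """Stejskal-Tanner b factor: (gamma*g*delta)^2 * (Delta - delta/3), SI units."""
    return (GAMMA_H * g * delta) ** 2 * (Delta - delta / 3.0)

def attenuation(D, g, delta, Delta):
    """Fraction S/S0 remaining for a species with diffusion coefficient D."""
    return math.exp(-b_value(g, delta, Delta) * D)

# A small free metabolite (D ~ 1e-9 m^2/s) is attenuated far more strongly than
# a slowly diffusing macromolecule (D ~ 1e-10 m^2/s) at the same gradient
# settings; this difference is what lets diffusion editing separate free small
# molecules from protein-bound or aggregated species.
small = attenuation(1e-9, g=0.3, delta=2e-3, Delta=50e-3)
large = attenuation(1e-10, g=0.3, delta=2e-3, Delta=50e-3)
assert small < large
```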

Clare Daykin, Ph.D., is a lecturer at the University of Nottingham, U.K. Her field of investigation encompasses “interactive metabolomics,” which she defines as

“the study of the interactions between low molecular weight biochemicals and macromolecules in biological samples ..

  • without preselection of the components of interest.

“Blood plasma is a heterogeneous mixture of molecules that

  1. undergo a variety of interactions including metal complexation,
  2. chemical exchange processes,
  3. micellar compartmentation,
  4. enzyme-mediated biotransformations, and
  5. small molecule–macromolecular binding.”

Many low molecular weight compounds can exist

  • freely in solution,
  • bound to proteins, or
  • within organized aggregates such as lipoprotein complexes.

Therefore, quantitative comparison of plasma composition from

  • diseased individuals compared to matched controls provides an incomplete insight to plasma metabolism.

“It is not simply the concentrations of metabolites that must be investigated,

  • but their interactions with the proteins and lipoproteins within this complex web.

Rather than targeting specific metabolites of interest, Dr. Daykin’s metabolite–protein binding studies aim to study

  • the interactions of all detectable metabolites within the macromolecular sample.

Such activities can be studied through the use of diffusion-edited nuclear magnetic resonance (NMR) spectroscopy, in which one can assess

  • the effects of the biological matrix on the metabolites.

“This can lead to a more relevant and exact interpretation

  • for systems where metabolite–macromolecule interactions occur.”

Pushing the Limits

It is widely recognized that many drug candidates fail during development due to ancillary toxicity. Uwe Sauer, Ph.D., professor, and Nicola Zamboni, Ph.D., researcher, both at the Eidgenössische Technische Hochschule, Zürich (ETH Zürich), are applying

  • high-throughput intracellular metabolomics to understand
  • the basis of these unfortunate events and
  • head them off early in the course of drug discovery.

“Since metabolism is at the core of drug toxicity, we developed a platform for

  • measurement of 50–100 targeted metabolites by
  • a high-throughput system consisting of flow injection
  • coupled to tandem mass spectrometry.”

Using this approach, Dr. Sauer’s team focused on

  • the central metabolism of the yeast Saccharomyces cerevisiae, reasoning that
  • this core network would be most susceptible to potential drug toxicity.

Screening approximately 41 drugs that were administered at seven concentrations over three orders of magnitude, they observed changes in metabolome patterns at much lower drug concentrations without attendant physiological toxicity.

The group carried out statistical modeling of about

  • 60 metabolite profiles for each drug they evaluated.

This data allowed the construction of a “profile effect map” in which

  • the influence of each drug on metabolite levels can be followed, including off-target effects, which
  • provide an indirect measure of the possible side effects of the various drugs.
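
A profile effect map of this kind can be sketched as a drug-by-metabolite table of changes, with off-target effects flagged as strong perturbations outside a drug's intended pathway. The drug names, metabolites, z-scores, and threshold below are all invented for illustration, not data from the ETH Zürich screen.

```python
# Toy "profile effect map": for each drug, the change of each measured
# metabolite expressed as a z-score against untreated controls.
profile_map = {
    "drug_A": {"glucose-6-P": -2.8, "ATP": -0.4, "glutamine": 0.2},
    "drug_B": {"glucose-6-P": -0.1, "ATP": -3.1, "glutamine": 2.6},
}
# Metabolites each drug is designed to perturb (hypothetical).
intended_targets = {"drug_A": {"glucose-6-P"}, "drug_B": {"ATP"}}

def off_target_hits(drug, threshold=2.0):
    """Metabolites strongly perturbed but not in the drug's intended pathway."""
    return sorted(m for m, z in profile_map[drug].items()
                  if abs(z) >= threshold and m not in intended_targets[drug])

print(off_target_hits("drug_A"))  # → []
print(off_target_hits("drug_B"))  # → ['glutamine']
```

Following a drug's column through such a map is what provides the indirect measure of possible side effects described above.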

“We have found that this approach is

  • at least 100 times as fast as other omics screening platforms,” Dr. Sauer says.

“Some drugs, including many anticancer agents,

  • disrupt metabolism long before affecting growth.”
killing cancer cells

Furthermore, they used the principle of 13C-based flux analysis, in which

  • metabolites labeled with 13C are used to follow the utilization of metabolic pathways in the cell.

These 13C-determined intracellular responses of metabolic fluxes to drug treatment demonstrate

  • the functional performance of the network to be rather robust,
conformational changes leading to substrate efflux.

leading Dr. Sauer to the conclusion that

  • the phenotypic vigor he observes to drug challenges
  • is achieved by a flexible make up of the metabolome.

Dr. Sauer is confident that it will be possible to expand the scope of these investigations to hundreds of thousands of samples per study. This will allow answers to the questions of

  • how cells establish a stable functioning network in the face of inevitable concentration fluctuations.

Is Now the Hour?

There is great enthusiasm and agitation within the biotech community for

  • metabolomics approaches as a means of reversing the dismal record of drug discovery

that has accumulated in the last decade.

While the concept clearly makes sense and is being widely applied today, there are many reasons why drugs fail in development, and metabolomics will not be a panacea for resolving all of these questions. It is too early at this point to recognize a trend or a track record, and it will take some time to see how this approach can aid in drug discovery and shorten the timeline for the introduction of new pharmaceutical agents.

Degree of binding correlated with function

Diagram of a two-photon excitation microscope

Part 2.  Biologists Find ‘Missing Link’ in the Production of Protein Factories in Cells

Biologists at UC San Diego have found

  • the “missing link” in the chemical system that
  • enables animal cells to produce ribosomes

—the thousands of protein “factories” contained within each cell that

  • manufacture all of the proteins needed to build tissue and sustain life.
‘Missing Link’

Their discovery, detailed in the June 23 issue of the journal Genes & Development, will not only force

  • a revision of basic textbooks on molecular biology, but also
  • provide scientists with a better understanding of
  • how to limit uncontrolled cell growth, such as cancer,
  • that might be regulated by controlling the output of ribosomes.

Ribosomes are responsible for the production of the wide variety of proteins that include

  1. enzymes;
  2. structural molecules, such as those of hair, skin, and bones;
  3. hormones like insulin; and
  4. components of our immune system such as antibodies.

Regarded as life’s most important molecular machine, ribosomes have been intensively studied by scientists (the 2009 Nobel Prize in Chemistry, for example, was awarded for studies of its structure and function). But until now researchers had not uncovered all of the details of how the proteins that are used to construct ribosomes are themselves produced.

In multicellular animals such as humans,

  • ribosomes are made up of about 80 different proteins
    (humans have 79 while some other animals have a slightly different number) as well as
  • four different kinds of RNA molecules.

In 1969, scientists discovered that

  • the synthesis of the ribosomal RNAs is carried out by specialized systems using two key enzymes:
  • RNA polymerase I and RNA polymerase III.

But until now, scientists were unsure if a complementary system was also responsible for

  • the production of the 80 proteins that make up the ribosome.

That’s essentially what the UC San Diego researchers headed by Jim Kadonaga, a professor of biology, set out to examine. What they found was the missing link—the specialized

  • system that allows ribosomal proteins themselves to be synthesized by the cell.

Kadonaga says that he and coworkers found that ribosomal proteins are synthesized via

  • a novel regulatory system with the enzyme RNA polymerase II and
  • a factor termed TRF2.

“For the production of most proteins,

  1. RNA polymerase II functions with
  2. a factor termed TBP,
  3. but for the synthesis of ribosomal proteins, it uses TRF2.”
  • This specialized TRF2-based system for ribosome biogenesis
  • provides a new avenue for the study of ribosomes and
  • its control of cell growth, and

“it should lead to a better understanding and potential treatment of diseases such as cancer.”

Coordination of the transcriptome and metabolome

the potential advantages conferred by distal-site protein synthesis

Other authors of the paper were UC San Diego biologists Yuan-Liang Wang, Sascha Duttke and George Kassavetis, and Kai Chen, Jeff Johnston, and Julia Zeitlinger of the Stowers Institute for Medical Research in Kansas City, Missouri. Their research was supported by two grants from the National Institutes of Health (1DP2OD004561-01 and R01 GM041249).

Turning Off a Powerful Cancer Protein

Scientists have discovered how to shut down a master regulatory transcription factor that is

  • key to the survival of a majority of aggressive lymphomas,
  • which arise from the B cells of the immune system.

The protein, Bcl6, has long been considered too complex to target with a drug since it is also crucial

  • to the healthy functioning of many immune cells in the body, not just B cells gone bad.

The researchers at Weill Cornell Medical College report that it is possible

  • to shut down Bcl6 in diffuse large B-cell lymphoma (DLBCL)
  • while not affecting its vital function in T cells and macrophages
  • that are needed to support a healthy immune system.

If Bcl6 is completely inhibited, patients might suffer from systemic inflammation and atherosclerosis. The team conducted this new study to help clarify possible risks, as well as to understand

  • how Bcl6 controls the various aspects of the immune system.

The findings in this study were inspired from

  • preclinical testing of two Bcl6-targeting agents that Dr. Melnick and his Weill Cornell colleagues have developed
  • to treat DLBCLs.

These experimental drugs are

  • RI-BPI, a peptide mimic, and
  • the small molecule agent 79-6.

“This means the drugs we have developed against Bcl6 are more likely to be

  • significantly less toxic and safer for patients with this cancer than we realized,”

says Ari Melnick, M.D., professor of hematology/oncology and a hematologist-oncologist at NewYork-Presbyterian Hospital/Weill Cornell Medical Center.

Dr. Melnick says the discovery that

  • a master regulatory transcription factor can be targeted
  • offers implications beyond just treating DLBCL.

Recent studies from Dr. Melnick and others have revealed that

  • Bcl6 plays a key role in the most aggressive forms of acute leukemia, as well as certain solid tumors.

Bcl6 can control the type of immune cell that develops in the bone marrow—playing many roles

  • in the development of B cells, T cells, macrophages, and other cells—including a primary and essential role in
  • enabling B-cells to generate specific antibodies against pathogens.

According to Dr. Melnick, “When cells lose control of Bcl6,

  • lymphomas develop in the immune system.

Lymphomas are ‘addicted’ to Bcl6, and therefore

  • Bcl6 inhibitors powerfully and quickly destroy lymphoma cells.”

The big surprise in the current study is that rather than functioning as a single molecular machine,

  • Bcl6 functions like a Swiss Army knife,
  • using different tools to control different cell types.

This multifunction paradigm could represent a general model for the functioning of other master regulatory transcription factors.

“In this analogy, the Swiss Army knife, or transcription factor, keeps most of its tools folded,

  • opening only the one it needs in any given cell type,”

He makes the following analogy:

  • “For B cells, it might open and use the knife tool;
  • for T cells, the cork screw;
  • for macrophages, the scissors.”

“This means that you only need to prevent the master regulator from using certain tools to treat cancer. You don’t need to eliminate the whole knife,” he says. “In fact, we show that taking out the whole knife is harmful since

  • the transcription factor has many other vital functions that other cells in the body need.”

Prior to these study results, it was not known that a master regulator could separate its functions so precisely. Researchers hope this will be a major benefit to the treatment of DLBCL and perhaps other disorders that are influenced by Bcl6 and other master regulatory transcription factors.

The study is published in the journal Nature Immunology, in a paper titled “Lineage-specific functions of Bcl-6 in immunity and inflammation are mediated by distinct biochemical mechanisms”.

Part 3. Neuroscience

Vesicles influence function of nerve cells 
Oct 06, 2014        source: http://feeds.sciencedaily.com

Neurons (blue) which have absorbed exosomes (green) have increased levels of the enzyme catalase (red), which helps protect them against peroxides.


Tiny vesicles containing protective substances,

  • which glial cells transmit to nerve cells, apparently
  • play an important role in the functioning of neurons.

As cell biologists at Johannes Gutenberg University Mainz (JGU) have discovered,

  • nerve cells can enlist the aid of mini-vesicles of neighboring glial cells
  • to defend themselves against stress and other potentially detrimental factors.

These vesicles, called exosomes, appear to stimulate the neurons on various levels:

  • they influence electrical stimulus conduction,
  • biochemical signal transfer, and
  • gene regulation.

Exosomes are thus multifunctional signal emitters

  • that can have a significant effect in the brain.
Exosome

The researchers in Mainz already observed in a previous study that

  • oligodendrocytes release exosomes on exposure to neuronal stimuli.
  • These are absorbed by the neurons and improve neuronal stress tolerance.

Oligodendrocytes, a type of glial cell, form an

  • insulating myelin sheath around the axons of neurons.

The exosomes transport protective proteins such as

  • heat shock proteins,
  • glycolytic enzymes, and
  • enzymes that reduce oxidative stress from one cell type to another,
  • but also transmit genetic information in the form of ribonucleic acids.

“As we have now discovered in cell cultures, exosomes seem to have a whole range of functions,” explained Dr. Eva-Maria Krämer-Albers. By means of their transmission activity, the small bubbles that are the vesicles

  • not only promote electrical activity in the nerve cells, but also
  • influence them on the biochemical and gene regulatory level.

“The extent of activities of the exosomes is impressive,” added Krämer-Albers. The researchers hope that the understanding of these processes will contribute to the development of new strategies for the treatment of neuronal diseases. Their next aim is to uncover how vesicles actually function in the brains of living organisms.

http://labroots.com/user/news/article/id/217438/title/vesicles-influence-function-of-nerve-cells

The above story is based on materials provided by Universität Mainz.

Universität Mainz. “Vesicles influence function of nerve cells.” ScienceDaily. ScienceDaily, 6 October 2014. www.sciencedaily.com/releases/2014/10/141006174214.htm

Neuroscientists use snail research to help explain “chemo brain”

10/08/2014
It is estimated that as many as half of patients taking cancer drugs experience a decrease in mental sharpness. While there have been many theories, what causes “chemo brain” has eluded scientists.

In an effort to solve this mystery, neuroscientists at The University of Texas Health Science Center at Houston (UTHealth) conducted an experiment in an animal memory model and their results point to a possible explanation. Findings appeared in The Journal of Neuroscience.

In the study involving a sea snail that shares many of the same memory mechanisms as humans and a drug used to treat a variety of cancers, the scientists identified

  • memory mechanisms blocked by the drug.

Then, they were able to counteract or

  • unblock the mechanisms by administering another agent.

“Our research has implications in the care of people given to cognitive deficits following drug treatment for cancer,” said John H. “Jack” Byrne, Ph.D., senior author, holder of the June and Virgil Waggoner Chair and Chairman of the Department of Neurobiology and Anatomy at the UTHealth Medical School. “There is no satisfactory treatment at this time.”

Byrne’s laboratory is known for its use of a large snail called Aplysia californica to further the understanding of the biochemical signaling among nerve cells (neurons).  The snails have large neurons that relay information much like those in humans.

When Byrne’s team compared cell cultures taken from normal snails to

  • those administered a dose of a cancer drug called doxorubicin,

the investigators pinpointed a neuronal pathway

  • that was no longer passing along information properly.

With the aid of an experimental drug,

  • the scientists were able to reopen the pathway.

Unfortunately, this drug would not be appropriate for humans, Byrne said. “We want to identify other drugs that can rescue these memory mechanisms,” he added.

According to the American Cancer Society, some of the distressing mental changes cancer patients experience may last a short time or go on for years.

Byrne’s UT Health research team includes co-lead authors Rong-Yu Liu, Ph.D., and Yili Zhang, Ph.D., as well as Brittany Coughlin and Leonard J. Cleary, Ph.D. All are affiliated with the W.M. Keck Center for the Neurobiology of Learning and Memory.

Byrne and Cleary also are on the faculty of The University of Texas Graduate School of Biomedical Sciences at Houston. Coughlin is a student at the school, which is jointly operated by UTHealth and The University of Texas MD Anderson Cancer Center.

The study titled “Doxorubicin Attenuates Serotonin-Induced Long-Term Synaptic Facilitation by Phosphorylation of p38 Mitogen-Activated Protein Kinase” received support from National Institutes of Health grant (NS019895) and the Zilkha Family Discovery Fellowship.

Source: Univ. of Texas Health Science Center at Houston

http://www.rdmag.com/news/2014/10/neuroscientists-use-snail-research-help-explain-E2_9_Cchemo-brain

Doxorubicin Attenuates Serotonin-Induced Long-Term Synaptic Facilitation by Phosphorylation of p38 Mitogen-Activated Protein Kinase

Rong-Yu Liu*,  Yili Zhang*,  Brittany L. Coughlin,  Leonard J. Cleary, and  John H. Byrne   +Show Affiliations
The Journal of Neuroscience, 1 Oct 2014, 34(40): 13289-13300;
http://dx.doi.org/10.1523/JNEUROSCI.0538-14.2014

Doxorubicin (DOX) is an anthracycline used widely for cancer chemotherapy. Its primary mode of action appears to be

  • topoisomerase II inhibition, DNA cleavage, and free radical generation.

However, in non-neuronal cells, DOX also inhibits the expression of

  • dual-specificity phosphatases (also referred to as MAPK phosphatases) and thereby
  1. inhibits the dephosphorylation of extracellular signal-regulated kinase (ERK) and
  2. p38 mitogen-activated protein kinase (p38 MAPK),
  3. two MAPK isoforms important for long-term memory (LTM) formation.

Activation of these kinases by DOX in neurons, if present,

  • could have secondary effects on cognitive functions, such as learning and memory.

The present study used cultures of rat cortical neurons and sensory neurons (SNs) of Aplysia

  • to examine the effects of DOX on levels of phosphorylated ERK (pERK) and
  • phosphorylated p38 (p-p38) MAPK.

In addition, Aplysia neurons were used to examine the effects of DOX on

  • long-term enhanced excitability, long-term synaptic facilitation (LTF), and
  • long-term synaptic depression (LTD).

DOX treatment led to elevated levels of

  • pERK and p-p38 MAPK in SNs and cortical neurons.

In addition, it increased phosphorylation of

  • the downstream transcriptional repressor cAMP response element-binding protein 2 in SNs.

DOX treatment blocked serotonin-induced LTF and enhanced LTD induced by the neuropeptide Phe-Met-Arg-Phe-NH2. The block of LTF appeared to be attributable to

  • overriding inhibitory effects of p-p38 MAPK, because
  • LTF was rescued in the presence of an inhibitor of p38 MAPK
    (SB203580 [4-(4-fluorophenyl)-2-(4-methylsulfinylphenyl)-5-(4-pyridyl)-1H-imidazole]).

These results suggest that acute application of DOX might impair the formation of LTM via the p38 MAPK pathway.
Terms: Aplysia, chemotherapy, ERK, p38 MAPK, serotonin, synaptic plasticity

Technology that controls brain cells with radio waves earns early BRAIN grant

10/08/2014

BRAIN control: The new technology uses radio waves to activate or silence cells remotely. The bright spots above represent cells with increased calcium after treatment with radio waves, a change that would allow neurons to fire.

A proposal to develop a new way to

  • remotely control brain cells

from Sarah Stanley, a research associate in Rockefeller University’s Laboratory of Molecular Genetics, headed by Jeffrey M. Friedman, is

  • among the first to receive funding from U.S. President Barack Obama’s BRAIN initiative.

The project will make use of a technique called

  • radiogenetics that combines the use of radio waves or magnetic fields with
  • nanoparticles to turn neurons on or off.

The National Institutes of Health is one of four federal agencies involved in the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative. Following in the ambitious footsteps of the Human Genome Project, the BRAIN initiative seeks

  • to create a dynamic map of the brain in action,

a goal that requires the development of new technologies. The BRAIN initiative working group, which outlined the broad scope of the ambitious project, was co-chaired by Rockefeller’s Cori Bargmann, head of the Laboratory of Neural Circuits and Behavior.

Stanley’s grant, for $1.26 million over three years, is one of 58 projects to get BRAIN grants, the NIH announced. The NIH’s plan for its part of this national project, which has been pitched as “America’s next moonshot,” calls for $4.5 billion in federal funds over 12 years.

The technology Stanley is developing would

  • enable researchers to manipulate the activity of neurons, as well as other cell types,
  • in freely moving animals in order to better understand what these cells do.

Other techniques for controlling selected groups of neurons exist, but her new nanoparticle-based technique has a

  • unique combination of features that may enable new types of experimentation.
  • It would allow researchers to rapidly activate or silence neurons within a small area of the brain or
  • dispersed across a larger region, including those in difficult-to-access locations.

Stanley also plans to explore the potential this method has for use treating patients.

“Francis Collins, director of the NIH, has discussed

  • the need for studying the circuitry of the brain,
  • which is formed by interconnected neurons.

Our remote-control technology may provide a tool with which researchers can ask new questions about the roles of complex circuits in regulating behavior,” Stanley says.
Rockefeller University’s Laboratory of Molecular Genetics
Source: Rockefeller Univ.

Part 4.  Cancer

Two Proteins Found to Block Cancer Metastasis

Why do some cancers spread while others don’t? Scientists have now demonstrated that

  • metastatic incompetent cancers actually “poison the soil”
  • by generating a micro-environment that blocks cancer cells
  • from settling and growing in distant organs.

The “seed and the soil” hypothesis proposed by Stephen Paget in 1889 is now widely accepted to explain how

  • cancer cells (seeds) are able to generate fertile soil (the micro-environment)
  • in distant organs that promotes cancer’s spread.

However, this concept had not explained why some tumors do not spread or metastasize.

The researchers, from Weill Cornell Medical College, found that

  • two key proteins involved in this process work by
  • dramatically suppressing cancer’s spread.

The study offers hope that a drug based on these

  • potentially therapeutic proteins, prosaposin and Thrombospondin 1 (Tsp-1),

might help keep human cancer at bay and from metastasizing.

Scientists don’t understand why some tumors wouldn’t “want” to spread. It goes against their “job description,” says the study’s senior investigator, Vivek Mittal, Ph.D., an associate professor of cell and developmental biology in cardiothoracic surgery and director of the Neuberger Berman Foundation Lung Cancer Laboratory at Weill Cornell Medical College. He theorizes that metastasis occurs when

  • the barriers that the body throws up to protect itself against cancer fail.

But there are some tumors in which some of the barriers may still be intact. “So that suggests

  • those primary tumors will continue to grow, but that
  • an innate protective barrier still exists that prevents them from spreading and invading other organs,”

The researchers found that, like typical tumors,

  • metastasis-incompetent tumors also send out signaling molecules
  • that establish what is known as the “premetastatic niche” in distant organs.

These niches, composed of bone marrow cells and various growth factors, have been described previously by others, including Dr. Mittal, as the fertile “soil” in which the disseminated cancer cell “seeds” grow.

Weill Cornell’s Raúl Catena, Ph.D., a postdoctoral fellow in Dr. Mittal’s laboratory, found an important difference between the tumor types. Metastatic-incompetent tumors

  • systemically increased expression of Tsp-1, a molecule known to fight cancer growth.
  • increased Tsp-1 production was found specifically in the bone marrow myeloid cells
  • that comprise the metastatic niche.

These results were striking, because for the first time Dr. Mittal says

  • the bone marrow-derived myeloid cells were implicated as
  • the main producers of Tsp-1.

In addition, Weill Cornell and Harvard researchers found that

  • prosaposin secreted predominantly by the metastatic-incompetent tumors
  • increased expression of Tsp-1 in the premetastatic lungs.

Thus, Dr. Mittal posits that prosaposin works in combination with Tsp-1

  • to convert pro-metastatic bone marrow myeloid cells in the niche
  • into cells that are not hospitable to cancer cells that spread from a primary tumor.
  • “The very same myeloid cells in the niche that we know can promote metastasis
  • can also be induced under the command of the metastatic incompetent primary tumor to inhibit metastasis,”

The research team found that

  • the Tsp-1–inducing activity of prosaposin
  • was contained in only a 5-amino acid peptide region of the protein, and
  • this peptide alone induced Tsp-1 in the bone marrow cells and
  • effectively suppressed metastatic spread in the lungs
  • in mouse models of breast and prostate cancer.

This 5-amino acid peptide with Tsp-1–inducing activity

  • has the potential to be used as a therapeutic agent against metastatic cancer,

The scientists have begun to test prosaposin in other tumor types or metastatic sites.

Dr. Mittal says that “The clinical implications of the study are:

  • Not only is it theoretically possible to design a prosaposin-based drug or drugs
  • that induce Tsp-1 to block cancer spread, but
  • you could potentially create noninvasive prognostic tests
  • to predict whether a cancer will metastasize.”

The study was reported in the April 30 issue of Cancer Discovery, in a paper titled “Bone Marrow-Derived Gr1+ Cells Can Generate a Metastasis-Resistant Microenvironment Via Induced Secretion of Thrombospondin-1”.

Disabling Enzyme Cripples Tumors, Cancer Cells

First Step of Metastasis


Published: Sep 05, 2013  http://www.technologynetworks.com/Metabolomics/news.aspx?id=157138

Knocking out a single enzyme dramatically cripples the ability of aggressive cancer cells to spread and grow tumors.

The paper, published in the journal Proceedings of the National Academy of Sciences, sheds new light on the importance of lipids, a group of molecules that includes fatty acids and cholesterol, in the development of cancer.

Researchers have long known that cancer cells metabolize lipids differently than normal cells. Levels of ether lipids – a class of lipids that are harder to break down – are particularly elevated in highly malignant tumors.

“Cancer cells make and use a lot of fat and lipids, and that makes sense because cancer cells divide and proliferate at an accelerated rate, and to do that,

  • they need lipids, which make up the membranes of the cell,”

said study principal investigator Daniel Nomura, assistant professor in UC Berkeley’s Department of Nutritional Sciences and Toxicology. “Lipids have a variety of uses for cellular structure, but what we’re showing with our study is that

  • lipids can send signals that fuel cancer growth.”

In the study, Nomura and his team tested the effects of reducing ether lipids on human skin cancer cells and primary breast tumors. They targeted an enzyme,

  • alkylglycerone phosphate synthase, or AGPS,
  • known to be critical to the formation of ether lipids.

The researchers confirmed that

  1. AGPS expression increased when normal cells turned cancerous.
  2. inactivating AGPS substantially reduced the aggressiveness of the cancer cells.

“The cancer cells were less able to move and invade,” said Nomura.

The researchers also compared the impact of

  • disabling the AGPS enzyme in mice that had been injected with cancer cells.

Nomura observes, “Among the mice that had the AGPS enzyme inactivated,

  • the tumors were nonexistent,”

“The mice that did not have this enzyme

  • disabled rapidly developed tumors.”

The researchers determined that

  • inhibiting AGPS expression depleted the cancer cells of ether lipids.
  • AGPS altered levels of other types of lipids important to the ability of the cancer cells to survive and spread, including
    • prostaglandins and acyl phospholipids.

“What makes AGPS stand out as a treatment target is that the enzyme seems to simultaneously

  • regulate multiple aspects of lipid metabolism
  • important for tumor growth and malignancy.”

Future steps include the

  • development of AGPS inhibitors for use in cancer therapy,

“This study sheds considerable light on the important role that AGPS plays in ether lipid metabolism in cancer cells, and it suggests that

  • inhibitors of this enzyme could impair tumor formation,”

said Benjamin Cravatt, Professor and Chair of Chemical Physiology at The Scripps Research Institute, who was not part of the UC Berkeley study.

Agilent Technologies Thought Leader Award Supports Translational Research Program
Published: Mon, March 04, 2013

The award will support Dr DePinho’s research into

  • metabolic reprogramming in the earliest stages of cancer.

Agilent Technologies Inc. announces that Dr. Ronald A. DePinho, a world-renowned oncologist and researcher, has received an Agilent Thought Leader Award.

DePinho is president of the University of Texas MD Anderson Cancer Center. DePinho and his team hope to discover and characterize

  • alterations in metabolic flux during tumor initiation and maintenance, and to identify biomarkers for early detection of pancreatic cancer together with
  • novel therapeutic targets.

Researchers on his team will work with scientists from the university’s newly formed Institute of Applied Cancer Sciences.

The Agilent Thought Leader Award provides funds to support personnel as well as a state-of-the-art Agilent 6550 iFunnel Q-TOF LC/MS system.

“I am extremely pleased to receive this award for metabolomics research, as the survival rates for pancreatic cancer have not significantly improved over the past 20 years,” DePinho said. “This technology will allow us to

  • rapidly identify new targets that drive the formation, progression and maintenance of pancreatic cancer.

Discoveries from this research will also lead to

  • the development of effective early detection biomarkers and novel therapeutic interventions.”

“We are proud to support Dr. DePinho’s exciting translational research program, which will make use of

  • metabolomics and integrated biology workflows and solutions in biomarker discovery,”

said Patrick Kaltenbach, Agilent vice president, general manager of the Liquid Phase Division, and the executive sponsor of this award.

The Agilent Thought Leader Program promotes fundamental scientific advances by support of influential thought leaders in the life sciences and chemical analysis fields.

The covalent modifier Nedd8 is critical for the activation of Smurf1 ubiquitin ligase in tumorigenesis

Ping Xie, Minghua Zhang, Shan He, Kefeng Lu, Yuhan Chen, Guichun Xing, et al.
Nature Communications
  2014; 5:3733.  http://dx.doi.org/10.1038/ncomms4733

Neddylation, the covalent attachment of the ubiquitin-like protein Nedd8 to ligases of the Cullin-RING E3 family,

  • regulates their ubiquitylation activity.

However, regulation of HECT ligases by neddylation has not been reported to date. Here we show that

  • the C2-WW-HECT ligase Smurf1 is activated by neddylation.

Smurf1 physically interacts with

  1. Nedd8 and Ubc12,
  2. forms a Nedd8-thioester intermediate, and then
  3. catalyses its own neddylation on multiple lysine residues.

Intriguingly, this autoneddylation needs

  • an active site at C426 in the HECT N-lobe.

Neddylation of Smurf1 potently enhances

  • ubiquitin E2 recruitment and
  • augments the ubiquitin ligase activity of Smurf1.

The regulatory role of neddylation

  • is conserved in human Smurf1 and yeast Rsp5.

Furthermore, in human colorectal cancers,

  • the elevated expression of Smurf1, Nedd8, NAE1 and Ubc12
  • correlates with cancer progression and poor prognosis.

These findings provide evidence that

  • neddylation is important in HECT ubiquitin ligase activation and
  • shed new light on the tumour-promoting role of Smurf1.
Swinging domains in HECT E3

Subject terms: Biological sciences Cancer Cell biology

Figure 1: Smurf1 expression is elevated in colorectal cancer tissues.

(a) Smurf1 expression scores are shown as box plots, with the horizontal lines representing the median; the bottom and top of the boxes representing the 25th and 75th percentiles, respectively; and the vertical bars representing the ra…

Figure 2: Positive correlation of Smurf1 expression with Nedd8 and its interacting enzymes in colorectal cancer.

(a) Representative images from immunohistochemical staining of Smurf1, Ubc12, NAE1 and Nedd8 in the same colorectal cancer tumour. Scale bars, 100 μm. (b–d) The expression scores of Nedd8 (b, n=283), NAE1 (c, n=281) and Ubc12 (d, n=19…

Figure 3: Smurf1 interacts with Ubc12.

(a) GST pull-down assay of Smurf1 with Ubc12. Both input and pull-down samples were subjected to immunoblotting with anti-His and anti-GST antibodies. Smurf1 interacted with Ubc12 and UbcH5c, but not with Ubc9. (b) Mapping the regions…

Figure 4: Nedd8 is attached to Smurf1 through C426-catalysed autoneddylation.

(a) Covalent neddylation of Smurf1 in vitro. Purified His-Smurf1-WT or C699A proteins were incubated with Nedd8 and Nedd8-E1/E2. Reactions were performed as described in the Methods section. Samples were analysed by western blotting wi…

Figure 5: Neddylation of Smurf1 activates its ubiquitin ligase activity.

(a) In vivo Smurf1 ubiquitylation assay. Nedd8 was co-expressed with Smurf1 WT or C699A in HCT116 cells (left panels). Twenty-four hours post transfection, cells were treated with MG132 (20 μM, 8 h). HCT116 cells were transfected with…


The deubiquitylase USP33 discriminates between RALB functions in autophagy and innate immune response

M Simicek, S Lievens, M Laga, D Guzenko, VN. Aushev, et al.
Nature Cell Biology 2013; 15, 1220–1230    http://dx.doi.org/10.1038/ncb2847

The RAS-like GTPase RALB mediates cellular responses to nutrient availability or viral infection by respectively

  • engaging two components of the exocyst complex, EXO84 and SEC5.
  1. RALB employs SEC5 to trigger innate immunity signalling, whereas
  2. RALB–EXO84 interaction induces autophagocytosis.

How this differential interaction is achieved molecularly by the RAL GTPase remains unknown.

We found that whereas GTP binding

  • turns on RALB activity,

ubiquitylation of RALB at Lys 47

  • tunes its activity towards a particular effector.

Specifically, ubiquitylation at Lys 47

  • sterically inhibits RALB binding to EXO84, while
  • facilitating its interaction with SEC5.

Double-stranded RNA promotes

  • RALB ubiquitylation and
  • SEC5–TBK1 complex formation.

In contrast, nutrient starvation

  • induces RALB deubiquitylation
  • by accumulation and relocalization of the deubiquitylase USP33
  • to RALB-positive vesicles.

Deubiquitylated RALB

  • promotes the assembly of the RALB–EXO84–beclin-1 complexes
  • driving autophagosome formation. Thus,
  • ubiquitylation within the effector-binding domain
  • provides the switch for the dual functions of RALB in
    • autophagy and innate immune responses.

Part 5. Metabolic Syndrome

Single Enzyme is Necessary for Development of Diabetes

Published: Aug 20, 2014 http://www.technologynetworks.com/Metabolomics/news.aspx?ID=169416

12-LO enzyme promotes the obesity-induced oxidative stress in the pancreatic cells.

An enzyme called 12-LO promotes the obesity-induced oxidative stress in the pancreatic cells that leads

  • to pre-diabetes and diabetes.

12-LO’s enzymatic action is the last step in

  • the production of certain small molecules that harm the cell,

according to a team from Indiana University School of Medicine, Indianapolis.

The findings will enable the development of drugs that can interfere with this enzyme, preventing or even reversing diabetes. The research is published ahead of print in the journal Molecular and Cellular Biology.

In earlier studies, these researchers and their collaborators at Eastern Virginia Medical School showed that

  • 12-LO (which stands for 12-lipoxygenase) is present in these cells
  • only in people who become overweight.

The harmful small molecules resulting from 12-LO’s enzymatic action are known as HETEs, short for hydroxyeicosatetraenoic acid.

  1. HETEs harm the mitochondria, which then
  2. fail to produce sufficient energy to enable
  3. the pancreatic cells to manufacture the necessary quantities of insulin.

For the study, the investigators genetically engineered mice that

  • lacked the gene for 12-LO exclusively in their pancreas cells.

Mice were either fed a low-fat or high-fat diet.

Both the control mice and the knockout mice on the high fat diet

  • developed obesity and insulin resistance.

The investigators also examined the pancreatic beta cells of both knockout and control mice, using both microscopic studies and molecular analysis. Those from the knockout mice were intact and healthy, while

  • those from the control mice showed oxidative damage,
  • demonstrating that 12-LO and the resulting HETEs
  • caused the beta cell failure.

Mirmira notes that the fatty diet used in the study was the Western diet, which comprises mostly saturated (“bad”) fats. Based partly on a recent study of related metabolic pathways, he says that

  • the unsaturated and mono-unsaturated fats-which comprise most fats in the healthy,
  • relatively high fat Mediterranean diet-are unlikely to have the same effects.

“Our research is the first to show that 12-LO in the beta cell

  • is the culprit in the development of pre-diabetes, following high fat diets,” says Mirmira.

“Our work also lends important credence to the notion that

  • the beta cell is the primary defective cell in virtually all forms of diabetes and pre-diabetes.”

A New Player in Lipid Metabolism Discovered

Published: Aug18, 2014  http://www.technologynetworks.com/Metabolomics/news.aspx?ID=169356

Specially engineered mice gained no weight, and normal counterparts became obese

  • on the same high-fat, obesity-inducing Western diet.

Specially engineered mice that lacked a particular gene did not gain weight

  • when fed a typical high-fat, obesity-inducing Western diet.

Yet, these mice ate the same amount as their normal counterparts that became obese.

The mice were engineered with fat cells that lacked a gene called SEL1L,

  • known to be involved in the clearance of mis-folded proteins
  • in the cell’s protein making machinery called the endoplasmic reticulum (ER).

When mis-folded proteins are not cleared but accumulate,

  • they destroy the cell and contribute to such diseases as
  1. mad cow disease,
  2. Type 1 diabetes and
  3. cystic fibrosis.

“The million-dollar question is why don’t these mice gain weight? Is this related to its inability to clear mis-folded proteins in the ER?” said Ling Qi, associate professor of molecular and biochemical nutrition and senior author of the study published online July 24 in Cell Metabolism. Haibo Sha, a research associate in Qi’s lab, is the paper’s lead author.

Interestingly, the experimental mice developed a host of other problems, including

  • postprandial hypertriglyceridemia,
  • and fatty livers.

“Although we are yet to find out whether these conditions contribute to the lean phenotype, we found that

  • there was a lipid partitioning defect in the mice lacking SEL1L in fat cells,
  • where fat cells cannot store fat [lipids], and consequently
  • fat goes to the liver.

During the investigation of possible underlying mechanisms, we discovered

  • a novel function for SEL1L as a regulator of lipid metabolism,” said Qi.

Sha said, “We were very excited to find that

  • SEL1L is required for the intracellular trafficking of
  • lipoprotein lipase (LPL), acting as a chaperone,”

and added that “Using several tissue-specific knockout mouse models,

  • we showed that this is a general phenomenon,”

Without LPL, lipids remain in the circulation;

  • fat and muscle cells cannot absorb fat molecules for storage and energy combustion.

People with LPL mutations develop

  • postprandial hypertriglyceridemia similar to
  • conditions found in fat cell-specific SEL1L-deficient mice, said Qi.

Future work will investigate the

  • role of SEL1L in human patients carrying LPL mutations and
  • determine why fat cell-specific SEL1L-deficient mice remain lean under Western diets, said Sha.

Co-authors include researchers from Cedars-Sinai Medical Center in Los Angeles; Wageningen University in the Netherlands; Georgia State University; University of California, Los Angeles; and the Medical College of Soochow University in China.

The study was funded by the U.S. National Institutes of Health, the Netherlands Organization for Health Research and Development, Cedars-Sinai Medical Center, the Chinese National Science Foundation, the American Diabetes Association, Cornell’s Center for Vertebrate Genomics and the Howard Hughes Medical Institute.

Part 6. Biomarkers

Biomarkers Take Center Stage

Josh P. Roberts
GEN May 1, 2013 (Vol. 33, No. 9)  http://www.genengnews.com/

While work with biomarkers continues to grow, scientists are also grappling with research-related bottlenecks, such as

  1. affinity reagent development,
  2. platform reproducibility, and
  3. sensitivity.

Biomarkers by definition indicate some state or process that generally occurs

  • at a spatial or temporal distance from the marker itself, and

it would not be an exaggeration to say that biomedicine has become infatuated with them:

  1. where to find them,
  2. when they may appear,
  3. what form they may take, and
  4. how they can be used to diagnose a condition or
  5. predict whether a therapy may be successful.

Biomarkers are on the agenda of many if not most industry gatherings, and in cases such as Oxford Global’s recent “Biomarker Congress” and the GTC “Biomarker Summit”, they hold the naming rights. There, some basic principles were built upon, amended, and sometimes challenged.

In oncology, for example, biomarker discovery is often predicated on the premise that

  • proteins shed from a tumor will traverse to and persist in, and be detectable in, the circulation.

By quantifying these proteins—singularly or as part of a larger “signature”—the hope is

  1. to garner information about the molecular characteristics of the cancer
  2. that will help with cancer detection and
  3. personalization of the treatment strategy.

Yet this approach has not yet turned into the panacea that was hoped for. Bottlenecks exist in

  • affinity reagent development,
  • platform reproducibility, and
  • sensitivity.

There is also a dearth of understanding of some of the

  • fundamental principles of biomarker biology that we need to know the answers to,

said Parag Mallick, Ph.D., whose lab at Stanford University is “working on trying to understand where biomarkers come from.”

There are dogmas saying that

  • circulating biomarkers come solely from secreted proteins.

But Dr. Mallick’s studies indicate that fully

  • 50% of circulating proteins may come from intracellular sources or
  • proteins that are annotated as such.

“We don’t understand the processes governing

  • which tumor-derived proteins end up in the blood.”

Other questions include “how does the size of a tumor affect how much of a given protein will be in the blood?”—perhaps

  • the tumor is necrotic at the center, or
  • it’s hypervascular or hypovascular.

He points out, “The problem is that these are highly nonlinear processes at work, and

  • there is a large number of factors that might affect the answer to that question.”

Their research focuses on using

  1. mass spectrometry and
  2. computational analysis
  • to characterize the biophysical properties of the circulating proteome, and
  • relate these to measurements made of the tumor itself.

Furthermore, he said, “We’ve observed that the proteins that are likely to

  • first show up and persist in the circulation
  • are more stable than proteins that don’t,” and
  • “we can quantify how significant the effect is.”

The goal is ultimately to be able to

  1. build rigorous, formal mathematical models that will allow something measured in the blood
  2. to be tied back to the molecular biology taking place in the tumor.

And conversely, to use those models

  • to predict from a tumor what will be found in the circulation.

“Ultimately, the models will allow you to connect the dots between

  • what you measure in the blood and the biology of the tumor.”
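The kind of model described above can be illustrated with a deliberately oversimplified sketch (all rate constants are hypothetical, and the real processes are, as the text notes, highly nonlinear): a one-compartment model in which a tumor sheds a marker protein into the blood in proportion to its mass, while the protein is cleared at a first-order rate. Solving the balance at steady state lets a blood concentration be mapped back to an estimated tumor burden.

```python
# Toy one-compartment model of tumor-marker shedding and clearance.
# Illustrative only: real models must account for protein stability,
# vascularity, necrosis, and the other nonlinear factors discussed above.

def steady_state_conc(tumor_mass, k_shed, k_clear):
    """Blood concentration where shedding balances clearance.

    dC/dt = k_shed * M - k_clear * C  =>  C_ss = k_shed * M / k_clear
    A more stable protein (smaller k_clear) persists at a higher level.
    """
    return k_shed * tumor_mass / k_clear

def infer_tumor_mass(conc, k_shed, k_clear):
    """Invert the model: tie a blood measurement back to tumor burden."""
    return conc * k_clear / k_shed

# Hypothetical parameters (arbitrary units).
k_shed, k_clear = 0.5, 2.0
c = steady_state_conc(10.0, k_shed, k_clear)   # 10-unit tumor -> C_ss = 2.5
mass = infer_tumor_mass(c, k_shed, k_clear)    # recovers the 10-unit burden
```

In practice such a forward model would be fitted against paired tumor and plasma measurements (for example, the mass spectrometry data described above) rather than assumed.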

Bound for Affinity Arrays

Affinity reagents are the main tools for large-scale protein biomarker discovery. And while this has tended to mean antibodies (or their derivatives), other affinity reagents are demanding a place in the toolbox.

Affimers, a type of affinity reagent being developed by Avacta, consist of

  1. a biologically inert, biophysically stable protein scaffold
  2. containing three variable regions into which
  3. distinct peptides are inserted.

The resulting three-dimensional surface formed by these peptides

  • interacts and binds to proteins and other molecules in solution,
  • much like the antigen-binding site of antibodies.

Unlike antibodies, Affimers are relatively small (13 kDa),

  • non-post-translationally modified proteins
  • that can readily be expressed in bacterial culture.

They may be made to bind surfaces through unique residues

  • engineered onto the opposite face of the Affimer,
  • allowing the binding site to be exposed to the target in solution.

“We don’t seem to see in what we’ve done so far

  • any real loss of activity or functionality of Affimers when bound to surfaces—

they’re very robust,” said CEO Alastair Smith, Ph.D.

Avacta is taking advantage of this stability and its large libraries of Affimers to develop

  • very large affinity microarrays for
  • drug and biomarker discovery.

To date they have printed arrays with around 20–25,000 features, and Dr. Smith is “sure that we can get toward about 50,000 on a slide,” he said. “There’s no real impediment to us doing that other than us expressing the proteins and getting on with it.”

Customers will be provided with these large, complex “naïve” discovery arrays, readable with standard equipment. The plan is for the company to then “support our customers by providing smaller arrays with

  • the Affimers that are binding targets of interest to them,” Dr. Smith foretold.

And since the intellectual property rights are unencumbered,

  • Affimers in those arrays can be licensed to the end users
  • to develop diagnostics that can be validated as time goes on.

Around 20,000-Affimer discovery arrays were recently tested by collaborator Professor Ann Morgan of the University of Leeds with pools of unfractionated serum from patients with symptoms of inflammatory disease. The arrays

  • “rediscovered” elevated C-reactive protein (CRP, the clinical gold standard marker)
  • as well as uncovered an additional 22 candidate biomarkers.
  • Other candidates, combined with CRP, appear able to distinguish between different diseases such as
  1. rheumatoid arthritis,
  2. psoriatic arthritis,
  3. SLE, or
  4. giant cell arteritis.

Epigenetic Biomarkers

Methylation of adenine

Sometimes biomarkers are used not to find disease but

  • to distinguish healthy human cell types, with
  • examples being found in flow cytometry and immunohistochemistry.

These widespread applications, however, are difficult to standardize, being

  • subject to arbitrary or subjective gating protocols and other imprecise criteria.

Epiontis instead uses an epigenetic approach. “What we need is a unique marker that is

  • demethylated only in one cell type and
  • methylated in all the other cell types,”

Each cell of the right cell type will have

  • two demethylated copies of a certain gene locus,
  • allowing them to be enumerated by quantitative PCR.
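The copy-counting arithmetic behind such a qPCR readout can be sketched in a few lines. The standard-curve slope and intercept below are illustrative assumptions (a slope of −3.32 corresponds to roughly 100% PCR efficiency), not Epiontis parameters:

```python
def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Convert a qPCR quantification cycle (Cq) to a copy number via a
    standard curve Cq = slope * log10(copies) + intercept.
    Slope and intercept here are illustrative values only."""
    return 10 ** ((cq - intercept) / slope)

def cells_from_demethylated_copies(copies, copies_per_cell=2):
    """Each cell of the target type contributes two demethylated copies
    of the marker locus (one per allele), so copies / 2 = cells."""
    return copies / copies_per_cell

# A sample whose marker locus amplifies at Cq 34.68 contains ~10
# demethylated copies, i.e. ~5 cells of the target type.
cells = cells_from_demethylated_copies(copies_from_cq(34.68))
```

Because the marker is demethylated only in the target cell type, no gating or staining step is needed; the cell count falls directly out of the amplification curve.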

The biggest challenge is finding that unique epigenetic marker. To do so they look through the literature for proteins and genes described as playing a role in the cell type’s biology, and then

  • look at the methylation patterns to see if one can be used as a marker.

They also “use customized Affymetrix chips to look at the

  • differential epigenetic status of different cell types on a genomewide scale,”

explained CBO and founder Ulrich Hoffmueller, Ph.D.

The company currently has a panel of 12 assays for 12 immune cell types. Among these is an assay for

  • regulatory T (Treg) cells that queries the Foxp3 gene—which is uniquely demethylated in Treg
  • even though it is transiently expressed in activated T cells of other subtypes.

Also assayed are Th17 cells, difficult to detect by flow cytometry because

  • “the cells have to be stimulated in vitro,” he pointed out.

Developing New Assays for Cancer Biomarkers

Researchers at Myriad RBM and the Cancer Prevention Research Institute of Texas are collaborating to develop

  • new assays for cancer biomarkers on the Myriad RBM Multi-Analyte Profile (MAP) platform.

The release of OncologyMAP 2.0 expanded Myriad RBM’s biomarker menu to over 250 analytes, which can be measured from a single small sample, according to the company. Using this menu, L. Stephen et al. published a poster, “Analysis of Protein Biomarkers in Prostate and Colorectal Tumor Lysates,” which showed the results of

  • a survey of proteins relevant to colorectal (CRC) and prostate (PC) tumors
  • to identify potential proteins of interest for cancer research.

The study looked at CRC and PC tumor lysates and found that 102 of the 115 proteins showed levels above the lower limit of quantification.

  • Four markers were significantly higher in PC and 10 were greater in CRC.

For most of the analytes, duplicate sections of the tumor were similar, although some did show differences; among the CRC samples, tumor number 4 showed differences for CEA and tumor number 2 for uPA.

Thirty analytes were shown to be

  • different in CRC tumor compared to its adjacent tissue.
  • Ten of the analytes were higher in adjacent tissue compared to CRC.
  • Eighteen of the markers examined demonstrated significant correlations of CRC tumor concentration to serum levels.

This suggests that the OncologyMAP 2.0 platform “provides a good method for studying changes in tumor levels because many proteins can be assessed with a very small sample.”

Clinical Test Development with MALDI-ToF

While there have been many attempts to translate results from early discovery work on the serum proteome into clinical practice, few of these efforts have progressed past the discovery phase.

Matrix-assisted laser desorption/ionization-time of flight (MALDI-ToF) mass spectrometry on unfractionated serum/plasma samples offers many practical advantages over alternative techniques, and does not require

  • a shift from discovery to development and commercialization platforms.

Biodesix claims it has been able to develop the technology into

  • a reproducible, high-throughput tool to
  • routinely measure protein abundance from serum/plasma samples.

“We improved data-analysis algorithms to

  • reproducibly obtain quantitative measurements of relative protein abundance from MALDI-ToF mass spectra.”

Heinrich Röder, CTO, points out that the MALDI-ToF measurements

  • are combined with clinical outcome data using
  • modern learning theory techniques
  • to define specific disease states
  • based on a patient’s serum protein content.

The clinical utility of the identification of these disease states can be investigated through a retrospective analysis of differing sample sets. For example, Biodesix clinically validated its first commercialized serum proteomic test, VeriStrat®, in 85 different retrospective sample sets.

Röder adds that “It is becoming increasingly clear that

  • the patients whose serum is characterized as VeriStrat Poor show
  • consistently poor outcomes irrespective of
  1. tumor type,
  2. histology, or
  3. molecular tumor characteristics.”

MALDI-ToF mass spectrometry, in its standard implementation,

  • allows for the observation of around 100 mostly high-abundant serum proteins.

Further, “while this does not limit the usefulness of tests developed from differential expression of these proteins,

  • the discovery potential would be greatly enhanced
  • if we could probe deeper into the proteome
  • while not giving up the advantages of the MALDI-ToF approach.”

Biodesix reports that its new MALDI approach, Deep MALDI™, can perform

  • simultaneous quantitative measurement of more than 1,000 serum protein features (or peaks) from 10 µL of serum in a high-throughput manner.
  • It increases the observable signal-to-noise ratio from a few hundred to over 50,000,
  • resulting in the observation of many lower-abundance serum proteins.
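The underlying signal-averaging principle—noise shrinks roughly with the square root of the number of accumulated acquisitions—can be illustrated with a small simulation. This is a generic sketch of spectrum averaging, not Biodesix’s proprietary Deep MALDI processing:

```python
import random
import statistics

def empirical_snr(n_shots, true_peak=100.0, noise_sd=10.0, trials=2000, seed=7):
    """Average n_shots noisy acquisitions of a fixed peak and report the
    empirical signal-to-noise ratio of the averaged result.
    Noise is additive Gaussian; the sqrt(n_shots) gain is generic."""
    rng = random.Random(seed)
    averages = []
    for _ in range(trials):
        shots = [true_peak + rng.gauss(0.0, noise_sd) for _ in range(n_shots)]
        averages.append(sum(shots) / n_shots)
    return true_peak / statistics.pstdev(averages)

snr_single = empirical_snr(1)    # ~10 (peak 100 over noise sd 10)
snr_pooled = empirical_snr(100)  # ~100, i.e. ~sqrt(100)-fold better
```

Raising SNR by two orders of magnitude, as the text describes, therefore implies accumulating on the order of 10,000-fold more signal—which is why lower-abundance proteins rise above the noise floor.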

Breast cancer is now considered a collection of many complexes of symptoms and signatures; the dominant subtypes are labeled Luminal A, Luminal B, Her2, and Basal, each suggesting a different prognosis. Yet

  • these labels are considered too simplistic for understanding and managing a woman’s cancer.

Studies published in the past year have looked at

  1. somatic mutations,
  2. gene copy number aberrations,
  3. gene expression abnormalities,
  4. protein and miRNA expression, and
  5. DNA methylation,

coming up with a list of significantly mutated genes—hot spots—in different categories of breast cancers. Targeting these will inevitably be the focus of much coming research.

“We’ve been taking these large trials and profiling these on a variety of array or sequence platforms. We think we’ll get

  1. prognostic drivers
  2. predictive markers for taxanes and
  3. monoclonal antibodies and
  4. tamoxifen and aromatase inhibitors,”
    explained Brian Leyland-Jones, Ph.D., director of Edith Sanford Breast Cancer Research. “We will end up with 20–40 different diseases, maybe more.”

Edith Sanford Breast Cancer Research is undertaking a pilot study in collaboration with The Scripps Research Institute, using a variety of tests on 25 patients to see how the information they provide complements each other, the overall flow, and the time required to get and compile results.

Laser-captured tumor samples will be subjected to low passage whole-genome, exome, and RNA sequencing (with targeted resequencing done in parallel), and reverse-phase protein and phosphorylation arrays, with circulating nucleic acids and circulating tumor cells being queried as well. “After that we hope to do a 100- or 150-patient trial when we have some idea of the best techniques,” he said.

Dr. Leyland-Jones predicted that ultimately most tumors will be found

  • to have multiple drivers,
  • with most patients receiving a combination of two, three, or perhaps four different targeted therapies.

Reduce to Practice

According to Randox, the Evidence Investigator is a sophisticated semi-automated biochip system designed for research, clinical, forensic, and veterinary applications.

Once biomarkers that may have an impact on therapy are discovered, it is not always routine to get them into clinical practice. Leaving aside regulatory, financial, intellectual property, and cultural issues, developing a diagnostic based on a biomarker often requires expertise or patience that its discoverer may not possess.

Andrew Gribben is a clinical assay and development scientist at Randox Laboratories, based in Northern Ireland, U.K. The company utilizes academic and industrial collaborators together with in-house discovery platforms to identify biomarkers that are

  • augmented or diminished in a particular pathology
  • relative to appropriate control populations.

Biomarkers can be developed to be run individually or

  • combined into panels of immunoassays on its multiplex biochip array technology.

Specificity can also be gained—or lost—by the affinity of reagents in an assay. The diagnostic potential of heart-type fatty acid binding protein (H-FABP), abundantly expressed in human myocardial cells, was recognized by Jan Glatz of Maastricht University, The Netherlands, back in 1988. Levels rise quickly within 30 minutes after a myocardial infarction, peak at 6–8 hours, and return to normal within 24–30 hours. Yet at the time it was not known that H-FABP was a member of a multiprotein family with which the polyclonal antibodies being used in development of an assay were cross-reacting, Gribben related.

Randox developed monoclonal antibodies specific to H-FABP, funded trials investigating its use alone and multiplexed with cardiac biomarker assays, and, more than 20 years after the biomarker’s potential was recognized, released a validated assay in 2011 for H-FABP as a biomarker for early detection of acute myocardial infarction.

Ultrasensitive Immunoassays for Biomarker Development

Research has shown that detection and monitoring of biomarker concentrations can provide

  • insights into disease risk and progression.

Cytokines have become attractive biomarkers and candidates

  • for targeted therapies for a number of autoimmune diseases, including rheumatoid arthritis (RA), Crohn’s disease, and psoriasis, among others.

However, due to the low abundance of circulating cytokines, such as IL-17A, obtaining robust measurements in clinical samples has been difficult.

Singulex reports that its digital single-molecule counting technology provides

  • increased precision and detection sensitivity over traditional ELISA techniques,
  • helping to shed light on biomarker verification and validation programs.

The company’s Erenna® immunoassay system, which includes optimized immunoassays, offers lower limits of quantification (LLoQ) at femtogram-per-mL levels—even in healthy populations—an improvement of 1–3 fold over standard ELISAs or any conventional technology, with a dynamic range of up to 4 logs, according to a Singulex official, who adds that

  • this sensitivity improvement helps minimize undetectable samples that
  • could otherwise delay or derail clinical studies.
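As a hedged illustration of the general digital-counting idea (generic digital-assay statistics, not Singulex’s specific Erenna algorithm): when single molecules are scored as discrete positive events, Poisson statistics recover the mean molecular load even when some partitions or detection windows hold more than one molecule:

```python
import math

def poisson_corrected_load(n_positive, n_total):
    """Estimate the mean number of target molecules per partition (lambda)
    from the fraction of partitions/events scored positive.
    Generic digital-assay math, not any vendor's proprietary algorithm."""
    frac = n_positive / n_total
    if frac >= 1.0:
        raise ValueError("all partitions positive: above the quantifiable range")
    return -math.log(1.0 - frac)

lam_low = poisson_corrected_load(50, 10000)     # dilute regime: lambda ~ fraction positive
lam_high = poisson_corrected_load(6321, 10000)  # ~1.0, since 1 - exp(-1) = 0.6321
```

In the dilute regime the count is directly proportional to concentration, which is what gives digital counting its precision advantage over analog ELISA readouts at femtogram levels.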

The official also explains that the Singulex solution includes an array of products and services that are being applied to a number of programs and have enabled the development of clinically relevant biomarkers, allowing translation from discovery to the clinic.

In a poster entitled “Advanced Single Molecule Detection: Accelerating Biomarker Development Utilizing Cytokines through Ultrasensitive Immunoassays,” a case study was presented of work performed by Jeff Greenberg of NYU to show how the use of the Erenna system can provide insights toward

  • improving the clinical utility of biomarkers and
  • accelerating the development of novel therapies for treating inflammatory diseases.

A panel of inflammatory biomarkers was examined in DMARD (disease modifying antirheumatic drugs)-naïve RA (rheumatoid arthritis) vs. knee OA (osteoarthritis) patient cohorts. Markers that exhibited significant differences in plasma concentrations between the two cohorts included

  • CRP, IL-6R alpha, IL-6, IL-1RA, VEGF, TNF-RII, IL-17A, IL-17F, and IL-17A/F.

Among the three tested isoforms of IL-17,

  • the magnitude of elevation for IL-17F in RA patients was the highest.

“Singulex provides high-resolution monitoring of baseline IL-17A concentrations that are present at low levels,” concluded the researchers. “The technology also enabled quantification of other IL-17 isoforms in RA patients, which have not been well characterized before.”

The Singulex Erenna System has also been applied to cardiovascular disease research, for which its

  • cardiac troponin I (cTnI) digital assay can be used to measure circulating
  • levels of cTnI undetectable by other commercial assays.

Recently presented data from Brigham and Women’s Hospital and the TIMI-22 study showed that

  • using the Singulex test to serially monitor cTnI helps
  • stratify risk in post-acute coronary syndrome patients and
  • can identify patients with elevated cTnI
  • who have the most to gain from intensive vs. moderate-dose statin therapy,

according to the scientists involved in the research.

The study poster, “Prognostic Performance of Serial High Sensitivity Cardiac Troponin Determination in Stable Ischemic Heart Disease: Analysis From PROVE IT-TIMI 22,” was presented at the 2013 American College of Cardiology (ACC) Annual Scientific Session & Expo by R. O’Malley et al.

Biomarkers Changing Clinical Medicine

Better Diagnosis, Prognosis, and Drug Targeting Are among Potential Benefits

John Morrow Jr., Ph.D.

Researchers at EMD Chemicals are developing biomarker immunoassays

  • to monitor drug-induced toxicity including kidney damage.

The pace of biomarker development is accelerating as investigators report new studies on cancer, diabetes, Alzheimer disease, and other conditions in which the evaluation and isolation of workable markers is prominently featured.

Wei Zheng, Ph.D., leader of the R&D immunoassay group at EMD Chemicals, is overseeing a program to develop biomarker immunoassays to

  • monitor drug-induced toxicity, including kidney damage.

“One of the principal reasons for drugs failing during development is organ toxicity,” says Dr. Zheng.
“Proteins liberated into the serum and urine can serve as biomarkers of adverse response to drugs, as well as disease states.”

Through collaborative programs with Rules-Based Medicine (RBM), the EMD group has released panels for the profiling of human renal impairment and renal toxicity. These urinary biomarker–based products fit the FDA and EMEA guidelines for assessment of drug-induced kidney damage in rats.

The group recently performed a screen for potential protein biomarkers in relation to

  • kidney toxicity/damage on a set of urine and plasma samples
  • from patients with documented renal damage.

Additionally, Dr. Zheng is directing efforts to move forward with the multiplexed analysis of

  • organ and cellular toxicity.

Diseases thought to involve compromised oxidative phosphorylation include

  • diabetes, Parkinson and Alzheimer diseases, cancer, and the aging process itself.

Good biomarkers allow Dr. Zheng to follow the mantra, “fail early, fail fast.” With robust, multiplexible biomarkers, EMD can detect bad drugs early and kill them before they move into costly large animal studies and clinical trials. “Recognizing the severe liability that toxicity presents, we can modify the structure of the candidate molecule and then rapidly reassess its performance.”

Scientists at Oncogene Science, a division of Siemens Healthcare Diagnostics, are also focused on biomarkers. “We are working on a number of antibody-based tests for various cancers, including a test for the Ca-9 CAIX protein, also referred to as carbonic anhydrase IX,” Walter Carney, Ph.D., head of the division, states.

CAIX is a transmembrane protein that is

  • overexpressed in a number of cancers, and, like Herceptin and the Her-2 gene,
  • can serve as an effective and specific marker for both diagnostic and therapeutic purposes.
  • It is liberated into the circulation in proportion to the tumor burden.

Dr. Carney and his colleagues are evaluating patients after tumor removal for the presence of the Ca-9 CAIX protein. If

  • the levels of the protein in serum increase over time,
  • this suggests that not all the tumor cells were removed or that the tumor has metastasized.

Dr. Carney and his team have developed both an immuno-histochemistry and an ELISA test that could be used as companion diagnostics in clinical trials of CAIX-targeted drugs.

The ELISA for the Ca-9 CAIX protein will be used in conjunction with Wilex’s Rencarex®, which is currently in a

  • Phase III trial as an adjuvant therapy for non-metastatic clear cell renal cancer.

Additionally, Oncogene Science has in its portfolio an FDA-approved test for the Her-2 marker. Originally approved for Her-2/Neu-positive breast cancer, the test has had its indications expanded over time; it was approved

  • for use in gastric cancer last year.

Her-2 is normally present on breast epithelia but

  • overexpressed in some breast cancer tumors.

“Our products are designed to be used in conjunction with targeted therapies,” says Dr. Carney. “We are working with companies that are developing technology around proteins that are

  • overexpressed in cancerous tissues and can be both diagnostic and therapeutic targets.”

The long-term goal of these studies is to develop individualized therapies, tailored for the patient. Since the therapies are expensive, accurate diagnostics are critical to avoid wasting resources on patients who clearly will not respond (or could be harmed) by the particular drug.

“At this time the rate of response to antibody-based therapies may be very poor, as

  • they are often employed late in the course of the disease, and patients are in such a debilitated state
  • that they lack the capacity to react positively to the treatment,” Dr. Carney explains.

Nanoscale Real-Time Proteomics

Stanford University School of Medicine researchers, working with Cell BioSciences, have developed a

  • nanofluidic proteomic immunoassay that measures protein charge,
  • similar to immunoblots, mass spectrometry, or flow cytometry.
  • Unlike these platforms, this approach can measure the amount of individual isoforms,
  • specifically, phosphorylated molecules.

“We have developed a nanoscale device for protein measurement, which I believe could be useful for clinical analysis,” says Dean W. Felsher, M.D., Ph.D., associate professor at Stanford University School of Medicine.

Critical oncogenic transformations involving

  • the activation of the signal-related kinases ERK-1 and ERK-2 can now be followed with ease.

“The fact that we measure nanoquantities with accuracy means that

  • we can interrogate proteomic profiles in clinical patients,

by drawing tiny needle aspirates from tumors over the course of time,” he explains.

“This allows us to observe the evolution of tumor cells and

  • their response to therapy
  • from a baseline of the normal tissue as a standard of comparison.”

According to Dr. Felsher, 20 cells is a large enough sample to obtain a detailed description. The technology is easy to automate, which allows

  • the inclusion of hundreds of assays.

Contrasting this technology platform with proteomic analysis using microarrays, Dr. Felsher notes that the latter is not yet workable for revealing reliable markers.

Dr. Felsher and his group published a description of this technology in Nature Medicine. “We demonstrated that we could take a set of human lymphomas and distinguish them from both normal tissue and other tumor types. We can

  • quantify changes in total protein, protein activation, and relative abundance of specific phospho-isoforms
  • from leukemia and lymphoma patients receiving targeted therapy.

Even with very small numbers of cells, we are able to show that the results are consistent, and

  • our sample is a random profile of the tumor.”

Splice Variant Peptides

“Aberrations in alternative splicing may generate

  • much of the variation we see in cancer cells,”

says Gilbert Omenn, Ph.D., director of the center for computational medicine and bioinformatics at the University of Michigan School of Medicine. Dr. Omenn and his colleague, Rajasree Menon, are

  • using this variability as a key to new biomarker identification.

It is becoming evident that splice variants play a significant role in the properties of cancer cells, including

  • initiation, progression, cell motility, invasiveness, and metastasis.

Alternative splicing occurs through multiple mechanisms

  • when the exons or coding regions of the DNA are transcribed into mRNA,
  • generating alternative initiation sites and connecting exons in different combinations in the protein products.

Their translation into protein can result in numerous protein isoforms, and

  • these isoforms may reflect a diseased or cancerous state.

Regulatory elements within the DNA are responsible for selecting different alternatives; thus

  • the splice variants are tempting targets for exploitation as biomarkers.
Analyses of the splice-site mutation

Despite the many questions raised by these observations, splice variation in tumor material has not been widely studied. Cancer cells are known for their tremendous variability, which allows them to

  • grow rapidly, metastasize, and develop resistance to anticancer drugs.

Dr. Omenn and his collaborators used

  • mass spec data to interrogate a custom-built database of all potential mRNA sequences
  • to find alternative splice variants.
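The database-interrogation step can be caricatured in a few lines: digest each candidate isoform in silico and keep the peptides observed only in the variant, which are the junction-spanning candidates. All sequences below are toy examples, not real isoforms:

```python
import re

def tryptic_peptides(protein, min_len=6):
    """Naive in-silico trypsin digest: cleave after K or R unless followed
    by P; keep peptides long enough to be informative in MS."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if len(p) >= min_len]

# Toy sequences (hypothetical): the variant skips "exon 2",
# creating a junction peptide absent from the canonical isoform.
exon1, exon2, exon3 = "MKTAYIAKQISF", "VGHELSDRWQNE", "PLGLIEVQAK"
canonical = exon1 + exon2 + exon3
variant = exon1 + exon3

variant_only = set(tryptic_peptides(variant)) - set(tryptic_peptides(canonical))
# variant_only contains the junction-spanning peptide "QISFPLGLIEVQAK"
```

Matching observed MS/MS spectra against such variant-only peptides is what lets a custom splice-variant database flag tumor-specific isoforms that a canonical protein database would silently miss.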

When they compared normal and malignant mammary gland tissue from a mouse model of Her2/Neu human breast cancers, they identified a vast number (608) of splice variant proteins, of which

  • peptides from 216 were found only in the tumor sample.

“These novel and known alternative splice isoforms

  • are detectable both in tumor specimens and in plasma and
  • represent potential biomarker candidates,” Dr. Omenn adds.

Dr. Omenn’s observations and those of his colleague Lewis Cantley, Ph.D., have also

  • shed light on the origins of the classic Warburg effect,
  • the shift to anaerobic glycolysis in tumor cells.

The novel splice variant M2 of muscle pyruvate kinase

  • is observed in embryonic and tumor tissue.

It is associated with this shift, the result of

  • the expression of a peptide splice variant sequence.

It is remarkable how many different areas of the life sciences are tied into the phenomenon of splice variation. The changes in the genetic material can be much greater than point mutations, which have been traditionally considered to be the prime source of genetic variability.

“We now have powerful methods available to uncover a whole new category of variation,” Dr. Omenn says. “High-throughput RNA sequencing and proteomics will be complementary in discovery studies of splice variants.”

Splice variation may play an important role in rapid evolutionary changes, of the sort discussed by Susumu Ohno and Stephen J. Gould decades ago. They, and other evolutionary biologists, argued that

  • gene duplication, combined with rapid variability, could fuel major evolutionary jumps.

At the time, the molecular mechanisms of variation were poorly understood, but today

  • the tools are available to rigorously evaluate the role of
  • splice variation and other contributors to evolutionary change.

“Biomarkers derived from studies of splice variants, could, in the future, be exploited

  • both for diagnosis and prognosis and
  • for drug targeting of biological networks,
  • in situations such as the Her-2/Neu breast cancers,” Dr. Omenn says.

Aminopeptidase Activities

“By correlating the proteolytic patterns with disease groups and controls, we have shown that

  • exopeptidase activities contribute to the generation of not only cancer-specific
  • but also cancer type-specific serum peptides,”

according to Paul Tempst, Ph.D., professor and director of the Protein Center at the Memorial Sloan-Kettering Cancer Center.

“So there is a direct link between peptide marker profiles of disease and differential protease activity.” For this reason Dr. Tempst argues that “the patterns we describe may have value as surrogate markers for detection and classification of cancer.”

To investigate this avenue, Dr. Tempst and his colleagues have followed

  • the relationship between exopeptidase activities and metastatic disease.

“We monitored controlled, de novo peptide breakdown in large numbers of biological samples using mass spectrometry, with relative quantitation of the metabolites,” Dr. Tempst explains. This entailed the use of magnetic, reverse-phase beads for analyte capture and a MALDI-TOF MS read-out.
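One minimal way to express the relative quantitation such a workflow yields is to normalize the parent peptide and its breakdown-ladder peaks to fractions of total signal, so degradation profiles can be compared across samples. The peptide sequences and intensities below are hypothetical, for illustration only:

```python
def relative_metabolite_profile(intensities):
    """Normalize MALDI-TOF peak intensities of a parent peptide and its
    exopeptidase breakdown ladder to fractions of the total signal."""
    total = sum(intensities.values())
    return {peptide: value / total for peptide, value in intensities.items()}

# Hypothetical ladder: full-length marker peptide plus two
# N-terminally trimmed products (illustrative values).
control = relative_metabolite_profile({"FLDIPK": 900.0, "LDIPK": 80.0, "DIPK": 20.0})
patient = relative_metabolite_profile({"FLDIPK": 300.0, "LDIPK": 400.0, "DIPK": 300.0})
# A larger trimmed-product fraction in `patient` points to higher
# aminopeptidase activity, even if total protein levels are unchanged.
```

This is exactly the situation Dr. Tempst describes: the quantitative level of the parent protein may be identical between groups while the activity signature, read from the product ladder, differs sharply.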

“In biomarker discovery programs, functional proteomics is usually not pursued,” says Dr. Tempst. “For putative biomarkers, one may observe no difference in quantitative levels of proteins, while at the same time, there may be substantial differences in enzymatic activity.”

In a preliminary prostate cancer study, the team found a significant difference

  • in activity levels of exopeptidases in serum from patients with metastatic prostate cancer
  • as compared to primary tumor-bearing individuals and normal healthy controls.

However, there were no differences in amounts of the target protein, and this potential biomarker would have been missed if quantitative levels of protein had been the only criterion of selection.

It is frequently stated that “practical fusion energy is 30 years in the future and always will be.” The same might be said of functional, practical biomarkers that can pass muster with the FDA. But splice variation represents a new handle on this vexing problem. It appears that we are seeing the emergence of a new approach that may finally yield definitive diagnostic tests, detectable in serum and urine samples.

Part 7. Epigenetics and Drug Metabolism

DNA Methylation Rules: Studying Epigenetics with New Tools

The tools to unravel the epigenetic control mechanisms that influence how cells control access of transcriptional proteins to DNA are just beginning to emerge.

Patricia Fitzpatrick Dimond, Ph.D.

http://www.genengnews.com/media/images/AnalysisAndInsight/Feb7_2013_24454248_GreenPurpleDNA_EpigeneticsToolsII3576166141.jpg

New tools may help move the field of epigenetic analysis forward and potentially unveil novel biomarkers for cellular development, differentiation, and disease.

DNA sequencing has had the power of technology behind it as novel platforms to produce more sequencing faster and at lower cost have been introduced. But the tools to unravel the epigenetic control mechanisms that influence how cells control access of transcriptional proteins to DNA are just beginning to emerge.

Among these mechanisms, DNA methylation, or the enzymatically mediated addition of a methyl group to cytosine or adenine residues,

  • serves as an inherited epigenetic modification that
  • stably modifies gene expression in dividing cells.

The unique methylomes are largely maintained in differentiated cell types, making them critical to understanding the differentiation potential of the cell.

In the DNA methylation process, cytosine residues in the genome are enzymatically modified to 5-methylcytosine,

  • which participates in transcriptional repression of genes during development and disease progression.

5-methylcytosine can be further enzymatically modified to 5-hydroxymethylcytosine by the TET family of methylcytosine dioxygenases. DNA methylation affects gene transcription by physically

  • interfering with the binding of proteins involved in gene transcription.

Methylated DNA may be bound by methyl-CpG-binding domain proteins (MBDs) that can

  • then recruit additional proteins. Some of these include histone deacetylases and other chromatin remodeling proteins that modify histones, thereby
  • forming compact, inactive chromatin, or heterochromatin.

While DNA methylation doesn’t change the genetic code,

  • it influences chromosomal stability and gene expression.

Epigenetics and Cancer Biomarkers

multistage chemical carcinogenesis

multistage chemical carcinogenesis

And because of the increasing recognition that DNA methylation changes are involved in human cancers, scientists have suggested that these epigenetic markers may provide biological markers for cancer cells, and eventually point toward new diagnostic and therapeutic targets. Cancer cell genomes display genome-wide abnormalities in DNA methylation patterns,

  • some of which are oncogenic and contribute to genome instability.

In particular, de novo methylation of tumor suppressor gene promoters

  • occurs frequently in cancers, thereby silencing them and promoting transformation.

Cytosine hydroxymethylation (5-hydroxymethylcytosine, or 5hmC), the aforementioned DNA modification resulting from the enzymatic conversion of 5mC into 5-hydroxymethylcytosine by the TET family of oxygenases, has been identified

  • as another key epigenetic modification marking genes important for
  • pluripotency in embryonic stem cells (ES), as well as in cancer cells.

The base 5-hydroxymethylcytosine was recently identified as an oxidation product of 5-methylcytosine in mammalian DNA. In 2011, using sensitive and quantitative methods to assess levels of 5-hydroxymethyl-2′-deoxycytidine (5hmdC) and 5-methyl-2′-deoxycytidine (5mdC) in genomic DNA, scientists at the Department of Cancer Biology, Beckman Research Institute of the City of Hope, Duarte, California, investigated

  • whether levels of 5hmC can distinguish normal tissue from tumor tissue.

They showed that in squamous cell lung cancers, levels of 5hmdC showed

  • up to five-fold reduction compared with normal lung tissue.

In brain tumors, 5hmdC showed an even more drastic reduction,

  • with levels up to more than 30-fold lower than in normal brain,
  • but 5hmdC levels were independent of mutations in isocitrate dehydrogenase-1 (IDH1).

Immunohistochemical analysis indicated that 5hmC is “remarkably depleted” in many types of human cancer.

  • There was an inverse relationship between 5hmC levels and cell proliferation, with a lack of 5hmC in proliferating cells.

Their data suggest that 5hmdC is strongly depleted in human malignant tumors,

  • a finding that adds another layer of complexity to the aberrant epigenome found in cancer tissue.

In addition, a lack of 5hmC may become a useful biomarker for cancer diagnosis.

Enzymatic Mapping

But according to New England Biolabs’ Sriharsa Pradhan, Ph.D., methods for distinguishing 5mC from 5hmC and analyzing and quantitating the cell’s entire “methylome” and “hydroxymethylome” remain less than optimal.

The protocol for bisulphite conversion to detect methylation remains the “gold standard” for DNA methylation analysis. This method is generally followed by PCR analysis for single nucleotide resolution to determine methylation across the DNA molecule. According to Dr. Pradhan, “bisulphite conversion does not distinguish 5mC and 5hmC.

Recently we found an enzyme, a unique DNA modification-dependent restriction endonuclease, AbaSI, which can

  • decode the hydroxymethylome of the mammalian genome.

You can easily find out where the hydroxymethyl regions are.”

AbaSI recognizes glucosylated 5-hydroxymethylcytosine (5gmC) with high specificity when compared to 5mC and 5hmC, and

  • cleaves at a narrow range of distances away from the recognized modified cytosine.

By mapping the cleaved ends, the exact 5hmC location can, the investigators reported, be determined.
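
The mapping logic can be sketched as follows: if the enzyme cuts a short, roughly fixed distance from the glucosylated 5hmC, each observed cleavage end constrains the modified base to a small upstream window. The 11–13 nt cut-distance window, the toy reference sequence, and the single-strand treatment below are illustrative assumptions, not the enzyme's published geometry.

```python
# Sketch of cleavage-end mapping: each cut position constrains the modified
# cytosine to a small window upstream (top strand only; 0-based positions).
# The cut-distance range is an assumption for illustration.

def candidate_5hmc_sites(reference, cleavage_positions, cut_range=(11, 13)):
    """Return candidate 5hmC positions: cytosines lying cut_range nt
    upstream of an observed cleavage position."""
    lo, hi = cut_range
    candidates = set()
    for cut in cleavage_positions:
        for d in range(lo, hi + 1):
            pos = cut - d
            if 0 <= pos < len(reference) and reference[pos] == "C":
                candidates.add(pos)
    return sorted(candidates)

ref = "AACGTTACGGTCAGTACCGTAGCT"   # hypothetical reference fragment
print(candidate_5hmc_sites(ref, [18]))  # [7]
```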

Dr. Pradhan and his colleagues at NEB; the Department of Biochemistry, Emory University School of Medicine, Atlanta; and the New England Biolabs Shanghai R&D Center described use of this technique in a paper published in Cell Reports this month, in which they described high-resolution enzymatic mapping of genomic hydroxymethylcytosine in mouse ES cells.

In the current report, the authors used the enzyme technology for the genome-wide high-resolution hydroxymethylome, describing simple library construction even with a low amount of input DNA (50 ng) and the ability to readily detect 5hmC sites with low occupancy.

As a result of their studies, they propose that

factors affecting the local 5mC accessibility to TET enzymes play important roles in the 5hmC deposition

  • including chromatin compaction, nucleosome positioning, or TF binding.
  • The regularly oscillating 5hmC profile around CTCF-binding sites suggests that 5hmC ‘‘writers’’ may be sensitive to the nucleosomal environment.
  • Some transiently stable 5hmCs may indicate a poised epigenetic state or demethylation intermediate, whereas others may suggest a locally accessible chromosomal environment for the TET enzymatic apparatus.

“We were able to do complete mapping in mouse embryonic cells and are pleased about what this enzyme can do and how it works,” Dr. Pradhan said.

The availability of novel tools that make analysis of the methylome and hydroxymethylome more accessible will move the field of epigenetic analysis forward and potentially yield novel biomarkers for cellular development, differentiation, and disease.

Patricia Fitzpatrick Dimond, Ph.D. (pdimond@genengnews.com), is technical editor at Genetic Engineering & Biotechnology News.

Epigenetic Regulation of ADME-Related Genes: Focus on Drug Metabolism and Transport

Published: Sep 23, 2013

Epigenetic regulation of gene expression refers to heritable factors that are functionally relevant genomic modifications but that do not involve changes in DNA sequence.

Examples of such modifications include

  • DNA methylation, histone modifications, noncoding RNAs, and chromatin architecture.

Epigenetic modifications are crucial for

packaging and interpreting the genome, and they have fundamental functions in regulating gene expression and activity under the influence of physiologic and environmental factors.

In this issue of Drug Metabolism and Disposition, a series of articles is presented to demonstrate the role of epigenetic factors in regulating

  • the expression of genes involved in drug absorption, distribution, metabolism, and excretion in organ development, tissue-specific gene expression, sexual dimorphism, and in the adaptive response to xenobiotic exposure, both therapeutic and toxic.

The articles also demonstrate that, in addition to genetic polymorphisms, epigenetics may contribute to wide inter-individual variations in drug metabolism and transport. Identification of functionally relevant epigenetic biomarkers in human specimens has the potential to improve prediction of drug responses based on patients’ epigenetic profiles.

http://www.technologynetworks.com/Metabolomics/news.aspx?ID=157804

This study is published online in Drug Metabolism and Disposition

Part 8.  Pictorial Maps

 Prediction of intracellular metabolic states from extracellular metabolomic data

MK Aurich, G Paglia, O Rolfsson, S Hrafnsdottir, M Magnusdottir, MM Stefaniak, BØ Palsson, RMT Fleming & I Thiele

Metabolomics Aug 14, 2014;

http://dx.doi.org/10.1007/s11306-014-0721-3

http://link.springer.com/article/10.1007/s11306-014-0721-3/fulltext.html#Sec1

http://link.springer.com/static-content/images/404/art%253A10.1007%252Fs11306-014-0721-3/MediaObjects/11306_2014_721_Fig1_HTML.gif

Metabolic models can provide a mechanistic framework

  • to analyze information-rich omics data sets, and are
  • increasingly being used to investigate metabolic alterations in human diseases.

An expression of the altered metabolic pathway utilization is the selection of metabolites consumed and released by cells. However, methods for the

  • inference of intracellular metabolic states from extracellular measurements in the context of metabolic models remain underdeveloped compared to methods for other omics data.

Herein, we describe a workflow for such an integrative analysis

  • emphasizing extracellular metabolomic data.

We demonstrate,

  • using the lymphoblastic leukemia cell lines Molt-4 and CCRF-CEM,

how our methods can reveal differences in cell metabolism. Our models explain metabolite uptake and secretion by predicting

  • a more glycolytic phenotype for the CCRF-CEM model and
  • a more oxidative phenotype for the Molt-4 model,
  • which was supported by our experimental data.

Gene expression analysis revealed altered expression of gene products at

  • key regulatory steps in those central metabolic pathways, and

literature query emphasized the role of these genes in cancer metabolism.

Moreover, in silico gene knock-outs identified unique

  •  control points for each cell line model, e.g., phosphoglycerate dehydrogenase for the Molt-4 model.

Thus, our workflow is well suited to the characterization of cellular metabolic traits based on

  • extracellular metabolomic data, and it allows the integration of multiple omics data sets
  • into a cohesive picture based on a defined model context.

Keywords: Constraint-based modeling · Metabolomics · Multi-omics · Metabolic network · Transcriptomics

1 Introduction

Modern high-throughput techniques have increased the pace of biological data generation. Also referred to as the ‘‘omics avalanche’’, this wealth of data provides great opportunities for metabolic discovery. Omics data sets

  • contain a snapshot of almost the entire repertoire of mRNA, protein, or metabolites at a given time point or

under a particular set of experimental conditions. Because of the high complexity of the data sets,

  • computational modeling is essential for their integrative analysis.

Currently, such data analysis is a bottleneck in the research process and methods are needed to facilitate the use of these data sets, e.g., through meta-analysis of data available in public databases [e.g., the human protein atlas (Uhlen et al. 2010) or the gene expression omnibus (Barrett et al.  2011)], and to increase the accessibility of valuable information for the biomedical research community.

Constraint-based modeling and analysis (COBRA) is

  • a computational approach that has been successfully used to
  • investigate and engineer microbial metabolism through the prediction of steady-states (Durot et al. 2009).

The basis of COBRA is network reconstruction: networks are assembled in a bottom-up fashion based on

  • genomic data and extensive
  • organism-specific information from the literature.

Metabolic reconstructions capture information on the

  • known biochemical transformations taking place in a target organism
  • to generate a biochemical, genetic and genomic knowledge base (Reed et al. 2006).

Once assembled, a

  • metabolic reconstruction can be converted into a mathematical model (Thiele and Palsson 2010), and
  • model properties can be interrogated using a great variety of methods (Schellenberger et al. 2011).

The ability of COBRA models

  • to represent genotype–phenotype and environment–phenotype relationships arises
  • through the imposition of constraints, which
  • limit the system to a subset of possible network states (Lewis et al. 2012).

Currently, COBRA models exist for more than 100 organisms, including humans (Duarte et al. 2007; Thiele et al. 2013).
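
The constraint-based idea can be made concrete with a minimal flux balance analysis: impose steady state (S·v = 0) and capacity bounds, then maximize an objective flux by linear programming. The three-reaction network below is a hypothetical toy, not a real reconstruction, and the solver choice is just one convenient option.

```python
# Minimal FBA sketch in the COBRA spirit: steady-state and capacity
# constraints, with a chosen objective flux maximized by linear programming.
import numpy as np
from scipy.optimize import linprog

# Toy network -- R1: uptake (-> A), R2: conversion (A -> B), R3: biomass (B ->)
S = np.array([
    [1, -1,  0],   # metabolite A balance
    [0,  1, -1],   # metabolite B balance
])
c = [0, 0, -1]                             # linprog minimizes, so negate objective
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)     # optimal flux distribution
print(-res.fun)  # maximal biomass flux, ≈ 10.0 (limited by the uptake bound)
```

The uptake bound acts exactly like the environment-phenotype constraints described above: tightening it to 5 would halve the predicted biomass flux.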

Since the first human metabolic reconstruction was described [Recon 1 (Duarte et al. 2007)],

  • biomedical applications of COBRA have increased (Bordbar and Palsson 2012).

One way to contextualize networks is to

  • define their system boundaries according to the metabolic states of the system, e.g., disease or dietary regimes.

The consequences of the applied constraints can

  • then be assessed for the entire network (Sahoo and Thiele 2013).

Additionally, omics data sets have frequently been used

  • to generate cell-type or condition-specific metabolic models.

Models exist for specific cell types, such as

  1. enterocytes (Sahoo and Thiele 2013),
  2. macrophages (Bordbar et al. 2010),
  3. adipocytes (Mardinoglu et al. 2013),
  4. even multi-cell assemblies that represent the interactions of brain cells (Lewis et al. 2010).

All of these cell type specific models, except the enterocyte reconstruction,

  • were generated based on omics data sets.

Cell-type-specific models have been used to study

  • diverse human disease conditions.

For example, an adipocyte model was generated using

  • transcriptomic, proteomic, and metabolomics data.

This model was subsequently used to investigate metabolic alterations in adipocytes

  • that would allow for the stratification of obese patients (Mardinoglu et al. 2013).

The biomedical applications of COBRA have included

  1. cancer metabolism (Jerby and Ruppin, 2012).
  2. predicting drug targets (Folger et al. 2011; Jerby et al. 2012).

A cancer model was generated using

  • multiple gene expression data sets and subsequently used
  • to predict synthetic lethal gene pairs as potential drug targets
  • selective for the cancer model, but non-toxic to the global model (Recon 1),

a consequence of the reduced redundancy in the cancer specific model (Folger et al. 2011).

In a follow-up study, lethal synergy between FH and enzymes of the heme metabolic pathway

  • was experimentally validated, resolving the mechanism by which FH-deficient cells,
    e.g., renal-cell cancer cells, survive a non-functional TCA cycle (Frezza et al. 2011).

Contextualized models, which contain only the subset of reactions active in a particular tissue (or cell-) type,

  • can be generated in different ways (Becker and Palsson, 2008; Jerby et al. 2010).

However, the existing algorithms mainly consider

  • gene expression and proteomic data
  • to define the reaction sets that comprise the contextualized metabolic models.

These subsets of reactions are usually defined

  • based on the expression or absence of expression of the genes or proteins (present and absent calls),
  • or inferred from expression values or differential gene expression.
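
A simplified version of this present/absent pruning step might look like the following, where gene-reaction associations are encoded as OR-of-AND lists standing in for real GPR boolean rules. The reaction names and gene sets are illustrative; the NDUFB3 case echoes the complex I observation reported for the cell line models in this study.

```python
# Sketch of contextualizing a model from present/absent expression calls:
# a reaction is kept if its (simplified) gene association is satisfied.
# gpr: list of isozyme alternatives (OR), each a list of required subunits (AND).

def keep_reaction(gpr, expressed):
    """Empty gpr means no gene requirement, so the reaction is kept."""
    if not gpr:
        return True
    return any(all(g in expressed for g in complex_) for complex_ in gpr)

model = {
    "HEX1":    [["GCK"], ["HK1"]],         # either isozyme suffices
    "SUCD1":   [["SDHA", "SDHB"]],         # both subunits required
    "NADH_DH": [["NDUFB3", "NDUFS1"]],     # complex I requires NDUFB3
}
expressed = {"GCK", "SDHA", "SDHB", "NDUFS1"}  # hypothetical present calls

context = [r for r, gpr in model.items() if keep_reaction(gpr, expressed)]
print(context)  # ['HEX1', 'SUCD1'] -- complex I is dropped for lack of NDUFB3
```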

Comprehensive reviews of the methods are available (Blazier and Papin, 2012; Hyduke et al. 2013). Only the compilation of a large set of omics data sets

  • can result in a tissue (or cell-type) specific metabolic model, whereas

the representation of one particular experimental condition is achieved

  • through the integration of omics data set generated from one experiment only (condition-specific cell line model).

Recently, metabolomic data sets have become more comprehensive and

  • using these data sets allows direct determination of the metabolic network components (the metabolites).

Additionally, metabolomics has proven to be stable, relatively inexpensive, and highly reproducible (Antonucci et al. 2012). These factors make metabolomic data sets particularly valuable for

  • interrogation of metabolic phenotypes.

Thus, the integration of these data sets is now an active field of research (Li et al. 2013; Mo et al. 2009; Paglia et al. 2012b; Schmidt et al. 2013).

Generally, metabolomic data can be incorporated into metabolic networks as

  • qualitative, quantitative, and thermodynamic constraints (Fleming et al. 2009; Mo et al. 2009).

Mo et al. used metabolites detected in the

  • spent medium of yeast cells to determine intracellular flux states through a sampling analysis (Mo et al. 2009),
  • which allowed unbiased interrogation of the possible network states (Schellenberger and Palsson 2009) and
  • prediction of internal pathway use.
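
The sampling idea can be illustrated with a bare-bones hit-and-run walk over a two-dimensional polytope. The real ACHR sampler additionally re-centers the search directions and operates in the null space of S so that mass balance holds at every sample point; the region and the chord discretization below are arbitrary choices for the sketch.

```python
# Toy hit-and-run sampler over a small 2-D polytope, illustrating the idea
# behind sampling a model's feasible flux space.
import random

def inside(p):
    x, y = p
    return 0 <= x <= 1 and 0 <= y <= 1 and x + y <= 1.5

def hit_and_run(start, n_samples, seed=0):
    rng = random.Random(seed)
    p = list(start)
    samples = []
    for _ in range(n_samples):
        d = [rng.gauss(0, 1), rng.gauss(0, 1)]          # random direction
        # coarse discretization of the chord through p along d
        ts = [t / 1000 * 4 - 2 for t in range(1001)]
        feas = [t for t in ts if inside((p[0] + t * d[0], p[1] + t * d[1]))]
        t = rng.choice(feas)                             # t = 0 is always feasible
        p = [p[0] + t * d[0], p[1] + t * d[1]]
        samples.append(tuple(p))
    return samples

pts = hit_and_run((0.5, 0.5), 500)
print(all(inside(p) for p in pts))  # True
```

Binning the sampled coordinates, exactly as done for the flux samples in this study, then approximates the probability that a reaction carries a given flux value.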

Modes of transcriptional regulation during the YMC

Such analyses have also been used to reveal the effects of

  1. enzymopathies on red blood cells (Price et al. 2004),
  2. to study effects of diet on diabetes (Thiele et al. 2005) and
  3. to define macrophage metabolic states (Bordbar et al. 2010).

This type of analysis is available as a function in the COBRA toolbox (Schellenberger et al. 2011).

In this study, we established a workflow

  • for the generation and analysis of condition-specific metabolic cell line models
  • that can facilitate the interpretation of metabolomic data.

Our modeling yields meaningful predictions regarding

  • metabolic differences between two lymphoblastic leukemia cell lines (Fig. 1A).

Fig. 1


A Combined experimental and computational pipeline to study human metabolism.

  1. Experimental work and omics data analysis steps precede computational modeling.
  2. Model predictions are validated based on targeted experimental data.
  3. Metabolomic and transcriptomic data are used for model refinement and submodel extraction.
  4. Functional analysis methods are used to characterize the metabolism of the cell-line models and compare it to additional experimental data.
  5. The validated models are subsequently used for the prediction of drug targets.

B Uptake and secretion pattern of model metabolites. All metabolite uptakes and secretions that were mapped during model generation are shown.

  • Metabolite uptakes are depicted on the left, and
  • secreted metabolites are shown on the right.
  1. A number of metabolite exchanges mapped to the model were unique to one cell line.
  2. Differences between cell lines were used to set quantitative constraints for the sampling analysis.

C Statistics about the cell line-specific network generation.

D Quantitative constraints.

For the sampling analysis, an additional set of constraints was imposed on the cell line specific models,

  • emphasizing the differences in metabolite uptake and secretion between cell lines.

Higher uptake of a metabolite was allowed

  • in the model of the cell line that consumed more of the metabolite in vitro, whereas
  • the supply was restricted for the model with lower in vitro uptake.

This was done by establishing the same ratio between the models bounds as detected in vitro.

X denotes the factor (slope ratio) that distinguishes the bounds, and

  • which was individual for each metabolite.

(a) The uptake of a metabolite could be x times higher in CCRF-CEM cells,

(b) the metabolite uptake could be x times higher in Molt-4,

(c) metabolite secretion could be x times higher in CCRF-CEM, or

(d) metabolite secretion could be x times higher in Molt-4 cells.LOD limit of detection.

The consequence of the adjustment was, in case of uptake, that one model was constrained to a lower metabolite uptake (A, B), and the difference depended on the ratio detected in vitro. In case of secretion, one model

  • had to secrete more of the metabolite, and again
  • the difference depended on the experimental difference detected between the cell lines
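
A minimal sketch of this bound-scaling scheme, with hypothetical metabolite bounds and slope ratio: the faster-consuming model keeps the default bound, while the other model's bound is divided by the measured ratio, preserving the in vitro relationship.

```python
# Sketch of the quantitative constraint scheme: preserve the in vitro
# uptake/secretion ratio between the two cell line models.
# The default bound and slope ratio below are illustrative values.

def apply_ratio(bound_default, slope_ratio, faster_model,
                models=("CCRF-CEM", "Molt-4")):
    """Return per-model bounds with the in vitro ratio between them."""
    return {
        m: bound_default if m == faster_model else bound_default / slope_ratio
        for m in models
    }

# Glucose uptake ~34 % higher in CCRF-CEM (value echoes the sampling medians)
bounds = apply_ratio(bound_default=10.0, slope_ratio=1.34,
                     faster_model="CCRF-CEM")
print(bounds)
```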

2 Results

We set up a pipeline that could be used to infer intracellular metabolic states

  • from semi-quantitative data regarding metabolites exchanged between cells and their environment.

Our pipeline combined the following four steps:

  1. data acquisition,
  2. data analysis,
  3. metabolic modeling and
  4. experimental validation of the model predictions (Fig. 1A).

We demonstrated the pipeline and its potential to predict metabolic alterations in diseases such as cancer based on

  • two lymphoblastic leukemia cell lines.

The resulting Molt-4 and CCRF-CEM condition-specific cell line models could explain

  • metabolite uptake and secretion
  • by predicting the distinct utilization of central metabolic pathways by the two cell lines.
  • The CCRF-CEM model more closely resembled a glycolytic, commonly referred to as ‘Warburg’, phenotype, whereas
  • our model predicted a more respiratory phenotype for the Molt-4 model.

We found these predictions to be in agreement with measured gene expression differences

  • at key regulatory steps in the central metabolic pathways, and they were also
  • consistent with additional experimental data regarding the energy and redox states of the cells.

After a brief discussion of the data generation and analysis steps, the results derived from model generation and analysis will be described in detail.

2.1 Pipeline for generation of condition-specific metabolic cell line models

integration of exometabolomic (EM) data

2.1.1 Generation of experimental data

We monitored the growth and viability of lymphoblastic leukemia cell lines in serum-free medium (File S2, Fig. S1). Multiple omics data sets were derived from these cells. Extracellular metabolomic (exo-metabolomic) data,

  • comprising measurements of the metabolites in the spent medium of the cell cultures (Paglia et al. 2012a),
  • were collected along with transcriptomic data, and these data sets were used to construct the models.

2.1.4 Condition-specific models for CCRF-CEM and Molt-4 cells

To determine whether we had obtained two distinct models, we evaluated the reactions, metabolites, and genes of the two models. Both the Molt-4 and CCRF-CEM models contained approximately half of the reactions and metabolites present in the global model (Fig. 1C). They were very similar to each other in terms of their reactions, metabolites, and genes (File S1, Table S5A–C).

(1) The Molt-4 model contained seven reactions that were not present in the CCRF-CEM model (Co-A biosynthesis pathway and exchange reactions).
(2) The CCRF-CEM contained 31 unique reactions (arginine and proline metabolism, vitamin B6 metabolism, fatty acid activation, transport, and exchange reactions).
(3) There were 2 and 15 unique metabolites in the Molt-4 and CCRF-CEM models, respectively (File S1, Table S5B).
(4) Approximately three quarters of the global model genes remained in the condition-specific cell line models (Fig. 1C).
(5) The Molt-4 model contained 15 unique genes, and the CCRF-CEM model had 4 unique genes (File S1, Table S5C).
(6) Both models lacked NADH dehydrogenase (complex I of the electron transport chain—ETC), which was determined by the absence of expression of a mandatory subunit (NDUFB3, Entrez gene ID 4709).

Rather, the ETC was fueled by FADH2 originating from succinate dehydrogenase and from fatty acid oxidation, which, through flavoprotein electron transfer,

  • could contribute to the same ubiquinone pool as complex I and complex II (succinate dehydrogenase).

Despite their different in vitro growth rates (which differed by 11 %, see File S2, Fig. S1) and

  • differences in exo-metabolomic data (Fig. 1B) and transcriptomic data,
  • the internal networks were largely conserved in the two condition-specific cell line models.

2.1.5 Condition-specific cell line models predict distinct metabolic strategies

Despite the overall similarity of the metabolic models, differences in their cellular uptake and secretion patterns suggested distinct metabolic states in the two cell lines (Fig. 1B; see “Materials and methods” section for more detail). To interrogate the metabolic differences, we sampled the solution space of each model using an Artificial Centering Hit-and-Run (ACHR) sampler (Thiele et al. 2005). For this analysis, additional constraints were applied, emphasizing the quantitative differences in commonly uptaken and secreted metabolites. The maximum possible uptake and maximum possible secretion flux rates were reduced

  • according to the measured relative differences between the cell lines (Fig. 1D, see “Materials and methods” section).

We plotted the number of sample points containing a particular flux rate for each reaction. The resulting binned histograms can be understood as representing the probability that a particular reaction can have a certain flux value.

A comparison of the sample points obtained for the Molt-4 and CCRF-CEM models revealed

  • a considerable shift in the distributions, suggesting a higher utilization of glycolysis by the CCRF-CEM model
    (File S2, Fig. S2).

This result was further supported by differences in medians calculated from sampling points (File S1, Table S6).
The shift persisted throughout all reactions of the pathway and was induced by the higher glucose uptake (34 %) from the extracellular medium in CCRF-CEM cells.

The sampling median for glucose uptake was 34 % higher in the CCRF-CEM model than in Molt-4 model (File S2, Fig. S2).

The usage of the TCA cycle was also distinct in the two condition-specific cell-line models (Fig. 2). Interestingly,
the models used succinate dehydrogenase differently (Figs. 2, 3).


The Molt-4 model utilized an associated reaction to generate FADH2, whereas

  • in the CCRF-CEM model, the histogram was shifted in the opposite direction,
  • toward the generation of succinate.

Additionally, there was a higher efflux of citrate toward amino acid and lipid metabolism in the CCRF-CEM model (Fig. 2). There was higher flux through anaplerotic and cataplerotic reactions in the CCRF-CEM model than in the Molt-4 model (Fig. 2); these reactions include

(1) the efflux of citrate through ATP-citrate lyase,
(2) uptake of glutamine,
(3) generation of glutamate from glutamine,
(4) transamination of pyruvate and glutamate to alanine and to 2-oxoglutarate,
(5) secretion of nitrogen, and
(6) secretion of alanine.


The Molt-4 model showed higher utilization of oxidative phosphorylation (Fig. 3), again supported by
elevated median flux through ATP synthase (36 %) and other enzymes, which contributed to higher oxidative metabolism. The sampling analysis therefore revealed different usage of central metabolic pathways by the condition-specific models.

Fig. 2

Differences in the use of the TCA cycle by the CCRF-CEM model (red) and the Molt-4 model (blue).

The table provides the median values of the sampling results. Negative values in histograms and in the table describe reversible reactions with flux in the reverse direction. There are multiple reversible reactions for the transformation of isocitrate and α-ketoglutarate, malate and fumarate, and succinyl-CoA and succinate. These reactions are unbounded, and therefore histograms are not shown. The details of participating cofactors have been removed.

Figure 3. Molt-4 has higher median flux through ETC reactions II–IV.

Abbreviations: atp ATP, cit citrate, adp ADP, pi phosphate, oaa oxaloacetate, accoa acetyl-CoA, coa coenzyme-A, icit isocitrate, αkg α-ketoglutarate, succ-coa succinyl-CoA, succ succinate, fum fumarate, mal malate, oxa oxaloacetate, pyr pyruvate, lac lactate, ala alanine, gln glutamine, ETC electron transport chain

Ingenuity network analysis showing up- (red) and downregulation (green) of miRNAs involved in PC and their target genes

metabolic pathways 1476-4598-10-70-1

Metabolic Systems Research Team fig2

Metabolic control analysis of respiration in human cancer tissue (fphys-04-00151-g001)

Metabolome Informatics Research fig1

Modelling of Central Metabolism network3

N. gaditana metabolic pathway map ncomms1688-f4

protein changes in biological mechanisms

Read Full Post »

A Future for Plasma Metabolomics in Cardiovascular Disease Assessment

Curator: Larry H Bernstein, MD, FCAP

 

 

Plasma metabolomics reveals a potential panel of biomarkers for early diagnosis in acute coronary syndrome

CM Laborde, L Mourino-Alvarez, M Posada-Ayala, G Alvarez-Llamas, MG Serranillos-Reus, et al.
Metabolomics – manuscript draft

In this study, analyses of peripheral plasma from Non-ST Segment Elevation Acute Coronary Syndrome (NSTEACS) patients and healthy controls by gas chromatography-mass spectrometry permitted the identification of 15 metabolites with statistical differences (p<0.05) between experimental groups. In our study, 6 amino acids were found decreased in NSTEACS patients when compared with the healthy control group, suggesting either a decrease in the anabolic activity of these metabolites or an increase in the catabolic pathways. Of the two possibilities, the increased catabolism of the amino acids can be explained by considering simultaneously the glycogenic and ketogenic capacity of the amino acids along with the gradual hypoxic condition to which cardiac muscle cells have been exposed.

Additionally, validation by gas chromatography-mass spectrometry and liquid chromatography-mass spectrometry permitted us to identify a potential panel of biomarkers formed by 5-OH-tryptophan, 2-OH-butyric acid and 3-OH-butyric acid. Oxidative stress conditions dramatically increase the rate of hepatic synthesis of glutathione, which is synthesized from the amino acids cysteine, glutamic acid and glycine. Under these conditions of metabolic stress, the supply of cysteine for glutathione synthesis becomes limiting, and homocysteine is used to form cystathionine, which is cleaved to cysteine and 2-OH-butyric acid. Thus, elevated plasma levels of 2-OH-butyric acid can be a good biomarker of cellular oxidative stress for the early diagnosis of ACS. Another altered metabolite of similar structure was 3-OH-butyric acid, a ketone body together with acetoacetate and acetone. Elevated levels of ketone bodies in blood and urine mainly occur in diabetic ketoacidosis. Type 1 diabetes mellitus (DMI) patients have decreased levels of insulin in the blood, which prevents glucose from entering cells, so these cells use the catabolism of fats as an energy source, producing ketones as final products.

This panel of biomarkers reflects the oxidative stress and the hypoxic state that disrupt the myocardial cells and consequently constitutes a metabolomic signature that could be used for early diagnosis of acute coronary syndrome. We hypothesize that the hypoxic situation “mimics” the physiological situation that occurs in DMI. In this case, the low energy yield of glucose metabolism “forces” these cells to use fat as an energy source (through catabolism independent of aerobic/anaerobic conditions), yielding ketones as final products. In our experiment, 3-OH-butyric acid was strongly elevated in NSTEACS patients.

 

Current Methods Used in the Protein Carbonyl Assay
Nicoleta Carmen Purdel, Denisa Margina and Mihaela Ilie.
Ann Res & Rev in Biol 2014; 4(12): 2015-2026.
http://www.sciencedomain.org/download.php?f=Purdel4122013ARRB8763-1

The attack of reactive oxygen species on proteins and the formation of protein carbonyls have been investigated only in recent years. Taking into account that protein carbonyls may play an important role in the early diagnosis of pathologies associated with reactive oxygen species overproduction, a robust and reliable method to quantify protein carbonyls in complex biological samples is required. Oxidative stress represents the aggression produced at the molecular level by the imbalance between pro-oxidant and antioxidant agents, in favor of pro-oxidants, with severe functional consequences in all organs and tissues. An overproduction of ROS results in oxidative damage especially to proteins (the main target of ROS), as well as to lipids or DNA. Glycation and oxidative stress are closely linked, and both phenomena are referred to as ‘‘glycoxidation’’. All steps of glycoxidation generate oxygen free radical production, some of them being common with lipid peroxidation pathways.
The initial glycation reaction is followed by a cascade of chemical reactions resulting in the formation of intermediate products (Schiff base, Amadori and Maillard products) and finally a variety of derivatives named advanced glycation end products (AGEs). In hyperglycemic environments and in natural aging, AGEs are generated in increased concentrations; their levels can be evaluated in plasma because they are fluorescent compounds. Specific biomarkers of oxidative stress are currently investigated in order to evaluate the oxidative status of a biological system and/or its regenerative power. Generally, malondialdehyde and 4-hydroxy-nonenal (known together as thiobarbituric acid reactive substances – TBARS), 2-propenal, and F2-isoprostanes are investigated as markers of lipid peroxidation, while the measurement of protein thiols, as well as of S-glutathionylated proteins, is assessed as a marker of oxidative damage of proteins. In most cases, the oxidative damage of DNA has 8-hydroxy-2′-deoxyguanosine (8-OHdG) as a marker. The oxidative degradation of proteins plays an important role in the early diagnosis of pathologies associated with ROS overproduction. Oxidative modification of the protein structure may take a variety of forms, including the nitration of tyrosine residues, carbonylation, oxidation of methionine or thiol groups, etc.

The carbonylation of protein represents the introduction of carbonyl groups (aldehyde or ketone) into the protein structure, through several mechanisms: by direct oxidation of lysine, arginine, proline and threonine residues of the protein chain; by interaction with lipid peroxidation products bearing aldehyde groups (such as 4-hydroxy-2-nonenal, malondialdehyde, 2-propenal); or by interaction with compounds bearing carbonyl groups resulting from the degradation of lipids or from glycoxidation. All of these molecular changes occur under oxidative stress conditions.

There is a pattern of carbonylation, meaning that only certain proteins can undergo this process, and protein structure determines the preferential sites of carbonylation. The most investigated carbonyl derivatives are gamma-glutamic semialdehyde (GGS), generated from the degradation of arginine residues, and α-aminoadipic semialdehyde (AAS), derived from lysine.

A number of studies have shown that the generation of protein carbonyl groups is associated with normal cellular phenomena like apoptosis and cell differentiation, and is dependent on age, species and habits (e.g. smoking) or exposure to severe conditions (such as starvation or stress). The formation and accumulation of protein carbonyls is increased in various human diseases, including diabetes and cardiovascular disease.

Recently, Nystrom [7] suggested that the carbonylation process is
associated with the physiological rather than the chronological age
of the organism and that carbonylation may be one of the causes of
aging and cell senescence; it can therefore be used as a marker of
these processes. Jha and Rizvi [15] proposed the quantification of
protein carbonyls in the erythrocyte membrane as a biomarker of aging.

PanelomiX: A threshold-based algorithm to create panels of
biomarkers

X Robin, N Turck, A Hainard, N Tiberti, F Lisacek. 
Translational Proteomics 2013; 1: 57–64.
http://dx.doi.org/10.1016/j.trprot.2013.04.003

The computational toolbox we present here – PanelomiX – uses
the iterative combination of biomarkers and thresholds (ICBT) method.
This method combines biomarkers and clinical scores by selecting
thresholds that provide optimal classification performance. To speed
up the calculation for a large number of biomarkers, PanelomiX selects
a subset of thresholds and parameters based on the random forest method.
The panels’ robustness and performance are analysed by cross-validation
(CV) and receiver operating characteristic (ROC) analysis.
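As an illustration only, the sketch below implements a simplified, exhaustive version of this threshold-combination idea on made-up data. PanelomiX itself prunes the search with a random-forest step and evaluates panels by CV and ROC analysis; the patient values, candidate thresholds, and function names here are hypothetical.

```python
from itertools import product

# Hypothetical data: each patient has two biomarker values; label 1 = poor outcome.
patients = [((2.1, 40.0), 0), ((8.5, 95.0), 1), ((3.0, 55.0), 0),
            ((7.9, 88.0), 1), ((2.7, 90.0), 1), ((6.5, 40.0), 0)]

def panel_predict(values, thresholds, min_positive):
    """A panel votes 'positive' when at least min_positive markers exceed their thresholds."""
    votes = sum(v > t for v, t in zip(values, thresholds))
    return int(votes >= min_positive)

def best_panel(patients, candidate_thresholds):
    """Exhaustively combine thresholds and vote counts; ICBT explores this space iteratively."""
    best = None
    for thresholds in product(*candidate_thresholds):
        for k in (1, 2):  # minimum number of markers that must exceed their thresholds
            correct = sum(panel_predict(v, thresholds, k) == y for v, y in patients)
            score = correct / len(patients)
            if best is None or score > best[0]:
                best = (score, thresholds, k)
    return best

# Candidate thresholds per marker (in practice chosen from the data distribution).
accuracy, thresholds, k = best_panel(patients, [(3.0, 5.0, 7.0), (50.0, 80.0)])
print(accuracy, thresholds, k)
```

On this toy data the search settles on one threshold per marker and a one-of-two voting rule; a real panel would be validated on held-out data rather than the training set.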

Using 8 biomarkers, we compared this method against classic
combination procedures in the determination of outcome for 113 patients
with an aneurysmal subarachnoid hemorrhage. The panel classified the
patients better than the best single biomarker (p < 0.005) and compared
favourably with other off-the-shelf classification methods.

In conclusion, the PanelomiX toolbox combines biomarkers and evaluates
the performance of panels to classify patients better than single markers
or other classifiers. The ICBT algorithm proved to be an efficient classifier,
the results of which can easily be interpreted. 

Multiparametric diagnostics of cardiomyopathies by microRNA
signatures.
CS. Siegismund, M Rohde, U Kühl,  D  Lassner.
Microchim Acta 2014 Mar.
http://dx.doi.org/10.1007/s00604-014-1249-y

MicroRNAs (miRNAs) represent a new group of stable biomarkers
that are detectable both in tissue and body fluids. Such miRNAs
may serve as cardiological biomarkers to characterize inflammatory
processes and to differentiate various forms of infection. The predictive
power of single miRNAs for diagnosis of complex diseases may be further
increased if several distinctly deregulated candidates are combined to
form a specific miRNA signature. Diagnostic systems that generate
disease related miRNA profiles are based on microarrays, bead-based
oligo sorbent assays, or on assays based on real-time polymerase
chain reactions and placed on microfluidic cards or nanowell plates.
Multiparametric diagnostic systems that can measure differentially
expressed miRNAs may become the diagnostic tool of the future due
to their predictive value with respect to clinical course, therapeutic
decisions, and therapy monitoring.
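To illustrate how several distinctly deregulated candidates can be combined into a signature score, here is a minimal sketch; the miRNA names, fold-changes, and scoring rule are hypothetical and not taken from the study.

```python
import math

# Hypothetical signature: miRNA -> expected direction of deregulation in disease.
SIGNATURE = {"miR-21": "up", "miR-155": "up", "miR-133a": "down"}

def signature_score(fold_changes):
    """Mean log2 fold-change, signed so that changes matching the
    expected direction of deregulation contribute positively."""
    terms = []
    for mirna, direction in SIGNATURE.items():
        log2fc = math.log2(fold_changes[mirna])
        terms.append(log2fc if direction == "up" else -log2fc)
    return sum(terms) / len(terms)

# A profile matching the signature scores high ...
diseased = {"miR-21": 4.0, "miR-155": 2.0, "miR-133a": 0.5}
# ... while an unremarkable profile scores near zero.
control = {"miR-21": 1.0, "miR-155": 1.1, "miR-133a": 0.9}

print(signature_score(diseased))  # clearly positive
print(signature_score(control))   # near zero
```

A diagnostic system would calibrate such a score against clinical outcomes rather than use raw fold-changes.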

Nutritional lipidomics: Molecular metabolism, analytics, and
diagnostics
JT. Smilowitz, AM. Zivkovic, Yu-Jui Y Wan, SM. Watkins, et al.
Mol. Nutr. Food Res. 2013; 00: 1–17.
http://dx.doi.org/10.1002/mnfr.201200808

The term lipidomics is quite new, first appearing in 2001. Its definition
is still being debated, from “the comprehensive analysis of all lipid
components in a biological sample” to “the full characterization of
lipid molecular species and their biological roles with respect to the
genes that encode proteins that regulate lipid metabolism”. In principle,
lipidomics is a field taking advantage of the innovations in the separation
sciences and MS together with bioinformatics to characterize the lipid
compositions of biological samples (biofluids, cells, tissues, organisms)
compositionally and quantitatively.

Biochemical pathways of lipid metabolism remain incomplete and the
tools to map lipid compositional data to pathways are still being assembled.
Biology itself is dauntingly complex and simply separating biological
structures remains a key challenge to lipidomics. Nonetheless, the
strategy of combining tandem analytical methods to perform the sensitive,
high-throughput, quantitative, and comprehensive analysis of lipid
metabolites of very large numbers of molecules is poised to drive
the field forward rapidly. Among the next steps for nutrition research is to
describe how the structures, compositions, and functions of lipid biomolecules
change in response to diet, including their distribution within discrete
functional compartments such as lipoproteins. Additionally, lipidomics must tackle the task
of assigning the functions of lipids as signaling molecules, nutrient sensors,
and intermediates of metabolic pathways.


Larry H Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Innovation

High sensitivity C-reactive protein (hsCRP)
Author: Larry Bernstein, M.D.,  (see Reviewers/Authors page)
Revised: 12 December 2010, last major update December 2010
Copyright: (c) 2003-2010, PathologyOutlines.com, Inc.

PathologyOutlines.com/cardiac

General
=========================================================================

  • hsCRP is an enhanced sensitivity C-reactive protein (CRP) immunoassay with a lowered measurement cutoff

Methodology
=========================================================================

  • Laser nephelometry

Indications
=========================================================================

  • In the JUPITER trial of apparently healthy persons without hyperlipidemia but with elevated
    high-sensitivity C-reactive protein levels, rosuvastatin significantly reduced the incidence of major
    cardiovascular events ( N Engl J Med 2008;359:2195)
  • This effect is thought to be due to the effect of statins on inflammation, which is detected by hsCRP
  • hsCRP assessment for cardiovascular disease in asymptomatic individuals seems to be most useful for
    those classified as intermediate risk on the basis of traditional risk factors (e.g. an NCEP-ATP III global
    risk score between 5% and 20%), and who do not already warrant chronic treatment with aspirin and a statin

Limitations
=========================================================================

  • Most useful for patients with intermediate risk for cardiovascular disease (Circ Cardiovasc Qual Outcomes
    2008;1:92, Ann Intern Med 2009;151:483)
  • For low risk patients, if their risk increases 3x (e.g. from 1% to 3%), their absolute cardiovascular risk
    is still low, so the hsCRP test has no practical value
  • High risk patients are candidates for chronic aspirin and lipid-lowering therapy regardless of their hsCRP test results
  • However, a recent study concludes that risk based statin treatment without hs-CRP testing is more cost-effective
    than hs-CRP screening, assuming that statins have good long-term safety and provide benefits among low-risk
    people with normal hs-CRP (Circulation 2010;122:1478)

Reference ranges
=========================================================================

  • Low risk: under 1 mg/L
  • Intermediate risk: 1-3 mg/L
  • High risk: > 3 mg/L
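These cutoffs map directly to a simple categorization rule; a minimal sketch (the function name is illustrative, and boundary handling at exactly 1 and 3 mg/L is an assumption):

```python
def hscrp_risk_category(hscrp_mg_per_l):
    """Classify cardiovascular risk from hsCRP (mg/L) using the
    low (<1), intermediate (1-3), and high (>3) cutoffs above."""
    if hscrp_mg_per_l < 0:
        raise ValueError("hsCRP concentration cannot be negative")
    if hscrp_mg_per_l < 1.0:
        return "low"
    if hscrp_mg_per_l <= 3.0:
        return "intermediate"
    return "high"

print(hscrp_risk_category(0.5))   # low
print(hscrp_risk_category(2.0))   # intermediate
print(hscrp_risk_category(4.2))   # high
```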

Additional references
=========================================================================

  • Wikipedia, Circulation 2006;113:2335, N Engl J Med 2001;344:1959

How to use C-reactive protein in acute coronary care
LM. Biasucci,W Koenig, J Mair, C Mueller, M Plebani, B Lindahl, N Rifai, P Venge, C Hamm, et al.
Eur Heart J Nov 2013; http://dx.doi.org/10.1093/eurheartj/eht435

In patients with acute myocardial infarction (AMI), C-reactive protein increases within 4–6 h of symptoms,
peaks 2–4 days later, and returns to baseline after 7–10 days. Because of evidence that atherosclerosis
is an inflammatory disease, high-sensitivity C-reactive protein can be used as a biomarker of risk
in primary prevention and in patients with known cardiovascular disease.
The upper reference limit is method-dependent but usually 8 mg/L for standard assays. The distribution of high-
sensitivity C-reactive protein concentrations is skewed in both genders with a 50th percentile of 1.5 mg/L (excluding
women on hormone replacement therapy).  C-reactive protein concentrations are increased by smoking, obesity, and
hormone replacement therapy and reduced by exercise, moderate alcohol drinking, and statin use. Correction for these
factors is essential in reference range studies.
C-reactive protein assays are not standardized. We recommend the use of third-generation high-sensitivity C-reactive
protein assays that combine features of standard and high-sensitivity C-reactive protein assays. Required assay precision
should be < 10% in the range of 3 to 10 mg/L.
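As a worked illustration of that precision requirement: assay imprecision is conventionally expressed as the coefficient of variation (CV = SD / mean × 100%), which can be computed from replicate control measurements. The replicate values below are made up.

```python
import statistics

def percent_cv(replicates):
    """Coefficient of variation (%) of replicate measurements (sample SD)."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)
    return 100.0 * sd / mean

# Hypothetical replicate hsCRP results (mg/L) for a control material near 5 mg/L.
replicates = [4.8, 5.1, 5.0, 4.9, 5.2]
cv = percent_cv(replicates)
print(round(cv, 2), "% CV; meets < 10% requirement:", cv < 10.0)
```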


USPTO Guidance On Patentable Subject Matter


Curator and Reporter: Larry H Bernstein, MD, FCAP

Revised 4 July, 2014

http://pharmaceuticalintelligence.com/2014/07/03/uspto-guidance-on-patentable-subject-matter

 

I came across a few recent articles on the subject of US Patent Office guidance on patentability as well as on Supreme Court ruling on claims. I filed several patents on clinical laboratory methods early in my career upon the recommendation of my brother-in-law, now deceased.  Years later, after both brother-in-law and patent attorney are no longer alive, I look back and ask what I have learned over $100,000 later, with many trips to the USPTO, opportunities not taken, and a one year provisional patent behind me.

My conclusion is

(1) Patents are for the protection of the innovator, who might realize legal protection, but the cost and the time investment can well exceed the cost of starting up and building a small enterprise, which would be the next step.

(2) The other thing to consider is the capability of the lawyer or firm that represents you.  A patent that is well done can be expected to take 5-7 years to go through with due diligence.   I would not expect it to be done well by a university with many other competing demands. I might be wrong in this respect, as the climate has changed, and research universities have sprouted engines for change.  Experienced and productive faculty are encouraged or allowed to form their own such entities.

(3) The emergence of Big Data, computational biology, and very large data warehouses for data use and integration has changed the landscape. The resources required to pursue research along these lines are quite beyond an individual’s sole capacity to succeed without outside funding. In addition, the changed requirement of first-to-file has muddied the water.

Of course, one can propose without anything published in the public domain. That makes it possible for corporate entities to file thousands of patents, whether or not there is actual validation at the time of filing. It would be a quite trying experience for anyone to pursue in the USPTO without some litigation over ownership of patent rights. At this stage of technology development, I have come to realize that the organization of research, peer review, and archiving of data is still at a stage where some of the best systems available for storing and accessing data still come considerably short of what is needed for the most complex tasks, even though improvements have come at an exponential pace.

I shall not comment on the contested views held by physicists, chemists, biologists, and economists over the completeness of guiding theories strongly held.  Only history will tell.  Beliefs can hold a strong sway, and have many times held us back.

I am not an expert on legal matters, but it is incomprehensible to me that issues concerning technology innovation can be adjudicated in the Supreme Court, as has occurred in recent years. I have postgraduate degrees in Medicine and Developmental Anatomy, post-medical training in pathology and laboratory medicine, and experience in analytical and research biochemistry. Cases of this type, which we see with increasing frequency, are beyond the competencies expected of the Supreme Court, or even of the Federal District Courts, as has occurred with respect to the development and application of the human genome.

I’m not sure that these developments can be resolved for the public good without a fuller development of an open-access system of publishing. Now I present some recent publications about, or published by, the USPTO.


USPTO Guidance On Patentable Subject Matter: Impediment to Biotech Innovation

Joanna T. Brougher, David A. Fazzolare. J Commercial Biotechnology 2014; 20(3).

Abstract: In June 2013, the U.S. Supreme Court issued a unanimous decision upending more than three decades’ worth of established patent practice when it ruled that isolated gene sequences are no longer patentable subject matter under 35 U.S.C. Section 101. While many practitioners in the field believed that the USPTO would interpret the decision narrowly, the USPTO actually expanded the scope of the decision when it issued its guidelines for determining whether an invention satisfies Section 101.

The guidelines were met with intense backlash with many arguing that they unnecessarily expanded the scope of the Supreme Court cases in a way that could unduly restrict the scope of patentable subject matter, weaken the U.S. patent system, and create a disincentive to innovation. By undermining patentable subject matter in this way, the guidelines may end up harming not only the companies that patent medical innovations, but also the patients who need medical care.  This article examines the guidelines and their impact on various technologies.

Keywords:   patent, patentable subject matter, Myriad, Mayo, USPTO guidelines


References

35 U.S.C. Section 101 states: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Prometheus Laboratories, Inc. v. Mayo Collaborative Services, 566 U.S. ___ (2012)

Association for Molecular Pathology et al., v. Myriad Genetics, Inc., 569 U.S. ___ (2013).

Parke-Davis & Co. v. H.K. Mulford Co., 189 F. 95, 103 (C.C.S.D.N.Y. 1911)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.

http://www.uspto.gov/patents/law/exam/myriad-mayo_guidance.pdf

Funk Brothers Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 131 (1948)


Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/


DOI: http://dx.doi.org/10.5912/jcb664

 

Science 4 July 2014; 345 (6192): pp. 14-15  DOI: http://dx.doi.org/10.1126/science.345.6192.14
  • IN DEPTH

INTELLECTUAL PROPERTY

Biotech feels a chill from changing U.S. patent rules

A 2013 Supreme Court decision that barred human gene patents is scrambling patenting policies.


A year after the U.S. Supreme Court issued a landmark ruling that human genes cannot be patented, the biotech industry is struggling to adapt to a landscape in which inventions derived from nature are increasingly hard to patent. It is also pushing back against follow-on policies proposed by the U.S. Patent and Trademark Office (USPTO) to guide examiners deciding whether an invention is too close to a natural product to deserve patent protection. Those policies reach far beyond what the high court intended, biotech representatives say.

“Everything we took for granted a few years ago is now changing, and it’s generating a bit of a scramble,” says patent attorney Damian Kotsis of Harness Dickey in Troy, Michigan, one of more than 15,000 people who gathered here last week for the Biotechnology Industry Organization’s (BIO’s) International Convention.

At the meeting, attorneys and executives fretted over the fate of patent applications for inventions involving naturally occurring products—including chemical compounds, antibodies, seeds, and vaccines—and traded stories of recent, unexpected rejections by USPTO. Industry leaders warned that the uncertainty could chill efforts to commercialize scientific discoveries made at universities and companies. Some plan to appeal the rejections in federal court.

USPTO officials, meanwhile, implored attendees to send them suggestions on how to clarify and improve its new policies on patenting natural products, and even announced that they were extending the deadline for public comment by a month. “Each and every one of you in this room has a moral duty … to provide written comments to the PTO,” patent lawyer and former USPTO Deputy Director Teresa Stanek Rea told one audience.

At the heart of the shake-up are two Supreme Court decisions: the ruling last year in Association for Molecular Pathology v. Myriad Genetics Inc. that human genes cannot be patented because they occur naturally (Science, 21 June 2013, p. 1387); and the 2012 Mayo v. Prometheus decision, which invalidated a patent on a method of measuring blood metabolites to determine drug doses because it relied on a “law of nature” (Science, 12 July 2013, p. 137).

Myriad and Mayo are already having a noticeable impact on patent decisions, according to a study released here. It examined about 1000 patent applications that included claims linked to natural products or laws of nature that USPTO reviewed between April 2011 and March 2014. Overall, examiners rejected about 40%; Myriad was the basis for rejecting about 23% of the applications, and Mayo about 35%, with some overlap, the authors concluded. That rejection rate would have been in the single digits just 5 years ago, asserted Hans Sauer, BIO’s intellectual property counsel, at a press conference. (There are no historical numbers for comparison.) The study was conducted by the news service Bloomberg BNA and the law firm Robins, Kaplan, Miller & Ciresi in Minneapolis, Minnesota.

USPTO is extending the decisions far beyond diagnostics and DNA

The numbers suggest USPTO is extending the decisions far beyond diagnostics and DNA, attorneys say. Harness Dickey’s Kotsis, for example, says a client recently tried to patent a plant extract with therapeutic properties; it was different from anything in nature, Kotsis argued, because the inventor had altered the relative concentrations of key compounds to enhance its effect. Nope, decided USPTO, too close to nature.

In March, USPTO released draft guidance designed to help its examiners decide such questions, setting out 12 factors for them to weigh. For example, if an examiner deems a product “markedly different in structure” from anything in nature, that counts in its favor. But if it has a “high level of generality,” it gets dinged.

The draft has drawn extensive criticism. “I don’t think I’ve ever seen anything as complicated as this,” says Kevin Bastian, a patent attorney at Kilpatrick Townsend & Stockton in San Francisco, California. “I just can’t believe that this will be the standard.”

USPTO officials appear eager to fine-tune the draft guidance, but patent experts fear the Supreme Court decisions have made it hard to draw clear lines. “The Myriad decision is hopelessly contradictory and completely incoherent,” says Dan Burk, a law professor at the University of California, Irvine. “We know you can’t patent genetic sequences,” he adds, but “we don’t really know why.”

Get creative in using Draft Guidelines!

For now, Kotsis says, applicants will have to get creative to reduce the chance of rejection. Rather than claim protection for a plant extract itself, for instance, an inventor could instead patent the steps for using it to treat patients. Other biotech attorneys may try to narrow their patent claims. But there’s a downside to that strategy, they note: narrower patents can be harder to protect from infringement, making them less attractive to investors. Others plan to wait out the storm, predicting USPTO will ultimately rethink its guidance and ease the way for new patents.

 

Public comment period extended

USPTO has extended the deadline for public comment to 31 July, with no schedule for issuing final language. Regardless of the outcome, however, Stanek Rea warned a crowd of riled-up attorneys that, in the world of biopatents, “the easy days are gone.”

 

United States Patent and Trademark Office

Today we published and made electronically available a new edition of the Manual of Patent Examining Procedure (MPEP).
Manual of Patent Examining Procedure – uspto.gov: http://www.uspto.gov/web/offices/pac/mpep/index.html
Summary of Changes

PDF Title Page
PDF Foreword
PDF Introduction
PDF Table of Contents
PDF Chapter 600 – Parts, Form, and Content of Application
PDF Chapter 700 – Examination of Applications
PDF Chapter 800 – Restriction in Applications Filed Under 35 U.S.C. 111; Double Patenting
PDF Chapter 900 – Prior Art, Classification, and Search
PDF Chapter 1000 – Matters Decided by Various U.S. Patent and Trademark Office Officials
PDF Chapter 1100 – Statutory Invention Registration (SIR); Pre-Grant Publication (PGPub) and Preissuance Submissions
PDF Chapter 1200 – Appeal
PDF Chapter 1300 – Allowance and Issue
PDF Appendix L – Patent Laws
PDF Appendix R – Patent Rules
PDF Appendix P – Paris Convention
PDF Subject Matter Index
PDF Zipped version of the MPEP current revision in the PDF format.

Manual of Patent Examining Procedure (MPEP), Ninth Edition, March 2014

The USPTO continues to offer an online discussion tool for commenting on selected chapters of the Manual. To participate in the discussion and to contribute your ideas go to:
http://uspto-mpep.ideascale.com.


Note: For current fees, refer to the Current USPTO Fee Schedule.
Consolidated Laws – The patent laws in effect as of May 15, 2014.
Consolidated Rules – The patent rules in effect as of May 15, 2014.
MPEP Archives (1948 – 2012)
Current MPEP: Searchable MPEP

The documents updated in the Ninth Edition of the MPEP, dated March 2014, include changes that became effective in November 2013 or earlier.
All of the documents have been updated for the Ninth Edition except Chapters 800, 900, 1000, 1300, 1700, 1800, 1900, 2000, 2300, 2400, 2500, and Appendix P.
More information about the changes and updates is available from the “Blue Page – Introduction” of the Searchable MPEP or from the “Summary of Changes” link to the HTML and PDF versions provided below.

Discuss the Manual of Patent Examining Procedure (MPEP)

Welcome to the MPEP discussion tool!

We have received many thoughtful ideas on Chapters 100-600 and 1800 of the MPEP as well as on how to improve the discussion site. Each and every idea submitted by you, the participants in this conversation, has been carefully reviewed by the Office, and many of these ideas have been implemented in the August 2012 revision of the MPEP and many will be implemented in future revisions of the MPEP. The August 2012 revision is the first version provided to the public in a web based searchable format. The new search tool is available at http://mpep.uspto.gov. We would like to thank everyone for participating in the discussion of the MPEP.

We have some great news! Chapters 1300, 1500, 1600 and 2400 of the MPEP are now available for discussion. Please submit any ideas and comments you may have on these chapters. Also, don’t forget to vote on ideas and comments submitted by other users. As before, our editorial staff will periodically be posting proposed new material for you to respond to, and in some cases will post responses to some of the submitted ideas and comments.

Recently, we have received several comments concerning the Leahy-Smith America Invents Act (AIA). Please note that comments regarding the implementation of the AIA should be submitted to the USPTO via email to aia_implementation@uspto.gov or via postal mail, as indicated at the America Invents Act Web site. Additional information regarding the AIA is available at www.uspto.gov/americainventsact. We have also received several comments suggesting policy changes, which have been routed to the appropriate offices for consideration. We really appreciate your thinking and recommendations!

FDA Guidance for Industry:Electronic Source Data in Clinical Investigations

The FDA published its new Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” in September 2013.
The Guidance defines the expectations of the FDA concerning electronic source data generated in the context of clinical trials. Find out more about this Guidance.
http://www.gmp-compliance.org/enews_4288_FDA%20Guidance%20for%20Industry%3A%20Electronic%20Source%20Data%20in%20Clinical%20Investigations_8534,8457,8366,8308,Z-COVM_n.html

After more than 5 years and two draft versions, the final version of the Guidance for
Industry (GfI) – “Electronic Source Data in Clinical Investigations” – was published in
September 2013. This new FDA Guidance defines the FDA’s expectations for sponsors,
CROs, investigators and other persons involved in the capture, review and retention of
electronic source data generated in the context of FDA-regulated clinical trials. In an
effort to encourage the modernization and increased efficiency of processes in clinical
trials, the FDA clearly supports the capture of electronic source data and emphasizes
the agency’s intention to support activities aimed at ensuring the reliability, quality,
integrity and traceability of this source data, from its electronic source to the electronic
submission of the data in the context of an authorization procedure. The Guidance
addresses aspects such as data capture, data review and record retention. When the
computerized systems used in clinical trials are described, the FDA recommends
that the description focus not only on the intended use of the system, but also on
data protection measures and the flow of data across system components and
interfaces. In practice, the pharmaceutical industry needs to meet significant
requirements regarding the organisation, planning, specification and verification of
computerized systems in the field of clinical trials. The FDA also mentions in the
Guidance that it does not intend to apply 21 CFR Part 11 to electronic health records
(EHR).

Author: Oliver Herrmann, Q-Infinity
Source: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf
Webinar: https://collaboration.fda.gov/p89r92dh8wc

 


Larry H Bernstein, MD, FCAP, Author and Curator

Proteomics – The Pathway to Understanding and Decision-making in Medicine
http://pharmaceuticalintelligence.com/2014/06/22/

This dialogue is a series of discussions introducing several perspectives on proteomics discovery, an emerging scientific enterprise in the -OMICS- family of disciplines that aims to clarify many of the challenges in understanding disease, aiding diagnosis, and guiding treatment decisions. Beyond that focus, it will contribute to personalized medical treatment by facilitating the identification of treatment targets for the pharmaceutical industry. Despite enormous advances in genomics research over the last two decades, there is still a problem in reaching the anticipated goals for introducing new targeted treatments, which have seen repeated failures in phase III clinical trials; even when success has been achieved, it is often temporary. The other problem has been the toxicity of agents widely used in chemotherapy. Even though the genomic approach brings relief to the issues of toxicity found in organic-chemistry derivative blocking reactions, specificity for the target cell without an effect on normal cells has been elusive.

This is not confined to cancer chemotherapy; it can also be seen in pain medication, and it has been a growing problem in antimicrobial therapy. The stumbling block has been the inability to manage a multiplicity of reactions that must be modulated in a changing environment based on the 3-dimensional structure of proteins, pH changes, ionic balance, micro- and macrovascular circulation, and protein–protein and protein–membrane interactions. There is reason to consider that the present problems can be overcome through much better modification of target cellular metabolism as we peel away the confounding and blinding factors with multivariable control of these imbalances, like removing the skin of an onion.

This is the first of a series of articles; for convenience, we shall here emphasize only the progress in applying proteomics to cardiovascular disease.

growth in funding proteomics 1990-2010


Part I.

Panomics: Decoding Biological Networks  (Clinical OMICs 2014; 5)

Technological advances such as high-throughput sequencing are transforming medicine from symptom-based diagnosis and treatment to personalized medicine as scientists employ novel rapid genomic methodologies to gain a broader comprehension of disease and disease progression. As next-generation sequencing becomes more rapid, researchers are turning toward large-scale pan-omics, the collective use of all omics such as genomics, epigenomics, transcriptomics, proteomics, metabolomics, lipidomics and lipoprotein proteomics, to better understand, identify, and treat complex disease.

Genomics has been a cornerstone in understanding disease, and the sequencing of the human genome has led to the identification of numerous disease biomarkers through genome-wide association studies (GWAS). The goal of these studies was that the biomarkers would serve to predict individual disease risk, enable early detection of disease, help make treatment decisions, and identify new therapeutic targets. In reality, however, only a few have become established in clinical practice. For example, in human GWAS of heart failure at least 35 biomarkers have been identified, but only natriuretic peptides have moved into clinical practice, where they are used primarily as a diagnostic tool.

Proteomics Advances Will Rival the Genetics Advances of the Last Ten Years

Seventy percent of the decisions made by physicians today are influenced by results of diagnostic tests, according to N. Leigh Anderson, founder of the Plasma Proteome Institute and CEO of SISCAPA Assay Technologies. Imagine the changes that will come about when future diagnostics tests are more accurate, more useful, more economical, and more accessible to healthcare practitioners. For Dr. Anderson, that’s the promise of proteomics, the study of the structure and function of proteins, the principal constituents of the protoplasm of all cells.

In explaining why proteomics is likely to have such a major impact, Dr. Anderson starts with a major difference between the genetic testing common today, and the proteomic testing that is fast coming on the scene. “Most genetic tests are aimed at measuring something that’s constant in a person over his or her entire lifetime. These tests provide information on the probability of something happening, and they can help us understand the basis of various diseases and their potential risks. What’s missing is, a genetic test is not going to tell you what’s happening to you right now.”

Mass Spec-Based Multiplexed Protein Biomarkers

Clinical proteomics applications rely on the translation of targeted protein quantitation technologies and methods to develop robust assays that can guide diagnostic, prognostic, and therapeutic decision-making. The development of a clinical proteomics-based test begins with the discovery of disease-relevant biomarkers, followed by validation of those biomarkers.

“In common practice, the discovery stage is performed on a MS-based platform for global unbiased sampling of the proteome, while biomarker qualification and clinical implementation generally involve the development of an antibody-based protocol, such as the commonly used enzyme-linked immunosorbent assays (ELISAs),” state López et al. in Proteome Science (2012; 10: 35–45). “Although this process is potentially capable of delivering clinically important biomarkers, it is not the most efficient process, as the latter is low-throughput, very costly, and time-consuming.”

Part II.  Proteomics for Clinical and Research Use: Combining Protein Chips, 2D Gels and Mass Spectrometry

The Next Step: Exploring the Proteome: Translation and Beyond

N. Leigh Anderson, Ph.D., Chief Scientific Officer, Large Scale Proteomics Corporation

Three streams of technology will play major roles in quantitative (expression) proteomics over the coming decade. Two-dimensional electrophoresis and mass spectrometry represent well-established methods for, respectively, resolving and characterizing proteins, and both have now been automated to enable the high-throughput generation of data from large numbers of samples.

These methods can be powerfully applied to discover proteins of interest as diagnostics, small molecule therapeutic targets, and protein therapeutics. However, neither offers a simple, rapid, routine way to measure many proteins in common samples like blood or tissue homogenates.

Protein chips do offer this possibility, and thus complete the triumvirate of technologies that will deliver the benefits of proteomics to both research and clinical users. Integration of efforts in all three approaches are discussed, highlighting the application of the Human Protein Index® database as a source of protein leads.


N. Leigh Anderson, Ph.D., is Chief Scientific Officer of the Proteomics subsidiary of Large Scale Biology Corporation (LSBC). Dr. Anderson obtained his B.A. in Physics with honors from Yale and a Ph.D. in Molecular Biology from Cambridge University (England), where he worked with M. F. Perutz as a Churchill Fellow at the MRC Laboratory of Molecular Biology. Subsequently he co-founded the Molecular Anatomy Program at the Argonne National Laboratory (Chicago), where his work in the development of 2D electrophoresis and molecular database technology earned him, among other distinctions, the American Association for Clinical Chemistry’s Young Investigator Award for 1982, the 1983 Pittsburgh Analytical Chemistry Award, the 2008 AACC Outstanding Research Award, and the 2013 National Science Medal.

In 1985 Dr. Anderson co-founded LSBC in order to pursue commercial development and large-scale applications of 2D electrophoretic protein mapping technology. This effort has resulted in a large-scale proteomics analytical facility supporting research work for LSBC and its pharmaceutical industry partners. Dr. Anderson’s current primary interests are in the automation of proteomics technologies and the expansion of LSBC’s proteomics databases describing drug effects and disease processes in vivo and in vitro. Large Scale Biology went public in August 2000.

Part II. Plasma Proteomics: Lessons in Biomarkers and Diagnostics

Exposome Workshop
N Leigh Anderson
Washington 8 Dec 2011

Questions and Lessons:

• Clinical diagnostics as a model for exposome indicators
• Technology options for measuring protein responses to exposures
• Scale of the problem: exposure signals vs population noise

The Clinical Plasma Proteome
• Plasma and serum are the dominant non-invasive clinical sample types
– standard materials for in vitro diagnostics (IVD)
• Proteins measured in clinically available tests in the US
– 109 proteins via FDA-cleared or approved tests
• Clinical test costs range from $9 (albumin) to $122 (Her2)
• 90% of those ever approved are still in use
– 96 additional proteins via laboratory-developed tests (not FDA cleared or approved)
– Total: 205 proteins (≅ products of 211 genes, excluding Igs)
• Clinically applied proteins thus account for
– about 1% of the baseline human proteome (1 gene : 1 protein)
– about 10% of the 2,000+ proteins observed in deep discovery plasma proteome datasets
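The coverage figures above can be checked with a few lines of arithmetic. This is only a back-of-envelope sketch; the 20,000-protein baseline is an assumed round number implied by the 1 gene : 1 protein statement, not a figure from the slide.

```python
# Back-of-envelope check of the clinical plasma proteome coverage figures.
# The ~20,000-protein baseline is an assumed round number (1 gene : 1 protein).
fda_cleared = 109        # proteins with FDA-cleared or approved tests
lab_developed = 96       # additional proteins via laboratory-developed tests
clinical_total = fda_cleared + lab_developed

baseline_proteome = 20_000      # assumed baseline human proteome size
deep_plasma_datasets = 2_000    # proteins seen in deep discovery plasma datasets

print(clinical_total)                                      # 205
print(round(100 * clinical_total / baseline_proteome))     # ~1% of baseline proteome
print(round(100 * clinical_total / deep_plasma_datasets))  # ~10% of deep plasma datasets
```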

“New” Protein Diagnostics Are FDA-Cleared at a Rate of ~1.5/yr:
Insufficient to Meet Dx or Rx Development Needs

FDA clearance of protein diagnostics

A Major Technology Gulf Exists Between Discovery Proteomics and Routine Diagnostic Platforms

Two Streams of Proteomics
A. Problem: Basic biology — maximum proteome coverage (including PTMs and splice variants) to provide unbiased discovery of mechanistic information
• Critical: depth and breadth
• Not critical: cost, throughput, quantitative precision
B. Technology: Discovery proteomics — a specialized proteomics field, large groups, complex workflows and informatics

Part III.  Addressing the Clinical Proteome with Mass Spectrometric Assays

N. Leigh Anderson, PhD, SISCAPA Assay Technologies, Inc.

protein changes in biological mechanisms

No Increase in FDA Cleared Protein Tests in 20 yr

“New” Protein Tests in Plasma Are FDA-Cleared at a Rate of ~1.5/yr:
Insufficient to Meet Dx or Rx Development Needs

See figure above

An Explanation: the Biomarker Pipeline is Blocked at the Verification Step

Immunoassay Weaknesses Impact Biomarker Verification

1) Specificity: what actually forms the immunoassay sandwich – or prevents its formation – is not directly visualized

2) Cost: an assay developed to FDA-approvable quality costs $2–5M per protein

Major plasma proteins

Immunoassay vs hybrid MS-based assays

Mass spectrometry: MRMs provide what is missing in immunoassays:

– specificity
– internal standardization
– multiplexing
– rapid configuration, provided a protein can act like a small molecule

MRM of Proteotypic Tryptic Peptides Provides Highly Specific Assays for Proteins >1 µg/mL in Plasma
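Proteotypic peptides for MRM assays are typically chosen from an in-silico trypsin digest of the target protein. The sketch below shows the standard cleavage rule (after K or R, but not before P); the toy sequence and the length cutoffs are illustrative assumptions, not taken from the source.

```python
import re

def tryptic_peptides(sequence, min_len=6, max_len=25):
    """In-silico trypsin digest: cleave after K or R, but not before P.
    Length cutoffs are illustrative defaults for MRM-friendly peptides."""
    fragments = re.split(r"(?<=[KR])(?!P)", sequence)
    return [p for p in fragments if min_len <= len(p) <= max_len]

# Hypothetical toy sequence, not a real protein:
# the KP bond is left intact, cleavage occurs after R and the terminal K
print(tryptic_peptides("MKPEPTIDERSAMPLEK"))  # ['MKPEPTIDER', 'SAMPLEK']
```

In practice, candidate peptides would be further filtered for uniqueness against the whole proteome and for the absence of easily modified residues.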

Peptide-Level MS Provides High Structural Specificity
Multiple Reaction Monitoring (MRM) Quantitation
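The arithmetic behind internally standardized MRM quantitation is simple: a known amount of stable-isotope-labeled standard (SIS) peptide is spiked in, and the analyte concentration is read from the light/heavy peak-area ratio. The function name and example numbers below are hypothetical, a minimal sketch of the calculation rather than any vendor's implementation.

```python
def analyte_concentration(light_area, heavy_area, sis_conc):
    """MRM with a stable-isotope-labeled standard (SIS):
    analyte conc = (light/heavy peak-area ratio) x spiked SIS concentration.
    Assumes equal MS response for light and heavy forms (they co-elute)."""
    return (light_area / heavy_area) * sis_conc

# Illustrative numbers: light peak area 2e6, heavy (SIS) 1e6, SIS spiked at 5 fmol/uL
print(analyte_concentration(2.0e6, 1.0e6, 5.0))  # 10.0 fmol/uL
```

Because the light and heavy peptides differ only in isotope composition, the ratio cancels run-to-run variation in injection volume and ionization efficiency, which is the internal standardization the slide refers to.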

Addressing MRM limitations via specific enrichment of analyte peptides: SISCAPA

– sensitivity
– throughput (LC-MS/MS cycle time)

SISCAPA combines best features of immuno and MS

SISCAPA Process Schematic Diagram
Stable Isotope-labeled Standards with Capture on Anti-Peptide Antibodies

An automated process for SISCAPA targeted protein quantitation utilizes high affinity capture antibodies that are immobilized on magnetic beads

Antibody sequence-specific peptide binding

SISCAPA target enrichment

Multiple reaction monitoring (MRM) quantitation

Protein quantitation via signature peptides

First SISCAPA assay: thyroglobulin

personalized reference range within population range

Glycemic control in DM

Part IV. National Heart, Lung, and Blood Institute Clinical Proteomics Working Group Report
Christopher B. Granger, MD; Jennifer E. Van Eyk, PhD; Stephen C. Mockrin, PhD;
N. Leigh Anderson, PhD; on behalf of the Working Group Members*
Circulation. 2004;109:1697-1703 doi: 10.1161/01.CIR.0000121563.47232.2A
http://circ.ahajournals.org/content/109/14/1697

Abstract—The National Heart, Lung, and Blood Institute (NHLBI) Clinical Proteomics Working Group was charged with identifying opportunities and challenges in clinical proteomics and using these as a basis for recommendations aimed at directly improving patient care. The group included representatives of clinical and translational research, proteomic technologies, laboratory medicine, bioinformatics, and 2 of the NHLBI Proteomics Centers, which form part of a program focused on innovative technology development.

This report represents the results from a one-and-a-half-day meeting on May 8 and 9, 2003. For the purposes of this report, clinical proteomics is defined as the systematic, comprehensive, large-scale identification of protein patterns (“fingerprints”) of disease and the application of this knowledge to improve patient care and public health through better assessment of disease susceptibility, prevention of disease, selection of therapy for the individual, and monitoring of treatment response. (Circulation. 2004;109:1697-1703.)
Key Words: proteins, diagnosis, prognosis, genetics, plasma

Part V.  Overview: The Maturing of Proteomics in Cardiovascular Research

Jennifer E. Van Eyk
Circ Res. 2011;108:490-498  doi: 10.1161/CIRCRESAHA.110.226894
http://circres.ahajournals.org/content/108/4/490

Abstract: Proteomic technologies are used to study the complexity of proteins, their roles, and biological functions. The field is based on the premise that the diversity of proteins, comprising their isoforms and posttranslational modifications (PTMs), underlies biology.

Based on an annotated human cardiac protein database, 62% have at least one PTM (phosphorylation currently dominating),
whereas 25% have more than one type of modification.

The field of proteomics strives to observe and quantify this protein diversity. It represents a broad group of technologies
and methods arising from analytic protein biochemistry, analytic separation, mass spectrometry, and bioinformatics.
Since the 1990s, the application of proteomic analysis has been increasingly used in cardiovascular research.

Prevalence of cardiovascular diseases in adults by age and sex, U.S., 2007–2010

Technology development and adaptation have been at the heart of this progress. Technology undergoes a maturation, becoming routine and ultimately obsolete, being replaced by newer methods. Because of extensive methodological improvements, many proteomic studies today observe 1000 to 5000 proteins.

Only 5 years ago, this was not feasible. Even so, there are still roadblocks. Nowadays, there is a focus on obtaining better characterization of protein isoforms and specific PTMs. Consequently, new techniques for identification and quantification of modified amino acid residues are required, as is the assessment of single-nucleotide polymorphisms in addition to determination of their structural and functional consequences.

In this series, 4 articles provide concrete examples of how proteomics can be incorporated into cardiovascular
research and address specific biological questions. They also illustrate how novel discoveries can be made and
how proteomic technology has continued to evolve. (Circ Res. 2011;108:490-498.)
Key Words: proteomics technology protein isoform posttranslational modification polymorphism

Part VI.   The -omics era: Proteomics and lipidomics in vascular research

Athanasios Didangelos, Christin Stegemann, Manuel Mayr∗

King’s British Heart Foundation Centre, King’s College London, UK

Atherosclerosis 2012; 221: 12– 17     http://dx.doi.org/10.1016/j.atherosclerosis.2011.09.043

Abstract

A main limitation of the current approaches to atherosclerosis research is the focus on the investigation of individual
factors, which are presumed to be involved in the pathophysiology and whose biological functions are, at least in part, understood.

These molecules are investigated extensively while others are not studied at all. In comparison to our detailed
knowledge about the role of inflammation in atherosclerosis, little is known about extracellular matrix remodelling
and the retention of individual lipid species rather than lipid classes in early and advanced atherosclerotic lesions.

The recent development of mass spectrometry-based methods and advanced analytical tools are transforming
our ability to profile extracellular proteins and lipid species in animal models and clinical specimen with the goal
of illuminating pathological processes and discovering new biomarkers.

Fig. 1. ECM in atherosclerosis. The bulk of the vascular ECM is synthesised by smooth muscle cells and composed primarily of collagens, proteoglycans and glycoproteins. During the early stages of atherosclerosis, LDL binds to the proteoglycans of the vessel wall, becomes modified, i.e. by oxidation (ox-LDL), and sustains a proinflammatory cascade that is proatherogenic.


Fig. 2. Lipidomics of atherosclerotic plaques. Lipids were separated by ultra-performance reverse-phase liquid chromatography on a Waters® ACQUITY UPLC® (HSS T3 column, 100 mm × 2.1 mm i.d., 1.8 µm particle size, 55 °C, flow rate 400 µL/min; Waters, Milford, MA, USA) and analyzed on a quadrupole time-of-flight mass spectrometer (Waters® SYNAPT™ HDMS™ system) in both positive (A) and negative (C) ion mode. In positive MS mode, lysophosphatidylcholines (lPCs) and lysophosphatidylethanolamines (lPEs) eluted first, followed by phosphatidylcholines (PCs), sphingomyelins (SMs), phosphatidylethanolamines (PEs) and cholesteryl esters (CEs); diacylglycerols (DAGs) and triacylglycerols (TAGs) had the longest retention times. In negative MS mode, fatty acids (FAs) were followed by phosphatidylglycerols (PGs), phosphatidylinositols (PIs), phosphatidylserines (PSs) and PEs. The chromatographic peaks corresponding to the different classes were detected as retention time and mass-to-charge ratio (m/z) pairs and their areas were recorded. Principal component analyses on 629 variables from triplicate analysis (C1, 2, 3 = control 1, 2, 3; P1, 2, 3 = endarterectomy patient 1, 2, 3) demonstrated a clear separation of atherosclerotic plaques and control radial arteries in positive (B) and negative (D) ion mode. The clustering of the technical replicates and the central projection of the pooled sample within the scores plot confirm the reproducibility of the analyses, and the goodness-of-fit test returned a chi-squared of 0.4 and an R-squared value of 0.6.
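The principal component analysis step described in the caption can be sketched with a few lines of NumPy: mean-center the sample-by-variable peak-area matrix and take its SVD. The toy matrix below is invented for illustration, not the paper's 629-variable dataset.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD of the mean-centered peak-area matrix (samples x variables).
    Returns the sample scores on the first principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * S[:n_components]

# Toy data: 3 "control" and 3 "plaque" samples, two lipid peak areas each
X = np.array([[10.0, 1.0], [11.0, 1.2], [10.5, 0.9],
              [1.0, 10.0], [1.2, 11.0], [0.9, 10.5]])
scores = pca_scores(X)
# The two groups fall on opposite sides of PC1 (the sign of PC1 is arbitrary)
```

A scores plot of PC1 against PC2 from such a matrix is what separates the plaque and control samples in panels B and D of the figure.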

Challenges in mass spectrometry

Mass spectrometry is an evolving technology, and the technological advances facilitate the detection and quantification of scarce proteins. Nonetheless, the enrichment of specific subproteomes using differential solubility or isolation of cellular organelles will remain important to increase coverage and, at least partially, overcome the inhomogeneity of diseased tissue, one of the major factors affecting sample-to-sample variation.

Proteomics is also the method of choice for the identification of post-translational modifications, which play an essential role in protein function, i.e. enzymatic activation, binding ability and formation of ECM structures. Again, efficient enrichment is essential to increase the likelihood of identifying modified peptides in complex mixtures. Lipidomics faces similar challenges. While the extraction of lipids is more selective, new enrichment methods are needed for scarce lipids as well as labile lipid metabolites that may have important bioactivity. Another pressing issue in lipidomics is data analysis, in particular the lack of automated search engines that can analyze mass spectra obtained from instruments of different vendors. Efforts to overcome this issue are currently underway.

Conclusions

Proteomics and lipidomics offer an unbiased platform for the investigation of ECM and lipids within atherosclerosis. In
combination, these innovative technologies will reveal key differences in proteolytic processes responsible for plaque rupture
and advance our understanding of ECM – lipoprotein interactions in atherosclerosis.



Active site of eNOS (PDB_1P6L) and nNOS (PDB_1P6H)

Table – metabolic targets

HK-II Phosphorylation
