
Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer

Author and Curator: Larry H. Bernstein, MD, FCAP

This summary is the last of a series on the impact of transcriptomics, proteomics, and metabolomics on disease investigation, and on the sorting and integration of genomic and metabolic signatures to explain phenotypic variability and individuality of response to disease, and how this leads to pharmaceutical discovery and personalized medicine.  We unquestionably have better tools at our disposal than have ever existed in the history of mankind, and an enormous knowledge base that has to be accessed.  I shall conclude these discussions with the current state of knowledge in biochemistry, metabolism, protein interactions, and signaling, and with the application of the -OMICS disciplines to disease and drug discovery.

The Ever-Transcendent Cell

Deriving physiologic first principles By John S. Torday | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41282/title/The-Ever-Transcendent-Cell/

Both the developmental and phylogenetic histories of an organism describe the evolution of physiology—the complex of metabolic pathways that govern the function of an organism as a whole. The necessity of establishing and maintaining homeostatic mechanisms began at the cellular level, with the very first cells, and homeostasis provides the underlying selection pressure fueling evolution.

While the events leading to the formation of the first functioning cell are debatable, a critical one was certainly the formation of simple lipid-enclosed vesicles, which provided a protected space for the evolution of metabolic pathways. Protocells evolved from a common ancestor that experienced environmental stresses early in the history of cellular development, such as acidic ocean conditions and low atmospheric oxygen levels, which shaped the evolution of metabolism.

The reduction of evolution to cell biology may answer the perennially unresolved question of why organisms return to their unicellular origins during the life cycle.

As primitive protocells evolved to form prokaryotes and, much later, eukaryotes, changes to the cell membrane occurred that were critical to the maintenance of chemiosmosis, the generation of bioenergy through the partitioning of ions. The incorporation of cholesterol into the plasma membrane surrounding primitive eukaryotic cells marked the beginning of their differentiation from prokaryotes. Cholesterol imparted more fluidity to eukaryotic cell membranes, enhancing functionality by increasing motility and endocytosis. Membrane deformability also allowed for increased gas exchange.

Acidification of the oceans by atmospheric carbon dioxide generated high intracellular calcium ion concentrations in primitive aquatic eukaryotes, which had to be lowered to prevent toxic effects, namely the aggregation of nucleotides, proteins, and lipids. The early cells achieved this by the evolution of calcium channels composed of cholesterol embedded within the cell’s plasma membrane, and of internal membranes, such as that of the endoplasmic reticulum, peroxisomes, and other cytoplasmic organelles, which hosted intracellular chemiosmosis and helped regulate calcium.

As eukaryotes thrived, they experienced increasingly competitive pressure for metabolic efficiency. Engulfed bacteria, assimilated as mitochondria, provided more bioenergy. As the evolution of eukaryotic organisms progressed, metabolic cooperation evolved, perhaps to enable competition with biofilm-forming, quorum-sensing prokaryotes. The subsequent appearance of multicellular eukaryotes expressing cellular growth factors and their respective receptors facilitated cell-cell signaling, forming the basis for an explosion of multicellular eukaryote evolution, culminating in the metazoans.

Casting a cellular perspective on evolution highlights the integration of genotype and phenotype. Starting from the protocell membrane, the functional homolog for all complex metazoan organs, it offers a way of experimentally determining the role of genes that fostered evolution based on the ontogeny and phylogeny of cellular processes that can be traced back, in some cases, to our last universal common ancestor.  ….

Given that the unicellular toolkit is complete with all the traits necessary for forming multicellular organisms (Science, 301:361-63, 2003), it is distinctly possible that metazoans are merely permutations of the unicellular body plan. That scenario would clarify a lot of puzzling biology: molecular commonalities between the skin, lung, gut, and brain that affect physiology and pathophysiology exist because the cell membranes of unicellular organisms perform the equivalents of these tissue functions, and the existence of pleiotropy—one gene affecting many phenotypes—may be a consequence of the common unicellular source for all complex biologic traits.  …

The cell-molecular homeostatic model for evolution and stability addresses how the external environment generates homeostasis developmentally at the cellular level. It also determines homeostatic set points in adaptation to the environment through specific effectors, such as growth factors and their receptors, second messengers, inflammatory mediators, crossover mutations, and gene duplications. This is a highly mechanistic, heritable, plastic process that lends itself to understanding evolution at the cellular, tissue, organ, system, and population levels, mediated by physiologically linked mechanisms throughout, without having to invoke random, chance mechanisms to bridge different scales of evolutionary change. In other words, it is an integrated mechanism that can often be traced all the way back to its unicellular origins.

The switch from swim bladder to lung as vertebrates moved from water to land is proof of principle that stress-induced evolution in metazoans can be understood from changes at the cellular level.

http://www.the-scientist.com/Nov2014/TE_21.jpg

A MECHANISTIC BASIS FOR LUNG DEVELOPMENT: Stress from periodic atmospheric hypoxia (1) during vertebrate adaptation to land enhances positive selection of the stretch-regulated parathyroid hormone-related protein (PTHrP) in the pituitary and adrenal glands. In the pituitary (2), PTHrP signaling upregulates the release of adrenocorticotropic hormone (ACTH) (3), which stimulates the release of glucocorticoids (GC) by the adrenal gland (4). In the adrenal gland, PTHrP signaling also stimulates glucocorticoid production of adrenaline (5), which in turn affects the secretion of lung surfactant, the distension of alveoli, and the perfusion of alveolar capillaries (6). PTHrP signaling integrates the inflation and deflation of the alveoli with surfactant production and capillary perfusion.  THE SCIENTIST STAFF

From a cell-cell signaling perspective, two critical duplications in genes coding for cell-surface receptors occurred during this period of water-to-land transition—in the stretch-regulated parathyroid hormone-related protein (PTHrP) receptor gene and the β adrenergic (βA) receptor gene. These gene duplications can be disassembled by following their effects on vertebrate physiology backwards over phylogeny. PTHrP signaling is necessary for traits specifically relevant to land adaptation: calcification of bone, skin barrier formation, and the inflation and distention of lung alveoli. Microvascular shear stress in PTHrP-expressing organs such as bone, skin, kidney, and lung would have favored duplication of the PTHrP receptor, since shear stress generates reactive oxygen species (ROS) known to have this effect and PTHrP is a potent vasodilator, acting as an epistatic balancing selection for this constraint.

Positive selection for PTHrP signaling also evolved in the pituitary and adrenal cortex (see figure on this page), stimulating the secretion of ACTH and corticoids, respectively, in response to the stress of land adaptation. This cascade amplified adrenaline production by the adrenal medulla, since corticoids passing through it enzymatically stimulate adrenaline synthesis. Positive selection for this functional trait may have resulted from hypoxic stress that arose during global episodes of atmospheric hypoxia over geologic time. Since hypoxia is the most potent physiologic stressor, such transient oxygen deficiencies would have been acutely alleviated by increasing adrenaline levels, which would have stimulated alveolar surfactant production, increasing gas exchange by facilitating the distension of the alveoli. Over time, increased alveolar distension would have generated more alveoli by stimulating PTHrP secretion, impelling evolution of the alveolar bed of the lung.

This scenario similarly explains βA receptor gene duplication, since increased density of the βA receptor within the alveolar walls was necessary for relieving another constraint during the evolution of the lung in adaptation to land: the bottleneck created by the existence of a common mechanism for blood pressure control in both the lung alveoli and the systemic blood pressure. The pulmonary vasculature was constrained by its ability to withstand the swings in pressure caused by the systemic perfusion necessary to sustain all the other vital organs. PTHrP is a potent vasodilator, subserving the blood pressure constraint, but eventually the βA receptors evolved to coordinate blood pressure in both the lung and the periphery.

Gut Microbiome Heritability

Analyzing data from a large twin study, researchers have homed in on how host genetics can shape the gut microbiome.
By Tracy Vence | The Scientist Nov 6, 2014

Previous research suggested host genetic variation can influence microbial phenotype, but an analysis of data from a large twin study published in Cell today (November 6) solidifies the connection between human genotype and the composition of the gut microbiome. Studying more than 1,000 fecal samples from 416 monozygotic and dizygotic twin pairs, Cornell University’s Ruth Ley and her colleagues have homed in on one bacterial taxon, the family Christensenellaceae, as the most highly heritable group of microbes in the human gut. The researchers also found that Christensenellaceae—which was first described just two years ago—is central to a network of co-occurring heritable microbes that is associated with lean body mass index (BMI).  …

Of particular interest was the family Christensenellaceae, which was the most heritable taxon among those identified in the team’s analysis of fecal samples obtained from the TwinsUK study population.
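
Heritability in a twin design comes from contrasting monozygotic with dizygotic pair correlations. As a toy illustration, here is Falconer's estimate, H² = 2(r_MZ − r_DZ), in Python; the abundance values are invented, and the study itself fit ACE variance-components models rather than this shortcut.

```python
# Falconer's estimate of broad-sense heritability from twin correlations:
#   H^2 = 2 * (r_MZ - r_DZ)
# Illustrative only: the abundances below are invented, and Goodrich et al.
# used ACE variance-components models rather than this simple formula.
import numpy as np

def pair_correlation(pairs):
    """Pearson correlation between twin 1 and twin 2 trait values."""
    a = np.array([p[0] for p in pairs], dtype=float)
    b = np.array([p[1] for p in pairs], dtype=float)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical relative abundances of a taxon in (twin1, twin2) pairs.
mz_pairs = [(0.12, 0.11), (0.30, 0.28), (0.05, 0.07), (0.22, 0.20), (0.15, 0.17)]
dz_pairs = [(0.12, 0.18), (0.30, 0.16), (0.05, 0.10), (0.22, 0.31), (0.15, 0.08)]

r_mz = pair_correlation(mz_pairs)
r_dz = pair_correlation(dz_pairs)
H2 = 2 * (r_mz - r_dz)
print(f"r_MZ = {r_mz:.2f}, r_DZ = {r_dz:.2f}, Falconer H^2 = {H2:.2f}")
```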

While microbiologists had previously detected 16S rRNA sequences belonging to Christensenellaceae in the human microbiome, the family wasn’t named until 2012. “People hadn’t looked into it, partly because it didn’t have a name . . . it sort of flew under the radar,” said Ley.

Ley and her colleagues discovered that Christensenellaceae appears to be the hub in a network of co-occurring heritable taxa, which—among TwinsUK participants—was associated with low BMI. The researchers also noted that earlier studies had reported Christensenellaceae at greater abundance in low-BMI twins.

To interrogate the effects of Christensenellaceae on host metabolic phenotype, Ley’s team introduced lean and obese human fecal samples into germ-free mice. They found that animals receiving lean fecal samples containing more Christensenellaceae showed reduced weight gain compared with their counterparts. And treatment of mice that had obesity-associated microbiomes with one member of the Christensenellaceae family, Christensenella minuta, led to reduced weight gain.   …

Ley and her colleagues are now focusing on the host alleles underlying the heritability of the gut microbiome. “We’re running a genome-wide association analysis to try to find genes—particular variants of genes—that might associate with higher levels of these highly heritable microbiota.  . . . Hopefully that will point us to possible reasons they’re heritable,” she said. “The genes will guide us toward understanding how these relationships are maintained between host genotype and microbiome composition.”

J.K. Goodrich et al., “Human genetics shape the gut microbiome,” Cell,  http://dx.doi.org/10.1016/j.cell.2014.09.053, 2014.

Light-Operated Drugs

Scientists create a photosensitive pharmaceutical to target a glutamate receptor.
By Ruth Williams | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41279/title/Light-Operated-Drugs/

light operated drugs MO1

http://www.the-scientist.com/Nov2014/MO1.jpg

The desire for temporal and spatial control of medications to minimize side effects and maximize benefits has inspired the development of light-controllable drugs, or optopharmacology. Early versions of such drugs have manipulated ion channels or protein-protein interactions, “but never, to my knowledge, G protein–coupled receptors [GPCRs], which are one of the most important pharmacological targets,” says Pau Gorostiza of the Institute for Bioengineering of Catalonia, in Barcelona.

Gorostiza has taken the first step toward filling that gap, creating a photosensitive inhibitor of the metabotropic glutamate 5 (mGlu5) receptor—a GPCR expressed in neurons and implicated in a number of neurological and psychiatric disorders. The new mGlu5 inhibitor—called alloswitch-1—is based on a known mGlu receptor inhibitor, but the simple addition of a light-responsive appendage, as had been done for other photosensitive drugs, wasn’t an option. The binding site on mGlu5 is “extremely tight,” explains Gorostiza, and would not accommodate a differently shaped molecule. Instead, alloswitch-1 has an intrinsic light-responsive element.

In a human cell line, the drug was active under dim light conditions, switched off by exposure to violet light, and switched back on by green light. When Gorostiza’s team administered alloswitch-1 to tadpoles, switching between violet and green light made the animals stop and start swimming, respectively.

The fact that alloswitch-1 is constitutively active and switched off by light is not ideal, says Gorostiza. “If you are thinking of therapy, then in principle you would prefer the opposite,” an “on” switch. Indeed, tweaks are required before alloswitch-1 could be a useful drug or research tool, says Stefan Herlitze, who studies ion channels at Ruhr-Universität Bochum in Germany. But, he adds, “as a proof of principle it is great.” (Nat Chem Biol, http://dx.doi.org/10.1038/nchembio.1612, 2014)

Enhanced Enhancers

The recent discovery of super-enhancers may offer new drug targets for a range of diseases.
By Eric Olson | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41281/title/Enhanced-Enhancers/

To understand disease processes, scientists often focus on unraveling how gene expression in disease-associated cells is altered. Increases or decreases in transcription—as dictated by a regulatory stretch of DNA called an enhancer, which serves as a binding site for transcription factors and associated proteins—can produce an aberrant composition of proteins, metabolites, and signaling molecules that drives pathologic states. Identifying the root causes of these changes may lead to new therapeutic approaches for many different diseases.

Although few therapies for human diseases aim to alter gene expression, the outstanding examples—including antiestrogens for hormone-positive breast cancer, antiandrogens for prostate cancer, and PPAR-γ agonists for type 2 diabetes—demonstrate the benefits that can be achieved through targeting gene-control mechanisms.  Now, thanks to recent papers from laboratories at MIT, Harvard, and the National Institutes of Health, researchers have a new, much bigger transcriptional target: large DNA regions known as super-enhancers or stretch-enhancers. Already, work on super-enhancers is providing insights into how gene-expression programs are established and maintained, and how they may go awry in disease.  Such research promises to open new avenues for discovering medicines for diseases where novel approaches are sorely needed.

Super-enhancers cover stretches of DNA that are 10- to 100-fold longer and about 10-fold less abundant in the genome than typical enhancer regions (Cell, 153:307-19, 2013). They also appear to bind a large percentage of the transcriptional machinery compared to typical enhancers, allowing them to better establish and enforce cell-type specific transcriptional programs (Cell, 153:320-34, 2013).
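
To see how such regions are called in practice, here is a minimal sketch in the spirit of the ROSE ranking approach used in the cited Cell papers: rank all enhancers by ChIP-seq signal and place the cutoff where the scaled rank-signal curve's slope passes 1. The signal values are simulated; real pipelines first stitch nearby enhancers and subtract input background.

```python
# Minimal sketch of the rank-by-signal approach used to call super-enhancers
# (in the spirit of ROSE; Whyte/Loven et al., Cell 2013). Real pipelines first
# stitch nearby enhancers and subtract input signal; signals here are invented.
import numpy as np

def super_enhancer_threshold(signal):
    """Rank enhancers by signal; the cutoff is where the scaled rank-signal
    curve's slope reaches 1, i.e. where signal starts rising sharply."""
    s = np.sort(np.asarray(signal, dtype=float))
    x = np.arange(len(s)) / (len(s) - 1)        # rank scaled to [0, 1]
    y = (s - s.min()) / (s.max() - s.min())     # signal scaled to [0, 1]
    slopes = np.gradient(y, x)
    cutoff_idx = np.argmax(slopes > 1.0)        # first point with slope > 1
    return s[cutoff_idx]                        # signal threshold

rng = np.random.default_rng(0)
typical = rng.gamma(2.0, 5.0, size=5000)        # many modest enhancers
supers = rng.gamma(2.0, 60.0, size=50)          # a few very strong ones
signal = np.concatenate([typical, supers])
threshold = super_enhancer_threshold(signal)
print(f"threshold = {threshold:.1f}; {np.sum(signal > threshold)} called 'super'")
```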

Super-enhancers are closely associated with genes that dictate cell identity, including those for cell-type–specific master regulatory transcription factors. This observation led to the intriguing hypothesis that cells with a pathologic identity, such as cancer cells, have an altered gene expression program driven by the loss, gain, or altered function of super-enhancers.

Sure enough, by mapping the genome-wide location of super-enhancers in several cancer cell lines and from patients’ tumor cells, we and others have demonstrated that genes located near super-enhancers are involved in processes that underlie tumorigenesis, such as cell proliferation, signaling, and apoptosis.

Genome-wide association studies (GWAS) have found that disease- and trait-associated genetic variants often occur in greater numbers in super-enhancers (compared to typical enhancers) in cell types involved in the disease or trait of interest (Cell, 155:934-47, 2013). For example, an enrichment of fasting glucose–associated single nucleotide polymorphisms (SNPs) was found in the stretch-enhancers of pancreatic islet cells (PNAS, 110:17921-26, 2013). Given that some 90 percent of reported disease-associated SNPs are located in noncoding regions, super-enhancer maps may be extremely valuable in assigning functional significance to GWAS variants and identifying target pathways.
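
The enrichment claim can be tested with a simple 2×2 contingency table: trait-associated versus background SNPs, inside super-enhancers versus typical enhancers. A sketch with invented counts, using Fisher's exact test:

```python
# Sketch of testing whether trait-associated SNPs are enriched in
# super-enhancers versus typical enhancers, using Fisher's exact test.
# Counts are invented for illustration.
from scipy.stats import fisher_exact

# 2x2 table: rows = (in super-enhancer, in typical enhancer),
#            cols = (trait-associated SNPs, background SNPs)
table = [[40, 1_000],   # inside super-enhancers
         [60, 9_000]]   # inside typical enhancers

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided P = {p_value:.2e}")
```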

Because only 1 to 2 percent of active genes are physically linked to a super-enhancer, mapping the locations of super-enhancers can be used to pinpoint the small number of genes that may drive the biology of that cell. Differential super-enhancer maps that compare normal cells to diseased cells can be used to unravel the gene-control circuitry and identify new molecular targets, in much the same way that somatic mutations in tumor cells can point to oncogenic drivers in cancer. This approach is especially attractive in diseases for which an incomplete understanding of the pathogenic mechanisms has been a barrier to discovering effective new therapies.

Another therapeutic approach could be to disrupt the formation or function of super-enhancers by interfering with their associated protein components. This strategy could make it possible to downregulate multiple disease-associated genes through a single molecular intervention. A group of Boston-area researchers recently published support for this concept when they described inhibited expression of cancer-specific genes, leading to a decrease in cancer cell growth, by using a small molecule inhibitor to knock down a super-enhancer component called BRD4 (Cancer Cell, 24:777-90, 2013).  More recently, another group showed that expression of the RUNX1 transcription factor, involved in a form of T-cell leukemia, can be diminished by treating cells with an inhibitor of a transcriptional kinase that is present at the RUNX1 super-enhancer (Nature, 511:616-20, 2014).

Fungal effector Ecp6 outcompetes host immune receptor for chitin binding through intrachain LysM dimerization 
Andrea Sánchez-Vallet, et al.   eLife 2013;2:e00790   http://elifesciences.org/content/2/e00790

LysM effector

http://img.scoop.it/ZniCRKQSvJOG18fHbb4p0Tl72eJkfbmt4t8yenImKBVvK0kTmF0xjctABnaLJIm9

While host immune receptors

  • detect pathogen-associated molecular patterns to activate immunity,
  • pathogens attempt to deregulate host immunity through secreted effectors.

Fungi employ LysM effectors to prevent

  • recognition of cell wall-derived chitin by host immune receptors

Structural analysis of the LysM effector Ecp6 of

  • the fungal tomato pathogen Cladosporium fulvum reveals
  • a novel mechanism for chitin binding,
  • mediated by intrachain LysM dimerization,

leading to a chitin-binding groove that is deeply buried in the effector protein.

This composite binding site involves

  • two of the three LysMs of Ecp6 and
  • mediates chitin binding with ultra-high (pM) affinity.

The remaining singular LysM domain of Ecp6 binds chitin with

  • low micromolar affinity but can nevertheless still perturb chitin-triggered immunity.

Conceivably, the perturbation by this LysM domain is not established through chitin sequestration but possibly through interference with the host immune receptor complex.
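
The affinity gap explains the division of labor: at a given free-chitin concentration, single-site occupancy f = [L]/(Kd + [L]) is near-saturating for a picomolar site and negligible for a micromolar one. A sketch with an assumed 1 nM chitin concentration (it ignores competition with the host receptor and avidity effects):

```python
# Single-site equilibrium occupancy, f = [chitin] / (Kd + [chitin]), for the
# two reported affinities. Illustrative only: the free-chitin concentration is
# assumed, and competition with the host receptor is ignored.
def fraction_bound(ligand_conc_m, kd_m):
    return ligand_conc_m / (kd_m + ligand_conc_m)

chitin = 1e-9  # 1 nM free chitin fragments (assumed for illustration)
for name, kd in [("composite two-LysM site (~pM)", 1e-12),
                 ("singular LysM domain (~low uM)", 1e-6)]:
    print(f"{name}: occupancy = {fraction_bound(chitin, kd):.4f}")
```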

Mutated Genes in Schizophrenia Map to Brain Networks
From www.nih.gov –  Sep 3, 2013

Previous studies have shown that many people with schizophrenia have de novo, or new, genetic mutations. These misspellings in a gene’s DNA sequence

  • occur spontaneously and so aren’t shared by their close relatives.

Dr. Mary-Claire King of the University of Washington in Seattle and colleagues set out to

  • identify spontaneous genetic mutations in people with schizophrenia and
  • to assess where and when in the brain these misspelled genes are turned on, or expressed.

The study was funded in part by NIH’s National Institute of Mental Health (NIMH). The results were published in the August 1, 2013, issue of Cell.

The researchers sequenced the exomes (protein-coding DNA regions) of 399 people—105 with schizophrenia plus their unaffected parents and siblings. Gene variations
that were found in a person with schizophrenia but not in either parent were considered spontaneous.
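
In code, the trio filter is a set difference: a variant is a candidate de novo mutation if the proband carries it and neither parent does. A toy sketch (real pipelines add read-depth, genotype-quality, and population-frequency filters):

```python
# Minimal sketch of calling a variant "spontaneous" (de novo) in trio data:
# present in the proband but absent from both parents. Genotypes are toy data.
def de_novo_variants(child, mother, father):
    """Each argument is a set of variant keys, e.g. (chrom, pos, ref, alt)."""
    return child - mother - father

child  = {("chr2", 166_152, "G", "A"), ("chr7", 50_344, "C", "T")}
mother = {("chr7", 50_344, "C", "T")}
father = set()
print(de_novo_variants(child, mother, father))
# -> {('chr2', 166152, 'G', 'A')}: candidate de novo mutation
```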

The likelihood of having a spontaneous mutation was associated with

  • the age of the father in both affected and unaffected siblings.

Significantly more mutations were found in people

  • whose fathers were 33–45 years old at the time of conception, compared to 19–28 years.

Among people with schizophrenia, the scientists identified

  • 54 genes with spontaneous mutations
  • predicted to cause damage to the function of the protein they encode.

The researchers used newly available database resources that show

  • where in the brain and when during development genes are expressed.

The genes form an interconnected expression network with many more connections than

  • that of the genes with spontaneous damaging mutations in unaffected siblings.

The spontaneously mutated genes in people with schizophrenia

  • were expressed in the prefrontal cortex, a region in the front of the brain.

The genes are known to be involved in important pathways in brain development. Fifty of these genes were active

  • mainly during the period of fetal development.

“Processes critical for the brain’s development can be revealed by the mutations that disrupt them,” King says. “Mutations can lead to loss of integrity of a whole pathway,
not just of a single gene.”

These findings support the concept that schizophrenia may result, in part, from

  • disruptions in development in the prefrontal cortex during fetal development.

James E. Darnell’s “Reflections”

A brief history of the discovery of RNA and its role in transcription — peppered with career advice
By Joseph P. Tiano

James Darnell begins his Journal of Biological Chemistry “Reflections” article by saying, “graduate students these days

  • have to swim in a sea virtually turgid with the daily avalanche of new information and
  • may be momentarily too overwhelmed to listen to the aging.

I firmly believe how we learned what we know can provide useful guidance for how and what a newcomer will learn.” Considering his remarkable discoveries in

  • RNA processing and eukaryotic transcriptional regulation

spanning 60 years of research, Darnell’s advice should be cherished. In his second year at medical school at Washington University School of Medicine in St. Louis, while
studying streptococcal disease in Robert J. Glaser’s laboratory, Darnell realized he “loved doing the experiments” and had his first “career advancement event.”
He and technician Barbara Pesch discovered that in vivo penicillin treatment killed streptococci only in the exponential growth phase and not in the stationary phase. These
results were published in the Journal of Clinical Investigation and earned Darnell an interview with Harry Eagle at the National Institutes of Health.

Darnell arrived at the NIH in 1956, shortly after Eagle shifted his research interest to developing his minimal essential cell culture medium, which is still used today. Eagle, then studying cell metabolism, suggested that Darnell take up a side project on poliovirus replication in mammalian cells in collaboration with Robert I. DeMars. DeMars’ Ph.D.
adviser was also James Watson’s mentor, so Darnell met Watson, who invited him to give a talk at Harvard University, which led to an assistant professor position
at MIT under Salvador Luria. A take-home message is to embrace side projects, because you never know where they may lead: this project helped to shape
his career.

Darnell arrived in Boston in 1961. Following the discovery of DNA’s structure in 1953, the world of molecular biology was turning to RNA in an effort to understand how
proteins are made. Darnell’s background in virology (it was discovered in 1960 that viruses used RNA to replicate) was ideal for the aim of his first independent lab:
exploring mRNA in animal cells grown in culture. While at MIT, he developed a new technique for purifying RNA along with making other observations

  • suggesting that nonribosomal cytoplasmic RNA may be involved in protein synthesis.

When Darnell moved to Albert Einstein College of Medicine for a full professorship in 1964, it was hypothesized that heterogeneous nuclear RNA was a precursor to mRNA.
At Einstein, Darnell discovered RNA processing of pre-tRNAs and demonstrated for the first time

  • that a specific nuclear RNA could represent a possible specific mRNA precursor.

In 1967 Darnell took a position at Columbia University, and it was there that he discovered (simultaneously with two other labs) that

  • mRNA contained a polyadenosine tail.

The three groups all published their results together in the Proceedings of the National Academy of Sciences in 1971. Shortly afterward, Darnell made his final career move
four short miles down the street to Rockefeller University in 1974.

Over the next 35-plus years at Rockefeller, Darnell never strayed from his original research question: How do mammalian cells make and control the making of different
mRNAs? His work was instrumental in the collaborative discovery of

  • splicing in the late 1970s and
  • in identifying and cloning many transcriptional activators.

Perhaps his greatest contribution during this time, with the help of Ernest Knight, was

  • the discovery and cloning of the signal transducers and activators of transcription (STAT) proteins.

And with George Stark, Andy Wilks and John Krolewski, he described

  • cytokine signaling via the JAK-STAT pathway.

Darnell closes his “Reflections” with perhaps his best advice: Do not get too wrapped up in your own work, because “we are all needed and we are all in this together.”

Darnell Reflections - James_Darnell

http://www.asbmb.org/assets/0/366/418/428/85528/85529/85530/8758cb87-84ff-42d6-8aea-96fda4031a1b.jpg

Recent findings on presenilins and signal peptide peptidase

By Dinu-Valantin Bălănescu

Fig. 1 from the minireview shows a schematic depiction of γ-secretase and SPP

http://www.asbmb.org/assets/0/366/418/428/85528/85529/85530/c2de032a-daad-41e5-ba19-87a17bd26362.png

GxGD proteases are a family of intramembranous enzymes capable of hydrolyzing

  • the transmembrane domain of some integral membrane proteins.

The GxGD family is one of the three families of

  • intramembrane-cleaving proteases discovered so far (along with the rhomboid and site-2 protease) and
  • includes the γ-secretase and the signal peptide peptidase.

Although only recently discovered, a number of functions in human pathology and in numerous other biological processes

  • have been attributed to γ-secretase and SPP.

Taisuke Tomita and Takeshi Iwatsubo of the University of Tokyo highlighted the latest findings on the structure and function of γ-secretase and SPP
in a recent minireview in The Journal of Biological Chemistry.

  • γ-secretase is involved in cleaving the amyloid-β precursor protein, thus producing amyloid-β peptide,

the main component of senile plaques in Alzheimer’s disease patients’ brains. The complete structure of mammalian γ-secretase is not yet known; however,
Tomita and Iwatsubo note that biochemical analyses have revealed it to be a multisubunit protein complex.

  • Its catalytic subunit is presenilin, an aspartyl protease.

In vitro and in vivo functional and chemical biology analyses have revealed that

  • presenilin is a modulator and mandatory component of the γ-secretase–mediated cleavage of APP.

Genetic studies have identified three other components required for γ-secretase activity:

  1. nicastrin,
  2. anterior pharynx defective 1 and
  3. presenilin enhancer 2.

By coexpression of presenilin with the other three components, the authors managed to

  • reconstitute γ-secretase activity.

Using the substituted cysteine accessibility method and topological analyses, Tomita and Iwatsubo determined that

  • the catalytic aspartates are located at the center of the nine transmembrane domains of presenilin,
  • thereby revealing the exact location of the enzyme’s catalytic site.

The minireview also describes in detail the formerly enigmatic mechanism of γ-secretase mediated cleavage.

SPP, an enzyme that cleaves remnant signal peptides in the membrane

  • during the biogenesis of membrane proteins and
  • signal peptides from major histocompatibility complex type I,
  • also is involved in the maturation of proteins of the hepatitis C virus and GB virus B.

Bioinformatics methods have revealed in fruit flies and mammals four SPP-like proteins,

  • two of which are involved in immunological processes.

By using γ-secretase inhibitors and modulators, it has been confirmed

  • that SPP shares a similar GxGD active site and proteolytic activity with γ-secretase.

Upon purification of the human SPP protein with the baculovirus/Sf9 cell system,

  • single-particle analysis revealed further structural and functional details.

HLA targeting efficiency correlates with human T-cell response magnitude and with mortality from influenza A infection

From www.pnas.org –  Sep 3, 2013 4:24 PM

Experimental and computational evidence suggests that

  • HLAs preferentially bind conserved regions of viral proteins, a concept we term “targeting efficiency,” and that
  • this preference may provide improved clearance of infection in several viral systems.

To test this hypothesis, T-cell responses to A/H1N1 (2009) were measured from peripheral blood mononuclear cells obtained from a household cohort study
performed during the 2009–2010 influenza season. We found that HLA targeting efficiency scores significantly correlated with

  • IFN-γ enzyme-linked immunosorbent spot responses (P = 0.042, multiple regression).

A further population-based analysis found that the carriage frequencies of the alleles with the lowest targeting efficiencies, such as A*24,

  • were associated with pH1N1 mortality (r = 0.37, P = 0.031) and
  • are common in certain indigenous populations in which increased pH1N1 morbidity has been reported.

HLA efficiency scores and HLA use are associated with CD8 T-cell magnitude in humans after influenza infection.
The computational tools used in this study may be useful predictors of potential morbidity and

  • identify immunologic differences of new variant influenza strains
  • more accurately than evolutionary sequence comparisons.

Population-based studies of the relative frequency of these alleles in severe vs. mild influenza cases

  • might advance clinical practices for severe H1N1 infections among genetically susceptible populations.
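
The population-level analysis reduces to a correlation between allele carriage frequency and mortality across populations. A sketch with invented placeholder numbers (the study reports r = 0.37, P = 0.031 for its actual data):

```python
# Sketch of the population-level analysis: Pearson correlation between the
# carriage frequency of a low-efficiency allele and pH1N1 mortality across
# populations. The numbers below are invented placeholders, not study data.
from scipy.stats import pearsonr

carriage_freq = [0.10, 0.18, 0.25, 0.31, 0.40, 0.22, 0.35, 0.15]  # A*24 carriage
mortality     = [1.2,  1.9,  2.1,  3.0,  3.4,  1.7,  3.1,  1.4]   # deaths per 100k

r, p = pearsonr(carriage_freq, mortality)
print(f"r = {r:.2f}, P = {p:.3f}")
```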

Metabolomics in drug target discovery

J D Rabinowitz et al.

Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ.
Cold Spring Harbor Symposia on Quantitative Biology 11/2011; 76:235-46.
http://dx.doi.org/10.1101/sqb.2011.76.010694

Most diseases result in metabolic changes. In many cases, these changes play a causative role in disease progression. By identifying pathological metabolic changes,

  • metabolomics can point to potential new sites for therapeutic intervention.

Particularly promising enzymatic targets are those that

  • carry increased flux in the disease state.

Definitive assessment of flux requires the use of isotope tracers. Here we present techniques for

  • finding new drug targets using metabolomics and isotope tracers.

The utility of these methods is exemplified in the study of three different viral pathogens. For influenza A and herpes simplex virus,

  • metabolomic analysis of infected versus mock-infected cells revealed
  • dramatic concentration changes around the current antiviral target enzymes.

Similar analysis of human-cytomegalovirus-infected cells, however, found the greatest changes

  • in a region of metabolism unrelated to the current antiviral target.

Instead, it pointed to the tricarboxylic acid (TCA) cycle and

  • its efflux to feed fatty acid biosynthesis as a potential preferred target.

Isotope tracer studies revealed that cytomegalovirus greatly increases flux through

  • the key fatty acid metabolic enzyme acetyl-coenzyme A carboxylase.
  • Inhibition of this enzyme blocks human cytomegalovirus replication.

Examples where metabolomics has contributed to identification of anticancer drug targets are also discussed. Eventual proof of the value of

  • metabolomics as a drug target discovery strategy will be
  • successful clinical development of therapeutics hitting these new targets.
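
The isotope-tracer logic mentioned above can be made concrete: after switching cells to a ¹³C-labeled nutrient, the unlabeled fraction of a downstream metabolite decays exponentially with rate flux/pool, so fitting the decay and multiplying by the (separately measured) pool size yields the flux. A sketch with simulated labeling data and an assumed pool size:

```python
# Sketch of kinetic flux profiling with an isotope tracer: after switching
# cells to a 13C-labeled nutrient, the unlabeled fraction of a downstream
# metabolite decays as exp(-(flux/pool)*t). Fitting that decay gives the
# turnover rate, and flux = rate * pool size. Data below are simulated.
import numpy as np
from scipy.optimize import curve_fit

def unlabeled_fraction(t, k):
    return np.exp(-k * t)

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])            # minutes after switch
frac = np.array([1.00, 0.62, 0.40, 0.15, 0.03])    # simulated measurements

(k_fit,), _ = curve_fit(unlabeled_fraction, t, frac, p0=[0.5])
pool = 2.0                                          # nmol per g cells (assumed)
print(f"turnover k = {k_fit:.2f} /min, flux = {k_fit * pool:.2f} nmol/g/min")
```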

 Related References

Use of metabolic pathway flux information in targeted cancer drug design. Drug Discovery Today: Therapeutic Strategies 1:435-443, 2004.

Detection of resistance to imatinib by metabolic profiling: clinical and drug development implications. Am J Pharmacogenomics. 2005;5(5):293-302. Review. PMID: 16196499

Medicinal chemistry, metabolic profiling and drug target discovery: a role for metabolic profiling in reverse pharmacology and chemical genetics.
Mini Rev Med Chem. 2005 Jan;5(1):13-20. Review. PMID: 15638788

Development of Tracer-Based Metabolomics and its Implications for the Pharmaceutical Industry. Int J Pharm Med 2007; 21 (3): 217-224.

Use of metabolic pathway flux information in anticancer drug design. Ernst Schering Found Symp Proc. 2007;(4):189-203. Review. PMID: 18811058

Pharmacological targeting of glucagon and glucagon-like peptide 1 receptors has different effects on energy state and glucose homeostasis in diet-induced obese mice. J Pharmacol Exp Ther. 2011 Jul;338(1):70-81. http://dx.doi.org/10.1124/jpet.111.179986. PMID: 21471191

Single valproic acid treatment inhibits glycogen and RNA ribose turnover while disrupting glucose-derived cholesterol synthesis in liver as revealed by the
[U-13C6]-d-glucose tracer in mice. Metabolomics. 2009 Sep;5(3):336-345. PMID: 19718458

Metabolic Pathways as Targets for Drug Screening, Metabolomics, Dr Ute Roessner (Ed.), ISBN: 978-953-51-0046-1, InTech, Available from: http://www.intechopen.com/books/metabolomics/metabolic-pathways-as-targets-for-drug-screening

Iron regulates glucose homeostasis in liver and muscle via AMP-activated protein kinase in mice. FASEB J. 2013 Jul;27(7):2845-54.
http://dx.doi.org/10.1096/fj.12-216929. PMID: 23515442

Metabolomics and systems pharmacology: why and how to model the human metabolic network for drug discovery

Drug Discov. Today 19 (2014), 171–182     http://dx.doi.org/10.1016/j.drudis.2013.07.014

Highlights

  • We now have metabolic network models; the metabolome is represented by their nodes.
  • Metabolite levels are sensitive to changes in enzyme activities.
  • Drugs hitchhike on metabolite transporters to get into and out of cells.
  • The consensus network Recon2 represents the present state of the art, and has predictive power.
  • Constraint-based modelling relates network structure to metabolic fluxes.
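
The last highlight can be illustrated with a toy flux-balance calculation: maximize a "biomass" flux subject to steady state (S·v = 0) and capacity bounds, exactly the linear program solved for genome-scale reconstructions such as Recon2. The three-reaction network below is invented:

```python
# A minimal flux-balance-analysis (constraint-based) sketch: maximize a
# "biomass" flux subject to steady state (S v = 0) and flux bounds, using
# linear programming. The three-reaction toy network is invented; genome-scale
# reconstructions such as Recon2 work the same way with thousands of reactions.
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B. Reactions: v1 uptake -> A; v2 A -> B; v3 B -> (biomass)
S = np.array([[ 1, -1,  0],    # A balance
              [ 0,  1, -1]])   # B balance
bounds = [(0, 10), (0, 5), (0, None)]   # v2 capacity-limits the pathway
c = [0, 0, -1]                           # maximize v3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes v1, v2, v3 =", res.x)   # -> [5, 5, 5]: v2 is limiting
```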

Metabolism represents the ‘sharp end’ of systems biology, because changes in metabolite concentrations are

  • necessarily amplified relative to changes in the transcriptome, proteome and enzyme activities, which can be modulated by drugs.

To understand such behaviour, we therefore need (and increasingly have) reliable consensus (community) models of

  • the human metabolic network that include the important transporters.

Small molecule ‘drug’ transporters are in fact metabolite transporters, because

  • drugs bear structural similarities to metabolites known from the network reconstructions and
  • from measurements of the metabolome.

Recon2 represents the present state-of-the-art human metabolic network reconstruction; it can predict inter alia:

(i) the effects of inborn errors of metabolism;

(ii) which metabolites are exometabolites, and

(iii) how metabolism varies between tissues and cellular compartments.

However, even these qualitative network models are not yet complete. As our understanding improves

  • so do we recognise more clearly the need for a systems (poly)pharmacology.

Introduction – a systems biology approach to drug discovery

It is clearly not news that the productivity of the pharmaceutical industry has declined significantly during recent years,

  • following an ‘inverse Moore’s Law’, Eroom’s Law, or
  • that many commentators consider that the main cause of this is
  • an excessive focus on individual molecular target discovery rather than a more sensible strategy
  • based on a systems-level approach (Fig. 1).

drug discovery science

Figure 1.

The change in drug discovery strategy from ‘classical’ function-first approaches (in which the assay of drug function was at the tissue or organism level),
with mechanistic studies potentially coming later, to more-recent target-based approaches where initial assays usually involve assessing the interactions
of drugs with specified (and often cloned, recombinant) proteins in vitro. In the latter cases, effects in vivo are assessed later, with concomitantly high levels of attrition.

Arguably the two chief hallmarks of the systems biology approach are:

(i) that we seek to make mathematical models of our systems iteratively or in parallel with well-designed ‘wet’ experiments, and
(ii) that we do not necessarily start with a hypothesis but measure as many things as possible (the ’omes) and

  • let the data tell us the hypothesis that best fits and describes them.

Although metabolism was once seen as something of a Cinderella subject,

  • there are fundamental reasons to do with the organisation of biochemical networks as
  • to why the metabol(om)ic level – now in fact seen as the ‘apogee’ of the ’omics trilogy –
  •  is indeed likely to be far more discriminating than are
  • changes in the transcriptome or proteome.

The next two subsections deal with these points and Fig. 2 summarises the paper in the form of a Mind Map.

metabolomics and systems pharmacology

http://ars.els-cdn.com/content/image/1-s2.0-S1359644613002481-gr2.jpg

Metabolic Disease Drug Discovery— “Hitting the Target” Is Easier Said Than Done

David E. Moller, et al.   http://dx.doi.org/10.1016/j.cmet.2011.10.012

Despite the advent of new drug classes, the global epidemic of cardiometabolic disease has not abated. Continuing

  • unmet medical needs remain a major driver for new research.

Drug discovery approaches in this field have mirrored industry trends, leading to a recent

  • increase in the number of molecules entering development.

However, worrisome trends and newer hurdles are also apparent. The history of two newer drug classes—

  1. glucagon-like peptide-1 receptor agonists and
  2. dipeptidyl peptidase-4 inhibitors—

illustrates both progress and challenges. Future success requires that researchers learn from these experiences and

  • continue to explore and apply new technology platforms and research paradigms.

The global epidemic of obesity and diabetes continues to progress relentlessly. The International Diabetes Federation predicts an even greater diabetes burden (>430 million people afflicted) by 2030, which will disproportionately affect developing nations (International Diabetes Federation, 2011). Yet

  • existing drug classes for diabetes, obesity, and comorbid cardiovascular (CV) conditions have substantial limitations.

Currently available prescription drugs for treatment of hyperglycemia in patients with type 2 diabetes (Table 1) have notable shortcomings.

Therefore, clinicians must often use combination therapy, adding additional agents over time. Ultimately many patients will need to use insulin—a therapeutic class first introduced in 1922. Most existing agents also have

  • issues around safety and tolerability as well as dosing convenience (which can impact patient compliance).

Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics,

  • the quantification and analysis of metabolites produced by the body.

It refers to the direct measurement of metabolites in an individual’s bodily fluids, in order to

  • predict or evaluate the metabolism of pharmaceutical compounds, and
  • to better understand the pharmacokinetic profile of a drug.

Alternatively, pharmacometabolomics can be applied to measure metabolite levels

  • following the administration of a pharmaceutical compound, in order to
  • monitor the effects of the compound on certain metabolic pathways (pharmacodynamics).

This provides detailed mapping of drug effects on metabolism and

  • the pathways that are implicated in mechanism of variation of response to treatment.

In addition, the metabolic profile of an individual at baseline (metabotype) provides information about

  • how individuals respond to treatment and highlights heterogeneity within a disease state.
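
One way to use a baseline metabotype is as a predictor of responder status. A sketch using logistic regression on simulated pre-dose metabolite levels (real studies involve hundreds of metabolites and cross-validation):

```python
# Sketch of using a baseline metabotype to predict treatment response:
# logistic regression on pre-dose metabolite levels against responder status.
# Data are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100
baseline = rng.normal(size=(n, 3))             # 3 baseline metabolite levels
# Simulate response driven by metabolite 0 plus noise.
responder = (baseline[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(baseline, responder)
print("coefficients:", model.coef_.round(2))
print("training accuracy:", model.score(baseline, responder).round(2))
```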

All three approaches require the quantification of metabolites found in bodily fluids.

relationship between -OMICS

http://upload.wikimedia.org/wikipedia/commons/thumb/e/eb/OMICS.png/350px-OMICS.png

Pharmacometabolomics is thought to provide information that complements the genomic, transcriptomic, and proteomic levels of analysis described below.

Looking at the characteristics of an individual down through these different levels of detail, there is an

  • increasingly more accurate prediction of a person’s ability to respond to a pharmaceutical compound.
  1. the genome, made up of 25,000 genes, can indicate possible errors in drug metabolism;
  2. the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed;
  3. and the proteome, >10,000,000 members, depicts which proteins are active in the body to carry out these functions.

Pharmacometabolomics complements the omics with

  • direct measurement of the products of all of these reactions, but with perhaps a relatively
  • smaller number of members: that was initially projected to be approximately 2200 metabolites,

but could be a larger number when gut derived metabolites and xenobiotics are added to the list. Overall, the goal of pharmacometabolomics is

  • to more closely predict or assess the response of an individual to a pharmaceutical compound,
  • permitting continued treatment with the right drug or dosage
  • depending on the variations in their metabolism and ability to respond to treatment.

Pharmacometabolomic analyses, through the use of a metabolomics approach,

  • can provide a comprehensive and detailed metabolic profile or “metabolic fingerprint” for an individual patient.

Such metabolic profiles can provide a complete overview of individual metabolite or pathway alterations.

This approach can then be applied to the prediction of response to a pharmaceutical compound

  • by patients with a particular metabolic profile.

Pharmacometabolomic analyses of drug response are often coupled with pharmacogenetic analyses.

Pharmacogenetics focuses on the identification of genetic variations (e.g. single-nucleotide polymorphisms)

  • within patients that may contribute to altered drug responses and overall outcome of a certain treatment.

The results of pharmacometabolomics analyses can act to “inform” or “direct”

  • pharmacogenetic analyses by correlating aberrant metabolite concentrations or metabolic pathways to potential alterations at the genetic level.

This concept has been established with two seminal publications from studies of antidepressant serotonin reuptake inhibitors

  • where metabolic signatures were able to define a pathway implicated in response to the antidepressant and
  • that led to the identification of genetic variants within a key gene
  • within the highlighted pathway as being implicated in variation in response.

These genetic variants were not identified through genetic analysis alone and hence

  • illustrated how metabolomics can guide and inform genetic data.
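
The "metabolomics informs genetics" step amounts to testing metabolite levels against genotype at candidate SNPs in the highlighted pathway. A sketch using one-way ANOVA across the three genotype groups, with simulated values:

```python
# Sketch of "metabolomics informing genetics": test whether a metabolite's
# level differs by genotype at a candidate SNP in the highlighted pathway
# (one-way ANOVA across the three genotype groups). Values are simulated.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
metabolite_by_genotype = {
    "AA": rng.normal(1.0, 0.2, 40),   # homozygous reference
    "AG": rng.normal(1.2, 0.2, 40),   # heterozygous
    "GG": rng.normal(1.5, 0.2, 20),   # homozygous alternate
}
F, p = f_oneway(*metabolite_by_genotype.values())
print(f"F = {F:.1f}, P = {p:.2e}")
```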

en.wikipedia.org/wiki/Pharmacometabolomics

Benznidazole Biotransformation and Multiple Targets in Trypanosoma cruzi Revealed by Metabolomics

Andrea Trochine, Darren J. Creek, Paula Faral-Tello, Michael P. Barrett, Carlos Robello
Published: May 22, 2014   http://dx.doi.org/10.1371/journal.pntd.0002844

The first line treatment for Chagas disease, a neglected tropical disease caused by the protozoan parasite Trypanosoma cruzi,

  • involves administration of benznidazole (Bzn).

Bzn is a 2-nitroimidazole pro-drug which requires nitroreduction to become active. We used a

  • non-targeted MS-based metabolomics approach to study the metabolic response of T. cruzi to Bzn.

Parasites treated with Bzn were minimally altered compared to untreated trypanosomes, although the redox active thiols

  1. trypanothione,
  2. homotrypanothione and
  3. cysteine

were significantly diminished in abundance post-treatment. In addition, multiple Bzn-derived metabolites were detected after treatment.

These metabolites included reduction products, fragments and covalent adducts of reduced Bzn

  • linked to each of the major low molecular weight thiols:
  1. trypanothione,
  2. glutathione,
  3. γ-glutamylcysteine,
  4. glutathionylspermidine,
  5. cysteine and
  6. ovothiol A.

Bzn products known to be generated in vitro by the unusual trypanosomal nitroreductase, TcNTRI,

  • were found within the parasites,
  • but low molecular weight adducts of glyoxal, a proposed toxic end-product of NTRI Bzn metabolism, were not detected.

Our data are indicative of a major role of the

  • thiol binding capacity of Bzn reduction products
  • in the mechanism of Bzn toxicity against T. cruzi.
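
The untargeted comparison underlying these findings boils down to a per-feature fold change and significance test between treated and untreated extracts. A sketch with simulated intensities in which the thiols drop after treatment:

```python
# Sketch of the untargeted comparison behind such a study: per-feature log2
# fold change and Welch's t-test between Bzn-treated and untreated parasite
# extracts, flagging features (e.g., thiols) diminished after treatment.
# Intensities are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
features = ["trypanothione", "cysteine", "unrelated metabolite"]
untreated = {f: rng.normal(1000, 80, 6) for f in features}
treated = {
    "trypanothione": rng.normal(300, 60, 6),         # diminished by Bzn
    "cysteine": rng.normal(500, 70, 6),              # diminished by Bzn
    "unrelated metabolite": rng.normal(1000, 80, 6), # unchanged
}

for f in features:
    log2fc = np.log2(treated[f].mean() / untreated[f].mean())
    t, p = ttest_ind(treated[f], untreated[f], equal_var=False)
    print(f"{f:22s} log2FC = {log2fc:+.2f}  P = {p:.1e}")
```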


Larry H. Bernstein, MD, FCAP, Reporter, Reposted

Leaders in Pharmaceutical Intelligence

DR ANTHONY MELVIN CRASTO

http://pharmaceuticalintelligence.com/10/29/2010/larryhbern/Rofecoxib

ROFECOXIB

MK-966, MK-0966, Vioxx

CAS Registry Number: 162011-90-7
Molecular Formula: C17H14O4S
Molecular Weight: 314.3596
Percent Composition: C 64.95%, H 4.49%, O 20.36%, S 10.20%
LitRef: Selective cyclooxygenase-2 (COX-2) inhibitor. Prepn: Y. Ducharme et al., WO 9500501; eidem, US5474995 (both 1995 to Merck Frosst).
Therap-Cat: Anti-inflammatory; analgesic.

Rofecoxib /ˌrɒfɨˈkɒksɪb/ is a nonsteroidal anti-inflammatory drug (NSAID) that has now been withdrawn over safety concerns. It was marketed by Merck & Co. to treat osteoarthritis, acute pain conditions, and dysmenorrhoea. Rofecoxib was approved by the Food and Drug Administration (FDA) on May 20, 1999, and was marketed under the brand names Vioxx, Ceoxx, and Ceeoxx.


Rofecoxib gained widespread acceptance among physicians treating patients with arthritis and other conditions causing chronic or acute pain. Worldwide, over 80 million people were prescribed rofecoxib at some time.[1]

On September 30, 2004, Merck withdrew rofecoxib from the market because of concerns about increased risk of heart attack and stroke associated with long-term, high-dosage use. Merck withdrew the drug after disclosures that it withheld information about rofecoxib’s risks from doctors and patients for over five years, resulting in between 88,000 and 140,000 cases of serious heart disease.[2] Rofecoxib was one of the most widely used drugs ever to be withdrawn from the market. In the year before withdrawal, Merck had sales revenue of US$2.5 billion from Vioxx.[3] Merck reserved $970 million to pay for its Vioxx-related legal expenses through 2007, and has set aside $4.85 billion for legal claims from US citizens.

Rofecoxib was available on prescription in both tablet-form and as an oral suspension. It was available by injection for hospital use.

Mode of action

Cyclooxygenase (COX) has two well-studied isoforms, called COX-1 and COX-2.
  • COX-1 mediates the synthesis of prostaglandins responsible for protection of the stomach lining, while
  • COX-2 mediates the synthesis of prostaglandins responsible for pain and inflammation.
prostaglandin PGE2

By creating “selective” NSAIDs that inhibit COX-2, but not COX-1, the same pain relief as traditional NSAIDs is offered, but with greatly reduced risk of fatal or debilitating peptic ulcers. Rofecoxib is a selective COX-2 inhibitor, or “coxib”.

Others include Merck’s etoricoxib (Arcoxia), Pfizer’s celecoxib (Celebrex) and valdecoxib (Bextra). Interestingly, at the time of its withdrawal, rofecoxib was the only coxib with clinical evidence of its superior gastrointestinal adverse effect profile over conventional NSAIDs. This was largely based on the VIGOR (Vioxx GI Outcomes Research) study, which compared the efficacy and adverse effect profiles of rofecoxib and naproxen.[4]

Pharmacokinetics

The recommended therapeutic dosages were 12.5, 25, and 50 mg, with an approximate bioavailability of 93%.[5][6][7] Rofecoxib crossed the placenta and blood–brain barrier,[5][6][8] and took 1–3 hours to reach peak plasma concentration with an effective half-life (based on steady-state levels) of approximately 17 hours.[5][7][9] The metabolic products are cis-dihydro and trans-dihydro derivatives of rofecoxib,[5][9] which are primarily excreted through urine.
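
Those numbers translate directly through first-order kinetics: the elimination rate constant is k = ln 2 / t½, and the fraction remaining after time t is e^(−kt). A back-of-the-envelope sketch under a single-compartment assumption:

```python
# First-order kinetics for the numbers quoted above: elimination rate
# k = ln(2) / t_half, and the fraction of drug remaining after time t is
# exp(-k * t). Single-compartment assumption, for illustration only.
import math

t_half = 17.0                        # effective half-life, hours
k = math.log(2) / t_half             # elimination rate constant, 1/h
for t in (17, 34, 48):
    remaining = math.exp(-k * t)
    print(f"after {t:2d} h: {remaining:.0%} of the plasma level remains")
```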

Fabricated efficacy studies

On March 11, 2009, Scott S. Reuben, former chief of acute pain at Baystate Medical Center, Springfield, Mass., revealed that data for 21 studies he had authored for the efficacy of the drug (along with others such as celecoxib) had been fabricated in order to augment the analgesic effects of the drugs. There is no evidence that Reuben colluded with Merck in falsifying his data. Reuben was also a former paid spokesperson for the drug company Pfizer (which owns the intellectual property rights for marketing celecoxib in the United States). The retracted studies were not submitted to either the FDA or the European Union’s regulatory agencies prior to the drug’s approval. Drug manufacturer Merck had no comment on the disclosure.[10]

Adverse drug reactions

Vioxx sample blister pack

Aside from the reduced incidence of gastric ulceration, rofecoxib exhibits a similar adverse effect profile to other NSAIDs.

Prostaglandin is a large family of lipids. Prostaglandin I2/PGI2/prostacyclin is just one member of it. Prostaglandins other than PGI2 (such as PGE2) also play important roles in vascular tone regulation. Prostacyclin/thromboxane are produced by both COX-1 and COX-2, and rofecoxib suppresses just COX-2 enzyme, so there is no reason to believe that prostacyclin levels are significantly reduced by the drug. And there is no reason to believe that only the balance between quantities of prostacyclin and thromboxane is the determinant factor for vascular tone.[11] Indeed Merck has stated that there was no effect on prostacyclin production in blood vessels in animal testing.[12] Other researchers have speculated that the cardiotoxicity may be associated with maleic anhydride metabolites formed when rofecoxib becomes ionized under physiological conditions. (Reddy & Corey, 2005)

 Adverse cardiovascular events

VIGOR study and publishing controversy

The VIGOR (Vioxx GI Outcomes Research) study, conducted by Bombardier, et al., which compared the efficacy and adverse effect profiles of rofecoxib and naproxen, had indicated a significant 4-fold increased risk of acute myocardial infarction (heart attack) in rofecoxib patients when compared with naproxen patients (0.4% vs 0.1%; RR 0.25 for naproxen relative to rofecoxib) over the 12 month span of the study. The elevated risk began during the second month on rofecoxib. There was no significant difference in the mortality from cardiovascular events between the two groups, nor was there any significant difference in the rate of myocardial infarction between the rofecoxib and naproxen treatment groups in patients without high cardiovascular risk. The difference in overall risk was driven by the patients at higher risk of heart attack, i.e. those meeting the criteria for low-dose aspirin prophylaxis of secondary cardiovascular events (previous myocardial infarction, angina, cerebrovascular accident, transient ischemic attack, or coronary artery bypass).
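
For orientation, the "4-fold" figure follows directly from the reported event rates: relative risk is simply the ratio of the two risks. A sketch using hypothetical round group sizes (not the actual VIGOR enrollment):

```python
# How the "4-fold" figure arises from the reported rates (0.4% vs 0.1%):
# relative risk = risk_rofecoxib / risk_naproxen. Group sizes below are
# hypothetical round numbers, not the actual VIGOR enrollment.
rofecoxib_mi, rofecoxib_n = 16, 4000   # hypothetical: 0.4% event rate
naproxen_mi, naproxen_n = 4, 4000      # hypothetical: 0.1% event rate

risk_r = rofecoxib_mi / rofecoxib_n
risk_n = naproxen_mi / naproxen_n
print(f"relative risk = {risk_r / risk_n:.1f}")   # -> 4.0
```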

Merck’s scientists interpreted the finding as a protective effect of naproxen, telling the FDA that the difference in heart attacks “is primarily due to” this protective effect (Targum, 2001). Some commentators have noted that naproxen would have to be three times as effective as aspirin to account for all of the difference (Michaels 2005), and some outside scientists warned Merck that this claim was implausible before VIGOR was published.[13] No evidence has since emerged for such a large cardioprotective effect of naproxen, although a number of studies have found protective effects similar in size to those of aspirin.[14][15] Though Dr. Topol’s 2004 paper criticized Merck’s naproxen hypothesis, he himself co-authored a 2001 JAMA article stating “because of the evidence for an antiplatelet effect of naproxen, it is difficult to assess whether the difference in cardiovascular event rates in VIGOR was due to a benefit from naproxen or to a prothrombotic effect from rofecoxib.” (Mukherjee, Nissen and Topol, 2001.)

The results of the VIGOR study were submitted to the United States Food and Drug Administration (FDA) in February 2001. In September 2001, the FDA sent a warning letter to the CEO of Merck, stating, “Your promotional campaign discounts the fact that in the VIGOR study, patients on Vioxx were observed to have a four to five fold increase in myocardial infarctions (MIs) compared to patients on the comparator non-steroidal anti-inflammatory drug (NSAID), Naprosyn (naproxen).”[16] This led to the introduction, in April 2002, of warnings on Vioxx labeling concerning the increased risk of cardiovascular events (heart attack and stroke).

Months after the preliminary version of VIGOR was published in the New England Journal of Medicine, the journal editors learned that certain data reported to the FDA were not included in the NEJM article. Several years later, when they were shown a Merck memo during the depositions for the first federal Vioxx trial, they realized that these data had been available to the authors months before publication. The editors wrote an editorial accusing the authors of deliberately withholding the data.[17] They released the editorial to the media on December 8, 2005, before giving the authors a chance to respond. NEJM editor Gregory Curfman explained that the quick release was due to the imminent presentation of his deposition testimony, which he feared would be misinterpreted in the media. He had earlier denied any relationship between the timing of the editorial and the trial. Although his testimony was not actually used in the December trial, Curfman had testified well before the publication of the editorial.[18]

The editors charged that “more than four months before the article was published, at least two of its authors were aware of critical data on an array of adverse cardiovascular events that were not included in the VIGOR article.” These additional data included three additional heart attacks, and raised the relative risk of Vioxx from 4.25-fold to 5-fold. All the additional heart attacks occurred in the group at low risk of heart attack (the “aspirin not indicated” group) and the editors noted that the omission “resulted in the misleading conclusion that there was a difference in the risk of myocardial infarction between the aspirin indicated and aspirin not indicated groups.” The relative risk for myocardial infarctions among the aspirin not indicated patients increased from 2.25 to 3 (although it remained statistically insignificant). The editors also noted a statistically significant (2-fold) increase in risk for serious thromboembolic events for this group, an outcome that Merck had not reported in the NEJM, though it had disclosed that information publicly in March 2000, eight months before publication.[19]

The authors of the study, including the non-Merck authors, responded by claiming that the three additional heart attacks had occurred after the prespecified cutoff date for data collection and thus were appropriately not included. (Utilizing the prespecified cutoff date also meant that an additional stroke in the naproxen population was not reported.) Furthermore, they said that the additional data did not qualitatively change any of the conclusions of the study, and the results of the full analyses were disclosed to the FDA and reflected on the Vioxx warning label. They further noted that all of the data in the “omitted” table were printed in the text of the article. The authors stood by the original article.[20]

NEJM stood by its editorial, noting that the cutoff date was never mentioned in the article, nor did the authors report that the cutoff for cardiovascular adverse events was before that for gastrointestinal adverse events. The different cutoffs increased the reported benefits of Vioxx (reduced stomach problems) relative to the risks (increased heart attacks).[19]

Some scientists have accused the NEJM editorial board of making unfounded accusations.[21][22] Others have applauded the editorial. Renowned research cardiologist Eric Topol,[23] a prominent Merck critic, accused Merck of “manipulation of data” and said “I think now the scientific misconduct trial is really fully backed up”.[24] Phil Fontanarosa, executive editor of the prestigious Journal of the American Medical Association, welcomed the editorial, saying “this is another in the long list of recent examples that have generated real concerns about trust and confidence in industry-sponsored studies”.[25]

On May 15, 2006, the Wall Street Journal reported that a late night email, written by an outside public relations specialist and sent to Journal staffers hours before the Expression of Concern was released, predicted that “the rebuke would divert attention to Merck and induce the media to ignore the New England Journal of Medicine’s own role in aiding Vioxx sales.”[26]

“Internal emails show the New England Journal’s expression of concern was timed to divert attention from a deposition in which Executive Editor Gregory Curfman made potentially damaging admissions about the journal’s handling of the Vioxx study. In the deposition, part of the Vioxx litigation, Dr. Curfman acknowledged that lax editing might have helped the authors make misleading claims in the article.” The Journal stated that NEJM’s “ambiguous” language misled reporters into incorrectly believing that Merck had deleted data regarding the three additional heart attacks, rather than a blank table that contained no statistical information; “the New England Journal says it didn’t attempt to have these mistakes corrected.”[26]

APPROVe study

In 2001, Merck commenced the APPROVe (Adenomatous Polyp PRevention On Vioxx) study, a three-year trial with the primary aim of evaluating the efficacy of rofecoxib for the prophylaxis of colorectal polyps. Celecoxib had already been approved for this indication, and it was hoped to add it to the indications for rofecoxib as well. An additional aim of the study was to further evaluate the cardiovascular safety of rofecoxib.

The APPROVe study was terminated early when the preliminary data from the study showed an increased relative risk of adverse thrombotic cardiovascular events (including heart attack and stroke), beginning after 18 months of rofecoxib therapy. In patients taking rofecoxib, versus placebo, the relative risk of these events was 1.92 (rofecoxib 1.50 events vs placebo 0.78 events per 100 patient-years; the ratio is worked out below). The results from the first 18 months of the APPROVe study did not show an increased relative risk of adverse cardiovascular events. Moreover, overall and cardiovascular mortality rates were similar between the rofecoxib and placebo populations.[28]
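For clarity, the quoted relative risk of 1.92 is simply the ratio of the two event rates:

```latex
\[
\mathrm{RR} = \frac{1.50 \ \text{events / 100 patient-years (rofecoxib)}}
                   {0.78 \ \text{events / 100 patient-years (placebo)}}
            \approx 1.92
\]
```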

In summary, the APPROVe study suggested that long-term use of rofecoxib resulted in nearly twice the risk of suffering a heart attack or stroke compared to patients receiving a placebo.

Other studies

Several very large observational studies have also found elevated risk of heart attack from rofecoxib. For example, a recent retrospective study of 113,000 elderly Canadians suggested a borderline statistically significant increased relative risk of heart attacks of 1.24 from Vioxx usage, with a relative risk of 1.73 for higher-dose Vioxx usage. (Levesque, 2005). Another study, using Kaiser Permanente data, found a 1.47 relative risk for low-dose Vioxx usage and 3.58 for high-dose Vioxx usage compared to current use of celecoxib, though the smaller number was not statistically significant, and relative risk compared to other populations was not statistically significant. (Graham, 2005).

Furthermore, a more recent meta-study of 114 randomized trials with a total of 116,000+ participants, published in JAMA, showed that Vioxx uniquely increased risk of renal (kidney) disease, and heart arrhythmia.[31]

Other COX-2 inhibitors

No increased risk of renal and arrhythmia pathologies is evident for the rest of the COX-2 inhibitor class, e.g. celecoxib (Celebrex), valdecoxib (Bextra), parecoxib (Dynastat), lumiracoxib, and etoricoxib,[31] although smaller studies[32][33] had demonstrated such effects earlier with the use of celecoxib, valdecoxib, and parecoxib.

Nevertheless, it is likely that trials of newer drugs in the category will be extended in order to supply additional evidence of cardiovascular safety. Examples are some more specific COX-2 inhibitors, including etoricoxib (Arcoxia) and lumiracoxib (Prexige), which are currently (circa 2005) undergoing Phase III/IV clinical trials.

In addition, regulatory authorities worldwide now require warnings about the cardiovascular risk of COX-2 inhibitors still on the market. For example, in 2005, EU regulators required the following changes to the product information and/or packaging of all COX-2 inhibitors:[34]

  • Contraindications stating that COX-2 inhibitors must not be used in patients with established ischaemic heart disease and/or cerebrovascular disease (stroke), and also in patients with peripheral arterial disease
  • Reinforced warnings to healthcare professionals to exercise caution when prescribing COX-2 inhibitors to patients with risk factors for heart disease, such as hypertension, hyperlipidaemia (high cholesterol levels), diabetes and smoking
  • Given the association between cardiovascular risk and exposure to COX-2 inhibitors, doctors are advised to use the lowest effective dose for the shortest possible duration of treatment

Other NSAIDs

Since the withdrawal of Vioxx it has come to light that there may be negative cardiovascular effects not only with other COX-2 inhibitors, but even with the majority of other NSAIDs. It is only with the recent development of drugs like Vioxx that drug companies have carried out the kind of well-executed trials that could establish such effects, and such trials have never been carried out for older “trusted” NSAIDs such as ibuprofen, diclofenac, and others. The possible exceptions may be aspirin and naproxen, owing to their anti-platelet aggregation properties.

Withdrawal

Due to the findings of its own APPROVe study, Merck publicly announced its voluntary withdrawal of the drug from the market worldwide on September 30, 2004.[35]

In addition to its own studies, on September 23, 2004 Merck apparently received information about new research by the FDA that supported previous findings of increased risk of heart attack among rofecoxib users (Grassley, 2004). FDA analysts estimated that Vioxx caused between 88,000 and 139,000 heart attacks, 30 to 40 percent of which were probably fatal, in the five years the drug was on the market.[36]

On November 5, the medical journal The Lancet published a meta-analysis of the available studies on the safety of rofecoxib (Jüni et al., 2004). The authors concluded that, owing to the known cardiovascular risk, rofecoxib should have been withdrawn several years earlier. The Lancet published an editorial which condemned both Merck and the FDA for the continued availability of rofecoxib from 2000 until the recall. Merck responded by issuing a rebuttal of the Jüni et al. meta-analysis that noted that Jüni omitted several studies that showed no increased cardiovascular risk. (Merck & Co., 2004).

In 2005, advisory panels in both the U.S. and Canada encouraged the return of rofecoxib to the market, stating that rofecoxib’s benefits outweighed the risks for some patients. The FDA advisory panel voted 17-15 to allow the drug to return to the market despite being found to increase heart risk. The vote in Canada was 12-1, and the Canadian panel noted that the cardiovascular risks from rofecoxib seemed to be no worse than those from ibuprofen—though the panel recommended that further study was needed for all NSAIDs to fully understand their risk profiles. Notwithstanding these recommendations, Merck has not returned rofecoxib to the market.[37]

In 2005, Merck retained Debevoise & Plimpton LLP to investigate Vioxx study results and communications conducted by Merck. The resulting report found that Merck’s senior management had acted in good faith, and that the confusion over the clinical safety of Vioxx was due to the sales team’s overzealous behavior. The report gave a timeline of the events surrounding Vioxx and concluded that Merck intended to operate honestly throughout the process, and that any mistakes made in the handling of clinical trial results and the withholding of information were the result of oversight, not malicious behavior. The report was published in February 2006, and Merck was satisfied with its findings and promised to consider the recommendations contained in the Martin Report. Advisers to the US Food and Drug Administration (FDA) have voted, by a narrow margin, that it should not ban Vioxx — the painkiller withdrawn by drug-maker Merck.

They also said that Pfizer’s Celebrex and Bextra, two other members of the family of painkillers known as COX-2 inhibitors, should remain available, despite the fact that they too boost patients’ risk of heart attack and stroke (http://www.nature.com/drugdisc/news/articles/433790b.html). The recommendations of the arthritis and drug safety advisory panel offer some measure of relief to the pharmaceutical industry, which has faced a barrage of criticism for its promotion of the painkillers. But the advice of the panel, which met near Washington DC over 16–18 February, comes with several strings attached.

For example, most panel members said that manufacturers should be required to add a prominent warning about the drugs’ risks to their labels; to stop direct-to-consumer advertising of the drugs; and to include detailed, written risk information with each prescription. The panel also unanimously stated that all three painkillers “significantly increase the risk of cardiovascular events”.

External links

For more details and references, see the original post, where they are provided in their entirety.


Metabolomics Summary and Perspective


Author and Curator: Larry H Bernstein, MD, FCAP 

 

This is the final article in a robust series on metabolism, metabolomics, and the “-OMICS-” biological synthesis that is creating a more holistic and interoperable view of the natural sciences, including the biological disciplines, climate science, physics, chemistry, toxicology, pharmacology, and pathophysiology, with as yet unforeseen consequences.

There have already been impressive advances in research into developmental biology, plant sciences, microbiology, mycology, and human diseases, most notably cancer, metabolic and infectious diseases, and neurodegenerative diseases.

Acknowledgements:

I write this article in honor of my first mentor, Harry Maisel, Professor and Emeritus Chairman of Anatomy, Wayne State University, Detroit, MI and to my stimulating mentors, students, fellows, and associates over many years:

Masahiro Chiga, MD, PhD, Averill A Liebow, MD, Nathan O Kaplan, PhD, Johannes Everse, PhD, Norio Shioura, PhD, Abraham Braude, MD, Percy J Russell, PhD, Debby Peters, Walter D Foster, PhD, Herschel Sidransky, MD, Sherman Bloom, MD, Matthew Grisham, PhD, Christos Tsokos, PhD, IJ Good, PhD, Distinguished Professor, Raool Banagale, MD, Gustavo Reynoso, MD, Gustave Davis, MD, Marguerite M Pinto, MD, Walter Pleban, MD, Marion Feietelson-Winkler, RD, PhD, John Adan, MD, Joseph Babb, MD, Stuart Zarich, MD, Inder Mayall, MD, A Qamar, MD, Yves Ingenbleek, MD, PhD, Emeritus Professor, Bette Seamonds, PhD, Larry Kaplan, PhD, Pauline Y Lau, PhD, Gil David, PhD, Ronald Coifman, PhD, Emeritus Professor, Linda Brugler, RD, MBA, James Rucinski, MD, Gitta Pancer, Ester Engelman, Farhana Hoque, Mohammed Alam, Michael Zions, William Fleischman, MD, Salman Haq, MD, Jerard Kneifati-Hayek, Madeleine Schleffer, John F Heitner, MD, Arun Devakonda, MD, Liziamma George, MD, Suhail Raoof, MD, Charles Oribabor, MD, Anthony Tortolani, MD, Prof and Chairman, JRDS Rosalino, PhD, Aviva Lev Ari, PhD, RN, Rosser Rudolph, MD, PhD, Eugene Rypka, PhD, Jay Magidson, PhD, Izaak Mayzlin, PhD, Maurice Bernstein, PhD, Richard Bing, Eli Kaplan, PhD.

This article has EIGHT parts, as follows:

Part 1

Metabolomics Continues Auspicious Climb

Part 2

Biologists Find ‘Missing Link’ in the Production of Protein Factories in Cells

Part 3

Neuroscience

Part 4

Cancer Research

Part 5

Metabolic Syndrome

Part 6

Biomarkers

Part 7

Epigenetics and Drug Metabolism

Part 8

Pictorial

genome cartoon

iron metabolism

personalized reference range within population range

Part 1.  Metabolomics Surge

metagraph _OMICS

Metabolomics Continues Auspicious Climb

Jeffery Herman, Ph.D.
GEN May 1, 2012 (Vol. 32, No. 9)

Aberrant biochemical and metabolite signaling plays an important role in

  • the development and progression of diseased tissue.

This concept has been studied by the science community for decades. However, with relatively

  1. recent advances in analytical technology and bioinformatics as well as
  2. the development of the Human Metabolome Database (HMDB),

metabolomics has become an invaluable field of research.

At the “International Conference and Exhibition on Metabolomics & Systems Biology” held recently in San Francisco, researchers and industry leaders discussed how

  • the underlying cellular biochemical/metabolite fingerprint in response to
  1. a specific disease state,
  2. toxin exposure, or
  3. pharmaceutical compound
  • is useful in clinical diagnosis and biomarker discovery and
  • in understanding disease development and progression.

Developed by BASF, MetaMap® Tox is

  • a database that helps identify in vivo systemic effects of a tested compound, including
  1. targeted organs,
  2. mechanism of action, and
  3. adverse events.

Based on 28-day systemic rat toxicity studies, MetaMap Tox is composed of

  • differential plasma metabolite profiles of rats
  • after exposure to a large variety of chemical toxins and pharmaceutical compounds.

“Using the reference data,

  • we have developed more than 110 patterns of metabolite changes, which are
  • specific and predictive for certain toxicological modes of action,”

said Hennicke Kamp, Ph.D., group leader, department of experimental toxicology and ecology at BASF.

With MetaMap Tox, a potential drug candidate

  • can be compared to a similar reference compound
  • using statistical correlation algorithms,
  • which allow for the creation of a toxicity and mechanism of action profile (a minimal sketch of such correlation-based matching follows).
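The passage above describes matching a candidate compound’s metabolite profile against reference compounds by statistical correlation. The sketch below illustrates that general idea only; the compound names, metabolite panel, and numbers are all hypothetical, and BASF’s actual database and algorithms are proprietary.

```python
# Hypothetical sketch of correlation-based profile matching (not BASF's code).
import numpy as np

# Log-fold changes of a few plasma metabolites vs control (invented values)
reference_profiles = {
    "hepatotoxicant_A": np.array([ 1.2, -0.8, 0.1, 2.0, -1.5]),
    "nephrotoxicant_B": np.array([-0.3,  1.9, 1.1, 0.2,  0.4]),
}
candidate = np.array([1.0, -0.6, 0.0, 1.7, -1.2])  # profile of a test compound

# Pearson correlation of the candidate against each reference pattern;
# a high r suggests a shared mode of action / target organ.
for name, profile in reference_profiles.items():
    r = np.corrcoef(candidate, profile)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```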

“MetaMap Tox, in the context of early pre-clinical safety enablement in pharmaceutical development,” continued Dr. Kamp,

  • has been independently validated
  • by an industry consortium (Drug Safety Executive Council) of 12 leading biopharmaceutical companies.”

Dr. Kamp added that this technology may prove invaluable

  • in allowing for quick and accurate decisions and
  • for high-throughput drug candidate screening, in evaluating
  1. the safety and efficacy of compounds
  2. during early and preclinical toxicological studies, and
  3. by comparing a lead compound to a variety of molecular derivatives,
  • so that the rapid identification of the most optimal molecular structure
  • with the best efficacy and safety profiles might be streamlined.
Dynamic Construct of the –Omics

Targeted Tandem Mass Spectrometry

Biocrates Life Sciences focuses on targeted metabolomics, an important approach for

  • the accurate quantification of known metabolites within a biological sample.

Originally used for the clinical screening of inherited metabolic disorders from dried blood-spots of newborn children, Biocrates has developed

  • a tandem mass spectrometry (MS/MS) platform, which allows for
  1. the identification,
  2. quantification, and
  3. mapping of more than 800 metabolites to specific cellular pathways.

It is based on flow injection analysis and high-performance liquid chromatography MS/MS.
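As a rough illustration of the general stable-isotope-dilution principle behind such targeted MS/MS kits (this is not Biocrates’ validated procedure, and the numbers are invented), a metabolite’s concentration can be read off its response ratio to a labeled internal standard:

```python
# Minimal sketch of stable-isotope-dilution quantification (illustrative only).

def quantify_uM(analyte_peak_area: float,
                internal_std_peak_area: float,
                internal_std_conc_uM: float) -> float:
    """Concentration from the response ratio to an isotopically labeled
    internal standard, assuming a response factor of ~1 between the analyte
    and its labeled analog (a common first approximation)."""
    return (analyte_peak_area / internal_std_peak_area) * internal_std_conc_uM

# Example: an analyte measured against a 13C-labeled standard spiked at 50 µM
print(quantify_uM(analyte_peak_area=2.4e6,
                  internal_std_peak_area=1.2e6,
                  internal_std_conc_uM=50.0))   # -> 100.0 µM
```

Real kits layer validated calibration curves and quality-control rules on top of this simple ratio.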

Clarification of pathway-specific inhibition by Fourier transform ion cyclotron resonance mass spectrometry-based metabolic phenotyping studies

common drug targets

The MetaDisIDQ® Kit is a

  • “multiparametric” diagnostic assay designed for the “comprehensive assessment of a person’s metabolic state” and
  • the early determination of pathophysiological events with regards to a specific disease.

MetaDisIDQ is designed to quantify

  • a diverse range of 181 metabolites involved in major metabolic pathways
  • from a small amount of human serum (10 µL) using isotopically labeled internal standards.

This kit has been demonstrated to detect changes in metabolites that are commonly associated with the development of

  • metabolic syndrome, type 2 diabetes, and diabetic nephropathy.

Dr. Dallman reports that data generated with the MetaDisIDQ kit correlates strongly with

  • routine chemical analyses of common metabolites, including glucose and creatinine.

Biocrates has also developed the MS/MS-based AbsoluteIDQ® kits, which are

  • an “easy-to-use” biomarker analysis tool for laboratory research.

The kit functions on MS machines from a variety of vendors, and allows for the quantification of 150-180 metabolites.

The SteroIDQ® kit is a high-throughput standardized MS/MS diagnostic assay,

  • validated in human serum, for the rapid and accurate clinical determination of 16 known steroids.

Initially focusing on the analysis of steroid ranges for use in hormone replacement therapy, the SteroIDQ Kit is expected to have a wide clinical application.

Hormone-Resistant Breast Cancer

Scientists at Georgetown University have shown that

  • breast cancer cells can functionally coordinate cell-survival and cell-proliferation mechanisms,
  • while maintaining a certain degree of cellular metabolism.

To grow, cells need energy, and energy is a product of cellular metabolism. For nearly a century, it was thought that

  1. the uncoupling of glycolysis from the mitochondria,
  2. leading to the inefficient but rapid metabolism of glucose and
  3. the formation of lactic acid (the Warburg effect), was

the major, and only, metabolic driving force for unchecked proliferation and tumorigenesis of cancer cells.

Other aspects of metabolism were often overlooked.

“.. we understand now that

  • cellular metabolism is a lot more than just metabolizing glucose,”

said Robert Clarke, Ph.D., professor of oncology and physiology and biophysics at Georgetown University. Dr. Clarke, in collaboration with the Waters Center for Innovation at Georgetown University (led by Albert J. Fornace, Jr., M.D.), obtained

  • the metabolomic profile of hormone-sensitive and -resistant breast cancer cells through the use of UPLC-MS.

They demonstrated that breast cancer cells, through a rather complex and not yet completely understood process,

  1. can functionally coordinate cell-survival and cell-proliferation mechanisms,
  2. while maintaining a certain degree of cellular metabolism.

This is at least partly accomplished through the upregulation of important pro-survival mechanisms, including

  • the unfolded protein response;
  • a regulator of endoplasmic reticulum stress and
  • initiator of autophagy.

Normally, during a stressful situation, a cell may

  • enter a state of quiescence and undergo autophagy,
  • a process by which a cell can recycle organelles
  • in order to maintain enough energy to survive during a stressful situation or,

if the stress is too great,

  • undergo apoptosis.

By integrating cell-survival mechanisms and cellular metabolism

  • advanced ER+ hormone-resistant breast cancer cells
  • can maintain a low level of autophagy
  • to adapt and resist hormone/chemotherapy treatment.

This adaptation allows cells

  • to reallocate important metabolites recovered from organelle degradation and
  • provide enough energy to also promote proliferation.

With further research, we can gain a better understanding of the underlying causes of hormone-resistant breast cancer, with

  • the overall goal of developing effective diagnostic, prognostic, and therapeutic tools.

NMR

Over the last two decades, NMR has established itself as a major tool for metabolomics analysis. It is especially adept at testing biological fluids. [Bruker BioSpin]

Historically, nuclear magnetic resonance spectroscopy (NMR) has been used for structural elucidation of pure molecular compounds. However, in the last two decades, NMR has established itself as a major tool for metabolomics analysis. Since

  • the integral of an NMR signal is directly proportional to
  • the molar concentration throughout the dynamic range of a sample,

“the simultaneous quantification of compounds is possible

  • without the need for specific reference standards or calibration curves,” according to Lea Heintz of Bruker BioSpin.
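Written out, the proportionality described above means that for two compounds A and B observed in the same spectrum, with signal integrals I and numbers of contributing protons n, the molar ratio follows directly from the integrals:

```latex
\[
\frac{c_A}{c_B} \;=\; \frac{I_A / n_A}{\,I_B / n_B\,}
\]
```

which is why no compound-specific reference standard or calibration curve is needed for relative quantification.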

NMR is adept at testing biological fluids because of

  1.  high reproducibility,
  2. standardized protocols,
  3. low sample manipulation, and
  4. the production of a large subset of data.

Bruker BioSpin is presently involved in a project for the screening of inborn errors of metabolism in newborn children in Turkey, based on their urine NMR profiles. More than 20 clinics are participating in the project, which is coordinated by INFAI, a specialist in the transfer of advanced analytical technology into medical diagnostics. Statistical models are being developed

  • for the detection of deviations from normality, as well as
  • automatic quantification methods for indicative metabolites (a simplified flagging scheme is sketched below).
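A simplified sketch of such a deviation-from-normality screen follows; the metabolite names, reference values, and the univariate z-score cut-off are invented for illustration, and the actual models under development are considerably more sophisticated (and multivariate).

```python
# Hypothetical sketch: flag urine metabolites that deviate from a healthy
# reference population using z-scores (illustrative values and units only).

reference = {                            # (mean, SD) in healthy newborns
    "metabolite_X": (1.0, 0.4),
    "metabolite_Y": (120.0, 30.0),
}
sample = {"metabolite_X": 9.5, "metabolite_Y": 115.0}

for metabolite, value in sample.items():
    mu, sd = reference[metabolite]
    z = (value - mu) / sd
    if abs(z) > 3:                       # simple univariate cut-off
        print(f"{metabolite}: z = {z:.1f} -> flag for follow-up")
```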

Bruker BioSpin recently installed high-resolution magic angle spinning NMR (HRMAS-NMR) systems that can rapidly analyze tissue biopsies. The main objective for HRMAS-NMR is to establish a rapid and effective clinical method to assess tumor grade and other important aspects of cancer during surgery.

Combined NMR and Mass Spec

There is increasing interest in combining NMR and MS, two of the main analytical assays in metabolomic research, as a means

  • to improve data sensitivity,
  • to fully elucidate the complex metabolome within a given biological sample, and
  • to realize the potential for cancer biomarker discovery in the realms of diagnosis, prognosis, and treatment.


Using combined NMR and MS to measure the levels of nearly 250 separate metabolites in the patient’s blood, Dr. Weljie and other researchers at the University of Calgary were able to rapidly determine the malignancy of a  pancreatic lesion (in 10–15% of the cases, it is difficult to discern between benign and malignant), while avoiding unnecessary surgery in patients with benign lesions.

When performing NMR and MS on a single biological fluid, ultimately “we are,” noted Dr. Weljie,

  1. “splitting up information content, processing, and introducing a lot of background noise and error and
  2. then trying to reintegrate the data…
    It’s like taking a complex item, with multiple pieces, out of an IKEA box and trying to repackage it perfectly into another box.”

By improving the workflow between the initial splitting of the sample, they improved endpoint data integration, proving that

  • a streamlined approach to combined NMR/MS can be achieved,
  • leading to a very strong, robust and precise metabolomics toolset.

Metabolomics Research Picks Up Speed

Field Advances in Quest to Improve Disease Diagnosis and Predict Drug Response

John Morrow Jr., Ph.D.
GEN May 1, 2011 (Vol. 31, No. 9)

As an important discipline within systems biology, metabolomics is being explored by a number of laboratories for

  • its potential in pharmaceutical development.

Studying metabolites can offer insights into the relationships between genotype and phenotype, as well as between genotype and environment. In addition, there is plenty to work with—there are estimated to be some 2,900 detectable metabolites in the human body, of which

  1. 309 have been identified in cerebrospinal fluid,
  2. 1,122 in serum,
  3. 458 in urine, and
  4. roughly 300 in other compartments.

Guowang Xu, Ph.D., a researcher at the Dalian Institute of Chemical Physics, is investigating the causes of death in China,

  • and how they have been changing over the years as the country has become more industrialized.
  • The incidence of metabolic disorders such as diabetes has grown, with diabetes now affecting 9.7% of the Chinese population.

Dr. Xu, collaborating with Rainer Lehman, Ph.D., of the University of Tübingen, Germany, compared urinary metabolites in samples from healthy individuals with samples taken from prediabetic, insulin-resistant subjects. Using mass spectrometry coupled with electrospray ionization in the positive mode, they observed striking dissimilarities in levels of various metabolites in the two groups.

“When we performed a comprehensive two-dimensional gas chromatography, time-of-flight mass spectrometry analysis of our samples, we observed several metabolites, including

  • 2-hydroxybutyric acid in plasma,
  •  as potential diabetes biomarkers,” Dr. Xu explains.

In other, unrelated studies, Dr. Xu and the German researchers used a metabolomics approach to investigate the changes in plasma metabolite profiles immediately after exercise and following a 3-hour and 24-hour period of recovery. They found that

  • medium-chain acylcarnitines were the most distinctive exercise biomarkers, and
  • they are released as intermediates of partial beta oxidation in human myotubes and mouse muscle tissue.

Dr. Xu says. “The traditional approach of assessment based on a singular biomarker is being superseded by the introduction of multiple marker profiles.”

Typical of the studies under way by Dr. Kaddurah-Daouk and her colleagues at Duke University

  • is a recently published investigation highlighting the role of an SNP variant in
  • the glycine dehydrogenase gene on individual response to antidepressants:
  • patients who do not respond to the selective serotonin reuptake inhibitors citalopram and escitalopram
  • carried a particular single nucleotide polymorphism in the GD gene.

“These results allow us to pinpoint a possible

  • role for glycine in selective serotonin reuptake inhibitor response and
  • illustrate the use of pharmacometabolomics to inform pharmacogenomics.”

These discoveries give us the tools for prognostics and diagnostics so that

  • we can predict what conditions will respond to treatment.

“This approach to defining health or disease in terms of metabolic states opens a whole new paradigm.

By screening hundreds of thousands of molecules, we can understand

  • the relationship between human genetic variability and the metabolome.”

Dr. Kaddurah-Daouk talks about statins as a current

  • model of metabolomics investigations.

It is now known that the statins  have widespread effects, altering a range of metabolites. To sort out these changes and develop recommendations for which individuals should be receiving statins will require substantial investments of energy and resources into defining the complex web of biochemical changes that these drugs initiate.
Furthermore, Dr. Kaddurah-Daouk asserts that,

  • “genetics only encodes part of the phenotypic response.

One needs to take into account the

  • net environment contribution in order to determine
  • how both factors guide the changes in our metabolic state that determine the phenotype.”

Interactive Metabolomics

Researchers at the University of Nottingham use diffusion-edited nuclear magnetic resonance spectroscopy to assess the effects of a biological matrix on metabolites. Diffusion-edited NMR experiments provide a way to

  • separate the different compounds in a mixture
  • based on the differing translational diffusion coefficients (which reflect the size and shape of the molecule).

The measurements are carried out by observing

  • the attenuation of the NMR signals during a pulsed field gradient experiment (the standard relation is given below).
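The standard relation behind such pulsed-field-gradient measurements is the Stejskal–Tanner equation, in which the signal decays exponentially with the diffusion weighting b:

```latex
\[
S(b) = S(0)\, e^{-bD}, \qquad
b = (\gamma g \delta)^2 \left( \Delta - \tfrac{\delta}{3} \right)
\]
```

Here D is the translational diffusion coefficient, γ the gyromagnetic ratio, g and δ the gradient strength and duration, and Δ the diffusion time. Small, free metabolites (large D) attenuate quickly, while macromolecule-bound species (small effective D) persist, which is what makes the editing possible.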

Clare Daykin, Ph.D., is a lecturer at the University of Nottingham, U.K. Her field of investigation encompasses “interactive metabolomics,” which she defines as

“the study of the interactions between low molecular weight biochemicals and macromolecules in biological samples ..

  • without preselection of the components of interest.

“Blood plasma is a heterogeneous mixture of molecules that

  1. undergo a variety of interactions including metal complexation,
  2. chemical exchange processes,
  3. micellar compartmentation,
  4. enzyme-mediated biotransformations, and
  5. small molecule–macromolecular binding.”

Many low molecular weight compounds can exist

  • freely in solution,
  • bound to proteins, or
  • within organized aggregates such as lipoprotein complexes.

Therefore, quantitative comparison of plasma composition from

  • diseased individuals compared to matched controls provides an incomplete insight into plasma metabolism.

“It is not simply the concentrations of metabolites that must be investigated,

  • but their interactions with the proteins and lipoproteins within this complex web.

Rather than targeting specific metabolites of interest, Dr. Daykin’s metabolite–protein binding studies aim to study

  • the interactions of all detectable metabolites within the macromolecular sample.

Such activities can be studied through the use of diffusion-edited nuclear magnetic resonance (NMR) spectroscopy, in which one can assess

  • the effects of the biological matrix on the metabolites.

“This can lead to a more relevant and exact interpretation

  • for systems where metabolite–macromolecule interactions occur.”


Pushing the Limits

It is widely recognized that many drug candidates fail during development due to ancillary toxicity. Uwe Sauer, Ph.D., professor, and Nicola Zamboni, Ph.D., researcher, both at the Eidgenössische Technische Hochschule, Zürich (ETH Zürich), are applying

  • high-throughput intracellular metabolomics to understand
  • the basis of these unfortunate events and
  • head them off early in the course of drug discovery.

“Since metabolism is at the core of drug toxicity, we developed a platform for

  • measurement of 50–100 targeted metabolites by
  • a high-throughput system consisting of flow injection
  • coupled to tandem mass spectrometry.”

Using this approach, Dr. Sauer’s team focused on

  • the central metabolism of the yeast Saccharomyces cerevisiae, reasoning that
  • this core network would be most susceptible to potential drug toxicity.

Screening approximately 41 drugs that were administered at seven concentrations over three orders of magnitude, they observed changes in metabolome patterns at much lower drug concentrations without attendant physiological toxicity.

The group carried out statistical modeling of about

  • 60 metabolite profiles for each drug they evaluated.

This data allowed the construction of a “profile effect map” in which

  • the influence of each drug on metabolite levels can be followed, including off-target effects, which
  • provide an indirect measure of the possible side effects of the various drugs (a toy version of such a map is sketched below).
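A toy version of such a profile effect map is sketched below; the drugs, metabolites, and effect sizes are invented, and the actual statistical modeling on the ETH platform is far richer.

```python
# Hypothetical sketch of a "profile effect map": a drugs x metabolites matrix
# of effect sizes, from which shared and off-target patterns can be read.
import numpy as np

drugs = ["drug_1", "drug_2", "drug_3"]
metabolites = ["ATP", "glutamate", "F6P", "citrate"]

# log2 fold change vs untreated control (e.g., summarized over concentrations)
effects = np.array([
    [-1.8,  0.2,  0.9, -0.1],
    [-1.6,  0.1,  1.1,  0.0],
    [ 0.1,  2.2, -0.3,  1.4],   # a distinct pattern -> likely different target
])

# Group drugs by the similarity of their metabolite-effect profiles
corr = np.corrcoef(effects)
for i in range(len(drugs)):
    for j in range(i + 1, len(drugs)):
        print(f"{drugs[i]} vs {drugs[j]}: r = {corr[i, j]:+.2f}")
```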

“We have found that this approach is

  • at least 100 times as fast as other omics screening platforms,” Dr. Sauer says.

“Some drugs, including many anticancer agents,

  • disrupt metabolism long before affecting growth.”
killing cancer cells

Furthermore, they used the principle of 13C-based flux analysis, in which

  • metabolites labeled with 13C are used to follow the utilization of metabolic pathways in the cell (a simple example of the label bookkeeping is sketched below).
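As a minimal illustration of the label bookkeeping involved (the distribution below is invented), one common quantity is a metabolite’s average fractional 13C enrichment, computed from its mass-isotopomer distribution:

```python
# Average fractional 13C enrichment from a normalized mass-isotopomer
# distribution (fractions of molecules carrying 0, 1, ..., n labeled carbons).

def average_enrichment(mid: list) -> float:
    n = len(mid) - 1                      # number of carbon atoms
    return sum(i * f for i, f in enumerate(mid)) / n

# Hypothetical 3-carbon metabolite: 50% M+0, 30% M+1, 15% M+2, 5% M+3
print(average_enrichment([0.50, 0.30, 0.15, 0.05]))   # -> 0.25
```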

These 13C-determined intracellular responses of metabolic fluxes to drug treatment demonstrate

  • the functional performance of the network to be rather robust,

leading Dr. Sauer to the conclusion that

  • the phenotypic vigor he observes in response to drug challenges
  • is achieved by a flexible makeup of the metabolome.

Dr. Sauer is confident that it will be possible to expand the scope of these investigations to hundreds of thousands of samples per study. This will allow answers to the questions of

  • how cells establish a stable functioning network in the face of inevitable concentration fluctuations.

Is Now the Hour?

There is great enthusiasm and agitation within the biotech community for

  • metabolomics approaches as a means of reversing the dismal record of drug discovery

that has accumulated in the last decade.

While the concept clearly makes sense and is being widely applied today, there are many reasons why drugs fail in development, and metabolomics will not be a panacea for resolving all of these questions. It is too early at this point to recognize a trend or a track record, and it will take some time to see how this approach can aid in drug discovery and shorten the timeline for the introduction of new pharmaceutical agents.

Degree of binding correlated with function

Diagram of a two-photon excitation microscope

Part 2.  Biologists Find ‘Missing Link’ in the Production of Protein Factories in Cells

Biologists at UC San Diego have found

  • the “missing link” in the chemical system that
  • enables animal cells to produce ribosomes

—the thousands of protein “factories” contained within each cell that

  • manufacture all of the proteins needed to build tissue and sustain life.
‘Missing Link’

Their discovery, detailed in the June 23 issue of the journal Genes & Development, will not only force

  • a revision of basic textbooks on molecular biology, but also
  • provide scientists with a better understanding of
  • how to limit uncontrolled cell growth, such as cancer,
  • that might be regulated by controlling the output of ribosomes.

Ribosomes are responsible for the production of the wide variety of proteins that include

  1. enzymes;
  2. structural molecules, such as hair, skin, and bones;
  3. hormones like insulin; and
  4. components of our immune system such as antibodies.

Regarded as life’s most important molecular machine, ribosomes have been intensively studied by scientists (the 2009 Nobel Prize in Chemistry, for example, was awarded for studies of its structure and function). But until now researchers had not uncovered all of the details of how the proteins that are used to construct ribosomes are themselves produced.

In multicellular animals such as humans,

  • ribosomes are made up of about 80 different proteins
    (humans have 79 while some other animals have a slightly different number) as well as
  • four different kinds of RNA molecules.

In 1969, scientists discovered that

  • the synthesis of the ribosomal RNAs is carried out by specialized systems using two key enzymes:
  • RNA polymerase I and RNA polymerase III.

But until now, scientists were unsure if a complementary system was also responsible for

  • the production of the 80 proteins that make up the ribosome.

That’s essentially what the UC San Diego researchers headed by Jim Kadonaga, a professor of biology, set out to examine. What they found was the missing link—the specialized

  • system that allows ribosomal proteins themselves to be synthesized by the cell.

Kadonaga says that he and coworkers found that ribosomal proteins are synthesized via

  • a novel regulatory system with the enzyme RNA polymerase II and
  • a factor termed TRF2.

“For the production of most proteins,

  1. RNA polymerase II functions with
  2. a factor termed TBP,
  3. but for the synthesis of ribosomal proteins, it uses TRF2.”
  •  this specialized TRF2-based system for ribosome biogenesis
  • provides a new avenue for the study of ribosomes and
  • its control of cell growth, and

“it should lead to a better understanding and potential treatment of diseases such as cancer.”

Coordination of the transcriptome and metabolome

Coordination of the transcriptome and metabolome

the potential advantages conferred by distal-site protein synthesis

the potential advantages conferred by distal-site protein synthesis

Other authors of the paper were UC San Diego biologists Yuan-Liang Wang, Sascha Duttke and George Kassavetis, and Kai Chen, Jeff Johnston, and Julia Zeitlinger of the Stowers Institute for Medical Research in Kansas City, Missouri. Their research was supported by two grants from the National Institutes of Health (1DP2OD004561-01 and R01 GM041249).

Turning Off a Powerful Cancer Protein

Scientists have discovered how to shut down a master regulatory transcription factor that is

  • key to the survival of a majority of aggressive lymphomas,
  • which arise from the B cells of the immune system.

The protein, Bcl6, has long been considered too complex to target with a drug since it is also crucial

  • to the healthy functioning of many immune cells in the body, not just B cells gone bad.

The researchers at Weill Cornell Medical College report that it is possible

  • to shut down Bcl6 in diffuse large B-cell lymphoma (DLBCL)
  • while not affecting its vital function in T cells and macrophages
  • that are needed to support a healthy immune system.

If Bcl6 is completely inhibited, patients might suffer from systemic inflammation and atherosclerosis. The team conducted this new study to help clarify possible risks, as well as to understand

  • how Bcl6 controls the various aspects of the immune system.

The findings in this study were inspired from

  • preclinical testing of two Bcl6-targeting agents that Dr. Melnick and his Weill Cornell colleagues have developed
  • to treat DLBCLs.

These experimental drugs are

  • RI-BPI, a peptide mimic, and
  • the small molecule agent 79-6.

“This means the drugs we have developed against Bcl6 are more likely to be

  • significantly less toxic and safer for patients with this cancer than we realized,”

says Ari Melnick, M.D., professor of hematology/oncology and a hematologist-oncologist at NewYork-Presbyterian Hospital/Weill Cornell Medical Center.

Dr. Melnick says the discovery that

  • a master regulatory transcription factor can be targeted
  • offers implications beyond just treating DLBCL.

Recent studies from Dr. Melnick and others have revealed that

  • Bcl6 plays a key role in the most aggressive forms of acute leukemia, as well as certain solid tumors.

Bcl6 can control the type of immune cell that develops in the bone marrow—playing many roles

  • in the development of B cells, T cells, macrophages, and other cells—including a primary and essential role in
  • enabling B-cells to generate specific antibodies against pathogens.

According to Dr. Melnick, “When cells lose control of Bcl6,

  • lymphomas develop in the immune system.

Lymphomas are ‘addicted’ to Bcl6, and therefore

  • Bcl6 inhibitors powerfully and quickly destroy lymphoma cells,” .

The big surprise in the current study is that rather than functioning as a single molecular machine,

  • Bcl6 functions like a Swiss Army knife,
  • using different tools to control different cell types.

This multifunction paradigm could represent a general model for the functioning of other master regulatory transcription factors.

“In this analogy, the Swiss Army knife, or transcription factor, keeps most of its tools folded,

  • opening only the one it needs in any given cell type,”

He makes the following analogy:

  • “For B cells, it might open and use the knife tool;
  • for T cells, the cork screw;
  • for macrophages, the scissors.”

“This means that you only need to prevent the master regulator from using certain tools to treat cancer. You don’t need to eliminate the whole knife,” he adds. “In fact, we show that taking out the whole knife is harmful since

  • the transcription factor has many other vital functions that other cells in the body need.”

Prior to these study results, it was not known that a master regulator could separate its functions so precisely. Researchers hope this will be a major benefit to the treatment of DLBCL and perhaps other disorders that are influenced by Bcl6 and other master regulatory transcription factors.

The study is published in the journal Nature Immunology, in a paper titled “Lineage-specific functions of Bcl-6 in immunity and inflammation are mediated by distinct biochemical mechanisms”.

Part 3. Neuroscience

Vesicles influence function of nerve cells 
Oct, 06 2014        source: http://feeds.sciencedaily.com

Neurons (blue) which have absorbed exosomes (green) have increased levels of the enzyme catalase (red), which helps protect them against peroxides.

Tiny vesicles containing protective substances

  • which they transmit to nerve cells apparently
  • play an important role in the functioning of neurons.

As cell biologists at Johannes Gutenberg University Mainz (JGU) have discovered,

  • nerve cells can enlist the aid of mini-vesicles of neighboring glial cells
  • to defend themselves against stress and other potentially detrimental factors.

These vesicles, called exosomes, appear to stimulate the neurons on various levels:

  • they influence electrical stimulus conduction,
  • biochemical signal transfer, and
  • gene regulation.

Exosomes are thus multifunctional signal emitters

  • that can have a significant effect in the brain.
Exosome

The researchers in Mainz already observed in a previous study that

  • oligodendrocytes release exosomes on exposure to neuronal stimuli.
  • these are absorbed by the neurons and improve neuronal stress tolerance.

Oligodendrocytes, a type of glial cell, form an

  • insulating myelin sheath around the axons of neurons.

The exosomes transport protective proteins such as

  • heat shock proteins,
  • glycolytic enzymes, and
  • enzymes that reduce oxidative stress from one cell type to another,
  • but also transmit genetic information in the form of ribonucleic acids.

“As we have now discovered in cell cultures, exosomes seem to have a whole range of functions,” explained Dr. Eva-Maria Krämer-Albers. By means of their transmission activity, the small bubbles that are the vesicles

  • not only promote electrical activity in the nerve cells, but also
  • influence them on the biochemical and gene regulatory level.

“The extent of activities of the exosomes is impressive,” added Krämer-Albers. The researchers hope that the understanding of these processes will contribute to the development of new strategies for the treatment of neuronal diseases. Their next aim is to uncover how vesicles actually function in the brains of living organisms.

http://labroots.com/user/news/article/id/217438/title/vesicles-influence-function-of-nerve-cells

The above story is based on materials provided by Universität Mainz.

Universität Mainz. “Vesicles influence function of nerve cells.” ScienceDaily. ScienceDaily, 6 October 2014. www.sciencedaily.com/releases/2014/10/141006174214.htm

Neuroscientists use snail research to help explain “chemo brain”

10/08/2014
It is estimated that as many as half of patients taking cancer drugs experience a decrease in mental sharpness. While there have been many theories, what causes “chemo brain” has eluded scientists.

In an effort to solve this mystery, neuroscientists at The University of Texas Health Science Center at Houston (UTHealth) conducted an experiment in an animal memory model and their results point to a possible explanation. Findings appeared in The Journal of Neuroscience.

In the study involving a sea snail that shares many of the same memory mechanisms as humans and a drug used to treat a variety of cancers, the scientists identified

  • memory mechanisms blocked by the drug.

Then, they were able to counteract or

  • unblock the mechanisms by administering another agent.

“Our research has implications in the care of people given to cognitive deficits following drug treatment for cancer,” said John H. “Jack” Byrne, Ph.D., senior author, holder of the June and Virgil Waggoner Chair and Chairman of the Department of Neurobiology and Anatomy at the UTHealth Medical School. “There is no satisfactory treatment at this time.”

Byrne’s laboratory is known for its use of a large snail called Aplysia californica to further the understanding of the biochemical signaling among nerve cells (neurons).  The snails have large neurons that relay information much like those in humans.

When Byrne’s team compared cell cultures taken from normal snails to

  • those administered a dose of a cancer drug called doxorubicin,

the investigators pinpointed a neuronal pathway

  • that was no longer passing along information properly.

With the aid of an experimental drug,

  • the scientists were able to reopen the pathway.

Unfortunately, this drug would not be appropriate for humans, Byrne said. “We want to identify other drugs that can rescue these memory mechanisms,” he added.

According to the American Cancer Society, some of the distressing mental changes cancer patients experience may last a short time or go on for years.

Byrne’s UT Health research team includes co-lead authors Rong-Yu Liu, Ph.D., and Yili Zhang, Ph.D., as well as Brittany Coughlin and Leonard J. Cleary, Ph.D. All are affiliated with the W.M. Keck Center for the Neurobiology of Learning and Memory.

Byrne and Cleary also are on the faculty of The University of Texas Graduate School of Biomedical Sciences at Houston. Coughlin is a student at the school, which is jointly operated by UT Health and The University of Texas MD Anderson Cancer Center.

The study titled “Doxorubicin Attenuates Serotonin-Induced Long-Term Synaptic Facilitation by Phosphorylation of p38 Mitogen-Activated Protein Kinase” received support from National Institutes of Health grant (NS019895) and the Zilkha Family Discovery Fellowship.

Doxorubicin Attenuates Serotonin-Induced Long-Term Synaptic Facilitation by Phosphorylation of p38 Mitogen-Activated Protein Kinase

Source: Univ. of Texas Health Science Center at Houston

http://www.rdmag.com/news/2014/10/neuroscientists-use-snail-research-help-explain-E2_9_Cchemo-brain

Doxorubicin Attenuates Serotonin-Induced Long-Term Synaptic Facilitation by Phosphorylation of p38 Mitogen-Activated Protein Kinase

Rong-Yu Liu*, Yili Zhang*, Brittany L. Coughlin, Leonard J. Cleary, and John H. Byrne
The Journal of Neuroscience, 1 Oct 2014, 34(40): 13289-13300;
http://dx.doi.org/10.1523/JNEUROSCI.0538-14.2014

Doxorubicin (DOX) is an anthracycline used widely for cancer chemotherapy. Its primary mode of action appears to be

  • topoisomerase II inhibition, DNA cleavage, and free radical generation.

However, in non-neuronal cells, DOX also inhibits the expression of

  • dual-specificity phosphatases (also referred to as MAPK phosphatases) and thereby
  1. inhibits the dephosphorylation of extracellular signal-regulated kinase (ERK) and
  2. p38 mitogen-activated protein kinase (p38 MAPK),
  3. two MAPK isoforms important for long-term memory (LTM) formation.

Activation of these kinases by DOX in neurons, if present,

  • could have secondary effects on cognitive functions, such as learning and memory.

The present study used cultures of rat cortical neurons and sensory neurons (SNs) of Aplysia

  • to examine the effects of DOX on levels of phosphorylated ERK (pERK) and
  • phosphorylated p38 (p-p38) MAPK.

In addition, Aplysia neurons were used to examine the effects of DOX on

  • long-term enhanced excitability, long-term synaptic facilitation (LTF), and
  • long-term synaptic depression (LTD).

DOX treatment led to elevated levels of

  • pERK and p-p38 MAPK in SNs and cortical neurons.

In addition, it increased phosphorylation of

  • the downstream transcriptional repressor cAMP response element-binding protein 2 in SNs.

DOX treatment blocked serotonin-induced LTF and enhanced LTD induced by the neuropeptide Phe-Met-Arg-Phe-NH2. The block of LTF appeared to be attributable to

  • overriding inhibitory effects of p-p38 MAPK, because
  • LTF was rescued in the presence of an inhibitor of p38 MAPK
    (SB203580 [4-(4-fluorophenyl)-2-(4-methylsulfinylphenyl)-5-(4-pyridyl)-1H-imidazole]).

These results suggest that acute application of DOX might impair the formation of LTM via the p38 MAPK pathway.
Terms: Aplysia, chemotherapy, ERK, p38 MAPK, serotonin, synaptic plasticity

Technology that controls brain cells with radio waves earns early BRAIN grant

10/08/2014


BRAIN control: The new technology uses radio waves to activate or silence cells remotely. The bright spots above represent cells with increased calcium after treatment with radio waves, a change that would allow neurons to fire.

A proposal to develop a new way to

  • remotely control brain cells

from Sarah Stanley, a research associate in Rockefeller University’s Laboratory of Molecular Genetics, headed by Jeffrey M. Friedman, is

  • among the first to receive funding from U.S. President Barack Obama’s BRAIN initiative.

The project will make use of a technique called

  • radiogenetics that combines the use of radio waves or magnetic fields with
  • nanoparticles to turn neurons on or off.

The National Institutes of Health is one of four federal agencies involved in the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative. Following in the ambitious footsteps of the Human Genome Project, the BRAIN initiative seeks

  • to create a dynamic map of the brain in action,

a goal that requires the development of new technologies. The BRAIN initiative working group, which outlined the broad scope of the ambitious project, was co-chaired by Rockefeller’s Cori Bargmann, head of the Laboratory of Neural Circuits and Behavior.

Stanley’s grant, for $1.26 million over three years, is one of 58 projects to get BRAIN grants, the NIH announced. The NIH’s plan for its part of this national project, which has been pitched as “America’s next moonshot,” calls for $4.5 billion in federal funds over 12 years.

The technology Stanley is developing would

  • enable researchers to manipulate the activity of neurons, as well as other cell types,
  • in freely moving animals in order to better understand what these cells do.

Other techniques for controlling selected groups of neurons exist, but her new nanoparticle-based technique has a

  • unique combination of features that may enable new types of experimentation.
  • It would allow researchers to rapidly activate or silence neurons within a small area of the brain or
  • dispersed across a larger region, including those in difficult-to-access locations.

Stanley also plans to explore the potential this method has for use treating patients.

“Francis Collins, director of the NIH, has discussed

  • the need for studying the circuitry of the brain,
  • which is formed by interconnected neurons.

Our remote-control technology may provide a tool with which researchers can ask new questions about the roles of complex circuits in regulating behavior,” Stanley says.
Rockefeller University’s Laboratory of Molecular Genetics
Source: Rockefeller Univ.

Part 4.  Cancer

Two Proteins Found to Block Cancer Metastasis

Why do some cancers spread while others don’t? Scientists have now demonstrated that

  • metastatic incompetent cancers actually “poison the soil”
  • by generating a micro-environment that blocks cancer cells
  • from settling and growing in distant organs.

The “seed and the soil” hypothesis proposed by Stephen Paget in 1889 is now widely accepted to explain how

  • cancer cells (seeds) are able to generate fertile soil (the micro-environment)
  • in distant organs that promotes cancer’s spread.

However, this concept had not explained why some tumors do not spread or metastasize.

The researchers, from Weill Cornell Medical College, found that

  • two key proteins involved in this process work by
  • dramatically suppressing cancer’s spread.

The study offers hope that a drug based on these

  • potentially therapeutic proteins, prosaposin and Thrombospondin 1 (Tsp-1),

might help keep human cancer at bay and from metastasizing.

Scientists don’t understand why some tumors wouldn’t “want” to spread. It goes against their “job description,” says the study’s senior investigator, Vivek Mittal, Ph.D., an associate professor of cell and developmental biology in cardiothoracic surgery and director of the Neuberger Berman Foundation Lung Cancer Laboratory at Weill Cornell Medical College. He theorizes that metastasis occurs when

  • the barriers that the body throws up to protect itself against cancer fail.

But there are some tumors in which some of the barriers may still be intact. “So that suggests

  • those primary tumors will continue to grow, but that
  • an innate protective barrier still exists that prevents them from spreading and invading other organs,”

The researchers found that, like typical tumors,

  • metastasis-incompetent tumors also send out signaling molecules
  • that establish what is known as the “premetastatic niche” in distant organs.

These niches, composed of bone marrow cells and various growth factors, have been described previously by others, including Dr. Mittal, as the fertile “soil” in which the disseminated cancer cell “seeds” grow.

Weill Cornell’s Raúl Catena, Ph.D., a postdoctoral fellow in Dr. Mittal’s laboratory, found an important difference between the tumor types. Metastatic-incompetent tumors

  • systemically increased expression of Tsp-1, a molecule known to fight cancer growth.
  • Increased Tsp-1 production was found specifically in the bone marrow myeloid cells
  • that comprise the metastatic niche.

These results were striking because, Dr. Mittal says, for the first time

  • the bone marrow-derived myeloid cells were implicated as
  • the main producers of Tsp-1.

In addition, Weill Cornell and Harvard researchers found that

  • prosaposin secreted predominantly by the metastatic-incompetent tumors
  • increased expression of Tsp-1 in the premetastatic lungs.

Thus, Dr. Mittal posits that prosaposin works in combination with Tsp-1

  • to convert pro-metastatic bone marrow myeloid cells in the niche
  • into cells that are not hospitable to cancer cells that spread from a primary tumor.
  • “The very same myeloid cells in the niche that we know can promote metastasis
  • can also be induced under the command of the metastatic incompetent primary tumor to inhibit metastasis,”

The research team found that

  • the Tsp-1–inducing activity of prosaposin
  • was contained in only a 5-amino acid peptide region of the protein, and
  • this peptide alone induced Tsp-1 in the bone marrow cells and
  • effectively suppressed metastatic spread in the lungs
  • in mouse models of breast and prostate cancer.

This 5-amino acid peptide with Tsp-1–inducing activity

  • has the potential to be used as a therapeutic agent against metastatic cancer.

The scientists have begun to test prosaposin in other tumor types or metastatic sites.

Dr. Mittal says that the clinical implications of the study are:

  • “Not only is it theoretically possible to design a prosaposin-based drug or drugs
  • that induce Tsp-1 to block cancer spread, but
  • you could potentially create noninvasive prognostic tests
  • to predict whether a cancer will metastasize.”

The study was reported in the April 30 issue of Cancer Discovery, in a paper titled “Bone Marrow-Derived Gr1+ Cells Can Generate a Metastasis-Resistant Microenvironment Via Induced Secretion of Thrombospondin-1”.

Disabling Enzyme Cripples Tumors, Cancer Cells

First Step of Metastasis

Published: Sep 05, 2013  http://www.technologynetworks.com/Metabolomics/news.aspx?id=157138

Knocking out a single enzyme dramatically cripples the ability of aggressive cancer cells to spread and grow tumors.

The paper, published in the journal Proceedings of the National Academy of Sciences, sheds new light on the importance of lipids, a group of molecules that includes fatty acids and cholesterol, in the development of cancer.

Researchers have long known that cancer cells metabolize lipids differently than normal cells. Levels of ether lipids – a class of lipids that are harder to break down – are particularly elevated in highly malignant tumors.

“Cancer cells make and use a lot of fat and lipids, and that makes sense because cancer cells divide and proliferate at an accelerated rate, and to do that,

  • they need lipids, which make up the membranes of the cell,”

said study principal investigator Daniel Nomura, assistant professor in UC Berkeley’s Department of Nutritional Sciences and Toxicology. “Lipids have a variety of uses for cellular structure, but what we’re showing with our study is that

  • lipids can send signals that fuel cancer growth.”

In the study, Nomura and his team tested the effects of reducing ether lipids on human skin cancer cells and primary breast tumors. They targeted an enzyme,

  • alkylglycerone phosphate synthase, or AGPS,
  • known to be critical to the formation of ether lipids.

The researchers confirmed that

  1. AGPS expression increased when normal cells turned cancerous.
  2. Inactivating AGPS substantially reduced the aggressiveness of the cancer cells.

“The cancer cells were less able to move and invade,” said Nomura.

The researchers also compared the impact of

  • disabling the AGPS enzyme in mice that had been injected with cancer cells.

Nomura observes, “Among the mice that had the AGPS enzyme inactivated,

  • the tumors were nonexistent,”

“The mice that did not have this enzyme disabled rapidly developed tumors.”

The researchers determined that

  • inhibiting AGPS expression depleted the cancer cells of ether lipids.
  • inhibiting AGPS also altered levels of other types of lipids important to the ability of the cancer cells to survive and spread, including
    • prostaglandins and acyl phospholipids.

“What makes AGPS stand out as a treatment target is that the enzyme seems to simultaneously

  • regulate multiple aspects of lipid metabolism
  • important for tumor growth and malignancy.”

Future steps include the

  • development of AGPS inhibitors for use in cancer therapy.

“This study sheds considerable light on the important role that AGPS plays in ether lipid metabolism in cancer cells, and it suggests that

  • inhibitors of this enzyme could impair tumor formation,”

said Benjamin Cravatt, professor and chair of chemical physiology at The Scripps Research Institute, who was not involved in the UC Berkeley study.

Agilent Technologies Thought Leader Award Supports Translational Research Program
Published: Mon, March 04, 2013

The award will support Dr DePinho’s research into

  • metabolic reprogramming in the earliest stages of cancer.

Agilent Technologies Inc. announces that Dr. Ronald A. DePinho, a world-renowned oncologist and researcher, has received an Agilent Thought Leader Award.

DePinho is president of the University of Texas MD Anderson Cancer Center. DePinho and his team hope to discover and characterize

  • alterations in metabolic flux during tumor initiation and maintenance, and to identify biomarkers for early detection of pancreatic cancer together with
  • novel therapeutic targets.

Researchers on his team will work with scientists from the university’s newly formed Institute of Applied Cancer Sciences.

The Agilent Thought Leader Award provides funds to support personnel as well as a state-of-the-art Agilent 6550 iFunnel Q-TOF LC/MS system.

“I am extremely pleased to receive this award for metabolomics research, as the survival rates for pancreatic cancer have not significantly improved over the past 20 years,” DePinho said. “This technology will allow us to

  • rapidly identify new targets that drive the formation, progression and maintenance of pancreatic cancer.

Discoveries from this research will also lead to

  • the development of effective early detection biomarkers and novel therapeutic interventions.”

“We are proud to support Dr. DePinho’s exciting translational research program, which will make use of

  • metabolomics and integrated biology workflows and solutions in biomarker discovery,”

said Patrick Kaltenbach, Agilent vice president, general manager of the Liquid Phase Division, and the executive sponsor of this award.

The Agilent Thought Leader Program promotes fundamental scientific advances by support of influential thought leaders in the life sciences and chemical analysis fields.

The covalent modifier Nedd8 is critical for the activation of Smurf1 ubiquitin ligase in tumorigenesis

Ping Xie, Minghua Zhang, Shan He, Kefeng Lu, Yuhan Chen, Guichun Xing, et al.
Nature Communications
  2014; 5(3733).  http://dx.doi.org/10.1038/ncomms4733

Neddylation, the covalent attachment of the ubiquitin-like protein Nedd8 to E3 ligases of the Cullin-RING family,

  • regulates their ubiquitylation activity.

However, regulation of HECT ligases by neddylation has not been reported to date. Here we show that

  • the C2-WW-HECT ligase Smurf1 is activated by neddylation.

Smurf1 physically interacts with

  1. Nedd8 and Ubc12,
  2. forms a Nedd8-thioester intermediate, and then
  3. catalyses its own neddylation on multiple lysine residues.

Intriguingly, this autoneddylation needs

  • an active site at C426 in the HECT N-lobe.

Neddylation of Smurf1 potently enhances

  • ubiquitin E2 recruitment and
  • augments the ubiquitin ligase activity of Smurf1.

The regulatory role of neddylation

  • is conserved in human Smurf1 and yeast Rsp5.

Furthermore, in human colorectal cancers,

  • the elevated expression of Smurf1, Nedd8, NAE1 and Ubc12
  • correlates with cancer progression and poor prognosis.

These findings provide evidence that

  • neddylation is important in HECT ubiquitin ligase activation and
  • shed new light on the tumour-promoting role of Smurf1.
Swinging domains in HECT E3

Figure 1: Smurf1 expression is elevated in colorectal cancer tissues.

Figure 2: Positive correlation of Smurf1 expression with Nedd8 and its interacting enzymes in colorectal cancer.

Figure 3: Smurf1 interacts with Ubc12.

Figure 4: Nedd8 is attached to Smurf1 through C426-catalysed autoneddylation.

Figure 5: Neddylation of Smurf1 activates its ubiquitin ligase activity.


The deubiquitylase USP33 discriminates between RALB functions in autophagy and innate immune response

M Simicek, S Lievens, M Laga, D Guzenko, VN. Aushev, et al.
Nature Cell Biology 2013; 15, 1220–1230    http://dx.doi.org/10.1038/ncb2847

The RAS-like GTPase RALB mediates cellular responses to nutrient availability or viral infection by respectively

  • engaging two components of the exocyst complex, EXO84 and SEC5.
  1. RALB employs SEC5 to trigger innate immunity signalling, whereas
  2. RALB–EXO84 interaction induces autophagocytosis.

How this differential interaction is achieved molecularly by the RAL GTPase remains unknown.

We found that whereas GTP binding

  • turns on RALB activity,

ubiquitylation of RALB at Lys 47

  • tunes its activity towards a particular effector.

Specifically, ubiquitylation at Lys 47

  • sterically inhibits RALB binding to EXO84, while
  • facilitating its interaction with SEC5.

Double-stranded RNA promotes

  • RALB ubiquitylation and
  • SEC5–TBK1 complex formation.

In contrast, nutrient starvation

  • induces RALB deubiquitylation
  • by accumulation and relocalization of the deubiquitylase USP33
  • to RALB-positive vesicles.

Deubiquitylated RALB

  • promotes the assembly of the RALB–EXO84–beclin-1 complexes
  • driving autophagosome formation. Thus,
  • ubiquitylation within the effector-binding domain
  • provides the switch for the dual functions of RALB in
    • autophagy and innate immune responses.

Part 5. Metabolic Syndrome

Single Enzyme is Necessary for Development of Diabetes

Published: Aug 20, 2014 http://www.technologynetworks.com/Metabolomics/news.aspx?ID=169416

12-LO enzyme promotes the obesity-induced oxidative stress in the pancreatic cells.

An enzyme called 12-LO promotes the obesity-induced oxidative stress in the pancreatic cells that leads

  • to pre-diabetes and diabetes.

12-LO’s enzymatic action is the last step in

  • the production of certain small molecules that harm the cell,

according to a team from Indiana University School of Medicine, Indianapolis.

The findings will enable the development of drugs that can interfere with this enzyme, preventing or even reversing diabetes. The research is published ahead of print in the journal Molecular and Cellular Biology.

In earlier studies, these researchers and their collaborators at Eastern Virginia Medical School showed that

  • 12-LO (which stands for 12-lipoxygenase) is present in these cells
  • only in people who become overweight.

The harmful small molecules resulting from 12-LO’s enzymatic action are known as HETEs, short for hydroxyeicosatetraenoic acid.

  1. HETEs harm the mitochondria, which then
  2. fail to produce sufficient energy to enable
  3. the pancreatic cells to manufacture the necessary quantities of insulin.

For the study, the investigators genetically engineered mice that

  • lacked the gene for 12-LO exclusively in their pancreas cells.

Mice were fed either a low-fat or a high-fat diet.

Both the control mice and the knockout mice on the high fat diet

  • developed obesity and insulin resistance.

The investigators also examined the pancreatic beta cells of both knockout and control mice, using both microscopic studies and molecular analysis. Those from the knockout mice were intact and healthy, while

  • those from the control mice showed oxidative damage,
  • demonstrating that 12-LO and the resulting HETEs
  • caused the beta cell failure.

Mirmira notes that the fatty diet used in the study was the Western diet, which comprises mostly saturated (“bad”) fats. Based partly on a recent study of related metabolic pathways, he says that

  • the unsaturated and mono-unsaturated fats-which comprise most fats in the healthy,
  • relatively high fat Mediterranean diet-are unlikely to have the same effects.

“Our research is the first to show that 12-LO in the beta cell

  • is the culprit in the development of pre-diabetes, following high fat diets,” says Mirmira.

“Our work also lends important credence to the notion that

  • the beta cell is the primary defective cell in virtually all forms of diabetes and pre-diabetes.”

A New Player in Lipid Metabolism Discovered

Published: Aug 18, 2014  http://www.technologynetworks.com/Metabolomics/news.aspx?ID=169356

Specially engineered mice gained no weight, and normal counterparts became obese

  • on the same high-fat, obesity-inducing Western diet.

Specially engineered mice that lacked a particular gene did not gain weight

  • when fed a typical high-fat, obesity-inducing Western diet.

Yet, these mice ate the same amount as their normal counterparts that became obese.

The mice were engineered with fat cells that lacked a gene called SEL1L,

  • known to be involved in the clearance of mis-folded proteins
  • in the cell’s protein making machinery called the endoplasmic reticulum (ER).

When mis-folded proteins are not cleared but accumulate,

  • they destroy the cell and contribute to such diseases as
  1. mad cow disease,
  2. Type 1 diabetes and
  3. cystic fibrosis.

“The million-dollar question is why don’t these mice gain weight? Is this related to its inability to clear mis-folded proteins in the ER?” said Ling Qi, associate professor of molecular and biochemical nutrition and senior author of the study published online July 24 in Cell Metabolism. Haibo Sha, a research associate in Qi’s lab, is the paper’s lead author.

Interestingly, the experimental mice developed a host of other problems, including

  • postprandial hypertriglyceridemia,
  • fatty livers.

“Although we are yet to find out whether these conditions contribute to the lean phenotype, we found that

  • there was a lipid partitioning defect in the mice lacking SEL1L in fat cells,
  • where fat cells cannot store fat [lipids], and consequently
  • fat goes to the liver.

During the investigation of possible underlying mechanisms, we discovered

  • a novel function for SEL1L as a regulator of lipid metabolism,” said Qi.

Sha said, “We were very excited to find that

  • SEL1L is required for the intracellular trafficking of
  • lipoprotein lipase (LPL), acting as a chaperone,”

and added that “Using several tissue-specific knockout mouse models,

  • we showed that this is a general phenomenon,”

Without LPL, lipids remain in the circulation;

  • fat and muscle cells cannot absorb fat molecules for storage and energy combustion.

People with LPL mutations develop

  • postprandial hypertriglyceridemia similar to
  • conditions found in fat cell-specific SEL1L-deficient mice, said Qi.

Future work will investigate the

  • role of SEL1L in human patients carrying LPL mutations and
  • determine why fat cell-specific SEL1L-deficient mice remain lean under Western diets, said Sha.

Co-authors include researchers from Cedars-Sinai Medical Center in Los Angeles; Wageningen University in the Netherlands; Georgia State University; University of California, Los Angeles; and the Medical College of Soochow University in China.

The study was funded by the U.S. National Institutes of Health, the Netherlands Organization for Health Research and Development, the Cedars-Sinai Medical Center, the Chinese National Science Foundation, the American Diabetes Association, Cornell’s Center for Vertebrate Genomics and the Howard Hughes Medical Institute.

Part 6. Biomarkers

Biomarkers Take Center Stage

Josh P. Roberts
GEN May 1, 2013 (Vol. 33, No. 9)  http://www.genengnews.com/

While work with biomarkers continues to grow, scientists are also grappling with research-related bottlenecks, such as

  1. affinity reagent development,
  2. platform reproducibility, and
  3. sensitivity.

Biomarkers by definition indicate some state or process that generally occurs

  • at a spatial or temporal distance from the marker itself, and

it would not be an exaggeration to say that biomedicine has become infatuated with them:

  1. where to find them,
  2. when they may appear,
  3. what form they may take, and
  4. how they can be used to diagnose a condition or
  5. predict whether a therapy may be successful.

Biomarkers are on the agenda of many if not most industry gatherings, and in cases such as Oxford Global’s recent “Biomarker Congress” and the GTC “Biomarker Summit”, they hold the naming rights. There, some basic principles were built upon, amended, and sometimes challenged.

In oncology, for example, biomarker discovery is often predicated on the premise that

  • proteins shed from a tumor will traverse to and persist in, and be detectable in, the circulation.

By quantifying these proteins—singularly or as part of a larger “signature”—the hope is

  1. to garner information about the molecular characteristics of the cancer
  2. that will help with cancer detection and
  3. personalization of the treatment strategy.

Yet this approach has not yet turned into the panacea that was hoped for. Bottlenecks exist in

  • affinity reagent development,
  • platform reproducibility, and
  • sensitivity.

There is also a dearth of understanding of some of the

  • fundamental principles of biomarker biology,

said Parag Mallick, Ph.D., whose lab at Stanford University is “working on trying to understand where biomarkers come from.”

There are dogmas saying that

  • circulating biomarkers come solely from secreted proteins.

But Dr. Mallick’s studies indicate that fully

  • 50% of circulating proteins may come from intracellular sources or
  • proteins that are annotated as such.

“We don’t understand the processes governing

  • which tumor-derived proteins end up in the blood.”

Other questions include “how does the size of a tumor affect how much of a given protein will be in the blood?”—perhaps

  • the tumor is necrotic at the center, or
  • it’s hypervascular or hypovascular.

He points out, “The problem is that these are highly nonlinear processes at work, and

  • there is a large number of factors that might affect the answer to that question.”

Their research focuses on using

  1. mass spectrometry and
  2. computational analysis
  • to characterize the biophysical properties of the circulating proteome, and
  • relate these to measurements made of the tumor itself.

Furthermore, he said, “We’ve observed that the proteins that are likely to

  • first show up and persist in the circulation
  • are more stable than proteins that don’t,” and
  • “we can quantify how significant the effect is.”

The goal is ultimately to be able to

  1. build rigorous, formal mathematical models that will allow something measured in the blood
  2. to be tied back to the molecular biology taking place in the tumor.

And conversely, to use those models

  • to predict from a tumor what will be found in the circulation.

“Ultimately, the models will allow you to connect the dots between

  • what you measure in the blood and the biology of the tumor.”
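The forward problem Dr. Mallick describes can be caricatured with a one-compartment model in which the tumor sheds protein into plasma at a constant rate and the protein is cleared first-order. The sketch below is a minimal illustration of that idea, not Dr. Mallick’s actual model; every parameter name and value is an invented assumption.

```python
# Minimal sketch (not Dr. Mallick's model): tumor shedding vs. first-order
# clearance sets circulating biomarker levels. All values are illustrative.

import numpy as np

def plasma_concentration(t_h, tumor_mass_g, shed_ng_per_g_h, half_life_h, plasma_vol_l=3.0):
    """C(t) in ng/L solving dC/dt = shed*mass/V - k*C with C(0) = 0."""
    k = np.log(2) / half_life_h                                  # clearance (1/h)
    c_ss = shed_ng_per_g_h * tumor_mass_g / (plasma_vol_l * k)   # steady state
    return c_ss * (1.0 - np.exp(-k * t_h))

t = np.array([6.0, 24.0, 72.0])  # hours after shedding begins
# A stable protein (24 h half-life) accumulates to ~12x the steady-state level
# of an unstable one (2 h half-life) shed at the same rate, consistent with
# the observation that stable proteins dominate what persists in circulation.
print(plasma_concentration(t, tumor_mass_g=1.0, shed_ng_per_g_h=10.0, half_life_h=24.0))
print(plasma_concentration(t, tumor_mass_g=1.0, shed_ng_per_g_h=10.0, half_life_h=2.0))
```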

Bound for Affinity Arrays

Affinity reagents are the main tools for large-scale protein biomarker discovery. And while this has tended to mean antibodies (or their derivatives), other affinity reagents are demanding a place in the toolbox.

Affimers, a type of affinity reagent being developed by Avacta, consist of

  1. a biologically inert, biophysically stable protein scaffold
  2. containing three variable regions into which
  3. distinct peptides are inserted.

The resulting three-dimensional surface formed by these peptides

  • interacts and binds to proteins and other molecules in solution,
  • much like the antigen-binding site of antibodies.

Unlike antibodies, Affimers are relatively small (13 kDa),

  • non-post-translationally modified proteins
  • that can readily be expressed in bacterial culture.

They may be made to bind surfaces through unique residues

  • engineered onto the opposite face of the Affimer,
  • allowing the binding site to be exposed to the target in solution.

“We don’t seem to see in what we’ve done so far

  • any real loss of activity or functionality of Affimers when bound to surfaces—

they’re very robust,” said CEO Alastair Smith, Ph.D.

Avacta is taking advantage of this stability and its large libraries of Affimers to develop

  • very large affinity microarrays for
  • drug and biomarker discovery.

To date they have printed arrays with around 20–25,000 features, and Dr. Smith is “sure that we can get toward about 50,000 on a slide,” he said. “There’s no real impediment to us doing that other than us expressing the proteins and getting on with it.”

Customers will be provided with these large, complex “naïve” discovery arrays, readable with standard equipment. The plan is for the company to then “support our customers by providing smaller arrays with

  • the Affimers that are binding targets of interest to them,” Dr. Smith foretold.

And since the intellectual property rights are unencumbered,

  • Affimers in those arrays can be licensed to the end users
  • to develop diagnostics that can be validated as time goes on.

Around 20,000-Affimer discovery arrays were recently tested by collaborator Professor Ann Morgan of the University of Leeds with pools of unfractionated serum from patients with symptoms of inflammatory disease. The arrays

  • “rediscovered” elevated C-reactive protein (CRP, the clinical gold standard marker)
  • as well as uncovered an additional 22 candidate biomarkers.
  • Other candidates, combined with CRP, appear able to distinguish between different diseases such as
  1. rheumatoid arthritis,
  2. psoriatic arthritis,
  3. SLE, or
  4. giant cell arteritis.

Epigenetic Biomarkers

Sometimes biomarkers are used not to find disease but

  • to distinguish healthy human cell types, with
  •  examples being found in flow cytometry and immunohistochemistry.

These widespread applications, however, are difficult to standardize, being

  • subject to arbitrary or subjective gating protocols and other imprecise criteria.

Epiontis instead uses an epigenetic approach. “What we need is a unique marker that is

  • demethylated only in one cell type and
  • methylated in all the other cell types,”

Each cell of the right cell type will have

  • two demethylated copies of a certain gene locus,
  • allowing them to be enumerated by quantitative PCR.
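The arithmetic behind that enumeration is straightforward. The sketch below illustrates the copy-counting logic only, not Epiontis’ actual protocol; it assumes one qPCR assay counting demethylated copies of the marker locus, a reference assay counting total genome copies, and two alleles per diploid cell, with all numbers invented.

```python
# Illustrative copy-counting sketch (not Epiontis' assay): two demethylated
# marker alleles per target-type cell, two reference alleles per diploid cell.

def cell_type_fraction(demethylated_copies: float, reference_copies: float) -> float:
    """Fraction of cells belonging to the marker-defined cell type."""
    cells_of_type = demethylated_copies / 2.0
    total_cells = reference_copies / 2.0
    return cells_of_type / total_cells

# e.g., 1,500 demethylated Foxp3-locus copies against 60,000 genome copies
print(f"Treg fraction: {cell_type_fraction(1500, 60000):.1%}")  # -> 2.5%
```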

The biggest challenge is finding that unique epigenetic marker. To do so they look through the literature for proteins and genes described as playing a role in the cell type’s biology, and then

  • look at the methylation patterns to see if one can be used as a marker.

They also “use customized Affymetrix chips to look at the

  • differential epigenetic status of different cell types on a genomewide scale,”

explained CBO and founder Ulrich Hoffmueller, Ph.D.

The company currently has a panel of 12 assays for 12 immune cell types. Among these is an assay for

  • regulatory T (Treg) cells that queries the Foxp3 gene—which is uniquely demethylated in Treg
  • even though it is transiently expressed in activated T cells of other subtypes.

Also assayed are Th17 cells, difficult to detect by flow cytometry because

  • “the cells have to be stimulated in vitro,” he pointed out.

Developing New Assays for Cancer Biomarkers

Researchers at Myriad RBM and the Cancer Prevention Research Institute of Texas are collaborating to develop

  • new assays for cancer biomarkers on the Myriad RBM Multi-Analyte Profile (MAP) platform.

The release of OncologyMAP 2.0 expanded Myriad RBM’s biomarker menu to over 250 analytes, which can be measured from a small single sample, according to the company. Using this menu, L. Stephen et al., published a poster, “Analysis of Protein Biomarkers in Prostate and Colorectal Tumor Lysates,” which showed the results of

  • a survey of proteins relevant to colorectal (CRC) and prostate (PC) tumors
  • to identify potential proteins of interest for cancer research.

The study looked at CRC and PC tumor lysates and found that 102 of the 115 proteins showed levels above the lower limit of quantification.

  • Four markers were significantly higher in PC and 10 were greater in CRC.

For most of the analytes, duplicate sections of the tumor were similar, although some analytes did show differences. Among the CRC samples, tumor number 4 showed differences for CEA and tumor number 2 for uPA.

Thirty analytes were shown to be

  • different in CRC tumor compared to its adjacent tissue.
  • Ten of the analytes were higher in adjacent tissue compared to CRC.
  • Eighteen of the markers examined demonstrated significant correlations of CRC tumor concentration to serum levels.

This suggests, the authors conclude, that the OncologyMAP 2.0 platform “provides a good method for studying changes in tumor levels because many proteins can be assessed with a very small sample.”

Clinical Test Development with MALDI-ToF

While there have been many attempts to translate results from early discovery work on the serum proteome into clinical practice, few of these efforts have progressed past the discovery phase.

Matrix-assisted laser desorption/ionization-time of flight (MALDI-ToF) mass spectrometry on unfractionated serum/plasma samples offers many practical advantages over alternative techniques, and does not require

  • a shift from discovery to development and commercialization platforms.

Biodesix claims it has been able to develop the technology into

  • a reproducible, high-throughput tool to
  • routinely measure protein abundance from serum/plasma samples.

“We improved data-analysis algorithms to

  • reproducibly obtain quantitative measurements of relative protein abundance from MALDI-ToF mass spectra.”

Heinrich Röder, CTO, points out that the MALDI-ToF measurements

  • are combined with clinical outcome data using
  • modern learning theory techniques
  • to define specific disease states
  • based on a patient’s serum protein content,”
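One way to picture how outcome-labeled spectra can define a disease state is a nearest-neighbor rule over normalized peak intensities, as in the sketch below. This is a hypothetical illustration; the features, labels, and the choice of k-nearest neighbors are assumptions, not Biodesix’s proprietary VeriStrat algorithm.

```python
# Hypothetical sketch (not the proprietary test): assign a new serum spectrum
# the majority outcome label among its k most similar training spectra.

from collections import Counter
import numpy as np

def knn_state(train_X, train_y, x, k=5):
    dists = np.linalg.norm(train_X - x, axis=1)      # distance between spectra
    votes = [train_y[i] for i in np.argsort(dists)[:k]]
    return Counter(votes).most_common(1)[0][0]

rng = np.random.default_rng(42)
good = rng.normal(1.0, 0.2, size=(40, 8))            # toy 8-peak intensity vectors
poor = rng.normal(1.5, 0.2, size=(40, 8))            # outcome-labeled spectra
X, y = np.vstack([good, poor]), ["Good"] * 40 + ["Poor"] * 40

print(knn_state(X, y, rng.normal(1.5, 0.2, size=8)))  # most likely "Poor"
```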

The clinical utility of the identification of these disease states can be investigated through a retrospective analysis of differing sample sets. For example, Biodesix clinically validated its first commercialized serum proteomic test, VeriStrat®, in 85 different retrospective sample sets.

Röder adds that “It is becoming increasingly clear that

  • the patients whose serum is characterized as VeriStrat Poor show
  • consistently poor outcomes irrespective of
  1. tumor type,
  2. histology, or
  3. molecular tumor characteristics,”

MALDI-ToF mass spectrometry, in its standard implementation,

  • allows for the observation of around 100 mostly high-abundant serum proteins.

Further, “while this does not limit the usefulness of tests developed from differential expression of these proteins,

  • the discovery potential would be greatly enhanced
  • if we could probe deeper into the proteome
  • while not giving up the advantages of the MALDI-ToF approach,”

Biodesix reports that its new MALDI approach, Deep MALDI™, can perform

  • simultaneous quantitative measurement of more than 1,000 serum protein features (or peaks) from 10 µL of serum in a high-throughput manner.
  • It increases the observable signal-to-noise ratio from a few hundred to over 50,000,
  • resulting in the observation of many lower-abundance serum proteins.
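The statistical intuition for such a gain: averaging n independent laser-shot spectra leaves the peaks unchanged while shrinking uncorrelated noise by roughly the square root of n. The sketch below demonstrates only that generic scaling, not the Deep MALDI implementation, and all numbers are invented.

```python
# Generic sqrt(n) averaging demonstration (not the Deep MALDI implementation).

import numpy as np

rng = np.random.default_rng(0)
peak, noise_sd = 1.0, 0.5                 # arbitrary peak height, per-shot noise

for n_shots in (100, 10_000, 1_000_000):
    shots = peak + rng.normal(0.0, noise_sd, size=n_shots)
    sem = shots.std(ddof=1) / np.sqrt(n_shots)   # noise left after averaging
    print(f"{n_shots:>9,} shots: SNR ~ {shots.mean() / sem:,.0f}")
# SNR ~ 2*sqrt(n) here: ~20, ~200, ~2,000 -- each 100x more averaging buys 10x.
```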

Breast cancer is now considered to be a collection of many complexes of symptoms and signatures. The dominant subtypes are labeled Luminal A, Luminal B, Her2, and Basal, each suggesting a different prognosis, and

  • these labels are considered too simplistic for understanding and managing a woman’s cancer.

Studies published in the past year have looked at

  1. somatic mutations,
  2. gene copy number aberrations,
  3. gene expression abnormalities,
  4. protein and miRNA expression, and
  5. DNA methylation,

coming up with a list of significantly mutated genes—hot spots—in different categories of breast cancers. Targeting these will inevitably be the focus of much coming research.

“We’ve been taking these large trials and profiling these on a variety of array or sequence platforms. We think we’ll get

  1. prognostic drivers
  2. predictive markers for taxanes and
  3. monoclonal antibodies and
  4. tamoxifen and aromatase inhibitors,”
    explained Brian Leyland-Jones, Ph.D., director of Edith Sanford Breast Cancer Research. “We will end up with 20–40 different diseases, maybe more.”

Edith Sanford Breast Cancer Research is undertaking a pilot study in collaboration with The Scripps Research Institute, using a variety of tests on 25 patients to see how the information they provide complements each other, the overall flow, and the time required to get and compile results.

Laser-captured tumor samples will be subjected to low passage whole-genome, exome, and RNA sequencing (with targeted resequencing done in parallel), and reverse-phase protein and phosphorylation arrays, with circulating nucleic acids and circulating tumor cells being queried as well. “After that we hope to do a 100- or 150-patient trial when we have some idea of the best techniques,” he said.

Dr. Leyland-Jones predicted that ultimately most tumors will be found

  • to have multiple drivers,
  • with most patients receiving a combination of two, three, or perhaps four different targeted therapies.

Reduce to Practice

According to Randox, the Evidence Investigator is a sophisticated semi-automated biochip system designed for research, clinical, forensic, and veterinary applications.

Once biomarkers that may have an impact on therapy are discovered, it is not always routine to get them into clinical practice. Leaving regulatory and financial, intellectual property and cultural issues aside, developing a diagnostic based on a biomarker often requires expertise or patience that its discoverer may not possess.

Andrew Gribben is a clinical assay and development scientist at Randox Laboratories, based in Northern Ireland, U.K. The company utilizes academic and industrial collaborators together with in-house discovery platforms to identify biomarkers that are

  • augmented or diminished in a particular pathology
  • relative to appropriate control populations.

Biomarkers can be developed to be run individually or

  • combined into panels of immunoassays on its multiplex biochip array technology.

Specificity can also be gained, or lost, through the affinity of the reagents in an assay. The diagnostic potential of heart-type fatty acid binding protein (H-FABP), abundantly expressed in human myocardial cells, was recognized by Jan Glatz of Maastricht University, The Netherlands, back in 1988. Levels rise quickly within 30 minutes after a myocardial infarction, peak at 6–8 hours, and return to normal within 24–30 hours. Yet at the time it was not known that H-FABP was a member of a multiprotein family with which the polyclonal antibodies being used in development of an assay were cross-reacting, Gribben related.

Randox developed monoclonal antibodies specific to H-FABP, funded trials investigating its use alone and multiplexed with cardiac biomarker assays, and, more than 30 years after the biomarker was first identified, released in 2011 a validated assay for H-FABP as a biomarker for early detection of acute myocardial infarction.

Ultrasensitive Immunoassays for Biomarker Development

Research has shown that detection and monitoring of biomarker concentrations can provide

  • insights into disease risk and progression.

Cytokines have become attractive biomarkers and candidates

  • for targeted therapies for a number of autoimmune diseases, including rheumatoid arthritis (RA), Crohn’s disease, and psoriasis, among others.

However, due to the low-abundance of circulating cytokines, such as IL-17A, obtaining robust measurements in clinical samples has been difficult.

Singulex reports that its digital single-molecule counting technology provides

  • increased precision and detection sensitivity over traditional ELISA techniques,
  • helping to shed light on biomarker verification and validation programs.

The company’s Erenna® immunoassay system, which includes optimized immunoassays, offers lower limits of quantification (LLoQ) at femtogram-per-milliliter resolution, even in healthy populations, an improvement of 1–3 fold over standard ELISAs or any conventional technology, and a dynamic range of up to 4 logs, according to a Singulex official, who adds that

  • this sensitivity improvement helps minimize undetectable samples that
  • could otherwise delay or derail clinical studies.

The official also explains that the Singulex solution includes an array of products and services that are being applied to a number of programs and have enabled the development of clinically relevant biomarkers, allowing translation from discovery to the clinic.
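The general principle of digital single-molecule counting can be illustrated with Poisson statistics: count the fraction of observation windows containing at least one detection event, then invert P(0) = e^(-λ). The sketch below shows only that generic principle with invented numbers; it is not the Erenna instrument’s actual signal processing.

```python
# Generic digital-counting illustration (not the Erenna's signal processing).

import math

def molecules_per_window(positive_windows: int, total_windows: int) -> float:
    """Mean molecules per window, lambda, from P(>=1 event) = 1 - exp(-lambda)."""
    p = positive_windows / total_windows
    return -math.log(1.0 - p)

# 300 positive windows out of 10,000 -> ~0.0305 molecules per window; dividing
# by the window volume converts the count to a concentration.
print(f"{molecules_per_window(300, 10_000):.4f}")
```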

In a poster entitled “Advanced Single Molecule Detection: Accelerating Biomarker Development Utilizing Cytokines through Ultrasensitive Immunoassays,” a case study was presented of work performed by Jeff Greenberg of NYU to show how the use of the Erenna system can provide insights toward

  • improving the clinical utility of biomarkers and
  • accelerating the development of novel therapies for treating inflammatory diseases.

A panel of inflammatory biomarkers was examined in DMARD (disease modifying antirheumatic drugs)-naïve RA (rheumatoid arthritis) vs. knee OA (osteoarthritis) patient cohorts. Markers that exhibited significant differences in plasma concentrations between the two cohorts included

  • CRP, IL-6R alpha, IL-6, IL-1 RA, VEGF, TNF-RII, IL-17A, IL-17F, and IL-17A/F.

Among the three tested isoforms of IL-17,

  • the magnitude of elevation for IL-17F in RA patients was the highest.

“Singulex provides high-resolution monitoring of baseline IL-17A concentrations that are present at low levels,” concluded the researchers. “The technology also enabled quantification of other IL-17 isoforms in RA patients, which have not been well characterized before.”

The Singulex Erenna System has also been applied to cardiovascular disease research, for which its

  • cardiac troponin I (cTnI) digital assay can be used to measure circulating
  • levels of cTnI undetectable by other commercial assays.

Recently presented data from Brigham and Women’s Hospital and the TIMI-22 study showed that

  • using the Singulex test to serially monitor cTnI helps
  • stratify risk in post-acute coronary syndrome patients and
  • can identify patients with elevated cTnI
  • who have the most to gain from intensive vs. moderate-dose statin therapy,

according to the scientists involved in the research.

The study poster, “Prognostic Performance of Serial High Sensitivity Cardiac Troponin Determination in Stable Ischemic Heart Disease: Analysis From PROVE IT-TIMI 22,” was presented at the 2013 American College of Cardiology (ACC) Annual Scientific Session & Expo by R. O’Malley et al.

Biomarkers Changing Clinical Medicine

Better Diagnosis, Prognosis, and Drug Targeting Are among Potential Benefits

John Morrow Jr., Ph.D.

The pace of biomarker development is accelerating as investigators report new studies on cancer, diabetes, Alzheimer disease, and other conditions in which the evaluation and isolation of workable markers is prominently featured.

Wei Zheng, Ph.D., leader of the R&D immunoassay group at EMD Chemicals, is overseeing a program to develop biomarker immunoassays to

  • monitor drug-induced toxicity, including kidney damage.

“One of the principal reasons for drugs failing during development is organ toxicity,” says Dr. Zheng.
“Proteins liberated into the serum and urine can serve as biomarkers of adverse response to drugs, as well as disease states.”

Through collaborative programs with Rules-Based Medicine (RBM), the EMD group has released panels for the profiling of human renal impairment and renal toxicity. These urinary biomarker based products fit the FDA and EMEA guidelines for assessment of drug-induced kidney damage in rats.

The group recently performed a screen for potential protein biomarkers in relation to

  • kidney toxicity/damage on a set of urine and plasma samples
  • from patients with documented renal damage.

Additionally, Dr. Zheng is directing efforts to move forward with the multiplexed analysis of

  • organ and cellular toxicity.

Diseases thought to involve compromised oxidative phosphorylation include

  • diabetes, Parkinson and Alzheimer diseases, cancer, and the aging process itself.

Good biomarkers allow Dr. Zheng to follow the mantra, “fail early, fail fast.” With robust, multiplexible biomarkers, EMD can detect bad drugs early and kill them before they move into costly large animal studies and clinical trials. “Recognizing the severe liability that toxicity presents, we can modify the structure of the candidate molecule and then rapidly reassess its performance.”

Scientists at Oncogene Science, a division of Siemens Healthcare Diagnostics, are also focused on biomarkers. “We are working on a number of antibody-based tests for various cancers, including a test for the Ca-9 CAIX protein, also referred to as carbonic anhydrase IX,” Walter Carney, Ph.D., head of the division, states.

CAIX is a transmembrane protein that is

  • overexpressed in a number of cancers, and, like Herceptin and the Her-2 gene,
  • can serve as an effective and specific marker for both diagnostic and therapeutic purposes.
  • It is liberated into the circulation in proportion to the tumor burden.

Dr. Carney and his colleagues are evaluating patients after tumor removal for the presence of the Ca-9 CAIX protein. If

  • the levels of the protein in serum increase over time,
  • this suggests that not all the tumor cells were removed and the tumor has metastasized.

Dr. Carney and his team have developed both an immuno-histochemistry and an ELISA test that could be used as companion diagnostics in clinical trials of CAIX-targeted drugs.

The ELISA for the Ca-9 CAIX protein will be used in conjunction with Wilex’ Rencarex®, which is currently in a

  • Phase III trial as an adjuvant therapy for non-metastatic clear cell renal cancer.

Additionally, Oncogene Science has in its portfolio an FDA-approved test for the Her-2 marker. Originally approved for Her-2/Neu-positive breast cancer, its indications have been expanded over time, and it was approved

  • for the treatment of gastric cancer last year.

It is normally present on breast epithelia but

  • overexpressed in some breast cancer tumors.

“Our products are designed to be used in conjunction with targeted therapies,” says Dr. Carney. “We are working with companies that are developing technology around proteins that are

  • overexpressed in cancerous tissues and can be both diagnostic and therapeutic targets.”

The long-term goal of these studies is to develop individualized therapies, tailored for the patient. Since the therapies are expensive, accurate diagnostics are critical to avoid wasting resources on patients who clearly will not respond (or could be harmed) by the particular drug.

“At this time the rate of response to antibody-based therapies may be very poor, as

  • they are often employed late in the course of the disease, and patients are in such a debilitated state
  • that they lack the capacity to react positively to the treatment,” Dr. Carney explains.

Nanoscale Real-Time Proteomics

Stanford University School of Medicine researchers, working with Cell BioSciences, have developed a

  • nanofluidic proteomic immunoassay that measures protein charge,
  • similar to immunoblots, mass spectrometry, or flow cytometry.
  • unlike these platforms, this approach can measure the amount of individual isoforms,
  • specifically, phosphorylated molecules.

“We have developed a nanoscale device for protein measurement, which I believe could be useful for clinical analysis,” says Dean W. Felsher, M.D., Ph.D., associate professor at Stanford University School of Medicine.

Critical oncogenic transformations involving

  • the activation of the signal-related kinases ERK-1 and ERK-2 can now be followed with ease.

“The fact that we measure nanoquantities with accuracy means that

  • we can interrogate proteomic profiles in clinical patients,

by drawing tiny needle aspirates from tumors over the course of time,” he explains.

“This allows us to observe the evolution of tumor cells and

  • their response to therapy
  • from a baseline of the normal tissue as a standard of comparison.”

According to Dr. Felsher, 20 cells is a large enough sample to obtain a detailed description. The technology is easy to automate, which allows

  • the inclusion of hundreds of assays.

Contrasting this technology platform with proteomic analysis using microarrays, Dr. Felsher notes that the latter is not yet workable for revealing reliable markers.

Dr. Felsher and his group published a description of this technology in Nature Medicine. “We demonstrated that we could take a set of human lymphomas and distinguish them from both normal tissue and other tumor types. We can

  • quantify changes in total protein, protein activation, and relative abundance of specific phospho-isoforms
  • from leukemia and lymphoma patients receiving targeted therapy.

Even with very small numbers of cells, we are able to show that the results are consistent, and

  • our sample is a random profile of the tumor.”

Splice Variant Peptides

“Aberrations in alternative splicing may generate

  • much of the variation we see in cancer cells,”

says Gilbert Omenn, Ph.D., director of the center for computational medicine and bioinformatics at the University of Michigan School of Medicine. Dr. Omenn and his colleague, Rajasree Menon, are

  • using this variability as a key to new biomarker identification.

It is becoming evident that splice variants play a significant role in the properties of cancer cells, including

  • initiation, progression, cell motility, invasiveness, and metastasis.

Alternative splicing occurs through multiple mechanisms

  • when the exons, or coding regions, of the DNA are transcribed into mRNA,
  • generating alternative initiation sites and exon junctions in the protein products.

Their translation into protein can result in numerous protein isoforms, and

  • these isoforms may reflect a diseased or cancerous state.

Regulatory elements within the DNA are responsible for selecting different alternatives; thus

  • the splice variants are tempting targets for exploitation as biomarkers.

Despite the many questions raised by these observations, splice variation in tumor material has not been widely studied. Cancer cells are known for their tremendous variability, which allows them to

  • grow rapidly, metastasize, and develop resistance to anticancer drugs.

Dr. Omenn and his collaborators used

  • mass spec data to interrogate a custom-built database of all potential mRNA sequences
  • to find alternative splice variants.
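A toy version of how such a database search flags variant-specific peptides is sketched below: digest canonical and variant isoforms in silico, and peptides unique to the variant are those whose detection by mass spectrometry would identify the splice isoform. The sequences and the simplified trypsin rule are invented for illustration; this is not the actual Michigan pipeline.

```python
# Toy sketch (invented sequences; not the actual pipeline): peptides unique to
# a splice-variant isoform after simple in-silico tryptic digestion.

import re

def tryptic_peptides(protein: str, min_len: int = 6) -> set:
    """Cleave after K/R unless followed by P (simplified trypsin rule)."""
    pieces = re.split(r"(?<=[KR])(?!P)", protein)
    return {p for p in pieces if len(p) >= min_len}

canonical = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"
variant   = "MKTAYIAKQRQISFVKGGWLNDPEVKSHFSRQLEERLGLIEVQAPILSR"  # altered exon join

print(tryptic_peptides(variant) - tryptic_peptides(canonical))
# -> {'GGWLNDPEVK'}: a junction-spanning peptide found only in the variant
```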

When they compared normal and malignant mammary gland tissue from a mouse model of Her2/Neu human breast cancers, they identified a vast number (608) of splice variant proteins, of which

  • peptides from 216 were found only in the tumor sample.

“These novel and known alternative splice isoforms

  • are detectable both in tumor specimens and in plasma and
  • represent potential biomarker candidates,” Dr. Omenn adds.

Dr. Omenn’s observations and those of his colleague Lewis Cantley, Ph.D., have also

  • shed light on the origins of the classic Warburg effect,
  • the shift to anaerobic glycolysis in tumor cells.

The novel splice variant M2, of muscle pyruvate kinase,

  • is observed in embryonic and tumor tissue.

It is associated with this shift, the result of

  • the expression of a peptide splice variant sequence.

It is remarkable how many different areas of the life sciences are tied into the phenomenon of splice variation. The changes in the genetic material can be much greater than point mutations, which have been traditionally considered to be the prime source of genetic variability.

“We now have powerful methods available to uncover a whole new category of variation,” Dr. Omenn says. “High-throughput RNA sequencing and proteomics will be complementary in discovery studies of splice variants.”

Splice variation may play an important role in rapid evolutionary changes, of the sort discussed by Susumu Ohno and Stephen J. Gould decades ago. They, and other evolutionary biologists, argued that

  • gene duplication, combined with rapid variability, could fuel major evolutionary jumps.

At the time, the molecular mechanisms of variation were poorly understood, but today

  • the tools are available to rigorously evaluate the role of
  • splice variation and other contributors to evolutionary change.

“Biomarkers derived from studies of splice variants, could, in the future, be exploited

  • both for diagnosis and prognosis and
  • for drug targeting of biological networks,
  • in situations such as the Her-2/Neu breast cancers,” Dr. Omenn says.

Aminopeptidase Activities

“By correlating the proteolytic patterns with disease groups and controls, we have shown that

  • exopeptidase activities contribute to the generation of not only cancer-specific
  • but also cancer type-specific serum peptides,”

according to Paul Tempst, Ph.D., professor and director of the Protein Center at the Memorial Sloan-Kettering Cancer Center. “So there is a direct link between peptide marker profiles of disease and differential protease activity.” For this reason Dr. Tempst argues that “the patterns we describe may have value as surrogate markers for detection and classification of cancer.”

To investigate this avenue, Dr. Tempst and his colleagues have followed

  • the relationship between exopeptidase activities and metastatic disease.

“We monitored controlled, de novo peptide breakdown in large numbers of biological samples using mass spectrometry, with relative quantitation of the metabolites,” Dr. Tempst explains. This entailed the use of magnetic, reverse-phase beads for analyte capture and a MALDI-TOF MS read-out.

“In biomarker discovery programs, functional proteomics is usually not pursued,” says Dr. Tempst. “For putative biomarkers, one may observe no difference in quantitative levels of proteins, while at the same time, there may be substantial differences in enzymatic activity.”

In a preliminary prostate cancer study, the team found a significant difference

  • in activity levels of exopeptidases in serum from patients with metastatic prostate cancer
  • as compared to primary tumor-bearing individuals and normal healthy controls.

However, there were no differences in amounts of the target protein, and this potential biomarker would have been missed if quantitative levels of protein had been the only criterion of selection.

It is frequently stated that “practical fusion energy is 30 years in the future and always will be.” The same might be said of functional, practical biomarkers that can pass muster with the FDA. But splice variation represents a new handle on this vexing problem. It appears that we are seeing the emergence of a new approach that may finally yield definitive diagnostic tests, detectable in serum and urine samples.

Part 7. Epigenetics and Drug Metabolism

DNA Methylation Rules: Studying Epigenetics with New Tools

The tools to unravel the epigenetic control mechanisms that influence how cells control access of transcriptional proteins to DNA are just beginning to emerge.

Patricia Fitzpatrick Dimond, Ph.D.


New tools may help move the field of epigenetic analysis forward and potentially unveil novel biomarkers for cellular development, differentiation, and disease.

DNA sequencing has had the power of technology behind it as novel platforms to produce more sequencing faster and at lower cost have been introduced. But the tools to unravel the epigenetic control mechanisms that influence how cells control access of transcriptional proteins to DNA are just beginning to emerge.

Among these mechanisms, DNA methylation, or the enzymatically mediated addition of a methyl group to cytosine or adenine residues,

  • serves as an inherited epigenetic modification that
  • stably modifies gene expression in dividing cells.

The unique methylomes are largely maintained in differentiated cell types, making them critical to understanding the differentiation potential of the cell.

In the DNA methylation process, cytosine residues in the genome are enzymatically modified to 5-methylcytosine,

  • which participates in transcriptional repression of genes during development and disease progression.

5-methylcytosine can be further enzymatically modified to 5-hydroxymethylcytosine by the TET family of methylcytosine dioxygenases. DNA methylation affects gene transcription by physically

  • interfering with the binding of proteins involved in gene transcription.

Methylated DNA may be bound by methyl-CpG-binding domain proteins (MBDs) that can

  • then recruit additional proteins. Some of these include histone deacetylases and other chromatin remodeling proteins that modify histones, thereby
  • forming compact, inactive chromatin, or heterochromatin.

While DNA methylation doesn’t change the genetic code,

  • it influences chromosomal stability and gene expression.

Epigenetics and Cancer Biomarkers


And because of the increasing recognition that DNA methylation changes are involved in human cancers, scientists have suggested that these epigenetic markers may provide biological markers for cancer cells, and eventually point toward new diagnostic and therapeutic targets. Cancer cell genomes display genome-wide abnormalities in DNA methylation patterns,

  • some of which are oncogenic and contribute to genome instability.

In particular, de novo methylation of tumor suppressor gene promoters

  • occurs frequently in cancers, thereby silencing them and promoting transformation.

Cytosine hydroxymethylation (5-hydroxymethylcytosine, or 5hmC), the aforementioned DNA modification resulting from the enzymatic conversion of 5mC into 5-hydroxymethylcytosine by the TET family of oxygenases, has been identified

  • as another key epigenetic modification marking genes important for
  • pluripotency in embryonic stem cells (ES), as well as in cancer cells.

The base 5-hydroxymethylcytosine was recently identified as an oxidation product of 5-methylcytosine in mammalian DNA. In 2011, using sensitive and quantitative methods to assess levels of 5-hydroxymethyl-2′-deoxycytidine (5hmdC) and 5-methyl-2′-deoxycytidine (5mdC) in genomic DNA, scientists at the Department of Cancer Biology, Beckman Research Institute of the City of Hope, Duarte, California investigated

  • whether levels of 5hmC can distinguish normal tissue from tumor tissue.

They showed that in squamous cell lung cancers, levels of 5hmdC showed

  • up to five-fold reduction compared with normal lung tissue.

In brain tumors, 5hmdC showed an even more drastic reduction,

  • with levels up to more than 30-fold lower than in normal brain,
  • but 5hmdC levels were independent of mutations in isocitrate dehydrogenase-1 (IDH1).

Immunohistochemical analysis indicated that 5hmC is “remarkably depleted” in many types of human cancer.

  • There was an inverse relationship between 5hmC levels and cell proliferation, with a lack of 5hmC in proliferating cells.

Their data suggest that 5hmdC is strongly depleted in human malignant tumors,

  • a finding that adds another layer of complexity to the aberrant epigenome found in cancer tissue.

In addition, a lack of 5hmC may become a useful biomarker for cancer diagnosis.

Enzymatic Mapping

But according to New England Biolabs’ Sriharsa Pradhan, Ph.D., methods for distinguishing 5mC from 5hmC and analyzing and quantitating the cell’s entire “methylome” and “hydroxymethylome” remain less than optimal.

The protocol for bisulphite conversion to detect methylation remains the “gold standard” for DNA methylation analysis. This method is generally followed by PCR analysis for single-nucleotide resolution to determine methylation across the DNA molecule. According to Dr. Pradhan, “bisulphite conversion does not distinguish 5mC and 5hmC.”

Recently we found an enzyme, a unique DNA modification-dependent restriction endonuclease, AbaSI, which can

  • decode the hydryoxmethylome of the mammalian genome.

You easily can find out where the hydroxymethyl regions are.”

AbaSI recognizes glucosylated 5-hydroxymethylcytosine (5gmC) with high specificity when compared with 5mC and 5hmC, and

  • cleaves at a narrow range of distances away from the recognized modified cytosine.

By mapping the cleaved ends, the investigators reported, the exact 5hmC location can be determined.

Dr. Pradhan and his colleagues at NEB; the Department of Biochemistry, Emory University School of Medicine, Atlanta; and the New England Biolabs Shanghai R&D Center described use of this technique in a paper published in Cell Reports this month, in which they described high-resolution enzymatic mapping of genomic hydroxymethylcytosine in mouse ES cells.

In the current report, the authors used the enzyme technology to map the genome-wide hydroxymethylome at high resolution, describing simple library construction even with a low amount of input DNA (50 ng) and the ability to readily detect 5hmC sites with low occupancy.
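
The mapping logic can be illustrated in miniature: cleavage ends from sequenced reads are stepped back by an assumed cleavage offset to nominate candidate 5hmC positions, and candidates supported by several reads are retained. The offset range, positions, and support threshold below are illustrative assumptions, not the published Aba-seq parameters.

    from collections import Counter

    # Assumed distances between the recognized modified cytosine and the
    # cut site; the real protocol defines its own offsets.
    CLEAVAGE_OFFSETS = range(11, 14)

    def infer_5hmc_sites(cleavage_positions, min_support=2):
        """Nominate 5hmC positions supported by multiple cleavage ends."""
        votes = Counter()
        for pos in cleavage_positions:
            for offset in CLEAVAGE_OFFSETS:
                votes[pos - offset] += 1
        return {site: n for site, n in votes.items() if n >= min_support}

    # Hypothetical cleavage ends mapped to one strand of a reference.
    ends = [1024, 1025, 1026, 5890, 5891, 9003]
    print(infer_5hmc_sites(ends))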

As a result of their studies, they propose that

factors affecting the local accessibility of 5mC to TET enzymes play important roles in 5hmC deposition,

  • including chromatin compaction, nucleosome positioning, and TF binding;
  • the regularly oscillating 5hmC profile around CTCF-binding sites suggests that 5hmC “writers” may be sensitive to the nucleosomal environment;
  • some transiently stable 5hmCs may indicate a poised epigenetic state or demethylation intermediate, whereas others may suggest a locally accessible chromosomal environment for the TET enzymatic apparatus.

“We were able to do complete mapping in mouse embryonic cells and are pleased about what this enzyme can do and how it works,” Dr. Pradhan said.

The availability of novel tools that make analysis of the methylome and hydroxymethylome more accessible will move the field of epigenetic analysis forward and potentially yield novel biomarkers for cellular development, differentiation, and disease.

Patricia Fitzpatrick Dimond, Ph.D. (pdimond@genengnews.com), is technical editor at Genetic Engineering & Biotechnology News.

Epigenetic Regulation of ADME-Related Genes: Focus on Drug Metabolism and Transport

Published: Sep 23, 2013

Epigenetic regulation of gene expression refers to heritable, functionally relevant genomic modifications that do not involve changes in the DNA sequence.

Examples of such modifications include

  • DNA methylation, histone modifications, noncoding RNAs, and chromatin architecture.

Epigenetic modifications are crucial for

packaging and interpreting the genome, and they have fundamental functions in regulating gene expression and activity under the influence of physiologic and environmental factors.

In this issue of Drug Metabolism and Disposition, a series of articles is presented to demonstrate the role of epigenetic factors in regulating

  • the expression of genes involved in drug absorption, distribution, metabolism, and excretion in organ development, tissue-specific gene expression, sexual dimorphism, and in the adaptive response to xenobiotic exposure, both therapeutic and toxic.

The articles also demonstrate that, in addition to genetic polymorphisms, epigenetics may contribute to wide inter-individual variations in drug metabolism and transport. Identification of functionally relevant epigenetic biomarkers in human specimens has the potential to improve the prediction of drug responses based on patients’ epigenetic profiles.

http://www.technologynetworks.com/Metabolomics/news.aspx?ID=157804

This study is published online in Drug Metabolism and Disposition

Part 8.  Pictorial Maps

 Prediction of intracellular metabolic states from extracellular metabolomic data

MK Aurich, G Paglia, Ottar Rolfsson, S Hrafnsdottir, M Magnusdottir, MM Stefaniak, BØ Palsson, RMT Fleming & Ines Thiele

Metabolomics, Aug 14, 2014

http://dx.doi.org/10.1007/s11306-014-0721-3

http://link.springer.com/article/10.1007/s11306-014-0721-3/fulltext.html#Sec1


Metabolic models can provide a mechanistic framework

  • to analyze information-rich omics data sets, and are
  • increasingly being used to investigate metabolic alterations in human diseases.

One manifestation of altered metabolic pathway utilization is the set of metabolites consumed and released by cells. However, methods for the

  • inference of intracellular metabolic states from extracellular measurements in the context of metabolic models remain underdeveloped compared to methods for other omics data.

Herein, we describe a workflow for such an integrative analysis

  • with an emphasis on extracellular metabolomic data.

We demonstrate,

  • using the lymphoblastic leukemia cell lines Molt-4 and CCRF-CEM,

how our methods can reveal differences in cell metabolism. Our models explain metabolite uptake and secretion by predicting

  • a more glycolytic phenotype for the CCRF-CEM model and
  • a more oxidative phenotype for the Molt-4 model,
  • which was supported by our experimental data.

Gene expression analysis revealed altered expression of gene products at

  • key regulatory steps in those central metabolic pathways, and

literature query emphasized the role of these genes in cancer metabolism.

Moreover, in silico gene knock-outs identified unique

  •  control points for each cell line model, e.g., phosphoglycerate dehydrogenase for the Molt-4 model.

Thus, our workflow is well suited to the characterization of cellular metabolic traits based on

  • extracellular metabolomic data, and it allows the integration of multiple omics data sets
  • into a cohesive picture based on a defined model context.

Keywords: Constraint-based modeling · Metabolomics · Multi-omics · Metabolic network · Transcriptomics

1 Introduction

Modern high-throughput techniques have increased the pace of biological data generation. Also referred to as the ‘‘omics avalanche’’, this wealth of data provides great opportunities for metabolic discovery. Omics data sets

  • contain a snapshot of almost the entire repertoire of mRNA, protein, or metabolites at a given time point or

under a particular set of experimental conditions. Because of the high complexity of the data sets,

  • computational modeling is essential for their integrative analysis.

Currently, such data analysis is a bottleneck in the research process and methods are needed to facilitate the use of these data sets, e.g., through meta-analysis of data available in public databases [e.g., the human protein atlas (Uhlen et al. 2010) or the gene expression omnibus (Barrett et al.  2011)], and to increase the accessibility of valuable information for the biomedical research community.

Constraint-based modeling and analysis (COBRA) is

  • a computational approach that has been successfully used to
  • investigate and engineer microbial metabolism through the prediction of steady states (Durot et al. 2009).

The basis of COBRA is network reconstruction: networks are assembled in a bottom-up fashion based on

  • genomic data and extensive
  • organism-specific information from the literature.

Metabolic reconstructions capture information on the

  • known biochemical transformations taking place in a target organism
  • to generate a biochemical, genetic and genomic knowledge base (Reed et al. 2006).

Once assembled, a

  • metabolic reconstruction can be converted into a mathematical model (Thiele and Palsson 2010), and
  • model properties can be interrogated using a great variety of methods (Schellenberger et al. 2011).

The ability of COBRA models

  • to represent genotype–phenotype and environment–phenotype relationships arises
  • through the imposition of constraints, which
  • limit the system to a subset of possible network states (Lewis et al. 2012).

Currently, COBRA models exist for more than 100 organisms, including humans (Duarte et al. 2007; Thiele et al. 2013).
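
As a concrete illustration of how such constraints shape predicted network states, the sketch below uses the open-source cobrapy package with its bundled E. coli "textbook" core model as a stand-in for the human reconstructions discussed here; the uptake bound is an arbitrary example value.

    # Minimal constraint-based (FBA) sketch with cobrapy, using the
    # bundled E. coli core model rather than a human reconstruction.
    from cobra.io import load_model

    model = load_model("textbook")

    # Impose an environmental constraint: limit glucose uptake
    # (exchange fluxes are negative for uptake by convention).
    model.reactions.get_by_id("EX_glc__D_e").lower_bound = -5.0

    solution = model.optimize()  # maximize the biomass objective
    print(f"predicted growth rate: {solution.objective_value:.3f} 1/h")
    print(solution.fluxes.loc[["PFK", "CS", "ATPS4r"]])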

Since the first human metabolic reconstruction was described [Recon 1 (Duarte et al. 2007)],

  • biomedical applications of COBRA have increased (Bordbar and Palsson 2012).

One way to contextualize networks is to

  • define their system boundaries according to the metabolic states of the system, e.g., disease or dietary regimes.

The consequences of the applied constraints can

  • then be assessed for the entire network (Sahoo and Thiele 2013).

Additionally, omics data sets have frequently been used

  • to generate cell-type or condition-specific metabolic models.

Models exist for specific cell types, such as

  1. enterocytes (Sahoo and Thiele2013),
  2. macrophages (Bordbar et al. 2010),
  3. adipocytes (Mardinoglu et al. 2013),
  4. even multi-cell assemblies that represent the interactions of brain cells (Lewis et al. 2010).

All of these cell-type-specific models, except the enterocyte reconstruction,

  • were generated based on omics data sets.

Cell-type-specific models have been used to study

  • diverse human disease conditions.

For example, an adipocyte model was generated using

  • transcriptomic, proteomic, and metabolomic data.

This model was subsequently used to investigate metabolic alterations in adipocytes

  • that would allow for the stratification of obese patients (Mardinoglu et al. 2013).

The biomedical applications of COBRA have included

  1. cancer metabolism (Jerby and Ruppin, 2012).
  2. predicting drug targets (Folger et al. 2011; Jerby et al. 2012).

A cancer model was generated using

  • multiple gene expression data sets and subsequently used
  • to predict synthetic lethal gene pairs as potential drug targets
  • selective for the cancer model, but non-toxic to the global model (Recon 1),

a consequence of the reduced redundancy in the cancer specific model (Folger et al. 2011).
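
A screen in the spirit of this approach can be sketched with cobrapy's standard deletion functions: flag gene pairs whose combined deletion abolishes growth even though each single deletion is tolerated. The sketch runs on the small bundled E. coli core model for speed, and the 1 % growth cutoff is an assumption.

    # Sketch of a synthetic-lethality screen: pairs that abolish growth
    # although each single knockout is viable. Illustrative only.
    from cobra.io import load_model
    from cobra.flux_analysis import single_gene_deletion, double_gene_deletion

    model = load_model("textbook")
    threshold = 0.01 * model.optimize().objective_value  # "no growth" cutoff

    single = single_gene_deletion(model)
    viable = {next(iter(ids)) for ids, growth
              in zip(single["ids"], single["growth"]) if growth > threshold}

    double = double_gene_deletion(model)
    for ids, growth in zip(double["ids"], double["growth"]):
        pair = tuple(ids)
        if len(pair) == 2 and set(pair) <= viable and not growth > threshold:
            print("synthetic lethal pair:", pair)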

In a follow-up study, lethal synergy between FH and enzymes of the heme metabolic pathway

  • was experimentally validated, resolving the mechanism by which FH-deficient cells,
    e.g., renal-cell cancer cells, survive a non-functional TCA cycle (Frezza et al. 2011).

Contextualized models, which contain only the subset of reactions active in a particular tissue (or cell-) type,

  • can be generated in different ways (Becker and Palsson, 2008; Jerby et al. 2010).

However, the existing algorithms mainly consider

  • gene expression and proteomic data
  • to define the reaction sets that comprise the contextualized metabolic models.

These subsets of reactions are usually defined

  • based on the expression or absence of expression of the genes or proteins (present and absent calls),
  • or inferred from expression values or differential gene expression.
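
A minimal sketch of this present/absent logic: each reaction's gene-protein-reaction (GPR) rule is evaluated against the set of genes called "present," and reactions whose rules evaluate to false are dropped from the contextualized model. The rules and gene identifiers below are invented for illustration.

    import re

    # Toy contextualization: keep a reaction only if its GPR boolean rule
    # is satisfied by the genes called "present" in the expression data.
    def gpr_is_active(rule, present):
        if not rule:  # reactions without a gene association are kept
            return True
        # Replace each gene id with True/False, then evaluate the and/or logic.
        expr = re.sub(r"\b(?!and\b|or\b)(\w+)\b",
                      lambda m: str(m.group(1) in present), rule)
        return eval(expr)  # expr now contains only True/False/and/or/()

    reactions = {
        "HEX1": "g1 or g2",   # isozymes: either gene suffices
        "PFK": "g3 and g4",   # enzyme complex: both subunits required
        "CS": "",             # no gene association
    }
    present_calls = {"g1", "g3"}
    kept = [r for r, rule in reactions.items() if gpr_is_active(rule, present_calls)]
    print("reactions kept:", kept)  # ['HEX1', 'CS']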

Comprehensive reviews of the methods are available (Blazier and Papin, 2012; Hyduke et al. 2013). Only the compilation of a large set of omics data sets

  • can result in a tissue (or cell-type) specific metabolic model, whereas

the representation of one particular experimental condition is achieved

  • through the integration of an omics data set generated from one experiment only (a condition-specific cell line model).

Recently, metabolomic data sets have become more comprehensive and

  • using these data sets allows direct determination of the metabolic network components (the metabolites).

Additionally, metabolomics has proven to be stable, relatively inexpensive, and highly reproducible (Antonucci et al. 2012). These factors make metabolomic data sets particularly valuable for

  • interrogation of metabolic phenotypes.

Thus, the integration of these data sets is now an active field of research (Li et al. 2013; Mo et al. 2009; Paglia et al. 2012b; Schmidt et al. 2013).

Generally, metabolomic data can be incorporated into metabolic networks as

  • qualitative, quantitative, and thermodynamic constraints (Fleming et al. 2009; Mo et al. 2009).

Mo et al. used metabolites detected in the

  • spent medium of yeast cells to determine intracellular flux states through a sampling analysis (Mo et al. 2009),
  • which allowed unbiased interrogation of the possible network states (Schellenberger and Palsson 2009) and
  • prediction of internal pathway use.

Such analyses have also been used

  1. to reveal the effects of enzymopathies on red blood cells (Price et al. 2004),
  2. to study the effects of diet on diabetes (Thiele et al. 2005), and
  3. to define macrophage metabolic states (Bordbar et al. 2010).

This type of analysis is available as a function in the COBRA toolbox (Schellenberger et al. 2011).

In this study, we established a workflow

  • for the generation and analysis of condition-specific metabolic cell line models
  • that can facilitate the interpretation of metabolomic data.

Our modeling yields meaningful predictions regarding

  • metabolic differences between two lymphoblastic leukemia cell lines (Fig. 1A).

Fig. 1


A Combined experimental and computational pipeline to study human metabolism.

  1. Experimental work and omics data analysis steps precede computational modeling.
  2. Model predictions are validated based on targeted experimental data.
  3. Metabolomic and transcriptomic data are used for model refinement and submodel extraction.
  4. Functional analysis methods are used to characterize the metabolism of the cell-line models and compare it to additional experimental data.
  5. The validated models are subsequently used for the prediction of drug targets.

B Uptake and secretion pattern of model metabolites. All metabolite uptakes and secretions that were mapped during model generation are shown.

  • Metabolite uptakes are depicted on the left, and
  • secreted metabolites are shown on the right.
  1. A number of metabolite exchanges mapped to the model were unique to one cell line.
  2. Differences between cell lines were used to set quantitative constraints for the sampling analysis.

C Statistics about the cell line-specific network generation.

D Quantitative constraints.

For the sampling analysis, an additional set of constraints was imposed on the cell line specific models,

  • emphasizing the differences in metabolite uptake and secretion between cell lines.

Higher uptake of a metabolite was allowed

  • in the model of the cell line that consumed more of the metabolite in vitro, whereas
  • the supply was restricted for the model with lower in vitro uptake.

This was done by establishing the same ratio between the models’ bounds as detected in vitro.

X denotes the factor (slope ratio) that distinguishes the bounds,

  • which was individual for each metabolite.

(a) The uptake of a metabolite could be x times higher in CCRF-CEM cells,

(b) the metabolite uptake could be x times higher in Molt-4,

(c) metabolite secretion could be x times higher in CCRF-CEM, or

(d) metabolite secretion could be x times higher in Molt-4 cells. LOD, limit of detection.

The consequence of the adjustment was, in the case of uptake, that one model was constrained to a lower metabolite uptake (a, b), with the difference depending on the ratio detected in vitro. In the case of secretion, one model

  • had to secrete more of the metabolite (c, d), and again
  • the difference depended on the experimental difference detected between the cell lines.
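
In code, this ratio adjustment amounts to scaling the exchange-reaction bounds of one model relative to the other. The sketch below assumes cobrapy-style model objects; the exchange identifier, base bound, and slope ratio x are hypothetical values.

    # Sketch of imposing the in vitro slope ratio x on two
    # condition-specific models (assumed to be cobrapy Model objects).
    # Exchange lower bounds are negative for uptake by convention.
    def constrain_uptake_ratio(model_hi, model_lo, exchange_id, base_uptake, x):
        """Allow x-fold higher uptake in model_hi than in model_lo."""
        model_lo.reactions.get_by_id(exchange_id).lower_bound = -base_uptake
        model_hi.reactions.get_by_id(exchange_id).lower_bound = -base_uptake * x

    # Hypothetical measurement: one cell line consumed glucose 1.34x faster.
    # constrain_uptake_ratio(model_ccrf, model_molt4, "EX_glc__D_e",
    #                        base_uptake=5.0, x=1.34)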

2 Results

We set up a pipeline that could be used to infer intracellular metabolic states

  • from semi-quantitative data regarding metabolites exchanged between cells and their environment.

Our pipeline combined the following four steps:

  1. data acquisition,
  2. data analysis,
  3. metabolic modeling and
  4. experimental validation of the model predictions (Fig. 1A).

We demonstrated the pipeline and its potential to predict metabolic alterations in diseases such as cancer using

  • two lymphoblastic leukemia cell lines.

The resulting Molt-4 and CCRF-CEM condition-specific cell line models could explain

  • metabolite uptake and secretion
  • by predicting the distinct utilization of central metabolic pathways by the two cell lines.
  • The CCRF-CEM model resembled a more glycolytic, commonly referred to as ‘Warburg’, phenotype, whereas
  • our model predicted a more respiratory phenotype for the Molt-4 model.

We found these predictions to be in agreement with measured gene expression differences

  • at key regulatory steps in the central metabolic pathways, and they were also
  • consistent with additional experimental data regarding the energy and redox states of the cells.

After a brief discussion of the data generation and analysis steps, the results derived from model generation and analysis will be described in detail.

2.1 Pipeline for generation of condition-specific metabolic cell line models

integration of exometabolomic (EM) data

2.1.1 Generation of experimental data

We monitored the growth and viability of lymphoblastic leukemia cell lines in serum-free medium (File S2, Fig. S1). Multiple omics data sets were derived from these cells. Extracellular metabolomic (exo-metabolomic) data,


  • comprising measurements of the metabolites in the spent medium of the cell cultures (Paglia et al. 2012a),
  • were collected along with transcriptomic data, and these data sets were used to construct the models.

2.1.4 Condition-specific models for CCRF-CEM and Molt-4 cells

To determine whether we had obtained two distinct models, we evaluated the reactions, metabolites, and genes of the two models. Both the Molt-4 and CCRF-CEM models contained approximately half of the reactions and metabolites present in the global model (Fig. 1C). They were very similar to each other in terms of their reactions, metabolites, and genes (File S1, Table S5A–C).

(1) The Molt-4 model contained seven reactions that were not present in the CCRF-CEM model (Co-A biosynthesis pathway and exchange reactions).
(2) The CCRF-CEM model contained 31 unique reactions (arginine and proline metabolism, vitamin B6 metabolism, fatty acid activation, transport, and exchange reactions).
(3) There were 2 and 15 unique metabolites in the Molt-4 and CCRF-CEM models, respectively (File S1, Table S5B).
(4) Approximately three quarters of the global model genes remained in the condition-specific cell line models (Fig. 1C).
(5) The Molt-4 model contained 15 unique genes, and the CCRF-CEM model had 4 unique genes (File S1, Table S5C).
(6) Both models lacked NADH dehydrogenase (complex I of the electron transport chain—ETC), which was determined by the absence of expression of a mandatory subunit (NDUFB3, Entrez gene ID 4709).

Rather, the ETC was fueled by FADH2 originating from succinate dehydrogenase and from fatty acid oxidation, which, through flavoprotein electron transfer,

  • could contribute to the same ubiquinone pool as complex I and complex II (succinate dehydrogenase).

Despite their different in vitro growth rates (which differed by 11 %; see File S2, Fig. S1) and differences in exo-metabolomic data (Fig. 1B) and transcriptomic data, the internal networks were largely conserved in the two condition-specific cell line models.

2.1.5 Condition-specific cell line models predict distinct metabolic strategies

Despite the overall similarity of the metabolic models, differences in their cellular uptake and secretion patterns suggested distinct metabolic states in the two cell lines (Fig. 1B; see the “Materials and methods” section for more detail). To interrogate the metabolic differences, we sampled the solution space of each model using an Artificial Centering Hit-and-Run (ACHR) sampler (Thiele et al. 2005). For this analysis, additional constraints were applied, emphasizing the quantitative differences in commonly consumed and secreted metabolites. The maximum possible uptake and maximum possible secretion flux rates were reduced according to the measured relative differences between the cell lines (Fig. 1D; see the “Materials and methods” section).

We plotted the number of sample points containing a particular flux rate for each reaction. The resulting binned histograms can be understood as representing the probability that a particular reaction can have a certain flux value.

A comparison of the sample points obtained for the Molt-4 and CCRF-CEM models revealed

  • a considerable shift in the distributions, suggesting a higher utilization of glycolysis by the CCRF-CEM model
    (File S2, Fig. S2).

This result was further supported by differences in medians calculated from sampling points (File S1, Table S6).
The shift persisted throughout all reactions of the pathway and was induced by the higher glucose uptake (34 %) from the extracellular medium in CCRF-CEM cells.

The sampling median for glucose uptake was 34 % higher in the CCRF-CEM model than in the Molt-4 model (File S2, Fig. S2).
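
This sampling-and-histogram analysis can be reproduced in miniature with cobrapy's ACHR sampler: sample two variants of a model that differ only in allowed glucose uptake, then compare flux histograms and medians for a glycolytic reaction. The bundled E. coli core model and the chosen bounds stand in for the actual cell-line models.

    # Miniature version of the sampling comparison: two copies of the
    # E. coli core model differing only in allowed glucose uptake.
    import matplotlib.pyplot as plt
    from cobra.io import load_model
    from cobra.sampling import sample

    def sampled_fluxes(glucose_uptake, n=1000):
        model = load_model("textbook")
        model.reactions.get_by_id("EX_glc__D_e").lower_bound = -glucose_uptake
        return sample(model, n, method="achr")  # ACHR hit-and-run sampler

    high = sampled_fluxes(10.0)  # roughly one-third more glucose allowed
    low = sampled_fluxes(7.5)

    for label, fluxes in [("high uptake", high), ("low uptake", low)]:
        plt.hist(fluxes["PFK"], bins=50, alpha=0.5, label=label)
        print(f"{label}: median PFK flux = {fluxes['PFK'].median():.2f}")

    plt.xlabel("PFK flux (mmol/gDW/h)")
    plt.ylabel("sample count")
    plt.legend()
    plt.show()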

The usage of the TCA cycle was also distinct in the two condition-specific cell-line models (Fig. 2). Interestingly,
the models used succinate dehydrogenase differently (Figs. 2, 3).


The Molt-4 model utilized an associated reaction to generate FADH2, whereas

  • in the CCRF-CEM model, the histogram was shifted in the opposite direction,
  • toward the generation of succinate.

Additionally, there was a higher efflux of citrate toward amino acid and lipid metabolism in the CCRF-CEM model (Fig. 2). There was higher flux through anaplerotic and cataplerotic reactions in the CCRF-CEM model than in the Molt-4 model (Fig. 2); these reactions include

(1) the efflux of citrate through ATP-citrate lyase,
(2) uptake of glutamine,
(3) generation of glutamate from glutamine,
(4) transamination of pyruvate and glutamate to alanine and to 2-oxoglutarate,
(5) secretion of nitrogen, and
(6) secretion of alanine.


The Molt-4 model showed higher utilization of oxidative phosphorylation (Fig. 3), again supported by
elevated median flux through ATP synthase (36 %) and other enzymes, which contributed to higher oxidative metabolism. The sampling analysis therefore revealed different usage of central metabolic pathways by the condition-specific models.

Fig. 2

Differences in the use of the TCA cycle by the CCRF-CEM model (red) and the Molt-4 model (blue).

The table provides the median values of the sampling results. Negative values in histograms and in the table describe reversible reactions with flux in the reverse direction. There are multiple reversible reactions for the transformation of isocitrate and α-ketoglutarate, malate and fumarate, and succinyl-CoA and succinate. These reactions are unbounded, and therefore histograms are not shown. The details of participating cofactors have been removed.

Figure 3.

Molt-4 has higher median flux through ETC reactions II–IV.

Abbreviations: atp ATP, cit citrate, adp ADP, pi phosphate, oaa oxaloacetate, accoa acetyl-CoA, coa coenzyme A, icit isocitrate, αkg α-ketoglutarate, succ-coa succinyl-CoA, succ succinate, fum fumarate, mal malate, oxa oxaloacetate, pyr pyruvate, lac lactate, ala alanine, gln glutamine, ETC electron transport chain.

[Figure gallery accompanying the post: Ingenuity network analysis showing up- (red) and down-regulation (green) of miRNAs involved in PC and their target genes; metabolic pathways; Metabolic Systems Research Team; metabolic control analysis of respiration in human cancer tissue; Metabolome Informatics Research; modelling of central metabolism; N. gaditana metabolic pathway map; protein changes in biological mechanisms]


NIH Considers Guidelines for CAR-T therapy: Report from Recombinant DNA Advisory Committee

Reporter: Stephen J. Williams, Ph.D.

UPDATED 5/10/2022

In the mid-to-late 1970s, a public debate (and related hysteria) arose surrounding two emerging advances in recombinant DNA technology:

  1. the development of vectors useful for cloning pieces of DNA (the first vector named pBR322) and
  2. the discovery of bacterial strains useful in propagating such vectors

As discussed by D. S. Fredrickson of the Department of Health, Education, and Welfare in his historical review “A History of the Recombinant DNA Guidelines in the United States,” this international concern over the biological safety issues of this new molecular biology tool led the National Institutes of Health to coordinate a committee (the NIH Recombinant DNA Advisory Committee) to develop guidelines for the ethical use, safe development, and safe handling of such vectors and host bacteria. The first conversations started in 1974 and, by 1978, initial guidelines had been developed. In fact, as Dr. Fredrickson notes, public relief was voiced even by religious organizations (who had the greatest ethical concerns):

On December 16, 1978, a telegram purporting to be from the Vatican was hand delivered to the office of Joseph A. Califano, Jr., Secretary of Health, Education, and Welfare. “Habemus regimen recombinatum,” it proclaimed, in celebration of the end of a long struggle to revise the NIH Guidelines for Research Involving Recombinant DNA Molecules.

The Committee’s work resulted in guidelines (2013 version) that assured the worldwide community that

  • organisms used in such procedures would have limited pathogenicity in humans
  • vectors would be developed in a manner which would eliminate their ability to replicate in humans and have defined antibiotic sensitivity

So great was the success and acceptance of this committee and its guidelines that the NIH felt the Recombinant DNA Advisory Committee should meet regularly to discuss and develop ethical guidelines and clinical regulations concerning DNA-based therapeutics and technologies.

A PowerPoint Slideshow: Introduction to NIH OBA and the History of Recombinant DNA Oversight can be viewed at the following link:

http://www.powershow.com/view1/e1703-ZDc1Z/Introduction_to_NIH_OBA_and_the_History_of_Recombinant_DNA_Oversight_powerpoint_ppt_presentation

Please see the following link for a video discussion between Dr. Paul Berg, who pioneered DNA recombinant technology, and Dr. James Watson (Commemorating 50 Years of DNA Science):

http://media.hhmi.org/interviews/berg_watson.html

The Recombinant DNA Advisory Committee has met numerous times to discuss new DNA-based technologies and their biosafety and clinical implications, including:

A Symposium was held in the summer of 2010 to discuss ethical and safety concerns and to consider potential clinical guidelines for the use of an emerging immunotherapy technology, chimeric antigen receptor T-cells (CAR-T), which at that time had just begun to be used in clinical trials.

Considerations for the Clinical Application of Chimeric Antigen Receptor T Cells: Observations from a Recombinant DNA Advisory Committee Symposium Held June 15, 2010[1]

Contributors to the Symposium, discussing opinions regarding CAR-T protocol design, included some of the most prominent members of the field:

Drs. Hildegund C.J. Ertl, John Zaia, Steven A. Rosenberg, Carl H. June, Gianpietro Dotti, Jeffrey Kahn, Laurence J. N. Cooper, Jacqueline Corrigan-Curay, and Scott E. Strome.

The discussions from the Symposium, reported in Cancer Research[1], were presented in three parts:

  1. Summary of the Evolution of the CAR therapy
  2. Points for Future Consideration including adverse event reporting
  3. Considerations for Design and Implementation of Trials including mitigating toxicities and risks

1. Evolution of Chimeric Antigen Receptors

Early evidence had suggested that adoptive transfer of tumor-infiltrating lymphocytes (TILs), after depletion of circulating lymphocytes, could result in a clinical response in some tumor patients; subsequent developments showed that autologous T-cells (obtained from the same patient) could be engineered to recognize tumor-associated antigens (TAAs) and replace the TILs in the clinical setting.

However there were some problems noticed.

  • Problem: HLA restriction of T-cells. Solution: genetically engineer T-cells to redirect T-cell specificity to surface TAAs
  • Problem: 1st-generation vectors were designed to engineer T-cells to recognize surface epitopes, but the engineered cells had limited survival in patients. Solution: development of 2nd-generation vectors with co-stimulatory molecules such as CD28 or CD137 (4-1BB) to improve survival and proliferation in patients

A summary table of limitations of the two types of genetically modified T-cell therapies is given (in modified form) below.

                                                        Type of Gene-Modified T-Cell

Limitation                                              αβ TCR    CAR
Affected by loss or decrease of HLA on tumor cells?     yes       no
Affected by altered tumor cell antigen processing?      yes       no
Need to have a defined tumor target antigen?            no        yes
Vector recombination with endogenous TCR?               yes       no

A brief history of the construction of 2nd- and 3rd-generation CAR-T cells is given by cancer.gov:

http://www.cancer.gov/cancertopics/research-updates/2013/CAR-T-Cells


Differences between  second- and third-generation chimeric antigen receptor T cells. (Adapted by permission from the American Association for Cancer Research: Lee, DW et al. The Future Is Now: Chimeric Antigen Receptors as New Targeted Therapies for Childhood Cancer. Clin Cancer Res; 2012;18(10); 2780–90. doi:10.1158/1078-0432.CCR-11-1920)

Constructing a CAR T Cell (from cancer.gov)

The first efforts to engineer T cells to be used as a cancer treatment began in the early 1990s. Since then, researchers have learned how to produce T cells that express chimeric antigen receptors (CARs) that recognize specific targets on cancer cells.

The T cells are genetically modified to produce these receptors. To do this, researchers use viral vectors that are stripped of their ability to cause illness but that retain the capacity to integrate into cells’ DNA to deliver the genetic material needed to produce the T-cell receptors.

The second- and third-generation CARs typically consist of a piece of monoclonal antibody, called a single-chain variable fragment (scFv), that resides on the outside of the T-cell membrane and is linked to stimulatory molecules (Co-stim 1 and Co-stim 2) inside the T cell. The scFv portion guides the cell to its target antigen. Once the T cell binds to its target antigen, the stimulatory molecules provide the necessary signals for the T cell to become fully active. In this fully active state, the T cells can more effectively proliferate and attack cancer cells.

2. Adverse Event Reporting and Protocol Considerations

The symposium had been organized mainly in response to two reported deaths of patients enrolled in CAR-T trials, so that clinical investigators could discuss and formulate best practices for the proper conduct and analysis of such trials. One issue raised was the lack of pharmacovigilance procedures (adverse event reporting). Although no pharmacovigilance procedures (either intra- or inter-institutional) were devised from the meeting proceedings, it was stressed that each institution should address this issue as well as better clinical outcome reporting.

“Case Report of a Serious Adverse Event Following the Administration of T Cells Transduced With a Chimeric Antigen Receptor Recognizing ERBB2”[2] reported the death of a patient on such a trial.

In “A phase I clinical trial of adoptive transfer of folate receptor-alpha redirected autologous T cells for recurrent ovarian cancer”[3], authors Lana E. Kandalaft, Daniel J. Powell, and George Coukos from the University of Pennsylvania recorded adverse events in pilot studies using a CAR-T modified to recognize the folate receptor; it thus appears that any adverse event reporting system is at the discretion of the primary investigator.

Other protocol considerations suggested by the symposium attendants included:

  • Plan for translational clinical lab for routine blood analysis
  • Subject screening for pulmonary and cardiac events
  • Determine possibility of insertional mutagenesis
  • Informed consent
  • Analysis of non-T and T-cell subsets, e.g., natural killer cells and CD8+ cells

3. Consideration for Design of Trials and Mitigating Toxicities

  • Early toxic effects – Cytokine Release Syndrome – The effectiveness of CAR-T therapy has been manifested by the release of high levels of cytokines, resulting in fever and inflammatory sequelae. One such cytokine, interleukin-6, has been attributed to this side effect, and investigators have successfully used an IL-6 receptor antagonist, tocilizumab (Actemra™), to alleviate symptoms of cytokine release syndrome (see the review “Adoptive T-cell therapy: adverse events and safety switches” by Siok-Keen Tey).

 

Below is a video from Dr. Renier Brentjens, M.D., Ph.D., of Memorial Sloan Kettering concerning his finding that the adverse events from cytokine release syndrome may be a function of the tumor cell load, and that if the patient is treated with CAR-T right after salvage chemotherapy the adverse events are alleviated.

Please see video below:

Video: https://www.youtube.com/watch?v=4Gg6elUMIVE

  • Early toxic effects – Over-activation of CAR T-cells; mitigated by a dose-escalation strategy (as the authors in reference [3] proposed). Most trials give billions of genetically modified cells to a patient.
  • Late toxic effects – Long-term depletion of B-cells. For example, CAR-T directed against CD19 or CD20 on B cells may deplete the normal population of CD19+ or CD20+ B-cells over time; this is possibly managed by IgG supplementation.

Below is a curation of various examples of the need for developing a pharmacovigilance framework for engineered T-cell therapies.

As shown above, the first reported side effects from engineered T-cell or CAR-T therapies stemmed from the first human trial, occurring at the University of Pennsylvania, the developers of the first CAR-T therapy. The clinical investigators, however, anticipated the issue of a potential cytokine storm and had developed ideas in the pre-trial phase of how to ameliorate such toxicity using anti-cytokine antibodies. However, until the trial was underway they were unsure of which cytokines would be prominent in causing a cytokine storm effect from the CAR-T therapy. Fortunately, the investigators were able to save patient 1 (described here in other posts) using anti-IL-1 and -10 antibodies.

 

Over the years, however, multiple trials had to be discontinued as shown below in the following posts:

What does this mean for Immunotherapy? FDA put a temporary hold on Juno’s JCAR015 after three deaths from cerebral edema in a CAR-T clinical trial, and Kite Pharma announced the Phase II portion of its CAR-T ZUMA-1 trial

The NIH has put a crimp in the clinical trial work of Steven Rosenberg, Kite Pharma’s star collaborator at the National Cancer Institute. The feds slammed the brakes on the production of experimental drugs at two of its facilities–including cell therapies that Rosenberg works with–after an internal inspection found they weren’t in compliance with safety and quality regulations.

In this instance Kite was being cited for manufacturing issues, apparently fungal contamination in their cell therapy manufacturing facility. However, shortly afterward other CAR-T developers were having tragic deaths in their initial phase 1 safety studies.

Juno Halts Cancer Trial Using Gene-Altered Cells After 3 Deaths

 

Juno halts its immunotherapy trial for cancer after three patient deaths

By DAMIAN GARDE @damiangarde and MEGHANA KESHAVAN @megkesh

JULY 7, 2016

In Juno patient deaths, echoes seen of earlier failed company

By SHARON BEGLEY @sxbegle

JULY 8, 2016

https://www.statnews.com/2016/07/08/juno-echoes-of-dendreon/

After a deadly clinical trial, will immune therapies for cancer be a bust?

By DAMIAN GARDE @damiangarde

JULY 8, 2016

This led to warnings by the FDA and alteration of their trials, as well as use of their CAR-T as a monotherapy.

Hours after Juno CAR-T study deaths announced, Kite enrolls CAR-T PhII

Well That Was Quick! FDA Lets Juno Restart Trial With a New Combination Chemotherapeutic

At the Seattle Times:

FDA lets Juno restart cancer-treatment trial

Certainly, with so many issues, there would seem to be a need for more rigorous work either to establish a pharmacovigilance framework or to develop alternative engineered T cells with a safer profile.

However, here we went again:

New paper sheds fresh light on Tmunity’s high-profile CAR-T deaths
Jason Mast
Editor
The industry-wide effort to push CAR-T therapies — wildly effective in several blood cancers — into solid tumors took a hit last year when Tmunity, a biotech founded by CAR-T pioneer Carl June and backed by several blue-chip VCs, announced it shut down its lead program for prostate cancer after two patients died.

On a personal note, this trial was announced at the BIO International meeting here in Philadelphia in 2019.

see Live Conference Coverage on this site

eProceedings for BIO 2019 International Convention, June 3-6, 2019 Philadelphia Convention Center; Philadelphia PA, Real Time Coverage by Stephen J. Williams, PhD @StephenJWillia2

and the indication was for prostate cancer, in particular castration-resistant (hormone-refractory) disease. Another trial was planned for pancreatic cancer from the same group, and the early indications were favorable.

From OncLive:

Source: https://www.onclive.com/view/car-t-cell-therapy-trial-in-solid-tumors-halted-following-2-patient-deaths 

Tmunity Therapeutics, a clinical-stage biotherapeutics company, has halted the development of its lead CAR T-cell product following the deaths of 2 patients who were enrolled in a trial investigating its use in solid tumors.1

The patients reportedly died from immune effector cell-associated neurotoxicity syndrome (ICANS), which is a known adverse effect associated with CAR T-cell therapies.

“What we are discovering is that the cytokine profiles we see in solid tumors are completely different from hematologic cancers,” Oz Azam, co-founder of Tmunity said in an interview with Endpoints News. “We observed ICANS. And we had 2 patient deaths as a result of that. We navigated the first event and obviously saw the second event, and as a result of that we have shut down the version one of that program and pivoted quickly to our second generation.”

Previously, with first-generation CAR T-cell therapies in patients with blood cancers, investigators were presented with the challenge of overcoming cytokine release syndrome. Now ICANS, or macrophage activation, is proving to have deadly effects in the realm of solid tumors. Carl June, the other co-founder of Tmunity, noted that investigators will now need to dedicate their efforts to engineering around this, as had been done with tocilizumab (Actemra) in 2012.

The company is dedicated to the development of novel approaches that produce best-in-class control over T-cell activation and direction in the body.2 The product examined in the trial was developed to utilize engineered patient cells to target prostate-specific membrane antigen; it was also designed to use a dominant TGFβ receptor to block an important checkpoint involved in cancer.

Twenty-four patients were recruited for the dose-escalating study and the company plans to release data from high-dose cohorts later in 2021.

“We are going to present all of this in a peer-reviewed publication because we want to share this with the field,” Azam said. “Because everything we’ve encountered, no matter what…people are going to encounter this when they get into the clinic, and I don’t think they’ve really understood yet because so many are preclinical companies that are not in the clinic with solid tumors. And the rubber meets the road when you get in the clinic, because the ultimate in vivo model is the human model.”

Azam added that the company plans to develop a new investigational new drug for version 2, which they hope will result in a safer product.

References

  1. Carroll J. Exclusive: Carl June’s Tmunity encounters a lethal roadblock as 2 patient deaths derail lead trial, raise red flag forcing rethink of CAR-T for solid tumors. Endpoints News. June 2, 2021. Accessed June 3, 2021. https://bit.ly/3wPYWm0
  2. Research and Development. Tmunity Therapeutics website. Accessed June 3, 2021. https://bit.ly/3fOH3OR

Forward to 2022

Reprogramming a new type of T cell to go after cancers with less side effects, longer impact

A Sloan Kettering Institute research team thinks new, killer, innate-like T cells could make promising candidates to treat cancers that so far haven’t responded to immunotherapy treatments. (koto_feja)

Immunotherapy is one of the more appealing and effective kinds of cancer treatment when it works, but the relatively new approach is still fairly limited in the kinds of cancer it can be used for. Researchers at the Sloan Kettering Institute have discovered a new kind of immune cell and how it could be used to expand the reach of immunotherapy treatments to a much wider pool of patients.

The cells in question are called killer innate-like T cells, a threatening name for a potentially lifesaving innovation. Unlike normal killer T cells, killer innate-like T cells stay active much longer and can burrow further into potentially cancerous tissue to attack tumors. The research team first reported these cells in 2016, but it’s only recently that they were able to properly understand and identify them.

“We think these killer innate-like T cells could be targeted or genetically engineered for cancer therapy,” said the study’s lead author, Ming Li, Ph.D., in a press release. “They may be better at reaching and killing solid tumors than conventional T cells.”

Below is the referenced paper from PubMed:

Evaluation of the safety and efficacy of humanized anti-CD19 chimeric antigen receptor T-cell therapy in older patients with relapsed/refractory diffuse large B-cell lymphoma based on the comprehensive geriatric assessment system


Abstract

Anti-CD19 chimeric antigen receptor (CAR) T-cell therapy has led to unprecedented results to date in relapsed/refractory (R/R) diffuse large B-cell lymphoma (DLBCL), yet its clinical application in elderly patients with R/R DLBCL remains somewhat limited. In this study, a total of 31 R/R DLBCL patients older than 65 years of age were enrolled and received humanized anti-CD19 CAR T-cell therapy. Patients were stratified into a fit, unfit, or frail group according to the comprehensive geriatric assessment (CGA). The fit group had a higher objective response rate (ORR) and complete response (CR) rate than the unfit/frail group, but there was no difference in the partial response (PR) rate between the groups. The unfit/frail group was more likely to experience AEs than the fit group. The peak proportion of anti-CD19 CAR T-cells in the fit group was significantly higher than that of the unfit/frail group. The CGA can be used to effectively predict the treatment response, adverse events, and long-term survival.

Introduction

Diffuse large B-cell lymphoma (DLBCL) is the most common subtype of non-Hodgkin lymphoma (NHL), accounting for 30–40% of cases, with the median age of onset being older than 65 years [1]. Although the five-year survival rate for patients with DLBCL has risen to more than 60% with the application of standardized treatments and hematopoietic stem cell transplantation, nearly half of patients progress to relapsed/refractory (R/R) DLBCL. Patients with R/R DLBCL, especially elderly individuals, have a poor prognosis [2,3], so new treatments are needed to prolong survival and improve the prognosis of this population.

As a revolutionary immunotherapy therapy, anti-CD19 chimeric antigen receptor (CAR) T-cell therapy has achieved unprecedented results in hematological tumors [4]. As CD19 is expressed on the surface of most B-cell malignant tumors but not on pluripotent bone marrow stem cells, CD19 has been used as a target for B-cell malignancies, including B-cell acute lymphoblastic leukemia, NHL, multiple myeloma, and chronic lymphocytic leukemia [5]. Despite the wide application and high efficacy of anti-CD19 CAR T-cell therapy, reports of adverse events (AEs) such as cytokine release syndrome (CRS) and immune effector cell-associated neurotoxic syndrome (ICANS) have influenced its use [6]. Especially in elderly patients, AEs associated with anti-CD19 CAR T-cell therapy might be more obvious.

Although anti-CD19 CAR T-cell therapy has been reported in the treatment of NHL, including R/R DLBCL, few studies to date have assessed the safety of anti-CD19 CAR T-cell therapy in elderly R/R DLBCL patients, and its clinical application in the elderly R/R DLBCL population is limited. In ZUMA-1 [7], among R/R DLBCL patients who received CAR T-cell therapy, the CR rate in patients ≥65 years was higher than that in patients <65 years (75% vs. 53%). Lin et al. [8] reported 49 R/R DLBCL patients (24 patients >65 years, 25 patients <65 years) who received CAR T-cell therapy with a median follow-up of 179 days. The CR rate at 100 days was 51%, while the 6-month progression-free survival (PFS) and overall survival (OS) were 48% and 71%, respectively. Neither of the two studies carried out a comprehensive geriatric assessment (CGA) of fit, unfit, and frail groups of R/R DLBCL patients over 65 years of age or further analyzed the differences in efficacy and side effects in the three groups. The CGA is an effective system designed to evaluate the prognosis and improve the survival of elderly patients with cancer. The CGA system includes age, activities of daily living (ADL), instrumental ADL (IADL), and the Cumulative Illness Rating Score for Geriatrics (CIRS-G) [9].

In this study, elderly R/R DLBCL patients were grouped according to their CGA results (fit vs. unfit/frail) before receiving humanized anti-CD19 CAR T-cell therapy. We then analyzed the efficacy and AEs of anti-CD19 CAR T-cell therapy and compared findings between these groups.

 

Well, it appears that the discriminator was only fitness going into the trial; it is a bit odd that the whole field appears to be lacking in the development of safety biomarkers.

 

 

However, Genentech (a subsidiary of Roche) may now be using some of these data to develop therapies that combat resistance to CAR-T therapies, which may provide, at least for now, a toxicokinetic approach to reducing AEs by lowering the number of CAR-T cells that need to be administered.

 

Source: https://www.fiercebiotech.com/research/genentech-uncovers-how-cancer-cells-resist-t-cell-attack-potential-boon-immunotherapy

Roche’s Genentech is exploring inhibiting ESCRT as an anticancer strategy, said Ira Mellman, Ph.D., Genentech’s vice president of cancer immunology. (Roche)

Cancer cells deploy various tactics to avoid being targeted and killed by the immune system. A research team led by Roche’s Genentech has now identified one such method that cancer cells use to resist T-cell assault by repairing damage.

To destroy their targets, cancer-killing T cells known as cytotoxic T lymphocytes (CTLs) secrete the toxin perforin to form little pores in the target cells’ surface. Another type of toxin called granzymes are delivered directly into the cells through those portals to induce cell death.

By using high-res imaging in live cells, the Genentech-led team found that the membrane damage caused by perforin could trigger a repair response. The tumor cells could recruit endosomal sorting complexes required for transport (ESCRT) proteins to remove the lesions, thereby preventing granzymes from entering, the team showed in a new study published in Science.

The following is the Science paper

Membrane repair in target cell defenses

Killer T cells destroy virus-infected and cancer cells by secreting two protein toxins that act as a powerful one-two punch. Pore-forming toxins, perforins, form holes in the plasma membrane of the target cell. Cytotoxic proteins released by T cells then pass through these portals, inducing target cell death. Ritter et al. combined high-resolution imaging data with functional analysis to demonstrate that tumor-derived cells fight back (see the Perspective by Andrews). Protein complexes of the ESCRT family were able to repair perforin holes in target cells, thereby delaying or preventing T cell–induced killing. ESCRT-mediated membrane repair may thus provide a mechanism of resistance to immune attack. —SMH

Abstract

Cytotoxic T lymphocytes (CTLs) and natural killer cells kill virus-infected and tumor cells through the polarized release of perforin and granzymes. Perforin is a pore-forming toxin that creates a lesion in the plasma membrane of the target cell through which granzymes enter the cytosol and initiate apoptosis. Endosomal sorting complexes required for transport (ESCRT) proteins are involved in the repair of small membrane wounds. We found that ESCRT proteins were precisely recruited in target cells to sites of CTL engagement immediately after perforin release. Inhibition of ESCRT machinery in cancer-derived cells enhanced their susceptibility to CTL-mediated killing. Thus, repair of perforin pores by ESCRT machinery limits granzyme entry into the cytosol, potentially enabling target cells to resist cytolytic attack.
Cytotoxic lymphocytes, including cytotoxic T lymphocytes (CTLs) and natural killer (NK) cells, are responsible for identifying and destroying virus-infected or tumorigenic cells. To kill their targets, CTLs and NK cells secrete a pore-forming toxin called perforin through which apoptosis-inducing serine proteases (granzymes) are delivered directly into the cytosol. Successful killing of target cells often requires multiple hits from single or multiple T cells (1). This has led to the idea that cytotoxicity is additive, often requiring multiple rounds of sublethal lytic granule secretion events before a sufficient threshold of cytosolic granzyme activity is reached to initiate apoptosis in the target (2).
Loss of plasma membrane integrity induced by cytolytic proteins or mechanical damage leads to a membrane repair response. Damage results in an influx of extracellular Ca2+, which has been proposed to lead to the removal of the membrane lesion by endocytosis, resealing of the lesions by lysosomal secretion, or budding into extracellular vesicles (3). Perforin pore formation was initially reported to enhance endocytosis of perforin (4), but subsequent work has challenged this claim (5). Endosomal sorting complexes required for transport (ESCRT) proteins can repair small wounds and pores in the plasma membrane caused by bacterial pore-forming toxins, mechanical wounding, and laser ablation (6, 7). ESCRT proteins are transiently recruited to sites of membrane damage in a Ca2+-dependent fashion, where they assemble budding structures that shed to eliminate the wound and restore plasma membrane integrity. ESCRT-dependent membrane repair has been implicated in the resealing of endogenous pore-mediated plasma membrane damage during necroptosis (8) and pyroptosis (9).
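
The additive-hit idea, and the way membrane repair would raise the number of hits needed for killing, can be captured in a toy simulation; every parameter below is an illustrative assumption, not a measured value.

    import random

    # Toy model of additive CTL hits: each hit delivers granzyme through
    # perforin pores, but membrane repair removes pores and reduces the
    # dose delivered per hit. Apoptosis triggers once cumulative granzyme
    # crosses a threshold. All numbers are illustrative assumptions.
    def hits_to_kill(repair_efficiency, threshold=10.0, dose_per_hit=3.0):
        granzyme, hits = 0.0, 0
        while granzyme < threshold:
            hits += 1
            delivered = dose_per_hit * (1.0 - repair_efficiency)
            granzyme += delivered * random.uniform(0.5, 1.5)  # hit-to-hit noise
        return hits

    random.seed(1)
    for eff in (0.0, 0.5, 0.8):
        mean_hits = sum(hits_to_kill(eff) for _ in range(1000)) / 1000
        print(f"repair efficiency {eff:.0%}: ~{mean_hits:.1f} hits to kill")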

Localization of target-derived ESCRT proteins to the cytolytic synapse

To investigate whether ESCRT-mediated membrane repair might be involved in the removal of perforin pores during T cell killing, we first determined whether ESCRT proteins in cancer-derived cells were recruited to sites of CTL engagement after perforin secretion. We used CTLs from OT-I mice that express a high-affinity T cell receptor (TCR) that recognizes the ovalbumin peptide SIINFEKL (OVA257-264) bound to the major histocompatibility complex (MHC) allele H-2Kb (10). We performed live-cell microscopy of OT-I CTLs engaging SIINFEKL-pulsed target cells that express enhanced green fluorescent protein (EGFP)–tagged versions of Tsg101 or Chmp4b, two ESCRT proteins implicated in membrane repair (6). To correlate recruitment of ESCRT proteins with perforin exposure in time, we monitored CTL-target interaction in media with a high concentration of propidium iodide (PI), a cell-impermeable fluorogenic dye that can rapidly diffuse through perforin pores to bind and illuminate nucleic acids in the cytosol and nucleus of the target (5). EGFP-tagged ESCRT proteins were consistently recruited to the site of CTL engagement within 30 to 60 s after PI influx (Fig. 1, A and B). EGFP-Tsg101 and EGFP-Chmp4b in target cells accumulated at the cytolytic synapse after PI influx in 25 of 27 (92.6%) and 31 of 33 (93.9%) of conjugates monitored, respectively, compared with a cytosolic EGFP control, which was not recruited (Fig. 1C and movies S1 to S3). Notably, ESCRT-laden material, presumably membrane fragments, frequently detached from the target cell and adhered to the surface of the CTL (Fig. 1, D and E, and movie S2). We observed this phenomenon in ~60% of conjugates imaged in which targets expressed EGFP-Tsg101 or EGFP-Chmp4b (17 of 27 and 20 of 33 conjugates, respectively; Fig. 1D). Shedding of ESCRT-positive membrane from the cell after repair occurs after laser-induced plasma membrane wounding (67). Plasma membrane fragments shed from the target cell into the synaptic cleft likely contain ligands for CTL-resident receptors. Target cell death would separate the CTL and target, revealing target-derived material on the CTL surface.
FIG. 1. Fluorescently tagged ESCRT proteins in targets localize to site of CTL killing after perforin secretion.
(A) Live-cell spinning disk confocal imaging of a fluorescently labeled OT-I CTL (magenta) engaging an MC38 cancer cell expressing EGFP-Tsg101 (green) in media containing 100 μM PI (red). Yellow arrowheads highlight ESCRT recruitment. T-0:00 is the first frame of PI influx into the target cell (time in minutes:seconds). Scale bar, 10 μm. (B) Graph of EGFP-Tsg101 and PI fluorescence intensity at the IS within the target over time, from example in (A). AU, arbitrary units. (C and D) Quantification of CTL-target conjugates exhibiting accumulation of EGFP at the synapse after PI influx (C) or detectable EGFP-labeled material associated with CTL after target interaction (D) (EGFP condition: N = 22 conjugates in seven independent experiments; EGFP-Tsg101 condition: N = 27 conjugates in nine independent experiments; EGFP-Chmp4b condition: N = 33 conjugates in 24 independent experiments). (E) Live-cell spinning disk confocal imaging of OT-I CTL (magenta) killing MC38 expressing EGFP-Chmp4b (green), demonstrating the presence of target-derived EGFP-Chmp4b material (yellow arrowheads) associated with CTL membrane after a productive target encounter. T-0:00 is the first frame of PI influx into the target cell. Scale bar, 10 μm.

3D cryo-SIM and FIB-SEM imaging of CTLs caught in the act of killing target cells

Although live-cell imaging indicated that ESCRT complexes were rapidly recruited at sites of T cell–target cell contact, light microscopy alone is of insufficient resolution to establish that this event occurred at the immunological synapse (IS). We thus sought to capture a comprehensive view of the IS in the moments immediately after secretion of lytic granules. We used cryo–fluorescence imaging followed by correlative focused ion beam–scanning electron microscopy (FIB-SEM), which can achieve isotropic three-dimensional (3D) imaging of whole cells at 8-nm resolution or better (11–13). To capture the immediate response of target cells after perforin exposure, we developed a strategy whereby cryo-fixed CTL-target conjugates were selected shortly after perforation, indicated by the presence of a PI gradient in the target (fig. S1A). In live-cell imaging experiments, PI fluorescence across the nucleus of SIINFEKL-pulsed ID8 target cells began as a gradient and became homogeneous 158 ± 64 s, on average, after initial PI influx (N = 31 conjugates; fig. S1, B and C, and movie S4). Thus, fixed CTL-target conjugates that exhibited a gradient of PI across the nucleus would have been captured within ~3 min of perforin exposure.
Coverslips of CTL-target conjugates underwent high-pressure freezing and were subsequently imaged with wide-field cryogenic fluorescence microscopy followed by 3D cryo–structured illumination microscopy (3D cryo-SIM) performed in a customized optical cryostat (14). We selected candidate conjugates for FIB-SEM imaging on the basis of whether a gradient of PI fluorescence was observed across the nucleus of the target emanating from an attached CTL (movie S5). FIB-SEM imaging of the CTL-target conjugate at 8-nm isotropic voxels resulted in a stack of >10,000 individual electron microscopy (EM) images. The image stack was then annotated using a human-assisted machine learning–computer vision platform to segment the plasma membranes of each cell along with cell nuclei and various organelles (https://ariadne.ai/).
We captured four isotropic 3D 8-nm-resolution EM datasets of CTLs killing cancer cells moments after the secretion of lytic granule contents (Fig. 2A and movie S6). Semiautomated segmentation of the cell membranes, nuclei, lytic granules, Golgi apparatus, mitochondria, and centrosomes of the T cells allows for easier visualization and analysis of the 3D EM data. All FIB-SEM datasets and segmentations can be explored online at https://openorganelle.janelia.org (see links in the supplementary materials). Reconstructed views of the segmented data clearly demonstrate the polarization of the centrosome, Golgi apparatus, and lytic granules to the IS—all of which are hallmarks of CTL killing [Fig. 2A, i to iii, and movie S6, time stamp (TS) 1:33] (15, 16). On the target cell side, we noted cytoplasmic alterations consistent with cell damage including enhanced electron density of mitochondria adjacent to the IS (fig. S2A). Close visual scanning of the postsynaptic target cell membrane in the raw EM data failed to reveal obvious perforin pores, which have diameters (16 to 22 nm) close to the limit of resolution for this technique (17).
FIG. 2. Eight-nm-resolution 3D FIB-SEM imaging of whole CTL-target conjugate.
(A) 3D rendering of segmented plasma membrane predictions derived from isotropic 8-nm-resolution FIB-SEM imaging of a high-pressure frozen OT-I CTL (red) captured moments after secretion of lytic granules toward a peptide-pulsed ID8 ovarian cancer cell (blue). (i) Side-on sliced view corresponding to the gray horizontal line within the inset box in (A). Seen here are 3D renderings of the segmented plasma membrane of the cancer cell (blue) as well as the CTL plasma membrane (red), centrosome (gold), Golgi apparatus (cyan), lytic granules (purple), mitochondria (green), and nucleus (gray). (ii and iii) A zoomed-in view from the dashed white box in (i) shows the details of the IS (ii) and a single corresponding FIB-SEM slice docked onto the segmented data (iii). (B) Single top-down FIB-SEM slice showing overlaid target cell (blue) and CTL (red) segmentation. (i) Zoomed-in view from dashed white box in (B) details the intercellular material (IM) (gray) between the CTL and target at the IS. (C) Zoomed-in image of a 3D rendering of the surface of the target cell plasma membrane (white) opposite the intercellular material (IM) at the IS. Yellow arrowheads mark plasma membrane buds protruding into the synaptic cleft. (i and ii) Accompanying images demonstrate the orientation of the view in (C) with the rendering of the CTL (red) present (i) and removed (ii), and the dashed yellow box in (ii) indicates the area of detail shown in (C).
The segmentation of the two cells illustrates the detailed topography of the plasma membrane of the CTL and target at the IS (fig. S2B). The raw EM and segmentation data reveal a dense accumulation of particles, vesicles, and multilamellar membranous materials, which crowd the synaptic cleft between the CTL and the target (Fig. 2B and movie S6, TS 0:40 to 0:50). The source of this intercellular material (IM) was likely in part the lytic granules because close inspection revealed similar particles and dense vesicles located within as-yet-unreleased granules (fig. S2C). To determine whether some of the membranous material within the intercellular space might also have been derived from the target cell, we examined the surface topology of the postsynaptic target cell. We noted multiple tubular and bud-like protrusions of the target cell membrane that extended into the synaptic space; thus, at least some of the membrane structures observed were still in continuity with the target cell (Fig. 2C and movie S6, TS 0:58 to 1:11). ESCRT proteins have been shown to generate budding structures in the context of plasma membrane repair (6), which led us to next assess where target-derived ESCRT proteins are distributed in the context of the postsecretion IS.
To map the localization of target-derived ESCRT proteins onto a high-resolution landscape of the IS, we captured three FIB-SEM datasets that have associated 3D cryo-SIM fluorescence data for mEmerald-Chmp4b localization (Fig. 3A, fig. S3, and movie S7). This correlative light and electron microscopy (CLEM) revealed that mEmerald-Chmp4b expressed in the target cell was specifically recruited to the target plasma membrane opposite the secreted IM (Fig. 3, B and C). The topography of the plasma membrane at the site of ESCRT recruitment was markedly convoluted, exhibiting many bud-like projections (movie S7, TS 0:37 to 0:40). mEmerald-Chmp4b fluorescence also overlapped with some vesicular structures in the intercellular synaptic space (Fig. 3C). Together, the live-cell imaging and the 3D cryo-SIM and FIB-SEM CLEM demonstrate the localization of ESCRT proteins at the synapse that was the definitive site of CTL killing and was thus spatially and temporally correlated to perforin secretion. These data implicate the ESCRT complex in the repair of perforin pores.
FIG. 3. Correlative 3D cryo-SIM and FIB-SEM reveal localization of target-derived ESCRT within the cytolytic IS.
(A) Three example datasets showing correlative 3D cryo-SIM and FIB-SEM imaging of OT-I CTLs (red) captured moments after secretion of lytic granules toward peptide-pulsed ID8 cancer cells (blue) expressing mEmerald-Chmp4b (green fluorescence). (B and C) Single FIB-SEM slices corresponding to the orange boxes in (A), overlaid with CTL and cancer cell segmentation (B) or correlative cryo-SIM fluorescence of mEmerald-Chmp4b derived from the target cell (C).

Function of ESCRT proteins in repair of perforin pores

We next investigated whether ESCRT inhibition could enhance the susceptibility of target cells to CTL-mediated killing. Prolonged inactivation of the ESCRT pathway is itself cytotoxic (9). We thus developed strategies to ablate ESCRT function that would allow us a window of time to assess CTL killing (fig. S4). We used two approaches to block ESCRT function: CRISPR knockout of the Chmp4b gene or overexpression of VPS4aE228Q (E228Q, Glu228 → Gln), a dominant-negative allele of the VPS4a ATPase that impairs ESCRT function (fig. S4, A to C) (10). We took care to complete our assessment of target killing well in advance of spontaneous target cell death (fig. S4D).
We tested the capacity of OT-I CTLs to kill targets presenting one of four previously characterized peptides that stimulate the OT-I TCR with a range of potencies: SIINFEKL (N4), the cognate peptide, and three separate variants (in order of highest to lowest affinity), SIITFEKL (T4), SIIQFEHL (Q4H7), and SIIGFEKL (G4) (18, 19). Target cells were pulsed with peptide, washed, transferred to 96-well plates, and allowed to adhere before the addition of OT-I CTLs. Killing was assessed by monitoring the uptake of a fluorogenic caspase 3/7 indicator (Fig. 4, A to D, and fig. S5A). Killing was significantly more efficient in ESCRT-inhibited target cells, both for CRISPR depletion of Chmp4b (Fig. 4, A to C) and for expression of the dominant-negative VPS4aE228Q (Fig. 4D). The difference in killing between the ESCRT-inhibited and control cells was greater when the lower-potency T4, Q4H7, and G4 peptides were used. Nevertheless, ESCRT inhibition moderately improved killing efficiency even in the case of the high-potency SIINFEKL peptide. ESCRT inhibition had no effect on MHC class I expression on the surface of target cells (fig. S5B). Thus, ESCRT inhibition could sensitize target cells to perforin- and granzyme-mediated killing, especially at physiologically relevant TCR–peptide-MHC affinities.
FIG. 4. ESCRT inhibition enhances susceptibility of cancer cells to CTL killing and recombinant lytic proteins.
(A) Representative time-lapse data of killing of peptide-pulsed Chmp4b knockout (KO) or control B16-F10 cells by OT-I CTLs. Affinity of the pulsed peptide to OT-I TCR decreases from left to right. Error bars indicate SDs. (B) Images extracted from T4 medium-affinity peptide condition show software-detected caspase 3/7+ events in control and Chmp4b KO conditions. (C and D) Data representing the 4-hour time point of assays measuring OT-I T cells killing either Chmp4b KO (C) or VPS4 dominant-negative (D) target cells with matched controls. Error bars indicate SDs of data. Data are representative of at least three independent experimental replicates. pMHC, peptide-MHC; HA, hemagglutinin. (E and F) Determination of sublytic dose of Prf. B16-F10 cells expressing VPS4a (WT or E228Q) were exposed to increasing concentrations of Prf. Cell viability was determined by morphological gating (E). FSC, forward scatter; SSC, side scatter. (G and H) B16-F10 cells expressing VPS4a (WT or E228Q) were exposed to a sublytic dose of Prf in combination with increasing concentrations of recombinant GZMB (rGZMB). Cell death was determined by Annexin V–allophycocyanin (APC) staining (G). Controls include a condition with no perforin and 5000 ng/ml rGZMB and sublytic perforin with no rGZMB. Graphs in (F) and (H) represent the means of three experiments, and error bars indicate SDs. Statistical significance was determined by multiple unpaired t tests with alpha = 0.05. ns, not significant; *P < 0.05; **P < 0.01; ***P < 0.001.
We next directly tested the effects of ESCRT inhibition when target cells were exposed to both recombinant perforin (Prf) and granzyme B (GZMB), the most potently proapoptotic granzyme in humans and mice (20). Prf alone at high concentrations can lyse cells (4), so we first determined a sublytic Prf concentration that would temporarily permeabilize the plasma membrane but permit the cells to recover. B16-F10 cells expressing either VPS4aWT (WT, wild-type) or VPS4aE228Q were exposed to a range of Prf concentrations in the presence of PI, and cell viability and PI uptake were assessed using flow cytometry. Cells that expressed dominant-negative VPS4aE228Q were more sensitive to Prf alone than ESCRT-competent cells (Fig. 4, E and F). At 160 ng/ml Prf, there was no significant difference in cell viability between the two conditions, and this concentration was used as the sublytic dose. Cells in the live gate that were PI+ had been permeabilized by Prf but recovered. Although the percentage of PI+ live cells was similar under both sets of conditions, the mean fluorescence intensity of PI was higher in live ESCRT-inhibited cells (fig. S6). A delay in plasma membrane resealing could account for this difference.
We reasoned that delaying perforin pore repair might also enhance GZMB uptake into the target. ESCRT-inhibited cells were more sensitive to the combined Prf-GZMB treatment when cell death was measured by Annexin V staining (Fig. 4, G and H). Similar results were observed when these experiments were repeated with a murine lymphoma cancer cell line (fig. S7). The observation that ESCRT-inhibited target cells are more sensitive both to CTL attack and to recombinant Prf-GZMB supports the hypothesis that the ESCRT pathway contributes to membrane repair after Prf exposure.
Escaping cell death is one of the hallmarks of cancer. Our findings suggest that ESCRT-mediated membrane repair of perforin pores may restrict accessibility of the target cytosol to CTL-secreted granzyme, thus promoting survival of cancer-derived cells under cytolytic attack. Although other factors may contribute to setting the threshold for target susceptibility to killing, active repair of perforin pores must now be considered a clear contributing factor.

Acknowledgments

We thank members of the Mellman laboratory for advice, discussion, and reagents; B. Haley for assistance with plasmid construct design; the Genentech FACS Core Facility for technical assistance; S. Van Engelenburg of the University of Denver for invaluable discussions and guidance; A. Wanner, S. Spaar, and Ariadne AI AG (https://ariadne.ai/) for assistance with FIB-SEM segmentation, CLEM coregistration, data presentation, and rendering; D. Bennett of the Janelia Research Campus for assisting with data upload to https://openorganelle.janelia.org; and the Genentech Postdoctoral Program for support.
Funding: A.T.R. and I.M. are funded by Genentech/Roche. C.S.X., G.S., A.W., D.A., N.I., and H.F.H. are funded by the Howard Hughes Medical Institute (HHMI).

Please look for a Followup Post concerning “Developing a Pharmacovigilance Framework for Engineered T-Cell Therapies”

 

References

  1. Ertl HC, Zaia J, Rosenberg SA, June CH, Dotti G, Kahn J, Cooper LJ, Corrigan-Curay J, Strome SE: Considerations for the clinical application of chimeric antigen receptor T cells: observations from a Recombinant DNA Advisory Committee symposium held June 15, 2010. Cancer Research 2011, 71(9):3175-3181.
  2. Morgan RA, Yang JC, Kitano M, Dudley ME, Laurencot CM, Rosenberg SA: Case report of a serious adverse event following the administration of T cells transduced with a chimeric antigen receptor recognizing ERBB2. Molecular Therapy 2010, 18(4):843-851.
  3. Kandalaft LE, Powell DJ Jr, Coukos G: A phase I clinical trial of adoptive transfer of folate receptor-alpha redirected autologous T cells for recurrent ovarian cancer. Journal of Translational Medicine 2012, 10:157.

Other posts on this site on Immunotherapy and Cancer include

Report on Cancer Immunotherapy Market & Clinical Pipeline Insight

New Immunotherapy Could Fight a Range of Cancers

Combined anti-CTLA4 and anti-PD1 immunotherapy shows promising results against advanced melanoma

Molecular Profiling in Cancer Immunotherapy: Debraj GuhaThakurta, PhD

Pancreatic Cancer: Genetics, Genomics and Immunotherapy

$20 million Novartis deal with ‘University of Pennsylvania’ to develop Ultra-Personalized Cancer Immunotherapy

Upcoming Meetings on Cancer Immunogenetics

Tang Prize for 2014: Immunity and Cancer

ipilimumab, a Drug that blocks CTLA-4 Freeing T cells to Attack Tumors @MD Anderson Cancer Center

Juno’s approach eradicated cancer cells in 10 of 12 leukemia patients, indicating potential to transform the standard of care in oncology

Read Full Post »

More on the Performance of High Sensitivity Troponin T with Amino Terminal Pro BNP in Diabetes

Writer and Curator: Larry H. Bernstein, MD, FCAP

 

UPDATED on 9/1/2019

Risk-Based Thresholds for hs-Troponin I Safely Speed MI Rule-Out

HISTORIC suggests benefit to patients, clinicians

PARIS — Using different cutoffs for high-sensitivity cardiac troponin I (hs-cTnI) testing based on risk accurately ruled out MI and sent patients home from the emergency department sooner without missing adverse cardiac events, the HISTORIC trial found.

In the stepped-wedge trial of over 30,000 consecutive patients, introduction of the risk-based approach reduced length of stay at the emergency department by over 3 hours compared with standard care (6.8 vs 10.1 hours, P<0.001), reported Nicholas Mills, MD, PhD, of the University of Edinburgh in Scotland.

And 74% of patients under the new pathway were discharged without requiring hospital admission versus 53% under standard protocols (adjusted risk ratio 1.57, 95% CI 1.34-1.83, P<0.001).

For the primary safety endpoint, 2.5% of patients in the standard group died from cardiac causes or had an MI at 12 months post-discharge versus 1.8% of those in the early rule-out group (adjusted OR 1.02, 95% CI 0.74-1.40).

“Adoption of this approach will have major benefit for both patients and healthcare providers,” said Mills during a late-breaking press briefing at the 2019 European Society of Cardiology (ESC) congress.

For example, many patients will need only a single troponin test under the algorithm to lead to a decision on admission, he noted, which could have “absolutely enormous” cost savings.

SOURCE

https://www.medpagetoday.com/meetingcoverage/esc/81926?xid=nl_mpt_ACC_Reporter_2019-09-01&eun=g5099207d2r
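To make the logic of such a risk-based pathway concrete, here is a minimal Python sketch of a rule-out function. The numerical cutoffs (a 5 ng/L rule-out threshold at presentation, a 34 ng/L 99th-percentile rule-in threshold, and a 3 ng/L 3-hour delta) are illustrative assumptions in the spirit of the approach described above, not the HISTORIC trial protocol itself:

```python
# Illustrative sketch of a risk-based hs-cTnI rule-out pathway.
# All thresholds below are assumptions for illustration only.
def triage(tn_presentation_ng_l, tn_3h_ng_l=None, hours_since_onset=None):
    RULE_OUT = 5.0    # assumed low-risk threshold at presentation (ng/L)
    RULE_IN = 34.0    # assumed 99th-percentile upper reference limit (ng/L)

    if tn_presentation_ng_l >= RULE_IN:
        return "rule in: admit"
    # very low value at presentation, and not an early presenter
    if tn_presentation_ng_l < RULE_OUT and (hours_since_onset or 0) >= 2:
        return "rule out: discharge from ED"
    if tn_3h_ng_l is None:
        return "retest at 3 h"
    delta = abs(tn_3h_ng_l - tn_presentation_ng_l)
    if tn_3h_ng_l < RULE_IN and delta < 3.0:   # assumed delta criterion
        return "rule out: discharge from ED"
    return "observe / further evaluation"

print(triage(3.2, hours_since_onset=4))   # -> rule out: discharge from ED
```

Note how a single sufficiently low presentation value ends the pathway immediately, which is the source of the single-test cost savings Mills describes.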

 

UPDATED on 8/7/2018

Siemens’ high-sensitivity Troponin I (TnIH) assays got FDA clearance for use in diagnosing acute myocardial infarction. (Cardiovascular Business) The first high-sensitivity Troponin T test was cleared last year, as MedPage Today reported.

SOURCE

https://www.medpagetoday.com/cardiology/prevention/74423?xid=nl_mpt_cardiobreak2018-08-06&eun=g99985d0r&utm_source=Sailthru&utm_medium=email&utm_campaign=Ca

 

This is the final up-to-date review of the status of hs-troponin T (or I) with or without the combined use of brain-type natriuretic peptide or its amino-terminal peptide precursor. In addition, a new role has been reported for atrial natriuretic peptide with respect to arrhythmogenic activity. On the one hand, the diagnostic value of NT-proBNP has been seen as disappointing, in part because of the question of what information the test adds in overt, known congestive heart failure, and in part because of uncertainty about following the test during a short hospital stay. At least, this is the view of this reviewer. However, in the last several years there has been an emphasis on the value this test adds to prediction of adverse outcomes. In addition, there has been a hidden variable that has much to do with the original reference values, which were established for age ranges without any consideration of pathophysiology that might affect the values within those ranges, leading one to consider values in an aging population as normal that might well be high. Why is this? Aging patients are more likely to have hypertension, and also the onset of type 2 diabetes mellitus, with cardiovascular disease consequences. Type 2 diabetes mellitus (T2DM), for instance, is associated with insulin resistance and fat gain with generation of adipokines, but there is also hyalinization of the insulin-forming beta cells of the pancreas, and hyalinization of glomeruli (glomerulosclerosis) and afferent arteriolonephrosclerosis, with an expected decline in glomerular filtration rate and hypertension as well. Of course, this is also associated with hepatosteatosis. Nevertheless, a reference range is established that takes none of this pathophysiology into account. While a more reasonable approach has been pointed out, there has been no follow-up in the literature.

On the other hand, there has been much confusion over the restandardization of the high-sensitivity troponin I or T test (hs-TnI or hs-TnT). The reference range declines precipitously, and there is good identification of patients who are for the most part disease free, but there is no delineation of patients at high risk of acute coronary syndrome with plaque rupture versus a host of other cardiovascular conditions. The latter have no relationship to plaque rupture, but may be serious and require further evaluation. The question then becomes whether to admit for a hospital stay, to refer to clinic after an evaluation in the ICU without admission, or to do an extensive evaluation in the emergency department overnight before release for follow-up. There is still another dimension that has to do with prediction of outcomes using hs-Tn assays with or without the natriuretic peptides. Another matter, not for discussion in this article, is the underutilization of hs-CRP. Originally used as a marker of sepsis in the 1970s, it has come to be tied to identification of an ongoing inflammatory condition. Therefore, the existence of a known inflammatory condition in the family of autoimmune diseases, with one exception, might make it unnecessary.

The discussion is broken into three parts:

Part 1.   New findings on the troponins.
Part 2.  The use of combined hs-Tn with a natriuretic peptide (NT-proBNP)
Part 3.  Atrial natriuretic peptide

Part 1.    New findings on the troponins.

Troponin: more lessons to learn

C. Liebetrau, H. M. Nef, and C. W. Hamm*
Kerckhoff Heart and Thorax Center, Department of Cardiology, Bad Nauheim, Germany; German Centre for Cardiovascular Research, partner site Rhein-Main, Bad Nauheim, Germany; and University of Giessen, Medizinische Klinik I, Kardiologie und Angiologie, Giessen, Germany
European Heart Journal
http://dx.doi.org/10.1093/eurheartj/eht357
This editorial refers to ‘Risk stratification in patients with acute chest pain using three high-sensitivity cardiac troponin assays’, by P. Haaf et al.
http://dx.doi.org/10.1093/eurheartj/eht218
Cardiac troponin entered our diagnostic armamentarium 20 years ago and – unlike any other biomarker –

  • is going through constant expansion in its application.

Troponin started out as a marker of risk in ‘unstable angina’, then was used

  • as gold standard for risk stratification and therapy guiding in acute coronary syndrome
  •  served further to redefine myocardial infarction, and
  • has also become a risk factor in apparently healthy subjects.

The recently introduced high-sensitivity cardiac troponin (hs-cTn) assays

  • have not only expanded the potential of troponins, but
  • have also resulted in a certain amount of confusion
    • among unprepared users.

After many years troponins were accepted as the gold standard in

  • patients with chest pain by
  • classifying them into troponin-positive and
    • troponin-negative patients.

The new generation of hs-cTn assays has

  • improved the accuracy at the lower limit of detection and
  • provided incremental diagnostic information especially
    • in the early phase of myocardial infarction.

Moreover, low levels of measurable troponins

  • unrelated to ACS have been associated with
    • an adverse long-term outcome.

Several studies demonstrated that

  • these low levels of cardiac troponin measurable 
    • only by hs-Tn assays
  • are able to predict mortality in patients with ACS
  • as well as patients with assumed
    • stable coronary artery disease.

Furthermore, hs-cTn has the potential

  • to play a role in the care of patients
    • undergoing non-cardiac surgery.

The additional determination of hs-cTn

  • improves risk stratification beyond established risk scores, and
  • provides information for both diagnosis and
  • prognosis prediction in chest pain patients.

The daily clinical challenge in using the highly sensitive assays is to 

  • interpret the troponin concentrations, especially
  • in patients with concomitant diseases
    • that influence cardiac troponin concentrations
  • independently of myocardial ischaemia
    (e.g. chronic kidney disease, or stroke). 

The troponin test lost its ‘pregnancy test’ quality for the different users.
Different opinions exist on

  • how to interpret changes of hs-cTn levels, as opposed to the simple ‘positive–negative’ interpretation,
  • and this makes reaching a diagnosis more complex than before.

This uncertainty has probably reinforced the paradigm that

  • serial measurements of troponins are necessary, and has also
    • boosted the number of diagnoses of ACS and
    • invasive diagnostic procedures in some locations.

This is more than understandable: in acute chest pain, each of the

  • three high-sensitivity cardiac troponin assays must be interpreted against its respective baseline value
    • before the diagnosis of acute myocardial infarction (AMI) can be made.

What is a relevant change in concentrations compatible with acute myocardial necrosis and

  • what is only biological variation for the specific biomarker and assay?

Changes in serial measurements between 20% and 200% have been debated, and
the discussion is ongoing. Furthermore, it has been proposed that

  • absolute changes in cardiac troponin concentrations 
    • have a higher diagnostic accuracy for AMI
  • compared with relative changes, and

it might be helpful in distinguishing AMI from other causes of cardiac troponin elevation.
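The relative-versus-absolute debate is easy to make concrete. A minimal Python sketch (ours, for illustration): the 50% relative rise echoes the Study Group on Biomarkers in Cardiology suggestion discussed below, while the absolute delta of 7 ng/L is purely an assumed value, not a validated cutoff:

```python
# Sketch contrasting relative and absolute delta criteria for serial hs-cTn
# values. Both thresholds are illustrative assumptions, not validated cutoffs.
def significant_change(baseline, followup, rel_rise=0.50, abs_delta=7.0):
    """Return which criteria flag the pair as compatible with acute necrosis."""
    relative = (followup - baseline) / baseline >= rel_rise if baseline > 0 else True
    absolute = abs(followup - baseline) >= abs_delta   # ng/L
    return {"relative": relative, "absolute": absolute}

# At very low baselines, a large relative rise can be a trivial absolute one:
print(significant_change(4.0, 7.0))    # {'relative': True, 'absolute': False}
# At higher baselines, biological variation may never reach a 50% rise:
print(significant_change(60.0, 75.0))  # {'relative': False, 'absolute': True}
```

The two example pairs show why the criteria can disagree at the extremes of the measuring range, which is exactly where the ongoing discussion sits.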

Do we obtain any helpful directives from experts and guidelines for our daily practice?
Foreseeing this dilemma, the 2011 European Society of Cardiology (ESC) Guidelines

  • on non-ST-elevation ACS acted:
  • minor elevations of troponins were accepted as hs-cTn values in the ‘grey zone’.

This was and still is the rule, but

  • the ESC provided a general algorithm on how to manage patients with limited data.

The ‘Study Group on Biomarkers in Cardiology’ suggested

  • a rise of 50% from the baseline value at low concentrations.

However, this group of experts could also not find a substitute for the missing data

  • needed to validate the proposed recommendation.

The story is just too complex:

  • different troponin assays with
  • different epitope targets,
  • different patient populations,
  • different sampling protocols,
  • different follow-up lengths, and much more.

Therefore, any study that helps us to see better through the fog is welcome here.

Haaf et al. have now presented the results of their study of

  • different hs-cTn assays
    (hs-cTnT, Roche Diagnostics; hs-cTnI, Beckman-Coulter; and  hs-cTnI, Siemens)

    • with respect to the outcome of patients with acute chest pain.

The authors examined 1117 consecutive patients presenting with acute chest pain
[340 patients with ACS (30.5%)] from the Advantageous Predictors of Acute Coronary Syndrome
Evaluation (APACE) study. Blood was collected

  • directly on admission and
  • serially thereafter at 2, 3, and 6h.

Eighty-two patients (7.3%) died during the 2-year follow-up. The main findings of the study are that

  1. hs-cTnT predicts mortality more accurately than the hs-cTnI assays,
  2. a single measurement is sufficient, and
  3. this challenges accepted views on the causes of cardiac troponin elevation.

These results of APACE stand in contrast to recent findings from a GUSTO IV cohort of 1335 patients with ACS (Table 1).

Table 1. Studies investigating high-sensitivity troponins for long-term prognosis

| Variable | APACE (n = 1117) | GUSTO IV (n = 1335) | PEACE (n = 3567) |
|---|---|---|---|
| Patient cohort | Unstable | Unstable | Stable |
| Blood sampling | On admission, 1, 2, 3, 6 h | 48 h after study randomization | Before randomization |
| No. of patients with hs-cTnT > detection limit | 883 (79.1%) | UKN | UKN |
| No. of patients with hs-cTnT > 99th percentile | 401 (35.9%) | 1015 (76%) | 395 (10.9%) |
| No. of patients with hs-cTnI (Abbott) > detection limit | UKN | UKN | 3567 (98.5%) |
| No. of patients with hs-cTnI (Abbott) > 99th percentile | UKN | 988 (74%) | 105 (2.9%) |
| No. of patients with NSTEMI | 170 (15.2%) | 100 (100%) | 0 (0%) |
| Follow-up | 24 months | 12 months | 5.2 years |
| Non-fatal AMI | UKN | UKN | 209 (5.9%) |
| Mortality or primary endpoint | 82 (7.3%) | 119 (8.9%) | 203 (5.7%) |
| Key findings | cTnT better than cTnI; single cTn sample sufficient | cTnI = cTnT | cTnI better than cTnT |

AMI, acute myocardial infarction; cTn, cardiac troponin; NSTEMI, non-ST-elevation myocardial infarction; UKN, unknown.

NSTEMI defined in the GUSTO IV trial:
  1. one or more episodes of angina lasting ≥ 5 min,
  2. within 24 h of admission, and
  3. either a positive cardiac TnT or TnI test
    (above the upper limit of normal for the local assay, during the years 1999 and 2000)
  4. or ≥ 0.5 mm of transient or persistent ST-segment depression.

In GUSTO IV, the prognostic capacity of four different sensitive cardiac troponin assays was compared:

  1. hs-cTnT (Roche Diagnostics),
  2. cTnI and hs-cTnI (Abbott Diagnostics), and
  3. Acc-cTnI (Beckman-Coulter).

In total, 119 patients (8.9%) died during the 1-year follow-up. Looking at their

  • receiver operating characteristic curve (ROC) analyses,
  • there were only negligible differences
    • in the areas under the curves between the assays.

Contrasting results have also been reported in patients (n = 3623)

  • with stable coronary artery disease and preserved systolic left ventricular function

from the PEACE trial (Table1).

During a median follow-up period of 5.2 years,

  • there were 203 (5.6%) cardiovascular deaths or
  • first hospitalizations for heart failure.

Concentrations of hs-cTnI (Abbott Diagnostics) at or above

  • the limit of detection of the assay were measured in 3567 patients (98.5%), but
  • concentrations of hs-cTnI at or above the gender-specific 99th percentile
    • were found in only 105 patients (2.9%).

This study revealed that

  • there was a strong and graded association
  • between increasing quartiles of hs-cTnI concentrations and
  • the risk for cardiovascular death or heart failure.

Hs-cTnI provided incremental prognostication information

  • over conventional risk markers and
  • other established cardiovascular biomarkers,
  • including hs-cTnT.

In contrast to the APACE results, only hs-cTnI, but

  • not hs-cTnT, was significantly
  • associated with the risk for AMI.

Is there a real difference between cardiac troponin T and cardiac troponin I

  • in predicting long term prognosis?

The question arises of whether there is a true clinically relevant

  • difference between cTnT and cTnI.

Given the biochemical and analytical differences, the two

  • troponins display rather similar serum profiles during AMI.

Minor biological differences between cTnT and cTnI are, however,

  • apparently not relevant for diagnosis
  • and clinical management in the acute setting of ACS.

This is a provocative theory, but appears premature in our opinion.
Above all, the results of the current study appear

  • too inconsistent to allow such conclusions.

In the present study, hs-cTnT (Roche Diagnostics) outperformed

  • hs-cTnI (Siemens and Beckman-Coulter) in terms of
  • very long term prediction of cardiovascular death and
    • heart failure in stable patients.

We don’t know how hs-cTnI from Abbott Diagnostics

  • performs in the APACE cohort.

The number of patients and endpoints provided

  • by the APACE registry are rather low.
  • The results could, therefore, be a chance finding.

It is far too early to favour one high sensitivity assay over the other. The findings need confirmation.

Implications for clinical practice

There is no doubt that high-sensitivity assays

  • are the analytical method of choice
    • in terms of risk stratification in patients with ACS.

What is new?
A single measurement of hs-cTn seems to be adequate

  • for long-term risk stratification in patients without AMI.

However, the question of which troponin might be preferable

  • for long-term risk stratification remains unanswered.

Part 2. The ability of high-sensitivity cTnT and NT-proBNP to predict cardiovascular events and death in patients with T2DM

Hillis GS; Welsh P; Chalmers J; Perkovic V; Chow CK; Li Q; Jun M; Neal B; Zoungas S; Poulter N; Mancia G; Williams B; Sattar N; Woodward M
Diabetes Care.  2014; 37(1):295-303 (ISSN: 1935-5548)

OBJECTIVE

Current methods of risk stratification in patients with

  • type 2 diabetes are suboptimal.

The current study assesses the ability of

  • N-terminal pro-B-type natriuretic peptide (NT-proBNP) and
  • high-sensitivity cardiac troponin T (hs-cTnT)

to improve the prediction of cardiovascular events and death in patients with type 2 diabetes.

RESEARCH DESIGN AND METHODS

A nested case-cohort study was performed in 3,862 patients who participated in the Action in Diabetes and Vascular Disease:

Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE) trial.

RESULTS

Seven hundred nine (18%) patients experienced a

  • major cardiovascular event

(composite of cardiovascular death, nonfatal myocardial infarction, or nonfatal stroke) and

  • 706 (18%) died during a median of 5 years of follow-up.

In Cox regression models, adjusting for all established risk predictors,

  • the hazard ratio for cardiovascular events for NT-proBNP was 1.95 per 1 SD increase (95% CI 1.72, 2.20) and
  • the hazard ratio for hs-cTnT was 1.50 per 1 SD increase (95% CI 1.36, 1.65). The hazard ratios for death were
    • 1.97 (95% CI 1.73, 2.24) and
    • 1.52 (95% CI 1.37, 1.67), respectively.

The addition of either marker improved 5-year risk classification for cardiovascular events
(net reclassification index in continuous model,

  • 39% for NT-proBNP and 46% for hs-cTnT).

Likewise, both markers greatly improved the accuracy with which the 5-year risk of death was predicted.
The combination of both markers provided optimal risk discrimination.

CONCLUSIONS

NT-proBNP and hs-cTnT appear to greatly improve the accuracy with which the

  • risk of cardiovascular events or death can be estimated in patients with type 2 diabetes.

PreMedline Identifier: 24089534
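For readers less familiar with the statistic, a “hazard ratio per 1 SD increase” is the exponentiated Cox regression coefficient of a standardized biomarker. A minimal sketch with synthetic data (not the ADVANCE data) using the lifelines package; the effect size, distribution, and follow-up below are all assumptions chosen only to mimic the reported ~1.95:

```python
# Minimal sketch (synthetic data, NOT the ADVANCE cohort) of how a hazard
# ratio "per 1 SD increase" in a biomarker is estimated with a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 3862                                    # same size as the case-cohort study
log_nt = rng.normal(5.0, 1.0, size=n)       # synthetic log NT-proBNP values
hazard = np.exp(0.65 * (log_nt - 5.0))      # assumed true effect on the hazard
time = rng.exponential(scale=8.0 / hazard)  # event times in years

df = pd.DataFrame({
    "ntprobnp_sd": (log_nt - log_nt.mean()) / log_nt.std(),  # standardized marker
    "time": np.minimum(time, 5.0),                           # censor at 5 years
    "event": (time <= 5.0).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
hr_per_sd = float(np.exp(cph.params_["ntprobnp_sd"]))
print(f"HR per 1 SD increase: {hr_per_sd:.2f}")  # ~e^0.65 ≈ 1.9, cf. 1.95 reported
```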


Part 3. M-Atrial Natriuretic Peptide

M-Atrial Natriuretic Peptide and Nitroglycerin in a Canine Model of Experimental Acute Hypertensive Heart Failure:
Differential Actions of 2 cGMP Activating Therapeutics.

Paul M McKie, Alessandro Cataliotti, Tomoko Ichiki, S Jeson Sangaralingham, Horng H Chen, John C Burnett
Journal of the American Heart Association 01/2014; 3(1):e000206. http://dx.doi.org/10.1161/JAHA.113.000206
Source: PubMed

ABSTRACT

Systemic hypertension is a common characteristic in

  • acute heart failure (HF).

This increasingly recognized phenotype

  • is commonly associated with renal dysfunction and
  • there is an unmet need for renal enhancing therapies.

In a canine model of HF and acute vasoconstrictive hypertension

  • we characterized and compared the cardiorenal actions of M-atrial natriuretic peptide (M-ANP),
    a novel particulate guanylyl cyclase (pGC) activator, and
  • nitroglycerin, a soluble guanylyl cyclase (sGC) activator.

HF was induced by rapid RV pacing (180 beats per minute) for 10 days. On day 11, hypertension was induced by continuous angiotensin II
infusion. We characterized the cardiorenal and humoral actions

  • prior to,
  • during, and
  • following intravenous infusions of
  1. M-ANP (n=7),
  2. nitroglycerin (n=7),
  3. and vehicle (n=7) infusion.

Mean arterial pressure (MAP) was reduced by

  1. M-ANP (139±4 to 118±3 mm Hg, P<0.05) and
  2. nitroglycerin (137±3 to 116±4 mm Hg, P<0.05);

similar findings were recorded for

  1. pulmonary wedge pressure (PCWP) with M-ANP (12±2 to 6±2 mm Hg, P<0.05)
  2. and nitroglycerin (12±1 to 6±1 mm Hg, P<0.05).

M-ANP enhanced renal function with significant increases (P<0.05) in

  • glomerular filtration rate (38±4 to 53±5 mL/min),
  • renal blood flow (132±18 to 236±23 mL/min), and
  • natriuresis (11±4 to 689±37 mEq/min) and
  • also inhibited aldosterone activation (32±3 to 23±2 ng/dL, P<0.05), whereas

nitroglycerin had no significant (P>0.05) effects on these renal parameters or aldosterone activation.

Our results advance understanding of

the differential cardiorenal actions of

  • pGC (M-ANP)- and sGC (nitroglycerin)-mediated cGMP activation.

These distinct renal and aldosterone modulating actions make

M-ANP an attractive therapeutic for HF with concomitant hypertension, where

  • renal protection is a key therapeutic goal.

Read Full Post »

Why should Quality Assurance be difficult and awkward? Take a strategic view on achieving compliance (focus on ISO 13485)


 Reporter: Dror Nir, PhD

Converting life-science innovations into useful products involves allocating significant resources to handling regulatory processes. A typical approach that makes the management of these processes difficult and awkward is starting your project and later patching it with a QA system. The patch then becomes a source of severe headaches for the many people who must live and operate according to it.
I hope that the following post by Rina will inspire you all.
It is all too easy to dive into the list of requirements contained within the ISO 13485 and achieve compliance by just ticking the boxes: looking at one requirement or one area at a time and making sure you have put in place something to address that requirement. This may easily result in a quality system that feels like a patchwork. Compliant, perhaps, but most certainly awkward and difficult to sustain.
The second most common mistake is to not ask yourself how software tools can help in setting up the quality system. “We already have MS Word, MS Excel, email, and we can always print a document and have it signed.” This is only a solution if you think that the quality system is a one-off activity. In the longer run, the system turns out to be a constant struggle with non-integrated elements that have no cohesion.
A better way to address compliance is to:
  1. Accept the fact that the quality system is a long term commitment and that it is very demanding.
  2. Assume that the right software tools do help.
  3. Think strategically, reviewing the whole standard, and try to identify the different areas with respect to what type of software would help address them.

Real life example: A company maintains an Excel list of all corrective actions. The date of effectiveness check is filled in manually. A QA engineer needs to review the Excel spreadsheet once a week to identify which effectiveness checks are due. Last audit revealed that in most cases, effectiveness checks were not followed up.
Real life question: Meetings and other events are registered in a calendar and you are reminded when they are due. Wouldn’t it be easier if effectiveness checks due dates were also linked to a calendar? Putting those dates in Excel does not make more sense than putting your meetings in Excel…..

What follows is how we can divide the ISO-13485:2003 in regard to the type of software features which can help us. You do not need to be an IT expert to follow the logic or the explanation – if you know the standard and see my examples hopefully you will get the idea.
In any case, I put here the complete mapping of the ISO into the different categories I describe. I also mention the main Atlassian tools we use to address each area. In future posts we will dive deeper into each of those categories and provide more details on exactly how we achieve easy and sustainable compliance.
So, as promised, these are the various categories that appear in the ISO 13485:2003:
  1. Document management: These are the various requirements relating to the procedures, manuals, and device related documents you need to have, and how they should be handled within the organization. The ISO elaborates in quite a detailed manner about how the controlled documents needs to be approved, who should access them, etc. Confluence is the key tool we use to handle all these requirements.
  2. Procedures and records are the evidence that the organization lives up to its quality system: The various procedures and work instructions should be followed consistently on a daily basis, forms or other records should be collected as evidence. Some examples (with reference to the standard section):
    • Training (6.2.2).
    • Customer complaints (8.5.1).
    • Corrective and preventive actions (8.5.2, 8.5.3).
    • Subcontractor approvals (7.4.1).
    • Purchasing forms (7.4.1).

Those records may be created as electronic or physical paper forms which need to be completed by the authorized person. However, a much better way is to implement an automatic workflow that makes it easier for the team to create, follow, and document all the various tasks they need to do. Such a workflow can automatically schedule tasks, remind and alert, thus triggering better compliance to the quality system and at the same time automatically creating the required records. This is a double win. JIRA® is our tool of choice and it provides a state-of-the-art solution to everything related to forms and workflows.
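As a sketch of that idea, the community jira Python client can open such an effectiveness-check task directly, with a due date that JIRA then tracks and reminds on. The server URL, credentials, project key (“QA”), issue type, and CAPA identifier below are hypothetical placeholders, not a turnkey CAPA system:

```python
# Hypothetical sketch: when a corrective action closes, create its
# effectiveness check as a tracked JIRA issue with a due date, instead of
# a row in a spreadsheet that someone must remember to review.
from datetime import date, timedelta
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",          # placeholder
            basic_auth=("qa.engineer@example.com", "api-token"))  # placeholder

issue = jira.create_issue(fields={
    "project": {"key": "QA"},                        # assumed project key
    "summary": "Effectiveness check: CAPA-2014-017", # hypothetical CAPA id
    "description": "Verify that the corrective action eliminated recurrence.",
    "issuetype": {"name": "Task"},                   # assumed issue type
    "duedate": (date.today() + timedelta(days=90)).isoformat(),
})
print(issue.key)  # JIRA now schedules, reminds, and records the check
```

The design point is the double win described above: the reminder drives compliance, and the closed issue is itself the required record.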

  3. Design control: Some of the issues covered by section 7 of the ISO 13485 require quite advanced control along several phases of design and development. The risk mitigation measures and the product requirements should be, for example, verified in the product verification stage. This verification, or the test file, could be written as a simple Word or Excel document, but a far better implementation is to create it within JIRA. The advantage of JIRA here is the various reporting that it allows once the data is in and the fact that it can connect directly into the work scheduling of the various team members. JIRA is the principal tool we use for design control. Confluence can be used in some advanced implementations. If the medical device involves software, then the development suite from Atlassian can be implemented to provide a complete software life cycle management suite.
  4. Manufacturing and product traceability: Some requirements relate to your manufacturing setup. Depending on the scale and type of manufacturing, specialized ERP may be the best option. When manufacturing is more basic and does not call for a full blown manufacturing facility, JIRA can handle the requirements of the standard.
  5. Monitoring and improving: A key theme of the standard is the need of the organization to measure and improve (for example, section 8.2.3). The nice thing is that the framework we have put in place to support the other categories, if done correctly, should provide us with the reports, alerts, and statistics we need. Indeed, all the processes we have implemented in JIRA, as well as the various elements we have implemented in Confluence, may easily be collected and displayed in practically endless variations of reports and dashboards.
| Requirement (article) | Requirement type |
|---|---|
| 4. Quality management system – 1. General requirements | Non specific |
| 4. Quality management system – 2. Documentation requirements – 1. General | Document management |
| 4. Quality management system – 2. Documentation requirements – 2. Quality manual | Document management |
| 4. Quality management system – 2. Documentation requirements – 3. Control of documents | Document management |
| 4. Quality management system – 2. Documentation requirements – 4. Control of records | Procedures and records |
| 5. Management responsibility – 1. Management commitment | Document management |
| 5. Management responsibility – 2. Customer focus | Non specific |
| 5. Management responsibility – 3. Quality policy | Monitoring and ongoing improvement |
| 5. Management responsibility – 4. Planning – 1. Quality objectives | Monitoring and ongoing improvement |
| 5. Management responsibility – 4. Planning – 2. Quality management system planning | Monitoring and ongoing improvement |
| 5. Management responsibility – 5. Responsibility, authority and communication – 1. Responsibility and authority | Document management |
| 5. Management responsibility – 5. Responsibility, authority and communication – 2. Management representative | Monitoring and ongoing improvement |
| 5. Management responsibility – 5. Responsibility, authority and communication – 3. Internal communication | Monitoring and ongoing improvement |
| 5. Management responsibility – 6. Management review – 1. General | Monitoring and ongoing improvement |
| 5. Management responsibility – 6. Management review – 2. Review input | Monitoring and ongoing improvement |
| 5. Management responsibility – 6. Management review – 3. Review output | Monitoring and ongoing improvement |
| 6. Resource management – 1. Provision of resources | Non specific |
| 6. Resource management – 2. Human resources – 1. General | Procedures and records |
| 6. Resource management – 2. Human resources – 2. Competence, awareness and training | Procedures and records |
| 6. Resource management – 3. Infrastructure | Manufacturing and product traceability |
| 6. Resource management – 4. Work environment | Non specific |
| 7. Product realization – 1. Planning of product realization | Design control |
| 7. Product realization – 2. Customer-related processes – 1. Determination of requirements related to the product | Design control |
| 7. Product realization – 2. Customer-related processes – 2. Review of requirements related to the product | Design control |
| 7. Product realization – 2. Customer-related processes – 3. Customer communication | Design control |
| 7. Product realization – 3. Design and development – 1. Design and development planning | Design control |
| 7. Product realization – 3. Design and development – 2. Design and development inputs | Design control |
| 7. Product realization – 3. Design and development – 3. Design and development outputs | Design control |
| 7. Product realization – 3. Design and development – 4. Design and development review | Design control |
| 7. Product realization – 3. Design and development – 5. Design and development verification | Design control |
| 7. Product realization – 3. Design and development – 6. Design and development validation | Design control |
| 7. Product realization – 3. Design and development – 7. Control of design and development changes | Design control |
| 7. Product realization – 4. Purchasing – 1. Purchasing process | Procedures and records |
| 7. Product realization – 4. Purchasing – 2. Purchasing information | Procedures and records |
| 7. Product realization – 4. Purchasing – 3. Verification of purchased product | Procedures and records |
| 7. Product realization – 5. Production and service provision – 1. Control of production and service provision – 1. General requirements | Procedures and records |
| 7. Product realization – 5. Production and service provision – 1. Control of production and service provision – 2. Specific requirements – 1. Cleanliness of product and contamination control | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 1. Control of production and service provision – 2. Specific requirements – 2. Installation activities | Procedures and records |
| 7. Product realization – 5. Production and service provision – 1. Control of production and service provision – 2. Specific requirements – 3. Servicing activities | Procedures and records |
| 7. Product realization – 5. Production and service provision – 1. Control of production and service provision – 3. Particular requirements for sterile medical devices | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 2. Validation of processes for production and service provision – 1. General requirements | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 2. Validation of processes for production and service provision – 2. Particular requirements for sterile medical devices | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 3. Identification and traceability – 1. Identification | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 3. Identification and traceability – 2. Traceability – 1. General | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 3. Identification and traceability – 2. Traceability – 2. Particular requirements for active implantable medical devices and implantable medical devices | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 3. Identification and traceability – 3. Status identification | Manufacturing and product traceability |
| 7. Product realization – 5. Production and service provision – 4. Customer property | Non specific |
| 7. Product realization – 5. Production and service provision – 5. Preservation of product | Procedures and records |
| 7. Product realization – 6. Control of monitoring and measuring devices | Manufacturing and product traceability |
| 8. Measurement, analysis and improvement – 1. General | Monitoring and ongoing improvement |
| 8. Measurement, analysis and improvement – 2. Monitoring and measurement – 1. Feedback | Monitoring and ongoing improvement |
| 8. Measurement, analysis and improvement – 2. Monitoring and measurement – 2. Internal audit | Procedures and records |
| 8. Measurement, analysis and improvement – 2. Monitoring and measurement – 3. Monitoring and measurement of processes | Monitoring and ongoing improvement |
| 8. Measurement, analysis and improvement – 2. Monitoring and measurement – 4. Monitoring and measurement of product – 1. General requirements | Design control |
| 8. Measurement, analysis and improvement – 2. Monitoring and measurement – 4. Monitoring and measurement of product – 2. Particular requirement for active implantable medical devices and implantable medical devices | Procedures and records |
| 8. Measurement, analysis and improvement – 3. Control of nonconforming product | Procedures and records |
| 8. Measurement, analysis and improvement – 4. Analysis of data | Monitoring and ongoing improvement |
| 8. Measurement, analysis and improvement – 5. Improvement – 1. General | Monitoring and ongoing improvement |
| 8. Measurement, analysis and improvement – 5. Improvement – 2. Corrective action | Procedures and records |
| 8. Measurement, analysis and improvement – 5. Improvement – 3. Preventive action | Procedures and records |
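A small follow-on to the table: kept as a data structure rather than a static document, the same mapping can be queried programmatically by the QA team. A minimal sketch with only a few clauses shown (the abbreviated clause labels are ours):

```python
# Minimal sketch: the clause-to-category mapping as a dict, so tooling
# coverage can be checked programmatically. Only a sample of clauses shown.
from collections import Counter

REQUIREMENT_TYPE = {
    "4.2.3 Control of documents": "Document management",
    "4.2.4 Control of records": "Procedures and records",
    "7.3.5 Design and development verification": "Design control",
    "7.5.3 Identification and traceability": "Manufacturing and product traceability",
    "8.5.2 Corrective action": "Procedures and records",
    "8.5.3 Preventive action": "Procedures and records",
    # ... remaining clauses follow the full table above
}

# Which clauses must our records/workflow tooling (JIRA, in our case) cover?
records = [c for c, t in REQUIREMENT_TYPE.items() if t == "Procedures and records"]
print(records)
print(Counter(REQUIREMENT_TYPE.values()))   # clause count per tool category
```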

Read Full Post »

Diagnostic Value of Cardiac Biomarkers


Author and Curator: Larry H Bernstein, MD, FCAP 

These presentations covered several views of the utilization of cardiac markers as they have evolved over more than 60 years.  The first stage was the introduction of enzymatic assays and isoenzyme measurements to distinguish acute hepatitis from acute myocardial infarction, which included lactate dehydrogenase (LD isoenzymes 1 and 2) at a time when late presentation of the patient to the emergency room was not uncommon, with the creatine kinase MB isoenzyme already declining or absent from the circulation.  The World Health Organization (WHO) standard definition then required the presence of two of the following three:

1. Typical or atypical precordial pressure in the chest, usually with radiation to the left arm

2. Electrocardiographic changes: a new Q-wave, not previously seen (definitive); ST-segment elevation of acute myocardial injury with repolarization changes; or T-wave inversion.

3. The release into the circulation of myocardium-derived enzymes –
creatine kinase MB (which was adapted to measure infarct size) and LD-1 –
both of which were later replaced by troponins T and I, components of the actomyosin contractile apparatus.

The research on infarct size established a major research goal of early diagnosis and reduction of infarct size, first with fibrinolysis of a ruptured plaque; this proceeded into the full development of a rapidly evolving interventional cardiology as well as cardiothoracic surgery, in both cases aimed at removal of plaque or replacement of the vessel.  Surgery became more imperative for multivessel disease, even if only one vessel was severely affected.

So we have clinical history, physical examination, and emerging biomarkers playing a large role for more than half a century.  However, the role of biomarkers broadened.  Patients were treated with antiplatelet agents, and a hypercoagulable state coexisted with myocardial ischemic injury.  This made the management of the patient reliant on long-term follow-up of warfarin therapy with the international normalized ratio (INR) for a standardized prothrombin time (PT), and reversal of the PT required transfusion with thawed fresh frozen plasma (FFP).  The partial thromboplastin time (PTT) was necessary during hospitalization to monitor the heparin effect.

Thus, we have identified the use of traditional cardiac biomarkers for:

1. Diagnosis
2. Therapeutic monitoring

The story is only the beginning.  Many patients who were atypical in presentation, or had cardiovascular ischemia without plaque rupture, were problematic.  This led to a concerted effort to redesign the troponin assays for high sensitivity, with the concern that the circulation should normally be free of a leaked structural marker of myocardial damage.  But of course, there can be a slow leak or a decreased rate of removal of such protein from the circulation, and the best example of this would be the patient with significant renal insufficiency, as TnT is cleared only through the kidney, and TnI is cleared both by the kidney and by the vascular endothelium.  The introduction of the high-sensitivity assay has been met with considerable confusion, and highlights the complexity of diagnosis in heart disease.  Another class of tests used for the diagnosis of heart failure is the natriuretic peptides (BNP, NT-proBNP, and ANP), the last of which has been under development.

While there is an exponential increase in the improvement of cardiac devices and the discovery of pharmaceutical targets, the laboratory support for clinical management is not mature.  There are miRNAs that may prove valuable, matrix metalloproteins, and potential endothelial and blood cell surface markers; these require

1. codevelopment with new medications
2. standardization across the IVD industry
3. proficiency testing applied to all laboratories that provide testing
4. measurement on multitest automated analyzers with high capability in proteomic measurement (MS, time-of-flight, MS-MS)

[Figures: Atherosclerotic Plaques Associated with Various Presentations; Inflammatory Pathways Predisposing Coronary Arteries to Rupture and Thrombosis; atherosclerosis progression]

Read Full Post »

Risk of Bias in Translational Science

Author: Larry H. Bernstein, MD, FCAP

and

Curator: Aviva Lev-Ari, PhD, RN

 

Assessment of risk of bias in translational science

Andre Barkhordarian1, Peter Pellionisz2, Mona Dousti1, Vivian Lam1, Lauren Gleason1, Mahsa Dousti1, Josemar Moura3 and Francesco Chiappelli1,4*

1Oral Biology & Medicine, School of Dentistry, UCLA, Evidence-Based Decisions Practice-Based Research Network, Los Angeles, USA

2Pre-medical program, UCLA, Los Angeles, CA

3School of Medicine, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

4Evidence-Based Decisions Practice-Based Research Network, UCLA School of Dentistry, Los Angeles, CA

Journal of Translational Medicine 2013, 11:184   http://dx.doi.org/10.1186/1479-5876-11-184
http://www.translational-medicine.com/content/11/1/184

This is an Open Access article distributed under the terms of the Creative Commons Attribution License 
http://creativecommons.org/licenses/by/2.0

Abstract

Risk of bias in translational medicine may take one of three forms:

  1. a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),
  2. a systematic defect of design that leads to estimates of experimental and control groups, and of effect sizes that substantially deviate from true values (e.g., information bias), and
  3. a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).

Risk of bias can seriously adulterate the internal and the external validity of a clinical study, and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

Background

As recently discussed in this journal [1], translational medicine is a rapidly evolving field. In its most recent conceptualization, it consists of two primary domains:

  • translational research proper and
  • translational effectiveness.

This distinction arises from a cogent articulation of the fundamental construct of translational medicine in particular, and of translational health care in general.

The Institute of Medicine’s Clinical Research Roundtable conceptualized the field as being composed of two fundamental “blocks”:

  • one translational “block” (T1) was defined as “…the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention and their first testing in humans…”, and
  • the second translational “block” (T2) was described as “…the translation of results from clinical studies into everyday clinical practice and health decision making…” [2].

These are clearly two distinct facets of one meta-construct, as outlined in Figure 1. As signaled by others, “…Referring to T1 and T2 by the same name—translational research—has become a source of some confusion. The 2 spheres are alike in name only. Their goals, settings, study designs, and investigators differ…” [3].

[Figure 1: the translational medicine (TM) construct]

Figure 1. Schematic representation of the meta-construct of translational health care in general, and translational medicine in particular, which consists of two fundamental constructs: the T1 “block” (per the Institute of Medicine’s Clinical Research Roundtable nomenclature), which represents the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention, as well as their first testing in humans; and the T2 “block”, which pertains to the translation of results from clinical studies into everyday clinical practice and health decision making [3]. The two “blocks” are inextricably intertwined because they jointly strive toward patient-centered research outcomes (PCOR) through the process of comparative effectiveness and efficacy research/review and analysis for clinical practice (CEERAP). The domain of each construct is distinct, since “block” T1 is set in the context of a laboratory infrastructure within a nurturing academic institution, whereas the setting of “block” T2 is typically community-based (e.g., patient-centered medical/dental homes/neighborhoods [4]; “communities of practice” [5]).

For the last five years at least, the Federal responsibilities for “blocks” T1 and T2 have been clearly delineated.  The National Institutes of Health (NIH) predominantly concerns itself with translational research proper – the bench-to-bedside enterprise (T1); the Agency for Healthcare Research and Quality (AHRQ) focuses on the result-translation enterprise (T2).  Specifically: “…the ultimate goal [of AHRQ] is research translation—that is, making sure that findings from AHRQ research are widely disseminated and ready to be used in everyday health care decision-making…” [6].  The terminology of translational effectiveness has emerged as a means of distinguishing the T2 block from T1.

Therefore, the bench-to-bedside enterprise pertains to translational research, and the result-translation enterprise describes translational effectiveness. The meta-construct of translational health care (viz., translational medicine) thus consists of these two fundamental constructs:

  • translational research and
  • translational effectiveness,

which have distinct purposes, protocols and products, while both converging on the same goal of new and improved means of

  • individualized patient-centered diagnostic and prognostic care.

It is important to note that the U.S. Patient Protection and Affordable Care Act (PPACA, 23 March 2010) has created an environment that facilitates the pursuit of translational health care because it emphasizes patient-centered outcomes research (PCOR). That is to say, it fosters the transaction between translational research (i.e., “block” T1)(TR) and translational effectiveness (i.e., “block” T2)(TE), and favors the establishment of communities of practice-research interaction. The latter, now recognized as practice-based research networks, incorporate three or more clinical practices in the community into

  • a community of practices network coordinated by an academic center of research.

Practice-based research networks may constitute a third “block” (T3) in translational health care, and they could be conceptualized as a stepping-stone, a go-between linking bench-to-bedside translational research and result-translation translational effectiveness [7]. Alternatively, practice-based research networks represent the practical entities where the transaction between

  • translational research and translational effectiveness can most optimally be undertaken.

It is within the context of the practice-based research network that the process of bench-to-bedside can best seamlessly proceed, and it is within the framework of the practice-based research network that

  • the best evidence of results can be most efficiently translated into practice and
  • be utilized in evidence-based clinical decision-making, viz. translational effectiveness.

Translational effectiveness

As noted, translational effectiveness represents the translation of the best available evidence in the clinical practice to ensure its utilization in clinical decisions. Translational effectiveness fosters evidence-based revisions of clinical practice guidelines. It also encourages

  • effectiveness-focused,
  • patient-centered and
  • evidence-based clinical decision-making.

Translational effectiveness rests not only on the expertise of the clinical staff and the empowerment of patients, caregivers and stakeholders, but also, and

  • most importantly on the best available evidence [8].

The pursuit of the best available evidence is the foundation of

  • translational effectiveness and more generally of
  • translational medicine in evidence-based health care.

The best available evidence is obtained through a systematic process driven by

  • a research question/hypothesis articulated around clearly stated criteria that pertain to the
  • patient (P), the intervention (I) under consideration, the comparator (C), the sought clinical outcome (O), the timeline (T), and the clinical setting (S).

PICOTS (represented as a simple record in the sketch below) is tested on the appropriate bibliometric sample, with tools of measurement designed to establish the level (e.g., CONSORT) and the quality of the evidence. Statistical and meta-analytical inferences, often enhanced by analyses of clinical relevance [9], converge into the formulation of the consensus of the best available evidence. Dissemination of that consensus to all stakeholders is key to increasing their health literacy, in order to ensure their full participation

  • in the utilization of the best available evidence in clinical decisions, viz., translational effectiveness.
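For concreteness, a PICOTS question can be represented as a simple structured record. The sketch below is illustrative only: the class and all field values are hypothetical, not part of any published instrument.

```python
from dataclasses import dataclass

@dataclass
class PICOTS:
    """A structured research question for evidence retrieval (illustrative only)."""
    patient: str       # P: population or problem
    intervention: str  # I: intervention under consideration
    comparator: str    # C: comparison intervention
    outcome: str       # O: sought clinical outcome
    timeline: str      # T: time frame of interest
    setting: str       # S: clinical setting

# Hypothetical example question
question = PICOTS(
    patient="adults with chronic periodontitis",
    intervention="scaling and root planing plus adjunctive antibiotics",
    comparator="scaling and root planing alone",
    outcome="reduction in probing depth",
    timeline="12 months",
    setting="community dental practices",
)
```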

To be clear, translational effectiveness – and, in the perspective discussed above, translational health care – is anchored on obtaining the best available evidence,

  • which emerges from the highest-quality research,
  • which is obtained when errors are minimized.

In an early conceptualization [10], errors in research were presented as

  • those situations that threaten the internal and the external validity of a research study –

that is, conditions that impede either the study’s reproducibility, or its generalization. In point of fact, threats to internal and external validity [10] represent specific aspects of systematic errors (i.e., bias) in the

  • research design,
  • methodology and
  • data analysis.

Thence emerged a branch of science that seeks to

  • understand,
  • control and
  • reduce risk of bias in research.

Risk of bias and the best available evidence

It follows that the best available evidence comes from research with the fewest threats to internal and to external validity – that is to say, the fewest systematic errors: the lowest risk of bias. Quality of research, as defined in the field of research synthesis [11], has become synonymous with

  • low bias and contained risk of bias [12-15].

Several years ago, the Cochrane group embarked on a new strategy for assessing the quality of research studies by examining potential sources of bias. Certain original areas of potential bias in research were identified, which pertain to

(a) the sampling and the sample allocation process, to measurement, and to other related sources of errors (reliability of testing),

(b) design issues, including blinding, selection and drop-out, and design-specific caveats, and

(c) analysis-related biases.

A Risk of Bias tool was created (Cochrane Risk of Bias), which covered six specific domains:

1. selection bias,

2. performance bias,

3. detection bias,

4. attrition bias,

5. reporting bias, and

6. other research protocol-related biases.

Assessments were made within each domain by one or more items specific to certain aspects of the domain. Each item was scored in two distinct steps:

1. the support for judgment was intended to provide a succinct free-text description of the domain being queried;

2. each item was scored as high, low, or unclear risk of material bias (defined here as “…bias of sufficient magnitude to have a notable effect on the results or conclusions…” [16]).

It was advocated that assessments across items in the tool should be critically summarized for each outcome within each report. These critical summaries were to inform the investigator so that the primary meta-analysis could be performed either

  • only on studies at low risk of bias, or
  • on the studies stratified according to risk of bias [16].

This is a form of acceptable sampling analysis designed to yield increased homogeneity of meta-analytical outcomes [17]. Alternatively, the homogeneity of the meta-analysis can be further enhanced by means of the more direct quality-effects meta-analysis inferential model [18].
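A toy calculation makes the two options concrete. The sketch below uses plain inverse-variance fixed-effect pooling as a stand-in for whatever inferential model is actually chosen (the quality-effects model of [18] weights studies differently); all effect sizes, variances, and risk-of-bias ratings are hypothetical.

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed-effect pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances)
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    return pooled, 1.0 / np.sum(w)

# Hypothetical per-study log odds ratios, their variances, and RoB ratings
effects = np.array([-0.42, -0.30, -0.55, -0.10, -0.25])
variances = np.array([0.04, 0.06, 0.09, 0.05, 0.07])
rob = np.array(["low", "low", "high", "unclear", "low"])

# Option 1: primary meta-analysis restricted to studies at low risk of bias
low = rob == "low"
pooled_low, var_low = fixed_effect_meta(effects[low], variances[low])
print(f"low-RoB only: {pooled_low:+.3f} (SE {var_low**0.5:.3f})")

# Option 2: the same analysis stratified by risk-of-bias rating
for level in ("low", "unclear", "high"):
    mask = rob == level
    if mask.any():
        est, var = fixed_effect_meta(effects[mask], variances[mask])
        print(f"{level:8s}: {est:+.3f} (SE {var**0.5:.3f})")
```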

Clearly, one among the major drawbacks of the Cochrane Risk of Bias tool is

  • the subjective nature of its assessment protocol.

In an effort to correct for this inherent weakness of the instrument, the Cochrane group produced

  • detailed criteria for making judgments about the risk of bias from each individual item [16], and
  • the requirement that judgments be made independently by at least two people, with any discrepancies resolved by discussion [16].

This approach to increasing the reliability of measurement in research synthesis protocols

  • is akin to that described by us [19,20] and by AHRQ [21].

In an effort to aid clinicians and patients in making effective health care-related decisions, AHRQ developed an alternative Risk of Bias instrument enabling the systematic evaluation of evidence reporting [22]. The AHRQ Risk of Bias instrument was created to monitor four primary domains:

1. risk of bias: design, methodology, analysis (scored low, medium, or high)

2. consistency: extent of similarity in effect sizes across studies within a bibliome (scored consistent, inconsistent, or unknown)

3. directness: a unidirectional link between the intervention of interest and the sought outcome, as opposed to multiple links in a causal chain (scored direct or indirect)

4. precision: extent of certainty of the estimate of effect with respect to the outcome (scored precise or imprecise)

In addition, four secondary domains were identified:

a. Dose-response association: pattern of a larger effect with greater exposure (present / not present / not applicable or not tested)

b. Confounders: consideration of confounding variables (present/absent)

c. Strength of association: likelihood that the observed effect is large enough that it cannot have occurred solely as a result of bias from potential confounding factors (strong/weak)

d. Publication bias

The AHRQ Risk of Bias instrument is also designed to yield an overall grade of the estimated risk of bias in quality reporting:

•Strength of Evidence Grades (scored as high – moderate – low – insufficient)

This global assessment, in addition to incorporating the assessments above, also rates:

–major benefit

–major harm

–jointly benefits and harms

–outcomes most relevant to patients, clinicians, and stakeholders

The AHRQ Risk of Bias instrument suffers from the same two major limitations as the Cochrane tool:

1. lack of formal psychometric validation, as with most other tools in the field [21], and

2. providing a subjective and not quantifiable assessment.

To begin the process of engaging in a systematic dialectic of the two instruments in terms of their respective construct and content validity, it is necessary

  • to validate each for reliability and validity either by means of the classic psychometric theory or generalizability (G) theory, which allows
  • the simultaneous estimation of multiple sources of measurement error variance (i.e., facets)
  • while generalizing the main findings across the different study facets.

G theory is particularly useful in clinical care analysis of this type because it permits the assessment of the reliability of clinical assessment protocols:

  • the reliability and minimal detectable changes across varied combinations of these facets are then simply calculated [23], but
  • it is recommended that G theory determination follow classic-theory psychometric assessment (a minimal sketch of a simple G-study follows this list).
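As a minimal illustration of what G theory estimates, the sketch below runs a one-facet G-study (reports crossed with raters) on a hypothetical score matrix. The variance components follow the standard two-way ANOVA decomposition; the design, sample sizes, and scores are all assumptions for illustration.

```python
import numpy as np

# Hypothetical scores: rows = reports (objects of measurement), cols = raters
scores = np.array([
    [3.0, 3.5, 3.0],
    [2.0, 2.5, 2.0],
    [4.0, 4.0, 3.5],
    [1.5, 2.0, 1.5],
    [3.5, 3.0, 3.5],
])
n_p, n_r = scores.shape
grand = scores.mean()
p_means = scores.mean(axis=1)   # report means
r_means = scores.mean(axis=0)   # rater means

# Mean squares from the two-way ANOVA decomposition (one observation per cell)
ms_p = n_r * np.sum((p_means - grand) ** 2) / (n_p - 1)
ms_r = n_p * np.sum((r_means - grand) ** 2) / (n_r - 1)
resid = scores - p_means[:, None] - r_means[None, :] + grand
ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

# Variance components (facets); negative estimates truncated at zero
var_pr = ms_pr
var_p = max((ms_p - ms_pr) / n_r, 0.0)
var_r = max((ms_r - ms_pr) / n_p, 0.0)

# Generalizability (relative) and dependability (absolute) coefficients
g_rel = var_p / (var_p + var_pr / n_r)
phi = var_p / (var_p + (var_r + var_pr) / n_r)
print(f"G = {g_rel:.3f}, phi = {phi:.3f}")
```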

Therefore, we have commenced a process of revising the AHRQ Risk of Bias instrument by rendering the questions in its primary domains quantifiable (scaled 1–4),

  • which established the intra-rater reliability (r = 0.94, p < 0.05), and
  • the criterion validity (r = 0.96, p < 0.05) for this instrument (Figure 2).


Figure 2. Proportion of shared variance in criterion validity (A) and inter-rater reliability (B) in the AHRQ Risk of Bias instrument revised as described. Two raters were trained and standardized [20] with the revised AHRQ Risk of Bias instrument and with the R-Wong instrument, which has been previously validated [24]. Each rater independently produced ratings on a sample of research reports with both instruments on two separate occasions, 1–2 months apart. The Pearson correlation coefficient was used to compute the respective associations. The figure shows Venn diagrams to illustrate the intersection between each pair of data sets used in the correlations. The overlap between the sets in each panel represents the proportion of shared variance for that correlation; the percent of unexplained variance is given in the inset of each panel.
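The correlational approach in the Figure 2 legend can be sketched as follows. All rating vectors are hypothetical; the point is that intra-rater reliability and criterion validity are plain Pearson correlations, and the shared variance depicted in the Venn diagrams is the squared coefficient.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical revised-AHRQ scores from one rater on two occasions (1-2 months apart)
occasion_1 = np.array([3.2, 2.5, 3.8, 1.9, 3.4, 2.8, 3.6, 2.2])
occasion_2 = np.array([3.0, 2.6, 3.9, 2.0, 3.3, 2.9, 3.5, 2.4])

# Hypothetical scores on the same reports from a previously validated criterion instrument
criterion = np.array([3.1, 2.4, 4.0, 2.1, 3.5, 2.7, 3.7, 2.3])

r_intra, p_intra = pearsonr(occasion_1, occasion_2)  # intra-rater reliability
r_crit, p_crit = pearsonr(occasion_1, criterion)     # criterion validity

# Shared variance (the overlap in the Venn diagrams) is the squared correlation
print(f"intra-rater r = {r_intra:.2f} (shared variance {r_intra**2:.0%})")
print(f"criterion   r = {r_crit:.2f} (shared variance {r_crit**2:.0%})")
```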

A similar revision of the Cochrane Risk of Bias tool may also yield promising validation data. G theory validation of both tools will follow. Together, these results will enable a critical and systematic dialectical comparison of the Cochrane and the AHRQ Risk of Bias measures.

Discussion

The critical evaluation of the best available evidence is essential to patient-centered care, because biased research findings are fundamentally invalid and potentially harmful to the patient. Depending upon the tool of measurement, the validity of an instrument in a study is established as criterion validity through correlation coefficients. Criterion validity refers to the extent to which one measure predicts the value of another measure or quality based on a previously well-established criterion. There are other domains of validity, such as construct validity and content validity, which are more descriptive than quantitative. Reliability, by contrast, describes the consistency of a measure, the extent to which a measurement is repeatable; it is commonly assessed quantitatively by correlation coefficients. Inter-rater reliability is rendered as a Pearson correlation coefficient between two independent readers, and establishes the equivalence of ratings produced by independent observers or readers. Intra-rater reliability is determined by repeated measurements performed by the same rater/reader at two different points in time, to assess the correlation or strength of association between the two sets of scores.

To establish the reliability of research quality assessment tools it is necessary, as we previously noted [20]:

•a) to train multiple readers in sharing a common view for the cognitive interpretation of each item. Readers must possess declarative knowledge (a factual form of information, static in nature): a certain depth of knowledge and understanding of the facts about which they are reviewing the literature. They must also have procedural knowledge (imperative knowledge, which can be directly applied to a task): in this case, a clear understanding of the fundamental concepts of research methodology, design, analysis and inference.

•b) to train the readers to read and evaluate the quality of a set of papers independently and blindly. They must also be trained to self-monitor and self-assess their skills for the purpose of ensuring quality control.

•c) to refine the process until the inter-rater correlation coefficient and the Cohen coefficient of agreement are about 0.9 (over 81% shared variance). This establishes that the degree of attained agreement among well-trained readers is beyond chance.

•d) to obtain independent and blind reading assessments from readers on reports under study.

•e) to compute the means and standard deviations of the scores for each question across the reports, and to repeat the process if the coefficient of variation is greater than 5% (i.e., to ensure less than 5% error among the readers on each question); steps (c) and (e) are sketched in code after this list.
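Steps (c) and (e) can be made concrete with a short sketch. The reader ratings are hypothetical; Cohen's coefficient of agreement is computed directly from marginal proportions to keep the example self-contained, and the 5% coefficient-of-variation screen follows step (e).

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's coefficient of agreement for two raters' categorical scores."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_obs = np.mean(a == b)                                   # observed agreement
    p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical quality ratings (scaled 1-4) from two trained readers
reader_1 = np.array([4, 3, 3, 2, 4, 1, 3, 2, 4, 3])
reader_2 = np.array([4, 3, 3, 2, 4, 1, 3, 3, 4, 3])
print(f"kappa = {cohen_kappa(reader_1, reader_2):.2f}")  # target: about 0.9

# Step (e): per-question coefficient of variation across readers (< 5% sought)
scores = np.vstack([reader_1, reader_2]).astype(float)   # rows = readers
cv = scores.std(axis=0, ddof=1) / scores.mean(axis=0)
print("questions needing another round:", np.where(cv > 0.05)[0])
```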

The quantification provided by instruments validated in such a manner, to assess the quality and the relative lack of bias in the research evidence, allows for the analysis of the scores by means of an acceptable sampling protocol. Acceptance sampling is a statistical procedure that uses sampling to determine whether a given lot (in this case, evidence gathered from an identified set of published reports) should be accepted or rejected [12,25]. Acceptable sampling of the best available evidence can be obtained by the following rules (the first two are sketched in code below):

•convention: accept the top decile of papers based on the quality-of-evidence score (e.g., low risk of bias);

•confidence interval (CI95): accept the papers whose scores fall at or beyond the upper 95% confidence limit, obtained from the mean and variance of the scores of the entire bibliome;

•statistical analysis: accept the papers that sustain sequential repeated Friedman analysis.
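The first two acceptance rules lend themselves to a direct sketch. The bibliome scores below are simulated, and the CI95 rule uses a normal approximation (mean plus 1.96 standard errors); both thresholds are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical quality scores for a bibliome of 40 reports (higher = lower risk of bias)
scores = rng.normal(loc=70, scale=10, size=40)

# Rule 1 (convention): accept the top decile of the bibliome
top_decile = scores >= np.percentile(scores, 90)

# Rule 2 (CI95): accept scores at or beyond the upper 95% confidence limit
mean, sem = scores.mean(), scores.std(ddof=1) / np.sqrt(len(scores))
upper_cl = mean + 1.96 * sem
beyond_ci = scores >= upper_cl

print(f"accepted by convention: {top_decile.sum()} reports")
print(f"accepted by CI95 (>= {upper_cl:.1f}): {beyond_ci.sum()} reports")
```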

To be clear, the Friedman test is a non-parametric equivalent of the analysis of variance for factorial designs. The procedure requires the 4-E process outlined below (a code sketch follows the list):

•establishing a significant Friedman outcome, which indicates significant differences in scores among the individual reports being tested for quality;

•examining marginal means and standard deviations to identify inconsistencies, and to identify the uniformly strong reports across all the domains tested by the quality instrument;

•excluding those reports that show quality weakness or bias;

•executing the Friedman analysis again, and repeating the 4-E process as many times as necessary, in a statistical process akin to hierarchical regression, to eliminate the evidence reports that exhibit egregious weakness based on the analysis of the marginal values, and to retain only the group of reports that harbors homogeneously strong evidence.
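The 4-E loop can be sketched with SciPy's implementation of the Friedman test. This is a minimal sketch, not the authors' software: the score matrix is simulated, reports are treated as the repeated-measures groups with instrument domains as blocks, and dropping the report with the lowest marginal mean is just one plausible reading of "analysis of the marginal values".

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(3)
# Hypothetical quality-instrument scores: rows = reports, cols = instrument domains (1-4)
scores = rng.integers(1, 5, size=(8, 6)).astype(float)
reports = list(range(scores.shape[0]))

# 4-E loop: establish, examine, exclude, execute again
while len(reports) > 2:  # the test needs at least three samples
    stat, p = friedmanchisquare(*(scores[r] for r in reports))
    if p >= 0.05:  # reports no longer differ: a homogeneous strong set remains
        break
    # Examine marginal means; exclude the weakest report, then re-execute
    weakest = min(reports, key=lambda r: scores[r].mean())
    reports.remove(weakest)

print("retained reports:", reports)
```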

Taken together, and considering the domain and the structure of both tools, expectations are that these analyses will confirm that these instruments are two related entities, each measuring distinct aspects of bias. We anticipate that future research will establish that both tools assess complementary sub-constructs of one and the same archetype meta-construct of research quality.

References

1. Jiang F, Zhang J, Wang X, Shen X: Important steps to improve translation from medical research to health policy. J Transl Med 2013, 11:33.

2. Sung NS, Crowley WF Jr, Genel M, Salber P, Sandy L, Sherwood LM, Johnson SB, Catanese V, Tilson H, Getz K, Larson EL, Scheinberg D, Reece EA, Slavkin H, Dobs A, Grebb J, Martinez RA, Korn A, Rimoin D: Central challenges facing the national clinical research enterprise. JAMA 2003, 289:1278-1287.

3. Woolf SH: The meaning of translational research and why it matters. JAMA 2008, 299(2):211-213.

4. Chiappelli F: From translational research to translational effectiveness: the “patient-centered dental home” model. Dental Hypotheses 2011, 2:105-112.

5. Maida C: Building communities of practice in comparative effectiveness research. In Comparative effectiveness and efficacy research and analysis for practice (CEERAP): applications for treatment options in health care. Edited by Chiappelli F, Brant X, Cajulis C. Heidelberg: Springer-Verlag; 2012: Chapter 1.

6. Agency for Healthcare Research and Quality: Budget estimates for appropriations committees, fiscal year (FY) 2008: performance budget submission for congressional justification. Performance budget overview 2008. http://www.ahrq.gov/about/cj2008/cjweb08a.htm#Statement. Accessed 11 May 2013.

7. Westfall JM, Mold J, Fagnan L: Practice-based research—“blue highways” on the NIH roadmap. JAMA 2007, 297:403-406.

8. Chiappelli F, Brant X, Cajulis C: Comparative effectiveness and efficacy research and analysis for practice (CEERAP): applications for treatment options in health care. Heidelberg: Springer-Verlag; 2012.

9. Dousti M, Ramchandani MH, Chiappelli F: Evidence-based clinical significance in health care: toward an inferential analysis of clinical relevance. Dental Hypotheses 2011, 2:165-177.

10. Campbell D, Stanley J: Experimental and quasi-experimental designs for research. Chicago, IL: Rand-McNally; 1963.

11. Littell JH, Corcoran J, Pillai V: Research synthesis reports and meta-analysis. New York, NY: Oxford University Press; 2008.

12. Chiappelli F: The science of research synthesis: a manual of evidence-based research for the health sciences. Hauppauge, NY: NovaScience Publishers; 2008.

13. Higgins JPT, Green S: Cochrane handbook for systematic reviews of interventions, version 5.0.1. Chichester, West Sussex, UK: John Wiley & Sons, The Cochrane Collaboration; 2008.

14. CRD: Systematic reviews: CRD’s guidance for undertaking reviews in health care. National Institute for Health Research (NIHR). York, UK: Centre for Reviews and Dissemination, University of York; 2009.

15. McDonald KM, Chang C, Schultz E: Closing the quality gap: revisiting the state of the science. Summary report. AHRQ Publication No. 12(13)-E017. Rockville, MD: U.S. Department of Health & Human Services, AHRQ; 2013.

