

Point of care pulse oximetry

Curator: Larry H. Bernstein, MD, FCAP

LPBI

 

Not All FDA-Cleared Finger Pulse Oximeters Perform Alike

http://www.mdtmag.com/news/2016/05/study-not-all-fda-cleared-finger-pulse-oximeters-perform-alike

Nonin Medical, Inc., the inventor of finger pulse oximetry, today announced the results of a new independent hypoxia lab study in humans demonstrating that Nonin’s PureSAT pulse oximetry technology captures and reports worsening patient conditions better than other Food and Drug Administration (FDA)-cleared oximeter brands.1 Nonin made the results available in a white paper at the American Thoracic Society (ATS) and American Telemedicine Association (ATA) conferences this week.

Not all FDA-cleared finger pulse oximeters perform alike, says a new study. Nonin Medical’s pulse oximetry technology was found to be more accurate in patients with challenging conditions, such as COPD.

In the study, conducted independently by Clinimark Laboratories in Boulder, Colo., three finger pulse oximeters were tested; one from Nonin Medical and two from large, private-labeled manufacturers. All oximeters had FDA 510(k) clearance as “medical devices,” but two of them did not provide the clinical accuracy required to track desaturations in patients with low blood circulation and labored breathing. Only the Nonin Medical oximeter was able to accurately track the desaturation events as compared to an independent hospital tabletop oximeter control device.

(Image credit: PRNewsFoto/Nonin Medical, Inc.)

“Over the years, a number of inexpensive, imported FDA-cleared oximeters have flooded the market, all claiming to be accurate,” said Jim Russell, vice president of quality, regulatory and clinical affairs for Nonin Medical. “This study dispels the myth that all pulse oximeters perform alike, especially on challenging patients such as those with chronic obstructive pulmonary disease (COPD).

“Clinicians, hospitals and telemedicine providers can better manage COPD patients and potentially reduce readmission rates by choosing pulse oximeters that provide early and accurate data on all patients, including the sickest patients. Nonin Medical’s oximeter performance is proven,” Russell said.

References
1. Batchelder, P.B. Fingertip Pulse Oximeter Performance in Dyspnea and Low Perfusion During Hypoxic Events. White paper. Clinimark Laboratories, Boulder, Colorado; 2016.


Disease related changes in proteomics, protein folding, protein-protein interaction, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Disease related changes in proteomics, protein folding, protein-protein interaction

Curator: Larry H. Bernstein, MD, FCAP

LPBI

 

Frankenstein Proteins Stitched Together by Scientists

http://www.genengnews.com/gen-news-highlights/frankenstein-proteins-stitched-together-by-scientists/81252715/

http://www.genengnews.com/Media/images/GENHighlight/thumb_May11_2016_Wikipedia_1831Frankenstein2192501426.jpg

The Frankenstein monster, stitched together from disparate body parts, proved to be an abomination, but stitched together proteins may fare better. They may, for example, serve specific purposes in medicine, research, and industry. At least, that’s the ambition of scientists based at the University of North Carolina. They have developed a computational protocol called SEWING that builds new proteins from connected or disconnected pieces of existing structures. [Wikipedia]

Unlike Victor Frankenstein, who betrayed Promethean ambition when he sewed together his infamous creature, today’s biochemists are relatively modest. Rather than defy nature, they emulate it. For example, at the University of North Carolina (UNC), researchers have taken inspiration from natural evolutionary mechanisms to develop a technique called SEWING—Structure Extension With Native-substructure Graphs. SEWING is a computational protocol that describes how to stitch together new proteins from connected or disconnected pieces of existing structures.

“We can now begin to think about engineering proteins to do things that nothing else is capable of doing,” said UNC’s Brian Kuhlman, Ph.D. “The structure of a protein determines its function, so if we are going to learn how to design new functions, we have to learn how to design new structures. Our study is a critical step in that direction and provides tools for creating proteins that haven’t been seen before in nature.”

Traditionally, researchers have used computational protein design to recreate in the laboratory what already exists in the natural world. In recent years, their focus has shifted toward inventing novel proteins with new functionality. These design projects all start with a specific structural “blueprint” in mind, and as a result are limited. Dr. Kuhlman and his colleagues, however, believe that by removing the limitations of a predetermined blueprint and taking cues from evolution they can more easily create functional proteins.

Dr. Kuhlman’s UNC team developed a protein design approach that emulates natural mechanisms for shuffling tertiary structures such as pleats, coils, and furrows. Putting the approach into action, the UNC team mapped 50,000 stitched together proteins on the computer, and then it produced 21 promising structures in the laboratory. Details of this work appeared May 6 in the journal Science, in an article entitled, “Design of Structurally Distinct Proteins Using Strategies Inspired by Evolution.”
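The stitching idea is easy to picture in code. The sketch below is a toy string-based analogy, not the published SEWING protocol (which operates on three-dimensional backbone substructures with physics-based scoring): fragments whose ends "overlap" are merged into chimeric designs, and the combinatorial explosion behind the 50,000 computer-mapped candidates comes from enumerating compatible orderings.

```python
# Toy sketch of SEWING-style structure stitching (illustrative only; the real
# protocol merges 3-D backbone substructures, not strings).
from itertools import permutations

# Hypothetical "substructures": strings standing in for helix fragments.
fragments = ["AAAB", "ABCC", "CCDD"]

def overlaps(a, b, k=2):
    """Two fragments are 'compatible' if the last k residues of a
    match the first k residues of b (a stand-in for structural overlap)."""
    return a[-k:] == b[:k]

def stitch(frags, k=2):
    """Enumerate chimeras from orderings of fragments whose consecutive
    members overlap, merging the shared region at each junction."""
    designs = set()
    for order in permutations(frags):
        chimera = order[0]
        ok = True
        for nxt in order[1:]:
            if overlaps(chimera, nxt, k):
                chimera += nxt[k:]   # merge, keeping the overlap once
            else:
                ok = False
                break
        if ok:
            designs.add(chimera)
    return designs

print(stitch(fragments))  # {'AAABCCDD'}
```

With realistic fragment libraries, most orderings fail the compatibility check, which is why tens of thousands of computed designs can be filtered down to a handful worth synthesizing.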

“Helical proteins designed with SEWING contain structural features absent from other de novo designed proteins and, in some cases, remain folded at more than 100°C,” wrote the authors. “High-resolution structures of the designed proteins CA01 and DA05R1 were solved by x-ray crystallography (2.2 angstrom resolution) and nuclear magnetic resonance, respectively, and there was excellent agreement with the design models.”

Essentially, the UNC scientists confirmed that the proteins they had synthesized contained the unique structural varieties that had been designed on the computer. The UNC scientists also determined that the structures they had created had new surface and pocket features. Such features, they noted, provide potential binding sites for ligands or macromolecules.

“We were excited that some had clefts or grooves on the surface, regions that naturally occurring proteins use for binding other proteins,” said the Science article’s first author, Tim M. Jacobs, Ph.D., a former graduate student in Dr. Kuhlman’s laboratory. “That’s important because if we wanted to create a protein that can act as a biosensor to detect a certain metabolite in the body, either for diagnostic or research purposes, it would need to have these grooves. Likewise, if we wanted to develop novel therapeutics, they would also need to attach to specific proteins.”

Currently, the UNC researchers are using SEWING to create proteins that can bind to several other proteins at a time. Many of the most important proteins are such multitaskers, including the blood protein hemoglobin.

 

Histone Mutation Deranges DNA Methylation to Cause Cancer

http://www.genengnews.com/gen-news-highlights/histone-mutation-deranges-dna-methylation-to-cause-cancer/81252723/

http://www.genengnews.com/Media/images/GENHighlight/thumb_May13_2016_RockefellerUniv_ChildhoodSarcoma1293657114.jpg

In some cancers, including chondroblastoma and a rare form of childhood sarcoma, a mutation in histone H3 reduces global levels of methylation (dark areas) in tumor cells but not in normal cells (arrowhead). The mutation locks the cells in a proliferative state to promote tumor development. [Laboratory of Chromatin Biology and Epigenetics at The Rockefeller University]

They have been called oncohistones, the mutated histones that are known to accompany certain pediatric cancers. Despite their suggestive moniker, oncohistones have kept their oncogenic secrets. For example, it has been unclear whether oncohistones are able to cause cancer on their own, or whether they need to act in concert with additional DNA mutations, that is, mutations other than those affecting histone structures.

While oncohistone mechanisms remain poorly understood, this particular question—the oncogenicity of lone oncohistones—has been resolved, at least in part. According to researchers based at The Rockefeller University, a change to the structure of a histone can trigger a tumor on its own.

This finding appeared May 13 in the journal Science, in an article entitled, “Histone H3K36 Mutations Promote Sarcomagenesis Through Altered Histone Methylation Landscape.” The article describes the Rockefeller team’s study of a histone protein called H3, which has been found mutated in about 95% of samples of chondroblastoma, a benign tumor that arises in cartilage, typically during adolescence.

The Rockefeller scientists found that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo.

After the scientists inserted the H3 histone mutation into mouse mesenchymal progenitor cells (MPCs)—which generate cartilage, bone, and fat—they watched these cells lose the ability to differentiate in the lab. Next, the scientists injected the mutant cells into living mice, and the animals developed the tumors rich in MPCs, known as an undifferentiated sarcoma. Finally, the researchers tried to understand how the mutation causes the tumors to develop.

The scientists determined that H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases.

“Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation,” the authors of the Science study wrote. “After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation.”

Essentially, when the H3K36M mutation occurs, the cell becomes locked in a proliferative state—meaning it divides constantly, leading to tumors. Specifically, the mutation inhibits enzymes that normally tag the histone with chemical groups known as methyls, allowing genes to be expressed normally.

In response to this lack of modification, another part of the histone becomes overmodified, or tagged with too many methyl groups. “This leads to an overall resetting of the landscape of chromatin, the complex of DNA and its associated factors, including histones,” explained co-author Peter Lewis, Ph.D., a professor at the University of Wisconsin-Madison and a former postdoctoral fellow in the laboratory of C. David Allis, Ph.D., a professor at Rockefeller.

The finding—that a “resetting” of the chromatin landscape can lock the cell into a proliferative state—suggests that researchers should be on the hunt for more mutations in histones that might be driving tumors. For their part, the Rockefeller researchers are trying to learn more about how this specific mutation in histone H3 causes tumors to develop.

“We want to know which pathways cause the mesenchymal progenitor cells that carry the mutation to continue to divide, and not differentiate into the bone, fat, and cartilage cells they are destined to become,” said co-author Chao Lu, Ph.D., a postdoctoral fellow in the Allis lab.

Once researchers understand more about these pathways, added Dr. Lewis, they can consider ways of blocking them with drugs, particularly in tumors such as MPC-rich sarcomas—which, unlike chondroblastoma, can be deadly. In fact, drugs that block these pathways may already exist and may even be in use for other types of cancers.

“One long-term goal of our collaborative team is to better understand fundamental mechanisms that drive these processes, with the hope of providing new therapeutic approaches,” concluded Dr. Allis.

 

Histone H3K36 mutations promote sarcomagenesis through altered histone methylation landscape

Chao Lu, Siddhant U. Jain, Dominik Hoelper, …, C. David Allis, Nada Jabado, Peter W. Lewis
Science 13 May 2016; 352(6287):844–849. http://dx.doi.org/10.1126/science.aac7272   http://science.sciencemag.org/content/352/6287/844

An oncohistone deranges inhibitory chromatin

Missense mutations (that change one amino acid for another) in histone H3 can produce a so-called oncohistone and are found in a number of pediatric cancers. For example, the lysine-36–to-methionine (K36M) mutation is seen in almost all chondroblastomas. Lu et al. show that K36M mutant histones are oncogenic, and they inhibit the normal methylation of this same residue in wild-type H3 histones. The mutant histones also interfere with the normal development of bone-related cells and the deposition of inhibitory chromatin marks.

Science, this issue p. 844

Several types of pediatric cancers reportedly contain high-frequency missense mutations in histone H3, yet the underlying oncogenic mechanism remains poorly characterized. Here we report that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo. H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases. Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation. After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation. Our findings are mirrored in human undifferentiated sarcomas in which novel K36M/I mutations in H3.1 are identified.

 

Mitochondria? We Don’t Need No Stinking Mitochondria!

 

http://www.genengnews.com/Media/images/GENHighlight/thumb_fx11801711851.jpg
Diagram comparing typical eukaryotic cell to the newly discovered mitochondria-free organism. [Karnkowska et al., 2016, Current Biology 26, 1–11]
The organelle that produces a significant portion of the energy for eukaryotic cells would seem indispensable, yet over the years a number of organisms have been discovered that challenge that assumption. These so-called amitochondrial species may lack a defined organelle, but they still retain some residual functions of their mitochondria-containing brethren. Even the intestinal eukaryotic parasite Giardia intestinalis, which was for many years considered to be mitochondria-free, was recently shown to contain a considerably shriveled version of the organelle.

Now, an international group of scientists has released results from a new study that challenges the notion that mitochondria are essential for eukaryotes, describing an organism that resides in the gut of chinchillas and contains no trace of mitochondria at all.

“In low-oxygen environments, eukaryotes often possess a reduced form of the mitochondrion, but it was believed that some of the mitochondrial functions are so essential that these organelles are indispensable for their life,” explained lead study author Anna Karnkowska, Ph.D., visiting scientist at the University of British Columbia in Vancouver. “We have characterized a eukaryotic microbe which indeed possesses no mitochondrion at all.”

 

Mysterious Eukaryote Missing Mitochondria

Researchers uncover the first example of a eukaryotic organism that lacks the organelles.

By Anna Azvolinsky | May 12, 2016

http://www.the-scientist.com/?articles.view/articleNo/46077/title/Mysterious-Eukaryote-Missing-Mitochondria

http://www.the-scientist.com/images/News/May2016/620_Monocercomonides-Pa203.jpg

Monocercomonoides sp. PA203 [Image credit: Vladimir Hampl, Charles University, Prague, Czech Republic]

Scientists have long thought that mitochondria—organelles responsible for energy generation—are an essential and defining feature of a eukaryotic cell. Now, researchers from Charles University in Prague and their colleagues are challenging this notion with their discovery of a eukaryotic organism, Monocercomonoides species PA203, which lacks mitochondria. The team’s phylogenetic analysis, published today (May 12) in Current Biology, suggests that Monocercomonoides—which belongs to the Oxymonadida group of protozoa and lives in low-oxygen environments—did have mitochondria at one point but eventually lost the organelles.

“This is quite a groundbreaking discovery,” said Thijs Ettema, who studies microbial genome evolution at Uppsala University in Sweden and was not involved in the work.

“This study shows that mitochondria are not so central for all lineages of living eukaryotes,” Toni Gabaldon of the Center for Genomic Regulation in Barcelona, Spain, who also was not involved in the work, wrote in an email to The Scientist. “Yet, this mitochondrial-devoid, single-cell eukaryote is as complex as other eukaryotic cells in almost any other aspect of cellular complexity.”

Charles University’s Vladimir Hampl studies the evolution of protists. Along with Anna Karnkowska and colleagues, Hampl decided to sequence the genome of Monocercomonoides, a little-studied protist that lives in the digestive tracts of vertebrates. The 75-megabase genome—the first of an oxymonad—did not contain any conserved genes found on mitochondrial genomes of other eukaryotes, the researchers found. It also did not contain any nuclear genes associated with mitochondrial functions.
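The negative result described above boils down to set logic: scan the predicted proteome for any hallmark mitochondrial component and find none. A minimal sketch follows, with hypothetical gene names standing in for the study's actual query sets; the real analysis used sequence-homology searches, not exact name matching.

```python
# Sketch of an "absence screen": check a predicted gene set against a list of
# hallmark mitochondrial genes. Names below are illustrative placeholders.
hallmark_mito_genes = {"TOM40", "TIM23", "ISCU", "NFS1", "FXN"}

def mitochondrial_hits(predicted_genes, markers=hallmark_mito_genes):
    """Return which hallmark mitochondrial genes appear in a predicted
    gene set; an empty result is consistent with mitochondrial loss."""
    return markers & set(predicted_genes)

# A genome carrying a bacterial-type SUF system but no mitochondrial markers:
print(mitochondrial_hits({"TUBA1", "ACT1", "SUFB", "SUFC"}))  # set()
```

The force of the argument is that the same screen applied to other anaerobic protists does return hits, so an empty intersection here is informative rather than a search artifact.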

“It was surprising and for a long time, we didn’t believe that the [mitochondria-associated genes were really not there]. We thought we were missing something,” Hampl told The Scientist. “But when the data kept accumulating, we switched to the hypothesis that this organism really didn’t have mitochondria.”

Because researchers had previously not found examples of eukaryotes without some form of mitochondria, the prevailing theory of the origin of eukaryotes posits that the appearance of mitochondria was crucial to the identity of these organisms.

Researchers now view these mitochondrion-like organelles as a continuum, ranging from full mitochondria to highly reduced remnants. Some anaerobic protists, for example, have only pared-down versions of mitochondria, such as hydrogenosomes and mitosomes, which lack a mitochondrial genome. But these mitochondrion-like organelles still perform essential functions of the iron-sulfur cluster assembly pathway, which is thought to be conserved in virtually all eukaryotic organisms studied to date.

Yet, in their analysis, the researchers found no evidence of the presence of any components of this mitochondrial pathway.

Like organisms that scaled their mitochondria down into mitosomes, the ancestors of modern Monocercomonoides once had mitochondria. “Because this organism is phylogenetically nested among relatives that had conventional mitochondria, this is most likely a secondary adaptation,” said Michael Gray, a biochemist who studies mitochondria at Dalhousie University in Nova Scotia and was not involved in the study. According to Gray, the finding of a mitochondria-deficient eukaryote does not mean that the organelles did not play a major role in the evolution of eukaryotic cells.

To be sure they were not missing mitochondrial proteins, Hampl’s team also searched for potential mitochondrial protein homologs of other anaerobic species, and for signature sequences of a range of known mitochondrial proteins. While similar searches with other species uncovered a few mitochondrial proteins, the team’s analysis of Monocercomonoides came up empty.

“The data is very complete,” said Ettema. “It is difficult to prove the absence of something but [these authors] do a convincing job.”

To form the essential iron-sulfur clusters, the team discovered that Monocercomonoides use a sulfur mobilization system found in the cytosol, and that an ancestor of the organism acquired this system by lateral gene transfer from bacteria. This cytosolic, compensating system allowed Monocercomonoides to lose the otherwise essential iron-sulfur cluster-forming pathway in the mitochondrion, the team proposed.

“This work shows the great evolutionary plasticity of the eukaryotic cell,” said Karnkowska, who participated in the study while she was a postdoc at Charles University. Karnkowska, who is now a visiting researcher at the University of British Columbia in Canada, added: “This is a striking example of how far the evolution of a eukaryotic cell can go that was beyond our expectations.”

“The results highlight how many surprises may await us in the poorly studied eukaryotic phyla that live in under-explored environments,” Gabaldon said.

Ettema agreed. “Now that we’ve found one, we need to look at the bigger picture and see if there are other examples of eukaryotes that have lost their mitochondria, to understand how adaptable eukaryotes are.”

  1. Karnkowska et al., “A eukaryote without a mitochondrial organelle,” Current Biology, doi:10.1016/j.cub.2016.03.053, 2016.


 

A Eukaryote without a Mitochondrial Organelle

Anna Karnkowska, Vojtěch Vacek, Zuzana Zubáčová, …, Čestmír Vlček, Vladimír Hampl
DOI: http://dx.doi.org/10.1016/j.cub.2016.03.053


Highlights

  • Monocercomonoides sp. is a eukaryotic microorganism with no mitochondria
  • The complete absence of mitochondria is a secondary loss, not an ancestral feature
  • The essential mitochondrial ISC pathway was replaced by a bacterial SUF system

The presence of mitochondria and related organelles in every studied eukaryote supports the view that mitochondria are essential cellular components. Here, we report the genome sequence of a microbial eukaryote, the oxymonad Monocercomonoides sp., which revealed that this organism lacks all hallmark mitochondrial proteins. Crucially, the mitochondrial iron-sulfur cluster assembly pathway, thought to be conserved in virtually all eukaryotic cells, has been replaced by a cytosolic sulfur mobilization system (SUF) acquired by lateral gene transfer from bacteria. In the context of eukaryotic phylogeny, our data suggest that Monocercomonoides is not primitively amitochondrial but has lost the mitochondrion secondarily. This is the first example of a eukaryote lacking any form of a mitochondrion, demonstrating that this organelle is not absolutely essential for the viability of a eukaryotic cell.

http://www.cell.com/cms/attachment/2056332410/2061316405/fx1.jpg

 

HIV Particles Used to Trap Intact Mammalian Protein Complexes

Belgian scientists from VIB and UGent developed Virotrap, a viral particle sorting approach for purifying protein complexes under native conditions.

http://www.technologynetworks.com/Proteomics/news.aspx?ID=191122

This method catches a bait protein together with its associated protein partners in virus-like particles that are budded from human cells. In this way, cell lysis is not needed and protein complexes are preserved during purification.

With his feet in both a proteomics lab and an interactomics lab, VIB/UGent professor Sven Eyckerman is well aware of the shortcomings of conventional approaches to analyze protein complexes. The lysis conditions required in mass spectrometry–based strategies to break open cell membranes often affect protein-protein interactions. “The first step in a classical study on protein complexes essentially turns the highly organized cellular structure into a big messy soup”, Eyckerman explains.

Inspired by virus biology, Eyckerman came up with a creative solution. “We used the natural process of HIV particle formation to our benefit by hacking a completely safe form of the virus to abduct intact protein machines from the cell.” It is well known that the HIV virus captures a number of host proteins during its particle formation. By fusing a bait protein to the HIV-1 GAG protein, interaction partners become trapped within virus-like particles that bud from mammalian cells. Standard proteomic approaches are used next to reveal the content of these particles. Fittingly, the team named the method ‘Virotrap’.

The Virotrap approach is exceptional as protein networks can be characterized under natural conditions. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. The researchers showed the method was suitable for detection of known binary interactions as well as mass spectrometry-based identification of novel protein partners.

Virotrap is a textbook example of bringing research teams with complementary expertise together. Cross-pollination with the labs of Jan Tavernier (VIB/UGent) and Kris Gevaert (VIB/UGent) enabled the development of this platform.

Jan Tavernier: “Virotrap represents a new concept in co-complex analysis wherein complex stability is physically guaranteed by a protective, physical structure. It is complementary to the arsenal of existing interactomics methods, but also holds potential for other fields, like drug target characterization. We also developed a small molecule-variant of Virotrap that could successfully trap protein partners for small molecule baits.”

Kris Gevaert: “Virotrap can also impact our understanding of disease pathways. We were actually surprised to see that this virus-based system could be used to study antiviral pathways, like Toll-like receptor signaling. Understanding these protein machines in their natural environment is essential if we want to modulate their activity in pathology.”

 

Trapping mammalian protein complexes in viral particles

Sven Eyckerman, Kevin Titeca, …, Kris Gevaert, Jan Tavernier
Nature Communications Apr 2016; 7:11416. http://dx.doi.org/10.1038/ncomms11416

Cell lysis is an inevitable step in classical mass spectrometry–based strategies to analyse protein complexes. Complementary lysis conditions, in situ cross-linking strategies and proximal labelling techniques are currently used to reduce lysis effects on the protein complex. We have developed Virotrap, a viral particle sorting approach that obviates the need for cell homogenization and preserves the protein complexes during purification. By fusing a bait protein to the HIV-1 GAG protein, we show that interaction partners become trapped within virus-like particles (VLPs) that bud from mammalian cells. Using an efficient VLP enrichment protocol, Virotrap allows the detection of known binary interactions and MS-based identification of novel protein partners as well. In addition, we show the identification of stimulus-dependent interactions and demonstrate trapping of protein partners for small molecules. Virotrap constitutes an elegant complementary approach to the arsenal of methods to study protein complexes.

Proteins mostly exert their function within supramolecular complexes. Strategies for detecting protein–protein interactions (PPIs) can be roughly divided into genetic systems (ref. 1) and co-purification strategies combined with mass spectrometry (MS) analysis (for example, AP–MS; ref. 2). The latter approaches typically require cell or tissue homogenization using detergents, followed by capture of the protein complex using affinity tags (ref. 3) or specific antibodies (ref. 4). The protein complexes extracted from this ‘soup’ of constituents are then subjected to several washing steps before actual analysis by trypsin digestion and liquid chromatography–MS/MS analysis. Such lysis and purification protocols are typically empirical and have mostly been optimized using model interactions in single labs. In fact, lysis conditions can profoundly affect the number of both specific and nonspecific proteins that are identified in a typical AP–MS set-up. Indeed, recent studies using the nuclear pore complex as a model protein complex describe optimization of purifications for the different proteins in the complex by examining 96 different conditions (ref. 5). Nevertheless, for new purifications, it remains hard to correctly estimate the loss of factors in a standard AP–MS experiment due to washing and dilution effects during treatments (that is, false negatives). These considerations have pushed the concept of stabilizing PPIs before the actual homogenization step. A classical approach involves cross-linking with simple reagents (for example, formaldehyde) or with more advanced isotope-labelled cross-linkers (reviewed in ref. 2). However, experimental challenges such as cell permeability and reactivity still preclude the widespread use of cross-linking agents. Moreover, MS-generated spectra of cross-linked peptides are notoriously difficult to identify correctly.
A recent lysis-independent solution involves the expression of a bait protein fused to a promiscuous biotin ligase, which results in labelling of proteins proximal to the activity of the enzyme-tagged bait protein (ref. 6). When compared with AP–MS, this BioID approach delivers a complementary set of candidate proteins, including novel interaction partners (refs 7, 8). Such studies clearly underscore the need for complementary approaches in co-complex strategies.

The evolutionary stress on viruses promoted highly condensed coding of information and maximal functionality for small genomes. Accordingly, for HIV-1 it is sufficient to express a single protein, the p55 GAG protein, for efficient production of virus-like particles (VLPs) from cells (refs 9, 10). This protein is highly mobile before its accumulation in cholesterol-rich regions of the membrane, where multimerization initiates the budding process (ref. 11). A total of 4,000–5,000 GAG molecules is required to form a single particle of about 145 nm (ref. 12). Both VLPs and mature viruses contain a number of host proteins that are recruited by binding to viral proteins. These proteins can either contribute to the infectivity (for example, Cyclophilin/FKBPA; ref. 13) or act as antiviral proteins preventing the spreading of the virus (for example, APOBEC proteins; ref. 14).

We here describe the development and application of Virotrap, an elegant co-purification strategy based on the trapping of a bait protein together with its associated protein partners in VLPs that are budded from the cell. After enrichment, these particles can be analysed by targeted (for example, western blotting) or unbiased approaches (MS-based proteomics). Virotrap allows detection of known binary PPIs, analysis of protein complexes and their dynamics, and readily detects protein binders for small molecules.

Concept of the Virotrap system

Classical AP–MS approaches rely on cell homogenization to access protein complexes, a step that can vary significantly with the lysis conditions (detergents, salt concentrations, pH conditions and so on; ref. 5). To eliminate the homogenization step in AP–MS, we reasoned that incorporation of a protein complex inside a secreted VLP traps the interaction partners under native conditions and protects them during further purification. We thus explored the possibility of protein complex packaging by the expression of GAG-bait protein chimeras (Fig. 1), as expression of GAG results in the release of VLPs from the cells (refs 9, 10). As a first PPI pair to evaluate this concept, we selected the HRAS protein as a bait combined with the RAF1 prey protein. We were able to specifically detect the HRAS–RAF1 interaction following enrichment of VLPs via ultracentrifugation (Supplementary Fig. 1a). To avoid tedious ultracentrifugation steps, we designed a novel single-step protocol wherein we co-express the vesicular stomatitis virus glycoprotein (VSV-G) together with a tagged version of this glycoprotein, in addition to the GAG bait and prey. Both tagged and untagged VSV-G proteins are probably presented as trimers on the surface of the VLPs, allowing efficient antibody-based recovery from large volumes. The HRAS–RAF1 interaction was confirmed using this single-step protocol (Supplementary Fig. 1b). No associations with unrelated bait or prey proteins were observed for either protocol.

Figure 1: Schematic representation of the Virotrap strategy.

http://www.nature.com/ncomms/2016/160428/ncomms11416/images_article/ncomms11416-f1.jpg

 

Expression of a GAG-bait fusion protein (1) results in submembrane multimerization (2) and subsequent budding of VLPs from cells (3). Interaction partners of the bait protein are also trapped within these VLPs and can be identified after purification by western blotting or MS analysis (4).

Virotrap for the detection of binary interactions

We next explored the reciprocal detection of a set of PPI pairs, selected based on published evidence and cytosolic localization15. After single-step purification and western blot analysis, we could readily detect reciprocal interactions between CDK2 and CKS1B, LCP2 and GRAP2, and S100A1 and S100B (Fig. 2a). Only for the LCP2 prey did we observe nonspecific association with an irrelevant bait construct. However, the particle levels of the GRAP2 bait were substantially lower than those of the GAG control construct (GAG protein levels in VLPs; Fig. 2a, second panel of the LCP2 prey). After quantifying the intensities of bait and prey proteins and normalizing prey levels by bait levels, we observed a strong enrichment for the GAG-GRAP2 bait (Supplementary Fig. 2).
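The bait-normalization step described above (prey signal divided by the corresponding bait signal, so that differences in particle yield do not mask true enrichment) can be sketched in a few lines. The intensity values and resulting fold-change below are invented for illustration and are not values from the paper.

```python
# Hedged sketch of bait-normalized prey quantification (illustrative numbers only).

def normalized_prey_level(prey_intensity: float, bait_intensity: float) -> float:
    """Normalize a prey band intensity by its bait band intensity."""
    if bait_intensity <= 0:
        raise ValueError("bait intensity must be positive")
    return prey_intensity / bait_intensity

# LCP2 prey signal with a specific GRAP2 bait (low particle/bait levels)
# versus an irrelevant control bait (high particle/bait levels):
grap2_norm = normalized_prey_level(prey_intensity=800.0, bait_intensity=200.0)
control_norm = normalized_prey_level(prey_intensity=300.0, bait_intensity=1500.0)

enrichment = grap2_norm / control_norm  # roughly 20-fold with these made-up numbers
```

With raw intensities alone the control would look comparable; normalization by bait level is what reveals the enrichment.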

…..

Virotrap for unbiased discovery of novel interactions

For the detection of novel interaction partners, we scaled up VLP production and purification protocols (Supplementary Fig. 5 and Supplementary Note 1 for an overview of the protocol) and investigated protein partners trapped using the following bait proteins: Fas-associated via death domain (FADD), A20 (TNFAIP3), nuclear factor-κB (NF-κB) essential modifier (IKBKG), TRAF family member-associated NF-κB activator (TANK), MYD88 and ring finger protein 41 (RNF41). To obtain specific interactors from the lists of identified proteins, we challenged the data with a combined protein list of 19 unrelated Virotrap experiments (Supplementary Table 1 for an overview). Figure 3 shows the design and the list of candidate interactors obtained after removal of all proteins that were found in the 19 control samples (including removal of proteins from the control list identified with a single peptide). The remaining list of confident protein identifications (identified with at least two peptides in at least two biological repeats) reveals both known and novel candidate interaction partners. All candidate interactors including single peptide protein identifications are given in Supplementary Data 2 and also include recurrent protein identifications of known interactors based on a single peptide; for example, CASP8 for FADD and TANK for NEMO. Using alternative methods, we confirmed the interaction between A20 and FADD, and the associations with transmembrane proteins (insulin receptor and insulin-like growth factor receptor 1) that were captured using RNF41 as a bait (Supplementary Fig. 6). To address the use of Virotrap for the detection of dynamic interactions, we activated the NF-κB pathway via the tumour necrosis factor (TNF) receptor (TNFRSF1A) using TNFα (TNF) and performed Virotrap analysis using A20 as bait (Fig. 3). 
This resulted in the additional enrichment of receptor-interacting kinase (RIPK1), TNFR1-associated via death domain (TRADD), TNFRSF1A and TNF itself, confirming the expected activated complex20.
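The filtering logic described above (discard any protein seen in the 19 unrelated control experiments, where control proteins count even with a single peptide, then keep proteins identified with at least two peptides in at least two biological repeats) can be sketched as follows. The protein names and peptide counts are hypothetical.

```python
# Illustrative reimplementation of the control-based filtering; not the authors' code.

def confident_interactors(repeats, control_proteins, min_peptides=2, min_repeats=2):
    """repeats: list of dicts mapping protein -> peptide count per biological repeat."""
    support = {}
    for repeat in repeats:
        for protein, n_peptides in repeat.items():
            if protein in control_proteins:
                continue  # background seen in unrelated Virotrap runs
            if n_peptides >= min_peptides:
                support[protein] = support.get(protein, 0) + 1
    return {p for p, n in support.items() if n >= min_repeats}

controls = {"ACTB", "GAPDH"}  # combined control list (hypothetical)
run1 = {"CASP8": 5, "ACTB": 12, "RIPK1": 2, "TRADD": 1}
run2 = {"CASP8": 4, "RIPK1": 3, "GAPDH": 8}

print(sorted(confident_interactors([run1, run2], controls)))  # -> ['CASP8', 'RIPK1']
```

TRADD is dropped here because a single peptide in a single run does not meet the confidence criteria, mirroring how single-peptide identifications are relegated to the supplementary lists.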

Figure 3: Use of Virotrap for unbiased interactome analysis

http://www.nature.com/ncomms/2016/160428/ncomms11416/images_article/ncomms11416-f3.jpg

Figure 4: Use of Virotrap for detection of protein partners of small molecules.

http://www.nature.com/ncomms/2016/160428/ncomms11416/images_article/ncomms11416-f4.jpg

….

Lysis conditions used in AP–MS strategies are critical for the preservation of protein complexes. A multitude of lysis conditions have been described, culminating in a recent report in which protein complex stability was assessed under 96 lysis/purification protocols5. Moreover, the authors suggest optimizing the conditions for every complex, implying a substantial workload for researchers embarking on protein complex analysis using classical AP–MS. As lysis profoundly changes the subcellular context and significantly alters protein concentrations, loss of complex integrity during a classical AP–MS protocol can be expected. A clear evolution towards ‘lysis-independent’ approaches in the co-complex analysis field is evident with the introduction of BioID6 and APEX25, where proximal proteins, including proteins residing in the complex, are labelled with biotin by an enzymatic activity fused to a bait protein. A side-by-side comparison between classical AP–MS and BioID showed overlapping and unique candidate binding proteins for both approaches7,8, supporting the notion that complementary methods are needed to provide a comprehensive view of protein complexes. This has also been clearly demonstrated for binary approaches15 and is a logical consequence of the heterogeneous nature of PPIs (binding mechanism, requirement for post-translational modifications, location, affinity and so on).

In this report, we explore an alternative, yet complementary method to isolate protein complexes without interfering with cellular integrity. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. This constitutes a new concept in co-complex analysis wherein complex stability is physically guaranteed by a protective, physical structure. A comparison of our Virotrap approach with AP–MS shows complementary data, with specific false positives and false negatives for both methods (Supplementary Fig. 7).

The current implementation of the Virotrap platform implies the use of a GAG-bait construct, resulting in considerable expression of the bait protein. Different strategies are currently being pursued to reduce bait expression, including co-expression of a native GAG protein together with the GAG-bait protein, which not only reduces bait expression but also creates more ‘space’ in the particles, potentially accommodating larger bait protein complexes. Nevertheless, the presence of the bait on the forming GAG scaffold creates an intracellular affinity matrix (comparable to the early in vitro affinity columns used to purify interaction partners from lysates26) that can compete with endogenous complexes through avidity effects. This avidity effect is a powerful mechanism that aids in the recruitment of cyclophilin to GAG27, a well-known weak interaction (Kd=16 μM (ref. 28)) detectable as a background association in the Virotrap system. Although background binding may be increased by elevated bait expression, weaker associations are readily detectable (for example, the MAL–MYD88 binding study; Fig. 2c).

The size of Virotrap particles (around 145 nm) suggests limitations in the size of the protein complex that can be accommodated in the particles. Further experimentation is required to define the maximum size of proteins or the number of protein complexes that can be trapped inside the particles.

….

In conclusion, Virotrap captures significant parts of known interactomes and reveals new interactions. This cell lysis-free approach purifies protein complexes under native conditions and thus provides a powerful method to complement AP–MS or other PPI data. Future improvements of the system include strategies to reduce bait expression to more physiological levels and application of advanced data analysis options to filter out background. These developments can further aid in the deployment of Virotrap as a powerful extension of the current co-complex technology arsenal.

 

New Autism Blood Biomarker Identified

Researchers at UT Southwestern Medical Center have identified a blood biomarker that may aid in earlier diagnosis of children with autism spectrum disorder, or ASD

http://www.technologynetworks.com/Proteomics/news.aspx?ID=191268

 

In a recent edition of Scientific Reports, UT Southwestern researchers reported on the identification of a blood biomarker that could distinguish the majority of ASD study participants versus a control group of similar age range. In addition, the biomarker was significantly correlated with the level of communication impairment, suggesting that the blood test may give insight into ASD severity.

“Numerous investigators have long sought a biomarker for ASD,” said Dr. Dwight German, study senior author and Professor of Psychiatry at UT Southwestern. “The blood biomarker reported here along with others we are testing can represent a useful test with over 80 percent accuracy in identifying ASD.”

ASD1 alone was 66 percent accurate in diagnosing ASD. When combined with thyroid-stimulating hormone level measurements, the ASD1-binding biomarker was 73 percent accurate at diagnosis.

 

A Search for Blood Biomarkers for Autism: Peptoids

Sayed Zaman, Umar Yazdani, …, Laura Hewitson & Dwight C. German
Scientific Reports 2016; 6(19164). http://dx.doi.org/10.1038/srep19164

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social interaction and communication, and restricted, repetitive patterns of behavior. In order to identify individuals with ASD and initiate interventions at the earliest possible age, biomarkers for the disorder are desirable. Research findings have identified widespread changes in the immune system in children with autism, at both systemic and cellular levels. In an attempt to find candidate antibody biomarkers for ASD, highly complex libraries of peptoids (oligo-N-substituted glycines) were screened for compounds that preferentially bind IgG from boys with ASD over typically developing (TD) boys. Unexpectedly, many peptoids were identified that preferentially bound IgG from TD boys. One of these peptoids was studied further and found to bind significantly higher levels (>2-fold) of the IgG1 subtype in serum from TD boys (n = 60) compared to ASD boys (n = 74), as well as compared to older adult males (n = 53). Together these data suggest that ASD boys have reduced levels (>50%) of an IgG1 antibody, which resembles the level found normally with advanced age. In this discovery study, the ASD1 peptoid was 66% accurate in predicting ASD.

….

Peptoid libraries have been used previously to search for autoantibodies for neurodegenerative diseases19 and for systemic lupus erythematosus (SLE)21. In the case of SLE, peptoids were identified that could identify subjects with the disease and related syndromes with moderate sensitivity (70%) and excellent specificity (97.5%). Peptoids were used to measure IgG levels from both healthy subjects and SLE patients. Binding to the SLE-peptoid was significantly higher in SLE patients vs. healthy controls. The IgG bound to the SLE-peptoid was found to react with several autoantigens, suggesting that the peptoids are capable of interacting with multiple, structurally similar molecules. These data indicate that IgG binding to peptoids can identify subjects with high levels of pathogenic autoantibodies vs. a single antibody.

In the present study, the ASD1 peptoid binds significantly lower levels of IgG1 in ASD males vs. TD males. This finding suggests that the ASD1 peptoid recognizes antibody(-ies) of an IgG1 subtype that is (are) significantly lower in abundance in the ASD males vs. TD males. Although a previous study14 has demonstrated lower levels of plasma IgG in ASD vs. TD children, here, we additionally quantified serum IgG levels in our individuals and found no difference in IgG between the two groups (data not shown). Furthermore, our IgG levels did not correlate with ASD1 binding levels, indicating that ASD1 does not bind IgG generically, and that the peptoid’s ability to differentiate between ASD and TD males is related to a specific antibody(-ies).

ASD subjects underwent a diagnostic evaluation using the ADOS and ADI-R, and application of the DSM-IV criteria prior to study inclusion. Only those subjects with a diagnosis of Autistic Disorder were included in the study. The ADOS is a semi-structured observation of a child’s behavior that allows examiners to observe the three core domains of ASD symptoms: reciprocal social interaction, communication, and restricted and repetitive behaviors1. When ADOS subdomain scores were compared with peptoid binding, the only significant relationship was with Social Interaction. However, the positive correlation would suggest that lower peptoid binding is associated with better social interaction, not poorer social interaction as anticipated.

The ADI-R is a structured parental interview that measures the core features of ASD symptoms in the areas of reciprocal social interaction, communication and language, and patterns of behavior. Of the three ADI-R subdomains, only the Communication domain was related to ASD1 peptoid binding, and this correlation was negative suggesting that low peptoid binding is associated with greater communication problems. These latter data are similar to the findings of Heuer et al.14 who found that children with autism with low levels of plasma IgG have high scores on the Aberrant Behavior Checklist (p < 0.0001). Thus, peptoid binding to IgG1 may be useful as a severity marker for ASD allowing for further characterization of individuals, but further research is needed.

It is interesting that in serum samples from older men, ASD1 binding is similar to that in the ASD boys. This is consistent with the observation that aging weakens the immune system, and that these changes are gender-specific25. Recent studies using parabiosis26, in which blood from young mice reverses age-related impairments in cognitive function and synaptic plasticity in old mice, reveal that blood constituents from young subjects may contain substances important for maintaining neuronal function. Work is in progress to identify the antibody or antibodies that differentially bind the ASD1 peptoid, which appear as a single band on the electrophoresis gel (Fig. 4).

……..


 

  • (A) Titration of IgG binding to ASD1 using serum pooled from 10 TD males and 10 ASD males demonstrates ASD1’s ability to differentiate between the two groups. (B) Detecting the IgG1 subclass instead of total IgG amplifies this differentiation. (C) IgG1 binding of individual ASD (n=74) and TD (n=60) male serum samples (1:100 dilution) to ASD1 significantly differs, with TD>ASD. In addition, IgG1 binding of older adult male (AM) serum samples (n=53) to ASD1 is significantly lower than that of TD males, and not different from ASD males. The three groups were compared with a Kruskal-Wallis ANOVA, H = 10.1781, p<0.006. **p<0.005. Error bars show SEM. (D) Receiver-operating characteristic curve for ASD1’s ability to discriminate between ASD and TD males.

http://www.nature.com/article-assets/npg/srep/2016/160114/srep19164/images_hires/m685/srep19164-f3.jpg
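The three-group comparison in panel (C) uses a Kruskal-Wallis ANOVA (the paper reports H = 10.1781, p < 0.006). A minimal stdlib-only sketch of the H statistic is shown below; it omits the tie correction that a full implementation such as scipy.stats.kruskal applies, and the binding values are invented, not the study's data.

```python
# Kruskal-Wallis H statistic, sketched without tie correction (illustrative only).

def kruskal_h(*groups):
    """H statistic over the pooled mid-ranks of all groups."""
    pooled = sorted(x for g in groups for x in g)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0  # mid-rank shared by tied values
        i = j
    n = len(pooled)
    h = sum(sum(ranks[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Three invented groups standing in for TD, ASD and older adult males:
td, asd, am = [0.8, 0.9, 1.1], [0.3, 0.4, 0.5], [0.2, 0.35, 0.45]
print(round(kruskal_h(td, asd, am), 3))  # -> 5.6
```

A large H means at least one group's rank distribution differs from the others; the p-value is then read from a chi-squared approximation with (number of groups − 1) degrees of freedom.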

 

Association between peptoid binding and ADOS and ADI-R subdomains

Higher scores in any domain on the ADOS and ADI-R are indicative of more abnormal behaviors and/or symptoms. Among ADOS subdomains, there was no significant relationship between Communication and peptoid binding (z = 0.04, p = 0.966), Communication + Social interaction (z = 1.53, p = 0.127), or Stereotyped Behaviors and Restrictive Interests (SBRI) (z = 0.46, p = 0.647). Higher scores on the Social Interaction domain were significantly associated with higher peptoid binding (z = 2.04, p = 0.041).

Among ADI-R subdomains, higher scores on the Communication domain were associated with lower levels of peptoid binding (z = −2.28, p = 0.023). There was not a significant relationship between Social Interaction (z = 0.07, p = 0.941) or Restrictive/Repetitive Stereotyped Behaviors (z = −1.40, p = 0.162) and peptoid binding.
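The subdomain associations above are sign-and-significance statements about rank relationships. As a hedged sketch of how such a rank correlation could be computed (the paper reports z-statistics from its own models, not this exact calculation, and the data below are hypothetical):

```python
# Spearman rank correlation, stdlib-only sketch (no tie handling).

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = float(pos)
    return r

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical pattern: higher ADI-R Communication scores, lower peptoid binding
communication = [10, 14, 18, 22, 26]
binding = [0.9, 0.7, 0.6, 0.4, 0.2]
print(spearman(communication, binding))  # close to -1.0: perfect negative rank order
```

A negative coefficient, as in the Communication result above, means greater impairment tracks with lower ASD1 binding.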

 

 

Computational Model Finds New Protein-Protein Interactions

Researchers at University of Pittsburgh have discovered 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia.

http://www.technologynetworks.com/Proteomics/news.aspx?id=190995

Using a computational model they developed, researchers at the University of Pittsburgh School of Medicine have discovered more than 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia. The findings, published online in npj Schizophrenia, a Nature Publishing Group journal, could lead to greater understanding of the biological underpinnings of this mental illness, as well as point the way to treatments.

There have been many genome-wide association studies (GWAS) that have identified gene variants associated with an increased risk for schizophrenia, but in most cases there is little known about the proteins that these genes make, what they do and how they interact, said senior investigator Madhavi Ganapathiraju, Ph.D., assistant professor of biomedical informatics, Pitt School of Medicine.

“GWAS studies and other research efforts have shown us what genes might be relevant in schizophrenia,” she said. “What we have done is the next step. We are trying to understand how these genes relate to each other, which could show us the biological pathways that are important in the disease.”

Each gene makes proteins and proteins typically interact with each other in a biological process. Information about interacting partners can shed light on the role of a gene that has not been studied, revealing pathways and biological processes associated with the disease and also its relation to other complex diseases.

Dr. Ganapathiraju’s team developed a computational model called High-Precision Protein Interaction Prediction (HiPPIP) and applied it to discover PPIs of schizophrenia-linked genes identified through GWAS, as well as historically known risk genes. They found 504 never-before known PPIs, and noted also that while schizophrenia-linked genes identified historically and through GWAS had little overlap, the model showed they shared more than 100 common interactors.

“We can infer what the protein might do by checking out the company it keeps,” Dr. Ganapathiraju explained. “For example, if I know you have many friends who play hockey, it could mean that you are involved in hockey, too. Similarly, if we see that an unknown protein interacts with multiple proteins involved in neural signaling, for example, there is a high likelihood that the unknown entity also is involved in the same.”

Dr. Ganapathiraju and colleagues have drawn such inferences on protein function based on the PPIs of proteins, and made their findings available on a website Schizo-Pi. This information can be used by biologists to explore the schizophrenia interactome with the aim of understanding more about the disease or developing new treatment drugs.

Schizophrenia interactome with 504 novel protein–protein interactions

MK Ganapathiraju, M Thahir, …, CE Loscher, EM Bauer & S Chaparala
npj Schizophrenia 2016; 2(16012). http://dx.doi.org/10.1038/npjschz.2016.12

Genome-wide association studies (GWAS) have revealed the role of rare and common genetic variants, but the functional effects of the risk variants remain to be understood. Protein interactome-based studies can facilitate the study of the molecular mechanisms by which the risk genes relate to schizophrenia (SZ) genesis, but protein–protein interactions (PPIs) are unknown for many of the liability genes. We developed a computational model to discover PPIs, which is found to be highly accurate according to computational evaluations and experimental validations of selected PPIs. We present here 365 novel PPIs of liability genes identified by the SZ Working Group of the Psychiatric Genomics Consortium (PGC). Seventeen genes that had no previously known interactions have 57 novel interactions by our method. Among the new interactors are 19 drug targets that are targeted by 130 drugs. In addition, we computed 147 novel PPIs of 25 candidate genes investigated in the pre-GWAS era. While there is little overlap between the GWAS genes and the pre-GWAS genes, the interactomes reveal that they largely belong to the same pathways, thus reconciling the apparent disparities between the GWAS and prior gene association studies. The interactome, including 504 novel PPIs overall, could motivate other systems biology studies and trials with repurposed drugs. The PPIs are made available on a webserver, called Schizo-Pi, at http://severus.dbmi.pitt.edu/schizo-pi with advanced search capabilities.

Schizophrenia (SZ) is a common, potentially severe psychiatric disorder that afflicts all populations.1 Gene mapping studies suggest that SZ is a complex disorder, with a cumulative impact of variable genetic effects coupled with environmental factors.2 As many as 38 genome-wide association studies (GWAS) have been reported on SZ out of a total of 1,750 GWAS publications on 1,087 traits or diseases reported in the GWAS catalog maintained by the National Human Genome Research Institute of USA3 (as of April 2015), revealing the common variants associated with SZ.4 The SZ Working Group of the Psychiatric Genomics Consortium (PGC) identified 108 genetic loci that likely confer risk for SZ.5 While the role of genetics has been clearly validated by this study, the functional impact of the risk variants is not well-understood.6,7 Several of the genes implicated by the GWAS have unknown functions and could participate in possibly hitherto unknown pathways.8 Further, there is little or no overlap between the genes identified through GWAS and ‘candidate genes’ proposed in the pre-GWAS era.9

Interactome-based studies can be useful in discovering the functional associations of genes. For example, disrupted in schizophrenia 1 (DISC1), an SZ-related candidate gene, originally had no known homolog in humans. Although it had well-characterized protein domains such as coiled-coil domains and leucine-zipper domains, its function was unknown.10,11 Once its protein–protein interactions (PPIs) were determined using yeast 2-hybrid technology,12 investigators successfully linked DISC1 to cAMP signaling, axon elongation, and neuronal migration, and accelerated the research pertaining to SZ in general, and DISC1 in particular.13 Typically such studies are carried out on known protein–protein interaction (PPI) networks, or, as in the case of DISC1, when there is a specific gene of interest, its PPIs are determined by methods such as yeast 2-hybrid technology.

Knowledge of human PPI networks is thus valuable for accelerating discovery of protein function, and indeed, biomedical research in general. However, of the hundreds of thousands of biophysical PPIs thought to exist in the human interactome,14,15 fewer than 100,000 are known today (Human Protein Reference Database, HPRD16 and BioGRID17 databases). Gold-standard experimental methods for determining all the PPIs in the human interactome are time-consuming, expensive and may not even be feasible, as about 250 million pairs of proteins would need to be tested overall; high-throughput methods such as yeast 2-hybrid have important limitations for whole-interactome determination, as they have a low recall of 23% (i.e., the remaining 77% of true interactions need to be determined by other means) and a low precision (i.e., the screens have to be repeated multiple times to achieve high selectivity).18,19 Computational methods are therefore necessary to complete the interactome expeditiously. Algorithms have begun emerging to predict PPIs using statistical machine learning on the characteristics of the proteins, but these algorithms have been employed predominantly to study yeast. Two significant computational predictions have been reported for the human interactome; although they had high false-positive rates, these methods laid the foundation for computational prediction of human PPIs.20,21
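The scale argument above is easy to verify: all-against-all testing of roughly 20,000 to 25,000 human proteins gives on the order of 250 million pairs, and a 23% recall leaves most true interactions undetected by a single screen. A quick back-of-the-envelope check (the interactome size below is illustrative):

```python
# Sanity-checking the pair counts quoted in the text.
from math import comb

for n_proteins in (20_000, 22_500, 25_000):
    # number of unordered protein pairs, n choose 2
    print(n_proteins, comb(n_proteins, 2))  # ~2.0e8, ~2.5e8, ~3.1e8 pairs

true_ppis = 300_000  # illustrative interactome size, within the quoted estimates
recall = 0.23
print(round(true_ppis * (1 - recall)))  # -> 231000 interactions missed per screen
```

At 23% recall, a single yeast 2-hybrid pass over such an interactome would leave roughly three quarters of the true interactions to be found by other means.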

We have created a new PPI prediction model called High-Confidence Protein–Protein Interaction Prediction (HiPPIP). Novel interactions predicted with this model are making translational impact. For example, we discovered a PPI between OASL and DDX58, which on validation showed that an increased expression of OASL could boost innate immunity to combat influenza by activating the RIG-I pathway.22 Also, the interactome of the genes associated with congenital heart disease showed that the disease morphogenesis has a close connection with the structure and function of cilia.23 Here, we describe the HiPPIP model and its application to SZ genes to construct the SZ interactome. After computational evaluations and experimental validations of selected novel PPIs, we present here 504 highly confident novel PPIs in the SZ interactome, shedding new light onto several uncharacterized genes that are associated with SZ.

We developed a computational model called HiPPIP to predict PPIs (see Methods and Supplementary File 1). The model has been evaluated by computational methods and experimental validations and is found to be highly accurate. Evaluations on held-out test data showed a precision of 97.5% and a recall of 5%. A 5% recall of the 150,000 to 600,000 interactions estimated to exist in the human interactome corresponds to 7,500–30,000 novel PPIs. Note that the real precision is likely to be higher than 97.5%, because in this test data randomly paired proteins are treated as non-interacting pairs, whereas some of them may actually interact with a small probability; thus, some of the pairs treated as false positives in the test set are likely to be true but hitherto unknown interactions. In Figure 1a, we show the precision versus recall of our method on ‘hub proteins’, where we considered all pairs that received a score >0.5 by HiPPIP to be novel interactions. In Figure 1b, we show the number of true positives versus false positives observed in hub proteins. Both figures also show our method to be superior to the prediction of the membrane-receptor interactome by Qi et al.24 True positives versus false positives are also shown for individual hub proteins by our method in Figure 1c and by Qi et al. in Figure 1d. These evaluations showed that our predictions contain mostly true positives. Unlike in other domains where ranked lists are commonly used, such as information retrieval, in PPI prediction the ‘false positives’ may actually be unlabeled instances that are indeed true interactions not yet discovered. In fact, such unlabeled pairs predicted as interactors of the hub gene HMGB1 (namely, the pairs HMGB1-KL and HMGB1-FLT1) were validated by experimental methods and found to be true PPIs (see Figures e–g in Supplementary File 3).
Thus, we concluded that protein pairs receiving a score of ⩾0.5 are highly likely to be true interactions. Pairs that receive a score below but close to 0.5 (i.e., in the range of 0.4–0.5) may also contain several true PPIs; however, we cannot confidently say that all pairs in this range are true PPIs. Only the PPIs predicted with a score >0.5 are included in the interactome.
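The thresholding described above (pairs scoring at or above 0.5 are called interactions, and precision and recall are then computed against labels) can be sketched as follows. The scores and labels are invented, and, as the text notes, some nominal false positives may in reality be undiscovered true interactions.

```python
# Precision/recall at a fixed score threshold (illustrative data, not HiPPIP output).

def precision_recall(scored_pairs, threshold=0.5):
    """scored_pairs: list of (score, is_true_interaction) tuples."""
    tp = sum(1 for s, y in scored_pairs if s >= threshold and y)
    fp = sum(1 for s, y in scored_pairs if s >= threshold and not y)
    fn = sum(1 for s, y in scored_pairs if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pairs = [(0.92, True), (0.71, True), (0.55, False), (0.45, True), (0.10, False)]
print(precision_recall(pairs))  # -> (0.666..., 0.666...) at threshold 0.5
```

Sweeping the threshold from 1.0 downward over such scored pairs is exactly what produces a precision-recall curve like the one in Figure 1a.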

Figure 1

http://www.nature.com/article-assets/npg/npjschz/2016/npjschz201612/images_hires/w582/npjschz201612-f1.jpg

Computational evaluation of predicted protein–protein interactions on hub proteins: (a) precision-recall curve. (b) True positives versus false positives in ranked lists of hub-type membrane receptors for our method and that of Qi et al. True positives versus false positives are shown for individual membrane receptors by our method in (c) and by Qi et al. in (d). The thick line is the average, which is also the same as shown in (b). Note: the x-axis is recall in (a), whereas it is the number of false positives in (b–d). The range of the y-axis is obtained by varying the threshold from 1.0 to 0 in (a), and to 0.5 in (b–d).

SZ interactome

By applying HiPPIP to the GWAS genes and the Historic (pre-GWAS) genes, we predicted over 500 high-confidence new PPIs, adding to the roughly 1,400 previously known PPIs.

Schizophrenia interactome: network view of the schizophrenia interactome is shown as a graph, where genes are shown as nodes and PPIs as edges connecting the nodes. Schizophrenia-associated genes are shown as dark blue nodes, novel interactors as red color nodes and known interactors as blue color nodes. The source of the schizophrenia genes is indicated by its label font, where Historic genes are shown italicized, GWAS genes are shown in bold, and the one gene that is common to both is shown in italicized and bold. For clarity, the source is also indicated by the shape of the node (triangular for GWAS and square for Historic and hexagonal for both). Symbols are shown only for the schizophrenia-associated genes; actual interactions may be accessed on the web. Red edges are the novel interactions, whereas blue edges are known interactions. GWAS, genome-wide association studies of schizophrenia; PPI, protein–protein interaction.

http://www.nature.com/article-assets/npg/npjschz/2016/npjschz201612/images_hires/m685/npjschz201612-f2.jpg
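The network view described in the caption (genes as nodes with a source attribute, PPIs as undirected edges tagged as known or novel) maps naturally onto a simple adjacency structure. The gene names and edges below are illustrative only and are not taken from the actual interactome.

```python
# A minimal node/edge representation of an interactome graph (illustrative data).
from collections import defaultdict

# node -> source category (Historic, GWAS, or plain interactor)
nodes = {"DISC1": "Historic", "TCF4": "GWAS", "HMGB1": "interactor"}

# (gene_a, gene_b, edge_status) triples; status distinguishes known vs novel PPIs
edges = [("DISC1", "HMGB1", "known"), ("TCF4", "HMGB1", "novel")]

adjacency = defaultdict(dict)
for a, b, status in edges:
    adjacency[a][b] = status
    adjacency[b][a] = status  # PPIs are undirected

# Interaction partners of a gene, with how each edge was obtained:
print(sorted(adjacency["HMGB1"].items()))
# -> [('DISC1', 'known'), ('TCF4', 'novel')]
```

Rendering such a structure with different node shapes and edge colors, as the caption describes, is then purely a visualization step on top of the same adjacency data.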

 

Webserver of SZ interactome

We have made the known and novel interactions of all SZ-associated genes available on a webserver called Schizo-Pi, at http://severus.dbmi.pitt.edu/schizo-pi. This webserver is similar to Wiki-Pi33, which presents comprehensive annotations of both participating proteins of a PPI side by side. The difference between Wiki-Pi, which we developed earlier, and Schizo-Pi is the inclusion of the novel predicted interactions of the SZ genes into the latter.

Despite the many advances in biomedical research, identifying the molecular mechanisms underlying disease remains challenging. Studies based on protein interactions have proven valuable in identifying novel gene associations that could shed new light on disease pathology.35 The interactome, including more than 500 novel PPIs, will help to identify pathways and biological processes associated with the disease, as well as its relation to other complex diseases. It also helps identify potential drugs that could be repurposed for SZ treatment.

Functional and pathway enrichment in SZ interactome

When a gene of interest has little known information, functions of its interacting partners serve as a starting point to hypothesize its own function. We computed statistically significant enrichment of GO biological process terms among the interacting partners of each of the genes using BinGO36 (see online at http://severus.dbmi.pitt.edu/schizo-pi).
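Enrichment tests of the kind BinGO performs reduce to a hypergeometric tail probability: given N annotated genes of which K carry a GO term, how likely is it that k or more of the n interaction partners carry it by chance? A stdlib-only sketch with invented counts follows; a real analysis would also correct for multiple testing across GO terms.

```python
# Hypergeometric enrichment p-value, the statistic behind GO term enrichment.
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): upper tail by direct summation."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)) / total

# 20,000 annotated genes, 200 carrying a hypothetical "neural signaling" term,
# 30 interaction partners of which 6 carry the term (expected by chance: ~0.3):
p = hypergeom_enrichment_p(N=20_000, K=200, n=30, k=6)
print(p < 0.001)  # strongly enriched under these made-up numbers
```

When the observed count far exceeds the chance expectation, as here, the term is reported as significantly enriched among the gene's interaction partners.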

 

Protein aggregation and aggregate toxicity: new insights into protein folding, misfolding diseases and biological evolution

Massimo Stefani · Christopher M. Dobson

Abstract The deposition of proteins in the form of amyloid fibrils and plaques is the characteristic feature of more than 20 degenerative conditions affecting either the central nervous system or a variety of peripheral tissues. As these conditions include Alzheimer’s, Parkinson’s and the prion diseases, several forms of fatal systemic amyloidosis, and at least one condition associated with medical intervention (haemodialysis), they are of enormous importance in the context of present-day human health and welfare. Much remains to be learned about the mechanism by which the proteins associated with these diseases aggregate and form amyloid structures, and how the latter affect the functions of the organs with which they are associated. A great deal of information concerning these diseases has emerged, however, during the past 5 years, much of it causing a number of fundamental assumptions about the amyloid diseases to be reexamined. For example, it is now apparent that the ability to form amyloid structures is not an unusual feature of the small number of proteins associated with these diseases but is instead a general property of polypeptide chains. It has also been found recently that aggregates of proteins not associated with amyloid diseases can impair the ability of cells to function to a similar extent as aggregates of proteins linked with specific neurodegenerative conditions. Moreover, the mature amyloid fibrils or plaques appear to be substantially less toxic than the prefibrillar aggregates that are their precursors. The toxicity of these early aggregates appears to result from an intrinsic ability to impair fundamental cellular processes by interacting with cellular membranes, causing oxidative stress and increases in free Ca2+ that eventually lead to apoptotic or necrotic cell death. The ‘new view’ of these diseases also suggests that other degenerative conditions could have similar underlying origins to those of the amyloidoses. 
In addition, cellular protection mechanisms, such as molecular chaperones and the protein degradation machinery, appear to be crucial in the prevention of disease in normally functioning living organisms. It also suggests some intriguing new factors that could be of great significance in the evolution of biological molecules and the mechanisms that regulate their behaviour.

The genetic information within a cell encodes not only the specific structures and functions of proteins but also the way these structures are attained through the process known as protein folding. In recent years many of the underlying features of the fundamental mechanism of this complex process and the manner in which it is regulated in living systems have emerged from a combination of experimental and theoretical studies [1]. The knowledge gained from these studies has also raised a host of interesting issues. It has become apparent, for example, that the folding and unfolding of proteins is associated with a whole range of cellular processes from the trafficking of molecules to specific organelles to the regulation of the cell cycle and the immune response. Such observations led to the inevitable conclusion that the failure to fold correctly, or to remain correctly folded, gives rise to many different types of biological malfunctions and hence to many different forms of disease [2]. In addition, it has been recognised recently that a large number of eukaryotic genes code for proteins that appear to be ‘natively unfolded’, and that proteins can adopt, under certain circumstances, highly organised multi-molecular assemblies whose structures are not specifically encoded in the amino acid sequence. Both these observations have raised challenging questions about one of the most fundamental principles of biology: the close relationship between the sequence, structure and function of proteins, as we discuss below [3].

It is well established that proteins that are ‘misfolded’, i.e. that are not in their functionally relevant conformation, are devoid of normal biological activity. In addition, they often aggregate and/or interact inappropriately with other cellular components, leading to impairment of cell viability and eventually to cell death. Many diseases, often known as misfolding or conformational diseases, ultimately result from the presence in a living system of protein molecules with structures that are ‘incorrect’, i.e. that differ from those in normally functioning organisms [4]. Such diseases include conditions in which a specific protein, or protein complex, fails to fold correctly (e.g. cystic fibrosis, Marfan syndrome, amyotrophic lateral sclerosis) or is not sufficiently stable to perform its normal function (e.g. many forms of cancer). They also include conditions in which aberrant folding behaviour results in the failure of a protein to be correctly trafficked (e.g. familial hypercholesterolaemia, α1-antitrypsin deficiency, and some forms of retinitis pigmentosa) [4]. The tendency of proteins to aggregate, often to give species extremely intractable to dissolution and refolding, is of course also well known in other circumstances. Examples include the formation of inclusion bodies during overexpression of heterologous proteins in bacteria and the precipitation of proteins during laboratory purification procedures. Indeed, protein aggregation is well established as one of the major difficulties associated with the production and handling of proteins in the biotechnology and pharmaceutical industries [5].

Considerable attention is presently focused on a group of protein folding diseases known as amyloidoses. In these diseases specific peptides or proteins fail to fold or to remain correctly folded and then aggregate (often with other components) so as to give rise to ‘amyloid’ deposits in tissue. Amyloid structures can be recognised because they possess a series of specific tinctorial and biophysical characteristics that reflect a common core structure based on the presence of highly organised β-sheets [6]. The deposits in strictly defined amyloidoses are extracellular and can often be observed as thread-like fibrillar structures, sometimes assembled further into larger aggregates or plaques. These diseases include a range of sporadic, familial or transmissible degenerative diseases, some of which affect the brain and the central nervous system (e.g. Alzheimer’s and Creutzfeldt-Jakob diseases), while others involve peripheral tissues and organs such as the liver, heart and spleen (e.g. systemic amyloidoses and type II diabetes) [7, 8]. In other forms of amyloidosis, such as primary or secondary systemic amyloidoses, proteinaceous deposits are found in skeletal tissue and joints (e.g. haemodialysis-related amyloidosis) as well as in several organs (e.g. heart and kidney). Yet other components such as collagen, glycosaminoglycans and proteins (e.g. serum amyloid protein) are often present in the deposits, protecting them against degradation [9, 10, 11]. Similar deposits to those in the amyloidoses are, however, found intracellularly in other diseases; these can be localised either in the cytoplasm, in the form of specialised aggregates known as aggresomes or as Lewy or Russell bodies, or in the nucleus (see below).

The presence in tissue of proteinaceous deposits is a hallmark of all these diseases, suggesting a causative link between aggregate formation and pathological symptoms (often known as the amyloid hypothesis) [7, 8, 12]. At the present time the link between amyloid formation and disease is widely accepted on the basis of a large number of biochemical and genetic studies. The specific nature of the pathogenic species, and the molecular basis of their ability to damage cells, are, however, the subject of intense debate [13, 14, 15, 16, 17, 18, 19, 20]. In neurodegenerative disorders it is very likely that the impairment of cellular function follows directly from the interactions of the aggregated proteins with cellular components [21, 22]. In the systemic non-neurological diseases, however, it is widely believed that the accumulation in vital organs of large amounts of amyloid deposits can by itself cause at least some of the clinical symptoms [23]. It is quite possible, however, that there are other more specific effects of aggregates on biochemical processes even in these diseases. The presence of extracellular or intracellular aggregates of a specific polypeptide molecule is a characteristic of all the 20 or so recognised amyloid diseases. The polypeptides involved include full-length proteins (e.g. lysozyme or immunoglobulin light chains), biological peptides (amylin, atrial natriuretic factor) and fragments of larger proteins produced as a result of specific processing (e.g. the Alzheimer β-peptide) or of more general degradation [e.g. poly(Q) stretches cleaved from proteins with poly(Q) extensions such as huntingtin, ataxins and the androgen receptor]. The peptides and proteins associated with known amyloid diseases are listed in Table 1. In some cases the proteins involved have wild type sequences, as in sporadic forms of the diseases, but in other cases these are variants resulting from genetic mutations associated with familial forms of the diseases.
In some cases both sporadic and familial diseases are associated with a given protein; in this case the mutational variants are usually associated with early-onset forms of the disease. In the case of the neurodegenerative diseases associated with the prion protein some forms of the diseases are transmissible. The existence of familial forms of a number of amyloid diseases has provided significant clues to the origins of the pathologies. For example, there are increasingly strong links between the age at onset of familial forms of disease and the effects of the mutations involved on the propensity of the affected proteins to aggregate in vitro. Such findings also support the link between the process of aggregation and the clinical manifestations of disease [24, 25].

The presence in cells of misfolded or aggregated proteins triggers a complex biological response. In the cytosol, this is referred to as the ‘heat shock response’ and in the endoplasmic reticulum (ER) it is known as the ‘unfolded protein response’. These responses lead to the expression, among others, of the genes for heat shock proteins (Hsp, or molecular chaperone proteins) and proteins involved in the ubiquitin-proteasome pathway [26]. The evolution of such complex biochemical machinery testifies to the fact that it is necessary for cells to isolate and clear rapidly and efficiently any unfolded or incorrectly folded protein as soon as it appears. In itself this fact suggests that these species could have a generally adverse effect on cellular components and cell viability. Indeed, it was a major step forward in understanding many aspects of cell biology when it was recognised that proteins previously associated only with stress, such as heat shock, are in fact crucial in the normal functioning of living systems. This advance, for example, led to the discovery of the role of molecular chaperones in protein folding and in the normal ‘housekeeping’ processes that are inherent in healthy cells [27, 28]. More recently a number of degenerative diseases, both neurological and systemic, have been linked to, or shown to be affected by, impairment of the ubiquitin-proteasome pathway (Table 2). The diseases are primarily associated with a reduction in either the expression or the biological activity of Hsps, ubiquitin, ubiquitinating or deubiquitinating enzymes and the proteasome itself, as we show below [29, 30, 31, 32], or even with the failure of the quality control mechanisms that ensure proper maturation of proteins in the ER. The latter normally leads to degradation of a significant proportion of polypeptide chains before they have attained their native conformations through retrograde translocation to the cytosol [33, 34].

….

It is now well established that the molecular basis of protein aggregation into amyloid structures involves the existence of ‘misfolded’ forms of proteins, i.e. proteins that are not in the structures in which they normally function in vivo, or of fragments of proteins resulting from degradation processes that are inherently unable to fold [4, 7, 8, 36]. Aggregation is one of the common consequences of a polypeptide chain failing to reach or maintain its functional three-dimensional structure. Such events can be associated with specific mutations, misprocessing phenomena, aberrant interactions with metal ions, changes in environmental conditions, such as pH or temperature, or chemical modification (oxidation, proteolysis). Perturbations in the conformational properties of the polypeptide chain resulting from such phenomena may affect equilibrium 1 in Fig. 1, increasing the population of partially unfolded, or misfolded, species that are much more aggregation-prone than the native state.

Fig. 1 Overview of the possible fates of a newly synthesised polypeptide chain. The equilibrium ① between the partially folded molecules and the natively folded ones is usually strongly in favour of the latter except as a result of specific mutations, chemical modifications or partially destabilising solution conditions. The increased equilibrium populations of molecules in the partially or completely unfolded ensemble of structures are usually degraded by the proteasome; when this clearance mechanism is impaired, such species often form disordered aggregates or shift equilibrium ② towards the nucleation of pre-fibrillar assemblies that eventually grow into mature fibrils (equilibrium ③). DANGER! indicates that pre-fibrillar aggregates in most cases display much higher toxicity than mature fibrils. Heat shock proteins (Hsp) can suppress the appearance of pre-fibrillar assemblies by minimising the population of the partially folded molecules, assisting in the correct folding of the nascent chain, while the unfolded protein response targets incorrectly folded proteins for degradation.
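The succession described in the caption can be caricatured as a simple A → B → C kinetic scheme, in which the pre-fibrillar species accumulates transiently before being consumed by fibril maturation. This is only an illustrative sketch: the species names, rate constants and Euler integration below are invented for the cartoon and are not taken from the review.

```python
import numpy as np

# Toy kinetics echoing Fig. 1: misfolded monomer (M) nucleates
# pre-fibrillar assemblies (P), which mature into fibrils (F).
# k1 and k2 are arbitrary illustrative values, not measured rates.
def simulate(k1=1.0, k2=0.3, m0=1.0, dt=0.01, t_max=30.0):
    steps = int(t_max / dt)
    m, p, f = m0, 0.0, 0.0
    traj = np.empty((steps, 3))
    for i in range(steps):
        traj[i] = (m, p, f)
        dm = -k1 * m * dt               # monomer consumed by nucleation
        dp = (k1 * m - k2 * p) * dt     # pre-fibrillar pool: fed, then drained
        df = k2 * p * dt                # fibrils only accumulate
        m, p, f = m + dm, p + dp, f + df
    return traj

traj = simulate()
peak_step = int(np.argmax(traj[:, 1]))
# The pre-fibrillar species rises, peaks transiently, then declines as
# mature fibrils accumulate -- the window in which toxicity is highest.
print(f"pre-fibrillar peak at t = {peak_step * 0.01:.2f}")
```

The qualitative point is that the putatively toxic intermediate is a transient pool, so its abundance at any moment depends on the balance of nucleation and maturation rates, not on the final fibril load.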

……

Little is known at present about the detailed arrangement of the polypeptide chains themselves within amyloid fibrils, either in those parts involved in the core β-strands or in the regions that connect the various β-strands. Recent data suggest that the sheets are relatively untwisted and may in some cases at least exist in quite specific supersecondary structure motifs such as β-helices [6, 40] or the recently proposed µ-helix [41]. It seems possible that there may be significant differences in the way the strands are assembled depending on characteristics of the polypeptide chain involved [6, 42]. Factors including length, sequence (and in some cases the presence of disulphide bonds or post-translational modifications such as glycosylation) may be important in determining details of the structures. Several recent papers report structural models for amyloid fibrils containing different polypeptide chains, including the Aβ40 peptide, insulin and fragments of the prion protein, based on data from such techniques as cryo-electron microscopy and solid-state magnetic resonance spectroscopy [43, 44]. These models have much in common and do indeed appear to reflect the fact that the structures of different fibrils are likely to be variations on a common theme [40]. It is also emerging that there may be some common and highly organised assemblies of amyloid protofilaments that are not simply extended threads or ribbons. It is clear, for example, that in some cases large closed loops can be formed [45, 46, 47], and there may be specific types of relatively small spherical or ‘doughnut’ shaped structures that can result in at least some circumstances (see below).

…..

The similarity of some early amyloid aggregates to the pores resulting from oligomerisation of bacterial toxins and pore-forming eukaryotic proteins (see below) also suggests that the basic mechanism of protein aggregation into amyloid structures may not only be associated with diseases but in some cases could result in species with functional significance. Recent evidence indicates that a variety of micro-organisms may exploit the controlled aggregation of specific proteins (or their precursors) to generate functional structures. Examples include bacterial curli [52] and proteins of the interior fibre cells of mammalian ocular lenses, whose β-sheet arrays seem to be organised in an amyloid-like supramolecular order [53]. In the latter case the inherent stability of amyloid-like protein structure may contribute to the long-term structural integrity and transparency of the lens. Recently it has been hypothesised that amyloid-like aggregates of serum amyloid A found in secondary amyloidoses following chronic inflammatory diseases protect the host against bacterial infections by inducing lysis of bacterial cells [54]. One particularly interesting example is a ‘misfolded’ form of the milk protein α-lactalbumin that is formed at low pH and trapped by the presence of specific lipid molecules [55]. This form of the protein has been reported to trigger apoptosis selectively in tumour cells, providing evidence for its importance in protecting infants from certain types of cancer [55]. ….

Amyloid formation is a generic property of polypeptide chains ….

It is clear that the presence of different side chains can influence the details of amyloid structures, particularly the assembly of protofibrils, and that they give rise to the variations on the common structural theme discussed above. More fundamentally, the composition and sequence of a peptide or protein affects profoundly its propensity to form amyloid structures under given conditions (see below).

Because the formation of stable protein aggregates of amyloid type does not normally occur in vivo under physiological conditions, it is likely that the proteins encoded in the genomes of living organisms are endowed with structural adaptations that militate against aggregation under these conditions. A recent survey involving a large number of structures of β-proteins highlights several strategies through which natural proteins avoid intermolecular association of β-strands in their native states [65]. Other surveys of protein databases indicate that nature disfavours sequences of alternating polar and nonpolar residues, as well as clusters of several consecutive hydrophobic residues, both of which enhance the tendency of a protein to aggregate prior to becoming completely folded [66, 67].

……

Precursors of amyloid fibrils can be toxic to cells

It was generally assumed until recently that the proteinaceous aggregates most toxic to cells are likely to be mature amyloid fibrils, the form of aggregates that have been commonly detected in pathological deposits. It therefore appeared probable that the pathogenic features underlying amyloid diseases are a consequence of the interaction with cells of extracellular deposits of aggregated material. As well as forming the basis for understanding the fundamental causes of these diseases, this scenario stimulated the exploration of therapeutic approaches to amyloidoses that focused mainly on the search for molecules able to impair the growth and deposition of fibrillar forms of aggregated proteins. ….

Structural basis and molecular features of amyloid toxicity

The presence of toxic aggregates inside or outside cells can impair a number of cell functions that ultimately lead to cell death by an apoptotic mechanism [95, 96]. Recent research suggests, however, that in most cases initial perturbations to fundamental cellular processes underlie the impairment of cell function induced by aggregates of disease-associated polypeptides. Many pieces of data point to a central role of modifications to the intracellular redox status and free Ca2+ levels in cells exposed to toxic aggregates [45, 89, 97, 98, 99, 100, 101]. A modification of the intracellular redox status in such cells is associated with a sharp increase in the quantity of reactive oxygen species (ROS) that is reminiscent of the oxidative burst by which leukocytes destroy invading foreign cells after phagocytosis. In addition, changes have been observed in reactive nitrogen species, lipid peroxidation, deregulation of NO metabolism [97], protein nitrosylation [102] and upregulation of heme oxygenase-1, a specific marker of oxidative stress [103]. ….

Results have recently been reported concerning the toxicity towards cultured cells of aggregates of poly(Q) peptides that argue against a disease mechanism based on specific toxic features of the aggregates. These results indicate that there is a close relationship between the toxicity of proteins with poly(Q) extensions and their nuclear localisation. In addition they support the hypotheses that the toxicity of poly(Q) aggregates can be a consequence of altered interactions with nuclear coactivator or corepressor molecules including p53, CBP, Sp1 and TAF130, or of the interaction with transcription factors and nuclear coactivators, such as CBP, endowed with short poly(Q) stretches ([95] and references therein)…..

Concluding remarks
The data reported in the past few years strongly suggest that the conversion of normally soluble proteins into amyloid fibrils and the toxicity of small aggregates appearing during the early stages of the formation of the latter are common or generic features of polypeptide chains. Moreover, the molecular basis of this toxicity also appears to display common features between the different systems that have so far been studied. The ability of many, perhaps all, natural polypeptides to ‘misfold’ and convert into toxic aggregates under suitable conditions suggests that one of the most important driving forces in the evolution of proteins must have been the negative selection against sequence changes that increase the tendency of a polypeptide chain to aggregate. Nevertheless, as protein folding is a stochastic process, and no such process can be completely infallible, misfolded proteins or protein folding intermediates in equilibrium with the natively folded molecules must continuously form within cells. Thus mechanisms to deal with such species must have co-evolved with proteins. Indeed, it is clear that misfolding, and the associated tendency to aggregate, is kept under control by molecular chaperones, which render the resulting species harmless, assisting in their refolding or triggering their degradation by the cellular clearance machinery [166, 167, 168, 169, 170, 171, 172, 173, 175, 177, 178].

Misfolded and aggregated species are likely to owe their toxicity to the exposure on their surfaces of regions of proteins that are buried in the interior of the structures of the correctly folded native states. The exposure of large patches of hydrophobic groups is likely to be particularly significant as such patches favour the interaction of the misfolded species with cell membranes [44, 83, 89, 90, 91, 93]. Interactions of this type are likely to lead to the impairment of the function and integrity of the membranes involved, giving rise to a loss of regulation of the intracellular ion balance and redox status and eventually to cell death. In addition, misfolded proteins undoubtedly interact inappropriately with other cellular components, potentially giving rise to the impairment of a range of other biological processes. Under some conditions the intracellular content of aggregated species may increase directly, due to an enhanced propensity of incompletely folded or misfolded species to aggregate within the cell itself. This could occur as the result of the expression of mutational variants of proteins with decreased stability or cooperativity or with an intrinsically higher propensity to aggregate. It could also occur as a result of the overproduction of some types of protein, for example, because of other genetic factors or other disease conditions, or because of perturbations to the cellular environment that generate conditions favouring aggregation, such as heat shock or oxidative stress. Finally, the accumulation of misfolded or aggregated proteins could arise from the chaperone and clearance mechanisms becoming overwhelmed as a result of specific mutant phenotypes or of the general effects of ageing [173, 174].

The topics discussed in this review not only provide a great deal of evidence for the ‘new view’ that proteins have an intrinsic capability of misfolding and forming structures such as amyloid fibrils but also suggest that the role of molecular chaperones is even more important than was thought in the past. The role of these ubiquitous proteins in enhancing the efficiency of protein folding is well established [185]. It could well be that they are at least as important in controlling the harmful effects of misfolded or aggregated proteins as in enhancing the yield of functional molecules.

 

Nutritional Status is Associated with Faster Cognitive Decline and Worse Functional Impairment in the Progression of Dementia: The Cache County Dementia Progression Study

Sanders, Chelsea | Behrens, Stephanie | Schwartz, Sarah | Wengreen, Heidi | Corcoran, Chris D. | Lyketsos, Constantine G. | Tschanz, JoAnn T.
Journal of Alzheimer’s Disease 2016; 52(1):33-42.    http://content.iospress.com/articles/journal-of-alzheimers-disease/jad150528    http://dx.doi.org/10.3233/JAD-150528

Nutritional status may be a modifiable factor in the progression of dementia. We examined the association of nutritional status and rate of cognitive and functional decline in a U.S. population-based sample. Study design was an observational longitudinal study with annual follow-ups up to 6 years of 292 persons with dementia (72% Alzheimer’s disease, 56% female) in Cache County, UT using the Mini-Mental State Exam (MMSE), Clinical Dementia Rating Sum of Boxes (CDR-sb), and modified Mini Nutritional Assessment (mMNA). mMNA scores declined by approximately 0.50 points/year, suggesting increasing risk for malnutrition. Lower mMNA score predicted faster rate of decline on the MMSE at earlier follow-up times, but slower decline at later follow-up times, whereas higher mMNA scores had the opposite pattern (mMNA by time β = 0.22, p = 0.017; mMNA by time² β = –0.04, p = 0.04). Lower mMNA score was associated with greater impairment on the CDR-sb over the course of dementia (β = 0.35, p < 0.001). Assessment of malnutrition may be useful in predicting rates of progression in dementia and may provide a target for clinical intervention.
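Taking the two reported interaction coefficients at face value (and ignoring covariates, standard errors and the model's other terms), a short calculation shows why lower mMNA predicts faster decline early but slower decline later: the mMNA contribution to the annual MMSE slope reverses sign partway through follow-up.

```python
# Coefficients from the abstract: mMNA x time beta = 0.22,
# mMNA x time^2 beta = -0.04. Differentiating the quadratic trend in t,
# the per-point-of-mMNA contribution to d(MMSE)/dt is 0.22 + 2*(-0.04)*t.
b_time, b_time2 = 0.22, -0.04

def mmna_slope_factor(t_years):
    """Per-point-of-mMNA contribution to the MMSE rate of change at time t."""
    return b_time + 2 * b_time2 * t_years

crossover = -b_time / (2 * b_time2)  # time at which the effect reverses sign
print(f"effect reverses at ~{crossover:.2f} years")  # 2.75
for t in (0, 2, 4, 6):
    print(t, round(mmna_slope_factor(t), 2))
```

So under the published point estimates the nutritional-status effect on cognitive slope flips sign just before year three of follow-up, which matches the abstract's description of opposite patterns at earlier versus later visits.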

 

Shared Genetic Risk Factors for Late-Life Depression and Alzheimer’s Disease

Ye, Qing | Bai, Feng | Zhang, Zhijun
Journal of Alzheimer’s Disease 2016; 52(1): 1-15.    http://dx.doi.org/10.3233/JAD-151129

Background: Considerable evidence has been reported for the comorbidity between late-life depression (LLD) and Alzheimer’s disease (AD), both of which are very common in the general elderly population and represent a large burden on the health of the elderly. The pathophysiological mechanisms underlying the link between LLD and AD are poorly understood. Because both LLD and AD can be heritable and are influenced by multiple risk genes, shared genetic risk factors between LLD and AD may exist. Objective: The objective is to review the existing evidence for genetic risk factors that are common to LLD and AD and to outline the biological substrates proposed to mediate this association. Methods: A literature review was performed. Results: Genetic polymorphisms of brain-derived neurotrophic factor, apolipoprotein E, interleukin 1-beta, and methylenetetrahydrofolate reductase have been demonstrated to confer increased risk to both LLD and AD by studies examining either LLD or AD patients. These results contribute to the understanding of pathophysiological mechanisms that are common to both of these disorders, including deficits in nerve growth factors, inflammatory changes, and dysregulation mechanisms involving lipoprotein and folate. Other conflicting results have also been reviewed, and few studies have investigated the effects of the described polymorphisms on both LLD and AD. Conclusion: The findings suggest that common genetic pathways may underlie LLD and AD comorbidity. Studies to evaluate the genetic relationship between LLD and AD may provide insights into the molecular mechanisms that trigger disease progression as the population ages.

 

Association of Vitamin B12, Folate, and Sulfur Amino Acids With Brain Magnetic Resonance Imaging Measures in Older Adults: A Longitudinal Population-Based Study

B. Hooshmand, F. Mangialasche, G. Kalpouzos, et al.
JAMA Psychiatry. Published online April 27, 2016.    http://dx.doi.org/10.1001/jamapsychiatry.2016.0274

Importance  Vitamin B12, folate, and sulfur amino acids may be modifiable risk factors for structural brain changes that precede clinical dementia.

Objective  To investigate the association of circulating levels of vitamin B12, red blood cell folate, and sulfur amino acids with the rate of total brain volume loss and the change in white matter hyperintensity volume as measured by fluid-attenuated inversion recovery in older adults.

Design, Setting, and Participants  The magnetic resonance imaging subsample of the Swedish National Study on Aging and Care in Kungsholmen, a population-based longitudinal study in Stockholm, Sweden, was conducted in 501 participants aged 60 years or older who were free of dementia at baseline. A total of 299 participants underwent repeated structural brain magnetic resonance imaging scans from September 17, 2001, to December 17, 2009.

Main Outcomes and Measures  The rate of brain tissue volume loss and the progression of total white matter hyperintensity volume.

Results  In the multi-adjusted linear mixed models, among 501 participants (300 women [59.9%]; mean [SD] age, 70.9 [9.1] years), higher baseline vitamin B12 and holotranscobalamin levels were associated with a decreased rate of total brain volume loss during the study period: for each increase of 1 SD, β (SE) was 0.048 (0.013) for vitamin B12 (P < .001) and 0.040 (0.013) for holotranscobalamin (P = .002). Increased total homocysteine levels were associated with faster rates of total brain volume loss in the whole sample (β [SE] per 1-SD increase, –0.035 [0.015]; P = .02) and with the progression of white matter hyperintensity among participants with systolic blood pressure greater than 140 mm Hg (β [SE] per 1-SD increase, 0.000019 [0.00001]; P = .047). No longitudinal associations were found for red blood cell folate and other sulfur amino acids.
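As a reminder of what "β (SE) per 1-SD increase" means in these models, the toy regression below standardises an invented vitamin B12 variable and recovers a per-SD slope. All numbers are simulated for illustration; none are SNAC-K data.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Beta per 1-SD increase": standardise the predictor so that a unit step
# equals one standard deviation of its distribution.
b12 = rng.normal(350.0, 120.0, 400)     # invented B12 values (pmol/L scale)
slope_true = 0.048                      # per-SD effect quoted in the abstract
z = (b12 - b12.mean()) / b12.std()      # standardised predictor
volume_change = slope_true * z + rng.normal(0.0, 0.2, 400)  # simulated outcome

beta_hat = np.polyfit(z, volume_change, 1)[0]  # slope per 1-SD of B12
print(f"estimated per-SD effect: {beta_hat:.3f}")
```

Standardising this way lets effects of predictors measured in very different units (B12, holotranscobalamin, homocysteine) be compared on a common scale, which is why the paper reports all associations per 1-SD increase.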

Conclusions and Relevance  This study suggests that both vitamin B12 and total homocysteine concentrations may be related to accelerated aging of the brain. Randomized clinical trials are needed to determine the importance of vitamin B12 supplementation on slowing brain aging in older adults.

 

 

Notes from Kurzweil

This vitamin stops the aging process in organs, say Swiss researchers

A potential breakthrough for regenerative medicine, pending further studies

http://www.kurzweilai.net/this-vitamin-stops-the-aging-process-in-organs-say-swiss-researchers

Improved muscle stem cell numbers and muscle function in NR-treated aged mice: Newly regenerated muscle fibers 7 days after muscle damage in aged mice (left: control group; right: fed NR). (Scale bar = 50 μm). (credit: Hongbo Zhang et al./Science) http://www.kurzweilai.net/images/improved-muscle-fibers.png

EPFL researchers have restored the ability of organs in mice to regenerate, and extended the animals’ lifespan, simply by administering nicotinamide riboside (NR) to them.

NR has been shown in previous studies to be effective in boosting metabolism and treating a number of degenerative diseases. Now, an article by PhD student Hongbo Zhang published in Science also describes the restorative effects of NR on the functioning of stem cells for regenerating organs.

As in all mammals, as mice age, the regenerative capacity of certain organs (such as the liver and kidneys) and muscles (including the heart) diminishes. Their ability to repair these tissues following an injury is also affected. This leads to many of the disorders typical of aging.

Mitochondria → stem cells → organs

To understand how the regeneration process deteriorates with age, Zhang teamed up with colleagues from ETH Zurich, the University of Zurich, and universities in Canada and Brazil. By using several biomarkers, they were able to identify the molecular chain that regulates how mitochondria — the “powerhouse” of the cell — function and how they change with age. “We were able to show for the first time that their ability to function properly was important for stem cells,” said Johan Auwerx, the study’s senior author.

Under normal conditions, these stem cells, reacting to signals sent by the body, regenerate damaged organs by producing new specific cells. At least in young bodies. “We demonstrated that fatigue in stem cells was one of the main causes of poor regeneration or even degeneration in certain tissues or organs,” said Zhang.

How to revitalize stem cells

Which is why the researchers wanted to “revitalize” stem cells in the muscles of elderly mice. And they did so by precisely targeting the molecules that help the mitochondria to function properly. “We gave nicotinamide riboside to 2-year-old mice, which is an advanced age for them,” said Zhang.

“This substance, which is close to vitamin B3, is a precursor of NAD+, a molecule that plays a key role in mitochondrial activity. And our results are extremely promising: muscular regeneration is much better in mice that received NR, and they lived longer than the mice that didn’t get it.”

Parallel studies have revealed a comparable effect on stem cells of the brain and skin. “This work could have very important implications in the field of regenerative medicine,” said Auwerx. This work on the aging process also has potential for treating diseases that affect young people and can be fatal, such as muscular dystrophy (myopathy).

So far, no negative side effects have been observed following the use of NR, even at high doses. But since it appears to boost the functioning of all cells, it could also boost pathological ones, so further in-depth studies are required.

Abstract of NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice

Adult stem cells (SCs) are essential for tissue maintenance and regeneration yet are susceptible to senescence during aging. We demonstrate the importance of the amount of the oxidized form of cellular nicotinamide adenine dinucleotide (NAD+) and its impact on mitochondrial activity as a pivotal switch to modulate muscle SC (MuSC) senescence. Treatment with the NAD+ precursor nicotinamide riboside (NR) induced the mitochondrial unfolded protein response (UPRmt) and synthesis of prohibitin proteins, and this rejuvenated MuSCs in aged mice. NR also prevented MuSC senescence in the Mdx mouse model of muscular dystrophy. We furthermore demonstrate that NR delays senescence of neural SCs (NSCs) and melanocyte SCs (McSCs), and increased mouse lifespan. Strategies that conserve cellular NAD+ may reprogram dysfunctional SCs and improve lifespan in mammals.

references:

Hongbo Zhang, Dongryeol Ryu, Yibo Wu, Karim Gariani, Xu Wang, Peiling Luan, Davide D’Amico, Eduardo R. Ropelle, Matthias P. Lutolf, Ruedi Aebersold, Kristina Schoonjans, Keir J. Menzies, Johan Auwerx. NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice. Science 2016. DOI: 10.1126/science.aaf2693

 

Enhancer–promoter interactions are encoded by complex genomic signatures on looping chromatin

Sean Whalen, Rebecca M. Truty & Katherine S. Pollard
Nature Genetics 2016; 48:488–496. doi:10.1038/ng.3539

Discriminating the gene target of a distal regulatory element from other nearby transcribed genes is a challenging problem with the potential to illuminate the causal underpinnings of complex diseases. We present TargetFinder, a computational method that reconstructs regulatory landscapes from diverse features along the genome. The resulting models accurately predict individual enhancer–promoter interactions across multiple cell lines with a false discovery rate up to 15 times smaller than that obtained using the closest gene. By evaluating the genomic features driving this accuracy, we uncover interactions between structural proteins, transcription factors, epigenetic modifications, and transcription that together distinguish interacting from non-interacting enhancer–promoter pairs. Most of this signature is not proximal to the enhancers and promoters but instead decorates the looping DNA. We conclude that complex but consistent combinations of marks on the one-dimensional genome encode the three-dimensional structure of fine-scale regulatory interactions.
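As a rough sketch of the classification setup the abstract describes (our illustration, not the authors' code: the data here are synthetic stand-ins, and TargetFinder itself uses boosted trees over thousands of measured genomic features), one can frame the task as supervised learning on feature vectors drawn from the enhancer, the promoter, and the intervening "window" DNA:

```python
import math, random

random.seed(0)
N_MARKS = 5  # marks per region; regions: enhancer, promoter, window (15 features)

def make_pair():
    """Synthetic enhancer-promoter pair: 15 feature values plus a 0/1 label."""
    x = [random.gauss(0, 1) for _ in range(3 * N_MARKS)]
    # Let the "window" features (last block) drive the label, mimicking the
    # paper's finding that marks on the looping DNA distinguish interacting pairs.
    score = sum(x[2 * N_MARKS:]) + random.gauss(0, 1)
    return x, 1 if score > 0 else 0

data = [make_pair() for _ in range(1500)]
train, test = data[:1000], data[1000:]

# Logistic regression fit by batch gradient descent (a stand-in for the
# paper's boosted-tree ensemble).
w, b, lr = [0.0] * (3 * N_MARKS), 0.0, 0.1

def predict(x):
    """Probability that a pair interacts, under the current weights."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

for _ in range(200):
    gw, gb = [0.0] * len(w), 0.0
    for x, y in train:
        err = predict(x) - y
        for i, xi in enumerate(x):
            gw[i] += err * xi
        gb += err
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(train)

acc = sum((predict(x) > 0.5) == (y == 1) for x, y in test) / len(test)
print(f"held-out accuracy: {acc:.2f}")
```

Because the label depends only on the window block, the learned weights concentrate there, echoing the paper's conclusion that most of the predictive signature decorates the looping DNA rather than the anchors themselves.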

Read Full Post »

Effect of mitochondrial stress on epigenetic modifiers

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Early Mitochondrial Stress Alters Epigenetics, Secures Lifelong Health Benefits

GEN 5/3/2016  http://www.genengnews.com/gen-news-highlights/early-mitochondrial-stress-alters-epigenetics-secures-lifelong-health-benefits/81252685/

A little adversity builds character, or so the saying goes. True or not, the saying does seem an apt description of a developmental phenomenon that shapes gene expression. While it knows nothing of character, the gene expression apparatus appears to respond well to short-term mitochondrial stress that occurs early in development. In fact, transient stress seems to result in lasting benefits. These benefits, which include improved metabolic function and increased longevity, have been observed in both worms and mice, and may even occur—or be made to occur—in humans.

Gene expression is known to be subject to reprogramming by epigenetic modifiers, but such modifiers generally affect metabolism or lifespan, not both. A new set of epigenetic modifiers, however, has been found to trigger changes that do just that—both improve metabolism and extend lifespan.

Scientists based at the University of California, Berkeley, and the École Polytechnique Fédérale de Lausanne (EPFL) have discovered enzymes that are ramped up after mild stress during early development and continue to affect the expression of genes throughout the animal’s life. When the scientists looked at strains of inbred mice that have radically different lifespans, those with the longest lifespans had significantly higher expression of these enzymes than did the short-lived mice.

“Two of the enzymes we discovered are highly, highly correlated with lifespan; it is the biggest genetic correlation that has ever been found for lifespan in mice, and they’re both naturally occurring variants,” said Andrew Dillin, a UC Berkeley professor of molecular and cell biology. “Based on what we see in worms, boosting these enzymes could reprogram your metabolism to create better health, with a possible side effect of altering lifespan.”

Details of the work, which appeared online April 29 in the journal Cell, are presented in a pair of papers. One paper (“Two Conserved Histone Demethylases Regulate Mitochondrial Stress-Induced Longevity”) resulted from an effort led by Dillin and the EPFL’s Johan Auwerx. The other paper (“Mitochondrial Stress Induces Chromatin Reorganization to Promote Longevity and UPRmt”) resulted from an effort led by Dillin and his UC Berkeley colleague Barbara Meyer.

According to these papers, mitochondrial stress activates enzymes in the brain that affect DNA folding, exposing a segment of DNA that contains the 1500 genes involved in the work of the mitochondria. A second set of enzymes then tags these genes, affecting their activation for much or all of the lifetime of the animal and causing permanent changes in how the mitochondria generates energy.

The first set of enzymes—methylases, in particular LIN-65—add methyl groups to the DNA, which can silence promoters and thus suppress gene expression. By also opening up the mitochondrial genes, these methylases set the stage for the second set of enzymes—demethylases, in this case jmjd-1.2 and jmjd-3.1—to ramp up transcription of the mitochondrial genes. When the researchers artificially increased production of the demethylases in worms, all the worms lived longer, a result identical to what is observed after mitochondrial stress.

“By changing the epigenetic state, these enzymes are able to switch genes on and off,” Dillin noted. This happens only in the brain of the worm, however, in areas that sense hunger or satiety. “These genes are expressed in neurons that are sensing the nutritional status of the animal, and these signals emanate out to the periphery to change peripheral metabolism,” he continued.

When the scientists profiled enzymes in short- and long-lived mice, they found upregulation of these genes in the brains of long-lived mice, but not in other tissues or in the brains of short-lived mice. “These genes are expressed in the hypothalamus, exactly where, when you eat, the signals are generated that tell you that you are full. And when you are hungry, signals in that region tell you to go and eat,” Dillin explained. “These genes are all involved in peripheral feedback.”

Among the mitochondrial genes activated by these enzymes are those involved in the body’s response to proteins that unfold, which is a sign of stress. Increased activity of the proteins that refold other proteins is another hallmark of longer life.

These observations suggest that the reversal of aging by epigenetic enzymes could also take place in humans.

“It seems that, while extreme metabolic stress can lead to problems later in life, mild stress early in development says to the body, ‘Whoa, things are a little bit off-kilter here, let’s try to repair this and make it better.’ These epigenetic switches keep this up for the rest of the animal’s life,” Dillin stated.

 

Two Conserved Histone Demethylases Regulate Mitochondrial Stress-Induced Longevity

Carsten Merkwirth, Virginija Jovaisaite, Jenni Durieux, …, Reuben J. Shaw, Johan Auwerx, Andrew Dillin

Highlights
  • H3K27 demethylases jmjd-1.2 and jmjd-3.1 are required for ETC-mediated longevity
  • jmjd-1.2 and jmjd-3.1 extend lifespan and are sufficient for UPRmt activation
  • UPRmt is required for increased lifespan due to jmjd-1.2 or jmjd-3.1 overexpression
  • JMJD expression is correlated with UPRmt and murine lifespan in inbred BXD lines

Across eukaryotic species, mild mitochondrial stress can have beneficial effects on the lifespan of organisms. Mitochondrial dysfunction activates an unfolded protein response (UPRmt), a stress signaling mechanism designed to ensure mitochondrial homeostasis. Perturbation of mitochondria during larval development in C. elegans not only delays aging but also maintains UPRmt signaling, suggesting an epigenetic mechanism that modulates both longevity and mitochondrial proteostasis throughout life. We identify the conserved histone lysine demethylases jmjd-1.2/PHF8 and jmjd-3.1/JMJD3 as positive regulators of lifespan in response to mitochondrial dysfunction across species. Reduction of function of the demethylases potently suppresses longevity and UPRmt induction, while gain of function is sufficient to extend lifespan in a UPRmt-dependent manner. A systems genetics approach in the BXD mouse reference population further indicates conserved roles of the mammalian orthologs in longevity and UPRmt signaling. These findings illustrate an evolutionary conserved epigenetic mechanism that determines the rate of aging downstream of mitochondrial perturbations.


 

Mitochondrial Stress Induces Chromatin Reorganization to Promote Longevity and UPRmt
Ye Tian, Gilberto Garcia, Qian Bian, Kristan K. Steffen, Larry Joe, Suzanne Wolff, Barbara J. Meyer, Andrew Dillin
http://dx.doi.org/10.1016/j.cell.2016.04.011
Highlights
  • LIN-65 accumulates in the nucleus in response to mitochondrial stress
  • Mitochondrial stress-induced chromatin changes depend on MET-2 and LIN-65
  • LIN-65 and DVE-1 exhibit interdependence in nuclear accumulation
  • met-2 and atfs-1 act in parallel to affect mitochondrial stress-induced longevity

Organisms respond to mitochondrial stress through the upregulation of an array of protective genes, often perpetuating an early response to metabolic dysfunction across a lifetime. We find that mitochondrial stress causes widespread changes in chromatin structure through histone H3K9 di-methylation marks traditionally associated with gene silencing. Mitochondrial stress response activation requires the di-methylation of histone H3K9 through the activity of the histone methyltransferase met-2 and the nuclear co-factor lin-65. While globally the chromatin becomes silenced by these marks, remaining portions of the chromatin open up, at which point the binding of canonical stress-responsive factors such as DVE-1 occurs. Thus, a metabolic stress response is established and propagated into adulthood of animals through specific epigenetic modifications that allow for selective gene expression and lifespan extension.

Siddhartha Mukherjee’s Writing Career Just Got Dealt a Sucker Punch
Author: Theral Timpson

Siddhartha Mukherjee won the 2011 Pulitzer Prize in non-fiction for his book, The Emperor of All Maladies.  The book has received widespread acclaim among lay audiences, physicians, and scientists alike.  Last year the book was turned into a special PBS series.  But, according to a slew of scientists, we should all be skeptical of his next book scheduled to hit book shelves this month, The Gene: An Intimate History.

Publishing an article on epigenetics in the New Yorker this week–perhaps a selection from his new book–Mukherjee has waltzed into one of the most active scientific debates in all of biology: that of gene regulation, or epigenetics.

Jerry Coyne, the evolutionary biologist known for keeping journalists honest, has published a two-part critique of Mukherjee’s New Yorker piece.  The first part–widely tweeted yesterday–is a list of quotes from Coyne’s colleagues and those who have written in to the New Yorker, including two Nobel prize winners, Wally Gilbert and Sidney Altman, offering some very unfriendly sentences.

Wally Gilbert: “The New Yorker article is so wildly wrong that it defies rational analysis.”

Sidney Altman:  “I am not aware that there is such a thing as an epigenetic code.  It is unfortunate to inflict this article, without proper scientific review, on the audience of the New Yorker.”

The second part is a thorough scientific rebuttal of the Mukherjee piece.  It all serves as a great drama about one of the most contested ideas in biology, and as a cautionary tale to journalists, even experienced writers such as Mukherjee, about the dangers of wading into scientific arguments.  Readers may remember that a few years ago science writer David Dobbs similarly skated into the same topic with his piece, Die, Selfish Gene, Die, which raised a similar shitstorm, much of it from Coyne.

Mukherjee’s mistake is in giving credence to only one side of a very fierce debate–that the environment causes changes in the genome which can be passed on; another kind of evolution–as though it were settled science.   Either Mukherjee, a physician coming off a successful book and PBS miniseries on cancer, is setting himself up as a scientist, or he has been a truly naive science reporter.   If he got this chapter so wrong, what does it mean for an entire book on the gene?

Coyne quotes one of his colleagues who raised some questions about the New Yorker’s science reporting, one particular question we’ve been asking here at Mendelspod.  How do we know what we know?  Does science now have an edge on any other discipline for being able to create knowledge?

Coyne’s colleague is troubled by science coverage in the New Yorker, and goes so far as to write that the New Yorker has been waging a “war on behalf of cultural critics and literary intellectuals against scientists and technologists.”

From my experience, it’s not quite that tidy.  First of all, the New Yorker is the best writing I read each week.  Period.  Second, I haven’t found their science writing to have the slant claimed in the quote above.  For example, most other mainstream outlets–including the New York Times with the Amy Harmon pieces–have given the anti-GMO crowd an equal say in the mistaken search for a “balance” on whether GMOs are harmful.  (Remember Jon Stewart’s criticism of Fox News?  That they present two sides as equivalent even when there is no equivalence?)

But the New Yorker has not fallen into this trap on GMOs, and most of their pieces on the topic–mainly by Michael Specter–have been decidedly pro-science and therefore decidedly pro-GMO.

So what led Mukherjee to play scientist as well as journalist?  There’s no question about whether I enjoy his prose.  His writing beautifully whisks me away so that I don’t feel that I’m really working to understand.  There is a poetic complexity that constantly brings different threads effortlessly together, weaving them into the same light.  At one point he uses the metaphor of a web for the genome, with the epigenome being the stuff that sticks to the web.  He borrows the metaphor from the Hindu notion of “being”, or jaal.

“Genes form the threads of the web; the detritus that adheres to it transforms every web into a singular being.”

There have been a few writers on Twitter defending Mukherjee’s piece.  Tech Review’s Antonio Regalado called Coyne and his colleagues “tedious literalists” who have an “issue with epigenetic poetry.”

At his best, Mukherjee can take us down the sweet alleys of his metaphors and family stories with a new curiosity for the scientific truth.  He can hold a mirror up to scientists, or put the spotlight on their work.   At their worst, Coyne and his scientific colleagues can reek of a fear of language and therefore metaphor.  The always outspoken scientist and author, Richard Dawkins, who made his name by personifying the gene, was quick to personify epigenetics in a tweet:   “It’s high time the 15 minutes of undeserved fame for “epigenetics” came to an overdue end.”  Dawkins is that rare scientist who has consistently been as comfortable with rhetoric and language as he is with data.

Hats off to Coyne who reminds us that a metaphor–however lovely–does not some science make. If Mukherjee wants to play scientist, let him create and gather data. If it’s the role of science journalist he wants, let him collect all the science he can before he begins to pour it into his poetry.

 

Same but Different  

How epigenetics can blur the line between nature and nurture.

Annals of Science, MAY 2, 2016 ISSUE. By Siddhartha Mukherjee

http://www.newyorker.com/wp-content/uploads/2016/05/160502_r28072-1200.jpg

The author’s mother (right) and her twin are a study in difference and identity. CREDIT: PHOTOGRAPH BY DAYANITA SINGH FOR THE NEW YORKER

On October 6, 1942, my mother was born twice in Delhi. Bulu, her identical twin, came first, placid and beautiful. My mother, Tulu, emerged several minutes later, squirming and squalling. The midwife must have known enough about infants to recognize that the beautiful are often the damned: the quiet twin, on the edge of listlessness, was severely undernourished and had to be swaddled in blankets and revived.

The first few days of my aunt’s life were the most tenuous. She could not suckle at the breast, the story runs, and there were no infant bottles to be found in Delhi in the forties, so she was fed through a cotton wick dipped in milk, and then from a cowrie shell shaped like a spoon. When the breast milk began to run dry, at seven months, my mother was quickly weaned so that her sister could have the last remnants.

Tulu and Bulu grew up looking strikingly similar: they had the same freckled skin, almond-shaped face, and high cheekbones, unusual among Bengalis, and a slight downward tilt of the outer edge of the eye, something that Italian painters used to make Madonnas exude a mysterious empathy. They shared an inner language, as so often happens with twins; they had jokes that only the other twin understood. They even smelled the same: when I was four or five and Bulu came to visit us, my mother, in a bait-and-switch trick that amused her endlessly, would send her sister to put me to bed; eventually, searching in the half-light for identity and difference—for the precise map of freckles on her face—I would realize that I had been fooled.

But the differences were striking, too. My mother was boisterous. She had a mercurial temper that rose fast and died suddenly, like a gust of wind in a tunnel. Bulu was physically timid yet intellectually more adventurous. Her mind was more agile, her tongue sharper, her wit more lancing. Tulu was gregarious. She made friends easily. She was impervious to insults. Bulu was reserved, quieter, and more brittle. Tulu liked theatre and dancing. Bulu was a poet, a writer, a dreamer.

….. more

Why are identical twins alike? In the late nineteen-seventies, a team of scientists in Minnesota set out to determine how much these similarities arose from genes, rather than environments—from “nature,” rather than “nurture.” Scouring thousands of adoption records and news clips, the researchers gleaned a rare cohort of fifty-six identical twins who had been separated at birth. Reared in different families and different cities, often in vastly dissimilar circumstances, these twins shared only their genomes. Yet on tests designed to measure personality, attitudes, temperaments, and anxieties, they converged astonishingly. Social and political attitudes were powerfully correlated: liberals clustered with liberals, and orthodoxy was twinned with orthodoxy. The same went for religiosity (or its absence), even for the ability to be transported by an aesthetic experience. Two brothers, separated by geographic and economic continents, might be brought to tears by the same Chopin nocturne, as if responding to some subtle, common chord struck by their genomes.

One pair of twins both suffered crippling migraines, owned dogs that they had named Toy, married women named Linda, and had sons named James Allan (although one spelled the middle name with a single “l”). Another pair—one brought up Jewish, in Trinidad, and the other Catholic, in Nazi Germany, where he joined the Hitler Youth—wore blue shirts with epaulets and four pockets, and shared peculiar obsessive behaviors, such as flushing the toilet before using it. Both had invented fake sneezes to diffuse tense moments. Two sisters—separated long before the development of language—had invented the same word to describe the way they scrunched up their noses: “squidging.” Another pair confessed that they had been haunted by nightmares of being suffocated by various metallic objects—doorknobs, fishhooks, and the like.

The Minnesota twin study raised questions about the depth and pervasiveness of qualities specified by genes: Where in the genome, exactly, might one find the locus of recurrent nightmares or of fake sneezes? Yet it provoked an equally puzzling converse question: Why are identical twins different? Because, you might answer, fate impinges differently on their bodies. One twin falls down the crumbling stairs of her Calcutta house and breaks her ankle; the other scalds her thigh on a tipped cup of coffee in a European station. Each acquires the wounds, calluses, and memories of chance and fate. But how are these changes recorded, so that they persist over the years? We know that the genome can manufacture identity; the trickier question is how it gives rise to difference.

….. more

But what turns those genes on and off, and keeps them turned on or off? Why doesn’t a liver cell wake up one morning and find itself transformed into a neuron? Allis unpacked the problem further: suppose he could find an organism with two distinct sets of genes—an active set and an inactive set—between which it regularly toggled. If he could identify the molecular switches that maintain one state, or toggle between the two states, he might be able to identify the mechanism responsible for cellular memory. “What I really needed, then, was a cell with these properties,” he recalled when we spoke at his office a few weeks ago. “Two sets of genes, turned ‘on’ or ‘off’ by some signal.”

more…

“Histones had been known as part of the inner scaffold for DNA for decades,” Allis went on. “But most biologists thought of these proteins merely as packaging, or stuffing, for genes.” When Allis gave scientific seminars in the early nineties, he recalled, skeptics asked him why he was so obsessed with the packing material, the stuff in between the DNA.  …. A skein of silk tangled into a ball has very different properties from that same skein extended; might the coiling or uncoiling of DNA change the activity of genes?

In 1996, Allis and his research group deepened this theory with a seminal discovery. “We became interested in the process of histone modification,” he said. “What is the signal that changes the structure of the histone so that DNA can be packed into such radically different states? We finally found a protein that makes a specific chemical change in the histone, possibly forcing the DNA coil to open. And when we studied the properties of this protein it became quite clear that it was also changing the activity of genes.” The coils of DNA seemed to open and close in response to histone modifications—inhaling, exhaling, inhaling, like life.

Allis walked me to his lab, a fluorescent-lit space overlooking the East River, divided by wide, polished-stone benches. A mechanical stirrer, whirring in a corner, clinked on the edge of a glass beaker. “Two features of histone modifications are notable,” Allis said. “First, changing histones can change the activity of a gene without affecting the sequence of the DNA.” It is, in short, formally epi-genetic, just as Waddington had imagined. “And, second, the histone modifications are passed from a parent cell to its daughter cells when cells divide. A cell can thus record ‘memory,’ and not just for itself but for all its daughter cells.”

…..

 

 

The New Yorker screws up big time with science: researchers criticize the Mukherjee piece on epigenetics

Jerry Coyne
https://whyevolutionistrue.wordpress.com/2016/05/05/the-new-yorker-screws-up-big-time-with-science-researchers-criticize-the-mukherjee-piece-on-epigenetics/

Abstract: This is a two-part post about a science piece on gene regulation that just appeared in the New Yorker. Today I give quotes from scientists criticizing that piece; tomorrow I’ll present a semi-formal critique of the piece by two experts in the field.

Yesterday I gave readers an assignment: read the new New Yorker piece by Siddhartha Mukherjee about epigenetics. The piece, called “Same but different” (subtitle: “How epigenetics can blur the line between nature and nurture”) was brought to my attention by two readers, both of whom praised it.  Mukherjee, a physician, is well known for writing the Pulitzer-Prize-winning book (2011) The Emperor of All Maladies: A Biography of Cancer. (I haven’t read it yet, but it’s on my list.)  Mukherjee has a new book that will be published in May: The Gene: An Intimate History. As I haven’t seen it, the New Yorker piece may be an excerpt from this book.

Everyone I know who has read The Emperor of All Maladies gives it high praise. I wish I could say the same for Mukherjee’s New Yorker piece. When I read it at the behest of the two readers, I found his analysis of gene regulation incomplete and superficial. Although I’m not an expert in that area, I knew that there was a lot of evidence that regulatory proteins called “transcription factors”, and not “epigenetic markers” (see discussion of this term tomorrow) or modified histones—the factors emphasized by Mukherjee—played hugely important roles in gene regulation. The speculations at the end of the piece about “Lamarckian evolution” via environmentally induced epigenetic changes in the genome were also unfounded, for we have no evidence for that kind of adaptive evolution. Mukherjee does, however, mention that lack of evidence, though I wish he’d done so more strongly given that environmental modification of DNA bases is constantly touted as an important and neglected factor in evolution.

Unbeknownst to me, there was a bit of a kerfuffle going on in the community of scientists who study gene regulation, with many of them finding serious mistakes and omissions in Mukherjee’s piece.  There appears to have been some back-and-forth emailing among them, and several wrote letters to the New Yorker, urging them to correct the misconceptions, omissions, and scientific errors in “Same but different.” As I understand it, both Mukherjee and the New Yorker simply batted these criticisms away, and, as far as I know, will not publish any corrections.  So today and tomorrow I’ll present the criticisms here, just so they’ll be on the record.

Because Mukherjee writes very well, and because even educated laypeople won’t know the story of gene regulation revealed over the last few decades, they may not see the big lacunae in his piece. It is, then, important to set matters straight, for at least we should know what science has told us about how genes are turned on and off. The criticism of Mukherjee’s piece, coming from scientists who really are experts in gene regulation, shows a lack of care on the part of Mukherjee and the New Yorker: both a superficial and misleading treatment of the state of the science, and a failure of the magazine to properly vet this piece (I have no idea whether they had it “refereed” not just by editors but by scientists not mentioned in the piece).

Let me add one thing about science and the New Yorker. I believe I’ve said this before, but the way the New Yorker treats science is symptomatic of the “two cultures” problem. This is summarized in an email sent me a while back by a colleague, which I quote with permission:

The New Yorker is fine with science that either serves a literary purpose (doctors’ portraits of interesting patients) or a political purpose (environmental writing with its implicit critique of modern technology and capitalism). But the subtext of most of its coverage (there are exceptions) is that scientists are just a self-interested tribe with their own narrative and no claim to finding the truth, and that science must concede the supremacy of literary culture when it comes to anything human, and never try to submit human affairs to quantification or consilience with biology. Because the magazine is undoubtedly sophisticated in its writing and editing they don’t flaunt their postmodernism or their literary-intellectual proprietariness, but once you notice it you can make sense of a lot of their material.

. . . Obviously there are exceptions – Atul Gawande is consistently superb – but as soon as you notice it, their guild war on behalf of cultural critics and literary intellectuals against scientists, technologists, and analytic scholars becomes apparent.

…. more

Researchers criticize the Mukherjee piece on epigenetics: Part 2

Trigger warning: Long science post!

Yesterday I provided a bunch of scientists’ reactions—and these were big names in the field of gene regulation—to Siddhartha Mukherjee’s ill-informed piece in The New Yorker, “Same but different” (subtitle: “How epigenetics can blur the line between nature and nurture”). Today, in part 2, I provide a sentence-by-sentence analysis and reaction by two renowned researchers in that area. We’ll start with a set of definitions (provided by the authors) that we need to understand the debate, and then proceed to the critique.

Let me add one thing to avoid confusion: everything below the line, including the definition (except for my one comment at the end) was written by Ptashne and Greally.

by Mark Ptashne and John Greally

Introduction

Ptashne is The Ludwig Professor of Molecular Biology at the Memorial Sloan Kettering Cancer Center in New York. He wrote A Genetic Switch, now in its third edition, which describes the principles of gene regulation and the workings of a ‘switch’; and, with Alex Gann, Genes and Signals, which extends these principles and ideas to higher organisms and to other cellular processes as well.  John Greally is the Director of the Center for Epigenomics at the Albert Einstein College of Medicine in New York.

 

The New Yorker  (May 2, 2016) published an article entitled “Same But Different” written by Siddhartha Mukherjee.  As readers will have gathered from the letters posted yesterday, there is a concern that the article is misleading, especially for a non-scientific audience. The issue concerns our current understanding of “gene regulation” and how that understanding has been arrived at.

First some definitions/concepts:

Gene regulation refers to the “turning on and off of genes”.  The primary event in turning a gene “on” is to transcribe (copy) it into messenger RNA (mRNA). That mRNA is then decoded, usually, into a specific protein.  Genes are transcribed by the enzyme called RNA polymerase.

Development:  the process in which a fertilized egg (e.g., a human egg) divides many times and eventually forms an organism.  During this process, many of the roughly 23,000 genes of a human are turned “on” or “off” in different combinations, at different times and places in the developing organism. The process produces many different cell types in different organs (e.g. liver and brain), but all retain the original set of genes.

Transcription factors: proteins that bind to specific DNA sequences near specific genes and turn transcription of those genes on and off. A transcriptional ‘activator’, for example, bears two surfaces: one binds a specific sequence in DNA, and the other binds to, and thereby recruits to the gene, protein complexes that include RNA polymerase. It is widely acknowledged that the identity of a cell in the body depends on the array of transcription factors present in the cell, and the cell’s history.  RNA molecules can also recognize specific genomic sequences, and they too sometimes work as regulators.  Neither transcription factors nor these kinds of RNA molecules – the fundamental regulators of gene expression and development – are mentioned in the New Yorker article.

Signals: these come in many forms (small molecules like estrogen; larger molecules, often proteins such as cytokines) that determine the ability of transcription factors to work.  For example, estrogen binds directly to a transcription factor (the estrogen receptor) and, by changing its shape, permits it to bind DNA and activate transcription.

“Memory”: a dividing cell can (and often does) produce daughters that are identical and that express the same genes as the mother cell.  This occurs because the transcription factors present in the mother cell are passively transmitted to the daughters as the cell divides, and they go to work in their new contexts as before.  To make two different daughters, the cell must distribute its transcription factors asymmetrically.

Positive Feedback: An activator can maintain its own expression by positive feedback.  This requires, simply, that a copy of the DNA sequence to which the activator binds is present near its own gene.  Expression of the activator then becomes self-perpetuating.  The activator (of which there now are many copies in the cell) activates other target genes as it maintains its own expression.  This kind of ‘memory circuit’, first described in bacteria, is found in higher organisms as well.  Positive feedback can explain how a fully differentiated cell (that is, a cell that has reached its developmental endpoint) maintains its identity.
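The memory circuit described above can be sketched with a toy model: an activator that stimulates its own transcription settles into either a low ("off") or high ("on") steady state depending on whether it was ever induced. This is an illustrative sketch only; the Hill-function parameters below are arbitrary and not taken from any real gene circuit.

```python
# Toy model of a transcriptional positive-feedback "memory circuit".
# NOTE: purely illustrative -- parameter values are arbitrary, not taken
# from any real gene circuit.

def simulate(a0, basal=0.05, vmax=1.0, K=0.5, n=4, gamma=1.0,
             dt=0.01, steps=20000):
    """Euler-integrate dA/dt = basal + vmax*A^n/(K^n + A^n) - gamma*A:
    an activator A stimulates its own transcription (cooperative Hill
    term) and is removed by degradation/dilution at rate gamma."""
    a = a0
    for _ in range(steps):
        production = basal + vmax * a**n / (K**n + a**n)
        a += dt * (production - gamma * a)
    return a

low = simulate(a0=0.0)   # lineage in which the activator was never induced
high = simulate(a0=1.0)  # lineage in which the activator was once induced

print(f"steady state starting from A=0: {low:.2f}")   # stays 'off' (~0.05)
print(f"steady state starting from A=1: {high:.2f}")  # stays 'on' (~0.99)
```

Both lineages carry identical "genes" (the same equation); only the inherited level of the activator differs, yet each state perpetuates itself through further divisions, which is the point of the memory argument.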

Nucleosomes:  DNA in higher organisms (eukaryotes) is wrapped, like beads on a string, around certain proteins (called histones), to form nucleosomes.  The histones are subject to enzymatic modifications: e.g., acetyl, methyl, phosphate, etc. groups can be added to these structures. In bacteria there are no nucleosomes, and the DNA is more or less ‘naked’.

“Epigenetic” modifications: please don’t worry about the word “epigenetic”; it is misused in any case. What Mukherjee refers to by this term are the histone modifications mentioned above, and a modification to DNA itself: the addition of methyl groups. Keep in mind that the organisms that have taught us the most about development – flies (Drosophila) and worms (C. elegans) – do not have the enzymes required for DNA methylation. That does not mean that DNA methylation cannot do interesting things in humans, for example, but it is obviously not at the heart of gene regulation.

Specificity: Development requires the highly specific sequential turning on and off of sets of genes.  Transcription factors and RNA supply this specificity, but enzymes that impart modifications to histones cannot: every nucleosome (and hence every gene) appears the same to the enzyme.  Thus such enzymes cannot pick out particular nucleosomes associated with particular genes to modify.  Histone modifications might be imagined to convey ‘memory’ as cells divide – but there are no convincing indications that this happens, nor are there molecular models that might explain why they would have the imputed effects.

Analysis and critique of Mukherjee’s article

The picture we have just sketched has taken the combined efforts of many scientists over 50 years to develop.  So what, then, is the problem with the New Yorker article?

There are two: first, the picture we have just sketched, emphasizing the primary role of transcription factors and RNA, is absent.  Second, that picture is replaced by highly dubious speculations, some of which don’t make sense, and none of which has been shown to work as imagined in the article.

(Quotes from the Mukherjee article are indented and in plain text; they are followed by comments, flush left and in bold, by Ptashne and Greally.)

In 1978, having obtained a Ph.D. in biology at Indiana University, Allis began to tackle a problem that had long troubled geneticists and cell biologists: if all the cells in the body have the same genome, how does one become a nerve cell, say, and another a blood cell, which looks and functions very differently?

The problems referred to were recognized long before 1978.  In fact, these were exactly the problems that the great French scientists François Jacob and Jacques Monod took on in the 1950s-60s.  In a series of brilliant experiments, Jacob and Monod showed that in bacteria, certain genes encode products that regulate (turn on and off) specific other genes.  Those regulatory molecules turned out to be proteins, some of which respond to signals from the environment.  Much of the story of modern biology has been figuring out how these proteins – in bacteria and in higher organisms – bind to and regulate specific genes.  Of note is that in higher organisms, the regulatory proteins look and act like those in bacteria, despite the fact that eukaryotic DNA is wrapped in nucleosomes whereas bacterial DNA is not.  We have also learned that certain RNA molecules can play a regulatory role, a phenomenon made possible by the fact that RNA molecules, like regulatory proteins, can recognize specific genomic sequences.

In the nineteen-forties, Conrad Waddington, an English embryologist, had proposed an ingenious answer: cells acquired their identities just as humans do—by letting nurture (environmental signals) modify nature (genes). For that to happen, Waddington concluded, an additional layer of information must exist within a cell—a layer that hovered, ghostlike, above the genome. This layer would carry the “memory” of the cell, recording its past and establishing its future, marking its identity and its destiny but permitting that identity to be changed, if needed. He termed the phenomenon “epigenetics”—“above genetics.”

This description greatly misrepresents the original concept.  Waddington argued that development proceeds not by the loss (or gain) of genes, which would be a “genetic” process, but rather that some genes would be selectively expressed in specific and complex cellular patterns as development proceeds.  He referred to this intersection of embryology (then called “epigenesis”) and genetics as “epigenetic”.  We now understand that regulatory proteins work in combinations to turn on and off genes, including their own genes, and that sometimes the regulatory proteins respond to signals sent by other cells.  It should be emphasized that Waddington never proposed any “ghost-like” layer of additional information hovering above the gene.  This is a later misinterpretation of a literal translation of the term epigenetics, with “epi-” meaning “above/upon” the genetic information encoded in DNA sequence.  Unfortunately, this new and pervasive definition encompasses all of transcriptional regulation and is of no practical value.

…..more

By 2000, Allis and his colleagues around the world had identified a gamut of proteins that could modify histones, and so modulate the activity of genes. Other systems, too, that could scratch different kinds of code on the genome were identified (some of these discoveries predating the identification of histone modifications). One involved the addition of a chemical side chain, called a methyl group, to DNA. The methyl groups hang off the DNA string like Christmas ornaments, and specific proteins add and remove the ornaments, in effect “decorating” the genome. The most heavily methylated parts of the genome tend to be dampened in their activity.

It is true that enzymes that modify histones have been found—lots of them.  A striking problem is that, after all this time, it is not at all clear what the vast majority of these modifications do.  When these enzymatic activities are eliminated by mutation of their active sites (a task substantially easier to accomplish in yeast than in higher organisms) they mostly have little or no effect on transcription.  It is not even clear that histones are the biologically relevant substrates of most of these enzymes.  

 In the ensuing decade, Allis wrote enormous, magisterial papers in which a rich cast of histone-modifying proteins appear and reappear through various roles, mapping out a hatchwork of complexity. . . These protein systems, overlaying information on the genome, interacted with one another, reinforcing or attenuating their signals. Together, they generated the bewildering intricacy necessary for a cell to build a constellation of other cells out of the same genes, and for the cells to add “memories” to their genomes and transmit these memories to their progeny. “There’s an epigenetic code, just like there’s a genetic code,” Allis said. “There are codes to make parts of the genome more active, and codes to make them inactive.”

By ‘epigenetic code’ the author seems to mean specific arrays of nucleosome modifications, imparted over time and cell divisions, marking genes for expression.  This idea has been tested in many experiments and has been found not to hold.

….. and more

 

Larry H. Bernstein, MD, FCAP

I hope that this piece brings greater clarity to the discussion.  I have heard the term “epigenetics” used for over a decade, and it was never so clear.  I think that the New Yorker article was a reasonable article for the intended audience.  It was not intended to clarify debates about a mechanism for epigenetics-based changes in evolutionary science.  I think it actually punctures the “classic model” of the cell depending only on double-stranded DNA and transcription, which deflates our concept of the living cell.

The concept of epigenetics was never really formulated as far as I have seen, and I have done serious work in enzymology and proteins at a time when we did not have the technology that exists today.  I have considered, with the critics, that protein folding and misfolding, protein interactions involving the proximity of polar and nonpolar groups, the regulatory role of microRNAs that are not involved in translation, and the evolving concept of “dark” (noncoding) DNA lend credence to the complexity of this discussion.

Even more interesting is the fact that enzymes (and isoforms of enzymes) have a huge role in cellular metabolic differences and in the function of metabolic pathways.  What is less understood are the extremely fast reactions involved in these cellular processes, which are in my view critical drivers.  This is brought out by Erwin Schrödinger in the book What is Life?, which implies that there can be no mathematical expression of life processes.

Read Full Post »

DNA-based nanomotor and chemomechanical crosstalk

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

LPBI

 

Nano-walkers take speedy leap forward with first rolling DNA-based motor

“Ours is the first rolling DNA motor, making it far faster and more robust,” says Khalid Salaita, the Emory chemist who led the research. (Photos by Bryan Meltz, Emory Photo/Video.)  https://pharmaceuticalintelligence.com/wp-content/uploads/2016/05/c4943-khalid_salaita.jpg

Physical chemists have devised a rolling DNA-based motor that’s 1,000 times faster than any other synthetic DNA motor, giving it potential for real-world applications, such as disease diagnostics. Nature Nanotechnology is publishing the finding.

“Unlike other synthetic DNA-based motors, which use legs to ‘walk’ like tiny robots, ours is the first rolling DNA motor, making it far faster and more robust,” says Khalid Salaita, the Emory University chemist who led the research. “It’s like the biological equivalent of the invention of the wheel for the field of DNA machines.”

The speed of the new DNA-based motor, which is powered by ribonuclease H, means a simple smart phone microscope can capture its motion through video. The researchers have filed an invention disclosure patent for the concept of using the particle motion of their rolling molecular motor as a sensor for everything from a single DNA mutation in a biological sample to heavy metals in water.

“Our method offers a way of doing low-cost, low-tech diagnostics in settings with limited resources,” Salaita says.

The field of synthetic DNA-based motors, also known as nano-walkers, is about 15 years old. Researchers are striving to duplicate the action of nature’s nano-walkers. Myosins, for example, are tiny biological motors that “walk” on filaments to carry nutrients throughout the human body.

“It’s the ultimate in science fiction,” Salaita says of the quest to create tiny robots, or nano-bots, that could be programmed to do your bidding. “People have dreamed of sending in nano-bots to deliver drugs or to repair problems in the human body.”

So far, however, mankind’s efforts have fallen far short of nature’s myosin, which speeds effortlessly about its biological errands. “The ability of myosin to convert chemical energy into mechanical energy is astounding,” Salaita says. “They are the most efficient motors we know of today.”

Some synthetic nano-walkers move on two legs. They are essentially enzymes made of DNA, powered by the catalyst RNA. These nano-walkers tend to be extremely unstable, due to the high levels of Brownian motion at the nano-scale. Other versions with four, and even six, legs have proved more stable, but much slower. In fact, their pace is glacial: A four-legged DNA-based motor would need about 20 years to move one centimeter.

Kevin Yehl, a post-doctoral fellow in the Salaita lab, had the idea of constructing a DNA-based motor using a micron-sized glass sphere. Hundreds of DNA strands, or “legs,” are allowed to bind to the sphere. These DNA legs are placed on a glass slide coated with the reactant: RNA.

The DNA legs are drawn to the RNA, but as soon as they set foot on it they destroy it through the activity of an enzyme called RNase H. As the legs bind and then release from the substrate, they guide the sphere along, allowing more of the DNA legs to keep binding and pulling.

“It’s called a burnt-bridge mechanism,” Salaita explains. “Wherever the DNA legs step, they trample and destroy the reactant. They have to keep moving and step where they haven’t stepped in order to find more reactant.”

The combination of the rolling motion, and the speed of the RNase H enzyme on a substrate, gives the new DNA motor its stability and speed.

“Our DNA-based motor can travel one centimeter in seven days, instead of 20 years, making it 1,000 times faster than the older versions,” Salaita says. “In fact, nature’s myosin motors are only 10 times faster than ours, and it took them billions of years to evolve.”
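The quoted figures are easy to sanity-check: 1 cm in 7 days versus 1 cm in about 20 years is indeed a factor of roughly a thousand, and it corresponds to about 1,000 nm per minute against the ~1 nm per minute typical of legged DNA walkers.

```python
# Sanity check of the speed figures quoted above: 1 cm in 7 days (rolling
# motor) versus 1 cm in ~20 years (four-legged DNA walker).

CM_IN_NM = 1e7                       # 1 cm = 10^7 nm

roller_days = 7
walker_days = 20 * 365               # ~20 years, ignoring leap days

speedup = walker_days / roller_days
print(f"speed-up: ~{speedup:.0f}x")  # ~1043x, i.e. about three orders of magnitude

roller_nm_per_min = CM_IN_NM / (roller_days * 24 * 60)
print(f"rolling motor: ~{roller_nm_per_min:.0f} nm/min")  # ~992 nm/min, vs ~1 nm/min for walkers
```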

http://2.bp.blogspot.com/-tB7Dtk9txtY/Vl3R_IInh3I/AAAAAAAAK7g/Kf3lSVSHzr8/s400/smart-phone_setup.jpg

Emory post-doctoral fellow Kevin Yehl sets up a smart-phone microscope to get a readout for the particle motion of the rolling DNA-based motor.

The researchers demonstrated that their rolling motors can be used to detect a single DNA mutation by measuring particle displacement. They simply glued lenses from two inexpensive laser pointers to the camera of a smart phone to turn the phone into a microscope and capture videos of the particle motion.

“Using a smart phone, we can get a readout for anything that’s interfering with the enzyme-substrate reaction, because that will change the speed of the particle,” Salaita says. “For instance, we can detect a single mutation in a DNA strand.”

This simple, low-tech method could come in handy for doing diagnostic sensing of biological samples in the field, or anywhere with limited resources.

The proof that the motors roll came by accident, Salaita adds. During their experiments, two of the glass spheres occasionally became stuck together, or dimerized. Instead of making a wandering trail, they left a pair of straight, parallel tracks across the substrate, like a lawn mower cutting grass. “It’s the first example of a synthetic molecular motor that goes in a straight line without a track or a magnetic field to guide it,” Salaita says.

In addition to Salaita and Yehl, the co-authors on the Nature Nanotechnology paper include Emory researchers Skanda Vivek, Yang Liu, Yun Zhang, Megzhen Fan, Eric Weeks and Andrew Mugler (who is now at Purdue University).

Related:
Chemists reveal the force within you
Molecular beacons shine light on how cells ‘crawl’

 

High-speed DNA-based rolling motors powered by RNase H

Kevin Yehl, Andrew Mugler, Skanda Vivek, Yang Liu, Yun Zhang, Mengzhen Fan, Eric R. Weeks & Khalid Salaita
Nature Nanotechnology 11, 184–190 (2016); published online 30 Nov 2015.    doi:10.1038/nnano.2015.259

DNA-based machines that walk by converting chemical energy into controlled motion could be of use in applications such as next-generation sensors, drug-delivery platforms and biological computing. Despite their exquisite programmability, DNA-based walkers are challenging to work with because of their low fidelity and slow rates (∼1 nm min–1). Here we report DNA-based machines that roll rather than walk, and consequently have a maximum speed and processivity that is three orders of magnitude greater than the maximum for conventional DNA motors. The motors are made from DNA-coated spherical particles that hybridize to a surface modified with complementary RNA; the motion is achieved through the addition of RNase H, which selectively hydrolyses the hybridized RNA. The spherical motors can move in a self-avoiding manner, and anisotropic particles, such as dimerized or rod-shaped particles, can travel linearly without a track or external force. We also show that the motors can be used to detect single nucleotide polymorphism by measuring particle displacement using a smartphone camera.

http://www.nature.com/nnano/journal/v11/n2/carousel/nnano.2015.259-f1.jpg

 

http://www.nature.com/nnano/journal/v11/n2/carousel/nnano.2015.259-f2.jpg

 

http://www.nature.com/nnano/journal/v11/n2/full/nnano.2015.259.html

 

T cells use ‘handshakes’ to sort friends from foes

http://esciencecommons.blogspot.ca/2016/05/t-cells-use-handshakes-to-sort-friends.html

 

A 3-D rendering of a fluorescence image mapping the piconewton forces applied by T cells. The height and color indicates the magnitude of the applied force. (Microscopy image by Yang Liu.)

By Carol Clark

 

T cells, the security guards of the immune system, use a kind of mechanical “handshake” to test whether a cell they encounter is a friend or foe, a new study finds.

The Proceedings of the National Academy of Sciences (PNAS) published the study, led by Khalid Salaita, a physical chemist at Emory University who specializes in the mechanical forces of cellular processes.

“We’ve provided the first direct evidence that a T cell gives precise mechanical tugs to other cells,” Salaita says. “And we’ve shown that these tugs are central to a T cell’s process of deciding whether to mount an immune response. A tug that releases easily, similar to a casual handshake, signals a friend. A stronger grip indicates a foe.”

Salaita, from Emory’s Department of Chemistry, collaborated on the research with Brian Evavold in the Emory School of Medicine’s Department of Microbiology and Immunology.

T cells continuously patrol through the body in search of foreign invaders. They have molecules known as T-cell receptors (TCR) that can recognize specific antigenic peptides on the surface of a pathogenic or cancerous cell. When a T cell detects an antigen-presenting cell (APC), its TCR connects to a ligand, or binding molecule, of the APC. If the T cell determines the ligand is foreign, it becomes activated and starts pumping calcium. The calcium is part of a signaling chain that recruits other cells to come and help mount an immune response.

Scientists have known about this process for decades, but they have not fully understood how the T cell distinguishes small modifications to the antigenic ligand and how it decides to respond to it. “If you view this T cell response purely as a chemical process, it does not fully explain the remarkable specificity of the binding,” Salaita says. “When you take the two components – the TCR and the ligand on the surface of cells – and just let them chemically bind in a solution, for example, you can’t predict what will trigger a strong or a weak immune response.”

The researchers hypothesized that mechanical strain might also play a role in a T cell response, since the T cell continues to move even as it locks into a bind with an antigenic ligand.

To test this idea, the Salaita lab developed DNA-based gold nanoparticle tension sensors that light up, or fluoresce, in response to a minuscule mechanical force of a piconewton – about one million-millionth the weight of an apple.
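That analogy checks out with rough numbers (assuming a typical ~100 g apple): the apple's weight is about 1 newton, and a piconewton is 10^-12 newtons, i.e., about one million-millionth of it.

```python
# Rough check of the "one million-millionth the weight of an apple" analogy,
# assuming a typical ~100 g apple (an assumed figure, not from the article).

apple_mass_kg = 0.1
g = 9.8                                   # gravitational acceleration, m/s^2
apple_weight_newtons = apple_mass_kg * g  # ~0.98 N

piconewton = 1e-12                        # 1 pN expressed in newtons
fraction = piconewton / apple_weight_newtons
print(f"1 pN is about {fraction:.1e} of an apple's weight")  # ~1.0e-12
```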

The researchers designed experiments using T cells from a mouse and allowed them to test ligands containing eight amino acid peptides that had slight mutations.

“We swapped out the fourth amino acid position to create really subtle chemical changes in the ligand that would be very difficult to distinguish without a mechanical component,” Salaita says.

Some of the mutated ligands were given a firmer anchor to give them a tighter “grip” to the moving TCR.

Through the experiments, captured on microscopy video, the researchers were able to see, record and measure the responses of the T cells as they moved across the ligands.

“As a T cell moves across a cell’s surface and encounters a ligand, it pulls on it,” Salaita explains. “It doesn’t pull very hard, it’s a very precise and tiny tug that is not sustained. The T cell pulls and stops, pulls and stops, all across the surface. It’s like the T cell is doing a mechanical test of the ligand.”

During the experiments, the T cells did not activate fully when they encountered ligands with weak anchors. In contrast, when a T cell encountered a ligand with a firm anchor, the T cell became activated, showing that it experienced a piconewton level of resistance.

The amount of force that was applied by the T cell was mapped by using tension probes of different stiffness. Probes that responded to 19 piconewtons did not fluoresce, while softer, 12-piconewton probes produced high signal.

Following the fluorescence of the probe, the T cells switched on their calcium pumps and increased the calcium concentration within the cell, indicating that the T cell is mounting an immune response.

“We were able to map out the order of the cascade of chemical and mechanical reactions,” Salaita says. “First, the T cell uses a very specific and finely tuned mechanical tug to distinguish friend from foe. And when it senses a precise, piconewton level of force in response to that tug, the T cell realizes that it has encountered a foreign body and gives the signal for attack.”

The discovery could help in the search for treatments of auto-immune diseases and the development of immune therapies for cancer.

“Cancer cells have an extra molecule that can make T cell security guards ‘drunk’ or ‘sleepy’ so that they are not able to function properly,” Salaita says. “Learning more about the mechanical forces involved in an effective immune response may help us develop ways to evade this defense system of cancer cells.”

Co-authors on the study include Yang Liu, Victor Pui-Yan Ma, Kornelia Galior and Zheng Liu (from the Salaita lab); and Lori Blanchfield and Rakieb Andargachew (from the Evavold lab).

Related:
Chemists reveal the force within you
Molecular beacons shine light on how cells ‘crawl’
Nano-walkers take speedy leap forward with first rolling DNA-based motor

 

DNA-based nanoparticle tension sensors reveal that T-cell receptors transmit defined pN forces to their antigens for enhanced fidelity

Yang Liu, Lori Blanchfield, Victor Pui-Yan Ma, Rakieb Andargachew, Kornelia Galior, Zheng Liu, Brian Evavold, and Khalid Salaita
PNAS May 2016; http://dx.doi.org/10.1073/pnas.1600163113

Significance

T cells protect the body against pathogens and cancer by recognizing specific foreign peptides on the cell surface. Because antigen recognition occurs at the junction between a migrating T cell and an antigen-presenting cell (APC), it is likely that cellular forces are generated and transmitted through T-cell receptor (TCR)-ligand bonds. Here we develop a DNA-based nanoparticle tension sensor producing the first molecular maps of TCR-ligand forces during T cell activation. We find that TCR forces are orchestrated in space and time, requiring the participation of CD8 coreceptor and adhesion molecules. Loss or damping of TCR forces results in weakened antigen discrimination, showing that T cells harness mechanics to optimize the specificity of response to ligand.

T cells are triggered when the T-cell receptor (TCR) encounters its antigenic ligand, the peptide-major histocompatibility complex (pMHC), on the surface of antigen presenting cells (APCs). Because T cells are highly migratory and antigen recognition occurs at an intermembrane junction where the T cell physically contacts the APC, there are long-standing questions of whether T cells transmit defined forces to their TCR complex and whether chemomechanical coupling influences immune function. Here we develop DNA-based gold nanoparticle tension sensors to provide, to our knowledge, the first pN tension maps of individual TCR-pMHC complexes during T-cell activation. We show that naïve T cells harness cytoskeletal coupling to transmit 12–19 pN of force to their TCRs within seconds of ligand binding and preceding initial calcium signaling. CD8 coreceptor binding and lymphocyte-specific kinase signaling are required for antigen-mediated cell spreading and force generation. Lymphocyte function-associated antigen 1 (LFA-1) mediated adhesion modulates TCR-pMHC tension by intensifying its magnitude to values >19 pN and spatially reorganizes the location of TCR forces to the kinapse, the zone located at the trailing edge of migrating T cells, thus demonstrating chemomechanical crosstalk between TCR and LFA-1 receptor signaling. Finally, T cells display a dampened and poorly specific response to antigen agonists when TCR forces are chemically abolished or physically “filtered” to a level below ∼12 pN using mechanically labile DNA tethers. Therefore, we conclude that T cells tune TCR mechanics with pN resolution to create a checkpoint of agonist quality necessary for specific immune response.

Larry H. Bernstein, MD, FCAP, Curator
LPBI
The two articles above are connected in an interesting way by the fact that cellular forces are generated and transmitted through T-cell receptor (TCR)-ligand bonds. The TCR encounters its antigenic ligand, the peptide-major histocompatibility complex (pMHC), on the surface of antigen-presenting cells (APCs).  The movement detected by the fluorescent sensor may depend on a change of only a single amino acid in the cell-surface ligand. The result is chemomechanical crosstalk between TCR and LFA-1 receptor signaling.

Read Full Post »

CD-4 Therapy for Solid Tumors

Curator: Larry H. Bernstein, MD, FCAP

 

CD4 T-cell Immunotherapy Shows Activity in Solid Tumors

Alexander M. Castellino, PhD

http://www.medscape.com/viewarticle/862095

For the first time, treatment with genetically engineered T-cells has used CD4 T-cells instead of the CD8 T-cells, which are used in the chimeric antigen receptor (CAR) T-cell approach. Early data suggest that this CD4 T-cell approach has activity against solid tumors, whereas the CAR T-cell approach so far has achieved dramatic success in hematologic malignancies.

In the new approach, CD4 T-cells were genetically engineered to target MAGE-A3, a protein found on many tumor cells. The treatment was found to be safe in patients with metastatic cancers, according to data from a phase 1 clinical study presented here at the American Association for Cancer Research (AACR) 2016 Annual Meeting.

“This is the first trial testing an immunotherapy using genetically engineered CD4 T-cells,” senior author Steven A. Rosenberg, MD, PhD, chief of the Surgery Branch at the National Cancer Institute (NCI), told Medscape Medical News.

Most approaches use CD8 T-cells. Although CD8 T-cells are known to be cytotoxic and CD4 T-cells are normally considered helper cells, CD4 T-cells can induce tumor regression, he said.

Louis M. Weiner, MD, director of the Lombardi Comprehensive Cancer Center at Georgetown University, in Washington, DC, indicated that in contrast with CAR T-cells, these CD4 T-cells target proteins on solid tumors. “CAR T-cells are not tumor specific and do not target solid tumors,” he said.

Engineering CD4 Cells

Immunotherapy with engineered CD4 T-cells was personalized for each patient whose tumors had not responded to or had recurred following treatment with at least one standard therapy. The immunotherapy was specific for patients in whom a specific human leukocyte antigen (HLA) — HLA-DPB1*0401 — was found to be expressed on their cells and whose tumors expressed MAGE-A3.

MAGE-A3 belongs to a class of proteins expressed during fetal development. The expression is lost in normal adult tissue but is reexpressed on tumor cells, explained presenter Yong-Chen William Lu, PhD, a research fellow in the Surgery Branch of the NCI.

Targeting MAGE-A3 is relevant, because it is frequently expressed in a variety of cancers, such as melanoma and urothelial, esophageal, and cervical cancers, he pointed out.

 Researchers purified CD4 T-cells from the peripheral blood of patients. Next, the CD4 T-cells were genetically engineered with a retrovirus carrying the T-cell receptor (TCR) gene that recognizes MAGE-A3. The modified cells were grown ex vivo and were transferred back into the patient.

Clinical Results

Dr Lu presented data for 14 patients enrolled into the study: eight patients received cell doses from 10 million to 30 billion cells, and six patients received up to 100 billion cells.

This was similar to a phase 1 dose-finding study, except the researchers were seeking to determine the maximum number of genetically engineered CD4 T-cells that a patient could safely receive.

One patient with metastatic cervical cancer, another with metastatic esophageal cancer, and a third with metastatic urothelial cancer experienced partial objective responses. At 15 months, the response is ongoing in the patient with cervical cancer; after 7 months of treatment, the response was durable in the patient with urothelial cancer; and a response lasting 4 months was reported for the patient with esophageal cancer.

Dr Lu said that a phase 2 trial has been initiated to study the clinical responses of this T-cell receptor therapy in different types of metastatic cancers.

In his discussion of the paper, Michel Sadelain, MD, of the Memorial Sloan Kettering Cancer Center, New York City, said, “Although therapy with CD4 cells has been evaluated using endogenous receptor, this is the first study using genetically engineered CD4 T-cells.”

Although the study showed that therapy with genetically engineered T-cells is safe, and efficacious in at least three patients, the mechanism of cytotoxicity remains unclear, Dr Sadelain indicated.

Comparison With CAR T-cells

CAR T-cells act in much the same way. CARs are chimeric antigen receptors that have an antigen-recognition domain of an antibody (the V region) and a “business end,” which activates T-cells. In this case, CD8 T-cells from the patients are used to genetically engineer T-cells ex vivo. In the majority of cases, dramatic responses have been seen in hematologic malignancies.

CARs, directed against self-proteins, result in on-target, off-tumor effects, Gregory L. Beatty, MD, PhD, assistant professor of medicine at the University of Pennsylvania, in Philadelphia, indicated when he reported the first success story of CAR T-cells in a solid pancreatic cancer tumor.

Side effects of therapy with CD4 T-cells targeting MAGE-A3 resembled those of chemotherapy, because patients received a lymphodepleting regimen of cyclophosphamide and fludarabine. Toxicities included high fever, experienced by the majority of patients (12/14); the fever lasted 1 to 2 weeks and was easily managed.

High levels of the cytokine interleukin-6 (IL-6) were detected in the serum of all patients after treatment. However, the elevation in IL-6 levels was not considered to represent cytokine release syndrome, because no side effects correlating with the syndrome occurred, Dr Lu indicated.

He also indicated that future studies are planned that will employ genetically engineered CD4 T-cells in combination with programmed cell death protein 1–blocking antibodies.

This study was funded by the Intramural Research Program of the National Institutes of Health. The NCI’s research and development of T-cell receptor therapy targeting MAGE-A3 are supported in part under a cooperative research and development agreement between the NCI and Kite Pharma, Inc. Kite has an exclusive, worldwide license with the NIH for intellectual property relating to retrovirally transduced HLA-DPB1*0401 and HLA A1 T-cell receptor therapy targeting the MAGE-A3 antigen. Dr Lu and Dr Rosenberg have disclosed no relevant financial relationships.

American Association for Cancer Research (AACR) 2016 Annual Meeting: Abstract CT003, presented April 17, 2016.

 


 

Genetic engineering of T cells for adoptive immunotherapy

To be effective for the treatment of cancer and infectious diseases, T cell adoptive immunotherapy requires large numbers of cells with abundant proliferative reserves and intact effector functions. We are achieving these goals using a gene therapy strategy wherein the desired characteristics are introduced into a starting cell population, primarily by high efficiency lentiviral vector-mediated transduction. Modified cells are then expanded using ex vivo expansion protocols designed to minimally alter the desired cellular phenotype. In this article, we focus on strategies to (1) dissect the signals controlling T cell proliferation; (2) render CD4 T cells resistant to HIV-1 infection; and (3) redirect CD8 T cell antigen specificity.
Adoptive T cell therapy is a form of transfusion therapy involving the infusion of large numbers of T cells with the aim of eliminating, or at least controlling, malignancies or infectious diseases. Successful applications of this technique include the infusion of CMV- or EBV-specific CTLs to protect immunosuppressed patients from these transplantation-associated diseases [1,2]. Furthermore, donor lymphocyte infusions of ex vivo-expanded allogeneic T cells have been used to successfully treat hematological malignancies in patients with relapsed disease following allogeneic hematopoietic stem cell transplant [3]. However, in many other malignancies and chronic viral infections such as HIV-1, adoptive T cell therapy has achieved inconsistent and/or marginal successes. Nevertheless, there are compelling reasons for optimism about this strategy. For example, the existence of HIV-positive elite non-progressors [4], as well as the correlation between the presence of intratumoral T cells and a favorable prognosis in malignancies such as ovarian [5,6] and colon carcinoma [7,8], provides in vivo evidence for the critical role of the immune system in controlling both HIV and cancer.
The key to successful adoptive immunotherapy strategies appears to consist of (1) using the “right” T cell type(s) and (2) obtaining therapeutically effective numbers of these cells without compromising their effector functions or their ability to engraft within the host. This article is focused on strategies employed in our laboratory to generate the “right” cell through genetic engineering approaches, with an emphasis on redirecting the antigen specificity of CD8 T cells, and rendering CD4 T cells resistant to HIV-1 infection. The article by Paulos et al. describes the evolving process of how to best obtain therapeutically effective numbers of the “right” cells by optimizing ex vivo cell expansion strategies.
Our laboratory’s overall strategy and flow plan for development and evaluation of engineered T cells is depicted in Fig. 1. We work almost exclusively with primary human T cells; little or no work is performed with conventional established cell lines. Thus, we benefit substantially from our close association with the UPenn Human Immunology Core. The Core performs leukaphereses on healthy donors 2–3 times a week, and provides purified peripheral blood mononuclear cell subsets, ensuring a constant influx of fresh human T cells into our laboratory. We have extensive experience in developing both bead- and cell-based artificial antigen presenting cells (aAPCs), as described in detail in the article by Paulos et al. The ability to genetically modify T cells at high efficiency is critical for virtually every project within the laboratory. We have adapted the lentiviral vector system described by Dull [15] for most, but not all, of the engineering applications in our laboratory.
CD4 T cells are the primary target of HIV-1, and decreasing CD4 T cell numbers is a hallmark of advancing HIV-1 disease [34]. Thus, strategies that protect CD4 T cells from HIV-1 infection in vivo would conceivably provide sufficient immunological help to control HIV-1 infection. Our early observations that CD3/CD28 costimulation resulted in improved ex vivo expansion of CD4 T cells from both healthy and HIV-infected donors, as well as enhanced resistance to HIV-1 infection [35,36], ultimately led to the first-in-human trial of lentiviral vector-modified CD4 T cells [37]. In this trial, CD4 T cells from HIV-positive subjects who had failed antiretroviral therapy were transduced with a lentiviral vector encoding an antisense RNA that targeted a 937 bp region in the HIV-1 envelope gene. Preclinical studies demonstrated that this antisense region, directed against the HIV-1 NL4-3 envelope, provided robust protection from a broad range of both R5- and X4-tropic HIV-1 isolates [38]. One year after administration of a single dose of the gene-modified cells, four of the five enrolled patients had increased peripheral blood CD4 T cell counts, and in one subject, a 1.7 log decrease in viral load was observed. Finally, in two of the five patients, persistence of the gene-modified cells was detected one year post-infusion.
Since its identification as the primary co-receptor involved in HIV transmission, CCR5 has attracted considerable attention as a target for HIV therapy [42,43]. Indeed, “experiments of nature” have shown that individuals with a homozygous CCR5 Δ32 deletion are highly resistant to HIV-1 infection. Thus, we hypothesized that knocking out the CCR5 locus would generate CD4 T cells permanently resistant to infection by R5 isolates of HIV-1. To test this hypothesis we took advantage of zinc-finger nuclease (ZFN) technology [44]. ZFNs introduce sequence-specific double-strand DNA breakage, which is imperfectly repaired by non-homologous end-joining. This results in the permanent disruption of the genomic target, a process termed genome editing (Fig. 3).
Genetic modification of T cells to redirect antigen specificity is an attractive strategy compared to the lengthy process of growing T cell lines or CTL clones for adoptive transfer. Genetically modified, adoptively transferred T cells are capable of long-term persistence in humans [37, 46,47], demonstrating the feasibility of this approach. When compared to the months it can take to generate an infusion dose of antigen-specific CTL lines or clones from a patient, a homogeneous population of redirected antigen-specific cells can be expanded to therapeutically relevant numbers in about two weeks [3]. Several strategies are being explored to bypass the need to expand antigen-specific T cells for adoptive T cell therapy. The approaches currently studied in our laboratory involve the genetic transfer of chimeric antigen receptors and supraphysiologic T cell receptors.
Chimeric antigen receptors (CARs or T-bodies) are artificial T cell receptors that combine the extracellular single-chain variable fragment (scFv) of an antibody with intracellular signaling domains, such as CD3ζ or Fc(ε)RIγ [48–50]. When expressed on T cells, the receptor bypasses the need for antigen presentation on MHC since the scFv binds directly to cell surface antigens. This is an important feature, since many tumors and virus-infected cells downregulate MHCI, rendering them invisible to the adaptive immune system. The high-affinity nature of the scFv domain makes these engineered T cells highly sensitive to low antigen densities. In addition, new chimeric antigen receptors are relatively easy to produce from hybridomas. The key to this approach is the identification of antigens with high surface expression on tumor cells, but reduced or absent expression on normal tissues.  Since one can redirect both CD4 and CD8 T cells, the T-body approach to immunotherapy represents a near universal “off the shelf” method to generate large numbers of antigen-specific helper and cytotoxic T cells.
Many T-bodies targeting diverse tumors have been developed [51], and four have been evaluated clinically [52–55]. Three of the four studies were characterized by poor transgene expression and limited T-body engraftment. However, in a study of metastatic renal cell carcinoma using a T-body directed against carbonic anhydrase IX [55], T-body-expressing cells were detectable in the peripheral blood for nearly 2 months post-administration.
The major goals in the T-body field currently are to optimize their engraftment and maximize their effector functions. Our laboratory is addressing both problems simultaneously through an in-depth study of the requirements for T-body activation. We hypothesize that their limited persistence is due to incomplete cell activation due to the lack of costimulation. While naïve T cells depend on costimulation through CD28 ligation to avoid anergy and undergo full activation in response to antigen, it is recognized that effector cells also require costimulation to properly proliferate and produce cytokines [56]. Previous studies have shown that providing CD28 costimulation is crucial for the antitumoral function of adoptively transferred T cells and T-bodies [57–59]. Unlike conventional T cell activation, which requires two discrete signals, T-bodies can be engineered to provide both costimulation and CD3 signaling through one binding event.
A different approach for redirecting specificity to T cells for adoptive immunotherapy involves the genetic transfer of full-length TCR genes. A T cell’s specificity for its cognate antigen is solely determined by its TCR. Genes encoding the α and β chains of a T cell receptor (TCR) can be isolated from a T cell specific for the antigen of interest and restricted to a defined HLA allele, inserted into a vector, and then introduced into large numbers of T cells of individual patients that share the restricting HLA allele as well as the targeted antigen. In 1999, Clay and colleagues from Rosenberg’s group at the National Cancer Institute were the first to report the transfer of TCR genes via a retroviral vector into human lymphocytes and to show that T cells gained stable reactivity to MART-1 [67]. To date, many others have shown that the same approach can be used to transfer specificity for multiple viral and tumor associated antigens in mice and human systems. These T cells gain effector functions against the transferred TCR’s cognate antigen, as defined by proliferation, cytokine production, lysis of targets presenting the antigen, trafficking to tumor sites in vivo, and clearance of tumors and viral infection.
In 2006, Rosenberg’s group redirected patients’ PBLs with the naturally occurring, MART-1-specific TCR reported in 1999 by Clay. In the first clinical trial to test TCR-transfer immunotherapy, these modified T cells were infused into melanoma patients [68]. While the transduced T cells persisted in vivo, only two of the 17 patients had an objective response to this therapy. One issue revealed by the study was the poor expression of the transgenic TCRs by the transferred T cells. Nonetheless, the results from this trial showed the potential of TCR transfer immunotherapy as a safe form of therapy for cancer and highlighted the need to optimize such therapy to attain maximum potency.
The adoptive immunotherapy field is advancing by a tried-and-true method: learning from disappointments and moving forward. Our ability to fully realize the therapeutic potential of adoptive T cell therapy is tied to a more complete understanding of how human T cells receive signals, kill targets, and modulate effective immune responses. Our goal is to perform lab-based experiments that provide insight into how primary T cells function in a manner that will facilitate and enable adoptive T cell therapy clinical trials. Our ability to efficiently modify (and expand) T cells ex vivo provides the opportunity to deliver sufficient immune firepower where it has heretofore been lacking. Sustained transgene expression, coupled with enhanced in vivo engraftment capability, will move adoptive immunotherapy into a realm where long-term therapeutic benefits are the norm rather than the exception.
Genetic Modification of T Lymphocytes for Adoptive Immunotherapy

Claudia Rossig1 and Malcolm K. Brenner2
Molecular Therapy (2004) 10, 5–18. http://dx.doi.org/10.1016/j.ymthe.2004.04.014 ; http://www.nature.com/mt/journal/v10/n1/full/mt20041193a.html

Adoptive transfer of T lymphocytes is a promising therapy for malignancies—particularly of the hemopoietic system—and for otherwise intractable viral diseases. Efforts to broaden the approach have been limited by the physiology of the T cells themselves and by a range of immune evasion mechanisms developed by tumor cells. In this review we show how genetic modification of T cells is being used preclinically and in patients to overcome these limitations, by incorporation of novel receptors, resistance mechanisms, and control genes. We also discuss how the increasing safety and effectiveness of gene transfer technologies will lead to an increase in the use of gene-modified T cells for the treatment of a wider range of disorders.

That gene transfer could be used to improve the effectiveness of T lymphocytes was apparent from the beginning of clinical studies in the field. T cells were the very first targets for genetic modification in human gene transfer experiments. Rosenberg’s group marked tumor-infiltrating lymphocytes ex vivo with a Moloney retroviral vector encoding neomycin phosphotransferase before reinfusing them and attempting to demonstrate selective accumulation at tumor sites. Shortly thereafter, Blaese and Anderson led a group that infused corrected T cells into two children with severe combined immunodeficiency due to ADA deficiency. While neither study was completely successful in terms of outcome, both showed the feasibility of ex vivo gene transfer into human cells and set the stage for many of the studies that followed. More recently, a second wave of interest in adoptive T cell therapies has developed, based on their success in the prevention and treatment of viral infections such as EBV and cytomegalovirus (CMV) and on their apparent ability to eradicate hematologic and perhaps solid malignancies [1,2,3,4,5,6]. There has been a corresponding increase in studies directed toward enhancing the antineoplastic and antiviral properties of the T cells. In this article we will review how gene transfer may be used to produce the desired improvements, focusing on vectors and genes that have had clinical application.

Currently available viral and nonviral vector systems lack a pattern of biodistribution that would favor T cell transduction in vivo (as occurs, for example, with adenovectors and the liver, or liposomal vectors and the lung). This lack of favorable biodistribution cannot yet be compensated for by the introduction of specific T-cell-targeting ligands into vectors. Hence, all T cell gene transfer studies conducted to date have used ex vivo transduction followed by adoptive transfer of gene-modified cells. This approach is inherently less attractive for commercial development than direct in vivo gene transfer and has probably restricted interest in developing clinical applications using these cells. On the other hand, ex vivo transduction may be more readily controlled, characterized, and standardized than in vivo efforts and may ultimately produce a better defined final product (the transduced cell).

The gene products of suicide and coexpressed resistance genes are highly immunogenic and may induce immune-mediated rejection of the transduced cells. In one study, the persistence of adoptively transferred autologous CD8+ HIV-specific CTL clones modified to express the hygromycin phosphotransferase (Hy) gene and the herpesvirus thymidine kinase gene as a fusion gene was limited by the induction of a potent CD8+ class I MHC-restricted CTL response specific for epitopes derived from the Hy-tk protein [126]. Less immunogenic suicide and selection marker genes, preferably of human origin, may reduce the immunological inactivation of genetically modified donor lymphocytes. Human-derived prodrug-activating systems include the human folylpolyglutamate synthetase/methotrexate [127], the deoxycytidine kinase/cytosine arabinoside [128], or the carboxylesterase/irinotecan [129] systems. These systems do not activate nontoxic prodrugs but are based on enhancement of already potent chemotherapeutic agents. The administration of methotrexate to treat severe GVHD may not only kill transduced donor lymphocytes but may also have additional inhibitory activity on nontransduced but activated T cells.

Finally, endogenous proapoptotic molecules have been proposed as nonimmunogenic suicide genes. A chimeric protein that contains the FK506-binding protein FKBP12 linked to the intracellular domain of human Fas [130] was recently introduced. Addition of the dimerizing prodrug induces Fas crosslinking with subsequent triggering of an apoptotic death signal.

Genetic engineering of T lymphocytes should help deliver on the promise of immunotherapies for cancer, infection, and autoimmune disease. Improvements in transduction, selection, and expansion techniques and the development of new viral vectors incapable of insertional mutagenesis will reduce the risks and further enhance the integration of T cell and gene therapies. Nonetheless, successful application of the proposed modifications to the clinical setting still requires many iterative studies to allow investigators to optimize the individual components of the approach.

Genetically modified T cells in cancer therapy: opportunities and challenges
Michaela Sharpe, Natalie Mount

 

The feasibility of T-cell adoptive transfer was first reported nearly 20 years ago (Walter et al., 1995) and the field of T-cell therapies is now poised for significant clinical advances. Recent clinical trial successes have been achieved through multiple small advances, improved understanding of immunology and emerging technologies. As the key challenges of T-cell avidity, persistence and ability to exert the desired anti-tumour effects as well as the identification of new target antigens are addressed, a broader clinical application of these therapies could be achieved. As the clinical data emerges, the challenge of making these therapies available to patients shifts to implementing robust, scalable and cost-effective manufacture and to the further evolution of the regulatory requirements to ensure an appropriate but proportionate system that is adapted to the characteristics of these innovative new medicines.

 

 


A Concise Review of Cardiovascular Biomarkers of Hypertension

Curator: Larry H. Bernstein, MD, FCAP

LPBI

Revised 5/25/2016

 

Introduction

While a large body of work had been done over a quarter century on cholesterol synthesis, HDL and LDL cholesterol, triglycerides, and lipoproteins, and the concept of metabolic syndrome was emerging, there was neither a unifying concept nor a sufficient multivariable approach for applying laboratory markers to clinical practice. The mathematical foundation for such an evaluation of biological markers and the computational tools were maturing at the turn of the 21st century, as was interest in outcomes research for improved healthcare practice. In addition, there was now heavy investment in health information systems to support emerging health networks of a rapidly consolidating patient base. This has become important for the pharmaceutical industry and the allied health sciences, enabling a suitable method of measuring the effectiveness of drugs and of lifestyle changes to improve population health.

The importance of finding biomarkers for hypertension is significant, as stated above. I refer to observations in a lecture by Teresa Seeman, Ph.D., Professor, UCLA Geffen School of Medicine (1).
The missed cases of hypertension in the U.S. alone have been examined by the NHANES studies. Table I* shows the poor identification of this serious chronic condition. The next table (Table II*), also from NHANES (Seeman study), looks at allostatic load for biomarkers using component biomarker measurement criterion cutpoints. Table III* gives the odds ratios for mortality by allostatic load score.

One reason for our difficulty in diagnosing a number of hypertension disease “subsets” is that peripheral hypertension may be idiopathic, or it may be related to coexisting diseases that are both inflammatory and vascular-structural in nature. In addition, it may be concurrent with pulmonary hypertension, systemic hypertension, and progressive renal disease. This discussion is reserved for later. As stated, the late or missed diagnosis of systemic or essential idiopathic hypertension is illustrated in the three Seeman tables (1).

 


Table I*. Missed cases by “self report” vs undiagnosed

| | NHANES 88-94 | NHANES 99-2004 | NHANES 2005-08 |
|---|---|---|---|
| Hypertension, % unaware (BP > 140/90) | 42.7 | 43.5 | 39.06 |
| SR-controlled | 7.45 | 8.35 | 6.5 |
| SR-high | 10 | 10.85 | 10.18 |
| Unaware | 13.88 | 16.12 | 19.98 |
| High cholesterol (Chol > 220 mg/dl) | 55.93 | 49.3 | 47.05 |
| SR-controlled | 11.02 | 8.47 | 7.22 |
| SR-high | 8.68 | 8.72 | 8.12 |
| Unaware | 12.12 | 18.5 | 23.46 |
| Diabetes (HgA1C > 6.4%) | | | |
| SR-controlled | 2.41 | 1.76 | 2.11 |
| SR-high | 3.43 | 5.01 | 5.51 |
| Unaware | 1.64 | 3.09 | 3.09 |

*modified from Seeman

Table II*. US NHANES: Allostatic Load – component cutpoints

| Biomarker | Total N | High Risk | Percent (%) | Cutpoint |
|---|---|---|---|---|
| DBP (mm Hg) | 15,489 | 1,180 | 7.62 | 90 |
| SBP (mm Hg) | 15,491 | 3,461 | 22.34 | 140 |
| Pulse Rate | 15,117 | 1,009 | 6.67 | 90 |
| HgA1C (%) | 15,441 | 1,482 | 9.60 | 6.4 |
| WHR | 14,824 | 6,778 | 45.72 | 0.94 |
| HDL Cholesterol (mg/dl) | 15,187 | 3,440 | 22.65 | 40 |
| Total Cholesterol (mg/dl) | 15,293 | 3,196 | 20.90 | 240 |

*From T. Seeman, UCLA Geffen SOM
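The cutpoints above define the allostatic load score as a count of biomarkers in the high-risk range. A minimal Python sketch of that scoring (the biomarker key names are mine, and the risk directions are assumptions: values at or above the cutpoint count as high-risk, except HDL cholesterol, where low values are taken as the risk direction):

```python
# Cutpoints from Table II. Key names are shorthand chosen here;
# WHR = waist-to-hip ratio.
CUTPOINTS = {
    "dbp_mmhg": 90,
    "sbp_mmhg": 140,
    "pulse_rate": 90,
    "hga1c_pct": 6.4,
    "whr": 0.94,
    "hdl_mg_dl": 40,          # assumed risk direction: LOW values
    "total_chol_mg_dl": 240,
}

def allostatic_load(measurements):
    """Count how many measured biomarkers fall in the high-risk range.

    `measurements` maps biomarker name -> value; names not in the
    cutpoint table are ignored, so a partial panel yields a partial score.
    """
    score = 0
    for name, value in measurements.items():
        cutpoint = CUTPOINTS.get(name)
        if cutpoint is None:
            continue
        if name == "hdl_mg_dl":
            score += value < cutpoint   # low HDL is the high-risk direction
        else:
            score += value >= cutpoint
    return score

# Hypothetical patient: SBP and HDL flag as high-risk.
print(allostatic_load({"sbp_mmhg": 152, "dbp_mmhg": 84,
                       "hdl_mg_dl": 36, "whr": 0.9}))
```

A production scoring tool would need explicit handling of missing biomarkers rather than silently skipping them; this sketch only illustrates the counting rule.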

 

Table III*. Odds of mortality by Allostatic Load Score.

| ALS | Odds Ratio |
|---|---|
| 7-8 | 5 |
| 6 | 2.6 |
| 5 | 2.3 |
| 4 | 2.1 |
| 3 | 1.8 |
| 2 | 1.5 |
| 1 | 1.4 |

*From T. Seeman, UCLA Geffen SOM

 

I refer to cardiovascular diseases as an aggregate of diseases affecting the heart; the circulatory system, from the large arteries to the capillaries; the lungs; and the kidneys, excluding the lymphatics. These major disease entities are both separate and interrelated, and are not necessarily found in the same combinations. However, they account for a growing proportion of illness, apart from cancers, affecting the aging population of Western societies. In the discussion that follows, I shall construct a picture of the pathophysiology of cardiovascular diseases, describe the major biomarkers for their assessment, point out their relationship to hypertension, and try to develop a more targeted approach to the assessment of hypertension and related disorders.

Chronic kidney disease (CKD) is defined as persistent kidney damage accompanied by a reduction in the glomerular filtration rate (GFR) and the presence of albuminuria. The rise in incidence of CKD is attributed to an aging populace and increases in hypertension (HTN), diabetes, and obesity within the U.S. population. CKD is associated with a host of complications including electrolyte imbalances, mineral and bone disorders, anemia, dyslipidemia, and HTN. It is well known that CKD is a risk factor for cardiovascular disease (CVD), and that a reduced GFR and albuminuria are independently associated with an increase in cardiovascular and all-cause mortality.
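The “reduction in the glomerular filtration rate” in this definition is conventionally banded into stages. A sketch using the standard KDIGO G categories (the thresholds are well established, but this helper is an illustration added here; note that categories G1 and G2 indicate CKD only when markers of kidney damage such as albuminuria are also present):

```python
def gfr_stage(gfr_ml_min_173m2: float) -> str:
    """Map an estimated GFR (ml/min/1.73 m^2) to a KDIGO G category."""
    if gfr_ml_min_173m2 >= 90:
        return "G1"   # normal or high (CKD only with other damage markers)
    if gfr_ml_min_173m2 >= 60:
        return "G2"   # mildly decreased
    if gfr_ml_min_173m2 >= 45:
        return "G3a"  # mildly to moderately decreased
    if gfr_ml_min_173m2 >= 30:
        return "G3b"  # moderately to severely decreased
    if gfr_ml_min_173m2 >= 15:
        return "G4"   # severely decreased
    return "G5"       # kidney failure

print(gfr_stage(52))  # prints G3a
```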

The relationship between CKD and HTN is cyclic, as CKD can contribute to or cause HTN (3). Elevated BP leads to damage of blood vessels within the kidney, as well as throughout the body. This damage impairs the kidney’s ability to filter fluid and waste from the blood, leading to an increase of fluid volume in the blood—thus causing an increase in BP.

 

A cursory description of the blood circulation

The full circulation involves the heart as a pump; the arteries and veins, comprising small and large vessels; and the capillaries, the point of delivery of oxygen, capture of carbon dioxide, and transfer of substrates to tissues. The brain, liver, pancreas, spleen, and endocrine organs are not further considered here, except for the neurohumoral peptides that have emerged in the regulation of blood pressure and are essential to the stress response. The lungs and the liver are both important with respect to the exchange of air and metabolites, and both have secondary circulations, the pulmonary and the portal vascular circulations. In the pulmonary circuit, the venae cavae empty into the right atrium; deoxygenated blood passes to the right ventricle, which pumps it through the pulmonary arteries to the lungs, and oxygenated blood returns by way of the pulmonary veins to the left atrium. Blood flowing from the left atrium into the left ventricle is ejected into the aorta. The coronary arteries that nourish the heart arise at the base of the aorta. The heart muscle is a functional syncytium, unlike skeletal striated muscle, and it is densely packed with mitochondria, suited to continuous contraction under autonomic (sympathetic and vagal) control. This is the anatomical construct, but the physiology is still being clarified because normal function and disease are both a matter of regulatory control.

In order to understand hypertension, we have to view the heart functioning over a long period of time. In a still-frame picture, we envision the left ventricle contracting and emptying the oxygenated blood into the circulation. The ejection of blood into the aorta is called systole, by which the blood is delivered by the force of contraction into the circulation. The filling phase is called diastole. So we have a filling and an emptying, and what is heard through the stethoscope is a “lub-dub,” synchronously repeated. A normal systolic blood pressure is below 120 mmHg. A systolic blood pressure of 120 to 139 mmHg means you have prehypertension, or borderline high blood pressure; even people with prehypertension are at a higher risk of developing heart disease. A systolic blood pressure of 140 mmHg or higher is considered hypertension, or high blood pressure. The diastolic blood pressure, the bottom number, indicates the pressure in the arteries when the heart rests between beats. A normal diastolic blood pressure is less than 80 mmHg; a diastolic blood pressure between 80 and 89 mmHg indicates prehypertension, and 90 mmHg or higher is considered hypertension. So now we have identified systolic and diastolic high blood pressure. Systolic pressure increases with vigorous activity and returns to normal when the activity subsides; it also increases with age. Over time, consistently high blood pressure weakens and damages the affected blood vessels. Moreover, changes in the body’s normal functions may cause high blood pressure, including changes to kidney fluid and salt balances, the renin-angiotensin-aldosterone system, sympathetic nervous system activity, and blood vessel structure and function.
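The thresholds above amount to a small classification rule. A minimal sketch in Python (the function name and the convention of assigning the more severe of the two component categories are my choices, not part of the source):

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Classify a blood-pressure reading (mmHg) using the thresholds
    in the text: normal < 120/80; prehypertension 120-139 systolic
    or 80-89 diastolic; hypertension >= 140 systolic or >= 90 diastolic.
    The more severe of the two component categories wins.
    """
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"

# A normal systolic number with a high diastolic number still
# classifies as hypertension under the more-severe-component rule.
for reading in [(118, 76), (132, 78), (118, 92)]:
    print(reading, classify_bp(*reading))
```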

 

Starling’s Law of the Heart

Two principal intrinsic mechanisms, namely the Frank-Starling mechanism and rate induced regulation, enable the myocardium to adapt to changes in hemodynamic conditions. The Frank-Starling mechanism (also referred to as Starling’s law of the heart), is invoked in response to changes in the resting length of the myocardial fibers. Rate-induced regulation is invoked in response to changes in the frequency of the heartbeat.  (3-9).

Frank and Starling (3, 4) showed that an increase in diastolic volume caused an increase in systolic performance. The stretch effect persists across a range of myocardial contractile states, but it plays only a lesser role in augmenting ventricular function during maximal exercise. This is because, in healthy human subjects, adrenergic reflex mechanisms modulate myocardial performance, heart rate, vascular impedance, and coronary flow during exercise, and changes in these variables can overshadow the effect of fiber stretch or even prevent an increase in end-diastolic volume during stress (5). (See the YouTube video (6).)

According to Lakatta muscle length modulates the extent of myofilament calcium ion (Ca2+) activation (7-9).   Similarly, the fiber length during a contraction, which is determined in part by the load encountered during shortening, also determines the extent of myofilament Ca2+ activation. Therefore, the terms preload, afterload and myocardial contractile state lose part of their significance in light of current knowledge.

 

Biology and High Blood Pressure

Researchers continue to study how various changes in normal body functions cause high blood pressure. The key functions affected in high blood pressure include (10):

Kidney Fluid and Salt Balances

The kidneys normally regulate the body’s salt balance by retaining sodium and water and excreting potassium. Imbalances in this kidney function can expand blood volumes, which can cause high blood pressure.

Renin-Angiotensin-Aldosterone System

The renin-angiotensin-aldosterone system makes angiotensin and aldosterone hormones. Angiotensin narrows or constricts blood vessels, which can lead to an increase in blood pressure. Aldosterone controls how the kidneys balance fluid and salt levels. Increased aldosterone levels or activity may change this kidney function, leading to increased blood volumes and high blood pressure.

Sympathetic Nervous System Activity

The sympathetic nervous system has important functions in blood pressure regulation, including heart rate, blood pressure, and breathing rate. Researchers are investigating whether imbalances in this system cause high blood pressure.

Blood Vessel Structure and Function

Changes in the structure and function of small and large arteries may contribute to high blood pressure. The angiotensin pathway and the immune system may stiffen small and large arteries, which can affect blood pressure.

Two or more types of hypertension

  • Systemic hypertension
  • Idiopathic hypertension
  • Hypertension from chronic renal disease
  • Pulmonary artery hypertension
  • Hypertension associated with systemic chronic inflammatory disease (rheumatoid arthritis and other collagen vascular diseases)

Genetic Causes of High Blood Pressure

Much of the understanding of the body systems involved in high blood pressure has come from genetic studies. High blood pressure often runs in families. Years of research have identified many genes and other mutations associated with high blood pressure, some in the renal salt regulatory and renin-angiotensin-aldosterone pathways. However, these known genetic factors only account for 2 to 3 percent of all cases. Emerging research suggests that certain DNA changes during fetal development also may cause the development of high blood pressure later in life.

Environmental Causes of High Blood Pressure

Environmental causes of high blood pressure include unhealthy lifestyle habits, being overweight or obese, and medicines.

Other medical causes of high blood pressure include chronic kidney disease, sleep apnea, thyroid problems, and certain tumors.

The common complications of hypertension and their signs and symptoms are described in the source article: http://www.nhlbi.nih.gov/health/health-topics/topics/hbp/causes (10)

 

Pulse Pressure and Stroke Volume

The pulse pressure is the difference between the systolic (upper number) and diastolic (lower number) pressures (11).

Systemic pulse pressure = Psystolic – Pdiastolic

The pulse pressure is 40 mmHg for a typical blood pressure reading of 120/80 mmHg.

Pulse pressure (PP) is proportional to stroke volume (SV), the amount of blood pumped from the heart in one beat, and inversely proportional to the compliance or flexibility of the blood vessels, mainly the aorta.
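These relationships can be sketched numerically. The compliance figure below is an illustrative assumption for the sake of the arithmetic, not a measured constant:

```python
def pulse_pressure(systolic_mmhg, diastolic_mmhg):
    """Pulse pressure (mmHg): difference between systolic and diastolic pressure."""
    return systolic_mmhg - diastolic_mmhg

def estimated_stroke_volume(pp_mmhg, compliance_ml_per_mmhg):
    """Rough estimate from PP ~ SV / C, so SV ~ PP x total arterial compliance."""
    return pp_mmhg * compliance_ml_per_mmhg

pp = pulse_pressure(120, 80)            # 40 mmHg for the typical 120/80 reading
sv = estimated_stroke_volume(pp, 1.7)   # ~68 mL, assuming a compliance of 1.7 mL/mmHg
```

Because PP is proportional to stroke volume and inversely proportional to compliance, the same 40 mmHg pulse pressure implies a smaller stroke volume in a stiffer (lower-compliance) aorta.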

A low (also called narrow) pulse pressure means that not much blood is being expelled from the heart, and can be caused by a number of factors, including severe blood loss due to trauma, congestive heart failure, shock, a narrowing of the valve leading from the heart to the aorta (stenosis), and fluid accumulating around the heart (tamponade).

High (or wide) pulse pressures occur during exercise, as stroke volume increases and the overall resistance to blood flow decreases. A wide pulse pressure can also occur for many reasons, such as hardening of the arteries (which can have numerous causes), various deficiencies in the aorta (mainly) or other arteries, including leaks, fistulas, and a usually congenital condition known as arteriovenous malformation (AVM), as well as pain/anxiety, fever, anemia, pregnancy, and more. Certain medications for high blood pressure can widen pulse pressure, while others narrow it. A chronic increase in pulse pressure is a risk factor for heart disease, and can lead to the type of arrhythmia called atrial fibrillation (A-Fib).

 

Hypertension Background and Definition

The prevalence of CKD has steadily increased over the past two decades, and was reported to affect over 13% of the U.S. population in 2004. In 2009, more than 570,000 people in the United States were classified as having end-stage renal disease (ESRD), including nearly 400,000 dialysis patients and over 17,000 transplant recipients. A patient is determined to have ESRD when he or she requires replacement therapy, including dialysis or kidney transplantation. The National Health and Nutrition Examination Survey (NHANES) spanning 2005-2006 showed that 29% of US adults 18 years of age and older were hypertensive; of those with high blood pressure (BP), 78% were aware they were hypertensive, 68% were being treated with antihypertensive agents, and only 64% of treated individuals had controlled hypertension (12, 13). In addition, data from NHANES 1999-2006 estimated that 30% of adults 20 years of age and older have prehypertension, defined as an untreated SBP of 120-139 mm Hg or untreated DBP of 80-89 mm Hg (12, 13).

Hypertension is the most important modifiable risk factor for coronary heart disease (the leading cause of death in North America), stroke (the third leading cause), congestive heart failure, end-stage renal disease, and peripheral vascular disease. The 2010 Institute for Clinical Systems Improvement (ICSI) guideline (14) on the diagnosis and treatment of hypertension indicates that systolic blood pressure (SBP) should be the major factor used to detect, evaluate, and treat hypertension in adults aged 50 years and older. The 2013 joint European Society of Hypertension (ESH) (15) and European Society of Cardiology (ESC) (16) guidelines recommend that ambulatory blood-pressure monitoring (ABPM) be incorporated into the assessment of cardiovascular risk factors and hypertension.

The JNC 7 (17) identifies the following as major cardiovascular risk factors:

  • Hypertension: component of metabolic syndrome
  • Tobacco use, particularly cigarettes, including chewing tobacco
  • Elevated LDL cholesterol (or total cholesterol ≥240 mg/dL) or low HDL cholesterol: component of metabolic syndrome
  • Diabetes mellitus: component of metabolic syndrome
  • Obesity (BMI ≥30 kg/m2): component of metabolic syndrome
  • Age greater than 55 years for men or greater than 65 years for women: increased risk begins at the respective ages; the Adult Treatment Panel III used earlier age cut points to suggest the need for earlier action
  • Estimated glomerular filtration rate less than 60 mL/min
  • Microalbuminuria
  • Family history of premature cardiovascular disease (men < 55 years; women < 65 years)
  • Lack of exercise

The Eighth Report of the JNC (JNC 8), released in December 2013, no longer recommends only thiazide-type diuretics as initial therapy in most patients. In essence, JNC 8 recommends treating to 150/90 mm Hg in patients over age 60 years; for everybody else, the goal BP is 140/90 mm Hg (18).

Biomarkers Associated with Hypertension

The biomarkers associated with hypertension are for the most part derived from features that characterize the disordered physiology. We might first consider the measurement of blood pressure itself. It then becomes necessary to analyze the physiological elements that largely contribute to blood pressure. Finally, there are several biomarkers that have loomed large as measures of myocardial function or myocardial cell death, and that are also not independent of renal function, which serve as indicators of short-term and long-term cardiovascular status. Having already indicated the importance of measuring pulse, diastolic and systolic blood pressure in the routine examination of physical status, which is related to cardiac output, we shall pay attention to the pulse pressure and pulse wave velocity. These were defined in the preceding discussion. They are critically related to the development of hypertension and, in the long term, they emerge significantly earlier than congestive heart failure, chronic kidney disease, acute coronary syndrome, stroke, or cardio-renal syndrome.

Even though cardiovascular disease (CVD) is the leading cause of death in developed countries, it is not fully predicted by classic risk factors; there are elements of the risk factor association that need further exploration and will be dissected, such as activity level, obesity, lipids, diabetes mellitus, family history and stress. Further analysis will point to endocrine and/or metabolic factors that drive cardiovascular risk.

In taking into account the blood pressure measurements, we consider the pulse pressure (PP) and the pulse wave velocity (PWV). If we refer back to the stroke volume and the Law of the Heart, the systolic blood pressure (SBP) is increased with increased left ventricular output, which raises the left ventricular (LV) afterload. This coincides with a decrease in diastolic pressure (DBP) that accompanies a change in coronary artery perfusion (CAP). Thus, many studies point to increased SBP as a strong risk factor for stroke and CVD. However, there are sufficient studies indicating that the brachial artery pulse pressure (PP) is a strong determinant of CVD and stroke, and these two elements, SBP and brachial artery PP, may be indicators of increased arterial stiffness in hypertensive patients and the general population. Brachial PP is also a determinant of recurrent events after acute coronary syndrome (ACS) or with left ventricular hypertrophy (LVH), of the risk of CHF in the aging population, and of all-cause mortality in the general population. In addition, the aortic PWV calculated from the Framingham equations was a suitable predictor of CVD risk. In a classic study of arterial stiffness and of CVD and all-cause mortality in an essential hypertension cohort at the Broussais Hospital between 1980 and 1996 (19), the carotid-femoral PWV was measured as an indicator of aortic stiffness, and it was found to be significantly associated with all-cause and CVD mortality independent of previous CVD, age, and diabetes. They tested the hypothesis that aortic stiffness is a predictor of cardiovascular and all-cause mortality in hypertensive patients based on the consideration that the elastic properties of the aorta and central arteries are the major determinants of systemic arterial impedance, and the PWV measured along the aortic and aorto-iliac pathway is the most clinically relevant.
They assessed arterial stiffness by measuring the PWV according to the Moens-Korteweg equation, in which PWV increases with the square root of the elastic modulus of the arterial wall, so that stiffer arteries transmit the pulse wave faster (20).

PWV as a Diagnostic Test

To assess the performance of PWV considered as a diagnostic test, with the use of receiver operating characteristic (ROC) curves, they calculated sensitivities, specificities, positive predictive values, and negative predictive values of PWV at different cutoff values, first to detect the presence of AA in the overall population and second to detect patients with high 10-year cardiovascular mortality risk in the subgroup of 462 patients without AA with age range from 30 to 74 years. Optimal cutoff values of PWV were defined as the maximization of the sum of sensitivity and specificity.
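The cutoff-selection rule they describe, maximizing the sum of sensitivity and specificity (Youden's index), can be sketched as follows; the data in the example are hypothetical:

```python
def optimal_cutoff(values, labels):
    """Find the cutoff maximizing sensitivity + specificity (Youden's index).

    values: the measured variable (e.g. PWV in m/s); labels: 1 = event, 0 = no event.
    A subject tests 'positive' when the value is >= the candidate cutoff.
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    best_cut, best_score = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        score = tp / positives + tn / negatives
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut, best_score

# Hypothetical PWV values (m/s) and 10-year event labels
cut, score = optimal_cutoff([10, 11, 12, 14, 15, 16], [0, 0, 0, 1, 1, 1])
# cut == 14: every event has PWV >= 14 and every non-event falls below it
```

Each candidate cutoff corresponds to one operating point on the ROC curve; the chosen point is the one farthest above the diagonal.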

The main finding of the study was that PWV was a strong predictor of cardiovascular risks as determined by the Framingham equations in a population of treated or untreated subjects with essential hypertension (21). They measured the PWV from foot-to-foot transit time in the aorta for a noninvasive evaluation of regional aortic stiffness, which allows an estimate of the distance traveled by the pulse. The presence of a PWV > 13 m/s, taken alone, appeared as a strong predictor of cardiovascular mortality with high performance values (21). Their work and other studies (22, 23) established increased pulse pressure, the major hemodynamic consequence of increased aortic PWV, as a strong independent predictor of cardiac mortality, mainly MI, in populations of normotensive and hypertensive subjects.

In addition to the findings above, the PWV was found to be an independent predictor of future increase in SBP and of incident hypertension in the Baltimore study (21). The authors reported that in a subset of 306 subjects who were normotensive at baseline, hypertension developed in 105 (34%) during a median follow-up of 4.3 years (range 2 to 12 years). PWV was also an independent predictor of incident hypertension (hazard ratio 1.10 per 1 m/s increase in PWV, 95% confidence interval 1.00 to 1.30, p = 0.03) in individuals with a follow-up duration greater than the median. The authors (21) concluded that carotid-femoral PWV measured using nondirectional transcutaneous Doppler probes (model 810A, 9 to 10-Mhz probes, Parks Medical Electronics, Inc., Aloha, Oregon) could be done to identify normotensive individuals who should be targeted for the implementation of interventions aimed at preventing or delaying the progression of subclinical arterial stiffening and the onset of hypertension.  They reported that age, BMI, and MAP were independently associated with higher SBP on the last visit (Table IV); in addition, PWV was also independently associated with higher SBP on the last visit, and explained 4% of its variance. As shown in Table V, age, BMI, and MAP (p = 0.09, p = 0.009, p < 0.0001 respectively for the interaction terms with time) were predictors of the longitudinal changes in SBP. In addition, PWV was also an independent predictor of the longitudinal increase in SBP (p = 0.003 for the interaction term with time).

In addition, they report that in the group with follow-up duration greater than the median (in which all subjects remained normotensive for the first 4.3 years), beyond age (hazard ratio [HR] 1.02 per 1 year, 95% confidence interval [CI] 0.99 to 1.04, p = 0.2) and SBP (HR 1.05 per 1 mm Hg, 95% CI 1.01 to 1.09, p = 0.006), both HDL (HR 0.96 per 1 mg/dl, 95% CI 0.93 to 0.99, p = 0.02) and PWV (HR 1.10 per 1 m/s, 95% CI 1.00 to 1.30, p = 0.03) (Fig. 1) were independent predictors of incident HTN.

Their findings in a longitudinal projection indicate that PWV, a marker of central arterial stiffening, is an independent determinant of longitudinal SBP increase in healthy BLSA volunteers, and an independent risk factor for incident hypertension among normotensive subjects followed up for longer than 4 years. The study was accompanied by a commentary in the same journal that states: “Pulse wave velocity (PWV) is a simple measure of the time taken by the pressure wave to travel over a specific distance. By virtue of its intrinsic relation to the mechanical properties of the artery through the Moens–Korteweg formula (PWV = √(Eh/(2Rρ)), where E is the Young’s modulus of the arterial wall, h the wall thickness, R the end-diastolic radius and ρ the density of blood) (20), and buoyed by a number of longitudinal studies that reported on the independent predictive value of PWV measurement for cardiovascular events and mortality in various populations, PWV is now widely accepted as the ‘gold standard’ measure of arterial stiffness.
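As a numerical check on the Moens–Korteweg relation, a minimal sketch; the wall parameters below are illustrative textbook-scale values, not data from the study:

```python
import math

def moens_korteweg_pwv(E_pa, h_m, R_m, rho_kg_m3=1060.0):
    """PWV = sqrt(E*h / (2*R*rho)): stiffer (higher E) or thicker walls raise PWV;
    a larger radius or denser blood lowers it."""
    return math.sqrt(E_pa * h_m / (2.0 * R_m * rho_kg_m3))

# Illustrative aortic values: E = 0.5 MPa, wall thickness 2 mm, radius 12 mm
pwv = moens_korteweg_pwv(0.5e6, 0.002, 0.012)   # ~6.3 m/s
```

Doubling the elastic modulus raises PWV only by a factor of √2, which is why clinically observed PWV increases with stiffening are gradual rather than dramatic.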

 

 

 

Table IV. Multiple Regression Analysis Evaluating the Predictors of Last-Visit SBP (21)

Variable                     Parameter Estimate   Standard Error   p Value
Age (yrs)                    0.32                 0.06             <0.0001
Gender (men)                 0.65                 1.78             0.71
Race (white)                 −1.22                2.00             0.54
Smoking (ever)               2.48                 1.61             0.12
BMI (kg/m2)*                 0.61                 0.22             0.006
MAP (mm Hg)*                 0.60                 0.08             <0.0001
PWV (m/s)*                   1.56                 0.38             <0.0001
Heart rate (beats/min)       0.08                 0.06             0.20
Total cholesterol (mg/dl)    −0.005               0.02             0.83
Triglycerides (mg/dl)        −0.009               0.01             0.50
HDL cholesterol (mg/dl)      −0.001               0.07             0.98
Glucose (mg/dl)              −0.02                0.06             0.75

Table V. Predictors of Longitudinal SBP Derived From a Linear Mixed-Effects Regression Model (21)

Variable       Coefficient   Standardized Coefficient   95% Confidence Interval   p Value
Time (yrs)     3.14          0.14                       0.61 to 5.66              0.02
Age (yrs)      −0.37         0.25                       −0.68 to −0.06            0.02
Age2 (yrs2)*   0.006         0.08                       0.002 to 0.008            <0.0001
Gender (men)   0.61          0.03                       −1.26 to 2.47             0.52
BMI (kg/m2)*   0.25          0.11                       −0.01 to 0.50             0.06
MAP (mm Hg)*   1.03          0.47                       0.93 to 1.12              <0.0001
PWV (m/s)      0.29          0.12                       −0.16 to 0.74             0.21
Time × age*    0.02          0.04                       −0.002 to 0.038           0.09
Time × BMI*    0.10          0.06                       0.02 to 0.183             0.009
Time × MAP*    −0.08         −0.12                      −0.11 to −0.05            <0.0001
Time × PWV*    0.22          0.08                       0.07 to 0.36              0.003

 

 

Figure 1 (21)

http://content.onlinejacc.org/data/Journals/JAC/23115/10065_gr1.jpeg

Figure 2 (21)

http://content.onlinejacc.org/data/Journals/JAC/23115/10065_gr2.jpeg

The interest in this physiological measure is illustrated by the increasing number and diversity of research publications in this arena related to human hypertension, relating PWV to pathophysiological processes (for example, homocysteine, inflammation and extracellular matrix turnover and disorders related to hypertension, such as sleep apnea). The epidemiology, genetic associations and prognostic implications of PWV (and arterial stiffness) have also been reported as has the relationship to hemodynamics, cardiac structure and function.” (24) Furthermore, arterial stiffening may be “characterized by an increase in (central) PP and changes in the morphology of the arterial waveform, both of which can now be measured non-invasively using tonometers from commercially available devices. Wave reflection is typically characterized by aortic pressure augmentation (ΔP) and the augmentation index (ΔP/PP) (Figure 3)(24). Higher augmented pressure, as an index of wave reflection, has been linked to adverse clinical outcomes in different populations.

Figure 3 (24)

Analysis of the pressure waveform. The initial systolic pressure is labelled P1, and augmented pressure (ΔP) is typically measured as the difference between peak pressure (P2) and P1. Augmentation index is ΔP/PP. PP, pulse pressure. http://www.nature.com/jhh/journal/v22/n10/images/jhh200847f1.gif (24)

A review by Payne et al. (25) states that aortic stiffness and arterial pulse wave reflections determine elevated central systolic pressure and are associated with risk of adverse cardiovascular outcomes. This may reflect an impaired compensatory remodeling mechanism, acting through matrix metalloproteinases, that normally offsets changes in wall stress; it is possibly related to angiotensin II and to inhibition of the vascular adhesion protein semicarbazide-sensitive amine oxidase, with consequent reduction in elastin fiber cross-linking. This has implications for pharmacological agents that target age-related advanced glycation end-product cross-links. It also brings NO into consideration as playing a considerable role. But they caution that the endogenous NO synthase inhibitors asymmetric dimethylarginine and L-NG-monomethyl arginine, which are associated with clinical atherosclerosis, do not appear to be associated with arterial stiffening. The matter leaves much to be explained. The mechanisms underlying arterial stiffness could well require insights into inflammation, calcification, vascular growth and remodeling, and endothelial dysfunction. Nevertheless, arterial stiffness is independently associated with cardiovascular outcome in most of the situations where it has been examined. Given this train of thinking, O’Rourke (26) considers a progressive arterial dilatation with repeated cycles of stress that leads to degeneration of the arterial wall and increases the pressure wave impulse and wave velocity, augmenting the pressure in late systole. Drugs may reduce wave reflection, but have no direct effect on arterial stiffness. However, reduction in wave reflection decreases aortic systolic pressure augmentation. DK Arnett (26) depicts the effect of persistently elevated blood pressure in the following diagram (Figure 4).

 

Figure 4 (26). Both transient and sustained stiffening of the artery are likely to be present in hypertension.

An initial elevation in blood pressure may establish a positive feedback in which hypertension biomechanically increases arterial stiffness without any structural change. This elevated blood pressure might later lead to additional vascular hypertrophy and hyperplasia, collagen deposition, atherosclerosis, and fixed elevations in arterial stiffness. As to a genetic factor, she refers to a gene contributing to pulse pressure on chromosome 8, located at 32 cM, a region that also contains the lipoprotein lipase (LPL) gene, which has been associated with hypertension; LPL may therefore be an important candidate gene for pulse pressure. She specifically identifies genetic regions contributing to aortic compliance in African American sibships ascertained for hypertension (Figure 5), suggesting the presence of influential genetic regions in these sibships (27). Collectively, these two studies, to our knowledge the first of their kind, indicate the presence of genetic factors influencing hypertension.

Other authors state that PWV has a direct relationship to intrinsic elasticity of the arterial wall, and that it is an independent predictor of CVD-related morbidity and mortality, but it is not associated with classical risk factors for atherosclerosis (28). They point out that PWV does not increase during early stages of atherosclerosis, as measured by intima-media thickness and non-calcified atheroma, but it does increase in the presence of aortic calcification that occurs with advanced atherosclerotic plaque.
PWV measurement: carotid-to-femoral PWV is calculated by dividing the distance (d) between the two arterial sites by the difference in time of pressure wave arrival between the carotid (t1) and femoral artery (t2), referenced to the R wave of the electrocardiogram.
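The foot-to-foot calculation described above reduces to a one-line formula; the numbers in the example are illustrative:

```python
def carotid_femoral_pwv(distance_m, t_carotid_s, t_femoral_s):
    """PWV = d / (t2 - t1): path length divided by the difference in pulse
    arrival times at the femoral (t2) and carotid (t1) sites, both referenced
    to the ECG R wave."""
    return distance_m / (t_femoral_s - t_carotid_s)

# Illustrative: 0.5 m path, pulse arrives 60 ms later at the femoral site
pwv = carotid_femoral_pwv(0.5, 0.010, 0.070)    # ~8.3 m/s
```

Note that a stiffer aorta shortens the transit time, so the same path length yields a higher PWV.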

Figure 5. Linkage of arterial compliance on chromosome 2: HyperGEN27

Widening of the pulse pressure is the major cause of the age-related increase in prevalence of hypertension and is related to arterial stiffening (28). Commonly used points for measuring the PWV are the carotid and femoral arteries, because they are superficial and easy to access. Arterial distensibility is measured by the Bramwell and Hill equation (29): PWV = √(V × ΔP / (ρ × ΔV)), where ρ is blood density. This is shown in Figure 6.
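The Bramwell–Hill form can likewise be evaluated directly. The segment volume and distension below are illustrative assumptions chosen only to show the unit handling:

```python
import math

MMHG_TO_PA = 133.322  # 1 mmHg in pascals

def bramwell_hill_pwv(V_ml, dV_ml, dP_mmhg, rho_kg_m3=1060.0):
    """PWV = sqrt(V * dP / (rho * dV)); the volumes enter only as a ratio,
    so any consistent volume unit works, while dP must be in Pa for m/s output."""
    return math.sqrt((V_ml / dV_ml) * dP_mmhg * MMHG_TO_PA / rho_kg_m3)

# Illustrative: a 100 mL aortic segment distending by 16 mL over a 40 mmHg pulse
pwv = bramwell_hill_pwv(100.0, 16.0, 40.0)      # ~5.6 m/s
```

A less distensible segment (smaller ΔV for the same ΔP) drives the ratio up and hence the computed PWV, which is the clinical link between stiffness and velocity.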

 

Figure 6 (28)

 

Furthermore, these authors (28) report arterial stiffness increases with age by approximately 0.1 m/s/y in East Asian populations with low prevalences of atherosclerosis, but some authors have found accelerated stiffening between 50 and 60 years of age. In contrast, stiffness of peripheral arteries increases less or not at all with increasing age. Again, ageing of the arterial media is associated with increased expression of matrix metalloproteinases (MMP), which are members of the zinc-dependent endopeptidase family and are involved in degradation of vascular elastin and collagen fibers. Several different types of MMP exist in the vascular wall, but in relation to arterial stiffness, much interest has focused on MMP-2 and MMP-9.  This concludes the discussion of PP and PWV in the evolution of hypertension.

 

Diagnostic Biomarkers of Essential Hypertension

Ioannidis and Tzoulaki (30) reviewed the literature on 10 popular “new” biomarkers and found that each one had accrued more than 6,000 publications. The predictive effects of these popular blood biomarkers for coronary heart disease in the general population are listed in Table VI (31).

 

Table VI.* Predictive Value of New Biomarkers (30, 31)

Biomarker            Adjusted Relative Risk (95% C.I.)
Triglycerides        0.99 (0.94–1.05)
C-reactive protein   1.39 (1.32–1.47)
Fibrinogen           1.45 (1.34–1.57)
Interleukin 6        1.27 (1.19–1.35)
BNP or NT-proBNP     1.42 (1.24–1.63)
Serum albumin        1.2 (1.1–1.3)
ICAM-1               (0.75–1.64)
Homocysteine         1.05 (1.03–1.07)
Uric acid            1.09 (1.03–1.16)

*Ioannidis and Tzoulaki, from Giles
The majority of these biomarkers show small effects, if any, even in combination. Giles (31) points out that an elevated homocysteine level might be of great importance to a young person with a myocardial infarction and a positive family history of similar occurrences. Emerging biomarkers, e.g., asymmetric and symmetric dimethylarginine and galectin-3, promise to be more specific biomarkers grounded in the pathophysiology of cardiovascular disease. Even then, blood pressure remains the biomarker par excellence for hypertension and for many other cardiovascular entities.

The importance of blood pressure was highlighted by the report of the cardiovascular lifetime risk pooling project (10). Starting at 55 years of age, 61,585 men and women were followed over an average of 14 years, i.e., 700,000 person-years. Individuals who maintained or decreased their blood pressure to normal levels had the lowest remaining lifetime risk for cardiovascular disease (22–41%) compared with individuals who had or developed hypertension by 55 years of age (42–69%). The study indicated that efforts should continue to emphasize the importance of lowering blood pressure and avoiding or delaying the incidence of hypertension in order to reduce the lifetime risk for cardiovascular disease.

A small study involving 120 hypertensive patients with or without heart failure tried to establish a multi-biomarker approach to heart failure (HF) in hypertensive patients using N-terminal proBNP (32). The following biomarkers were included in the study: collagen III N-terminal propeptide (PIIINP), cystatin C (CysC), lipocalin-2/NGAL, syndecan-4, tumor necrosis factor-α (TNF-α), interleukin 1 receptor type I (IL1R1), galectin-3, cardiotrophin-1 (CT-1), transforming growth factor β (TGF-β) and N-terminal pro-brain natriuretic peptide (NT-proBNP). The highest discriminative value for HF was observed for NT-proBNP (area under the receiver operating characteristic curve (AUC) = 0.873) and TGF-β (AUC = 0.878). On the basis of ROC curve analysis, they found that CT-1 > 152 pg/mL, TGF-β < 7.7 ng/mL, syndecan > 2.3 ng/mL, NT-proBNP > 332.5 pg/mL, CysC > 1 mg/L and NGAL > 39.9 ng/mL were significant predictors of overt HF. There was only a small improvement in the predictive ability of the multi-biomarker panel including the four biomarkers with the best performance in the detection of HF (NT-proBNP, TGF-β, CT-1, CysC) compared to the panel with NT-proBNP, TGF-β and CT-1 (without CysC). The biomarkers with different pathophysiological backgrounds (NT-proBNP, TGF-β, CT-1) give additive prognostic value for incident HF compared to NT-proBNP alone.
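The published single-biomarker cutoffs could be applied as simple screening flags, as in the sketch below; this is a hypothetical illustration of the thresholds only, not the authors' statistical model, and the patient values are invented:

```python
# Cutoffs reported in the study; the direction indicates which side predicts overt HF.
CUTOFFS = {
    "CT-1":      (">", 152.0),   # pg/mL
    "TGF-beta":  ("<", 7.7),     # ng/mL
    "syndecan":  (">", 2.3),     # ng/mL
    "NT-proBNP": (">", 332.5),   # pg/mL
    "CysC":      (">", 1.0),     # mg/L
    "NGAL":      (">", 39.9),    # ng/mL
}

def hf_flags(sample):
    """Return, per biomarker, whether this patient's value crosses the HF cutoff."""
    return {
        name: (sample[name] > cut if op == ">" else sample[name] < cut)
        for name, (op, cut) in CUTOFFS.items()
    }

# Invented example values for one patient
patient = {"CT-1": 180.0, "TGF-beta": 6.5, "syndecan": 2.0,
           "NT-proBNP": 400.0, "CysC": 0.9, "NGAL": 45.0}
flags = hf_flags(patient)
```

Counting or weighting such flags is the simplest form of a multi-biomarker panel; the study's point is that the gain over NT-proBNP alone is modest.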

Inflammation has been associated with the pathophysiology of hypertension and vascular damage. Resistant hypertensive patients (RHTN) have an unfavorable prognosis due to poor blood pressure control and higher prevalence of target organ damage. Endothelial dysfunction and arterial stiffness are involved in this condition. Previous studies showed that RHTN patients have higher arterial stiffness and endothelial dysfunction than controlled hypertensive and normotensive subjects. The relationship between high blood pressure levels and arterial stiffness may be explained, in part, by inflammatory pathways. Previous studies also found that hypertensive subjects have higher levels of inflammatory cytokines including TNF-α, IL-10, IL-1β and CRP. Moreover, IL-1β correlates with arterial stiffness and levels of blood pressure, which are particularly high in patients with resistant hypertension. Increased inflammatory cytokine levels might be related to the development of vascular damage and to the higher cardiovascular risk of resistant hypertensive patients. Elevated BP may cause cardiovascular structural and functional alterations leading to organ damage such as left ventricular hypertrophy and arterial and renal dysfunction. TNF-α inhibition reduced systolic BP and endothelial inflammation in spontaneously hypertensive rats (SHR) [33]. They also found that IL-1β correlates with arterial stiffness and levels of blood pressure, even after adjustment for age and glucose [33]. These investigators then demonstrated that levels of isoprostane, an oxidative stress marker, were associated with endothelial dysfunction in these patients [33].

Chao et al. carried out studies of kallistatin (34-36). Kallistatin is an endogenous protein in human plasma as a tissue Kallikrein-Binding Protein (KBP). Tissue kallikrein is a serine protease that releases vasodilating kinin peptides from kininogen substrate. The tissue kallikrein-kinin system is involved in mediating beneficial effects in hypertension as well as cardiac, cerebral and renal injury. KBP was later identified as a serine protease inhibitor (serpin) because of its ability to inhibit tissue kallikrein activity, and was subsequently named “kallistatin”. Kallistatin is mainly expressed in the liver, but is also present in the heart, kidney and blood vessel. Kallistatin protein contains two structural elements: an active site and a heparin-binding domain. The active site of kallistatin is crucial for complex formation with tissue kallikrein, and thus tissue kallikrein inhibition.

Kallistatin is expressed in tissues relevant to cardiovascular function, and has consequently been shown to have vasodilating properties.  Kallistatin has pleiotropic effects in vasodilation and inhibition of inflammation, angiogenesis, oxidative stress, fibrosis, and cancer progression. Injection of a neutralizing Kallistatin antibody into hypertensive rats aggravates cardiovascular and renal injury in association with increased inflammation, oxidative stress and tissue remodeling.  Neither the blood pressure-lowering effect nor the vasorelaxation ability of kallistatin is abolished by icatibant (Hoe140, a kinin B2 receptor antagonist), indicating that kallistatin-mediated vasodilation is unrelated to the tissue kallikrein-kinin system.

The findings reported indicate that kallistatin exerts beneficial effects against hypertension and organ damage. Kallistatin levels in circulation, body fluids or tissues were lower in patients with liver disease, septic syndrome, diabetic retinopathy, severe pneumonia, inflammatory bowel disease, and cancer of the colon and prostate. In addition, reduced plasma kallistatin levels are associated with adiposity and metabolic risk in apparently healthy African American youths. Considered a negative acute-phase protein, circulating kallistatin levels as well as hepatic expression are rapidly reduced within 24 hours after lipopolysaccharide (LPS)-induced endotoxemia in mice. Similarly, circulating kallistatin levels are markedly decreased in patients with septic syndrome and liver disease. Taken together, the studies indicate that kallistatin exhibits potent anti-inflammatory activity.

The pathogenesis of hypertension and cardiovascular and renal diseases is tightly linked to increased oxidative stress and reduced NO bioavailability (37-39). Time-dependent elevation of circulating oxygen species are associated with reduced kallistatin levels in animal models of hypertension and cardiovascular and renal injury. Stimulation of NO formation by kallistatin may lead to inhibition of oxidative stress and thus multi-organ damage. On the other hand, endogenous kallistatin depletion by neutralizing antibody increased oxidative stress and aggravated cardiovascular and renal damage.

A human kallistatin gene polymorphism has been shown to correlate with a decreased risk of developing acute kidney injury during septic shock. Kallistatin levels are markedly reduced in both humans and mice with sepsis syndrome. However, kallistatin administration protects against lethality and organ injury in animal models of toxic septic shock. Moreover, kallistatin levels are decreased in patients with liver disease, septic shock, inflammatory bowel disease, severe pneumonia and acute respiratory distress syndrome. Taken together, the results indicate that kallistatin has the potential to be a molecular biomarker for patients with sepsis, cardiovascular and metabolic disorders.

Pulmonary hypertension (PH) is defined as a mean pulmonary artery pressure of ≥25 mmHg at rest or ≥30 mmHg with exercise. Right heart catheterization is required for the definitive diagnosis. Subsequent investigations are instituted to further characterize the disease. The 6-min walk test (6MWT), a measure of exercise capacity, and the New York Heart Association (NYHA)/World Health Organization (WHO) functional classification, a measure of severity, are used to follow the clinical course while receiving treatment, and both correlate with disease severity and prognosis (43).

Pulmonary arterial hypertension (PAH) is a progressive disease of the pulmonary vasculature that leads to exercise limitation, right heart failure, and death. There is a need for biomarkers that can aid in early detection, disease surveillance, and treatment monitoring in PAH. Several potential molecules have been investigated; however, only brain natriuretic peptide is currently recommended at diagnosis and for follow-up of PAH patients.

ANP is released from storage granules in atrial tissue, while BNP is secreted from ventricular tissue in a constitutive fashion. ANP secretion is stimulated by atrial stretch caused by atrial volume overload; BNP is released in response to ventricular stretch. Natriuretic peptides act on the kidney, causing natriuresis and diuresis, and relax vascular smooth muscle, causing arterial and venous dilatation, leading to reduced blood pressure and ventricular preload. ANP and BNP are released as prohormones and then cleaved into the active peptide and an inactive N-terminal fragment (43).

Natriuretic peptide precursors are released in response to atrial and ventricular stretch and cleaved into active peptides and inactive fragments; the active peptides then stimulate conversion of guanosine 5′-triphosphate (GTP) to cyclic guanosine monophosphate (cGMP), leading to their various physiological actions.

There are a number of confounding factors in the interpretation of natriuretic peptide levels, including left heart disease, sex, age and renal dysfunction. Since most studies exclude patients with left heart disease and renal dysfunction, it becomes problematic extrapolating these results to an unselected population (43).

Endothelin-1 (ET-1) is a peptide found in abundance in the human lung and, through action of endothelin receptors (ETA and ETB) on vascular smooth muscle cells, is implicated in the pathogenesis of PAH. Endothelin receptor antagonists are approved for the treatment of PAH. Levels of circulating ET-1 and related molecules are logical biomarkers of interest in PAH. ET-1 is elevated in PAH compared to controls, and correlates with pulmonary hemodynamic parameters. In addition, higher ET-1 levels are associated with increased mortality in patients treated for PAH. ET-1’s precursor, big-ET-1, has a longer half-life and hence is more stable than ET-1.

Endothelin-1 (ET-1) is a potent endogenous vasoconstrictor and proliferative cytokine. The ET-1 gene is translated to prepro-ET-1, which is then cleaved, by the action of an intracellular endopeptidase, to form the biologically inactive big ET-1. ET-converting enzymes further cleave this to form functional ET-1. There are two ET receptor isoforms, termed type A (ETA), located predominantly on vascular smooth muscle cells, and type B (ETB), predominantly expressed on vascular endothelial cells but also on arterial smooth muscle. Activation of both receptor subtypes, when located on vascular smooth muscle, results in vasoconstriction and cell proliferation. In addition, the endothelial ETB receptor mediates vasodilatation and clearance of ET-1 (43).

Prepro-ET-1 is cleaved to inactive big ET-1 and then further cleaved to form active ET-1. This acts on vascular smooth muscle via the ETA and ETB receptors, causing vasoconstriction and cell proliferation, and on endothelial cells via ETB receptors, releasing nitric oxide (NO) and prostacyclin (PGI2), causing vasorelaxation.

As a biomarker, asymmetric dimethylarginine (ADMA) has been evaluated in several different classes of PH (43, 44). In IPAH, plasma levels are significantly higher than in healthy, matched controls. In such patients, plasma ADMA correlates positively with right atrial pressure, and negatively with mixed venous oxygen saturation, stroke volume, cardiac index and survival. On stepwise multiple regression analysis, ADMA is an independent predictor of mortality and, using Kaplan–Meier survival curves, patients with supramedian ADMA levels have significantly worse survival than those with inframedian levels.

In patients with idiopathic PAH, plasma levels of angiopoietin-1 (Ang-1) and angiopoietin-2 (Ang-2) were higher than in healthy controls. Moreover, higher plasma levels of Ang-2 were associated with lower CI and mixed venous oxygen saturation (SvO2) and higher PVR, and, with therapy initiation, changes in Ang-2 correlated with changes in hemodynamics (45, 46).

Endostatin is an antiangiogenic peptide. It is synthesized by myocardium, is detectable in the peripheral circulation of patients with decompensated heart failure, and predicts mortality (48). In PAH, reduced RV myocardial oxygen delivery is felt to contribute to a transition from RV adaptation to failure (46).

Cyclic guanosine monophosphate (cGMP) is an intracellular second messenger of nitric oxide and an indirect marker of natriuretic peptide production (46).

Human pentraxin 3 (PTX3) is a protein synthesized by vascular cells that regulates angiogenesis, inflammation, and cell proliferation (46).

N-terminal propeptide of procollagen III (PIIINP), carboxy-terminal telopeptide of collagen I (CITP), matrix metalloproteinase-9 (MMP-9), and tissue inhibitor of metalloproteinase I (TIMP-1) are circulating markers of collagen turnover and extracellular matrix remodeling (46).

Osteopontin (OPN) is a matricellular protein that mediates cell migration, adhesion, remodeling, and survival of the vascular and inflammatory cells (46).

F2-isoprostane is a marker of lipid peroxidation of arachidonic acid, which stimulates endothelial cell proliferation and ET-1 synthesis and may play a role in the pathogenesis of PAH (46).

Circulating fibrocytes are bone marrow-derived cells (CD45 /collagen I ) that contribute to organ fibrosis and extracellular matrix deposition (46).

Circulating microRNAs (miRs) have also been investigated as candidate biomarkers (46).

Despite many other substances being investigated as potential biomarkers in PAH, more research is needed to validate the results of small studies and assess their clinical utility. Widespread clinical use of current investigational biomarkers will require validated clinical laboratory techniques and increased knowledge of levels in the healthy population as well as other disease states.

Here are important tests in clinical practice (47):

- 6-min walk distance
- Cardiac index
- WHO functional class (WHO FC)
- PIIINP: higher tertiles are associated with worse disease, worse renal function and higher right atrial pressure (RAP)
- CITP: a marker of vascular remodeling

Recent guidelines (17, 18) encourage the use of screening examinations, such as an echocardiogram (UCG), in high-risk populations for the early detection of PAH. To detect PAH in patients with connective tissue disease (CTD), the obvious screening tests are an UCG and spirometry, including assessment of the diffusing capacity of the lung for carbon monoxide (DLCO). Previous studies have suggested that B-type natriuretic peptide (BNP) and its N-terminal prohormone (NT-proBNP) are potential biomarkers for PAH. However, neither BNP nor NT-proBNP is a specific biomarker of the degeneration of the pulmonary artery; rather, they are biomarkers of cardiac burden resulting from right heart failure.

Human pentraxin 3 (PTX3) is a candidate biomarker specific for PAH, reflecting pulmonary vascular involvement. Pentraxins are divided into short and long pentraxins on the basis of their primary structure.
C-reactive protein (CRP) and serum amyloid P are the classic short pentraxins that are produced in the liver in response to systemic inflammatory cytokines (48). In contrast, PTX3 is one of the long pentraxins. It is synthesized by local vascular cells, such as smooth muscle cells, endothelial cells and fibroblasts, as well as innate immunity cells at sites of inflammation. PTX3 plays a key role in the regulation of cell proliferation and angiogenesis (49).

Increased plasma PTX3 levels have been reported in patients with acute myocardial injury in the 24 h after admission to hospital, with levels returning to normal after 3 days. Similarly, PTX3 levels are higher in patients with unstable angina pectoris, with the changes in PTX3 levels found to be independent of other coronary risk factors, such as obesity and diabetes mellitus. Finally, high serum PTX3 levels have been reported in patients with vasculitis, such as small-vessel vasculitis and Takayasu aortitis.

Mean plasma PTX3 concentrations in the CTD-PAH and CTD patients were 5.02 ± 0.69 ng/mL (range 1.82–12.94 ng/mL) and 2.40 ± 0.14 ng/mL (range 0.70–4.29 ng/mL), respectively (Table 2). Log transformation of the data revealed significantly higher PTX3 levels in CTD-PAH than in CTD patients (1.49 ± 0.12 vs. 0.82 ± 0.06 log ng/mL, respectively; P = 0.001) (not shown) (50)
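Comparisons of right-skewed biomarker concentrations, as above, are typically made on log-transformed values. The sketch below is illustrative Python, not the study's code, and the concentration values are hypothetical; it shows the log10 transformation and a Welch t statistic for two independent groups.

```python
import math
import statistics

def log10_summary(values):
    """Mean and SD of log10-transformed concentrations (log ng/mL)."""
    logs = [math.log10(v) for v in values]
    return statistics.mean(logs), statistics.stdev(logs)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical PTX3 concentrations in ng/mL (illustrative only)
ctd_pah = [1.8, 3.5, 5.0, 7.2, 12.9]
ctd_alone = [0.7, 1.5, 2.4, 3.1, 4.3]
t_stat = welch_t([math.log10(v) for v in ctd_pah],
                 [math.log10(v) for v in ctd_alone])
```

Working in log units makes the group means comparable to the "log ng/mL" values reported in the study.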

Figure 1. Serum pentraxin 3 (PTX3) concentrations in 50 patients with pulmonary arterial hypertension (PAH) and 100 healthy controls, and their correlation with serum concentrations of other biomarkers. A: Comparison of PTX3 concentrations in PAH patients and healthy controls. Mean plasma PTX3 concentrations were 4.40 ± 0.37 and 1.94 ± 0.09 ng/mL in the PAH patients and controls, respectively. B: Distribution of log-transformed PTX3 concentrations in PAH patients and healthy controls. C: Log-transformed PTX3 concentrations were significantly higher in patients with PAH than in healthy controls (1.34 ± 0.07 vs. 0.55 ± 0.05 log ng/mL, respectively; P<0.001). D, E: There was no correlation between plasma concentrations of PTX3 and either B-type natriuretic peptide (BNP; r=0.33, P=0.02) or C-reactive protein (CRP; r=0.21, P=0.14) in PAH patients. (not shown) (50)

 

Table 2. Clinical characteristics and biomarkers in patients with connective tissue disease, with or without pulmonary arterial hypertension.

                                    CTD-PAH (n = 17)    CTD alone (n = 34)    P-value

Age (years)                         56.3 ± 4.6          56.3 ± 2.7            0.990

No. women (%)                       15 (88)             31 (91)               0.745

No. with SSc (%)                    10 (59)             20 (59)               1

No. with heart failure (%)          1 (6)               0                     –

No. being treated for PAH (%)       17 (100)            0                     –

Serum PTX3 (ng/mL)                  5.02 ± 0.69         2.40 ± 0.14           0.001

Serum CRP (mg/dL)                   0.24 ± 0.09         0.22 ± 0.04           0.936

Serum BNP (pg/mL)                   189.3 ± 74.4        49.3 ± 12.1           0.014

CTD, connective tissue disease; PAH, pulmonary arterial hypertension; SSc, scleroderma.

Figure 3. Receiver operating characteristic (ROC) curves for pentraxin 3 (PTX3) and other biomarkers in patients with connective tissue disease (CTD). The area under the ROC curve (AUCROC) for PTX3 was 0.866 (95% confidence interval (CI) 0.757–0.974). The star indicates the threshold concentration of 2.85 ng/mL PTX3 that maximized true-positive and minimized false-negative results (sensitivity 94.1%, specificity 73.5%). The AUCROC for C-reactive protein (CRP) was 0.518 (95% CI 0.333–0.704), whereas that for B-type natriuretic peptide (BNP) was 0.670 (95% CI 0.497–0.842). (50) http://dx.doi.org/10.1371/journal.pone.0045834.g003
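A cutoff such as the starred 2.85 ng/mL threshold is commonly chosen by maximizing Youden's J (sensitivity + specificity - 1) along the ROC curve. The sketch below is a generic illustration of that search on hypothetical concentrations, not a reproduction of the study's analysis.

```python
def youden_threshold(cases, controls):
    """Return (cutoff, J) maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed concentration as a candidate cutoff."""
    best = (None, -1.0)
    for cut in sorted(set(cases) | set(controls)):
        sens = sum(x >= cut for x in cases) / len(cases)      # true-positive rate
        spec = sum(x < cut for x in controls) / len(controls) # true-negative rate
        j = sens + spec - 1
        if j > best[1]:
            best = (cut, j)
    return best

# Hypothetical PTX3 values (ng/mL) for illustration
cutoff, j = youden_threshold([3.1, 4.0, 5.2, 6.8], [1.0, 1.8, 2.2, 2.6])
```

In practice a threshold may also be picked to favor sensitivity over specificity, as a screening test for CTD-PAH would require.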

This study was designed to determine whether PTX3, the regulation of which is independent of that of the systemic inflammatory marker CRP, is a useful biomarker for diagnosing PAH. The investigators found that PTX3 may be a more sensitive biomarker for PAH than BNP, which is, to date, the most established biomarker for PAH, especially in patients with CTD-PAH. Their findings suggest that PTX3 does not reflect the cardiac burden due to the pulmonary hypertension, but rather the activity of pulmonary vascular degeneration, because PTX3 levels were significantly decreased after active treatment specifically for PAH (50). PLoS ONE 7(9): e45834. http://dx.doi.org/10.1371/journal.pone.0045834

Pharmacologic treatment for pulmonary arterial hypertension (PAH) remains suboptimal and mortality rates are still high, even with pulmonary vasodilator therapy. In addition, we have only an incomplete understanding of the pathobiology of PAH, which is characterized at the tissue level by fibrosis, hypertrophy and plexiform remodeling of the distal pulmonary arterioles. Novel therapeutic approaches that might target pulmonary vascular remodeling, rather than pulmonary vaso-reactivity, require precise patient phenotyping both in terms of clinical status and disease subtype. However, current risk stratification models are cumbersome and not precise enough for choosing or assessing the results of therapeutic intervention. Biomarkers used in patients with left heart failure, such as troponin-T and N-terminal pro-B-type natriuretic peptide (NT-proBNP) are elevated in PAH patients but tend to simply reflect increased circulating plasma volumes and elevated right heart pressure, rather than conveying information about disease mechanism.

In this issue of Heart, Calvier and colleagues (see page 390) (51) propose galectin-3 as a useful biomarker in PAH. The rationale for this hypothesis is that elevated aldosterone levels induce an increase in serum levels of galectin-3, a β-galactoside-binding lectin expressed by circulating myocytes, endothelial cells and other cardiovascular cell types. Among other effects, activation of the aldosterone/galectin-3 pathway promotes fibrosis (51), suggesting that elevated levels will correlate with the severity of PAH due to increased pulmonary arteriolar remodeling. To test this hypothesis, serum levels were measured in a total of 57 patients – 41 with idiopathic PAH (iPAH) and 16 with PAH associated with a connective tissue disorder (CTD). The magnitude of elevation in serum levels of aldosterone, galectin-3 and NT-proBNP each correlated with the severity of PAH. However, as shown in figure 1, although serum levels of galectin-3 were elevated in both iPAH and PAH-CTD patients, aldosterone was elevated only in those with iPAH.

In addition, vascular cell adhesion molecule 1 (VCAM-1) and proinflammatory, anti-angiogenic interleukin 12 (IL-12) were elevated only in PAH-CTD patients, not in those with iPAH. These data suggest that aldosterone and galectin-3 can be used as biomarkers “in tandem” that reflect both the severity and cause of PAH (52).

In the accompanying editorial, Maron (see page 335) summarizes the knowledge gaps in PAH and concludes: “Taken together, Calvier and colleagues provide a key contribution to an underdeveloped area of pulmonary vascular medicine and in doing so identify galectin-3/aldosterone as promising biomarker(s) for informing both disease pathobiology and clinical status in PAH. The rationale of this pursuit in PAH was based, in part, on lessons learned from left heart failure in which the importance of systemically circulating vasoactive factors to clinical trajectory is well established. In this regard, the current work not only develops a novel scientific avenue worthy of further investigation, but also adds to the evolving body of evidence implicating a role for neurohumoral activation in the pathophysiology of PAH”.

Rheumatoid arthritis (RA) affects about 1% of the population and is known to be a significant risk factor for cardiovascular disease, with a 3-fold increased risk of myocardial infarction, a 2-fold increased risk of sudden death and a 50% increase in cardiovascular mortality rates. However, outcomes after percutaneous coronary intervention (PCI) in RA patients have not been well characterized and there is little data on the possible effects of disease-modifying therapy for RA on the risk of restenosis after PCI. In a single-center retrospective cohort study, Sintek and colleagues (53) (see page 363) compared the primary endpoint of repeat target vessel revascularization (TVR) in 143 RA patients matched to 541 other patients.

Pathophysiological targets of the differing imaging modalities demonstrate targets for tracers/contrast agents/pharmacotherapy used in SPECT, PET, MRI and echocardiography to assess myocardial viability. (Not shown. Adapted from Schuster et al., J Am Coll Cardiol 2012; 59:359–70.)

Ischemic cardiomyopathy implies significant left ventricular systolic dysfunction with an underlying pathophysiology that includes myocardial scarring, hibernation and stunning, or a combination of these disease states. The role of imaging in assessment of myocardial viability is emphasized (not shown) (54) with brief summaries of the role of echocardiography, single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI). The effects of revascularization in patients with ischemic cardiomyopathy remain controversial. Instead, the key elements of evidence based therapy for ischemic cardiomyopathy are standard medical therapy for heart failure combined with implantable cardiac defibrillation (ICD) and/or biventricular pacing device therapy in appropriate patients.

The relationship between the heart and the kidney in hypertension and heart failure

Hypertension is undoubtedly a factor in the treatment of chronic kidney disease because of the relationship between kidney function and BP components that has been studied in people with CKD, diabetes, and hypertension. Cystatin C and 24-h creatinine clearance (CrCl) were used to evaluate the association between kidney function and both SBP and DBP among 906 participants in the Heart and Soul Study (56). The study investigators hypothesized that although both creatinine and cystatin C are freely filtered at the glomerulus, a major difference between them is that creatinine is secreted by renal tubules, whereas cystatin C is metabolized by the proximal tubule and only a small fraction appears in the urine. In addition, cystatin C has been shown to be a stronger predictor of adverse outcomes than serum creatinine. Based on the more linear relationship of cystatin C with GFR, they hypothesized that cystatin C would have a stronger association with SBP than conventional measures of kidney function. Their results showed that SBP was linearly associated with cystatin C concentrations (1.19 ± 0.55 mm Hg increase per 0.4 mg/L cystatin C, P = .03) across the range of kidney function, but with CrCl only in subjects with CrCl <60 mL/min (6.4 ± 2.13 mm Hg increase per 28 mL/min, P = .003), not >60 mL/min. Further, DBP was not associated with cystatin C or CrCl. However, PP was linearly associated with both cystatin C (1.28 ± 0.55 mm Hg per 0.4 mg/L cystatin, P = .02) and CrCl <60 mL/min (7.27 ± 2.16 mm Hg per 28 mL/min, P = .001). The relationship between SBP and cystatin C by decile is shown in Figure 7 and Table 3.

Figure 7.

Mean systolic blood pressure (SBP) and diastolic blood pressure (DBP) by decile of kidney function measured as cystatin C. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2771570/bin/nihms-153474-f0001.jpg

 

 

Table 3

Linear regression of systolic blood pressure by kidney function (N = 906)

                                               Age-adjusted            Multivariable adjusted*
Measure                                  N     β coefficient     P     β coefficient     P
Cystatin C (per 0.4 mg/L [SD] increase)
    Overall                                    1.75 ± 0.72     .01     1.19 ± 0.55     .03
    >1.0 mg/L                          551     2.23 ± 0.07     .03     1.23 ± 0.03     .04
    <1.0 mg/L                          355     1.59 ± 0.04     .71     0.54 ± 0.01     .87
    Spline P value for difference in slopes                                            .85
24-h CrCl (per 28 mL/min [SD] decrease)
    Overall                                    1.96 ± 0.76     .01     0.91 ± 0.61     .14
    <60 mL/min                         222    11.20 ± 2.74   <.001     6.40 ± 2.13     .003
    >60 mL/min                         684     0.31 ± 0.99     .42     0.36 ± 0.77     .64
    Spline P value for difference in slopes                                            .01

The results for both cystatin C and eGFR are in agreement with incidence rates for heart failure (57) categorized by ejection fraction (EF) and kidney function over 1992−2000 in the Cardiovascular Health Study. Estimated glomerular filtration rate (mL/min per 1.73 m2) is labeled as “eGFR”. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2258307/bin/nihms-39968-f0002.jpg).

The association of cystatin C with risk for SHF appeared linear across quartiles of cystatin C (57) and slightly stronger at the highest categories of cystatin C, whereas the lower three quartiles of cystatin C had similar risks for DHF. Participants with an estimated GFR ≥ 60 mL/min per 1.73 m2 had an equal likelihood of developing DHF or SHF, whereas participants with an estimated GFR < 60 mL/min per 1.73 m2 had a greater likelihood of developing SHF.

When an interaction term for HF type (SHF or DHF) was inserted into a fully adjusted standard Cox proportional hazards model with HF with either type of EF as the outcome, the association of continuous cystatin C with SHF was significantly greater than the association of cystatin C with DHF ( P value for interaction < 0.001). The association of estimated GFR and SHF compared with DHF was weaker (P value for interaction = 0.06 for the fully adjusted model).

Ascending quartiles of cystatin C were associated with increasing adjusted risk for the development of “unclassified” HF, defined by the absence of a point-of-care EF measurement. The magnitude of the fully adjusted hazard ratios for the association between cystatin C and risk of unclassified HF was intermediate between those described for DHF and SHF [hazard ratios (95% confidence intervals) for each higher quartile of cystatin C 1.00 (reference), 1.12 (0.80−1.57), 1.84 (1.34−2.51), 2.18 (1.58−3.00)]. The authors state that increased left atrial filling pressures trigger the release of atrial natriuretic peptide and inhibition of vasopressin, which leads to decreased renal sympathetic tone and diuresis early in the pathogenesis of HF (57). They suggest that even relatively small decrements in kidney function contribute to the risk of SHF.

Aldosterone plays a key role in homeostatic control and maintenance of blood pressure (BP) by regulation of extracellular volume, vascular tone, and cardiac output. Taking this assumption further, a study unrelated to that above explored the magnitude of the effect of relative aldosterone excess in predicting peripheral as well as aortic blood pressure in a cohort of patients undergoing coronary angiography (58). They found that mean peripheral systolic blood pressure (SBP) and diastolic blood pressure (DBP) of the entire cohort were 141 ± 24 mm Hg and 81 ± 11 mm Hg, respectively. Median peripheral SBP and aortic SBP increased steadily and significantly with increasing aldosterone/renin ratio (ARR; p < 0.0001 for both) after multivariate adjustment for parameters potentially influencing BP. ARR emerged as the second most significant independent predictor (after age) of mean SBP and as the most important predictor of mean DBP in this patient cohort. The authors stress the importance of the ARR in modulating BP over a much wider range than is currently appreciated, as it was already known that the ARR was positively associated with pulse wave velocity in young normotensive healthy adults, indicating that relative aldosterone excess might affect arterial remodeling and precede BP rise as a result of increased vascular stiffness. In this study the ARR was calculated as the PAC/PRC ratio (pg/ml/pg/ml). An ARR >50 had a sensitivity and specificity of 89% and 96%, respectively, for primary aldosteronism. The ARR was modeled as a continuous ratio (with log-transformed values). The study carried out a multivariate stepwise regression analysis for predictors of BP (not shown).
They illustrate (not shown) that marked increases in PRC are a major characteristic of lower ARR categories, and that  across a broad range of ARR values, inappropriately elevated aldosterone levels exert a strong effect on BP values and constitute the most important and second-most important predictor of DBP and SBP, respectively.
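The ARR arithmetic described above (PAC/PRC in pg/mL, log-transformed for modeling, with >50 as the screening cutoff) can be sketched as follows; the function names and sample values are illustrative, not from the study.

```python
import math

def aldosterone_renin_ratio(pac_pg_ml, prc_pg_ml):
    """ARR as the PAC/PRC ratio (pg/mL per pg/mL), as used in the study."""
    return pac_pg_ml / prc_pg_ml

def log_arr(arr):
    """The study modeled ARR as a continuous, log-transformed variable."""
    return math.log10(arr)

def screens_positive(arr, cutoff=50.0):
    """ARR > 50 flagged primary aldosteronism with 89% sensitivity and
    96% specificity in this cohort."""
    return arr > cutoff

# Illustrative values: PAC 600 pg/mL, PRC 10 pg/mL
arr = aldosterone_renin_ratio(600.0, 10.0)
```

Because the ratio is highly skewed (a suppressed renin drives ARR up sharply), the log transform is what makes it usable as a continuous regression covariate.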

Cystatin C may be ordered when a health practitioner is not satisfied with the results of other tests, such as a creatinine or creatinine clearance, or wants to check for early kidney dysfunction, particularly in the elderly, and/or wants to monitor known impairment over time. In diverse populations it has been found to improve the estimate of GFR when combined in an equation with blood creatinine. A high level in the blood corresponds to a decreased glomerular filtration rate (GFR) and hence to kidney dysfunction. Since cystatin C is produced throughout the body at a constant rate and removed and broken down by the kidneys, it should remain at a steady level in the blood if the kidneys are working efficiently and the GFR is normal.

Chronic kidney disease (CKD) is defined as the presence of: persistent and usually progressive reduction in GFR (GFR <60 mL/min/1.73 m2) and/or albuminuria (>30 mg of urinary albumin per gram of urinary creatinine), regardless of GFR. Cystatin C is an index of GFR, especially in patients where serum creatinine may be misleading (eg, very obese, elderly, or malnourished patients); for such patients, use of CKD-EPI cystatin C equation is recommended to estimate GFR. Cystatin C eGFR may have advantages over creatinine eGFR in certain patient groups in whom muscle mass is abnormally high or low (for example quadriplegics, very elderly, or malnourished individuals). Blood levels of cystatin C also equilibrate more quickly than creatinine, and therefore, serum cystatin C may be more accurate than serum creatinine when kidney function is rapidly changing (59) (for example amongst hospitalized individuals).

It is a low molecular weight (13,250 Da) cysteine proteinase inhibitor that is produced by all nucleated cells and found in body fluids, including serum. Since it is formed at a constant rate and freely filtered by the kidneys, its serum concentration is inversely correlated with the glomerular filtration rate (GFR); that is, high values indicate low GFRs while lower values indicate higher GFRs, similar to creatinine. While both cystatin C and creatinine are freely filtered by glomeruli, cystatin C is reabsorbed and metabolized by proximal renal tubules. Thus, under normal conditions, cystatin C does not enter the final excreted urine to any significant degree, and the serum concentration is unaffected by infections, inflammatory or neoplastic states, or by body mass, diet, or drugs. GFR can be estimated (eGFR) from serum cystatin C utilizing an equation which includes the age and gender of the patient (CKD-EPI cystatin C equation, developed by Inker et al. (59)). It demonstrated good correlation with measured iothalamate clearance in patients with all common causes of kidney disease, including kidney transplant recipients.

According to the National Kidney Foundation Kidney Disease Outcomes Quality Initiative (K/DOQI) classification, among patients with CKD, irrespective of diagnosis, the stage of disease should be assigned based on the level of kidney function:

Table 4

Stage Description GFR mL/min/BSA
1 Kidney damage with normal or  increased GFR 90
2 Kidney damage with mild decrease in  GFR 60-89
3 Moderate decrease in GFR 30-59
4 Severe decrease in GFR 15-29
5 Kidney failure <15 (or dialysis)

(http://www2.kidney.org/professionals/kdoqi/guidelines_ckd/p4_class_g1.htm)
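As a minimal sketch, the K/DOQI staging rule in Table 4 maps directly to a small lookup function. The `dialysis` flag is an assumption added here to capture the "(or dialysis)" qualifier for stage 5, and the function does not check the evidence of kidney damage that stages 1–2 additionally require.

```python
def kdoqi_stage(gfr, dialysis=False):
    """Map GFR (mL/min/1.73 m2) to K/DOQI CKD stage per Table 4.
    Stages 1-2 also require evidence of kidney damage (not checked here)."""
    if dialysis or gfr < 15:
        return 5
    if gfr < 30:
        return 4
    if gfr < 60:
        return 3
    if gfr < 90:
        return 2
    return 1
```

For example, a GFR of 45 mL/min/1.73 m2 falls in stage 3, the threshold below which the Heart and Soul analysis found SBP linearly associated with creatinine clearance.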

In a study to evaluate cystatin C as a measure of renal function in comparison to serum creatinine, 500 patients had cystatin C measured by nephelometry and glomerular filtration rate (GFR) measured by nonradiolabeled iothalamate clearance (59). In addition, serum creatinine was measured and the patients’ medical records reviewed. The correlation of 1/cystatin C with GFR (r=0.90) was significantly superior to that of 1/creatinine (r=0.82, p<0.05). The superior correlation of 1/cystatin C with GFR was observed in the various clinical subgroups of patients studied (ie, subjects with no suspected renal disease, renal transplant patients, recipients of some other transplant, patients with glomerular disease, and patients with non-glomerular renal disease). The findings indicated that cystatin C may be superior to serum creatinine for the assessment of GFR in a wide spectrum of patients (59). Others have similarly found that cystatin C correlates better than serum creatinine for assessment of GFR (60).

Patients were screened for 3 chronic kidney disease (CKD) studies in the United States (n = 2,980) and a clinical population in Paris, France (n = 438) (61). GFR was measured by using urinary clearance of iodine-125 iothalamate in the US studies and chromium-51 EDTA in the Paris study. GFR was calculated using the 4 new equations, based on serum cystatin C alone or on serum cystatin C, serum creatinine, or both together with age, sex, and race. New equations were developed by using linear regression with log GFR as the outcome in two thirds of data from US studies. Internal validation was performed in the remaining one third of data from US CKD studies; external validation was performed in the Paris study.

Mean mGFR, serum creatinine, and serum cystatin C values were 48 mL/min/1.73 m2 (5th to 95th percentile, 15 to 95), 2.1 mg/dL, and 1.8 mg/L, respectively. For the new equations, coefficients for age, sex, and race were significant in the equation with serum cystatin C, but 2- to 4-fold smaller than in the equation with serum creatinine (62, 63). Measures of performance in new equations were consistent across the development and internal and external validation data sets. Percentages of estimated GFR within 30% of mGFR for equations based on serum cystatin C alone, serum cystatin C, serum creatinine, or both levels with age, sex, and race were 81%, 83%, 85%, and 89%, respectively. The equation using serum cystatin C level alone yields estimates with small biases in age, sex, and race subgroups, which are improved in equations including these variables. It is concluded that serum cystatin C level alone provides GFR estimates not linked to muscle mass, and that an equation including serum cystatin C level in combination with serum creatinine level, age, sex, and race provides the most accurate estimates.
The authors report that absence of urinary excretion has made it difficult to rigorously evaluate cystatin C as a filtration marker and to examine its non-GFR determinants. They also point out that there is a high level of variation in the cystatin C assay (64, 65), and that standardization and calibration of clinical laboratories will be important to obtain accurate GFR estimation using cystatin C, as has been shown for creatinine.
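The accuracy metric quoted in these validation studies, the percentage of estimates falling within 30% of measured GFR (often called P30), is straightforward to compute; the values below are illustrative, not study data.

```python
def p30(measured, estimated):
    """Percentage of GFR estimates within +/-30% of measured GFR
    (the 'P30' accuracy metric used in eGFR validation studies)."""
    hits = sum(abs(e - m) <= 0.30 * m for m, e in zip(measured, estimated))
    return 100.0 * hits / len(measured)

# Illustrative: four measured GFRs, four paired estimates
accuracy = p30([100.0, 100.0, 100.0, 100.0],
               [75.0, 131.0, 100.0, 129.0])
```

An equation reaching 89% P30, as reported for the combined cystatin C and creatinine model, means roughly nine of ten estimates land within the ±30% band around measured GFR.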

The study reported above was followed by a major study by Inker LA, et al. (59). Their findings are summarized as follows. Mean measured GFRs were 68 and 70 ml per minute per 1.73 m2 of body-surface area in the development and validation data sets, respectively. In the validation data set, the creatinine–cystatin C equation performed better than equations that used creatinine or cystatin C alone. Bias was similar among the three equations, with a median difference between measured and estimated GFR of 3.9 ml per minute per 1.73 m2 with the combined equation, as compared with 3.7 and 3.4 ml per minute per 1.73 m2 with the creatinine equation and the cystatin C equation (P=0.07 and P=0.05), respectively. Precision was improved with the combined equation (interquartile range of the difference, 13.4 vs. 15.4 and 16.4 ml per minute per 1.73 m2, respectively [P=0.001 and P<0.001]), and the results were more accurate (percentage of estimates that differed by more than 30% from measured GFR, 8.5 vs. 12.8 and 14.1, respectively [P<0.001 for both comparisons]). In participants whose estimated GFR based on creatinine was 45 to 74 ml per minute per 1.73 m2, the combined equation improved the classification of measured GFR as either less than 60 ml per minute per 1.73 m2 or greater than or equal to 60 ml per minute per 1.73 m2 (net reclassification index, 19.4% [P<0.001]) and correctly reclassified 16.9% of those with an estimated GFR of 45 to 59 ml per minute per 1.73 m2 as having a GFR of 60 ml or higher per minute per 1.73 m2.

Other studies have established the importance of cystatin C levels (66, 67) and the factors influencing cystatin C levels in renal function measurement (68), including an implication that cystatin C, an alternative measure of kidney function, was a stronger predictor of the risk of cardiovascular events and death than either creatinine or the estimated GFR (69). This includes the Dallas Heart Study (30) finding that cystatin C was independently associated with a specific cardiac phenotype of concentric hypertrophy, including increased LV mass, concentricity, and wall thickness, but it was not associated with LV systolic function or volume. This association was particularly robust in hypertensives and blacks. Cystatin C concentrations within stages of CKD are shown in Table 5 (70).

Table 5. Cystatin C level by stage of chronic kidney disease

Stage a | Description | GFR range a (ml/min/1.73 m2) | Cystatin C, native kidney disease b | Cystatin C, transplant recipient c
1 | Normal or increased GFR | ≥90 | <0.80 | <0.87
2 | Mildly decreased GFR | 60 to 89 | 0.80 to 1.09 | 0.87 to 1.23
3 | Moderately decreased GFR | 30 to 59 | 1.10 to 1.86 | 1.24 to 2.24
4 | Severely decreased GFR | 15 to 29 | 1.87 to 3.17 | 2.25 to 4.10
5 | Kidney failure | <15 | >3.17 | >4.10

a GFR estimates and CKD stage will be inaccurate if there is a calibration difference with the Dade-Behring BN II Nephelometer assay used in this study.

b Using the prediction equation: GFR = 66.8 × (cystatin C)^−1.30.

c Using the prediction equation: GFR = 76.6 × (cystatin C)^−1.16.
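The two footnote equations can be applied directly. A minimal sketch (Python; the function name is my own) implementing the prediction equations exactly as given in the table footnotes:

```python
def gfr_from_cystatin_c(cys_c: float, transplant: bool = False) -> float:
    """Estimate GFR (ml/min/1.73 m2) from serum cystatin C using the
    table-footnote equations:
      native kidney disease: GFR = 66.8 * (cystatin C)**-1.30
      transplant recipient:  GFR = 76.6 * (cystatin C)**-1.16
    """
    if cys_c <= 0:
        raise ValueError("cystatin C concentration must be positive")
    if transplant:
        return 76.6 * cys_c ** -1.16
    return 66.8 * cys_c ** -1.30
```

Note that the equations reproduce the table boundaries: a native-kidney cystatin C of 0.80 maps to a GFR of about 89, and 1.10 maps to about 59, matching the stage 2/3 cut points.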

 

Copeptin, a novel marker

Urinary albumin excretion is a powerful predictor of progressive cardiovascular and renal disease. Copeptin is the inactive C-terminal fragment of the vasopressin precursor. It is a reliable marker of vasopressin secretion and serves as a useful substitute for the circulating vasopressin concentration, allowing the indirect measurement of vasopressin in epidemiological studies. Moreover, copeptin has been shown to be a candidate biomarker for pneumonia (32), a predictor of outcome in heart failure, and a powerful predictor of renal disease associated with albumin excretion (71). Figure 8 shows the association between quintiles of copeptin and 24-h urinary albumin excretion and the prevalence of microalbuminuria (71).

 

Figure 8

 

Association between quintiles of copeptin and median 24-h UAE (upper panel) and prevalence of microalbuminuria (lower panel) for males and females. Differences between the quintiles were tested by Kruskal–Wallis test. UAE, urinary albumin excretion.

 

 

Table 6 shows the association between copeptin concentration and urinary albumin excretion (UAE), with both variables log-transformed (71).

 

Model | Corrected for | β | 95% CI for β | P
Males
 1 | − (Crude) | 0.25 | 0.20–0.30 | <0.001
 2 | As 1 + age | 0.21 | 0.16–0.26 | <0.001
 3 | As 2 + MAP, BMI, smoking, glucose, cholesterol, CRP, and eGFR | 0.10 | 0.05–0.16 | <0.001
 4 | As 3 + diuretics and ACEi/ARB | 0.09 | 0.04–0.15 | 0.001
Females
 1 | − (Crude) | 0.19 | 0.15–0.23 | <0.001
 2 | As 1 + age | 0.17 | 0.14–0.22 | <0.001
 3 | As 2 + MAP, BMI, smoking, glucose, cholesterol, CRP, and eGFR | 0.16 | 0.11–0.21 | <0.001
 4 | As 3 + diuretics and ACEi/ARB | 0.17 | 0.12–0.21 | <0.001

ACEi, angiotensin-converting enzyme inhibitor; ARB, angiotensin-II-receptor blocker; BMI, body mass index; CHD, coronary heart disease; CI, confidence interval; CRP, C-reactive protein; eGFR, estimated glomerular filtration rate; MAP, mean arterial pressure.

Log copeptin concentration was entered in the regression analyses as the independent variable and log UAE as the dependent variable. Copeptin was associated with UAE in all age groups, but the association was strongest in older subjects. Twenty-four-hour urinary volume and 24-h urinary osmolarity differed significantly across age groups, with 24-h urinary volume higher and 24-h urinary osmolarity lower in the oldest age group compared with the youngest. In both males and females, a high copeptin concentration (a surrogate for vasopressin) was associated with low 24-h urinary volume and high 24-h urinary osmolarity. Urinary osmolarity was independently associated with UAE, but this association was weaker than that between copeptin and UAE. This might indicate that induction of glomerular hyperfiltration or decreased tubular albumin reabsorption underlies the relationship. In addition, subjects with higher copeptin levels had lower renal function. These investigators concluded that copeptin (a reliable substitute for vasopressin) is associated with UAE and microalbuminuria, consistent with the hypothesis that vasopressin induces UAE (72). Other studies indicate that copeptin levels are increased in patients with pulmonary artery hypertension (73), and that higher serum copeptin levels, a surrogate for arginine vasopressin (AVP) release, are associated not only with systolic and diastolic blood pressure but also with several components of the metabolic syndrome (74), including obesity, elevated triglycerides, albuminuria, and serum uric acid level.

 

 

Natriuretic peptides in the evaluation of heart failure

Brain-type natriuretic peptide (BNP) and N-terminal pro-B-type natriuretic peptide (NT-proBNP), but not yet atrial natriuretic peptide, have gained prominence in the evaluation of patients with CHF, with or without preserved ejection fraction. Richards et al. (75) make the following points.

 

  • Threshold values of B-type natriuretic peptide (BNP) and N-terminal prohormone B-type natriuretic peptide (NT-proBNP) validated for diagnosis of undifferentiated acutely decompensated heart failure (ADHF) remain useful in patients with heart failure with preserved ejection fraction (HFPEF), with minor loss of diagnostic performance.

 

  • BNP and NT-proBNP measured on admission with ADHF are powerfully predictive of in-hospital mortality in both HFPEF and heart failure with reduced EF (HFREF), with similar or greater risk in HFPEF as in HFREF associated with any given level of either peptide.

 

  • In stable treated heart failure, plasma natriuretic peptide concentrations often fall below cut-point values used for the diagnosis of ADHF in the emergency department; in HFPEF, levels average approximately half those in HFREF.

 

  • BNP and NT-proBNP are powerful independent prognostic markers in both chronic HFREF and chronic HFPEF, and the risk of important clinical adverse outcomes for a given peptide level is similar regardless of left ventricular ejection fraction.

 

  • Serial measurement of BNP or NT-proBNP to monitor status and guide treatment in chronic heart failure may be more applicable in HFREF than in HFPEF.

 

In addition, they point out the following:

 

BNP and NT-proBNP fall below ADHF thresholds in stable HFREF in approximately 50% and 20% of cases, respectively. Levels in stable HFPEF are even lower, approximately half those in HFREF.

 

Whereas BNPs have 90% sensitivity for asymptomatic LVEF of less than 40% in the community (a precursor state for HFREF), they offer no clear guide to the presence of early community-based HFPEF.

 

Guidelines recommend BNP and NT-proBNP as adjuncts to the diagnosis of acute and chronic HF and for risk stratification. Refinements for application to HFPEF are needed.

 

The prognostic power of NPs is similar in HFREF and HFPEF. Defined levels of BNP and NT-proBNP correlate with similar short-term and long-term risks of important clinical adverse outcomes in both HFREF and HFPEF.

 

They provide a diagnostic algorithm for suspected heart failure (75)(Figure 9).

 

Figure 9

Diagnostic algorithm for suspected heart failure presenting either acutely or nonacutely

 

 

Diagnostic algorithm for suspected heart failure presenting either acutely or nonacutely. a In the acute setting, mid-regional pro-atrial natriuretic peptide may also be used (cutoff point 120 pmol/L; i.e., <120 pmol/L = heart failure unlikely). b Other causes of elevated natriuretic peptide levels in the acute setting are an acute coronary syndrome, atrial or ventricular arrhythmias, pulmonary embolism, severe chronic obstructive pulmonary disease with elevated right heart pressures, renal failure, and sepsis. Other causes of an elevated natriuretic peptide level in the nonacute setting are old age (>75 years), atrial arrhythmias, left ventricular hypertrophy, chronic obstructive pulmonary disease, and chronic kidney disease. c Exclusion cutoff points for natriuretic peptides are chosen to minimize the false-negative rate while reducing unnecessary referrals for echocardiography. Treatment may reduce natriuretic peptide concentration, and natriuretic peptide concentrations may not be markedly elevated in patients with heart failure with preserved ejection fraction.

 

Patients with acute pulmonary symptoms and with acute myocardial infarction present with dyspnea to the Emergency Department. The evaluation is made particularly difficult in a patient for whom there is no prior history. Maisel et al. (76) presented the utility of midregional proadrenomedullin (MR-proADM) in all patients presenting with acute shortness of breath. They found that MR-proADM was superior to BNP or troponin for predicting 90-day all-cause mortality in patients presenting with acute dyspnea (c index = 0.755, p < 0.0001). Furthermore, MR-proADM added significantly to all clinical variables (adjusted hazard ratio 3.28), and it was also superior to all other biomarkers.

 

There is a large body of recent work that has enlarged our view of hypertension, kidney disease, and cardiovascular disease, including heart failure with (HFpEF) or without preserved ejection fraction. I shall here refer to my review in Leaders in Pharmaceutical Innovation (78). The piece contains a study that I published (79) with collaborators in Brooklyn, Bridgeport and Philadelphia that is no longer available from the publisher.

 

The natriuretic peptides B-type natriuretic peptide (BNP) and NT-proBNP, which have emerged as tools for diagnosing congestive heart failure (CHF), are affected by age and renal insufficiency (RI). NT-proBNP is used for rejecting CHF and as a marker of risk in patients with acute coronary syndromes. This observational study was undertaken to evaluate the reference value for interpreting NT-proBNP concentrations. The hypothesis is that increasing concentrations of NT-proBNP are associated with the effects of multiple co-morbidities, not merely CHF, resulting in altered volume status or myocardial filling pressures.

 

NT-proBNP was measured in a population with normal trans-thoracic echocardiograms (TTE) and free of anemia or renal impairment. Exclusion conditions were the following co-morbidities:

 

 

  • anemia as defined by WHO,
  • atrial fibrillation (AF),
  • elevated troponin T exceeding 0.070 ng/ml,
  • systolic or diastolic blood pressure exceeding 140 and 90 mm Hg, respectively,
  • ejection fraction less than 45%,
  • left ventricular hypertrophy (LVH),
  • left ventricular wall relaxation impairment, and
  • renal insufficiency (RI) defined by creatinine clearance < 60 ml/min using the MDRD formula.
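The MDRD and Cockcroft-Gault formulas referenced in this section are not reproduced in the text; the sketch below uses their commonly published forms (the 4-variable IDMS-traceable MDRD equation and the classic Cockcroft-Gault equation), so the coefficients should be read as assumptions rather than the study's exact implementation:

```python
def mdrd_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """4-variable IDMS-traceable MDRD estimate, ml/min/1.73 m2.
    Weight is not required: the result is normalized to body surface area."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def cockcroft_gault(age: int, weight_kg: float, scr_mg_dl: float,
                    female: bool) -> float:
    """Classic Cockcroft-Gault creatinine clearance estimate, ml/min
    (not BSA-normalized, hence the weight term)."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    if female:
        crcl *= 0.85
    return crcl
```

The contrast between the two (BSA-normalized versus weight-dependent) is the point made later in this piece about Cockcroft-Gault yielding different estimates in older individuals.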

Study participants were seen in acute care for symptoms of shortness of breath suspicious for CHF requiring evaluation with cardiac NTproBNP assay. The median NT-proBNP for patients under 50 years is 60.5 pg/ml with an upper limit of 462 pg/ml, and for patients over 50 years the median was 272.8 pg/ml with an upper limit of 998.2 pg/ml.

We suggested that NT-proBNP levels can be accurately interpreted only after removal of the major co-morbidities that increase this peptide in serum. The PRIDE study guidelines (http://www.pridestudy.org/) should be applied until the presence or absence of comorbidities is diagnosed. With no comorbidities, the reference range for normal over 50 years of age remains steady at ~1000 pg/ml. The effect shown in previous papers is likely due to increasing concurrent comorbidity with age.

We observed the following changes with respect to NTproBNP and age:

(i) Sharp increase in NT-proBNP at over age 50

(ii) Increase in NT-proBNP at 7% per decade over 50

(iii) Decrease in eGFR at 4% per decade over 50

(iv) Slope of NT-proBNP increase with age is related to proportion of patients with eGFR less than 90

(v) NT-proBNP increase can be delayed or accelerated based on disease comorbidities

Figure 10 shows the mean and 95% CI of NT-proBNP (CHF removed) by the National Kidney Foundation staging for eGFR interval (eGFR scale: 0, >120; 1, 90 to 119; 2, 60 to 89; 3, 40 to 59; 4, 15 to 39; 5, under 15 ml/min). We created a new variable to minimize the effects of age and eGFR variability by correcting for these large effects in the whole sample population.

To adjust NT-proBNP for both eGFR and age over 50, we carried out the following normalization:

(i) Take the log of NT-proBNP and multiply by 1000
(ii) Divide the result by eGFR (using MDRD or Cockcroft-Gault)
(iii) Compare results for ages under 50, 50-70, and over 70 years
(iv) Adjust the 50-70 and over-70 groups to the under-50 group by multiplying by 0.66 and 0.56, respectively.
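A minimal Python sketch of the four normalization steps above. The original text does not explicitly state which multiplier applies to which older age band; mapping 0.66 to ages 50-70 and 0.56 to over 70 is an assumption, as is the use of log base 10:

```python
import math

def normalized_nt_probnp(nt_probnp_pg_ml: float, egfr: float, age: int) -> float:
    """Normalize NT-proBNP for eGFR and age:
    (i)  1000 * log10(NT-proBNP)
    (ii) divide by eGFR
    (iv) scale the 50-70 and over-70 age bands back to the under-50
         reference by 0.66 and 0.56, respectively (assumed mapping)."""
    value = 1000.0 * math.log10(nt_probnp_pg_ml) / egfr
    if age < 50:
        return value
    if age <= 70:
        return value * 0.66
    return value * 0.56
```

For example, a patient under 50 with NT-proBNP 1000 pg/ml and eGFR 90 gets a normalized value of 1000 × 3 / 90 ≈ 33.3.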

Figure 10

 

 

NKF staging by eGFR interval and NT-proBNP (CHF removed).

 

 

The equation does not require weight because the results are reported normalized to 1.73 m2 body surface area, which is an accepted average adult surface area.

 

This is illustrated in Figure 11.

Figure 11

 

Plot of 1000*log(NT-proBNP)/eGFR vs. age at eGFR over 90 and 60 ml/min

Figure 12 compares the reference ranges for NT-proBNP before and after adjustment: a) before adjustment; b) after adjustment; c) scatterplot for 1000×log(NT-proBNP) versus 1000×log(NT-proBNP)/eGFR. Superimposed scatterplot and regression line with centroid and confidence interval for 1000×log(NT-proBNP)/eGFR vs. age (anemia removed) at eGFR over 40 and 90 ml/min. (Black: eGFR > 90; blue: eGFR > 40)

 

More recent work is enlightening. Hijazi et al. (80) studied the incremental value of measuring N-terminal pro-B-type natriuretic peptide (NT-proBNP) levels in addition to established risk factors (including the CHA2DS2-VASc score [heart failure, hypertension, age 75 years and older, diabetes, previous stroke or transient ischemic attack, vascular disease, age 65 to 74 years, and sex category]) for the prediction of cardiovascular and bleeding events. They concluded that NT-proBNP levels are often elevated in atrial fibrillation (AF) and are independently associated with an increased risk of stroke and mortality. NT-proBNP improves risk stratification beyond the CHA2DS2-VASc score and might be a novel tool for improved stroke prediction in AF. The efficacy of apixaban compared with warfarin was independent of the NT-proBNP level. Moreover, natriuretic peptides are regulatory hormones associated with cardiac remodeling, namely left ventricular hypertrophy and systolic/diastolic dysfunction. Another study reported that the risk of death of patients with plasma NT-proBNP ≥133 pg/mL (third tertile of the distribution) was 3.3 times that of patients with values ≤50.8 pg/mL (first tertile; hazard ratio: 3.30 [95% CI: 0.90 to 12.29]). This predictive value was independent of, and superior to, that of two ECG indexes of left ventricular hypertrophy, the Sokolow-Lyon index and the amplitude of the R wave in lead aVL, and it persisted in patients without ECG left ventricular hypertrophy (81).
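The CHA2DS2-VASc components enumerated above can be tallied as a simple integer score. A sketch (the function name is my own), using the standard weighting of 2 points for age 75 and older and for prior stroke/TIA, and 1 point for each remaining item:

```python
def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 prior_stroke_tia: bool, vascular_disease: bool,
                 female: bool) -> int:
    """CHA2DS2-VASc stroke-risk score from the components listed in the
    text: heart failure, hypertension, age >=75 (2 points), diabetes,
    prior stroke/TIA (2 points), vascular disease, age 65-74 (1 point),
    and sex category (female)."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score
```

The maximum attainable score is 9, and the point of the Hijazi study is that NT-proBNP adds risk information beyond this tally.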
Many patients presenting with acute dyspnea (including those with ADHF) have multiple coexisting medical disorders that may complicate their diagnosis and management. These patients presenting with acute dyspnea may have longer hospital length of stay and are at high risk for repeat hospitalization or death. In this presentation testing for brain natriuretic peptide (BNP) or NT-proBNP has been shown to be valuable for an accurate and efficient diagnosis and prognostication of HF (82).

 

The biological activity of BNP, the product of an intracellular peptide (proBNP108) that is converted to NT-proBNP, includes stimulation of natriuresis and vasorelaxation; inhibition of renin, aldosterone, and sympathetic nervous activity; inhibition of fibrosis; and improvement in myocardial relaxation.

 

Figure 13

 

Biology of the natriuretic peptide system. BNP indicates brain natriuretic peptide; NT-proBNP, amino-terminal pro-B-type natriuretic peptide; and DPP-IV, dipeptidyl peptidase-4.

The authors remind us that approximately 20% of patients with acute dyspnea have BNP or NT-proBNP levels that are above the cutoff point to exclude HF but too low to definitively identify it (82). Knowledge of the differential diagnosis of non-HF elevation of natriuretic peptides, as well as interpretation of the BNP or NT-proBNP value in the context of a clinical assessment, is essential. Across all stages of HF, elevated BNP or NT-proBNP concentrations are at least comparable prognostic predictors of mortality and cardiovascular events relative to traditional predictors of outcome in this setting, with increasing NP concentrations predicting worse prognosis in a linear fashion. This prognostic value may be used to stratify patients at the highest risk of adverse outcomes, as shown by an age-adjusted Kaplan-Meier survival curve of 1-year mortality associated with an elevated amino-terminal pro-B-type natriuretic peptide (NT-proBNP) concentration at emergency department presentation with dyspnea in those with acutely decompensated heart failure, reproduced from Januzzi et al. (82).

The importance of determining diastolic and systolic function and of measuring pulmonary artery pressure by echocardiography is clear, as NT-proBNP levels may rise with increased pulmonary pressure as well as in conditions that increase cardiac output. Although Hijazi et al. used the Cockcroft-Gault (CG) equation to determine the glomerular filtration rate (GFR), the CG equation may yield a higher eGFR in older individuals (80). In addition, elevated NT-proBNP independently predicts all-cause mortality and morbidity in patients with AF. Respiratory diseases such as chronic obstructive pulmonary disease, pulmonary embolism, and interstitial lung disease are prominent causes of elevated NT-proBNP, as B-type natriuretic peptide levels rise in response to right-heart pressure. The authors conclude that one should keep in mind that NT-proBNP alone may be inadequate.

NT-proBNP level is used for the detection of acute CHF and as a predictor of survival. However, a number of factors, including renal function, may affect NT-proBNP levels. This study aims to provide a more precise way of interpreting NT-proBNP levels based on GFR, independent of age. The study includes 247 patients, in whom CHF and known confounders of elevated NT-proBNP were excluded, to show the relationship of GFR in association with age. The effect of eGFR on the NT-proBNP level was adjusted by dividing 1000 × log(NT-proBNP) by eGFR and then further adjusting for age to determine a normalized NT-proBNP value. The normalized NT-proBNP levels were affected by eGFR independent of patient age. A normalizing function based on eGFR eliminates the need for age-based reference ranges for NT-proBNP (79).

The routine use of natriuretic peptides in severely dyspneic patients has recently been called into question. We hypothesized that the diagnostic utility of amino-terminal pro-brain natriuretic peptide (NT-proBNP) is diminished in a complex elderly population (83).

We studied 502 consecutive patients in whom NT-proBNP values were obtained to evaluate severe dyspnea in the emergency department (84). The diagnostic utility of NT-proBNP for the diagnosis of congestive heart failure (CHF) was assessed using several published guidelines, as well as the manufacturer's suggested age-dependent cut-off points. The area under the receiver operating characteristic curve (AUC) for NT-proBNP was 0.70. Using age-related cut points, the diagnostic accuracy of NT-proBNP for the diagnosis of CHF was below prior reports (70% vs. 83%). Age and estimated creatinine clearance correlated directly with NT-proBNP levels, while hematocrit correlated inversely. Both age > 50 years and, to a lesser extent, hematocrit < 30% affected the diagnostic accuracy of NT-proBNP, while renal function had no effect. In multivariate analysis, a prior history of CHF was the best predictor of current CHF, odds ratio (OR) = 45; CI: 23-88.

The diagnostic accuracy of NT-proBNP for the evaluation of CHF appears less robust in an elderly population with a high prevalence of prior CHF. Age and hematocrit level may adversely affect the diagnostic accuracy of NT-proBNP (85).

Obesity and hypertension.

Obesity is associated with an increased risk of hypertension. In the past 5 years there have been dramatic advances into the genetic and neurobiological mechanisms of obesity with the discovery of leptin and novel neuropeptide pathways regulating appetite and metabolism. In this brief review, we argue that these mounting advances into the neurobiology of obesity have and will continue to provide new insights into the regulation of arterial pressure in obesity. We focus our comments on the sympathetic, vascular, and renal mechanisms of leptin and melanocortin receptor agonists and on the regulation of arterial pressure in rodent models of genetic obesity. Three concepts are proposed (86).

First, the effect of obesity on blood pressure may depend critically on the genetic-neurobiological mechanisms underlying the obesity. Second, obesity is not consistently associated with increased blood pressure, at least in rodent models. Third, the blood pressure response to obesity may be critically influenced by modifying alleles in the genetic background.

Leptin plays an important role in regulation of body weight through regulation of food intake and sympathetically mediated thermogenesis. The hypothalamic melanocortin system, via activation of the melanocortin-4 receptor (MC4-R), decreases appetite and weight, but its effects on sympathetic nerve activity (SNA) are unknown. In addition, it is not known whether sympathoactivation to leptin is mediated by the melanocortin system.

The following study (87) tested the interactions between these systems in the regulation of brown adipose tissue (BAT) and renal and lumbar SNA in anesthetized Sprague-Dawley rats. Intracerebroventricular administration of the MC4-R agonist MT-II (200 to 600 pmol) produced a dose-dependent sympathoexcitation affecting BAT and the renal and lumbar beds. This response was completely blocked by the MC4-R antagonist SHU9119 (30 pmol ICV). Administration of leptin (1000 µg/kg IV) slowly increased BAT SNA (baseline, 41±6 spikes/s; 6 hours, 196±28 spikes/s; P=0.001) and renal SNA (baseline, 116±16 spikes/s; 6 hours, 169±26 spikes/s; P=0.014).

Intracerebroventricular administration of SHU9119 did not inhibit leptin-induced BAT sympathoexcitation (baseline, 35±7 spikes/s; 6 hours, 158±34 spikes/s; P=0.71 versus leptin alone). However, renal sympathoexcitation to leptin was completely blocked by SHU9119 (baseline, 142±17 spikes/s; 6 hours, 146±25 spikes/s; P=0.007 versus leptin alone). The study (87) demonstrates that the hypothalamic melanocortin system can act to increase sympathetic nerve traffic to thermogenic BAT and other tissues. Our data also suggest that leptin increases renal SNA through activation of hypothalamic melanocortin receptors. In contrast, sympathoactivation of thermogenic BAT by leptin appears to be independent of the melanocortin system.

Troponins

The introduction of the first-generation troponins T and I was an important event that led to the declining use of creatine kinase isoenzyme MB (CK-MB), because of CK-MB's short half-life in the circulation and the possibility of missing a late-presenting ACS. That situation would then call for the measurement of lactate dehydrogenase isoenzyme 1 (H-type), which has also declined in use. The troponins T and I are proteins of the muscle contractile element with high specificity for the cardiomyocyte apparatus; they rise rapidly after ACS, with estimated diagnostic cutoffs of 0.08 ng/ml and 1 ng/ml, respectively. The choice of marker was largely dependent on the instrument platform. These biomarkers went through several generations of refinement that improved diagnostic sensitivity to a cutoff at 2 SD of the lower limit of detection, magnifying confusion in interpretation that had always existed. These cardiospecific markers are elevated in patients with hypertension and, specifically, long-term CKD. This was clarified by introducing the terms Type 1 and Type 2 myocardial infarct, designating the classic ACS due to plaque rupture as Type 1. However, the Type 2 class may well be non-homogeneous. In any case, these are the best markers we have for detecting myocardial ischemic damage with biomarker release.

 

Discussion

This discussion has covered, with a broad brush, a large body of research involving hypertension, the kidney, and cardiovascular humoral mechanisms of control. The work that has been done is far more than is cited. There are several biomarkers that we have considered, and they are not only laboratory-based measurements. They are: PWV, cystatin C, eGFR, copeptin, BNP or NT-proBNP, midregional pro-adrenomedullin (MR-proADM), urinary albumin excretion, and the aldosterone/renin ratio.

The preceding discussion reminds us of the story of the blind men palpating an elephant, set in a poem by John Godfrey Saxe. These blind men were asked to tell of their experiences palpating different parts of an elephant, without seeing the entire animal (Figure 1). Each of the blind men was able to palpate one part of the elephant, and thus was able to describe it in terms that were "partly in the right." However, because none of them was able to encompass the entire elephant in their hands, they were also "in the wrong," in that they failed to identify the whole elephant (88).
The blind men and the elephant. Poem by John Godfrey Saxe (Cartoon originally copyrighted by the authors (88); G. Renee Guzlas, artist). http://www.nature.com/ki/journal/v62/n5/thumbs/4493262f1bth.gif

These authors advanced the "elephant" as the increased oxidative burden in the uremic milieu of patients with chronic kidney disease. I introduce the concept here because the question of which biomarkers are diagnostically informative in hypertension and ischemic CVD poses a similar conundrum. In reviewing the full gamut of biomarkers, we have a replay of the Lone Ranger and the silver bullet. The problem is that there is no "silver" bullet. We are accustomed to relying on clinical observations that are themselves weak covariates in actual experience. The studies that have been done to validate the effectiveness of key biomarkers are well designed and show relevance in the populations studied. However, they are insufficient by themselves in the emergent care population.
 

Impediments to a solution to the problem

Tests are ordered by physicians based on the findings in a clinical history and physical examination. Tests that are ordered are reimbursed by insurance carriers, Medicare and Medicaid based on a provisional diagnosis. The provisional diagnosis generates an ICD10 code, which has been most recently revised with a weighted input from the insurers that does not favor considered clinical evidence. Moreover, the provider of care is graded based on the number of patients seen and the tests performed daily over any period. Given this situation, and in addition the requirement to interact with an outmoded information system that is more helpful to the insurer than to the provider, it is not surprising that there is large burnout of the nursing and physician practitioner workforce. If the diagnosis is inconclusive at the time of patient examination, then the work is not reimbursable based on ICD10 coding requirements that are disease-specific. This problem breaks down into a workload and a reimbursement inconsistency, neither of which makes sense in terms of the original studies on Diagnosis Related Groups (89) by Robert Fetter's group at Yale. The problem is made worse by the design and selection of healthcare information systems.

Many have pointed out the flaws in current EHR design that impede the optimum use of data and hinder workflow. Researchers have suggested that EHRs can be part of a learning health system to better capture and use data to improve clinical practice, create new evidence, educate, and support research efforts. The health care system suffers from both inefficient and ineffective use of data. Data are suboptimally displayed to users, undernetworked, underutilized, and wasted. Errors, inefficiencies, and increased costs occur on the basis of unavailable data in a system that does not coordinate the exchange of information or adequately support its use (90). Clinicians' schedules are stretched to the limit, and yet the system in which they work exerts little effort to streamline and support carefully engineered care processes. Information for decision-making is difficult to access in the context of hurried real-time workflows (91).

 

 

The solution to the problem

The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record by services, by diagnostic method, and by date, to cite examples. This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports in a workstation entered by keying to icons. This requires that the medical practitioner find the history, medications, laboratory reports, cardiac imaging and EKGs, and radiology in different workspaces. The introduction of a DASHBOARD has allowed presentation of drug reactions, allergies, primary and secondary diagnoses, and critical information about any patient to the caregiver needing access to the record. The advantage of this innovation is obvious. The startup problem is deciding what information is presented and how it is displayed, which is a source of variability and a key to its success.

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard (92) that provides empirical medical reference and suggests quantitative diagnostic options.

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates the review of a peripheral smear.  While the hemogram has undergone progressive modification of the measured features over time the subsequent expansion of the panel of tests has provided a window into the cellular changes in the production, release or suppression of the formed elements from the blood-forming organ to the circulation.  In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.

How we frame our expectations is so important that it determines the data we collect to examine the process.   In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.   This has meaning for hospital operations, for nonhospital laboratory operations, for companies in the diagnostic business, and for planning of health systems.

In 1983, a vision for creating the EMR was introduced by Lawrence Weed, as expressed by McGowan and Winstead-Fry (93).

The data presented have to be comprehended in context with vital signs, key symptoms, and an accurate medical history. Consequently, the limits of memory and cognition are tested in medical practice on a daily basis. We deal with problems in the interpretation of data presented to the physician, and with how the situation could be improved through better design of the software that presents these data. The computer architecture that the physician uses to view the results is more often than not presented as the designer would prefer, and not as the end-user would like.

Eugene Rypka contributed greatly to clarifying the extraction of features (94) in a series of articles that set the groundwork for the methods used today in clinical microbiology. The method he describes is termed S-clustering, and it has a significant bearing on how we can view hematology data. He describes S-clustering as extracting features from endogenous data that amplify or maximize structural information to create distinctive classes. The method classifies by taking the number of features with sufficient variety to map into a theoretic standard. The mapping is done by a truth table, and each variable is scaled to assign values for each message choice. The number of messages and the number of choices forms an N-by-N table. He points out that the message choice in an antibody titer would be converted from 0 + ++ +++ to 0 1 2 3.
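Rypka's message-choice scaling can be illustrated with the antibody-titer example he gives. A hypothetical sketch (names and structure are my own, not Rypka's notation):

```python
# Map each qualitative antibody-titer reading onto an ordinal scale,
# the first step toward tabulating features in a truth table.
TITER_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}

def encode_features(readings):
    """Convert a list of qualitative titer readings (e.g. '0', '+', '+++')
    to the ordinal values 0..3 used for message-choice tabulation."""
    return [TITER_SCALE[r] for r in readings]
```

Once every feature is encoded this way, a panel of M features with N possible choices each can be laid out as the truth table Rypka describes.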

Bernstein and colleagues published a series of studies using the Kullback-Leibler distance (effective information) for clustering to examine the latent structure of the elements commonly used for diagnosis of myocardial infarction (95-97) (CK-MB, LD, and isoenzyme-1 of LD); of protein-energy malnutrition (serum albumin and serum transthyretin, a condition associated with protein malnutrition (see Jeejeebhoy and the subjective global assessment) and with a prolonged period of no oral intake); for prediction of respiratory distress syndrome of the newborn (RDS); and for prediction of lymph nodal involvement in prostate cancer, among other studies. The exploration of syndromic classification has made a substantial contribution to the diagnostic literature, but it has only been made useful through publication on the web of calculators and nomograms (such as Epocrates and MedCalc) accessible to physicians through an iPhone. These are not an integral part of the EMR, and the applications require an anticipation of the need for such processing.
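A minimal sketch of how the Kullback-Leibler distance quantifies the effective information separating two class-conditional distributions; the distributions below are hypothetical, standing in for a scaled diagnostic marker in diseased versus healthy groups:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler distance D(P || Q) between two discrete
    distributions, in bits; larger values mean the marker carries more
    information for telling the two classes apart."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical class-conditional distributions over marker levels 0..3
diseased = [0.05, 0.15, 0.30, 0.50]
healthy = [0.60, 0.25, 0.10, 0.05]

print(round(kl_divergence(diseased, healthy), 3))
```

A marker whose distribution is identical in both classes yields a distance of zero, the "null set" carrying no information.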

Gil David et al. (90, 92) introduced AUTOMATED processing of the data available to the ordering physician, which can be anticipated to have an enormous impact on the diagnosis and treatment of perhaps half of the top 20 most common causes of hospital admission that carry high cost and morbidity. For example: anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome); pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia; multiple organ failure and hemodynamic shock; electrolyte/acid-base balance disorders; acute and chronic liver disease; acute and chronic renal disease; diabetes mellitus; protein-energy malnutrition; acute respiratory distress of the newborn; acute coronary syndrome; congestive heart failure; disordered bone mineral metabolism; hemostatic disorders; leukemia and lymphoma; malabsorption syndromes; and cancers (breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid). The same approach has also been applied to the problem of hospital malnutrition, but it has not been sufficiently applied to hypertension, cardiovascular disease, acute coronary syndrome, or chronic renal failure.

We have developed (David G, Bernstein L, and Coifman RR) (92) a software system that is the equivalent of an intelligent electronic health records dashboard, providing empirical medical reference and suggesting quantitative diagnostic options. The primary purpose is to gather medical information, generate metrics, analyze them in real time, and provide a differential diagnosis, meeting the highest standard of accuracy. The system builds a unique characterization of the patient and provides a list of other patients who share this unique profile, thereby utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community. The main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies in which anomalous subprofiles are extracted and compared to potentially relevant cases. As the model grows and its knowledge database is extended, the diagnostics and prognostics become more accurate and precise. We anticipate that implementing this diagnostic amplifier would result in higher physician productivity at a time of great human-resource limitations, safer prescribing practices, rapid identification of unusual patients, better assignment of patients to observation, inpatient beds, intensive care, or referral to clinic, and shortened lengths of ICU and inpatient stays.

The main benefit is a real-time assessment, as well as diagnostic options based on comparable cases and flags for risk and potential problems, as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.

Method for data organization and classification via characterization metrics.

The database is organized to enable linking a given profile to known profiles. This is achieved by associating a patient with a peer group of patients having an overall similar profile, where the similar profile is obtained through a randomized search for an appropriate weighting of variables. Given the selection of a patient's peer group, we build a metric that measures the dissimilarity of the patient from its group. This is achieved through local iterated statistical analysis in the peer group.

This characteristic metric is used to locate other patients with similar unique profiles, for each of whom we repeat the procedure described above. This leads to a network of patients with a similar risk condition. Then the classification of the patient is inferred from the known medical condition of some of the patients in the linked network.
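The peer-group construction and dissimilarity metric described above can be sketched as follows; the cohort data, the randomized weighting search, and the mean absolute z-score summary are illustrative assumptions, not the published implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: rows = patients, columns = standardized lab variables
cohort = rng.normal(size=(200, 5))
patient = cohort[0]

def peer_group(patient, cohort, n_trials=50, k=20):
    """Randomized search for a variable weighting whose weighted distance
    yields the tightest peer group of k nearest patients (a sketch of the
    'randomized search for an appropriate weighting of variables')."""
    best = None
    for _ in range(n_trials):
        w = rng.random(cohort.shape[1])
        d = np.sqrt(((cohort - patient) ** 2 * w).sum(axis=1))
        idx = np.argsort(d)[1:k + 1]  # k nearest, excluding the patient itself
        spread = d[idx].mean()
        if best is None or spread < best[0]:
            best = (spread, idx)
    return best[1]

def dissimilarity(patient, peers):
    """Local statistical analysis in the peer group: mean absolute z-score
    of the patient against the peers' per-variable distribution."""
    mu, sd = peers.mean(axis=0), peers.std(axis=0) + 1e-9
    return np.abs((patient - mu) / sd).mean()

peers = cohort[peer_group(patient, cohort)]
print(round(float(dissimilarity(patient, peers)), 3))
```

A patient who sits squarely inside the peer group scores low; an anomalous subprofile pushes the score up, flagging the case for closer review.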

How do we organize the data and linkages provided in the first place?

Predictors: PWV, cystatin C, creatinine, urea, eGFR, copeptin, BNP or NT-proBNP, TnI or TnT, midregional prohormone adrenomedullin (MR-ADM), urinary albumin excretion, the aldosterone/renin ratio, homocysteine, transthyretin, glucose, albumin, chol/LDL, LD, Na+, K+, Cl, HCO3, pH.

Conditions: AMI, CRF, ARF, hypertension, HFpEF, HFrEF, ADHF, obesity, PHT, RVHF, pulmonary edema, PEM

Other variables: sex (M,F), age, BMI. …

Conditioning data: take log transform for large ascending values, OR take deciles of variables, if necessary.  This could apply to NT-proBNP, BNP, TnI, TnT, CK and LD.
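A brief illustration of this conditioning step, using hypothetical NT-proBNP values; both the log transform and a quantile-bin (decile-style) recoding are shown:

```python
import numpy as np

# Hypothetical, right-skewed NT-proBNP values (pg/mL)
ntprobnp = np.array([35.0, 120.0, 450.0, 1800.0, 9000.0, 25000.0])

# Option 1: log transform for large ascending values
log_values = np.log10(ntprobnp)

# Option 2: quantile-bin conditioning (deciles when n_bins=10)
def to_quantile_bins(x, n_bins=10):
    """Assign each value to a quantile bin 0..n_bins-1."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, x, side="right")

print(log_values.round(2))
print(to_quantile_bins(ntprobnp))
```

Either transform tames the several-orders-of-magnitude spread so that markers such as NT-proBNP, BNP, TnI, TnT, CK, and LD can sit alongside roughly Gaussian variables in one table.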

Arrange predictor variables in columns and the patient sequence in rows. This is a two-dimensional table. The problem is to assign diagnoses to each patient in sequence; there can be more than one diagnosis per patient.

In reality the patient sequence or identifier is not relevant; only the condition assignment is. The condition assignments are made in a column adjacent to the patient, and they fall into rows. The construct appears to be a 2×2 table, but it is actually an n-dimensional matrix: each patient position carries one or more diagnoses.
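A minimal sketch of such a table; the predictor values and condition assignments below are hypothetical, shown only to illustrate the multi-valued condition column:

```python
# Predictors in columns, patients in rows (all values hypothetical)
columns = ["creatinine", "eGFR", "NT-proBNP"]
rows = [
    [3.1, 22, 6400.0],   # patient 0
    [0.9, 95, 60.0],     # patient 1
]

# One or more condition assignments per patient row
conditions = [
    {"CRF", "ADHF"},   # patient 0: more than one diagnosis
    set(),             # patient 1: no assignment
]

def row_as_dict(i):
    """View one patient row with named predictors."""
    return dict(zip(columns, rows[i]))

def patients_with(condition):
    """Return row indices of all patients assigned a given condition."""
    return [i for i, c in enumerate(conditions) if condition in c]

print(patients_with("CRF"))
```

Because each row can carry several conditions at once, the table behaves like an n-dimensional matrix rather than a simple 2×2 layout.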

Multivariate statistical analysis is used to extend this analysis to two or more predictors. In this case a multiple linear regression or a linear discriminant function would be used to predict a dependent variable from two or more independent variables. If there is linear association, dependency of the variables is assumed, and the test of hypotheses requires that the predictors are normally distributed. A method using a log-linear model, called ordinal regression, circumvents this distributional dependency. There is also the related analysis of variance, a method for examining differences between the means of two or more groups. Then there is linear discriminant analysis, a method by which we examine the linear separation between groups rather than the linear association between groups. Finally, the neural network is a nonlinear, nonparametric model for classifying data with several variables into distinct classes; in this case we might imagine a curved line drawn around the groups to divide the classes. The focus of this discussion will be the use of linear regression, and we will explore other methods for classification purposes (98).
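As one concrete illustration of the discriminant approach, a two-class Fisher linear discriminant can be sketched on synthetic data; the class means and the two-marker setup are assumptions for illustration, not any of the published models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class data: two markers in, e.g., AMI vs non-AMI patients
class0 = rng.normal([0, 0], 1.0, size=(100, 2))
class1 = rng.normal([3, 2], 1.0, size=(100, 2))

def fisher_lda(a, b):
    """Fisher's linear discriminant: direction w maximizing between-class
    separation relative to within-class scatter, plus a midpoint threshold."""
    sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)
    w = np.linalg.solve(sw, a.mean(axis=0) - b.mean(axis=0))
    threshold = w @ (a.mean(axis=0) + b.mean(axis=0)) / 2
    return w, threshold

w, t = fisher_lda(class0, class1)

# Classify by projecting onto w and comparing to the midpoint threshold
pred0 = class0 @ w > t   # True -> assigned to class 0
pred1 = class1 @ w > t
accuracy = (pred0.mean() + (1 - pred1.mean())) / 2
print(round(float(accuracy), 2))
```

The discriminant seeks the line that best separates the groups, in contrast to regression, which models the association between predictors and outcome.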

The real issue is how a combination of variables falls into a table with meaningful information. We are concerned with accurate assignment into uniquely variable groups by the information in test relationships. One determines the effectiveness of each variable by its contribution to information gain in the system. The reference or null set is the class having no information, and uncertainty in assigning to a classification is relieved only by providing sufficient information. The possibility of realizing a good model for approximating the effects of factors, supported by data used for inference, owes much to the discovery of the Kullback-Leibler distance, or "information" (99), and Akaike (100) found a simple relationship between K-L information and Fisher's maximized log-likelihood function. A solid foundation for this work was elaborated by Eugene Rypka (101). Of course, this was made far less complicated by the genetic complement that defines function, which made the study of biochemical pathways more accessible. In addition, the genetic relationships in plant genetics were accessible to Ronald Fisher for the application of the linear discriminant function. In the last 60 years the application of entropy, comparable to the entropy of physics, information, noise, and signal processing, has been fully developed by Shannon, Kullback, and others, and has been integrated with modern statistics as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and unrelated work by Coifman. Dr. Magidson writes about latent class model evolution:

The recent increase in interest in latent class models is due to the development of extended algorithms which allow today’s computers to perform LC analyses on data containing more than just a few variables, and the recent realization that the use of such models can yield powerful improvements over traditional approaches to segmentation, as well as to cluster, factor, regression and other kinds of analysis.

Perhaps the application to medical diagnostics had been slowed by limitations of data capture and computer architecture, as well as by lack of clarity in defining the most distinguishing features needed for diagnostic clarification. As described above, Bernstein and colleagues (102-104) used the Kullback-Leibler distance (effective information) for clustering to examine the latent structure of the elements commonly used for diagnosis of myocardial infarction, protein-energy malnutrition, prediction of respiratory distress syndrome of the newborn (RDS), and prediction of lymph nodal involvement in prostate cancer, among other studies.

Gil David et al. introduced AUTOMATED processing of the data (104) available to the ordering physician, with an anticipated impact on the diagnosis and treatment of perhaps half of the top 20 most common causes of hospital admission, those carrying high cost and morbidity, as enumerated above.

Our database is organized to enable linking a given profile to known profiles (102-104). This is achieved by associating a patient with a peer group of patients having an overall similar profile, where the similar profile is obtained through a randomized search for an appropriate weighting of variables. Given the selection of a patient's peer group, we build a metric that measures the dissimilarity of the patient from its group. This is achieved through local iterated statistical analysis in the peer group.

We then use this characteristic metric to locate other patients with similar unique profiles, for each of whom we repeat the procedure described above. This leads to a network of patients with a similar risk condition. Then the classification of the patient is inferred from the known medical condition of some of the patients in the linked network. Given a set of points (the database) and a newly arrived sample (point), we characterize the behavior of the newly arrived sample according to the database. Then we detect other points in the database that match this unique characterization. This collection of detected points defines the characteristic neighborhood of the newly arrived sample. We use the characteristic neighborhood to classify the newly arrived sample, and this process of differential diagnosis is repeated for every newly arrived point. The medical colossus we have today has become a system out of control, beset by the elephant in the room: an uncharted complexity.
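The characteristic-neighborhood step above can be sketched as a nearest-profile majority vote; the database, the SIRS/no-SIRS labels, and the Euclidean distance choice are illustrative assumptions:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Hypothetical database: labeled patient profiles (rows) over 4 metrics
database = rng.normal(size=(300, 4))
labels = np.where(database[:, 0] + database[:, 1] > 0, "SIRS", "no-SIRS")

def characteristic_neighborhood(sample, db, k=15):
    """Detect the points in the database that best match the newly arrived
    sample; here, its k nearest profiles by Euclidean distance."""
    d = np.linalg.norm(db - sample, axis=1)
    return np.argsort(d)[:k]

def classify(sample, db, lab, k=15):
    """Infer the sample's condition from the known conditions of the
    patients in its characteristic neighborhood (majority vote)."""
    hood = characteristic_neighborhood(sample, db, k)
    return Counter(lab[hood]).most_common(1)[0][0]

new_patient = np.array([1.5, 1.2, 0.0, 0.3])
print(classify(new_patient, database, labels))
```

Each newly arrived point is classified against the accumulated database, so the inference sharpens as the knowledge base grows.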

 

References

  1. http://www.geronet.ucla.edu/research/researchers/385

 

  1. Buffet L, Ricchetti C. Chronic Kidney Disease and Hypertension: A Destructive Combination. http://www.medscape.com/viewarticle/766696

 

  1. Frank O. Zur Dynamik des Herzmuskels. J Biol. 1895;32:370-447. Translation from German: Chapman CP, Wasserman EB. On the dynamics of cardiac muscle. Am Heart J. 1959; 58:282-317.

 

  1. Starling EH. Linacre Lecture on the Law of the Heart. London, England: Longmans; 1918.

 

  1. ftp://www.softchalk.com/Step1softchalksyllabus/starlings%20Law%20.pdf

 

  1. https://youtu.be/5SO58NndlPI

 

  1. Lakatta EG. Starling’s Law of the Heart Is Explained by an Intimate Interaction of Muscle Length and Myofilament Calcium Activation. J Am Coll Cardiol 1987; 10:1157-64.

 

  1. Lakatta EG, DiFrancesco D. What keeps us ticking: a funny current, a calcium clock, or both? J Mol Cell Cardiol. 2009 Aug; 47(2):157-70. http://dx.doi.org:/10.1016/j.yjmcc.2009.03.022

 

  1. Lakatta EG, Maltsev VA, Vinogradova TM. A coupled SYSTEM of intracellular Ca2+ clocks and surface membrane voltage clocks controls the time keeping mechanism of the heart’s pacemaker. Circ Res.2010 Mar 5; 106(4):659-73. http://dx.doi.org:/10.1161/CIRCRESAHA.109.206078.

 

  1. NHLBI, NIH, Health. http://www.nhlbi.nih.gov/health/health-topics/topics/hbp/causes
  1. Clark CE and Powell RJ. The differential blood pressure sign in general practice: prevalence and prognostic value. Family Practice 2002; 19:439–441.
  2. Madhur  MS. Medscape. http://emedicine.medscape.com/article/241381-treatment
  3. Roger VL, Go AS, Lloyd-Jones DM, et al. Heart disease and stroke statistics–2012 update: a report from the American Heart Association. Circulation. 2012 Jan 3. 125(1):e2-e220. [Medline].
  4. Institute for Clinical Systems Improvement (ICSI). Hypertension diagnosis and treatment. Bloomington, Minn: Institute for Clinical Systems Improvement (ICSI); 2010.
  5. Whelton PK, Appel LJ, Sacco RL, Anderson CA, Antman EM, Campbell N, et al. Sodium, Blood Pressure, and Cardiovascular Disease: Further Evidence Supporting the American Heart Association Sodium Reduction Recommendations. Circulation. 2012 Nov 2. [Medline].
  6. O’Riordan M. New European Hypertension Guidelines Released: Goal Is Less Than 140 mm Hg for All. Medscape [serial online]. Available at http://www.medscape.com/viewarticle/806367. Accessed: June 24, 2013.
  7. Chobanian AV, Bakris GL, Black HR, Cushman WC, Green LA, Izzo JL Jr, et al. Seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension. 2003 Dec. 42(6):1206-52. [Medline].
  8. [Guideline] James PA, Oparil S, Carter BL, et al. 2014 Evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2013 Dec 18. [Medline][Full Text].
  9. Laurent S, Boutouyrie P, Asmar R, et al. Aortic stiffness is an independent predictor of all-cause and cardiovascular mortality in hypertensive patients. Hypertension. 2001 May; 37(5):1236-41.
    http://hyper.ahajournals.org/content/37/5/1236.long  http://dx.doi.org:/10.1161/01.HYP.37.5.1236

 

  1. Shahmirzadi D, Li RX, Konofagou EE. Pulse-Wave Propagation in Straight-Geometry Vessels for Stiffness Estimation: Theory, Simulations, Phantoms and In Vitro Findings. J Biomechanical Engineering Oct 26, 2012; 134(114502): 1-7. http://dx.doi.org:/10.1115/1.4007747

 

  1. Najjar SS, Scuteri A, Shetty V, …, Lakatta EG. Pulse wave velocity is an independent predictor of the longitudinal increase in systolic blood pressure and of incident hypertension in the Baltimore Longitudinal Study of Aging. J Am Coll Cardiol.2008 Apr 8; 51(14):1377-83. http://dx.doi.org:/10.1016/j.jacc.2007.10.065

 

  1. Lim HS and Lip GYH. Arterial stiffness: beyond pulse wave velocity and its measurement. Journal of Human Hypertension (2008) 22, 656–658; http://dx.doi.org:/10.1038/jhh.2008.4
  2. Payne RA, Wilkinson IB, Webb DJ. Arterial Stiffness and Hypertension – Emerging Concepts. Hypertension.2010; 55: 9-14.   http://dx.doi.org:/10.1161/HYPERTENSIONAHA.107.090464
  3. http://www.nature.com/jhh/journal/v22/n10/images/jhh200847f1.gif
  4. O’Rourke M. Arterial stiffness, systolic blood pressure, and logical treatment of arterial hypertension. Hypertension. 1990 Apr; 15(4):339-47. http://www.fac.org.ar/scvc/llave/hbp/arnett/arnetti.htm
  5. Arnett DK1, Tyroler HA, …., Howard G, Heiss G. Hypertension and subclinical carotid artery atherosclerosis in blacks and whites. The Atherosclerosis Risk in Communities Study. ARIC Investigators. Arch Intern Med. 1996 Sep 23; 156(17):1983-9. http://archinte.jamanetwork.com/article.aspx?articleid=622453
  6. Cecelja M, Chowienczyk P. Role of arterial stiffness in cardiovascular disease. JRSM Cardiovascular Disease July 2012; 1(4):11 http://cvd.sagepub.com/content/1/4/11.full
  7. Peralta CA, Whooley MA, Ix JH, and Shlipak MG .  Kidney Function and Systolic Blood Pressure New Insights from Cystatin C: Data from the Heart and Soul Study. Am J Hypertens. 2006 Sep; 19(9):939–946.   http://dx.doi.org:/10.1016/j.amjhyper.2006.02.007
  8. Asmar R, Benetos A, Topouchian J, Laurent P, Pannier B, Brisac AM , Target R, Levy BI.
    Assessment of Arterial Distensibility by Automatic Pulse Wave Velocity Measurement: Validation and Clinical Application Studies. Hypertension. 1995; 26: 485-490. http://dx.doi.org:/10.1161/01.HYP.26.3.485
  9. Ioannidis JPA, Tzoulaki I. Minimal and null predictive effects for the most popular blood biomarkers of cardiovascular disease. Circ Res. 2012; 110:658–662.
  1. Giles T. Biomarkers, Cardiovascular Disease, and Hypertension. J Clin Hypertension 2013; 15(1)   http://dx.doi.org:/10.1111/jch.12014

 

  1. Bielecka-Dabrowa A, Gluba-Brzózka A, Michalska-Kasiczak M, Misztal M, Rysz J and Banach M.   The Multi-Biomarker Approach for Heart Failure in Patients with Hypertension. Int. J. Mol. Sci. 2015; 16: 10715-10733;  http://dx.doi.org:/10.3390/ijms160510715

 

  1. Barbaro N, Moreno H. Inflammatory biomarkers associated with arterial stiffness in resistant hypertension.  Inflamm Cell Signal 2014; 1: e330. http://dx.doi.org:/10.14800/ics.330.

 

  1. Chao J, Schmaier A, Chen LM, Yang Z, Chao L. Kallistatin, a novel human tissue kallikrein inhibitor: levels in body fluids, blood cells, and tissues in health and disease. J Lab Clin Med. 1996 Jun; 127(6):612-20. http://www.ncbi.nlm.nih.gov/pubmed/8648266

 

  1. Chao J, Chao L. Functional Analysis of Human Tissue Kallikrein in Transgenic Mouse Models. Hypertension. 1996; 27: 491-494. http://dx.doi.org:/10.1161/01.HYP.27.3.491

 

  1. Chao J, Bledsoe G and Chao L. Kallistatin: A Novel Biomarker for Hypertension, Organ Injury and Cancer. Austin Biomark Diagn. 2015; 2(2): 1019.

 

  1. Touyz RM , Schiffrin EL. Reactive oxygen species in vascular biology: implications in hypertension. Histochem, Cell Biol Oct 2004; 122(4):339-352.  http://link.springer.com/article/10.1007/s00418-004-0696-7

 

  1. Nozik-Grayck E1, Stenmark KR. Role of reactive oxygen species in chronic hypoxia-induced pulmonary hypertension and vascular remodeling. Adv Exp Med Biol. 2007; 618:101-12.
    http://www.ncbi.nlm.nih.gov/pubmed/18269191

 

  1. Paravicin TM and Touyz RM. NADPH Oxidases, Reactive Oxygen Species, and Hypertension. Diabetes Care 2008 Feb; 31(Supplement 2): S170-S180. http://dx.doi.org/10.2337/dc08-s247

 

  1. Schiffrin EL. Endothelin: role in hypertension. Biol Res. 1998; 31(3):199-208. Review.

 

  1. Schiffrin EL. Role of endothelin-1 in hypertension and vascular disease. Am J Hypertens. 2001 Jun;14(6 Pt 2):83S-89S. Review.

 

  1. Schiffrin EL. Endothelin and endothelin antagonists in hypertension. J Hypertens. 1998 Dec; 16(12 Pt 2):1891-5. Review.

 

  1. Warwick G, Thomas PS and Yates DH. Biomarkers in pulmonary hypertension.
    Eur Respir J 2008; 32: 503–512 http://dx.doi.org:/10.1183/09031936.00160307

 

  1. Zhang S, Yang T, Xu X, Wang M, Zhong L, et al. Oxidative stress and nitric oxide signaling related biomarkers in patients with pulmonary hypertension: a case control study. BMC Pulm Med. 2015; 15: 50. http://dx.doi.org:/10.1186/s12890-015-0045-8

 

  1. Kierszniewska-Stępień D, Pietras T, Ciebiada M, Górski P, and Stępień H. Concentration of angiopoietins 1 and 2 and their receptor Tie-2 in peripheral blood in patients with chronic obstructive pulmonary disease. Postepy Dermatol Alergol. 2015 Dec; 32(6): 443–448. http://dx.doi.org:/10.5114/pdia.2014.44008

 

  1. Al-Zamani M, Trammell AW, Safdar Z. Circulating Biomarkers in Pulmonary Arterial Hypertension. Adv Pulm Hypertension 2015; 14(1):21-27.

 

  1. Safdar et al. JACC: Heart Failure 2014; 2(4): 412–21.

 

 

  1. Camozzi M, Zacchigna S, Rusnati M, Coltrini D, et al. Pentraxin 3 Inhibits Fibroblast Growth Factor 2–Dependent Activation of Smooth Muscle Cells In Vitro and Neointima Formation In Vivo. Arterioscler Thromb Vasc Biol. 2005; 25:1837-1842. http://atvb.ahajournals.org/content/25/9/1837.full.pdf

 

  1. Presta M, Camozzi M, Salvatori G, and Rusnati M. Role of the soluble pattern recognition receptor PTX3 in vascular biology. J Cell Mol Med. 2007 Jul; 11(4): 723–738. http://dx.doi.org:/10.1111/j.1582-4934.2007.00061.x

 

  1. Tamura Y, Ono T, Kuwana M, Inoue K, Takei M, Yamamoto T, Kawakami T, et al. Human pentraxin 3 (PTX3) as a novel biomarker for the diagnosis of pulmonary arterial hypertension.
    PLoS One. 2012;7(9):e45834. http://dx.doi.org:/10.1371/journal.pone.0045834.

 

  1. Calvier L, Legchenko E, Grimm L, Sallmon H, Hatch A, Plouffe BD, Schroeder C, et al. Galectin-3 and aldosterone as potential tandem biomarkers in pulmonary arterial hypertension. Heart. 2016 Mar 1; 102(5):390-6. http://dx.doi.org:/10.1136/heartjnl-2015-308365

 

  1. Otto CM. Heart 2016; 102:333–334. Heartbeat: Biomarkers and pulmonary artery hypertension. http://dx.doi.org:/10.1136/heartjnl-2015-309220

 

  1. Sintek MA, Sparrow CT, Mikuls TR, Lindley KJ, Bach RG, et al. Repeat revascularisation outcomes after percutaneous coronary intervention in patients with rheumatoid arthritis. Heart. 2016 Mar 1; 102(5):363-9. http://dx.doi.org:/10.1136/heartjnl-2015-308634.

 

  1. Camici PG, Wijns W, Borgers M, De Silva R, et al. Pathophysiological Mechanisms of Chronic Reversible Left Ventricular Dysfunction due to Coronary Artery Disease (Hibernating Myocardium). Circulation. 1997; 96: 3205-3214. http://dx.doi.org:/10.1161/01.CIR.96.9.3205

 

  1. Rahimtoola SH, Dilsizian V, Kramer CM, Marwick TH, and Vanoverschelde JL. Chronic Ischemic Left Ventricular Dysfunction: From Pathophysiology to Imaging and its Integration into Clinical Practice. JACC Cardiovasc Imaging. 2008 Jul 7; 1(4): 536–555. http://dx.doi.org:/10.1016/j.jcmg.2008.05.009

 

  1. Peralta CA, Whooley MA, Ix JH, Shlipak MG. Kidney function and systolic blood pressure new insights from cystatin C: data from the Heart and Soul Study. Am J Hypertens. 2006 Sep; 19(9):939-46. http://dx.doi.org:/10.1016/j.amjhyper.2006.02.007

 

  1. Moran A, Katz R, Smith NL, Fried LF, Sarnak MJ, …, Shlipak MG. Cystatin C Concentration as a Predictor of Systolic and Diastolic Heart Failure. J Card Fail. 2008 Feb; 14(1): 19–26. http://dx.doi.org:/10.1016/j.cardfail.2007.09.002

 

  1. Tomaschitz A, Maerz W, Pilz S, at al. Aldosterone/Renin Ratio Determines Peripheral and Blood Pressure Values Over a Broad Range. J Am Coll Cardiol 2010-5-11; 55(19):2171-2180.
  2. Inker LA, Schmid CH, Tighiouart H, et al: Estimating glomerular filtration rate from serum creatinine and cystatin C. N Engl J Med 2012 Jul; 367(1):20-29.   http://dx.doi.org:/10.1056/NEJMoa1114248
  3. Buehrig CK, Larson TS, Bergert JH, et al: Cystatin C is superior to serum creatinine for the assessment of renal function. J Am Soc Nephrol 2001; 12:194A
  4. Grubb AO: Cystatin C – properties and use as a diagnostic marker. Adv Clin Chem 2000; 35:63-99
  5. Stevens LA, Coresh J, Schmid CH, Feldman H, et al. Estimating GFR using serum cystatin C alone and in combination with serum creatinine: a pooled analysis of 3,418 individuals with CKD.
    Am J Kidney Dis. 2008 Mar; 51(3):395-406. http://dx.doi.org:/10.1053/j.ajkd.2007.11.018
  6. Levey AS, Coresh J, Greene T, et al. Using standardized serum creatinine values in the modification of diet in renal disease study equation for estimating glomerular filtration rate. Ann Intern Med. 2006;145:247–254.
  7. Stevens LA, Coresh J, Greene T, Levey AS. Assessing kidney function – measured and estimated glomerular filtration rate. N Engl J Med. 2006; 354:2473–2483.
  8. Grubb A, Nyman U, Bjork J, et al. Simple cystatin C-based prediction equations for glomerular filtration rate compared with the Modification of Diet in Renal Disease Prediction Equation for adults and the Schwartz and the Counahan-Barratt Prediction Equations for children. Clin Chem. 2005; 51:1420–1431.
  9. Weir MR.  Improving the Estimating Equation for GFR — A Clinical Perspective. N Engl J Med 2012; 367:75-76.    http://dx.doi.org:/10.1056/NEJMe1204489
  10. Shlipak MG, Sarnak MJ, Katz R, Fried LF, et al. Cystatin C and the Risk of Death and Cardiovascular Events among Elderly Persons. http://journal.9med.net/qikan/article.php?id=216331
  11. Knight EL, Verhave JC, Spiegelman D, et al. Factors influencing serum cystatin C levels other than renal function and the impact on renal function measurement. Kidney International 2004; 65:1416–1421; http://dx.doi.org:/10.1111/j.1523-1755.2004.00517.x
  12. Parag C. Patel, Colby R. Ayers, Sabina A. Murphy, et al. Association of Cystatin C with Left Ventricular Structure and Function (The Dallas Heart Study). Circulation: Heart Failure. 2009; 2: 98-104.  http://dx.doi.org:/10.1161/CIRCHEARTFAILURE.108.807271.
  13. Rule AD, Bergstralh EJ, Slezak JM, Bergert J, Larson TS. Glomerular filtraton rate estimated by cystatin C among different clinical presentations. Kidney Int. 2006; 69:399–405. http://dx.doi.org:/10.1038/sj.ki.5000073
  14. Muller B, Morgenthaler N, Stolz D, et al. Circulating levels of copeptin, a novel biomarker, in lower respiratory tract infections. Eur J Clin Invest 2007;37, 145–152.
  15. Stoiser B, Mortl D, Hulsmann M, et al. Copeptin, a fragment of the vasopressin precursor, as a novel predictor of outcome in heart failure.  Eur J Clin Invest Nov 2006; 36(11):771–778.
    http://dx.doi.org:/10.1111/j.1365-2362.2006.01724.x
  16. Meijer E, Bakker SJL, Helbesma N, et al. Copeptin, a surrogate marker of vasopressin, is associated with microalbuminuria in a large population cohort.  Kidney Intl 2010; 77:29–36.
    http://dx.doi.org:/10.1038/ki.2009.397
  17. Nickel NP, Lichtinghagen R, Golpon H, et al. Circulating levels of copeptin predict outcome in patients with pulmonary arterial hypertension. Respir Res. Nov 19, 2013; 14:130. http://dx.doi.org:/10.1186/1465-9921-14-130
  18. Tenderenda-Banasiuk E,  Wasilewska A, Filonowicz R, et al. Serum copeptin levels in adolescents with primary hypertension. Pediatr Nephrol. 2014; 29(3): 423–429.    doi:  10.1007/s00467-013-2683-5
  19. Richards M, Januzzi JL, and Troughton RW. Natriuretic Peptides in Heart Failure with Preserved Ejection Fraction.  Heart Failure Clin 2014; 10:453–470. http://dx.doi.org/10.1016/j.hfc.2014.04.006
  20. Maisel A, Mueller C, Nowak M and Peacock WF, et al. Midregion Prohormone Adrenomedullin and Prognosis in Patients Presenting with Acute Dyspnea Results from the BACH (Biomarkers in Acute Heart Failure) Trial. J Am Coll Cardiol 2011; 58(10):1057–67.  http://dx.doi.org:/10.1016/j.jacc.2011.06.006.
  21. Bernstein LH. Heart-Lung-Kidney: Essential Ties. Leaders in Pharmaceutical Innovation. http://pharmaceuticalinnovations.com
  22. Bernstein LH, Zions MY, Alam ME, et al.  What is the best approximation of reference normal for NT-proBNP? Clinical levels for enhanced assessment of NT-proBNP (CLEAN). J Med Lab and Diag 04/2011; 2:16-21. http://www.academicjournals.org/jmld
  23. Hijazi  Z., Wallentin  L., Siegbahn  A., et al; N-terminal pro-B-type natriuretic peptide for risk assessment in patients with atrial fibrillation: insights from the ARISTOTLE trial (Apixaban for the Prevention of Stroke in Subjects With Atrial Fibrillation. J Am Coll Cardiol. 2013; 61:2274-2284
  24. Paget V, Legedz L, Gaudebout N, et al. N-Terminal Pro-Brain Natriuretic Peptide A Powerful Predictor of Mortality in Hypertension. Hypertension. 2011; 57:702-709   http://hyper.ahajournals.org/content/57/4/702.full.pdf]
  25. Kim HN and Januzzi JL. Natriuretic Peptide Testing in Heart Failure. Circulation 2011; 123: 2015-2019. http://dx.doi.org:/10.1161/CIRCULATIONAHA.110.979500
  26. Balta S, Demirkol S, Aydogan M, and Celik T. Higher N-Terminal Pro–B-Type Natriuretic Peptide May Be Related to Very Different Conditions.  J Am Coll Cardiol. 2013; 62(17):1634-1635.   http://dx.doi.org:/10.1016/j.jacc.2013.04.093
  27. Bernstein LH1, Zions MY, Haq SA, et al. Effect of renal function loss on NT-proBNP level variations. Clin Biochem. 2009 Jul; 42(10-11): 1091-8. http://dx.doi.org:/10.1016/j.clinbiochem.2009.02.027
  28. Afaq MA, Shoraki A, Oleg I, Bernstein L, and Zarich SW. Validity of Amino Terminal pro-Brain Natriuretic Peptide in a Medically Complex Elderly Population. J Clin Med Res. 2011 Aug; 3(4): 156–163. doi: 10.4021/jocmr606w
  29. Mark AL, Correia M, Morgan DA, et al. New Concepts From the Emerging Biology of Obesity. Hypertension. 1999; 33[part II]:537-541.
  30. Himmelfarb J, Stenvinkel P, Ikizler TA and Hakim RM. The elephant in uremia: Oxidant stress as a unifying concept of cardiovascular disease in uremia. Kidney International (2002) 62, 1524–1538; http://dx.doi.org:/10.1046/j.1523-1755.2002.00600.x  http://www.nature.com/ki/journal/v62/n5/full/4493262a.html
  31. The blind men and the elephant. Poem by John Godfrey Saxe (Cartoon originally copyrighted by the authors; G. Renee Guzlas, artist). http://www.nature.com/ki/journal/v62/n5/thumbs/4493262f1bth.gif
  32. Fetter RB. Diagnosis Related Groups: Understanding Hospital Performance. Interfaces Jan. – Feb., 1991; 21(1), Franz Edelman Award Papers: 6-26
  33. Bernstein LH. Inadequacy of EHRs. Pharmaceutical Intelligence. http://pharmaceuticalintelligence.com/2015/11/05/inadequacy-of-ehrs/
  34. Celi LA,  Marshall JD, Lai Y, Stone DJ. Disrupting Electronic Health Records Systems: The Next Generation.  JMIR  Med Inform 2015 (23.10.15);  3(4) :e34
    http://dx.doi.org:/10.2196/medinform.4192
  35. Realtime Clinical Expert Support. Pharmaceutical Intelligence.  http://pharmaceuticalintelligence.com/2015/05/10/realtime-clinical-expert-support/
  36. McGowan JJ and Winstead-Fry P. Problem Knowledge Couplers: reengineering evidence-based medicine through interdisciplinary development, decision support, and research. Bull Med Libr Assoc. 1999 October;  87(4):462–470.)
  37. Rypka EW and Babb R. Automatic construction and use of an identification scheme. In MEDICAL RESEARCH ENGINEERING Apr 19709; (2):9-19. https://www.researchgate.net/publication/17720773_Automatic_construction_and_use_of_an_identification_scheme
  38. Rudolph, R. A., Bernstein, L. H. and Babb, J. Information induction for predicting acute myocardial infarction. Clinical Chemistry 1988; 34: 2031-2038.
  39. Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.
  40. Bernstein LH, Good IJ, Holtzman, Deaton ML, Babb J. Diagnosis of acute myocardial infarction from two measurements of creatine kinase isoenzyme MB with use of nonparametric probability estimation. Clin Chem 1989; 35(3):444-447.
  41. Bernstein LH. Regression: A richly textured method for comparison and classification of predictor variables. http://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/
  42. Posada D and Buckley TR. Model Selection and Model Averaging in Phylogenetics: Advantages of Akaike Information Criterion and Bayesian Approaches over Likelihood Ratio Tests. Syst. Biol. 200; 53(5):793–808. http://dx.doi.org:/10.1080/10635150490522304
  1. Kullback S. and Leibler R. On Information and Sufficiency. Ann Math Statistics. Mar 1951; 22(1):79-86. http://www.csee.wvu.edu/~xinl/library/papers/math/statistics/Kullback_Leibler_1951.pdf
  2. Bernstein LH, David G, Rucinski J, Coifman RR. Converting Hematology Based Data Into an Inferential Interpretation. In INTECH Open Access Publisher, 2012. https://books.google.com/books/about/Converting_Hematology_Based_Data_Into_an.html
  3. Bernstein LH, David G, Coifman RR. Generating Evidence Based Interpretation of Hematology Screens via Anomaly Characterization. Open Clin Chem J 2011; 4:10-16
  4. Bernstein LH. Automated Inferential Diagnosis of SIRS, sepsis, septic shock. Medical Informatics View. http://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-of-sirs-sepsis-septic-shock/
  5. Bernstein LH, David G, Coifman RR. The Automated Nutritional Assessment. Nutrition  2013; 29: 113-121


Other related articles published in this Open Access Online Scientific Journal include the following: 

Biomarkers and risk factors for cardiovascular events, endothelial dysfunction, and thromboembolic complications

Commentary on Biomarkers for Genetics and Genomics of Cardiovascular Disease: Views by Larry H Bernstein, MD, FCAP

Summary – Volume 4, Part 2: Translational Medicine in Cardiovascular Diseases

Pathophysiological Effects of Diabetes on Ischemic-Cardiovascular Disease and on Chronic Obstructive Pulmonary Disease (COPD)

Assessing Cardiovascular Disease with Biomarkers

Endothelial Function and Cardiovascular Disease

Adenosine Receptor Agonist Increases Plasma Homocysteine

Inadequacy of EHRs

Innervation of Heart and Heart Rate

Biomarker Guided Therapy

Pharmacogenomics

The Union of Biomarkers and Drug Development

Natriuretic Peptides in Evaluating Dyspnea and Congestive Heart Failure

Epilogue: Volume 4 – Translational, Post-Translational and Regenerative Medicine in Cardiology

Atherosclerosis Independence: Genetic Polymorphisms of Ion Channels Role in the Pathogenesis of Coronary Microvascular Dysfunction and Myocardial Ischemia (Coronary Artery Disease (CAD))

Erythropoietin (EPO) and Intravenous Iron (Fe) as Therapeutics for Anemia in Severe and Resistant CHF: The Elevated N-terminal proBNP Biomarker

Biomarkers: Diagnosis and Management, Present and Future

Genetic Analysis of Atrial Fibrillation

Landscape of Cardiac Biomarkers for Improved Clinical Utilization

Fractional Flow Reserve (FFR) & Instantaneous wave-free ratio (iFR): An Evaluation of Catheterization Lab Tools for Ischemic Assessment

Dealing with the Use of the High Sensitivity Troponin (hs cTn) Assays

Cardiotoxicity and Cardiomyopathy Related to Drugs Adverse Effects

Amyloidosis with Cardiomyopathy

Accurate Identification and Treatment of Emergent Cardiac Events

The potential contribution of informatics to healthcare is more than currently estimated

Realtime Clinical Expert Support

Metabolomics is about Metabolic Systems Integration

Automated Inferential Diagnosis of SIRS, sepsis, septic shock


Insight into Blood Brain Barrier

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Gateway to The Brain

This image shows the structural model of the critical transporter, Mfsd2a. Source: Duke-NUS Medical School.  http://www.dddmag.com/sites/dddmag.com/files/rd1604_brain.jpg

Scientists from Duke-NUS Medical School (Duke-NUS) have derived a structural model of a transporter at the blood-brain barrier called Mfsd2a. This is the first molecular model of this critical transporter, and it could prove important for developing therapeutic agents that must cross the blood-brain barrier to reach the brain. In the future, this could help treat neurological disorders such as glioblastoma.

Currently, drug delivery to the brain is limited because the brain is tightly protected by the blood-brain barrier, a protective barrier that separates the circulating blood from the central nervous system and can block the entry of certain toxins and drugs. This restricts the treatment of many brain diseases. However, as a transporter at the blood-brain barrier, Mfsd2a is a potential conduit for drug delivery directly into the brain, bypassing the barrier.

In this study, recently published in the Journal of Biological Chemistry, first author Duke-NUS MD/PhD student Debra Quek and senior author Professor David Silver used molecular modeling and biochemical analyses of altered Mfsd2a transporters to derive a structural model of human Mfsd2a. Importantly, the work identifies new binding features of the transporter, providing insight into the transport mechanism of Mfsd2a.

“Our study provides the first glimpse into what Mfsd2a looks like and how it might transport essential lipids across the blood-brain barrier,” said Ms Quek. “It also facilitates a structure-guided search and design of scaffolds for drug delivery to the brain via Mfsd2a, or of drugs that can be directly transported by Mfsd2a.”

Currently, this information is being used by Duke-NUS researchers to design novel therapeutic agents for direct drug delivery across the blood-brain barrier for the treatment of neurological diseases. This initiative by the Centre for Technology and Development (CTeD) at Duke-NUS is one of many collaborative research efforts aimed at translating Duke-NUS' research findings into tangible commercial and therapeutic applications for patients.

Ms Quek plans to further validate her findings by purifying the Mfsd2a protein in order to further dissect how it functions as a transporter.


J Biol Chem. 2016 Mar 4. pii: jbc.M116.721035. [Epub ahead of print]
Structural insights into the transport mechanism of the human sodium-dependent lysophosphatidylcholine transporter Mfsd2a.

Major Facilitator Superfamily Domain containing 2A (Mfsd2a) was recently characterized as a sodium-dependent lysophosphatidylcholine (LPC) transporter expressed at the blood-brain barrier endothelium. It is the primary route for importation of docosahexaenoic acid and other long-chain fatty acids into foetal and adult brain, and is essential for mouse and human brain growth and function. Remarkably, Mfsd2a is the first identified MFS family member that uniquely transports lipids, implying that Mfsd2a harbours unique structural features and a unique transport mechanism. Here, we present three 3D structural models of human Mfsd2a derived by homology modelling using MelB- and LacY-based crystal structures, and refined by biochemical analysis. All models revealed 12 transmembrane helices and connecting loops, and represented the partially outward-open, outward-partially occluded, and inward-open states of the transport cycle. In addition to a conserved sodium-binding site, three unique structural features were identified: a phosphate headgroup binding site, a hydrophobic cleft to accommodate a hydrophobic hydrocarbon tail, and three sets of ionic locks that stabilize the outward-open conformation. Ligand docking studies and biochemical assays identified Lys436 as a key residue for transport. It is seen forming a salt bridge with the negative charge on the phosphate headgroup. Importantly, Mfsd2a transported structurally related acylcarnitines but not a lysolipid without a negative charge, demonstrating the necessity of a negatively charged headgroup interaction with Lys436 for transport. These findings support a novel transport mechanism by which LPCs are flipped within the transporter cavity by pivoting about Lys436, leading to net transport from the outer to the inner leaflet of the plasma membrane.


Brain and eye contain membrane phospholipids that are enriched in the omega-3 fatty acid docosahexaenoic acid (DHA). It is widely accepted that DHA is important for brain and eye function and brain development (1,2), although mechanisms for DHA function in these tissues are not well defined. The mechanism by which DHA and other conditionally essential and essential fatty acids cross the blood-brain barrier (BBB) has been a long-standing mystery. Recently, we identified Major Facilitator Superfamily Domain containing 2a (Mfsd2a, aka NLS1) as the primary transporter by which the brain obtains DHA. Importantly, Mfsd2a does not transport unesterified DHA, but transports DHA in the chemical form of lysophosphatidylcholine (LPC), which is synthesized by the liver and circulates largely on albumin (3). This is consistent with biochemical evidence that the brain does not transport unesterified fatty acids (4) and that LPC is the preferred carrier of DHA to the brain (5,6).

Mfsd2a is a sodium-dependent transporter that is part of the Major Facilitator Superfamily (MFS) of proteins. Members of this family with elucidated structures have 12 transmembrane domains composed of two evolutionarily duplicated 6-transmembrane units (7). Transporting an LPC is a unique feature of Mfsd2a, since most members of this family transport water-soluble and minimally polar substrates such as sugars (GLUT, MelB, LacY) and amino acids (TAT1). Mfsd2a transport is not limited to LPCs containing DHA: it can transport LPCs containing a variety of fatty acyl chains, with higher specificity for LPCs with unsaturated fatty acyl chains with a minimum chain length of 14 carbons (6,8). Crystal structures have been solved for more than a dozen members of the MFS family, with more than 19 structures, including those of melibiose permease (MelB) of S. typhimurium (9), lactose permease (LacY) of Escherichia coli (10), the glycerol-3-phosphate transporter of E. coli (11), and the mammalian glucose transporters 1, 3, and 5 (GLUT1, GLUT3, GLUT5) (12-14). A common transport mechanism has emerged from both biochemical and structural analyses of MFSs, in which they transport via a rocker-switch, alternating-access mechanism (7,15). In the rocker-switch model, rigid-body relative motion of the N- and C-terminal domains renders the substrate-binding site alternately accessible from either side of the membrane.

Mfsd2a is highly expressed at the blood-brain barrier in both mouse and human (6,16). Mfsd2a-deficient (KO) mice have significantly reduced brain DHA as a result of a 90% reduction in brain uptake of LPC containing DHA, as well as of other LPCs. The most prominent phenotype of Mfsd2a KO mice is microcephaly; KO mice additionally exhibit motor dysfunction and behavioral disorders, including anxiety and memory and learning deficits (6). In line with the mouse KO phenotypes, human patients with partially or completely inactivating mutations in Mfsd2a presented with severe microcephaly, intellectual disability, and motor dysfunction (8,16). Plasma LPCs are significantly elevated in both KO mice and human patients with Mfsd2a mutations, consistent with reduced uptake at the blood-brain barrier. Taken together, these findings demonstrate that LPCs are essential for normal brain development and function in mice and humans.

The fact that Mfsd2a transports a lysolipid, a non-canonical substrate for an MFS protein, might indicate unique structural features and a novel transport mechanism. However, no structural information or mechanism of transport of Mfsd2a is known. Human Mfsd2a is composed of 530 amino acids, with two glycosylation sites at Asn217 and Asn227. Mfsd2a is evolutionarily conserved from teleost fish to humans. Although not a functional ortholog of bacterial MFS transporters, Mfsd2a shares 25% and 26% amino acid sequence identity with S. typhimurium MelB (9,17) and E. coli LacY (10), respectively. Given the high conservation of the MFS fold, homology modeling has proven highly accurate; for S. typhimurium MelB, for example, models were largely consistent with subsequent X-ray crystal data (9,18). Here, we take advantage of two recently derived high-resolution X-ray crystal structures of S. typhimurium MelB (9), and a high-resolution X-ray crystal structure of LacY (10), to generate three predictive structural models of human Mfsd2a. These models reveal three unique regions critical for function: an LPC headgroup binding site, a hydrophobic cleft occupied by the LPC fatty acyl tail, and three sets of ionic locks. These structural features indicate a novel mechanism of transport for LPCs.

Mfsd2a is a sodium-dependent lysophosphatidylcholine transporter essential for human brain growth and function (40). Mfsd2a is the only known MFS member or secondary transporter that transports a lipid. In line with its unique function, the current study has identified three unique structural features based on a combination of homology structural modeling and biochemical analysis: (1) a unique headgroup binding site, (2) a hydrophobic cleft for acyl chain binding, and (3) three sets of ionic locks that stabilize the outward-open conformation. Drawing together these findings with studies of the mechanism of transport of other MFS family members, we propose the following alternating-access mechanism for LPC transport (Fig. 6). In the first steps, LPC inserts itself into the outer leaflet of the membrane and diffuses laterally into the transporter's hydrophobic cleft. As Mfsd2a undergoes conformational changes from the outward-open to the inward-open conformation, the zwitterionic headgroup is inverted from the outer membrane leaflet to the inner membrane leaflet along a translocation pathway within the transporter, interacting with specific polar and charged residues lining the path. Since LPCs are hydrophobic phospholipids, it is unlikely that they will partition out of the transporter into the aqueous environment of the cytoplasm. We propose that the "flipped" LPC exits the transporter laterally into the membrane environment of the inner leaflet. This model of LPC flipping requires further biochemical proof. Of particular interest is the visualization of the interaction of the negatively charged phosphate headgroup of LPC with Lys436, which is maintained in both the outward- and inward-open conformations. The sidechain of Lys436 is seen pointing upward in the outward-open conformation, but pointing downward into the translocation cleft in the inward-open conformation. These findings suggest that Lys436 acts as a tether to push or pivot the headgroup down into the translocation cavity while the N- and C-termini of Mfsd2a rock and switch from outward- to inward-open.

Interestingly, Lys436 is orthologous to residue Lys377 in the melibiose transporter of S. typhimurium. Based on the S. typhimurium MelB crystal structure, Lys377 has been predicted to be involved in binding melibiose and in forming a hydrogen bond with Tyr120, likely separating the sodium binding site from the central hydrophilic cavity (9). In a recent molecular dynamics simulation of E. coli MelB, Lys377 was noted to interact differently with residues involved in the sodium binding site (Asp55, Asp59, and Asp124) in the presence or absence of a sodium ion, and was thought to be critical for the spatial organization of the sodium binding site (41). Similarly, in our refined models of Mfsd2a, Lys436 is localized in close proximity to the sodium-binding-site residue Asp93 and to the central translocation pathway, where docking studies identified it as interacting with the charged headgroup of LPC. We hypothesize that Lys436 may shuttle between the two binding sites, communicating and coordinating the occupancy status of the two sites. Interestingly, there is a distinct mobility shift in Mfsd2a bands on SDS-PAGE between wild-type Mfsd2a and the L-3 mutant (R498E, R499E, R500E, K503E, K504E) (Fig. 5I) that is not seen when each of the residues is mutated individually (Fig. S1). These findings are consistent with a conformational change in the L-3 mutant. Given that the L-3 ionic lock is visualized in the outward partially occluded model, we hypothesize that loss of the L-3 ionic lock results in Mfsd2a being trapped in an energetically more favorable inward-open conformation, resulting in the loss of transport function (Fig. 5H).

Patients with the partially inactivating mutation p.(S399L) exhibited significant increases specifically in plasma LPCs with monounsaturated fatty acyl chains (18:1, +92%, p=0.004) and polyunsaturated fatty acyl chains (18:2, +254%, p=0.002; 20:4, +117%, p=0.007; 20:3, +238%, p=0.002), but not in the most abundant, saturated LPCs (C16:0, C18:0) (8). This is consistent with a greater specificity of Mfsd2a for LPCs with unsaturated fatty acyl chains (6)… A possible explanation for this acyl chain specificity is related to the mobility of the acyl tail in the membrane. It is known that phospholipids with unsaturated fatty acyl chains disrupt the packing of the bilayer, resulting in greater lateral membrane fluidity (42). Therefore, one possible mechanism for LPC specificity is that LPCs with unsaturated fatty acyl chains have greater lateral mobility in the membrane, increasing the Ka for interacting with the transport cleft of Mfsd2a.

Another important structural feature of the physiological ligand, LPC, is that a minimum acyl chain length of 14 carbons is required for transport by Mfsd2a. A possible explanation for this requirement is that the hydrocarbon chain must extend beyond the cleft, protruding into the hydrophobic milieu of the phospholipid bilayer core. This interaction of the fatty acyl tail with the acyl chains of the membrane bilayer may provide a hydrophobic force strong enough to pull the molecule through and out of the transporter as the LPC headgroup partitions into the inner leaflet of the membrane. A similar scenario is seen in the Sec translocon, where a hydrophobic transmembrane domain of a protein partitions laterally from the Sec61p complex channel into the lipid bilayer (43,44). This proposal that the omega carbon of the fatty acyl chain sticks out of the Mfsd2a pocket is consistent with the observation that Mfsd2a can transport nitrobenzoxadiazole (NBD) or Topfluor when these moieties are attached to the omega carbon of the LPC fatty acyl tail [1].

Other known transmembrane phospholipid transporters include flippases, floppases, and scramblases. Flippases and floppases utilize ATP to drive the uphill transport of aminophospholipids from the outer to the inner leaflet, and of specific substrates from the inner to the outer leaflet, respectively (45-47). Scramblases are less well understood, facilitating transport of substrates in either direction down concentration gradients upon activation. While the substrates are similar, several differences make comparisons between Mfsd2a and phospholipid transporters of limited relevance. First, the substrates differ in shape and size: lysophospholipids are smaller and conical, while phospholipids are cylindrical. Second, unlike flippases and floppases, Mfsd2a is a secondary transporter, utilizing a sodium electrochemical gradient to drive the transport of lysophospholipids from one leaflet to the other. Third, the overall structure of MFS members is different from P4-ATPases and ABC transporters. Consequently, the mechanism of action of Mfsd2a is expected to differ from that of flippases such as P4-ATPases and ABC transporters, or of floppases.

Being expressed at the blood-brain barrier, Mfsd2a is a potential conduit for drug delivery to the brain. The blood-brain barrier is highly impermeable, protecting the brain from blood-derived molecules, pathogens, and toxins. However, its impermeability poses a challenge for pharmacological treatment of brain diseases. It has been predicted that 98% of small molecule drugs are excluded from the brain by the blood-brain barrier (48). Currently, most drugs used to treat brain diseases are lipid-soluble small molecules with a molecular weight of less than 400 Da (49). A small number of drugs traverse the blood-brain barrier by carrier-mediated transport. An example of this is levodopa, a treatment for Parkinson's disease, which is a precursor of the neurotransmitter dopamine. Levodopa is transported across the blood-brain barrier by the large neutral amino acid transporter, LAT1 (50). Our findings here provide a further refinement of understanding of the structure-activity relationship of LPCs to their transport, and inform the search for and design of drugs that can be transported by Mfsd2a. Candidates for transport, whether as a drug itself or as an LPC scaffold, must have a zwitterionic headgroup, but not necessarily a phosphate, and a minimal threshold of hydrophobic character. As the binding pocket is several times larger than LPC, it is sterically feasible to attach a small molecule drug onto LPC or LPC-like scaffolds for delivery across the blood-brain barrier.
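The structure-activity rules stated above (a zwitterionic, negatively charged headgroup for the Lys436 interaction, and an acyl chain of at least 14 carbons) can be encoded as a simple screening predicate. The helper below is a hypothetical illustration of those qualitative criteria, not a validated model from the paper:

```python
def plausible_mfsd2a_substrate(zwitterionic_headgroup,
                               negatively_charged_headgroup,
                               acyl_chain_carbons):
    """Screen a candidate against the qualitative rules stated in the text.

    Hypothetical helper: requires a zwitterionic headgroup, a negative
    charge on the headgroup (needed for the Lys436 salt bridge), and an
    acyl chain of at least 14 carbons.
    """
    return bool(zwitterionic_headgroup
                and negatively_charged_headgroup
                and acyl_chain_carbons >= 14)

print(plausible_mfsd2a_substrate(True, True, 18))   # LPC-18:1-like candidate -> True
print(plausible_mfsd2a_substrate(True, True, 12))   # chain too short -> False
```

A lysolipid lacking the negative charge fails the screen, mirroring the finding that such lipids are not transported.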

In summary, these studies represent a first structural model of human Mfsd2a based on homology modeling and biochemical interrogation. We expect that this model will serve as a foundation for the future development of X-ray crystal structures of the protein, which would provide further insight into the structure and function of this physiologically important transporter required for human brain growth and function.

REFERENCES

1. Salem, N., Jr., Litman, B., Kim, H. Y., and Gawrisch, K. (2001) Mechanisms of action of docosahexaenoic acid in the nervous system. Lipids 36, 945-959

2. Bazan, N. G. (2009) Neuroprotectin D1-mediated anti-inflammatory and survival signaling in stroke, retinal degenerations, and Alzheimer’s disease. Journal of lipid research 50 Suppl, S400-405

3. Baisted, D. J., Robinson, B. S., and Vance, D. E. (1988) Albumin stimulates the release of lysophosphatidylcholine from cultured rat hepatocytes. The Biochemical journal 253, 693-701

4. Edmond, J., Higa, T. A., Korsak, R. A., Bergner, E. A., and Lee, W. N. (1998) Fatty acid transport and utilization for the developing brain. Journal of neurochemistry 70, 1227-1234

5. Lagarde, M., Bernoud, N., Brossard, N., Lemaitre-Delaunay, D., Thies, F., Croset, M., and Lecerf, J. (2001) Lysophosphatidylcholine as a preferred carrier form of docosahexaenoic acid to the brain. Journal of molecular neuroscience : MN 16, 201-204; discussion 215-221

6. Nguyen, L. N., Ma, D., Shui, G., Wong, P., Cazenave-Gassiot, A., Zhang, X., Wenk, M. R., Goh, E. L., and Silver, D. L. (2014) Mfsd2a is a transporter for the essential omega-3 fatty acid docosahexaenoic acid. Nature 509, 503-506

7. Law, C. J., Maloney, P. C., and Wang, D. N. (2008) Ins and outs of major facilitator superfamily antiporters. Annual review of microbiology 62, 289-305

8. Alakbarzade, V., Hameed, A., Quek, D. Q. Y., Chioza, B. A., Baple, E. L., Cazenave-Gassiot, A., Nguyen, L. N., Wenk, M. R., Ahmad, A. Q., Sreekantan-Nair, A., Weedon, M. N., Rich, P., Patton, M. A., Warner, T. T., Silver, D. L., and Crosby, A. H. (2015) A partially inactivating mutation in the sodium-dependent lysophosphatidylcholine transporter MFSD2A causes a non-lethal microcephaly syndrome. Nat Genet 47, 814-817

9. Ethayathulla, A. S., Yousef, M. S., Amin, A., Leblanc, G., Kaback, H. R., and Guan, L. (2014) Structure-based mechanism for Na(+)/melibiose symport by MelB. Nature communications 5, 3009

10. Guan, L., Mirza, O., Verner, G., Iwata, S., and Kaback, H. R. (2007) Structural determination of wild-type lactose permease. Proceedings of the National Academy of Sciences of the United States of America 104, 15294-15298

…. more


Chemotherapy Benefit in Early Breast Cancer Patients

Larry H Bernstein, MD, FCAP, Curator

LPBI


Agendia’s MammaPrint® First and Only Genomic Assay to Receive Level 1A Clinical Utility Evidence for Chemotherapy Benefit in Early Breast Cancer Patients

http://www.b3cnewswire.com/201604191373/agendias-mammaprintr-first-and-only-genomic-assay-to-receive-level-1a-clinical-utility-evidence-for-chemotherapy-benefit-in-early-breast-cancer-patients.

  • Clinical high-risk patients with a low-risk MammaPrint® result, including 48 percent node-positive, had a five-year distant metastasis-free survival rate in excess of 94 percent, whether randomized to receive adjuvant chemotherapy or not
  • MammaPrint could change clinical practice by substantially de-escalating the use of adjuvant chemotherapy and sparing many patients an aggressive treatment they will not benefit from
  • Forty-six percent overall reduction in chemotherapy prescription among clinically high-risk patients

April 19, 2016 / B3C newswire / Agendia, Inc., together with the European Organisation for Research and Treatment of Cancer (EORTC) and Breast International Group (BIG), announced results from the initial analysis of the primary objective of the Microarray In Node-negative (and 1 to 3 positive lymph node) Disease may Avoid ChemoTherapy (MINDACT) study at the American Association for Cancer Research Annual Meeting 2016 in New Orleans, LA.

Using the company’s MammaPrint® assay, patients with early-stage breast cancer who were considered at high risk for disease recurrence based on clinical and biological criteria had a distant metastasis-free survival at five years in excess of 94 percent. The MammaPrint test—the first and only genomic assay with FDA 510(k) clearance for use in risk assessment for women of all ages with early stage breast cancer—identified a large group of patients for whom five-year distant metastasis-free survival was equally good whether or not they received adjuvant chemotherapy (chemotherapy given post-surgery).

“The MINDACT trial design is the optimal way to prove clinical utility of a genomic assay,” said Prof. Laura van ’t Veer, CRO at Agendia, Leader, Breast Oncology Program, and Director, Applied Genomics at UCSF Helen Diller Family Comprehensive Cancer Center. “It gives the level 1A clinical evidence (prospective, randomized and controlled) that empowers physicians to clearly and confidently know when chemotherapy is part of optimal early-stage breast cancer therapy.  In this trial, MammaPrint (70-gene assay) was compared to the standard of care physicians use today, to decide what is the best treatment option for an early-stage breast cancer patient.”

The MINDACT trial is the first prospective randomized controlled clinical trial of a breast cancer recurrence genomic assay with level 1A clinical evidence and the first prospective translational research study of this magnitude in breast cancer to report the results of its primary objective.

Among the 3,356 patients enrolled in the MINDACT trial who were categorized as having a high risk of breast cancer recurrence based on common clinical and pathological criteria (C-high), the MammaPrint assay reduced the chemotherapy treatment prescription by 46 percent. Using the 70-gene assay MammaPrint, 48 percent of lymph-node-positive breast cancer patients considered clinically high-risk (Clinical-high) and genomic low-risk (MammaPrint-low) had an excellent distant metastasis-free survival at five years in excess of 94 percent.

“Traditionally, physicians have relied on clinical-pathological factors such as age, tumor size, tumor grade, lymph node involvement, and hormone receptor status to make breast cancer treatment decisions,” said Massimo Cristofanilli, MD, Associate Director of Translational Research and Precision Medicine at the Robert H. Lurie Comprehensive Cancer Center, Northwestern University in Chicago. “These findings provide level 1A clinical utility evidence by demonstrating that the detection of low-risk of distant recurrence reported by the MammaPrint test can be safely used in the management of thousands of women by identifying those who can be spared from a toxic and unnecessary treatment.”

MINDACT is a randomized phase III trial that investigates the clinical utility of MammaPrint, when compared with (or used in conjunction with) standard clinical-pathological criteria, for the selection of patients unlikely to benefit from adjuvant chemotherapy. From 2007 to 2011, 6,693 women who had undergone surgery for early-stage breast cancer enrolled in the trial (111 centers in nine countries). Participants were categorized as low or high risk for tumor recurrence in two ways: first, through analysis of tumor tissue using MammaPrint at a central location in Amsterdam; and second, using Adjuvant! Online, a tool that calculates risk of breast cancer recurrence based on common clinical and biological criteria.

Patients characterized in both clinical and genomic assessments as “low-risk” are spared chemotherapy, while patients characterized as “high-risk” are advised chemotherapy. Those with conflicting results are randomized to use either the clinical or the genomic risk (MammaPrint) evaluation to decide on chemotherapy treatment.
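The allocation logic just described can be sketched in a few lines. `mindact_treatment_arm` is a hypothetical illustration of the published trial design, not trial software, and the randomization itself is abstracted into a boolean argument:

```python
def mindact_treatment_arm(clinical_high, genomic_high, use_genomic_if_discordant):
    """Return the adjuvant-chemotherapy decision for one enrolled patient.

    Concordant low/low risk: chemotherapy is spared; concordant high/high:
    chemotherapy is advised. Discordant patients are randomized to let either
    the clinical or the genomic (MammaPrint) risk call drive the decision.
    """
    if clinical_high == genomic_high:  # concordant assessments
        return "chemotherapy" if clinical_high else "no chemotherapy"
    # Discordant: the randomization decides which assessment is followed
    decisive_high = genomic_high if use_genomic_if_discordant else clinical_high
    return "chemotherapy" if decisive_high else "no chemotherapy"

# A clinically high-risk / MammaPrint low-risk patient randomized to follow
# the genomic call is spared chemotherapy:
print(mindact_treatment_arm(True, False, use_genomic_if_discordant=True))  # -> no chemotherapy
```

The 46 percent reduction in chemotherapy prescription reported above comes precisely from the clinically high-risk, genomically low-risk group taking the "no chemotherapy" branch.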

The MINDACT trial is managed and sponsored by the EORTC as part of an extensive and complex partnership in collaboration with Agendia and BIG, and many other academic and commercial partners, as well as patient advocates.

“These MINDACT trial results are a testament that the science of the MammaPrint test is the most robust in the genomic breast recurrence assay market.  Agendia will continue to collaborate with pharmaceutical companies, leading cancer centers and academic groups on additional clinical research and in the pursuit of bringing more effective, individualized treatments within reach of cancer patients,” said Mark Straley, Chief Executive Officer at Agendia. “We value the partnership with the EORTC and BIG and it’s a great honor to share this critical milestone.”

Breast cancer is the most frequently diagnosed cancer in women worldwide(1). In 2012, there were nearly 1.7 million new breast cancer cases among women worldwide, accounting for 25 percent of all new cancer cases in women(2).


Imaging of Cancer Cells, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

April 13, 2016

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash only lasts nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses based on cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.

 

Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

References:

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali. Deep Learning in Label-free Cell Classification. Scientific Reports 6, Article number: 21471 (2016); doi:10.1038/srep21471 (open access)

Supplementary Information

 

Deep Learning in Label-free Cell Classification

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang,Kayvan Reza Niazi & Bahram Jalali

Scientific Reports 6, Article number: 21471 (2016)    http://dx.doi.org/10.1038/srep21471

Deep learning extracts patterns and knowledge from rich multidimensional datasets. While it is extensively used for image recognition and speech processing, its application to label-free classification of cells has not been exploited. Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells1,2. However, each cell is represented by a single value per detection channel (forward scatter, side scatter, and emission bands) and often requires labeling with specific biomarkers for acceptable classification accuracy1,3. Imaging flow cytometry4,5, on the other hand, captures images of cells, revealing significantly more information about the cells. For example, it can distinguish clusters and debris that would otherwise result in false positive identification in a conventional flow cytometer based on light scattering6.

In addition to classification accuracy, the throughput is another critical specification of a flow cytometer. Indeed high throughput, typically 100,000 cells per second, is needed to screen a large enough cell population to find rare abnormal cells that are indicative of early stage diseases. However there is a fundamental trade-off between throughput and accuracy in any measurement system7,8. For example, imaging flow cytometers face a throughput limit imposed by the speed of the CCD or the CMOS cameras, a number that is approximately 2000 cells/s for present systems9. Higher flow rates lead to blurred cell images due to the finite camera shutter speed. Many applications of flow analyzers such as cancer diagnostics, drug discovery, biofuel development, and emulsion characterization require classification of large sample sizes with a high-degree of statistical accuracy10. This has fueled research into alternative optical diagnostic techniques for characterization of cells and particles in flow.
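The throughput gap described above can be made concrete with a back-of-envelope calculation. The rare-cell abundance and the number of rare cells wanted for confident statistics below are illustrative assumptions, not figures from the paper; the 2,000 cells/s camera limit and 100,000 cells/s target are the numbers quoted in the text.

```python
rare_abundance = 1e-5        # assumed rare-cell frequency (illustrative)
wanted_rare_cells = 100      # rare cells needed for confident statistics (illustrative)
cells_needed = wanted_rare_cells / rare_abundance

hours_at_camera_limit = cells_needed / 2_000 / 3600    # ~2,000 cells/s CCD/CMOS limit
hours_at_target = cells_needed / 100_000 / 3600        # 100,000 cells/s target
print(int(cells_needed), round(hours_at_camera_limit, 2), round(hours_at_target, 3))
# → 10000000 1.39 0.028
```

Under these assumptions, a camera-limited imaging cytometer needs roughly 1.4 hours to screen the ten million cells that a 100,000 cells/s instrument covers in under two minutes.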

Recently, our group has developed a label-free imaging flow-cytometry technique based on coherent optical implementation of the photonic time stretch concept11. This instrument overcomes the trade-off between sensitivity and speed by using Amplified Time-stretch Dispersive Fourier Transform12,13,14,15. In time stretched imaging16, the object’s spatial information is encoded in the spectrum of laser pulses within a pulse duration of sub-nanoseconds (Fig. 1). Each pulse representing one frame of the camera is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching8,11,16. Moreover, warped stretch transform17,18 can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view19. In the coherent version of the instrument, the time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput20. Integrated with a microfluidic channel, the coherent time stretch imaging system in this work measures both the quantitative optical phase shift and loss of individual cells as a high-speed imaging flow cytometer, capturing 36 million images per second at flow rates as high as 10 meters per second, reaching a throughput of up to 100,000 cells per second.
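The blur-free claim follows directly from the numbers quoted above: at 10 m/s flow, a cell moves only nanometers during one sub-nanosecond flash, and the 36 MHz line rate keeps consecutive line scans well under a micron apart. A quick check (taking 1 ns as an upper bound on the pulse width):

```python
flow_speed = 10.0     # m/s, fastest flow rate quoted in the text
pulse_width = 1e-9    # s, an upper bound on the sub-nanosecond flash
line_rate = 36e6      # line scans per second, from the text

motion_per_flash_nm = flow_speed * pulse_width * 1e9   # cell motion during one flash
line_spacing_um = flow_speed / line_rate * 1e6         # spacing between line scans
print(motion_per_flash_nm, round(line_spacing_um, 2))
# → 10.0 0.28
```

Ten nanometers of motion per flash is far below optical resolution, which is why the flow is effectively frozen.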

Figure 1: Time stretch quantitative phase imaging (TS-QPI) and analytics system; A mode-locked laser followed by a nonlinear fiber, an erbium doped fiber amplifier (EDFA), and a wavelength-division multiplexing (WDM) filter generate and shape a train of broadband optical pulses. http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

 

Box 1: The pulse train is spatially dispersed into a train of rainbow flashes illuminating the target as line scans. The spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame. The ultra-short optical pulse illumination freezes the motion of cells during high speed flow to achieve blur-free imaging with a throughput of 100,000 cells/s. The phase shift and intensity loss at each location within the field of view are embedded into the spectral interference patterns using a Michelson interferometer. Box 2: The interferogram pulses were then stretched in time so that spatial information could be mapped into time through time-stretch dispersive Fourier transform (TS-DFT), and then captured by a single pixel photodetector and an analog-to-digital converter (ADC). The loss of sensitivity at high shutter speed is compensated by stimulated Raman amplification during time stretch. Box 3: (a) Pulse synchronization; the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images. (b) The biomass density of a cell leads to a spatially varying optical phase shift. When a rainbow flash passes through the cells, the changes in refractive index at different locations will cause phase walk-off at interrogation wavelengths. Hilbert transformation and phase unwrapping are used to extract the spatial phase shift. (c) Decoding the phase shift in each pulse at each wavelength and remapping it into a pixel reveals the protein concentration distribution within cells. The optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms. Thus, quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent. 
Cell features describing morphology, granularity, biomass, etc. are extracted from the images. (d) These biophysical features are used in a machine learning algorithm for high-accuracy label-free classification of the cells.
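The phase-recovery chain described in Box 3(b), a Hilbert transformation followed by phase unwrapping and carrier removal, can be sketched on a synthetic interferogram. The carrier frequency and the Gaussian phase bump below are illustrative choices, not the instrument's actual parameters.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
true_phase = 0.8 * np.exp(-((t - 0.5) ** 2) / (2 * 0.05**2))  # cell-like phase bump
carrier = 2 * np.pi * 200 * t                                  # interference fringes
interferogram = np.cos(carrier + true_phase)

analytic = hilbert(interferogram)                # complex analytic signal
phase = np.unwrap(np.angle(analytic)) - carrier  # unwrap, then remove the carrier
phase -= np.median(phase[:200])                  # calibrate on a cell-free region

err = np.max(np.abs(phase[200:-200] - true_phase[200:-200]))
print(err < 0.05)
# → True
```

The final calibration step mirrors the text's note that both images are calibrated against regions where cells are absent.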

On another note, surface markers used to label cells, such as EpCAM21, are unavailable in some applications; for example, melanoma or pancreatic circulating tumor cells (CTCs) as well as some cancer stem cells are EpCAM-negative and will escape EpCAM-based detection platforms22. Furthermore, large-population cell sorting opens the doors to downstream operations, where the negative impacts of labels on cellular behavior and viability are often unacceptable23. Cell labels may cause activating/inhibitory signal transduction, altering the behavior of the desired cellular subtypes, potentially leading to errors in downstream analysis, such as DNA sequencing and subpopulation regrowth. Consequently, quantitative phase imaging (QPI) methods24,25,26,27 that categorize unlabeled living cells with high accuracy are needed. Coherent time stretch imaging is a method that enables quantitative phase imaging at ultrahigh throughput for non-invasive label-free screening of large numbers of cells.

In this work, the information from the quantitative optical loss and phase images is fused into expert-designed features, leading to record label-free classification accuracy when combined with deep learning. Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes including protein concentration, optical loss, and morphological features of single cells at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely28,29,30,31 among cells, and their variations reflect important information about genotypes and physiological stimuli32. The multiplexed biophysical features thus lead to an information-rich hyper-dimensional representation of the cells for label-free classification with high statistical precision.

We further improved the accuracy, repeatability, and the balance between sensitivity and specificity of our label-free cell classification by a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning, as well as unique training by evolutionary global optimization of receiver operating characteristics (ROC). To demonstrate the sensitivity, specificity, and accuracy of multi-feature label-free flow cytometry using our technique, we classified (1) OT-II hybridoma T-lymphocytes and SW-480 colon cancer epithelial cells, and (2) Chlamydomonas reinhardtii algal cells (herein referred to as Chlamydomonas) based on their lipid content, which is related to the yield in biofuel production. Our preliminary results show that compared to classification by individual biophysical parameters, our label-free hyperdimensional technique improves the detection accuracy from 77.8% to 95.5%, or in other words, reduces the classification inaccuracy by about five times. ……..
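The central point of the passage above, that fusing several individually weak biophysical features outperforms any single one, can be illustrated with synthetic data. The feature count, class shift, and sample sizes below are stand-ins, not the measured cell data, and the fused score is a plain sum rather than the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
healthy = rng.normal(0.0, 1.0, size=(n, 3))   # three weakly informative features
cancer = rng.normal(0.5, 1.0, size=(n, 3))    # small per-feature class shift

def auc(pos, neg):
    """Probability that a positive sample scores above a negative one (rank AUC)."""
    return (pos[:, None] > neg[None, :]).mean()

single = max(auc(cancer[:, j], healthy[:, j]) for j in range(3))
fused = auc(cancer.sum(axis=1), healthy.sum(axis=1))
print(round(single, 2), round(fused, 2))
```

Even this naive fusion lifts the AUC well above the best single feature, the same qualitative effect as the paper's 77.8% to 95.5% accuracy gain.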

 

Feature Extraction

The decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as shown in Fig. 2 (see Section Methods: Image Reconstruction). These images are used to obtain biophysical fingerprints of the cells8,36. With domain expertise, raw images are fused and transformed into a suitable set of biophysical features, listed in Table 1, which the deep learning model further converts into learned features for improved classification.

 

http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f2.jpg

The optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells. The attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles. The optical loss image is derived from the low frequency component of the pulse interferograms. The optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert Transformation, followed by a phase unwrapping algorithm. Details of these derivations can be found in Section Methods. Also, supplementary Videos 1 and 2 show measurements of cell-induced optical path length difference by TS-QPI at four different points along the rainbow for OT-II and SW-480, respectively.

Table 1: List of extracted features.

Feature Name    Description         Category

 

Figure 3: Biophysical features formed by image fusion.

(a) Pairwise correlation matrix visualized as a heat map. The map depicts the correlation between all major 16 features extracted from the quantitative images. Diagonal elements of the matrix represent correlation of each parameter with itself, i.e. the autocorrelation. The subsets in box 1, box 2, and box 3 show high correlation because they are mainly related to morphological, optical phase, and optical loss feature categories, respectively. (b) Ranking of biophysical features based on their AUCs in single-feature classification. Blue bars show performance of the morphological parameters, which includes diameter along the interrogation rainbow, diameter along the flow direction, tight cell area, loose cell area, perimeter, circularity, major axis length, orientation, and median radius. As expected, morphology contains most information, but other biophysical features can contribute to improved performance of label-free cell classification. Orange bars show optical phase shift features i.e. optical path length differences and refractive index difference. Green bars show optical loss features representing scattering and absorption by the cell. The best-performing feature in each of these three categories is marked in red.

Figure 4: Machine learning pipeline. Information of quantitative optical phase and loss images are fused to extract multivariate biophysical features of each cell, which are fed into a fully-connected neural network.

The neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, convenient for classification. This deep neural network is globally trained via area under the curve (AUC) of the receiver operating characteristics (ROC). Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node. The training process maximizes AUC, pushing the ROC curve toward the upper left corner, which means improved sensitivity and specificity in classification.
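A minimal sketch of the AUC-driven training idea described above: candidate weights are scored directly by rank-based ROC AUC and refined by a toy (1+1) evolutionary strategy, rather than by a differentiable loss. A single linear node and synthetic features stand in for the paper's deep network and real cell data.

```python
import numpy as np

rng = np.random.default_rng(1)
X_pos = rng.normal(0.4, 1.0, size=(500, 4))   # synthetic positive-class features
X_neg = rng.normal(0.0, 1.0, size=(500, 4))   # synthetic negative-class features

def auc(w):
    """Rank-based ROC AUC of the linear score X @ w."""
    s_pos, s_neg = X_pos @ w, X_neg @ w
    return (s_pos[:, None] > s_neg[None, :]).mean()

w = rng.normal(size=4)
best = auc(w)
for _ in range(200):              # (1+1)-ES: keep a mutation only if AUC improves
    cand = w + 0.2 * rng.normal(size=4)
    score = auc(cand)
    if score > best:
        w, best = cand, score
print(round(best, 2))
```

Because AUC is optimized directly, every accepted mutation pushes the ROC curve toward the upper left corner, which is exactly the training objective the figure legend describes.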

How to cite this article: Chen, C. L. et al. Deep Learning in Label-free Cell Classification.

Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471

 

Computer Algorithm Helps Characterize Cancerous Genomic Variations

http://www.genengnews.com/gen-news-highlights/computer-algorithm-helps-characterize-cancerous-genomic-variations/81252626/

To better characterize the functional context of genomic variations in cancer, researchers developed a new computer algorithm called REVEALER. [UC San Diego Health]

Scientists at the University of California San Diego School of Medicine and the Broad Institute say they have developed a new computer algorithm—REVEALER—to better characterize the functional context of genomic variations in cancer. The tool, described in a paper (“Characterizing Genomic Alterations in Cancer by Complementary Functional Associations”) published in Nature Biotechnology, is designed to help researchers identify groups of genetic variations that together associate with a particular way cancer cells get activated, or how they respond to certain treatments.

REVEALER is available for free to the global scientific community via the bioinformatics software portal GenePattern.org.

“This computational analysis method effectively uncovers the functional context of genomic alterations, such as gene mutations, amplifications, or deletions, that drive tumor formation,” said senior author Pablo Tamayo, Ph.D., professor and co-director of the UC San Diego Moores Cancer Center Genomics and Computational Biology Shared Resource.

Dr. Tamayo and team tested REVEALER using The Cancer Genome Atlas (TCGA), the NIH’s database of genomic information from more than 500 human tumors representing many cancer types. REVEALER identified gene alterations associated with the activation of several cellular processes known to play a role in tumor development and response to certain drugs. Some of these gene mutations were already known, but others were new.

For example, the researchers discovered new activating genomic abnormalities for beta-catenin, a cancer-promoting protein, and for the oxidative stress response that some cancers hijack to increase their viability.

REVEALER requires as input high-quality genomic data and a significant number of cancer samples, which can be a challenge, according to Dr. Tamayo. But REVEALER is more sensitive at detecting similarities between different types of genomic features and less dependent on simplifying statistical assumptions, compared to other methods, he adds.

“This study demonstrates the potential of combining functional profiling of cells with the characterizations of cancer genomes via next-generation sequencing,” said co-senior author Jill P. Mesirov, Ph.D., professor and associate vice chancellor for computational health sciences at UC San Diego School of Medicine.

 

Characterizing genomic alterations in cancer by complementary functional associations

Jong Wook Kim, Olga B Botvinnik, Omar Abudayyeh, Chet Birger, et al.

Nature Biotechnology (2016)    http://dx.doi.org/10.1038/nbt.3527

Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.
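The kind of search REVEALER performs can be caricatured as greedy selection of alterations whose union tracks a continuous functional target across samples. The sketch below uses plain Pearson correlation as the match score for brevity (REVEALER itself scores candidates with a conditional information coefficient), and all profiles are synthetic: two mutually exclusive "driver" alterations plus random passengers.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
idx = np.arange(n)
drv_a = (idx < 15).astype(float)                    # alteration in samples 0-14
drv_b = ((idx >= 15) & (idx < 30)).astype(float)    # mutually exclusive alteration
target = drv_a + drv_b + 0.1 * rng.normal(size=n)   # "pathway activation" profile

alterations = {"drv_a": drv_a, "drv_b": drv_b}
for k in range(8):                                  # uninformative passenger alterations
    alterations[f"noise_{k}"] = (rng.random(n) < 0.2).astype(float)

def match(candidate, summary):
    """Correlation of the union (summary OR candidate) with the target."""
    union = np.maximum(summary, candidate)
    return float(np.corrcoef(union, target)[0, 1])

selected, summary = [], np.zeros(n)
for _ in range(2):                  # greedily add complementary alterations
    name = max(alterations, key=lambda k: match(alterations[k], summary))
    selected.append(name)
    summary = np.maximum(summary, alterations[name])
print(sorted(selected))
```

The greedy step naturally favors mutually exclusive alterations: once one driver is in the summary profile, only the complementary driver improves the union's match with the target.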

 

Figure 2: REVEALER results for transcriptional activation of β-catenin in cancer.

(a) This heatmap illustrates the use of the REVEALER approach to find complementary genomic alterations that match the transcriptional activation of β-catenin in cancer. The target profile is a TCF4 reporter that provides an estimate of…

 

An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

Jonathan P. Celli, Imran Rizvi, Adam R. Blanden, Iqbal Massodi, Michael D. Glidden, Brian W. Pogue & Tayyaba Hasan

Scientific Reports 4: 3751 (2014)    http://dx.doi.org/10.1038/srep03751

While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

The attrition rates for preclinical development of oncology therapeutics are particularly dismal due to a complex set of factors which includes 1) the failure of pre-clinical models to recapitulate determinants of in vivo treatment response, and 2) the limited ability of available assays to extract treatment-specific data integral to the complexities of therapeutic responses1,2,3. Three-dimensional (3D) tumour models have been shown to restore crucial stromal interactions which are missing in the more commonly used 2D cell culture and that influence tumour organization and architecture4,5,6,7,8, as well as therapeutic response9,10, multicellular resistance (MCR)11,12, drug penetration13,14, hypoxia15,16, and anti-apoptotic signaling17. However, such sophisticated models can only have an impact on therapeutic guidance if they are accompanied by robust quantitative assays, not only for cell viability but also for providing mechanistic insights related to the outcomes. While numerous assays for drug discovery exist18, they are generally not developed for use in 3D systems and are often inherently unsuitable. For example, colorimetric conversion products have been noted to bind to extracellular matrix (ECM)19 and traditional colorimetric cytotoxicity assays reduce treatment response to a single number reflecting a biochemical event that has been equated to cell viability (e.g. tetrazolium salt conversion20). Such approaches fail to provide insight into the spatial patterns of response within colonies, morphological or structural effects of drug response, or how overall culture viability may be obscuring the status of sub-populations that are resistant or partially responsive. Hence, the full benefit of implementing 3D tumour models in therapeutic development has yet to be realized for lack of analytical methods that describe the very aspects of treatment outcome that these systems restore.

Motivated by these factors, we introduce a new platform for quantitative in situ treatment assessment (qVISTA) in 3D tumour models based on computational analysis of information-dense biological image datasets (bioimage-informatics)21,22. This methodology provides software end-users with multiple levels of complexity in output content, from rapidly-interpreted dose response relationships to higher content quantitative insights into treatment-dependent architectural changes, spatial patterns of cytotoxicity within fields of multicellular structures, and statistical analysis of nodule-by-nodule size-dependent viability. The approach introduced here is cognizant of tradeoffs between optical resolution, data sampling (statistics), depth of field, and widespread usability (instrumentation requirement). Specifically, it is optimized for interpretation of fluorescent signals for disease-specific 3D tumour micronodules that are sufficiently small that thousands can be imaged simultaneously with little or no optical bias from widefield integration of signal along the optical axis of each object. At the core of our methodology is the premise that the copious numerical readouts gleaned from segmentation and interpretation of fluorescence signals in these image datasets can be converted into usable information to classify treatment effects comprehensively, without sacrificing the throughput of traditional screening approaches. It is hoped that this comprehensive treatment-assessment methodology will have significant impact in facilitating more sophisticated implementation of 3D cell culture models in preclinical screening by providing a level of content and biological relevance impossible with existing assays in monolayer cell culture in order to focus therapeutic targets and strategies before costly and tedious testing in animal models.
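The per-nodule readout idea behind qVISTA can be sketched on a synthetic two-channel image: segment nodules by thresholding and connected-component labeling, then score each nodule's dead-signal fraction instead of collapsing the whole well to one viability number. The images and thresholds below are illustrative, and `scipy.ndimage` stands in for the authors' actual segmentation routines, which are not specified here.

```python
import numpy as np
from scipy import ndimage

viability = np.zeros((100, 100))   # synthetic nodule-mask channel
dead = np.zeros((100, 100))        # synthetic "dead" signal channel
yy, xx = np.ogrid[:100, :100]
# three circular nodules; only the second responds to treatment
for cx, cy, responds in [(25, 25, False), (70, 30, True), (50, 75, False)]:
    disk = (xx - cx) ** 2 + (yy - cy) ** 2 <= 10**2
    viability[disk] = 1.0
    if responds:
        dead[disk] = 1.0

labels, n_nodules = ndimage.label(viability > 0.5)   # segment individual nodules
dead_fraction = ndimage.mean(dead, labels, index=range(1, n_nodules + 1))
print(n_nodules, sorted(round(float(f), 2) for f in dead_fraction))
# → 3 [0.0, 0.0, 1.0]
```

A global colorimetric readout of this well would report one-third cell death; the nodule-by-nodule readout shows a resistant subpopulation alongside a fully responding one, which is exactly the distinction the text argues is lost in single-number assays.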

Using two different cell lines and as depicted in Figure 1, we adopt an ECM overlay method pioneered originally for 3D breast cancer models23, and developed in previous studies by us to model micrometastatic ovarian cancer19,24. This system leads to the formation of adherent multicellular 3D acini in approximately the same focal plane atop a laminin-rich ECM bed, implemented here in glass-bottom multiwell imaging plates for automated microscopy. The 3D nodules resulting from restoration of ECM signaling5,8 are heterogeneous in size24, in contrast to other 3D spheroid methods, such as rotary or hanging drop cultures10, in which cells are driven to aggregate into uniformly sized spheroids due to lack of an appropriate substrate to adhere to. Although the latter processes are also biologically relevant, it is the adherent tumour populations characteristic of advanced metastatic disease that are more likely to be managed with medical oncology, which are the focus of therapeutic evaluation herein. The heterogeneity in 3D structures formed via ECM overlay is validated here by endoscopic imaging of in vivo tumours in orthotopic xenografts derived from the same cells (OVCAR-5).

 

Figure 1: A simplified schematic flow chart of imaging-based quantitative in situ treatment assessment (qVISTA) in 3D cell culture.

(This figure was prepared in Adobe Illustrator® software by MD Glidden, JP Celli and I Rizvi). A detailed breakdown of the image processing (Step 4) is provided in Supplemental Figure 1.

A critical component of the imaging-based strategy introduced here is the rational tradeoff of image-acquisition parameters for field of view, depth of field and optical resolution, and the development of image processing routines for appropriate removal of background, scaling of fluorescence signals from more than one channel and reliable segmentation of nodules. Obtaining depth-resolved 3D structures for each nodule at sub-micron lateral resolution using a laser-scanning confocal system would require ~40 hours (at approximately 100 fields for each well with a 20× objective, times 1 minute/field for a coarse z-stack, times 24 wells) to image a single plate with the same coverage achieved in this study. Even if the resources were available to devote to such time-intensive image acquisition, not to mention the processing, the optical properties of the fluorophores would change during the required time frame for image acquisition, even with environmental controls to maintain culture viability during such extended imaging. The approach developed here, with a mind toward adaptation into high throughput screening, provides a rational balance of speed, requiring less than 30 minutes/plate, and statistical rigour, providing images of thousands of nodules in this time, as required for the high-content analysis developed in this study. These parameters can be further optimized for specific scenarios. For example, we obtain the same number of images in a 96 well plate as for a 24 well plate by acquiring only a single field from each well, rather than 4 stitched fields. This quadruples the number of conditions assayed in a single run, at the expense of the number of nodules per condition, and therefore the ability to obtain statistical data sets for size-dependent response, Dfrac and other segmentation-dependent numerical readouts.
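The ~40-hour confocal estimate in the paragraph above is simply the product of the quoted acquisition parameters:

```python
fields_per_well = 100      # ~100 fields per well with a 20x objective
minutes_per_field = 1      # one coarse confocal z-stack per field
wells = 24
confocal_hours = fields_per_well * minutes_per_field * wells / 60
print(confocal_hours)
# → 40.0
```

Against the under-30-minutes-per-plate widefield approach, that is roughly an 80-fold difference in acquisition time for the same coverage.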

 

We envision that the system for high-content interrogation of therapeutic response in 3D cell culture could have widespread impact in multiple arenas from basic research to large scale drug development campaigns. As such, the treatment assessment methodology presented here does not require extraordinary optical instrumentation or computational resources, making it widely accessible to any research laboratory with an inverted fluorescence microscope and modestly equipped personal computer. And although we have focused here on cancer models, the methodology is broadly applicable to quantitative evaluation of other tissue models in regenerative medicine and tissue engineering. While this analysis toolbox could have impact in facilitating the implementation of in vitro 3D models in preclinical treatment evaluation in smaller academic laboratories, it could also be adopted as part of the screening pipeline in large pharma settings. With the implementation of appropriate temperature controls to handle basement membranes in current robotic liquid handling systems, our analyses could be used in ultra high-throughput screening. In addition to removing non-efficacious potential candidate drugs earlier in the pipeline, this approach could also yield the additional economic advantage of minimizing the use of costly time-intensive animal models through better estimates of dose range, sequence and schedule for combination regimens.

 

Microscope Uses AI to Find Cancer Cells More Efficiently

Thu, 04/14/2016 – by Shaun Mason

http://www.mdtmag.com/news/2016/04/microscope-uses-ai-find-cancer-cells-more-efficiently

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses.

There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, which is capable of quickly imaging cells in blood samples, and a deep learning computer program that identifies cancer cells with over 95 percent accuracy.

Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data with the goal of achieving accurate decision making.
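The published deep network is not reproduced here; as a minimal sketch of the underlying idea, classifying cells from a vector of 16 physical features can be illustrated with a plain logistic regression trained on synthetic data (the populations, class separation and training settings below are all invented for the example):

```python
import numpy as np

# Minimal sketch: classify cells from a 16-feature vector (size,
# granularity, biomass, ...). Synthetic data and a plain logistic
# regression stand in for the paper's deep network; illustrative only.

rng = np.random.default_rng(0)
n, d = 1000, 16

# Synthetic "healthy" vs "cancer" populations with shifted feature means.
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(0.8, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):                        # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probability
    g = p - y                               # gradient of log-loss
    w -= lr * (X.T @ g) / len(y)
    b -= lr * g.mean()

acc = (((X @ w + b) > 0) == y).mean()
print(f"training accuracy: {acc:.2%}")
```

With 16 weakly informative features the linear model already separates the classes well, which is why combining many physical characteristics outperforms single-feature, label-free methods.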

The study, which was published in the journal Nature Scientific Reports, was led by Bahram Jalali, professor and Northrop Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.

Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The new microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.

The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.
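A quick consistency check on the quoted figures, assuming one digitized frame per image:

```python
# A digitization rate of 36 million images per second corresponds to
# roughly 28 ns per frame, consistent with the nanosecond-scale laser
# "flash" described for the time-stretch microscope.
rate = 36e6                     # images per second (quoted in the text)
per_frame_ns = 1e9 / rate       # nanoseconds available per frame
print(f"{per_frame_ns:.1f} ns per frame")  # ~27.8 ns
```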

“Each frame is slowed down in time and optically amplified so it can be digitized,” Mahjoubfar said. “This lets us perform fast cell imaging that the artificial intelligence component can distinguish.”

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach also eliminates that problem.

“The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination,” Chen said.

The researchers write in the paper that the system could lead to data-driven diagnoses based on cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease. See also http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471

 

 


CRISPR/Cas9, Familial Amyloid Polyneuropathy (FAP) and Neurodegenerative Disease

CRISPR/Cas9, Familial Amyloid Polyneuropathy (FAP) and Neurodegenerative Disease, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair


Curator: Larry H. Bernstein, MD, FCAP

 

CRISPR/Cas9 and Targeted Genome Editing: A New Era in Molecular Biology

https://www.neb.com/tools-and-resources/feature-articles/crispr-cas9-and-targeted-genome-editing-a-new-era-in-molecular-biology

The development of efficient and reliable ways to make precise, targeted changes to the genome of living cells is a long-standing goal for biomedical researchers. Recently, a new tool based on a bacterial CRISPR-associated protein-9 nuclease (Cas9) from Streptococcus pyogenes has generated considerable excitement (1). This follows several attempts over the years to manipulate gene function, including homologous recombination (2) and RNA interference (RNAi) (3). RNAi, in particular, became a laboratory staple enabling inexpensive and high-throughput interrogation of gene function (4, 5), but it is hampered by providing only temporary inhibition of gene function and by unpredictable off-target effects (6). Other recent approaches to targeted genome modification – zinc-finger nucleases [ZFNs (7)] and transcription activator-like effector nucleases [TALENs (8)] – enable researchers to generate permanent mutations by introducing double-stranded breaks to activate repair pathways. These approaches are costly and time-consuming to engineer, limiting their widespread use, particularly for large-scale, high-throughput studies.

The Biology of Cas9

The functions of CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and CRISPR-associated (Cas) genes are essential in adaptive immunity in select bacteria and archaea, enabling the organisms to respond to and eliminate invading genetic material. These repeats were initially discovered in the 1980s in E. coli (9), but their function wasn’t confirmed until 2007 by Barrangou and colleagues, who demonstrated that S. thermophilus can acquire resistance against a bacteriophage by integrating a genome fragment of an infectious virus into its CRISPR locus (10).

Three types of CRISPR mechanisms have been identified, of which type II is the most studied. In this case, invading DNA from viruses or plasmids is cut into small fragments and incorporated into a CRISPR locus amidst a series of short repeats (around 20 bp). The loci are transcribed, and transcripts are then processed to generate small RNAs (crRNA – CRISPR RNA), which are used to guide effector endonucleases that target invading DNA based on sequence complementarity (Figure 1) (11).

Figure 1. Cas9 in vivo: Bacterial Adaptive Immunity

https://www.neb.com/~/media/NebUs/Files/Feature%20Articles/Images/FA_Cas9_Fig1_Cas9InVivo.png

In the acquisition phase, foreign DNA is incorporated into the bacterial genome at the CRISPR loci. The CRISPR loci are then transcribed and processed into crRNA during crRNA biogenesis. During interference, Cas9 endonuclease complexed with a crRNA and separate tracrRNA cleaves foreign DNA containing a 20-nucleotide crRNA-complementary sequence adjacent to the PAM sequence. (Figure not drawn to scale.)

https://www.neb.com/~/media/NebUs/Files/Feature%20Articles/Images/FA_Cas9_GenomeEditingGlossary.png

One Cas protein, Cas9 (also known as Csn1), has been shown, through knockdown and rescue experiments, to be a key player in certain CRISPR mechanisms (specifically type II CRISPR systems). The type II CRISPR mechanism is unique compared to other CRISPR systems, as only one Cas protein (Cas9) is required for gene silencing (12). In type II systems, Cas9 participates in the processing of crRNAs (12) and is responsible for the destruction of the target DNA (11). Cas9’s function in both of these steps relies on the presence of two nuclease domains: a RuvC-like nuclease domain located at the amino terminus and an HNH-like nuclease domain that resides in the mid-region of the protein (13).

To achieve site-specific DNA recognition and cleavage, Cas9 must be complexed with both a crRNA and a separate trans-activating crRNA (tracrRNA or trRNA) that is partially complementary to the crRNA (11). The tracrRNA is required for crRNA maturation from a primary transcript encoding multiple pre-crRNAs. This occurs in the presence of RNase III and Cas9 (12).

During the destruction of target DNA, the HNH and RuvC-like nuclease domains cut both DNA strands, generating double-stranded breaks (DSBs) at sites defined by a 20-nucleotide target sequence within an associated crRNA transcript (11, 14). The HNH domain cleaves the complementary strand, while the RuvC domain cleaves the noncomplementary strand.

The double-stranded endonuclease activity of Cas9 also requires that a short conserved sequence (2–5 nt), known as the protospacer adjacent motif (PAM), follow immediately 3′ of the crRNA-complementary sequence (15). In fact, even fully complementary sequences are ignored by Cas9-RNA in the absence of a PAM sequence (16).
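The target-site rules just described (a 20-nucleotide protospacer followed immediately 3′ by a PAM; NGG for S. pyogenes Cas9) amount to a simple sequence scan; a sketch over an invented example sequence:

```python
import re

# Sketch of type II target-site enumeration: Cas9 requires a 20-nt
# protospacer immediately followed (3') by a PAM; for S. pyogenes Cas9
# the PAM is NGG. Illustrative only; real guide-design tools add
# ranking and off-target filtering.

def find_targets(seq, protospacer_len=20, pam=r"[ACGT]GG"):
    """Yield (position, protospacer, pam) for every candidate site."""
    seq = seq.upper()
    for i in range(len(seq) - protospacer_len - 3 + 1):
        spacer = seq[i:i + protospacer_len]
        cand_pam = seq[i + protospacer_len:i + protospacer_len + 3]
        if re.fullmatch(pam, cand_pam):
            yield i, spacer, cand_pam

dna = "TTGACGCATCGATTACGGATCAGCTGGAATCCGTACGATCCTGGACT"  # invented
for pos, spacer, pam in find_targets(dna):
    print(pos, spacer, pam)
```

A genome-scale tool would also scan the reverse complement; that half is omitted here for brevity.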

Cas9 and CRISPR as a New Tool in Molecular Biology

The simplicity of the type II CRISPR nuclease, with only three required components (Cas9 along with the crRNA and trRNA), makes this system amenable to adaptation for genome editing. This potential was realized in 2012 by the Doudna and Charpentier labs (11). Based on the type II CRISPR system described previously, the authors developed a simplified two-component system by combining trRNA and crRNA into a single synthetic guide RNA (sgRNA). sgRNA-programmed Cas9 was shown to be as effective as Cas9 programmed with separate trRNA and crRNA in guiding targeted gene alterations (Figure 2A).

To date, three different variants of the Cas9 nuclease have been adopted in genome-editing protocols. The first is wild-type Cas9, which can site-specifically cleave double-stranded DNA, resulting in the activation of the double-strand break (DSB) repair machinery. DSBs can be repaired by the cellular Non-Homologous End Joining (NHEJ) pathway (17), resulting in insertions and/or deletions (indels) which disrupt the targeted locus. Alternatively, if a donor template with homology to the targeted locus is supplied, the DSB may be repaired by the homology-directed repair (HDR) pathway, allowing precise replacement mutations to be made (Figure 2A) (17, 18).

Cong and colleagues (1) took the Cas9 system a step further towards increased precision by developing a mutant form, known as Cas9D10A, with only nickase activity. This means it cleaves only one DNA strand, and does not activate NHEJ. Instead, when provided with a homologous repair template, DNA repairs are conducted via the high-fidelity HDR pathway only, resulting in reduced indel mutations (1, 11, 19). Cas9D10A is even more appealing in terms of target specificity when loci are targeted by paired Cas9 complexes designed to generate adjacent DNA nicks (20) (see further details about “paired nickases” in Figure 2B).

The third variant is a nuclease-deficient Cas9 (dCas9, Figure 2C) (21). Mutations H840A in the HNH domain and D10A in the RuvC domain inactivate cleavage activity, but do not prevent DNA binding (11, 22). Therefore, this variant can be used to sequence-specifically target any region of the genome without cleavage. Instead, by fusing with various effector domains, dCas9 can be used either as a gene silencing or activation tool (21, 23–26). Furthermore, it can be used as a visualization tool. For instance, Chen and colleagues used dCas9 fused to Enhanced Green Fluorescent Protein (EGFP) to visualize repetitive DNA sequences with a single sgRNA or nonrepetitive loci using multiple sgRNAs (27).

Figure 2. CRISPR/Cas9 System Applications

https://www.neb.com/~/media/NebUs/Files/Feature%20Articles/Images/FA_Cas9_Fig2_Cas9forGenomeEditing.png?device=modal

  A. Wild-type Cas9 nuclease site-specifically cleaves double-stranded DNA, activating the double-strand break repair machinery. In the absence of a homologous repair template, non-homologous end joining can result in indels disrupting the target sequence. Alternatively, precise mutations and knock-ins can be made by providing a homologous repair template and exploiting the homology-directed repair pathway.
    B. Mutated Cas9 makes a site-specific single-strand nick. Two sgRNAs can be used to introduce a staggered double-stranded break, which can then undergo homology-directed repair.
    C. Nuclease-deficient Cas9 can be fused with various effector domains allowing specific localization; for example, transcriptional activators, repressors, and fluorescent proteins.

Targeting Efficiency and Off-target Mutations

Targeting efficiency, or the percentage of desired mutation achieved, is one of the most important parameters by which to assess a genome-editing tool. The targeting efficiency of Cas9 compares favorably with more established methods, such as TALENs or ZFNs (8). For example, in human cells, custom-designed ZFNs and TALENs could only achieve efficiencies ranging from 1% to 50% (29–31). In contrast, the Cas9 system has been reported to have efficiencies up to >70% in zebrafish (32) and plants (33), and ranging from 2–5% in induced pluripotent stem cells (34). In addition, Zhou and colleagues were able to improve genome targeting up to 78% in one-cell mouse embryos, and achieved effective germline transmission through the use of dual sgRNAs to simultaneously target an individual gene (35).

A widely used method to identify mutations is the T7 Endonuclease I mutation detection assay (36, 37) (Figure 3). This assay detects heteroduplex DNA that results from the annealing of a DNA strand including desired mutations with a wild-type DNA strand (37).

Figure 3. T7 Endonuclease I Targeting Efficiency Assay

https://www.neb.com/~/media/NebUs/Files/Feature%20Articles/Images/FA_Cas9_Fig3_T7Assay_TargetEfficiency.png

Genomic DNA is amplified with primers bracketing the modified locus. PCR products are then denatured and re-annealed yielding 3 possible structures. Duplexes containing a mismatch are digested by T7 Endonuclease I. The DNA is then electrophoretically separated and fragment analysis is used to calculate targeting efficiency.
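A commonly used way to turn these fragment intensities into an efficiency number assumes random re-annealing of modified and unmodified strands, so the cleaved fraction relates quadratically to the indel frequency. A sketch with hypothetical densitometry values:

```python
from math import sqrt

# Standard estimate of modification frequency from a T7 Endonuclease I
# assay. Re-annealed PCR products pair at random, so the cleaved
# fraction f_cut relates quadratically to the indel frequency:
#   indel% = 100 * (1 - sqrt(1 - f_cut))
# Band intensities below are hypothetical, for illustration.

def t7e1_indel_percent(parental_band, cleaved_bands):
    """Estimate % modified alleles from gel fragment intensities."""
    f_cut = sum(cleaved_bands) / (parental_band + sum(cleaved_bands))
    return 100.0 * (1.0 - sqrt(1.0 - f_cut))

# Hypothetical values: uncut band 70, cleavage products 20 and 10.
print(f"{t7e1_indel_percent(70, [20, 10]):.1f}% indels")  # -> 16.3% indels
```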

Another important parameter is the incidence of off-target mutations. Such mutations are likely to appear in sites that have differences of only a few nucleotides compared to the original sequence, as long as they are adjacent to a PAM sequence. This occurs as Cas9 can tolerate up to 5 base mismatches within the protospacer region (36) or a single base difference in the PAM sequence (38). Off-target mutations are generally more difficult to detect, requiring whole-genome sequencing to rule them out completely.

Recent improvements to the CRISPR system for reducing off-target mutations have been made through the use of truncated gRNA (truncated within the crRNA-derived sequence) or by adding two extra guanine (G) nucleotides to the 5´ end (28, 37). Another way researchers have attempted to minimize off-target effects is with the use of “paired nickases” (20). This strategy uses D10A Cas9 and two sgRNAs complementary to the adjacent area on opposite strands of the target site (Figure 2B). While this induces DSBs in the target DNA, it is expected to create only single nicks in off-target locations and, therefore, result in minimal off-target mutations.
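The mismatch-tolerance problem above can be illustrated with a naive scorer that simply counts guide-to-site mismatches; real off-target predictors also weight mismatch position and PAM identity, and all sequences here are invented:

```python
# Naive off-target scoring: count mismatches between a 20-nt guide and
# candidate genomic sites (each assumed to sit adjacent to a PAM). The
# <=5-mismatch tolerance figure is the one quoted in the text.

def mismatches(guide, site):
    assert len(guide) == len(site) == 20
    return sum(a != b for a, b in zip(guide.upper(), site.upper()))

def risky_sites(guide, candidates, tolerance=5):
    """Candidate sites that Cas9 might still cleave despite mismatches."""
    return [(s, mismatches(guide, s)) for s in candidates
            if mismatches(guide, s) <= tolerance]

guide = "GACGCATCGATTACGGATCA"
candidates = ["GACGCATCGATTACGGATCA",   # perfect match (on-target)
              "GACGCATCGATAACGGTTCA",   # 2 mismatches -> still risky
              "TTCGAATCGTTTACGCAACA"]   # 6 mismatches -> filtered out
print(risky_sites(guide, candidates))
```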

By leveraging computation to reduce off-target mutations, several groups have developed web-based tools to facilitate the identification of potential CRISPR target sites and assess their potential for off-target cleavage. Examples include the CRISPR Design Tool (38) and the ZiFiT Targeter, Version 4.2 (39, 40).

Applications as a Genome-editing and Genome Targeting Tool

Following its initial demonstration in 2012 (11), the CRISPR/Cas9 system has been widely adopted. It has already been successfully used to target important genes in many cell lines and organisms, including human (34), bacteria (41), zebrafish (32), C. elegans (42), plants (34), Xenopus tropicalis (43), yeast (44), Drosophila (45), monkeys (46), rabbits (47), pigs (42), rats (48) and mice (49). Several groups have now taken advantage of this method to introduce single point mutations (deletions or insertions) in a particular target gene via a single gRNA (14, 21, 29). Using a pair of gRNA-directed Cas9 nucleases instead, it is also possible to induce large deletions or genomic rearrangements, such as inversions or translocations (50). A recent exciting development is the use of the dCas9 version of the CRISPR/Cas9 system to target protein domains for transcriptional regulation (26, 51, 52), epigenetic modification (25), and microscopic visualization of specific genome loci (27).

The CRISPR/Cas9 system requires only the redesign of the crRNA to change target specificity. This contrasts with other genome editing tools, including zinc finger and TALENs, where redesign of the protein-DNA interface is required. Furthermore, CRISPR/Cas9 enables rapid genome-wide interrogation of gene function by generating large gRNA libraries (51, 53) for genomic screening.

The Future of CRISPR/Cas9

The rapid progress in developing Cas9 into a set of tools for cell and molecular biology research has been remarkable, likely due to the simplicity, high efficiency and versatility of the system. Of the designer nuclease systems currently available for precision genome engineering, the CRISPR/Cas system is by far the most user friendly. It is now also clear that Cas9’s potential reaches beyond DNA cleavage, and its usefulness for genome locus-specific recruitment of proteins will likely only be limited by our imagination.

 

Scientists urge caution in using new CRISPR technology to treat human genetic disease

By Robert Sanders, Media relations | MARCH 19, 2015
http://news.berkeley.edu/2015/03/19/scientists-urge-caution-in-using-new-crispr-technology-to-treat-human-genetic-disease/

http://news.berkeley.edu/wp-content/uploads/2015/03/crispr350.jpg

The bacterial enzyme Cas9 is the engine of RNA-programmed genome engineering in human cells. (Graphic by Jennifer Doudna/UC Berkeley)

A group of 18 scientists and ethicists today warned that a revolutionary new tool to cut and splice DNA should be used cautiously when attempting to fix human genetic disease, and strongly discouraged any attempts at making changes to the human genome that could be passed on to offspring.

Among the authors of this warning is Jennifer Doudna, the co-inventor of the technology, called CRISPR-Cas9, which is driving a new interest in gene therapy, or “genome engineering.” She and colleagues co-authored a perspective piece that appears in the March 20 issue of Science, based on discussions at a meeting that took place in Napa on Jan. 24. The same issue of Science features a collection of recent research papers, commentary and news articles on CRISPR and its implications.    …..

A prudent path forward for genomic engineering and germline gene modification

David Baltimore, Paul Berg, …, Jennifer A. Doudna, et al.
http://science.sciencemag.org/content/early/2015/03/18/science.aab1028.full
Science 19 Mar 2015. http://dx.doi.org/10.1126/science.aab1028

 

Correcting genetic defects

Scientists today are changing DNA sequences to correct genetic defects in animals as well as cultured tissues generated from stem cells, strategies that could eventually be used to treat human disease. The technology can also be used to engineer animals with genetic diseases mimicking human disease, which could lead to new insights into previously enigmatic disorders.

The CRISPR-Cas9 tool is still being refined to ensure that genetic changes are precisely targeted, Doudna said. Nevertheless, the authors met “… to initiate an informed discussion of the uses of genome engineering technology, and to identify proactively those areas where current action is essential to prepare for future developments. We recommend taking immediate steps toward ensuring that the application of genome engineering technology is performed safely and ethically.”

 

Amyloid CRISPR Plasmids and si/shRNA Gene Silencers

http://www.scbt.com/crispr/table-amyloid.html

Santa Cruz Biotechnology, Inc. offers a broad range of gene silencers in the form of siRNAs, shRNA Plasmids and shRNA Lentiviral Particles as well as CRISPR/Cas9 Knockout and CRISPR Double Nickase plasmids. Amyloid gene silencers are available as Amyloid siRNA, Amyloid shRNA Plasmid, Amyloid shRNA Lentiviral Particles and Amyloid CRISPR/Cas9 Knockout plasmids. Amyloid CRISPR/dCas9 Activation Plasmids and CRISPR Lenti Activation Systems for gene activation are also available. Gene silencers and activators are useful for gene studies in combination with antibodies used for protein detection.    Amyloid CRISPR Knockout, HDR and Nickase Knockout Plasmids

 

CRISPR-Cas9-Based Knockout of the Prion Protein and Its Effect on the Proteome


Mehrabian M, Brethour D, MacIsaac S, Kim JK, Gunawardana CG, Wang H, et al.
PLoS ONE 2014; 9(12): e114594. http://dx.doi.org/10.1371/journal.pone.0114594

The molecular function of the cellular prion protein (PrPC) and the mechanism by which it may contribute to neurotoxicity in prion diseases and Alzheimer’s disease are only partially understood. Mouse neuroblastoma Neuro2a cells and, more recently, C2C12 myocytes and myotubes have emerged as popular models for investigating the cellular biology of PrP. Mouse epithelial NMuMG cells might become attractive models for studying the possible involvement of PrP in a morphogenetic program underlying epithelial-to-mesenchymal transitions. Here we describe the generation of PrP knockout clones from these cell lines using CRISPR-Cas9 knockout technology. More specifically, knockout clones were generated with two separate guide RNAs targeting recognition sites on opposite strands within the first hundred nucleotides of the Prnp coding sequence. Several PrP knockout clones were isolated and genomic insertions and deletions near the CRISPR-target sites were characterized. Subsequently, deep quantitative global proteome analyses that recorded the relative abundance of >3,000 proteins (data deposited to ProteomeXchange Consortium) were undertaken to begin to characterize the molecular consequences of PrP deficiency. The levels of ∼120 proteins were shown to reproducibly correlate with the presence or absence of PrP, with most of these proteins belonging to extracellular components, cell junctions or the cytoskeleton.

http://journals.plos.org/plosone/article/figure/image?size=inline&id=info:doi/10.1371/journal.pone.0114594.g001

http://journals.plos.org/plosone/article/figure/image?size=inline&id=info:doi/10.1371/journal.pone.0114594.g003

 

Development and Applications of CRISPR-Cas9 for Genome Engineering

Patrick D. Hsu, Eric S. Lander, and Feng Zhang
Cell. 2014 Jun 5; 157(6): 1262–1278.   doi:  10.1016/j.cell.2014.05.010

Recent advances in genome engineering technologies based on the CRISPR-associated RNA-guided endonuclease Cas9 are enabling the systematic interrogation of mammalian genome function. Analogous to the search function in modern word processors, Cas9 can be guided to specific locations within complex genomes by a short RNA search string. Using this system, DNA sequences within the endogenous genome and their functional outputs are now easily edited or modulated in virtually any organism of choice. Cas9-mediated genetic perturbation is simple and scalable, empowering researchers to elucidate the functional organization of the genome at the systems level and establish causal linkages between genetic variations and biological phenotypes. In this Review, we describe the development and applications of Cas9 for a variety of research or translational applications while highlighting challenges as well as future directions. Derived from a remarkable microbial defense system, Cas9 is driving innovative applications from basic biology to biotechnology and medicine.

The development of recombinant DNA technology in the 1970s marked the beginning of a new era for biology. For the first time, molecular biologists gained the ability to manipulate DNA molecules, making it possible to study genes and harness them to develop novel medicine and biotechnology. Recent advances in genome engineering technologies are sparking a new revolution in biological research. Rather than studying DNA taken out of the context of the genome, researchers can now directly edit or modulate the function of DNA sequences in their endogenous context in virtually any organism of choice, enabling them to elucidate the functional organization of the genome at the systems level, as well as identify causal genetic variations.

Broadly speaking, genome engineering refers to the process of making targeted modifications to the genome, its contexts (e.g., epigenetic marks), or its outputs (e.g., transcripts). The ability to do so easily and efficiently in eukaryotic and especially mammalian cells holds immense promise to transform basic science, biotechnology, and medicine (Figure 1).

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4343198/bin/nihms659174f1.jpg

For life sciences research, technologies that can delete, insert, and modify the DNA sequences of cells or organisms enable dissecting the function of specific genes and regulatory elements. Multiplexed editing could further allow the interrogation of gene or protein networks at a larger scale. Similarly, manipulating transcriptional regulation or chromatin states at particular loci can reveal how genetic material is organized and utilized within a cell, illuminating relationships between the architecture of the genome and its functions. In biotechnology, precise manipulation of genetic building blocks and regulatory machinery also facilitates the reverse engineering or reconstruction of useful biological systems, for example, by enhancing biofuel production pathways in industrially relevant organisms or by creating infection-resistant crops. Additionally, genome engineering is stimulating a new generation of drug development processes and medical therapeutics. Perturbation of multiple genes simultaneously could model the additive effects that underlie complex polygenic disorders, leading to new drug targets, while genome editing could directly correct harmful mutations in the context of human gene therapy (Tebas et al., 2014).

Eukaryotic genomes contain billions of DNA bases and are difficult to manipulate. One of the breakthroughs in genome manipulation has been the development of gene targeting by homologous recombination (HR), which integrates exogenous repair templates that contain sequence homology to the donor site (Figure 2A) (Capecchi, 1989). HR-mediated targeting has facilitated the generation of knockin and knockout animal models via manipulation of germline competent stem cells, dramatically advancing many areas of biological research. However, although HR-mediated gene targeting produces highly precise alterations, the desired recombination events occur extremely infrequently (1 in 10⁶–10⁹ cells) (Capecchi, 1989), presenting enormous challenges for large-scale applications of gene-targeting experiments.

Genome Editing Technologies Exploit Endogenous DNA Repair Machinery

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4343198/bin/nihms659174f2.gif

To overcome these challenges, a series of programmable nuclease-based genome editing technologies have been developed in recent years, enabling targeted and efficient modification of a variety of eukaryotic and particularly mammalian species. Of the current generation of genome editing technologies, the most rapidly developing is the class of RNA-guided endonucleases known as Cas9 from the microbial adaptive immune system CRISPR (clustered regularly interspaced short palindromic repeats), which can be easily targeted to virtually any genomic location of choice by a short RNA guide. Here, we review the development and applications of the CRISPR-associated endonuclease Cas9 as a platform technology for achieving targeted perturbation of endogenous genomic elements and also discuss challenges and future avenues for innovation.   ……

Figure 4   Natural Mechanisms of Microbial CRISPR Systems in Adaptive Immunity

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4343198/bin/nihms659174f4.gif

……  A key turning point came in 2005, when systematic analysis of the spacer sequences separating the individual direct repeats suggested their extrachromosomal and phage-associated origins (Mojica et al., 2005; Pourcel et al., 2005; Bolotin et al., 2005). This insight was tremendously exciting, especially given previous studies showing that CRISPR loci are transcribed (Tang et al., 2002) and that viruses are unable to infect archaeal cells carrying spacers corresponding to their own genomes (Mojica et al., 2005). Together, these findings led to the speculation that CRISPR arrays serve as an immune memory and defense mechanism, and individual spacers facilitate defense against bacteriophage infection by exploiting Watson-Crick base-pairing between nucleic acids (Mojica et al., 2005; Pourcel et al., 2005). Despite these compelling realizations that CRISPR loci might be involved in microbial immunity, the specific mechanism of how the spacers act to mediate viral defense remained a challenging puzzle. Several hypotheses were raised, including thoughts that CRISPR spacers act as small RNA guides to degrade viral transcripts in an RNAi-like mechanism (Makarova et al., 2006) or that CRISPR spacers direct Cas enzymes to cleave viral DNA at spacer-matching regions (Bolotin et al., 2005).   …..

As the pace of CRISPR research accelerated, researchers quickly unraveled many details of each type of CRISPR system (Figure 4). Building on an earlier speculation that protospacer adjacent motifs (PAMs) may direct the type II Cas9 nuclease to cleave DNA (Bolotin et al., 2005), Moineau and colleagues highlighted the importance of PAM sequences by demonstrating that PAM mutations in phage genomes circumvented CRISPR interference (Deveau et al., 2008). Additionally, for types I and II, the lack of PAM within the direct repeat sequence within the CRISPR array prevents self-targeting by the CRISPR system. In type III systems, however, mismatches between the 5′ end of the crRNA and the DNA target are required for plasmid interference (Marraffini and Sontheimer, 2010).  …..

In 2013, a pair of studies simultaneously showed how to successfully engineer type II CRISPR systems from Streptococcus thermophilus (Cong et al., 2013) and Streptococcus pyogenes (Cong et al., 2013; Mali et al., 2013a) to accomplish genome editing in mammalian cells. Heterologous expression of mature crRNA-tracrRNA hybrids (Cong et al., 2013) as well as sgRNAs (Cong et al., 2013; Mali et al., 2013a) directs Cas9 cleavage within the mammalian cellular genome to stimulate NHEJ or HDR-mediated genome editing. Multiple guide RNAs can also be used to target several genes at once. Since these initial studies, Cas9 has been used by thousands of laboratories for genome editing applications in a variety of experimental model systems (Sander and Joung, 2014). ……

The majority of CRISPR-based technology development has focused on the signature Cas9 nuclease from type II CRISPR systems. However, there remains a wide diversity of CRISPR types and functions. Cas RAMP module (Cmr) proteins identified in Pyrococcus furiosus and Sulfolobus solfataricus (Hale et al., 2012) constitute an RNA-targeting CRISPR immune system, forming a complex guided by small CRISPR RNAs that target and cleave complementary RNA instead of DNA. Cmr protein homologs can be found throughout bacteria and archaea, typically relying on a 5′ site tag sequence on the target-matching crRNA for Cmr-directed cleavage.

Unlike RNAi, which is targeted largely by a 6 nt seed region and to a lesser extent 13 other bases, Cmr crRNAs contain 30–40 nt of target complementarity. Cmr-CRISPR technologies for RNA targeting are thus a promising target for orthogonal engineering and minimal off-target modification. Although the modularity of Cmr systems for RNA-targeting in mammalian cells remains to be investigated, Cmr complexes native to P. furiosus have already been engineered to target novel RNA substrates (Hale et al., 2009, 2012).   ……

Although Cas9 has already been widely used as a research tool, a particularly exciting future direction is the development of Cas9 as a therapeutic technology for treating genetic disorders. For a monogenic recessive disorder due to loss-of-function mutations (such as cystic fibrosis, sickle-cell anemia, or Duchenne muscular dystrophy), Cas9 may be used to correct the causative mutation. This has many advantages over traditional methods of gene augmentation that deliver functional genetic copies via viral vector-mediated overexpression—particularly that the newly functional gene is expressed in its natural context. For dominant-negative disorders in which the affected gene is haplosufficient (such as transthyretin-related hereditary amyloidosis or dominant forms of retinitis pigmentosa), it may also be possible to use NHEJ to inactivate the mutated allele to achieve therapeutic benefit. For allele-specific targeting, one could design guide RNAs capable of distinguishing between single-nucleotide polymorphism (SNP) variations in the target gene, such as when the SNP falls within the PAM sequence.
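As a rough illustration of the allele-discrimination idea, the check below asks whether a SNP creates an SpCas9-style NGG PAM on the mutant allele but not on the wild-type one. The 23-bp sequences are hypothetical, not real disease alleles:

```python
# Sketch: allele-specific CRISPR targeting when a SNP falls in the NGG PAM.
# Sequences below are hypothetical illustrations, not real disease alleles.

def has_spcas9_pam(site: str) -> bool:
    """True if a 23-nt site (20-nt protospacer + 3-nt PAM) ends in 'NGG'."""
    return len(site) == 23 and site[-2:].upper() == "GG"

def allele_specific(mutant_site: str, wildtype_site: str) -> bool:
    """A guide can discriminate alleles if only the mutant site carries a PAM."""
    return has_spcas9_pam(mutant_site) and not has_spcas9_pam(wildtype_site)

# The SNP (A -> G) creates the PAM on the mutant allele only.
mutant   = "GCTGACCTTAGCAGTCCATA" + "TGG"   # PAM present -> cleavable
wildtype = "GCTGACCTTAGCAGTCCATA" + "TAG"   # no PAM      -> spared

print(allele_specific(mutant, wildtype))  # True
```

In this scenario NHEJ-mediated inactivation would hit only the mutant allele, leaving the haplosufficient wild-type copy intact.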

 

 

CRISPR/Cas9: a powerful genetic engineering tool for establishing large animal models of neurodegenerative diseases

Zhuchi Tu, Weili Yang, Sen Yan, Xiangyu Guo and Xiao-Jiang Li

Molecular Neurodegeneration 2015; 10:35  http://dx.doi.org/10.1186/s13024-015-0031-x

Animal models are extremely valuable to help us understand the pathogenesis of neurodegenerative disorders and to find treatments for them. Since large animals are more like humans than rodents, they make good models to identify the important pathological events that may be seen in humans but not in small animals; large animals are also very important for validating effective treatments or confirming therapeutic targets. Due to the lack of embryonic stem cell lines from large animals, it has been difficult to use traditional gene targeting technology to establish large animal models of neurodegenerative diseases. Recently, CRISPR/Cas9 was used successfully to genetically modify genomes in various species. Here we discuss the use of CRISPR/Cas9 technology to establish large animal models that can more faithfully mimic human neurodegenerative diseases.

Neurodegenerative diseases — Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), Huntington’s disease (HD), and frontotemporal dementia (FTD) — are characterized by age-dependent and selective neurodegeneration. As human life expectancy lengthens, these neurodegenerative diseases become more prevalent; however, the pathogenesis of most of them remains unclear, and we lack effective treatments for these important brain disorders.

Keywords: CRISPR/Cas9, non-human primates, neurodegenerative diseases, animal model

There are a number of excellent reviews covering different types of neurodegenerative diseases and their genetic mouse models [8–12]. Investigations of different mouse models of neurodegenerative diseases have revealed a common pathology shared by these diseases. First, the development of neuropathology and neurological symptoms in genetic mouse models of neurodegenerative diseases is age dependent and progressive. Second, all the mouse models show an accumulation of misfolded or aggregated proteins resulting from the expression of mutant genes. Third, despite the widespread expression of mutant proteins throughout the body and brain, neuronal function appears to be selectively or preferentially affected. All these facts indicate that mouse models of neurodegenerative diseases recapitulate important pathologic features also seen in patients with neurodegenerative diseases.

However, it seems that mouse models cannot recapitulate the full range of neuropathology seen in patients with neurodegenerative diseases. Overt neurodegeneration, the most important pathological feature in patient brains, is absent in genetic rodent models of AD, PD, and HD. Many rodent models that express transgenic mutant proteins under the control of different promoters do not replicate overt neurodegeneration, likely because of the short life spans and different aging processes of small animals. Also important are the remarkable differences in brain development between rodents and primates. For example, the mouse brain takes 21 days to fully develop, whereas the formation of primate brains requires more than 150 days [13]. The rapid development of the brain in rodents may render neuronal cells resistant to misfolded protein-mediated neurodegeneration. Another difficulty in using rodent models is analyzing cognitive and emotional abnormalities, which are the early symptoms of most neurodegenerative diseases in humans. Differences in neuronal circuitry, anatomy, and physiology between rodent and primate brains may also account for the behavioral differences between rodent and primate models.

 

Mitochondrial dynamics–fusion, fission, movement, and mitophagy–in neurodegenerative diseases

Hsiuchen Chen and David C. Chan
Human Molec Gen 2009; 18, Review Issue 2 R169–R176
http://dx.doi.org/10.1093/hmg/ddp326

Neurons are metabolically active cells with high energy demands at locations distant from the cell body. As a result, these cells are particularly dependent on mitochondrial function, as reflected by the observation that diseases of mitochondrial dysfunction often have a neurodegenerative component. Recent discoveries have highlighted that neurons are reliant particularly on the dynamic properties of mitochondria. Mitochondria are dynamic organelles by several criteria. They engage in repeated cycles of fusion and fission, which serve to intermix the lipids and contents of a population of mitochondria. In addition, mitochondria are actively recruited to subcellular sites, such as the axonal and dendritic processes of neurons. Finally, the quality of a mitochondrial population is maintained through mitophagy, a form of autophagy in which defective mitochondria are selectively degraded. We review the general features of mitochondrial dynamics, incorporating recent findings on mitochondrial fusion, fission, transport and mitophagy. Defects in these key features are associated with neurodegenerative disease. Charcot-Marie-Tooth type 2A, a peripheral neuropathy, and dominant optic atrophy, an inherited optic neuropathy, result from a primary deficiency of mitochondrial fusion. Moreover, several major neurodegenerative diseases—including Parkinson’s, Alzheimer’s and Huntington’s disease—involve disruption of mitochondrial dynamics. Remarkably, in several disease models, the manipulation of mitochondrial fusion or fission can partially rescue disease phenotypes. We review how mitochondrial dynamics is altered in these neurodegenerative diseases and discuss the reciprocal interactions between mitochondrial fusion, fission, transport and mitophagy.

 

Applications of CRISPR–Cas systems in Neuroscience

Matthias Heidenreich  & Feng Zhang
Nature Rev Neurosci 2016; 17:36–44   http://dx.doi.org/10.1038/nrn.2015.2

Genome-editing tools, and in particular those based on CRISPR–Cas (clustered regularly interspaced short palindromic repeat (CRISPR)–CRISPR-associated protein) systems, are accelerating the pace of biological research and enabling targeted genetic interrogation in almost any organism and cell type. These tools have opened the door to the development of new model systems for studying the complexity of the nervous system, including animal models and stem cell-derived in vitro models. Precise and efficient gene editing using CRISPR–Cas systems has the potential to advance both basic and translational neuroscience research.
Subject terms: Cellular neuroscience, DNA recombination, Genetic engineering, Molecular neuroscience

Figure 3: In vitro applications of Cas9 in human iPSCs.

http://www.nature.com/nrn/journal/v17/n1/carousel/nrn.2015.2-f3.jpg

a | Evaluation of disease candidate genes from large-population genome-wide association studies (GWASs). Human primary cells, such as neurons, are not easily available and are difficult to expand in culture. By contrast, induced pluripo…

  1. Genome-editing Technologies for Gene and Cell Therapy

Molecular Therapy 12 Jan 2016

  2. Systematic quantification of HDR and NHEJ reveals effects of locus, nuclease, and cell type on genome-editing

Scientific Reports 31 Mar 2016

  3. Controlled delivery of β-globin-targeting TALENs and CRISPR/Cas9 into mammalian cells for genome editing using microinjection

Scientific Reports 12 Nov 2015

 

Alzheimer’s Disease: Medicine’s Greatest Challenge in the 21st Century

https://www.physicsforums.com/insights/can-gene-editing-eliminate-alzheimers-disease/

The development of the CRISPR/Cas9 system has made gene editing a relatively simple task. While CRISPR and other gene editing technologies stand to revolutionize biomedical research and offer many promising therapeutic avenues (such as in the treatment of HIV), a great deal of debate exists over whether CRISPR should be used to modify human embryos. As I discussed in my previous Insight article, we lack enough fundamental biological knowledge to enhance many traits like height or intelligence, so we are not near a future with genetically-enhanced super babies. However, scientists have identified a few rare genetic variants that protect against disease. One such protective variant is a mutation in the APP gene that protects against Alzheimer’s disease and cognitive decline in old age. If we can perfect gene editing technologies, is this mutation one that we should be regularly introducing into embryos? In this article, I explore the potential for using gene editing as a way to prevent Alzheimer’s disease in future generations.

I chose to assess the benefit of germline gene editing in the context of Alzheimer’s disease because this disease is one of the biggest challenges medicine faces in the 21st century. Alzheimer’s disease is a chronic neurodegenerative disease responsible for the majority of the cases of dementia in the elderly. The disease begins with short-term memory loss and causes more severe symptoms – problems with language, disorientation, mood swings, behavioral issues – as it progresses, eventually leading to the loss of bodily functions and death. Because of the dementia the disease causes, Alzheimer’s patients require a great deal of care, and the world spends ~1% of its total GDP on caring for those with Alzheimer’s and related disorders. Because the prevalence of the disease increases with age, the situation will worsen as life expectancies around the globe increase: worldwide cases of Alzheimer’s are expected to grow from 35 million today to over 115 million by 2050.

Despite much research, the exact causes of Alzheimer’s disease remain poorly understood. The disease seems to be related to the accumulation of plaques made of amyloid-β peptides that form on the outside of neurons, as well as the formation of tangles of the protein tau inside neurons. Although many efforts have been made to target amyloid-β or the enzymes involved in its formation, we have so far been unsuccessful at finding any treatment that stops the disease or reverses its progress. Some researchers believe that most attempts at treating Alzheimer’s have failed because, by the time a patient shows symptoms, the disease has already progressed past the point of no return.

While research towards a cure continues, researchers have sought effective ways to prevent Alzheimer’s disease. Although some studies show that mental and physical exercise may lower one’s risk of Alzheimer’s disease, approximately 60–80% of the risk for Alzheimer’s disease appears to be genetic. Thus, if we’re serious about prevention, we may have to act at the genetic level. And because the brain is difficult to access surgically for gene therapy in adults, this means using gene editing on embryos.


 

Utilising CRISPR to Generate Predictive Disease Models: a Case Study in Neurodegenerative Disorders


Dr. Bhuvaneish.T. Selvaraj  – Scottish Centre for Regenerative Medicine

http://www.crisprsummit.com/utilising-crispr-to-generate-predictive-disease-models-a-case-study-in-neurodegenerative-disorders

  • Introducing the latest developments in predictive model generation
  • Discover how CRISPR is being used to develop disease models to study and treat neurodegenerative disorders
  • In-depth Q&A session to answer your most pressing questions

 

Turning On Genes, Systematically, with CRISPR/Cas9

http://www.genengnews.com/gen-news-highlights/turning-on-genes-systematically-with-crispr-cas9/81250697/

 

Scientists based at MIT assert that they can reliably turn on any gene of their choosing in living cells. [Feng Zhang and Steve Dixon]  http://www.genengnews.com/media/images/GENHighlight/Dec12_2014_CRISPRCas9GeneActivationSystem7838101231.jpg

With the latest CRISPR/Cas9 advance, the exhortation “turn on, tune in, drop out” comes to mind. The CRISPR/Cas9 gene-editing system was already a well-known means of “tuning in” (inserting new genes) and “dropping out” (knocking out genes). But when it came to “turning on” genes, CRISPR/Cas9 had little potency. That is, it had demonstrated only limited success as a way to activate specific genes.

A new CRISPR/Cas9 approach, however, appears capable of activating genes more effectively than older approaches. The new approach may allow scientists to more easily determine the function of individual genes, according to Feng Zhang, Ph.D., a researcher at MIT and the Broad Institute. Dr. Zhang and colleagues report that the new approach permits multiplexed gene activation and rapid, large-scale studies of gene function.

The new technique was introduced in the December 10 online edition of Nature, in an article entitled, “Genome-scale transcriptional activation by an engineered CRISPR-Cas9 complex.” The article describes how Dr. Zhang, along with the University of Tokyo’s Osamu Nureki, Ph.D., and Hiroshi Nishimasu, Ph.D., overhauled the CRISPR/Cas9 system. The research team based their work on their analysis (published earlier this year) of the structure formed when Cas9 binds to the guide RNA and its target DNA. Specifically, the team used the structure’s 3D shape to rationally improve the system.

In previous efforts to revamp CRISPR/Cas9 for gene activation purposes, scientists had tried to attach the activation domains to either end of the Cas9 protein, with limited success. From their structural studies, the MIT team realized that two small loops of the RNA guide poke out from the Cas9 complex and could be better points of attachment because they allow the activation domains to have more flexibility in recruiting transcription machinery.

Using their revamped system, the researchers activated about a dozen genes that had proven difficult or impossible to turn on using the previous generation of Cas9 activators. Each gene showed at least a twofold boost in transcription, and for many genes, the researchers found multiple orders of magnitude increase in activation.

After investigating single-guide RNA targeting rules for effective transcriptional activation, demonstrating multiplexed activation of 10 genes simultaneously, and upregulating long intergenic noncoding RNA transcripts, the research team decided to undertake a large-scale screen. This screen was designed to identify genes that confer resistance to a melanoma drug called PLX-4720.

“We … synthesized a library consisting of 70,290 guides targeting all human RefSeq coding isoforms to screen for genes that, upon activation, confer resistance to a BRAF inhibitor,” wrote the authors of the Nature paper. “The top hits included genes previously shown to be able to confer resistance, and novel candidates were validated using individual [single-guide RNA] and complementary DNA overexpression.”

A gene signature based on the top screening hits, the authors added, correlated with a gene expression signature of BRAF inhibitor resistance in cell lines and patient-derived samples. It was also suggested that large-scale screens such as the one demonstrated in the current study could help researchers discover new cancer drugs that prevent tumors from becoming resistant.
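A pooled activation screen of this kind is typically read out by sequencing guide abundances before and after drug selection and ranking guides by enrichment. A minimal sketch with made-up counts follows; the guide names and numbers are hypothetical, and this is a generic readout, not the paper's actual analysis pipeline:

```python
import math

# Hypothetical guide read counts {guide: (reads_before_selection, reads_after)}.
# Names and values are invented for illustration only.
counts = {
    "GENE_A_sg1": (1000, 8000),
    "GENE_A_sg2": (1200, 9500),
    "CTRL_sg1":   (1000, 1100),
    "CTRL_sg2":   (900,   950),
}

total_before = sum(b for b, _ in counts.values())
total_after = sum(a for _, a in counts.values())

def log2_enrichment(before: int, after: int, pseudo: float = 0.5) -> float:
    """Library-size-normalized log2 fold change with a pseudocount."""
    return math.log2(((after + pseudo) / total_after) /
                     ((before + pseudo) / total_before))

# Rank guides by enrichment after drug selection; resistance hits rise to the top.
for guide in sorted(counts, key=lambda g: log2_enrichment(*counts[g]), reverse=True):
    print(f"{guide}: {log2_enrichment(*counts[guide]):+.2f}")
```

Guides targeting a true resistance gene end up strongly over-represented in the surviving population, which is the signal aggregated per gene to call the screen's top hits.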

More at –  http://www.genengnews.com/gen-news-highlights/turning-on-genes-systematically-with-crispr-cas9/81250697/

 

Susceptibility and modifier genes in Portuguese transthyretin V30M amyloid polyneuropathy: complexity in a single-gene disease
Miguel L. Soares, Teresa Coelho, Alda Sousa, …, Maria João Saraiva and Joel N. Buxbaum

Human Molec Gen 2005; 14(4): 543–553   http://dx.doi.org/10.1093/hmg/ddi051
https://www.researchgate.net/profile/Isabel_Conceicao/publication/8081351_Susceptibility_and_modifier_genes_in_Portuguese_transthyretin_V30M_amyloid_polyneuropathy_complexity_in_a_single-gene_disease/links/53e123d70cf2235f352733b3.pdf

Familial amyloid polyneuropathy type I is an autosomal dominant disorder caused by mutations in the transthyretin (TTR ) gene; however, carriers of the same mutation exhibit variability in penetrance and clinical expression. We analyzed alleles of candidate genes encoding non-fibrillar components of TTR amyloid deposits and a molecule metabolically interacting with TTR [retinol-binding protein (RBP)], for possible associations with age of disease onset and/or susceptibility in a Portuguese population sample with the TTR V30M mutation and unrelated controls. We show that the V30M carriers represent a distinct subset of the Portuguese population. Estimates of genetic distance indicated that the controls and the classical onset group were furthest apart, whereas the late-onset group appeared to differ from both. Importantly, the data also indicate that genetic interactions among the multiple loci evaluated, rather than single-locus effects, are more likely to determine differences in the age of disease onset. Multifactor dimensionality reduction indicated that the best genetic model for classical onset group versus controls involved the APCS gene, whereas for late-onset cases, one APCS variant (APCSv1) and two RBP variants (RBPv1 and RBPv2) are involved. Thus, although the TTR V30M mutation is required for the disease in Portuguese patients, different genetic factors may govern the age of onset, as well as the occurrence of anticipation.

Autosomal dominant disorders may vary in expression even within a given kindred. The basis of this variability is uncertain and can be attributed to epigenetic factors, environment or epistasis. We have studied familial amyloid polyneuropathy (FAP), an autosomal dominant disorder characterized by peripheral sensorimotor and autonomic neuropathy. It exhibits variation in cardiac, renal, gastrointestinal and ocular involvement, as well as age of onset. Over 80 missense mutations in the transthyretin gene (TTR) result in autosomal dominant disease (http://www.ibmc.up.pt/~mjsaraiv/ttrmut.html). The presence of deposits consisting entirely of wild-type TTR molecules in the hearts of 10–25% of individuals over age 80 reveals its inherent in vivo amyloidogenic potential (1).

FAP was initially described in Portuguese (2) where, until recently, the TTR V30M has been the only pathogenic mutation associated with the disease (3,4). Later reports identified the same mutation in Swedish and Japanese families (5,6). The disorder has since been recognized in other European countries and in North American kindreds in association with V30M, as well as other mutations (7).

TTR V30M produces disease in only 5–10% of Swedish carriers of the allele (8), a much lower degree of penetrance than that seen in Portuguese (80%) (9) or in Japanese with the same mutation. The actual penetrance in Japanese carriers has not been formally established, but appears to resemble that seen in Portuguese. Portuguese and Japanese carriers show considerable variation in the age of clinical onset (10,11). In both populations, the first symptoms had originally been described as typically occurring before age 40 (so-called ‘classical’ or early-onset); however, in recent years, more individuals developing symptoms late in life have been identified (11,12). Hence, present data indicate that the distribution of the age of onset in Portuguese is continuous, but asymmetric with a mean around age 35 and a long tail into the older age group (Fig. 1) (9,13). Further, DNA testing in Portugal has identified asymptomatic carriers over age 70 belonging to a subset of very late-onset kindreds in whose descendants genetic anticipation is frequent. The molecular basis of anticipation in FAP, which is not mediated by trinucleotide repeat expansions in the TTR or any other gene (14), remains elusive.

Variation in penetrance, age of onset and clinical features are hallmarks of many autosomal dominant disorders including the human TTR amyloidoses (7). Some of these clearly reflect specific biological effects of a particular mutation or a class of mutants. However, when such phenotypic variability is seen with a single mutation in the gene encoding the same protein, it suggests an effect of modifying genetic loci and/or environmental factors contributing differentially to the course of disease. We have chosen to examine age of onset as an example of a discrete phenotypic variation in the presence of the particular autosomal dominant disease-associated mutation TTR V30M. Although the role of environmental factors cannot be excluded, the existence of modifier genes involved in TTR amyloidogenesis is an attractive hypothesis to explain the phenotypic variability in FAP. ….

ATTR (TTR amyloid), like all amyloid deposits, contains several molecular components, in addition to the quantitatively dominant fibril-forming amyloid protein, including heparan sulfate proteoglycan 2 (HSPG2 or perlecan), SAP, a plasma glycoprotein of the pentraxin family (encoded by the APCS gene) that undergoes specific calcium-dependent binding to all types of amyloid fibrils, and apolipoprotein E (ApoE), also found in all amyloid deposits (15). The ApoE4 isoform is associated with an increased frequency and earlier onset of Alzheimer’s disease (Aβ), the most common form of brain amyloid, whereas the ApoE2 isoform appears to be protective (16). ApoE variants could exert a similar modulatory effect in the onset of FAP, although early studies on a limited number of patients suggested this was not the case (17).

In at least one instance of senile systemic amyloidosis, small amounts of AA-related material were found in TTR deposits (18). These could reflect either a passive co-aggregation or a contributory involvement of protein AA, encoded by the serum amyloid A (SAA ) genes and the main component of secondary (reactive) amyloid fibrils, in the formation of ATTR.

Retinol-binding protein (RBP), the serum carrier of vitamin A, circulates in plasma bound to TTR. Vitamin A-loaded RBP and L-thyroxine, the two natural ligands of TTR, can act alone or synergistically to inhibit the rate and extent of TTR fibrillogenesis in vitro, suggesting that RBP may influence the course of FAP pathology in vivo (19). We have analyzed coding and non-coding sequence polymorphisms in the RBP4 (serum RBP, 10q24), HSPG2 (1p36.1), APCS (1q22), APOE (19q13.2), SAA1 and SAA2 (11p15.1) genes with the goal of identifying chromosomes carrying common and functionally significant variants. At the time these studies were performed, the full human genome sequence was not completed and systematic single-nucleotide polymorphism (SNP) analyses were not available for any of the suspected candidate genes. We identified new SNPs in APCS and RBP4 and utilized polymorphisms in SAA, HSPG2 and APOE that had already been characterized and shown to have potential pathophysiologic significance in other disorders (16,20–22). The genotyping data were analyzed for association with the presence of the V30M amyloidogenic allele (FAP patients versus controls) and with the age of onset (classical- versus late-onset patients). Multilocus analyses were also performed to examine the effects of simultaneous contributions of the six loci for determining the onset of the first symptoms.  …..
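The single-locus association step described above amounts to comparing allele counts between cases and controls; a minimal 2×2 chi-square sketch with hypothetical counts (not the study's data) looks like this:

```python
# Hypothetical 2x2 allele-count table for one candidate SNP:
#             variant allele   reference allele
# cases            60               140
# controls         40               160
# Counts are invented for illustration, not taken from the paper.

def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 contingency table
    laid out as [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_square_2x2(60, 140, 40, 160)
print(f"chi-square = {stat:.2f}")  # prints chi-square = 5.33
# Compare with 3.84, the 1-df critical value at p = 0.05.
```

Single-locus tests like this are exactly what the paper argues are insufficient here; the multilocus (MDR) analyses extend the same case-control comparison to combinations of genotypes across the six loci.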

The potential for different underlying models for classical and late onset is supported by the MDR analysis, which produces two distinct models when comparing each class with the controls. One could view the two onset classes as unique diseases. If this is the case, then the failure to detect a single predictive genetic model is consistent with two related, but different, diseases. This is exactly what would be expected in such a case of genetic heterogeneity (28). Using this approach, a major gene effect can be viewed as a necessary, but not sufficient, condition to explain the course of the disease. Analyzing the cases but omitting from the analysis of phenotype the necessary allele, in this case TTR V30M, can then reveal a variety of important modifiers that are distinct between the phenotypes.

The significant comparisons obtained in our study cohort indicate that the combined effects mainly result from two and three-locus interactions involving all loci except SAA1 and SAA2 for susceptibility to disease. A considerable number of four-site combinations modulate the age of onset with SAA1 appearing in a majority of significant combinations in late-onset disease, perhaps indicating a greater role of the SAA variants in the age of onset of FAP.

The correlation between genotype and phenotype in so-called simple Mendelian disorders is often incomplete, as only a subset of all mutations can reliably predict specific phenotypes (34). This is because non-allelic genetic variations and/or environmental influences underlie these disorders whose phenotypes behave as complex traits. A few examples include the identification of the role of homozygosity for the SAA1.1 allele in conferring the genetic susceptibility to renal amyloidosis in FMF (20) and the association of an insertion/deletion polymorphism in the ACE gene with disease severity in familial hypertrophic cardiomyopathy (35). In these disorders, the phenotypes arise from mutations in MEFV and β-MHC, but are modulated by independently inherited genetic variation. In this report, we show that interactions among multiple genes, whose products are confirmed or putative constituents of ATTR deposits, or metabolically interact with TTR, modulate the onset of the first symptoms and predispose individuals to disease in the presence of the V30M mutation in TTR. The exact nature of the effects identified here requires further study with potential application in the development of genetic screening with prognostic value pertaining to the onset of disease in the TTR V30M carriers.

If the effects of additional single or interacting genes dictate the heterogeneity of phenotype, as reflected in variability of onset and clinical expression (with the same TTR mutation), the products encoded by alleles at such loci could contribute to the process of wild-type TTR deposition in elderly individuals without a mutation (senile systemic amyloidosis), a phenomenon not readily recognized as having a genetic basis because of the insensitivity of family history in the elderly.

 

Safety and Efficacy of RNAi Therapy for Transthyretin Amyloidosis

Coelho T, Adams D, Silva A, et al.
N Engl J Med 2013; 369:819-29.    http://dx.doi.org/10.1056/NEJMoa1208760

Transthyretin amyloidosis is caused by the deposition of hepatocyte-derived transthyretin amyloid in peripheral nerves and the heart. A therapeutic approach mediated by RNA interference (RNAi) could reduce the production of transthyretin.

Methods We identified a potent antitransthyretin small interfering RNA, which was encapsulated in two distinct first- and second-generation formulations of lipid nanoparticles, generating ALN-TTR01 and ALN-TTR02, respectively. Each formulation was studied in a single-dose, placebo-controlled phase 1 trial to assess safety and effect on transthyretin levels. We first evaluated ALN-TTR01 (at doses of 0.01 to 1.0 mg per kilogram of body weight) in 32 patients with transthyretin amyloidosis and then evaluated ALN-TTR02 (at doses of 0.01 to 0.5 mg per kilogram) in 17 healthy volunteers.

Results Rapid, dose-dependent, and durable lowering of transthyretin levels was observed in the two trials. At a dose of 1.0 mg per kilogram, ALN-TTR01 suppressed transthyretin, with a mean reduction at day 7 of 38%, as compared with placebo (P=0.01); levels of mutant and nonmutant forms of transthyretin were lowered to a similar extent. For ALN-TTR02, the mean reductions in transthyretin levels at doses of 0.15 to 0.3 mg per kilogram ranged from 82.3 to 86.8%, with reductions of 56.6 to 67.1% at 28 days (P<0.001 for all comparisons). These reductions were shown to be RNAi mediated. Mild-to-moderate infusion-related reactions occurred in 20.8% and 7.7% of participants receiving ALN-TTR01 and ALN-TTR02, respectively.

ALN-TTR01 and ALN-TTR02 suppressed the production of both mutant and nonmutant forms of transthyretin, establishing proof of concept for RNAi therapy targeting messenger RNA transcribed from a disease-causing gene.

 

Alnylam May Seek Approval for TTR Amyloidosis Rx in 2017 as Other Programs Advance


https://www.genomeweb.com/rnai/alnylam-may-seek-approval-ttr-amyloidosis-rx-2017-other-programs-advance

Officials from Alnylam Pharmaceuticals last week provided updates on the two drug candidates from the company’s flagship transthyretin-mediated amyloidosis program, stating that the intravenously delivered agent patisiran is proceeding toward a possible market approval in three years, while a subcutaneously administered version called ALN-TTRsc is poised to enter Phase III testing before the end of the year.

Meanwhile, Alnylam is set to advance a handful of preclinical therapies into human studies in short order, including ones for complement-mediated diseases, hypercholesterolemia, and porphyria.

The officials made their comments during a conference call held to discuss Alnylam’s second-quarter financial results.

ATTR is caused by a mutation in the TTR gene, which normally produces a protein that acts as a carrier for retinol-binding protein; the disease is characterized by the accumulation of amyloid deposits in various tissues. Alnylam’s drugs are designed to silence both the mutant and wild-type forms of TTR.

Patisiran, which is delivered using lipid nanoparticles developed by Tekmira Pharmaceuticals, is currently in a Phase III study in patients with a form of ATTR called familial amyloid polyneuropathy (FAP) affecting the peripheral nervous system. Running at over 20 sites in nine countries, that study is set to enroll up to 200 patients and compare treatment to placebo based on improvements in neuropathy symptoms.

According to Alnylam Chief Medical Officer Akshay Vaishnaw, Alnylam expects to have final data from the study in two to three years, which would put patisiran on track for a new drug application filing in 2017.

Meanwhile, ALN-TTRsc, which is under development for a version of ATTR that affects cardiac tissue called familial amyloidotic cardiomyopathy (FAC) and uses Alnylam’s proprietary GalNAc conjugate delivery technology, is set to enter Phase III by year-end as Alnylam holds “active discussions” with US and European regulators on the design of that study, CEO John Maraganore noted during the call.

In the interim, Alnylam continues to enroll patients in a pilot Phase II study of ALN-TTRsc, which is designed to test the drug’s efficacy for FAC or senile systemic amyloidosis (SSA), a condition caused by the idiopathic accumulation of wild-type TTR protein in the heart.

Based on “encouraging” data thus far, Vaishnaw said that Alnylam has upped the expected enrollment in this study to 25 patients from 15. Available data from the trial is slated for release in November, he noted, stressing that “any clinical endpoint result needs to be considered exploratory given the small sample size and the very limited duration of treatment of only six weeks” in the trial.

Vaishnaw added that an open-label extension (OLE) study for patients in the ALN-TTRsc study will kick off in the coming weeks, allowing the company to gather long-term dosing tolerability and clinical activity data on the drug.

Enrollment in an OLE study of patisiran has been completed with 27 patients, he said, and, “as of today, with up to nine months of therapy … there have been no study drug discontinuations.” Clinical endpoint data from approximately 20 patients in this study will be presented at the American Neurological Association meeting in October.

As part of its ATTR efforts, Alnylam has also been conducting natural history of disease studies in both FAP and FAC patients. Data from the 283-patient FAP study was presented earlier this year and showed a rapid progression in neuropathy impairment scores and a high correlation of this measurement with disease severity.

During last week’s conference call, Vaishnaw said that clinical endpoint and biomarker data on about 400 patients with either FAC or SSA have already been collected in a natural history study of cardiac ATTR. Maraganore said that these findings would likely be released sometime next year.

Alnylam Presents New Phase II, Preclinical Data from TTR Amyloidosis Programs
https://www.genomeweb.com/rnai/alnylam-presents-new-phase-ii-preclinical-data-ttr-amyloidosis-programs

 

Amyloid disease drug approved

Nature Biotechnology 2012. http://dx.doi.org/10.1038/nbt0212-121b

The first medication for a rare and often fatal protein misfolding disorder has been approved in Europe. On November 16, the European Commission gave a green light to Pfizer’s Vyndaqel (tafamidis) for treating transthyretin amyloidosis in adult patients with stage 1 polyneuropathy symptoms. [Jeffery Kelly, La Jolla]

 

Safety and Efficacy of RNAi Therapy for Transthyretin …

http://www.nejm.org/…/NEJMoa1208760

The New England Journal of Medicine

Aug 29, 2013 – Transthyretin amyloidosis is caused by the deposition of hepatocyte-derived transthyretin amyloid in peripheral nerves and the heart.

 

Alnylam’s RNAi therapy targets amyloid disease

Ken Garber
Nature Biotechnology 2015; 33:577. http://dx.doi.org/10.1038/nbt0615-577a

RNA interference’s silencing of target genes could result in potent therapeutics.

http://www.nature.com/nbt/journal/v33/n6/images/nbt0615-577a-I1.jpg

The most clinically advanced RNA interference (RNAi) therapeutic achieved a milestone in April when Alnylam Pharmaceuticals in Cambridge, Massachusetts, reported positive results for patisiran, a small interfering RNA (siRNA) oligonucleotide targeting transthyretin for treating familial amyloidotic polyneuropathy (FAP).  …


 

Translational Neuroscience: Toward New Therapies

https://books.google.com/books?isbn=0262029863

Karoly Nikolich, Steven E. Hyman (eds.), 2015, Medical

Tafamidis for Transthyretin Familial Amyloid Polyneuropathy: A Randomized, Controlled Trial. … Multiplex Genome Engineering Using CRISPR/Cas Systems.

 

Is CRISPR a Solution to Familial Amyloid Polyneuropathy?

Author and Curator: Larry H. Bernstein, MD, FCAP

Originally published as

https://pharmaceuticalintelligence.com/2016/04/13/is-crispr-a-solution-to-familial-amyloid-polyneuropathy/

 

http://scholar.aci.info/view/1492518a054469f0388/15411079e5a00014c3d

FAP is characterized by the systemic deposition of amyloidogenic variants of the transthyretin protein, especially in the peripheral nervous system, causing a progressive sensory and motor polyneuropathy.

FAP is caused by a mutation of the TTR gene, located on human chromosome 18q12.1-11.2. A replacement of valine by methionine at position 30 (TTR V30M) is the mutation most commonly found in FAP. The variant TTR is mostly produced by the liver. The transthyretin protein is a tetramer. …
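The V30M notation above encodes a single-residue change: the reference amino acid (V, valine), its 1-based position (30), and the replacement (M, methionine). A small sketch of applying such a variant string to a protein sequence, using a made-up placeholder sequence rather than the real transthyretin sequence:

```python
# Minimal sketch: applying a single-residue variant such as TTR V30M.
# The sequence below is a hypothetical placeholder, NOT the real TTR protein;
# positions are 1-based, matching standard variant notation.

def apply_variant(seq: str, variant: str) -> str:
    """Apply a substitution like 'V30M' (ref residue, 1-based position, alt residue)."""
    ref, pos, alt = variant[0], int(variant[1:-1]), variant[-1]
    if seq[pos - 1] != ref:
        raise ValueError(f"expected {ref} at position {pos}, found {seq[pos - 1]}")
    return seq[:pos - 1] + alt + seq[pos:]

# Placeholder sequence with 'V' at position 30 (for illustration only)
wild_type = "A" * 29 + "V" + "A" * 10
mutant = apply_variant(wild_type, "V30M")
assert mutant[29] == "M"
```

The reference-residue check mirrors how variant databases validate notation: if the stated wild-type residue does not match the sequence at that position, the variant string and the sequence disagree and the substitution is rejected.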

 

 
