Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Elizabeth Unger from the Tian group at UC Davis, Jacob Keller from the Looger lab at HHMI, Michael Altermatt from the Gradinaru group at the California Institute of Technology, and colleagues did just this, redesigning the binding pocket of a periplasmic binding protein (PBP) using artificial intelligence so that it became a fluorescent sensor specific for serotonin. Not only this, the group showed that the sensor could be expressed and used to detect serotonin at the cell, tissue, and whole-animal level.
By starting with a microbial PBP and an early version of an acetylcholine sensor (iAChSnFR), the scientists used machine learning and modeling to redesign the binding site to exhibit higher affinity and specificity for serotonin. After three rounds of mutagenesis, modeling, and library screening, they produced iSeroSnFR. This version harbors 19 mutations compared to iAChSnFR0.6 and has a Kd of 310 µM, producing an 800% increase in fluorescence upon serotonin binding in HEK293T cells expressing the sensor. Of over 40 neurotransmitters, amino acids, and small molecules screened, only two endogenous molecules evoked some fluorescence, and only at significantly higher concentrations.
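As a rough way to see what a Kd of 310 µM implies for sensor response, the sketch below applies a simple 1:1 binding isotherm and assumes (an assumption for illustration, not a claim from the study) that the reported ~800% ΔF/F scales linearly with binding-site occupancy.

```python
# Illustrative sketch only: single-site binding with the reported Kd of 310 uM.
def fraction_bound(ligand_uM: float, kd_uM: float = 310.0) -> float:
    """Single-site binding isotherm: [L] / (Kd + [L])."""
    return ligand_uM / (kd_uM + ligand_uM)

def predicted_dff(ligand_uM: float, max_dff: float = 8.0) -> float:
    """Predicted dF/F, assuming (hypothetically) linear scaling with occupancy."""
    return max_dff * fraction_bound(ligand_uM)

for conc in (31.0, 310.0, 3100.0):
    print(f"{conc:7.0f} uM serotonin -> occupancy {fraction_bound(conc):.2f}, "
          f"dF/F ~ {predicted_dff(conc):.2f}")
```

At the Kd itself, occupancy is by definition 50%, so half-maximal fluorescence change is expected around 310 µM serotonin.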
To test the ability of the sensor to detect rapid changes of serotonin in its environment, the researchers used caged serotonin, a technique in which serotonin is rapidly released with pulses of light, and showed that iSeroSnFR accurately and robustly produced a signal with each flash. With this tool, it was then possible to move to ex vivo mouse brain slices and detect endogenous serotonin release patterns across the brain. Three weeks after targeted injection to deliver the sensor into the prefrontal cortex and dorsal striatum, strong fluorescent signals could be detected during perfusion of serotonin or electrical stimulation.
Most significantly, the sensor was also shown to work in freely moving mice, a capability that could offer critical insight into the acute role of serotonin regulation in functions such as mood and alertness. Through optical fibers placed in the basolateral amygdala (BLA) and prefrontal cortex (PFC), the team measured dynamic, real-time changes in serotonin release during fear conditioning, social interaction, and sleep–wake cycles. For example, while both areas of the brain have been established as relevant to the fear response, the team reliably tracked that the PFC response was immediate, while the BLA displayed a delayed response. This additional temporal resolution of neuromodulation may have important implications for neurotransmitter pharmacology of the central nervous system.
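Fiber-photometry recordings like these are typically reported as ΔF/F against a slowly varying baseline. The sketch below illustrates that normalization on a synthetic trace; the running-median baseline and all numbers are illustrative, not data from the study.

```python
import numpy as np

def dff(trace: np.ndarray, window: int = 101) -> np.ndarray:
    """Compute dF/F against a running-median baseline (a common convention)."""
    pad = window // 2
    padded = np.pad(trace, pad, mode="edge")
    baseline = np.array([np.median(padded[i:i + window])
                         for i in range(trace.size)])
    return (trace - baseline) / baseline

rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 0.5, 1000)   # synthetic baseline fluorescence + noise
trace[400:420] += 30                     # a simulated release transient
signal = dff(trace)
print(f"peak dF/F: {signal.max():.2f}")  # ~0.3 for the simulated 30% transient
```

A brief transient barely moves the long-window median, so the baseline stays flat and the event stands out cleanly in the normalized trace.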
This study provided the scientific community with several insights and tools. The serotonin sensor itself will be a critical tool in the study of the central nervous system and possibly beyond. Additionally, using AI-guided mutagenesis to redesign the binding pocket of a receptor opens new avenues for the development of pharmacological tools and may lead to many new designs in therapeutics and research.
The relationship between gut microbial metabolism and mental health is one of the most intriguing and controversial topics in microbiome research. Bidirectional microbiota–gut–brain communication has mostly been explored in animal models, with human research lagging behind. Large-scale metagenomics studies could facilitate the translational process, but their interpretation is hampered by a lack of dedicated reference databases and tools to study the microbial neuroactive potential.
Of the many ways the teeming ecosystem of microbes in a person's gut and other tissues might affect health, its potential influence on the brain may be the most provocative for researchers. Several studies in mice had indicated that gut microbes can affect behavior, and small-scale studies of human beings suggested this microbial repertoire is altered in depression. Studies by two large European groups have now found that several species of gut bacteria are missing in people with depression. The researchers can't say whether the absence is a cause or an effect of the illness, but they showed that many gut bacteria make substances that affect nerve cell function, and maybe mood.
Butyrate-producing Faecalibacterium and Coprococcus bacteria were consistently associated with higher quality-of-life indicators. Coprococcus and Dialister were both depleted in people with depression, but not in those with a high quality of life, even after correcting for the confounding effects of antidepressants. The researchers also found that the depressed subjects had an increase in bacteria implicated in Crohn's disease, suggesting inflammation may be at fault.
Looking for something that could link microbes to mood, the researchers compiled a list of 56 substances important for proper functioning of the nervous system that gut microbes either produce or break down. They found, for example, that Coprococcus seems to have a pathway related to dopamine, a key brain signal involved in depression, although they have no evidence of how this might protect against depression. The same microbe also makes an anti-inflammatory substance called butyrate, and increased inflammation is implicated in depression.
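Scoring a microbe's "neuroactive potential" against such a list can be thought of as a gene-set coverage check: how completely does a genome cover each pathway's required genes? The sketch below illustrates the idea with entirely hypothetical gene and pathway names (the actual study used curated gut–brain modules).

```python
# Placeholder pathway definitions; gene names are invented for illustration.
NEUROACTIVE_PATHWAYS = {
    "dopamine degradation": {"geneA", "geneB", "geneC"},
    "butyrate synthesis":   {"geneD", "geneE"},
}

def pathway_coverage(genome_genes: set[str]) -> dict[str, float]:
    """Fraction of each pathway's gene set present in a genome annotation."""
    return {name: len(genes & genome_genes) / len(genes)
            for name, genes in NEUROACTIVE_PATHWAYS.items()}

coprococcus_like = {"geneA", "geneB", "geneC", "geneD", "geneE"}
partial_genome = {"geneA", "geneD"}
print(pathway_coverage(coprococcus_like))  # both pathways fully covered
print(pathway_coverage(partial_genome))    # both only partially covered
```

In practice such checks run over homology-based annotations of metagenome-assembled genomes, but the underlying bookkeeping is this simple set intersection.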
Still, it remains very much unclear how microbial compounds made in the gut might influence the brain. One possible channel is the vagus nerve, which links the gut and brain. Resolving the microbiome–brain connection might lead to novel therapies. Some physicians and companies are already exploring typical probiotics, oral bacterial supplements, for depression, although these don't normally include the missing gut microbes identified in the new study.
A Genetic Switch to Control Female Sexual Behavior, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
In an African cichlid fish, Astatotilapia burtoni, fertile females select a mate and perform a stereotyped spawning / mating routine, offering quantifiable behavioral outputs of neural circuits. A male fish attracts a fertile female by rapidly quivering his brightly colored body. If she chooses him, he guides her back to his territory, where he quivers some more as she pecks at fish egg–colored spots on his anal fin. Next, she lays eggs and quickly scoops them up in her mouth. With a mouthful of eggs, she continues pecking at the male’s spots, “believing” them to be eggs to be collected. As she does, he releases sperm from near his anal fin, which she also gathers. This fertilizes the eggs, and she carries the embryos in her mouth for two weeks as they develop.
But how can these females time their reproduction to coincide with when they are fertile? The female fish will not approach or choose males until they are ready to reproduce, so there must be something in their brains that signals when sexual behavior will be required. The scientists began by considering signaling molecules previously associated with sexual behavior and reproduction, and showed that PGF2α injection activates a naturalistic pattern of sexual behavior in female Astatotilapia burtoni. Injected females would engage in mating behavior even if they were non-fertile, performing the quiver dance with males, but would not actually lay eggs since they had none.
The scientists also identified cells in the brain that transduce the prostaglandin signal to mate and showed that the gonadal steroid 17α,20β-dihydroxyprogesterone modulates mRNA levels of the putative receptor for PGF2α. They keyed in on a receptor for PGF2α in the preoptic area (POA) within the hypothalamus, a region involved in sexual behavior across animals, and suspected that when PGF2α levels rise in the fish, the molecule binds this receptor and triggers sexual behavior. They then used CRISPR/Cas9 to generate PGF2α receptor knockout fish. This knockout uncoupled sexual behavior from fertility status, demonstrating that the PGF2α receptor is necessary for the initiation of sexual behavior.
The finding has parallels across all vertebrates, and might influence the understanding of social behavior in humans. The next steps for this work will involve understanding other behaviors that are regulated by this receptor, and the finding provides insight into both the evolution of reproduction and sexual behaviors. In mammals and other vertebrates, PGF2α promotes the onset of labor and motherly behaviors, and this present research, coupled with other studies, suggests that PGF2α signaling has a common ancestral function associated with birth and its related behaviors.
Disease related changes in proteomics, protein folding, protein-protein interaction, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)
Curator: Larry H. Bernstein, MD, FCAP
LPBI
Frankenstein Proteins Stitched Together by Scientists
The Frankenstein monster, stitched together from disparate body parts, proved to be an abomination, but stitched-together proteins may fare better. They may, for example, serve specific purposes in medicine, research, and industry. At least, that's the ambition of scientists based at the University of North Carolina. They have developed a computational protocol called SEWING that builds new proteins from connected or disconnected pieces of existing structures.
Unlike Victor Frankenstein, who betrayed Promethean ambition when he sewed together his infamous creature, today’s biochemists are relatively modest. Rather than defy nature, they emulate it. For example, at the University of North Carolina (UNC), researchers have taken inspiration from natural evolutionary mechanisms to develop a technique called SEWING—Structure Extension With Native-substructure Graphs. SEWING is a computational protocol that describes how to stitch together new proteins from connected or disconnected pieces of existing structures.
“We can now begin to think about engineering proteins to do things that nothing else is capable of doing,” said UNC’s Brian Kuhlman, Ph.D. “The structure of a protein determines its function, so if we are going to learn how to design new functions, we have to learn how to design new structures. Our study is a critical step in that direction and provides tools for creating proteins that haven’t been seen before in nature.”
Traditionally, researchers have used computational protein design to recreate in the laboratory what already exists in the natural world. In recent years, their focus has shifted toward inventing novel proteins with new functionality. These design projects all start with a specific structural “blueprint” in mind, and as a result are limited. Dr. Kuhlman and his colleagues, however, believe that by removing the limitations of a predetermined blueprint and taking cues from evolution they can more easily create functional proteins.
Dr. Kuhlman's UNC team developed a protein design approach that emulates natural mechanisms for shuffling tertiary structures such as pleats, coils, and furrows. Putting the approach into action, the UNC team mapped 50,000 stitched-together proteins on the computer and then produced 21 promising structures in the laboratory. Details of this work appeared May 6 in the journal Science, in an article entitled, “Design of Structurally Distinct Proteins Using Strategies Inspired by Evolution.”
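SEWING itself runs inside the Rosetta software, but its core idea, assembling new backbones by walking a graph whose nodes are substructure fragments and whose edges mark structurally compatible overlaps, can be caricatured in a few lines. Everything below (fragment names, compatibility edges) is invented purely for illustration.

```python
# Toy "compatibility graph": an edge means two helical fragments share a
# structurally compatible overlap and can be merged end-to-end.
COMPAT = {
    "helixA": ["helixB", "helixC"],
    "helixB": ["helixD"],
    "helixC": ["helixD"],
    "helixD": [],
}

def stitch_designs(start: str, length: int) -> list[list[str]]:
    """Enumerate chimeric designs of `length` fragments by graph traversal."""
    if length == 1:
        return [[start]]
    designs = []
    for nxt in COMPAT[start]:
        for tail in stitch_designs(nxt, length - 1):
            designs.append([start] + tail)
    return designs

for design in stitch_designs("helixA", 3):
    print(" -> ".join(design))
```

Enumerating many such paths and then filtering them by modeled energy is, loosely, how a fragment-graph method can explore far more backbone topologies than any single predetermined blueprint.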
“Helical proteins designed with SEWING contain structural features absent from other de novo designed proteins and, in some cases, remain folded at more than 100°C,” wrote the authors. “High-resolution structures of the designed proteins CA01 and DA05R1 were solved by x-ray crystallography (2.2 angstrom resolution) and nuclear magnetic resonance, respectively, and there was excellent agreement with the design models.”
Essentially, the UNC scientists confirmed that the proteins they had synthesized contained the unique structural varieties that had been designed on the computer. The UNC scientists also determined that the structures they had created had new surface and pocket features. Such features, they noted, provide potential binding sites for ligands or macromolecules.
“We were excited that some had clefts or grooves on the surface, regions that naturally occurring proteins use for binding other proteins,” said the Science article’s first author, Tim M. Jacobs, Ph.D., a former graduate student in Dr. Kuhlman’s laboratory. “That’s important because if we wanted to create a protein that can act as a biosensor to detect a certain metabolite in the body, either for diagnostic or research purposes, it would need to have these grooves. Likewise, if we wanted to develop novel therapeutics, they would also need to attach to specific proteins.”
Currently, the UNC researchers are using SEWING to create proteins that can bind to several other proteins at a time. Many of the most important proteins are such multitaskers, including the blood protein hemoglobin.
Histone Mutation Deranges DNA Methylation to Cause Cancer
In some cancers, including chondroblastoma and a rare form of childhood sarcoma, a mutation in histone H3 reduces global levels of methylation (dark areas) in tumor cells but not in normal cells (arrowhead). The mutation locks the cells in a proliferative state to promote tumor development. [Laboratory of Chromatin Biology and Epigenetics at The Rockefeller University]
They have been called oncohistones, the mutated histones that are known to accompany certain pediatric cancers. Despite their suggestive moniker, oncohistones have kept their oncogenic secrets. For example, it has been unclear whether oncohistones are able to cause cancer on their own, or whether they need to act in concert with additional DNA mutations, that is, mutations other than those affecting histone structures.
While oncohistone mechanisms remain poorly understood, this particular question—the oncogenicity of lone oncohistones—has been resolved, at least in part. According to researchers based at The Rockefeller University, a change to the structure of a histone can trigger a tumor on its own.
This finding appeared May 13 in the journal Science, in an article entitled, “Histone H3K36 Mutations Promote Sarcomagenesis Through Altered Histone Methylation Landscape.” The article describes the Rockefeller team's study of a histone protein called H3, which has been found mutated in about 95% of samples of chondroblastoma, a benign tumor that arises in cartilage, typically during adolescence.
The Rockefeller scientists found that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo.
After the scientists inserted the H3 histone mutation into mouse mesenchymal progenitor cells (MPCs)—which generate cartilage, bone, and fat—they watched these cells lose the ability to differentiate in the lab. Next, the scientists injected the mutant cells into living mice, and the animals developed tumors rich in MPCs, known as undifferentiated sarcomas. Finally, the researchers sought to understand how the mutation causes these tumors to develop.
The scientists determined that H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases.
“Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation,” the authors of the Science study wrote. “After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation.”
Essentially, when the H3K36M mutation occurs, the cell becomes locked in a proliferative state, meaning it divides constantly, leading to tumors. Specifically, the mutation inhibits enzymes that normally tag the histone with chemical groups known as methyls, marks that allow genes to be expressed normally.
In response to this lack of modification, another part of the histone becomes overmodified, or tagged with too many methyl groups. “This leads to an overall resetting of the landscape of chromatin, the complex of DNA and its associated factors, including histones,” explained co-author Peter Lewis, Ph.D., a professor at the University of Wisconsin–Madison and a former postdoctoral fellow in the laboratory of C. David Allis, Ph.D., a professor at Rockefeller.
The finding—that a “resetting” of the chromatin landscape can lock the cell into a proliferative state—suggests that researchers should be on the hunt for more mutations in histones that might be driving tumors. For their part, the Rockefeller researchers are trying to learn more about how this specific mutation in histone H3 causes tumors to develop.
“We want to know which pathways cause the mesenchymal progenitor cells that carry the mutation to continue to divide, and not differentiate into the bone, fat, and cartilage cells they are destined to become,” said co-author Chao Lu, Ph.D., a postdoctoral fellow in the Allis lab.
Once researchers understand more about these pathways, added Dr. Lewis, they can consider ways of blocking them with drugs, particularly in tumors such as MPC-rich sarcomas—which, unlike chondroblastoma, can be deadly. In fact, drugs that block these pathways may already exist and may even be in use for other types of cancers.
“One long-term goal of our collaborative team is to better understand fundamental mechanisms that drive these processes, with the hope of providing new therapeutic approaches,” concluded Dr. Allis.
Histone H3K36 mutations promote sarcomagenesis through altered histone methylation landscape
Missense mutations (that change one amino acid for another) in histone H3 can produce a so-called oncohistone and are found in a number of pediatric cancers. For example, the lysine-36–to-methionine (K36M) mutation is seen in almost all chondroblastomas. Lu et al. show that K36M mutant histones are oncogenic, and they inhibit the normal methylation of this same residue in wild-type H3 histones. The mutant histones also interfere with the normal development of bone-related cells and the deposition of inhibitory chromatin marks.
Several types of pediatric cancers reportedly contain high-frequency missense mutations in histone H3, yet the underlying oncogenic mechanism remains poorly characterized. Here we report that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo. H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases. Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation. After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation. Our findings are mirrored in human undifferentiated sarcomas in which novel K36M/I mutations in H3.1 are identified.
Mitochondria? We Don’t Need No Stinking Mitochondria!
Diagram comparing typical eukaryotic cell to the newly discovered mitochondria-free organism. [Karnkowska et al., 2016, Current Biology 26, 1–11]
The organelle that produces a significant portion of the energy for eukaryotic cells would seemingly be indispensable, yet over the years a number of organisms have been discovered that challenge that biological premise. These so-called amitochondrial species may lack a defined organelle, but they still retain some residual functions of their mitochondria-containing brethren. Even the intestinal eukaryotic parasite Giardia intestinalis, which was for many years considered to be mitochondria-free, was recently shown to contain a considerably shriveled version of the organelle.
Now, an international group of scientists has released results from a new study that challenges the notion that mitochondria are essential for eukaryotes—discovering an organism that resides in the gut of chinchillas that contains absolutely no trace of mitochondria at all.
“In low-oxygen environments, eukaryotes often possess a reduced form of the mitochondrion, but it was believed that some of the mitochondrial functions are so essential that these organelles are indispensable for their life,” explained lead study author Anna Karnkowska, Ph.D., visiting scientist at the University of British Columbia in Vancouver. “We have characterized a eukaryotic microbe which indeed possesses no mitochondrion at all.”
Mysterious Eukaryote Missing Mitochondria
Researchers uncover the first example of a eukaryotic organism that lacks the organelles.
Monocercomonoides sp. PA203 [Vladimir Hampl, Charles University, Prague, Czech Republic]
Scientists have long thought that mitochondria—organelles responsible for energy generation—are an essential and defining feature of a eukaryotic cell. Now, researchers from Charles University in Prague and their colleagues are challenging this notion with their discovery of a eukaryotic organism, Monocercomonoides species PA203, which lacks mitochondria. The team's phylogenetic analysis, published today (May 12) in Current Biology, suggests that Monocercomonoides—which belongs to the Oxymonadida group of protozoa and lives in low-oxygen environments—did have mitochondria at one point, but eventually lost the organelles.
“This is quite a groundbreaking discovery,” said Thijs Ettema, who studies microbial genome evolution at Uppsala University in Sweden and was not involved in the work.
“This study shows that mitochondria are not so central for all lineages of living eukaryotes,” Toni Gabaldón of the Center for Genomic Regulation in Barcelona, Spain, who also was not involved in the work, wrote in an email to The Scientist. “Yet, this mitochondrial-devoid, single-cell eukaryote is as complex as other eukaryotic cells in almost any other aspect of cellular complexity.”
Charles University’s Vladimir Hampl studies the evolution of protists. Along with Anna Karnkowska and colleagues, Hampl decided to sequence the genome of Monocercomonoides, a little-studied protist that lives in the digestive tracts of vertebrates. The 75-megabase genome—the first of an oxymonad—did not contain any conserved genes found on mitochondrial genomes of other eukaryotes, the researchers found. It also did not contain any nuclear genes associated with mitochondrial functions.
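Conceptually, the genome screen reduces to an absence check: do any hallmark mitochondrial protein families show up in the predicted proteome? The sketch below uses a handful of real mitochondrial marker family names (TOM40, TIM23, ISC pathway components) but entirely invented toy proteomes; the real analysis relies on sensitive homology searches, not exact name matching.

```python
# Example hallmark families (real names, illustrative selection).
MITO_HALLMARKS = {"TOM40", "TIM23", "ISCU", "NFS1", "FXN"}

def mito_evidence(predicted_proteins: set[str]) -> set[str]:
    """Return hallmark mitochondrial families detected in a proteome."""
    return MITO_HALLMARKS & predicted_proteins

# Toy proteomes: a Giardia-like organism keeps a reduced mitosome with ISC
# components; the Monocercomonoides-like one carries only a cytosolic SUF system.
giardia_like = {"ISCU", "NFS1", "TOM40", "enolase"}
monocercomonoides_like = {"enolase", "SufB", "SufC"}
print(mito_evidence(giardia_like))            # residual hallmark hits remain
print(mito_evidence(monocercomonoides_like))  # empty set: no trace at all
```

The striking result of the study is that, even with far more exhaustive searching than this toy check, the Monocercomonoides genome returned the equivalent of an empty set.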
“It was surprising and for a long time, we didn’t believe that the [mitochondria-associated genes were really not there]. We thought we were missing something,” Hampl told The Scientist. “But when the data kept accumulating, we switched to the hypothesis that this organism really didn’t have mitochondria.”
Because researchers had previously not found examples of eukaryotes without some form of mitochondria, the current theory of the origin of eukaryotes posits that the appearance of mitochondria was crucial to the identity of these organisms.
Mitochondria-like organelles are now viewed as a continuum, from full mitochondria down to highly reduced remnants. Some anaerobic protists, for example, have only pared-down versions of mitochondria, such as hydrogenosomes and mitosomes, which lack a mitochondrial genome. But these mitochondrion-like organelles perform essential functions of the iron-sulfur cluster assembly pathway, which is known to be conserved in virtually all eukaryotic organisms studied to date.
Yet, in their analysis, the researchers found no evidence of the presence of any components of this mitochondrial pathway.
Like the organisms whose mitochondria scaled down into mitosomes, the ancestors of modern Monocercomonoides once had mitochondria. “Because this organism is phylogenetically nested among relatives that had conventional mitochondria, this is most likely a secondary adaptation,” said Michael Gray, a biochemist who studies mitochondria at Dalhousie University in Nova Scotia and was not involved in the study. According to Gray, the finding of a mitochondria-deficient eukaryote does not mean that the organelles did not play a major role in the evolution of eukaryotic cells.
To be sure they were not missing mitochondrial proteins, Hampl’s team also searched for potential mitochondrial protein homologs of other anaerobic species, and for signature sequences of a range of known mitochondrial proteins. While similar searches with other species uncovered a few mitochondrial proteins, the team’s analysis of Monocercomonoides came up empty.
“The data is very complete,” said Ettema. “It is difficult to prove the absence of something but [these authors] do a convincing job.”
To form the essential iron-sulfur clusters, the team discovered, Monocercomonoides uses a sulfur mobilization (SUF) system found in the cytosol, which an ancestor of the organism acquired by lateral gene transfer from bacteria. This cytosolic, compensating system allowed Monocercomonoides to lose the otherwise essential iron-sulfur cluster-forming pathway of the mitochondrion, the team proposed.
“This work shows the great evolutionary plasticity of the eukaryotic cell,” said Karnkowska, who participated in the study while she was a postdoc at Charles University. Karnkowska, who is now a visiting researcher at the University of British Columbia in Canada, added: “This is a striking example of how far the evolution of a eukaryotic cell can go that was beyond our expectations.”
“The results highlight how many surprises may await us in the poorly studied eukaryotic phyla that live in under-explored environments,” Gabaldon said.
Ettema agreed. “Now that we’ve found one, we need to look at the bigger picture and see if there are other examples of eukaryotes that have lost their mitochondria, to understand how adaptable eukaryotes are.”
Karnkowska et al., “A eukaryote without a mitochondrial organelle,” Current Biology,doi:10.1016/j.cub.2016.03.053, 2016.
•Monocercomonoides sp. is a eukaryotic microorganism with no mitochondria
•The complete absence of mitochondria is a secondary loss, not an ancestral feature
•The essential mitochondrial ISC pathway was replaced by a bacterial SUF system
The presence of mitochondria and related organelles in every studied eukaryote supports the view that mitochondria are essential cellular components. Here, we report the genome sequence of a microbial eukaryote, the oxymonad Monocercomonoides sp., which revealed that this organism lacks all hallmark mitochondrial proteins. Crucially, the mitochondrial iron-sulfur cluster assembly pathway, thought to be conserved in virtually all eukaryotic cells, has been replaced by a cytosolic sulfur mobilization system (SUF) acquired by lateral gene transfer from bacteria. In the context of eukaryotic phylogeny, our data suggest that Monocercomonoides is not primitively amitochondrial but has lost the mitochondrion secondarily. This is the first example of a eukaryote lacking any form of a mitochondrion, demonstrating that this organelle is not absolutely essential for the viability of a eukaryotic cell.
This method catches a bait protein together with its associated protein partners in virus-like particles that bud from human cells. In this way, cell lysis is not needed and protein complexes are preserved during purification.
With his feet in both a proteomics lab and an interactomics lab, VIB/UGent professor Sven Eyckerman is well aware of the shortcomings of conventional approaches to analyze protein complexes. The lysis conditions required in mass spectrometry–based strategies to break open cell membranes often affect protein-protein interactions. “The first step in a classical study on protein complexes essentially turns the highly organized cellular structure into a big messy soup”, Eyckerman explains.
Inspired by virus biology, Eyckerman came up with a creative solution. “We used the natural process of HIV particle formation to our benefit by hacking a completely safe form of the virus to abduct intact protein machines from the cell.” It is well known that the HIV virus captures a number of host proteins during its particle formation. By fusing a bait protein to the HIV-1 GAG protein, interaction partners become trapped within virus-like particles that bud from mammalian cells. Standard proteomic approaches are used next to reveal the content of these particles. Fittingly, the team named the method ‘Virotrap’.
The Virotrap approach is exceptional as protein networks can be characterized under natural conditions. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. The researchers showed the method was suitable for detection of known binary interactions as well as mass spectrometry-based identification of novel protein partners.
Virotrap is a textbook example of bringing research teams with complementary expertise together. Cross-pollination with the labs of Jan Tavernier (VIB/UGent) and Kris Gevaert (VIB/UGent) enabled the development of this platform.
Jan Tavernier: “Virotrap represents a new concept in co-complex analysis wherein complex stability is physically guaranteed by a protective, physical structure. It is complementary to the arsenal of existing interactomics methods, but also holds potential for other fields, like drug target characterization. We also developed a small molecule-variant of Virotrap that could successfully trap protein partners for small molecule baits.”
Kris Gevaert: “Virotrap can also impact our understanding of disease pathways. We were actually surprised to see that this virus-based system could be used to study antiviral pathways, like Toll-like receptor signaling. Understanding these protein machines in their natural environment is essential if we want to modulate their activity in pathology.”
Trapping mammalian protein complexes in viral particles
Cell lysis is an inevitable step in classical mass spectrometry–based strategies to analyse protein complexes. Complementary lysis conditions, in situ cross-linking strategies and proximal labelling techniques are currently used to reduce lysis effects on the protein complex. We have developed Virotrap, a viral particle sorting approach that obviates the need for cell homogenization and preserves the protein complexes during purification. By fusing a bait protein to the HIV-1 GAG protein, we show that interaction partners become trapped within virus-like particles (VLPs) that bud from mammalian cells. Using an efficient VLP enrichment protocol, Virotrap allows the detection of known binary interactions and MS-based identification of novel protein partners as well. In addition, we show the identification of stimulus-dependent interactions and demonstrate trapping of protein partners for small molecules. Virotrap constitutes an elegant complementary approach to the arsenal of methods to study protein complexes.
Proteins mostly exert their function within supramolecular complexes. Strategies for detecting protein–protein interactions (PPIs) can be roughly divided into genetic systems1 and co-purification strategies combined with mass spectrometry (MS) analysis (for example, AP–MS)2. The latter approaches typically require cell or tissue homogenization using detergents, followed by capture of the protein complex using affinity tags3 or specific antibodies4. The protein complexes extracted from this ‘soup’ of constituents are then subjected to several washing steps before actual analysis by trypsin digestion and liquid chromatography–MS/MS analysis. Such lysis and purification protocols are typically empirical and have mostly been optimized using model interactions in single labs. In fact, lysis conditions can profoundly affect the number of both specific and nonspecific proteins that are identified in a typical AP–MS set-up. Indeed, recent studies using the nuclear pore complex as a model protein complex describe optimization of purifications for the different proteins in the complex by examining 96 different conditions5. Nevertheless, for new purifications, it remains hard to correctly estimate the loss of factors in a standard AP–MS experiment due to washing and dilution effects during treatments (that is, false negatives). These considerations have pushed the concept of stabilizing PPIs before the actual homogenization step. A classical approach involves cross-linking with simple reagents (for example, formaldehyde) or with more advanced isotope-labelled cross-linkers (reviewed in ref. 2). However, experimental challenges such as cell permeability and reactivity still preclude the widespread use of cross-linking agents. Moreover, MS-generated spectra of cross-linked peptides are notoriously difficult to identify correctly. 
A recent lysis-independent solution involves the expression of a bait protein fused to a promiscuous biotin ligase, which results in labelling of proteins proximal to the activity of the enzyme-tagged bait protein6. When compared with AP–MS, this BioID approach delivers a complementary set of candidate proteins, including novel interaction partners7, 8. Such studies clearly underscore the need for complementary approaches in co-complex strategies.
The evolutionary stress on viruses promoted highly condensed coding of information and maximal functionality for small genomes. Accordingly, for HIV-1 it is sufficient to express a single protein, the p55 GAG protein, for efficient production of virus-like particles (VLPs) from cells9, 10. This protein is highly mobile before its accumulation in cholesterol-rich regions of the membrane, where multimerization initiates the budding process11. A total of 4,000–5,000 GAG molecules is required to form a single particle of about 145 nm (ref. 12). Both VLPs and mature viruses contain a number of host proteins that are recruited by binding to viral proteins. These proteins can either contribute to the infectivity (for example, Cyclophilin/FKBPA13) or act as antiviral proteins preventing the spreading of the virus (for example, APOBEC proteins14).
We here describe the development and application of Virotrap, an elegant co-purification strategy based on the trapping of a bait protein together with its associated protein partners in VLPs that are budded from the cell. After enrichment, these particles can be analysed by targeted (for example, western blotting) or unbiased approaches (MS-based proteomics). Virotrap allows detection of known binary PPIs, analysis of protein complexes and their dynamics, and readily detects protein binders for small molecules.
Concept of the Virotrap system
Classical AP–MS approaches rely on cell homogenization to access protein complexes, a step that can vary significantly with the lysis conditions (detergents, salt concentrations, pH conditions and so on)5. To eliminate the homogenization step in AP–MS, we reasoned that incorporation of a protein complex inside a secreted VLP traps the interaction partners under native conditions and protects them during further purification. We thus explored the possibility of protein complex packaging by the expression of GAG-bait protein chimeras (Fig. 1), as expression of GAG results in the release of VLPs from the cells9, 10. As a first PPI pair to evaluate this concept, we selected the HRAS protein as bait combined with the RAF1 prey protein. We were able to specifically detect the HRAS–RAF1 interaction following enrichment of VLPs via ultracentrifugation (Supplementary Fig. 1a). To avoid tedious ultracentrifugation steps, we designed a novel single-step protocol wherein we co-express the vesicular stomatitis virus glycoprotein (VSV-G) together with a tagged version of this glycoprotein, in addition to the GAG bait and prey. Both tagged and untagged VSV-G proteins are probably presented as trimers on the surface of the VLPs, allowing efficient antibody-based recovery from large volumes. The HRAS–RAF1 interaction was confirmed using this single-step protocol (Supplementary Fig. 1b). No associations with unrelated bait or prey proteins were observed with either protocol.
Figure 1: Schematic representation of the Virotrap strategy.
Expression of a GAG-bait fusion protein (1) results in submembrane multimerization (2) and subsequent budding of VLPs from cells (3). Interaction partners of the bait protein are also trapped within these VLPs and can be identified after purification by western blotting or MS analysis (4).
Virotrap for the detection of binary interactions
We next explored the reciprocal detection of a set of PPI pairs, which were selected based on published evidence and cytosolic localization15. After single-step purification and western blot analysis, we could readily detect reciprocal interactions between CDK2 and CKS1B, LCP2 and GRAP2, and S100A1 and S100B (Fig. 2a). Only for the LCP2 prey did we observe nonspecific association with an irrelevant bait construct. However, the particle levels of the GRAP2 bait were substantially lower than those of the GAG control construct (GAG protein levels in VLPs; Fig. 2a, second panel of the LCP2 prey). After quantifying the intensities of bait and prey proteins and normalizing prey levels to bait levels, we observed a strong enrichment for the GAG-GRAP2 bait (Supplementary Fig. 2).
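The bait-level normalization step above can be sketched in code. This is a minimal illustration, assuming hypothetical band intensities; the function name and all values are invented for this sketch, not taken from the paper:

```python
def normalized_prey_signal(prey_intensity: float, bait_intensity: float) -> float:
    """Correct a prey band intensity for the amount of bait present in the VLPs."""
    if bait_intensity <= 0:
        raise ValueError("bait intensity must be positive")
    return prey_intensity / bait_intensity

# A modest raw prey signal over a scarce bait still indicates enrichment
# relative to a brighter prey signal over an abundant control bait.
grap2 = normalized_prey_signal(prey_intensity=20.0, bait_intensity=5.0)      # 4.0
control = normalized_prey_signal(prey_intensity=30.0, bait_intensity=100.0)  # 0.3
enrichment = grap2 / control  # > 10-fold apparent enrichment
```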
…..
Virotrap for unbiased discovery of novel interactions
For the detection of novel interaction partners, we scaled up VLP production and purification protocols (Supplementary Fig. 5 and Supplementary Note 1 for an overview of the protocol) and investigated protein partners trapped using the following bait proteins: Fas-associated via death domain (FADD), A20 (TNFAIP3), nuclear factor-κB (NF-κB) essential modifier (IKBKG), TRAF family member-associated NF-κB activator (TANK), MYD88 and ring finger protein 41 (RNF41). To obtain specific interactors from the lists of identified proteins, we challenged the data with a combined protein list of 19 unrelated Virotrap experiments (Supplementary Table 1 for an overview). Figure 3 shows the design and the list of candidate interactors obtained after removal of all proteins that were found in the 19 control samples (including removal of proteins from the control list identified with a single peptide). The remaining list of confident protein identifications (identified with at least two peptides in at least two biological repeats) reveals both known and novel candidate interaction partners. All candidate interactors including single peptide protein identifications are given in Supplementary Data 2 and also include recurrent protein identifications of known interactors based on a single peptide; for example, CASP8 for FADD and TANK for NEMO. Using alternative methods, we confirmed the interaction between A20 and FADD, and the associations with transmembrane proteins (insulin receptor and insulin-like growth factor receptor 1) that were captured using RNF41 as a bait (Supplementary Fig. 6). To address the use of Virotrap for the detection of dynamic interactions, we activated the NF-κB pathway via the tumour necrosis factor (TNF) receptor (TNFRSF1A) using TNFα (TNF) and performed Virotrap analysis using A20 as bait (Fig. 3). 
This resulted in the additional enrichment of receptor-interacting kinase (RIPK1), TNFR1-associated via death domain (TRADD), TNFRSF1A and TNF itself, confirming the expected activated complex20.
Figure 3: Use of Virotrap for unbiased interactome analysis
Lysis conditions used in AP–MS strategies are critical for the preservation of protein complexes. A multitude of lysis conditions have been described, culminating in a recent report in which protein complex stability was assessed under 96 lysis/purification protocols5. Moreover, the authors suggest optimizing the conditions for every complex, implying a considerable workload for researchers embarking on protein complex analysis using classical AP–MS. As lysis profoundly changes the subcellular context and significantly alters the concentration of proteins, loss of complex integrity during a classical AP–MS protocol can be expected. A clear evolution towards ‘lysis-independent’ approaches in the co-complex analysis field is evident with the introduction of BioID6 and APEX25, where proximal proteins, including proteins residing in the complex, are labelled with biotin by an enzymatic activity fused to a bait protein. A side-by-side comparison between classical AP–MS and BioID showed overlapping and unique candidate binding proteins for both approaches7, 8, supporting the notion that complementary methods are needed to provide a comprehensive view on protein complexes. This has also been clearly demonstrated for binary approaches15 and is a logical consequence of the heterogeneous nature of PPIs (binding mechanism, requirement for post-translational modifications, location, affinity and so on).
In this report, we explore an alternative, yet complementary, method to isolate protein complexes without interfering with cellular integrity. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. This constitutes a new concept in co-complex analysis, wherein complex stability is guaranteed by a protective physical structure. A comparison of our Virotrap approach with AP–MS shows complementary data, with specific false positives and false negatives for both methods (Supplementary Fig. 7).
The current implementation of the Virotrap platform implies the use of a GAG-bait construct, resulting in considerable expression of the bait protein. Different strategies are currently being pursued to reduce bait expression, including co-expression of a native GAG protein together with the GAG-bait protein, which not only reduces bait expression but also creates more ‘space’ in the particles, potentially accommodating larger bait protein complexes. Nevertheless, the presence of the bait on the forming GAG scaffold creates an intracellular affinity matrix (comparable to the early in vitro affinity columns for purification of interaction partners from lysates26) that has the potential to compete with endogenous complexes through avidity effects. This avidity effect is a powerful mechanism that aids in the recruitment of cyclophilin to GAG27, a well-known weak interaction (Kd = 16 μM (ref. 28)) detectable as a background association in the Virotrap system. Although background binding may be increased by elevated bait expression, weaker associations are readily detectable (for example, the MAL–MYD88 binding study; Fig. 2c).
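For context on why a Kd of 16 μM counts as a weak interaction, the equilibrium occupancy of a simple 1:1 binding site is [L]/(Kd + [L]). A minimal sketch follows; the ligand concentration used is an illustrative assumption, not a value from the paper:

```python
def fraction_bound(ligand_conc_uM: float, kd_uM: float) -> float:
    """Equilibrium occupancy of a 1:1 binding site: [L] / (Kd + [L])."""
    return ligand_conc_uM / (kd_uM + ligand_conc_uM)

# At ~1 uM free ligand, a Kd of 16 uM gives under 6% occupancy per site,
# yet thousands of GAG copies per particle can still recruit the partner
# through avidity.
occupancy = fraction_bound(1.0, 16.0)  # ~0.059
```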
The size of Virotrap particles (around 145 nm) suggests limitations in the size of the protein complex that can be accommodated in the particles. Further experimentation is required to define the maximum size of proteins or the number of protein complexes that can be trapped inside the particles.
….
In conclusion, Virotrap captures significant parts of known interactomes and reveals new interactions. This cell lysis-free approach purifies protein complexes under native conditions and thus provides a powerful method to complement AP–MS or other PPI data. Future improvements of the system include strategies to reduce bait expression to more physiological levels and application of advanced data analysis options to filter out background. These developments can further aid in the deployment of Virotrap as a powerful extension of the current co-complex technology arsenal.
New Autism Blood Biomarker Identified
Researchers at UT Southwestern Medical Center have identified a blood biomarker that may aid in earlier diagnosis of children with autism spectrum disorder, or ASD.
In a recent edition of Scientific Reports, UT Southwestern researchers reported the identification of a blood biomarker that could distinguish the majority of ASD study participants from a control group of similar age range. In addition, the biomarker was significantly correlated with the level of communication impairment, suggesting that the blood test may give insight into ASD severity.
“Numerous investigators have long sought a biomarker for ASD,” said Dr. Dwight German, study senior author and Professor of Psychiatry at UT Southwestern. “The blood biomarker reported here along with others we are testing can represent a useful test with over 80 percent accuracy in identifying ASD.”
The ASD1 peptoid was 66 percent accurate in diagnosing ASD. When combined with thyroid-stimulating hormone level measurements, the ASD1-binding biomarker was 73 percent accurate at diagnosis.
A Search for Blood Biomarkers for Autism: Peptoids
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social interaction and communication, and restricted, repetitive patterns of behavior. In order to identify individuals with ASD and initiate interventions at the earliest possible age, biomarkers for the disorder are desirable. Research findings have identified widespread changes in the immune system in children with autism, at both systemic and cellular levels. In an attempt to find candidate antibody biomarkers for ASD, highly complex libraries of peptoids (oligo-N-substituted glycines) were screened for compounds that preferentially bind IgG from boys with ASD over typically developing (TD) boys. Unexpectedly, many peptoids were identified that preferentially bound IgG from TD boys. One of these peptoids was studied further and found to bind significantly higher levels (>2-fold) of the IgG1 subtype in serum from TD boys (n = 60) compared to ASD boys (n = 74), as well as compared to older adult males (n = 53). Together these data suggest that ASD boys have reduced levels (>50%) of an IgG1 antibody, which resembles the level found normally with advanced age. In this discovery study, the ASD1 peptoid was 66% accurate in predicting ASD.
….
Peptoid libraries have been used previously to search for autoantibodies for neurodegenerative diseases19 and for systemic lupus erythematosus (SLE)21. In the case of SLE, peptoids were identified that could detect subjects with the disease and related syndromes with moderate sensitivity (70%) and excellent specificity (97.5%). Peptoids were used to measure IgG levels from both healthy subjects and SLE patients. Binding to the SLE-peptoid was significantly higher in SLE patients vs. healthy controls. The IgG bound to the SLE-peptoid was found to react with several autoantigens, suggesting that the peptoids are capable of interacting with multiple, structurally similar molecules. These data indicate that IgG binding to peptoids can identify subjects with high overall levels of pathogenic autoantibodies, rather than reflecting a single antibody.
In the present study, the ASD1 peptoid binds significantly lower levels of IgG1 in ASD males vs. TD males. This finding suggests that the ASD1 peptoid recognizes antibody(-ies) of an IgG1 subtype that is (are) significantly lower in abundance in the ASD males vs. TD males. Although a previous study14 has demonstrated lower levels of plasma IgG in ASD vs. TD children, here, we additionally quantified serum IgG levels in our individuals and found no difference in IgG between the two groups (data not shown). Furthermore, our IgG levels did not correlate with ASD1 binding levels, indicating that ASD1 does not bind IgG generically, and that the peptoid’s ability to differentiate between ASD and TD males is related to a specific antibody(-ies).
ASD subjects underwent a diagnostic evaluation using the ADOS and ADI-R, and application of the DSM-IV criteria prior to study inclusion. Only those subjects with a diagnosis of Autistic Disorder were included in the study. The ADOS is a semi-structured observation of a child’s behavior that allows examiners to observe the three core domains of ASD symptoms: reciprocal social interaction, communication, and restricted and repetitive behaviors1. When ADOS subdomain scores were compared with peptoid binding, the only significant relationship was with Social Interaction. However, the positive correlation would suggest that lower peptoid binding is associated with better social interaction, not poorer social interaction as anticipated.
The ADI-R is a structured parental interview that measures the core features of ASD symptoms in the areas of reciprocal social interaction, communication and language, and patterns of behavior. Of the three ADI-R subdomains, only the Communication domain was related to ASD1 peptoid binding, and this correlation was negative suggesting that low peptoid binding is associated with greater communication problems. These latter data are similar to the findings of Heuer et al.14 who found that children with autism with low levels of plasma IgG have high scores on the Aberrant Behavior Checklist (p < 0.0001). Thus, peptoid binding to IgG1 may be useful as a severity marker for ASD allowing for further characterization of individuals, but further research is needed.
It is interesting that in serum samples from older men, the ASD1 binding is similar to that in the ASD boys. This is consistent with the observation that with aging there is a reduction in the strength of the immune system, and that the changes are gender-specific25. Recent studies using parabiosis26, in which blood from young mice reverses age-related impairments in cognitive function and synaptic plasticity in old mice, reveal that blood constituents from young subjects may contain substances important for maintaining neuronal function. Work is in progress to identify the antibody/antibodies that differentially bind to the ASD1 peptoid, which appear as a single band on the electrophoresis gel (Fig. 4).
……..
(A) Titration of IgG binding to ASD1 using serum pooled from 10 TD males and 10 ASD males demonstrates ASD1’s ability to differentiate between the two groups. (B) Detecting IgG1 subclass instead of total IgG amplifies this differentiation. (C) IgG1 binding of individual ASD (n = 74) and TD (n = 60) male serum samples (1:100 dilution) to ASD1 significantly differs, with TD > ASD. In addition, IgG1 binding of older adult male (AM) serum samples (n = 53) to ASD1 is significantly lower than in TD males, and not different from ASD males. The three groups were compared with a Kruskal-Wallis ANOVA, H = 10.1781, p < 0.006. **p < 0.005. Error bars show SEM. (D) Receiver-operating characteristic curve for ASD1’s ability to discriminate between ASD and TD males.
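The group separation summarized by the ROC curve in panel D can be captured by the area under the curve, which equals the probability that a randomly chosen TD sample out-binds a randomly chosen ASD sample. A minimal sketch with invented binding values (the numbers are illustrative, not study data):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a positive outscores a negative
    (ties count half); equivalent to the Mann-Whitney U statistic
    divided by n_pos * n_neg."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical IgG1 binding values (arbitrary units), TD generally higher:
td = [3.1, 4.2, 5.0, 3.8]
asd = [1.5, 2.2, 3.1, 2.0]
auc = roc_auc(td, asd)  # close to 1.0 for well-separated groups
```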
Association between peptoid binding and ADOS and ADI-R subdomains
Higher scores in any domain on the ADOS and ADI-R are indicative of more abnormal behaviors and/or symptoms. Among ADOS subdomains, there was no significant relationship between Communication and peptoid binding (z = 0.04, p = 0.966), Communication + Social interaction (z = 1.53, p = 0.127), or Stereotyped Behaviors and Restrictive Interests (SBRI) (z = 0.46, p = 0.647). Higher scores on the Social Interaction domain were significantly associated with higher peptoid binding (z = 2.04, p = 0.041).
Among ADI-R subdomains, higher scores on the Communication domain were associated with lower levels of peptoid binding (z = −2.28, p = 0.023). There was not a significant relationship between Social Interaction (z = 0.07, p = 0.941) or Restrictive/Repetitive Stereotyped Behaviors (z = −1.40, p = 0.162) and peptoid binding.
Computational Model Finds New Protein-Protein Interactions
Researchers at the University of Pittsburgh have discovered more than 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia.
Using a computational model they developed, researchers at the University of Pittsburgh School of Medicine have discovered more than 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia. The findings, published online in npj Schizophrenia, a Nature Publishing Group journal, could lead to greater understanding of the biological underpinnings of this mental illness, as well as point the way to treatments.
There have been many genome-wide association studies (GWAS) that have identified gene variants associated with an increased risk for schizophrenia, but in most cases little is known about the proteins these genes make, what they do and how they interact, said senior investigator Madhavi Ganapathiraju, Ph.D., assistant professor of biomedical informatics, Pitt School of Medicine.
“GWAS studies and other research efforts have shown us what genes might be relevant in schizophrenia,” she said. “What we have done is the next step. We are trying to understand how these genes relate to each other, which could show us the biological pathways that are important in the disease.”
Each gene makes proteins and proteins typically interact with each other in a biological process. Information about interacting partners can shed light on the role of a gene that has not been studied, revealing pathways and biological processes associated with the disease and also its relation to other complex diseases.
Dr. Ganapathiraju’s team developed a computational model called High-Precision Protein Interaction Prediction (HiPPIP) and applied it to discover PPIs of schizophrenia-linked genes identified through GWAS, as well as historically known risk genes. They found 504 previously unknown PPIs, and also noted that while schizophrenia-linked genes identified historically and through GWAS had little overlap, the model showed they shared more than 100 common interactors.
“We can infer what the protein might do by checking out the company it keeps,” Dr. Ganapathiraju explained. “For example, if I know you have many friends who play hockey, it could mean that you are involved in hockey, too. Similarly, if we see that an unknown protein interacts with multiple proteins involved in neural signaling, for example, there is a high likelihood that the unknown entity also is involved in the same.”
Dr. Ganapathiraju and colleagues have drawn such inferences on protein function based on the PPIs of proteins, and made their findings available on a website Schizo-Pi. This information can be used by biologists to explore the schizophrenia interactome with the aim of understanding more about the disease or developing new treatment drugs.
Schizophrenia interactome with 504 novel protein–protein interactions
Genome-wide association studies (GWAS) have revealed the role of rare and common genetic variants, but the functional effects of the risk variants remain to be understood. Protein interactome-based studies can facilitate the study of molecular mechanisms by which the risk genes relate to schizophrenia (SZ) genesis, but protein–protein interactions (PPIs) are unknown for many of the liability genes. We developed a computational model to discover PPIs, which is found to be highly accurate according to computational evaluations and experimental validations of selected PPIs. We present here 365 novel PPIs of liability genes identified by the SZ Working Group of the Psychiatric Genomics Consortium (PGC). Seventeen genes that had no previously known interactions have 57 novel interactions by our method. Among the new interactors are 19 drug targets that are targeted by 130 drugs. In addition, we computed 147 novel PPIs of 25 candidate genes investigated in the pre-GWAS era. While there is little overlap between the GWAS genes and the pre-GWAS genes, the interactomes reveal that they largely belong to the same pathways, thus reconciling the apparent disparities between GWAS and prior gene association studies. The interactome, including 504 novel PPIs overall, could motivate other systems biology studies and trials with repurposed drugs. The PPIs are made available on a webserver, called Schizo-Pi, at http://severus.dbmi.pitt.edu/schizo-pi with advanced search capabilities.
Schizophrenia (SZ) is a common, potentially severe psychiatric disorder that afflicts all populations.1 Gene mapping studies suggest that SZ is a complex disorder, with a cumulative impact of variable genetic effects coupled with environmental factors.2 As many as 38 genome-wide association studies (GWAS) have been reported on SZ out of a total of 1,750 GWAS publications on 1,087 traits or diseases reported in the GWAS catalog maintained by the National Human Genome Research Institute of USA3 (as of April 2015), revealing the common variants associated with SZ.4 The SZ Working Group of the Psychiatric Genomics Consortium (PGC) identified 108 genetic loci that likely confer risk for SZ.5 While the role of genetics has been clearly validated by this study, the functional impact of the risk variants is not well-understood.6,7 Several of the genes implicated by the GWAS have unknown functions and could participate in possibly hitherto unknown pathways.8 Further, there is little or no overlap between the genes identified through GWAS and ‘candidate genes’ proposed in the pre-GWAS era.9
Interactome-based studies can be useful in discovering the functional associations of genes. For example, disrupted in schizophrenia 1 (DISC1), an SZ-related candidate gene, originally had no known homolog in humans. Although it had well-characterized protein domains such as coiled-coil domains and leucine-zipper domains, its function was unknown.10,11 Once its protein–protein interactions (PPIs) were determined using yeast 2-hybrid technology,12 investigators successfully linked DISC1 to cAMP signaling, axon elongation, and neuronal migration, and accelerated the research pertaining to SZ in general, and DISC1 in particular.13 Typically such studies are carried out on known protein–protein interaction (PPI) networks, or, as in the case of DISC1, when there is a specific gene of interest, its PPIs are determined by methods such as yeast 2-hybrid technology.
Knowledge of human PPI networks is thus valuable for accelerating discovery of protein function, and indeed, biomedical research in general. However, of the hundreds of thousands of biophysical PPIs thought to exist in the human interactome,14,15 <100,000 are known today (Human Protein Reference Database, HPRD16 and BioGRID17 databases). Gold-standard experimental methods for the determination of all the PPIs in the human interactome are time-consuming, expensive and may not even be feasible, as about 250 million pairs of proteins would need to be tested overall; high-throughput methods such as yeast 2-hybrid have important limitations for whole-interactome determination, as they have a low recall of 23% (i.e., the remaining 77% of true interactions need to be determined by other means) and a low precision (i.e., the screens have to be repeated multiple times to achieve high selectivity).18,19 Computational methods are therefore necessary to complete the interactome expeditiously. Algorithms have begun emerging to predict PPIs using statistical machine learning on the characteristics of the proteins, but these algorithms are employed predominantly to study yeast. Two significant computational predictions have been reported for the human interactome; although they have had high false-positive rates, these methods have laid the foundation for computational prediction of human PPIs.20,21
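The "about 250 million pairs" figure follows directly from the number of human protein-coding genes. A quick sanity check, where the gene count is an approximation chosen for illustration:

```python
import math

n_proteins = 22_500  # approximate human protein-coding gene count (assumption)
total_pairs = math.comb(n_proteins, 2)  # every possible binary pair to test
# total_pairs is ~2.5e8, i.e., on the order of 250 million pairs

# With yeast 2-hybrid recall of ~23%, a single screen leaves most
# true interactions undetected:
remaining_fraction = 1 - 0.23  # 77% must come from other methods
```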
We have created a new PPI prediction model called High-Confidence Protein–Protein Interaction Prediction (HiPPIP). Novel interactions predicted with this model are making translational impact. For example, we discovered a PPI between OASL and DDX58, which on validation showed that an increased expression of OASL could boost innate immunity to combat influenza by activating the RIG-I pathway.22 Also, the interactome of the genes associated with congenital heart disease showed that the disease morphogenesis has a close connection with the structure and function of cilia.23 Here, we describe the HiPPIP model and its application to SZ genes to construct the SZ interactome. After computational evaluations and experimental validations of selected novel PPIs, we present here 504 highly confident novel PPIs in the SZ interactome, shedding new light onto several uncharacterized genes that are associated with SZ.
We developed a computational model called HiPPIP to predict PPIs (see Methods and Supplementary File 1). The model has been evaluated by computational methods and experimental validations and is found to be highly accurate. Evaluations on held-out test data showed a precision of 97.5% and a recall of 5%. A 5% recall of the 150,000–600,000 interactions estimated for the human interactome corresponds to 7,500–30,000 novel PPIs. Note that the real precision is likely higher than 97.5%, because in this test data randomly paired proteins are treated as non-interacting protein pairs, whereas some of them may actually be interacting pairs with a small probability; thus, some of the pairs treated as false positives in the test set are likely to be true but hitherto unknown interactions. In Figure 1a, we show the precision versus recall of our method on ‘hub proteins’, where we considered all pairs that received a score >0.5 by HiPPIP to be novel interactions. In Figure 1b, we show the number of true positives versus false positives observed in hub proteins. Both figures also show our method to be superior to the prediction of the membrane-receptor interactome by Qi et al.24 True positives versus false positives are also shown for individual hub proteins by our method in Figure 1c and by Qi et al.24 in Figure 1d. These evaluations showed that our predictions contain mostly true positives. Unlike in other domains where ranked lists are commonly used, such as information retrieval, in PPI prediction the ‘false positives’ may actually be unlabeled instances that are indeed true interactions not yet discovered. In fact, such unlabeled pairs predicted as interactors of the hub gene HMGB1 (namely, the pairs HMGB1–KL and HMGB1–FLT1) were validated by experimental methods and found to be true PPIs (see Figures e–g in Supplementary File 3).
Thus, we concluded that protein pairs that received a score of ⩾0.5 are highly likely to be true interactions. Pairs with scores slightly below 0.5 (in the range 0.4–0.5) may also contain several true PPIs; however, we cannot confidently say that all pairs in this range are true PPIs. Only the PPIs predicted with a score >0.5 are included in the interactome.
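The inclusion rule described above (keep only pairs scored above 0.5, set aside the 0.4–0.5 borderline zone) can be sketched as a simple filter; the gene names and scores below are hypothetical:

```python
def select_confident_ppis(scored_pairs, threshold=0.5):
    """Keep only predicted pairs whose interaction score exceeds the cutoff."""
    return [(a, b, s) for (a, b, s) in scored_pairs if s > threshold]

predictions = [
    ("GENE_A", "GENE_B", 0.82),  # confident: included in the interactome
    ("GENE_C", "GENE_D", 0.47),  # borderline (0.4-0.5): excluded, may be true
    ("GENE_E", "GENE_F", 0.12),  # low score: excluded
]
interactome = select_confident_ppis(predictions)  # only the first pair survives
```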
Computational evaluation of predicted protein–protein interactions on hub proteins: (a) precision–recall curve. (b) True positives versus false positives in ranked lists of hub-type membrane receptors for our method and that by Qi et al. True positives versus false positives are shown for individual membrane receptors by our method in (c) and by Qi et al. in (d). The thick line is the average, which is also the same as shown in (b). Note: the x-axis is recall in (a), whereas it is the number of false positives in (b–d). The range of the y-axis is obtained by varying the threshold from 1.0 to 0 in (a), and to 0.5 in (b–d).
SZ interactome
By applying HiPPIP to the GWAS genes and Historic (pre-GWAS) genes, we predicted over 500 high-confidence new PPIs, adding to about 1,400 previously known PPIs.
Schizophrenia interactome: a network view of the schizophrenia interactome is shown as a graph, where genes are shown as nodes and PPIs as edges connecting the nodes. Schizophrenia-associated genes are shown as dark blue nodes, novel interactors as red nodes and known interactors as blue nodes. The source of the schizophrenia genes is indicated by the label font, where Historic genes are shown italicized, GWAS genes are shown in bold, and the one gene common to both is shown italicized and bold. For clarity, the source is also indicated by the shape of the node (triangular for GWAS, square for Historic, and hexagonal for both). Symbols are shown only for the schizophrenia-associated genes; actual interactions may be accessed on the web. Red edges are novel interactions, whereas blue edges are known interactions. GWAS, genome-wide association studies of schizophrenia; PPI, protein–protein interaction.
We have made the known and novel interactions of all SZ-associated genes available on a webserver called Schizo-Pi, at the address http://severus.dbmi.pitt.edu/schizo-pi. This webserver is similar to Wiki-Pi33, which presents comprehensive annotations of both participating proteins of a PPI side by side. The difference between Wiki-Pi, which we developed earlier, and Schizo-Pi is the inclusion of the novel predicted interactions of the SZ genes in the latter.
Despite the many advances in biomedical research, identifying the molecular mechanisms underlying disease is still challenging. Studies based on protein interactions have proven valuable in identifying novel gene associations that could shed new light on disease pathology.35 The interactome, including more than 500 novel PPIs, will help identify pathways and biological processes associated with the disease, as well as its relation to other complex diseases. It may also help identify potential drugs that could be repurposed for SZ treatment.
Functional and pathway enrichment in SZ interactome
When a gene of interest has little known information, functions of its interacting partners serve as a starting point to hypothesize its own function. We computed statistically significant enrichment of GO biological process terms among the interacting partners of each of the genes using BinGO36 (see online at http://severus.dbmi.pitt.edu/schizo-pi).
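At its core, the enrichment statistic used by tools such as BiNGO is a one-sided hypergeometric test. A self-contained sketch of that computation (illustrative gene counts, not values from this interactome; BiNGO additionally applies multiple-testing correction, which is omitted here):

```python
from math import comb

def go_enrichment_p(N, K, n, k):
    """One-sided hypergeometric p-value: the probability of seeing >= k
    term-annotated genes among n interacting partners drawn from N genes,
    K of which carry the GO term."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical counts: 20,000 genes, 200 carrying the term, 30 interacting
# partners, of which 8 are annotated with the term.
p = go_enrichment_p(20000, 200, 30, 8)
```

With these illustrative numbers the expected overlap by chance is only 0.3 genes, so observing 8 annotated partners yields a very small p-value and the term would be reported as enriched.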
Protein aggregation and aggregate toxicity: new insights into protein folding, misfolding diseases and biological evolution
Massimo Stefani · Christopher M. Dobson
Abstract The deposition of proteins in the form of amyloid fibrils and plaques is the characteristic feature of more than 20 degenerative conditions affecting either the central nervous system or a variety of peripheral tissues. As these conditions include Alzheimer’s, Parkinson’s and the prion diseases, several forms of fatal systemic amyloidosis, and at least one condition associated with medical intervention (haemodialysis), they are of enormous importance in the context of present-day human health and welfare. Much remains to be learned about the mechanism by which the proteins associated with these diseases aggregate and form amyloid structures, and how the latter affect the functions of the organs with which they are associated. A great deal of information concerning these diseases has emerged, however, during the past 5 years, much of it causing a number of fundamental assumptions about the amyloid diseases to be reexamined. For example, it is now apparent that the ability to form amyloid structures is not an unusual feature of the small number of proteins associated with these diseases but is instead a general property of polypeptide chains. It has also been found recently that aggregates of proteins not associated with amyloid diseases can impair the ability of cells to function to a similar extent as aggregates of proteins linked with specific neurodegenerative conditions. Moreover, the mature amyloid fibrils or plaques appear to be substantially less toxic than the prefibrillar aggregates that are their precursors. The toxicity of these early aggregates appears to result from an intrinsic ability to impair fundamental cellular processes by interacting with cellular membranes, causing oxidative stress and increases in free Ca2+ that eventually lead to apoptotic or necrotic cell death. The ‘new view’ of these diseases also suggests that other degenerative conditions could have similar underlying origins to those of the amyloidoses. 
In addition, cellular protection mechanisms, such as molecular chaperones and the protein degradation machinery, appear to be crucial in the prevention of disease in normally functioning living organisms. It also suggests some intriguing new factors that could be of great significance in the evolution of biological molecules and the mechanisms that regulate their behaviour.
The genetic information within a cell encodes not only the specific structures and functions of proteins but also the way these structures are attained through the process known as protein folding. In recent years many of the underlying features of the fundamental mechanism of this complex process and the manner in which it is regulated in living systems have emerged from a combination of experimental and theoretical studies [1]. The knowledge gained from these studies has also raised a host of interesting issues. It has become apparent, for example, that the folding and unfolding of proteins is associated with a whole range of cellular processes from the trafficking of molecules to specific organelles to the regulation of the cell cycle and the immune response. Such observations led to the inevitable conclusion that the failure to fold correctly, or to remain correctly folded, gives rise to many different types of biological malfunctions and hence to many different forms of disease [2]. In addition, it has been recognised recently that a large number of eukaryotic genes code for proteins that appear to be ‘natively unfolded’, and that proteins can adopt, under certain circumstances, highly organised multi-molecular assemblies whose structures are not specifically encoded in the amino acid sequence. Both these observations have raised challenging questions about one of the most fundamental principles of biology: the close relationship between the sequence, structure and function of proteins, as we discuss below [3].
It is well established that proteins that are ‘misfolded’, i.e. that are not in their functionally relevant conformation, are devoid of normal biological activity. In addition, they often aggregate and/or interact inappropriately with other cellular components, leading to impairment of cell viability and eventually to cell death. Many diseases, often known as misfolding or conformational diseases, ultimately result from the presence in a living system of protein molecules with structures that are ‘incorrect’, i.e. that differ from those in normally functioning organisms [4]. Such diseases include conditions in which a specific protein, or protein complex, fails to fold correctly (e.g. cystic fibrosis, Marfan syndrome, amyotrophic lateral sclerosis) or is not sufficiently stable to perform its normal function (e.g. many forms of cancer). They also include conditions in which aberrant folding behaviour results in the failure of a protein to be correctly trafficked (e.g. familial hypercholesterolaemia, α1-antitrypsin deficiency, and some forms of retinitis pigmentosa) [4]. The tendency of proteins to aggregate, often to give species extremely intractable to dissolution and refolding, is of course also well known in other circumstances. Examples include the formation of inclusion bodies during overexpression of heterologous proteins in bacteria and the precipitation of proteins during laboratory purification procedures. Indeed, protein aggregation is well established as one of the major difficulties associated with the production and handling of proteins in the biotechnology and pharmaceutical industries [5].
Considerable attention is presently focused on a group of protein folding diseases known as amyloidoses. In these diseases specific peptides or proteins fail to fold or to remain correctly folded and then aggregate (often with other components) so as to give rise to ‘amyloid’ deposits in tissue. Amyloid structures can be recognised because they possess a series of specific tinctorial and biophysical characteristics that reflect a common core structure based on the presence of highly organised β-sheets [6]. The deposits in strictly defined amyloidoses are extracellular and can often be observed as thread-like fibrillar structures, sometimes assembled further into larger aggregates or plaques. These diseases include a range of sporadic, familial or transmissible degenerative diseases, some of which affect the brain and the central nervous system (e.g. Alzheimer’s and Creutzfeldt-Jakob diseases), while others involve peripheral tissues and organs such as the liver, heart and spleen (e.g. systemic amyloidoses and type II diabetes) [7, 8]. In other forms of amyloidosis, such as primary or secondary systemic amyloidoses, proteinaceous deposits are found in skeletal tissue and joints (e.g. haemodialysis-related amyloidosis) as well as in several organs (e.g. heart and kidney). Yet other components such as collagen, glycosaminoglycans and proteins (e.g. serum amyloid protein) are often present in the deposits, protecting them against degradation [9, 10, 11]. Deposits similar to those in the amyloidoses are, however, found intracellularly in other diseases; these can be localised either in the cytoplasm, in the form of specialised aggregates known as aggresomes or as Lewy or Russell bodies, or in the nucleus (see below).
The presence in tissue of proteinaceous deposits is a hallmark of all these diseases, suggesting a causative link between aggregate formation and pathological symptoms (often known as the amyloid hypothesis) [7, 8, 12]. At the present time the link between amyloid formation and disease is widely accepted on the basis of a large number of biochemical and genetic studies. The specific nature of the pathogenic species, and the molecular basis of their ability to damage cells, are, however, the subject of intense debate [13, 14, 15, 16, 17, 18, 19, 20]. In neurodegenerative disorders it is very likely that the impairment of cellular function follows directly from the interactions of the aggregated proteins with cellular components [21, 22]. In the systemic non-neurological diseases, however, it is widely believed that the accumulation in vital organs of large amounts of amyloid deposits can by itself cause at least some of the clinical symptoms [23]. It is quite possible, however, that there are other more specific effects of aggregates on biochemical processes even in these diseases. The presence of extracellular or intracellular aggregates of a specific polypeptide molecule is a characteristic of all the 20 or so recognised amyloid diseases. The polypeptides involved include full-length proteins (e.g. lysozyme or immunoglobulin light chains), biological peptides (amylin, atrial natriuretic factor) and fragments of larger proteins produced as a result of specific processing (e.g. the Alzheimer β-peptide) or of more general degradation [e.g. poly(Q) stretches cleaved from proteins with poly(Q) extensions such as huntingtin, ataxins and the androgen receptor]. The peptides and proteins associated with known amyloid diseases are listed in Table 1. In some cases the proteins involved have wild type sequences, as in sporadic forms of the diseases, but in other cases these are variants resulting from genetic mutations associated with familial forms of the diseases.
In some cases both sporadic and familial diseases are associated with a given protein; in this case the mutational variants are usually associated with early-onset forms of the disease. In the case of the neurodegenerative diseases associated with the prion protein some forms of the diseases are transmissible. The existence of familial forms of a number of amyloid diseases has provided significant clues to the origins of the pathologies. For example, there are increasingly strong links between the age at onset of familial forms of disease and the effects of the mutations involved on the propensity of the affected proteins to aggregate in vitro. Such findings also support the link between the process of aggregation and the clinical manifestations of disease [24, 25].
The presence in cells of misfolded or aggregated proteins triggers a complex biological response. In the cytosol, this is referred to as the ‘heat shock response’ and in the endoplasmic reticulum (ER) it is known as the ‘unfolded protein response’. These responses lead to the expression, among others, of the genes for heat shock proteins (Hsp, or molecular chaperone proteins) and proteins involved in the ubiquitin-proteasome pathway [26]. The evolution of such complex biochemical machinery testifies to the fact that it is necessary for cells to isolate and clear rapidly and efficiently any unfolded or incorrectly folded protein as soon as it appears. In itself this fact suggests that these species could have a generally adverse effect on cellular components and cell viability. Indeed, it was a major step forward in understanding many aspects of cell biology when it was recognised that proteins previously associated only with stress, such as heat shock, are in fact crucial in the normal functioning of living systems. This advance, for example, led to the discovery of the role of molecular chaperones in protein folding and in the normal ‘housekeeping’ processes that are inherent in healthy cells [27, 28]. More recently a number of degenerative diseases, both neurological and systemic, have been linked to, or shown to be affected by, impairment of the ubiquitin-proteasome pathway (Table 2). The diseases are primarily associated with a reduction in either the expression or the biological activity of Hsps, ubiquitin, ubiquitinating or deubiquitinating enzymes and the proteasome itself, as we show below [29, 30, 31, 32], or even with the failure of the quality control mechanisms that ensure proper maturation of proteins in the ER. The latter normally leads to degradation of a significant proportion of polypeptide chains before they have attained their native conformations, through retrograde translocation to the cytosol [33, 34].
….
It is now well established that the molecular basis of protein aggregation into amyloid structures involves the existence of ‘misfolded’ forms of proteins, i.e. proteins that are not in the structures in which they normally function in vivo or of fragments of proteins resulting from degradation processes that are inherently unable to fold [4, 7, 8, 36]. Aggregation is one of the common consequences of a polypeptide chain failing to reach or maintain its functional three-dimensional structure. Such events can be associated with specific mutations, misprocessing phenomena, aberrant interactions with metal ions, changes in environmental conditions, such as pH or temperature, or chemical modification (oxidation, proteolysis). Perturbations in the conformational properties of the polypeptide chain resulting from such phenomena may affect equilibrium 1 in Fig. 1 increasing the population of partially unfolded, or misfolded, species that are much more aggregation-prone than the native state.
Fig. 1 Overview of the possible fates of a newly synthesised polypeptide chain. The equilibrium ① between the partially folded molecules and the natively folded ones is usually strongly in favour of the latter except as a result of specific mutations, chemical modifications or partially destabilising solution conditions. The increased equilibrium populations of molecules in the partially or completely unfolded ensemble of structures are usually degraded by the proteasome; when this clearance mechanism is impaired, such species often form disordered aggregates or shift equilibrium ② towards the nucleation of pre-fibrillar assemblies that eventually grow into mature fibrils (equilibrium ③). DANGER! indicates that pre-fibrillar aggregates in most cases display much higher toxicity than mature fibrils. Heat shock proteins (Hsp) can suppress the appearance of pre-fibrillar assemblies by minimising the population of the partially folded molecules, assisting in the correct folding of the nascent chain, while the unfolded protein response targets incorrectly folded proteins for degradation.
……
Little is known at present about the detailed arrangement of the polypeptide chains themselves within amyloid fibrils, either in those parts involved in the core β-strands or in the regions that connect the various β-strands. Recent data suggest that the sheets are relatively untwisted and may in some cases at least exist in quite specific supersecondary structure motifs such as β-helices [6, 40] or the recently proposed µ-helix [41]. It seems possible that there may be significant differences in the way the strands are assembled depending on characteristics of the polypeptide chain involved [6, 42]. Factors including length, sequence (and in some cases the presence of disulphide bonds or post-translational modifications such as glycosylation) may be important in determining details of the structures. Several recent papers report structural models for amyloid fibrils containing different polypeptide chains, including the Aβ40 peptide, insulin and fragments of the prion protein, based on data from such techniques as cryo-electron microscopy and solid-state magnetic resonance spectroscopy [43, 44]. These models have much in common and do indeed appear to reflect the fact that the structures of different fibrils are likely to be variations on a common theme [40]. It is also emerging that there may be some common and highly organised assemblies of amyloid protofilaments that are not simply extended threads or ribbons. It is clear, for example, that in some cases large closed loops can be formed [45, 46, 47], and there may be specific types of relatively small spherical or ‘doughnut’ shaped structures that can arise in at least some circumstances (see below).
…..
The similarity of some early amyloid aggregates with the pores resulting from oligomerisation of bacterial toxins and pore-forming eukaryotic proteins (see below) also suggests that the basic mechanism of protein aggregation into amyloid structures may not only be associated with diseases but in some cases could result in species with functional significance. Recent evidence indicates that a variety of micro-organisms may exploit the controlled aggregation of specific proteins (or their precursors) to generate functional structures. Examples include bacterial curli [52] and proteins of the interior fibre cells of mammalian ocular lenses, whose β-sheet arrays seem to be organised in an amyloid-like supramolecular order [53]. In this case the inherent stability of amyloid-like protein structure may contribute to the long-term structural integrity and transparency of the lens. Recently it has been hypothesised that amyloid-like aggregates of serum amyloid A found in secondary amyloidoses following chronic inflammatory diseases protect the host against bacterial infections by inducing lysis of bacterial cells [54]. One particularly interesting example is a ‘misfolded’ form of the milk protein α-lactalbumin that is formed at low pH and trapped by the presence of specific lipid molecules [55]. This form of the protein has been reported to trigger apoptosis selectively in tumour cells, providing evidence for its importance in protecting infants from certain types of cancer [55]. ….
Amyloid formation is a generic property of polypeptide chains ….
It is clear that the presence of different side chains can influence the details of amyloid structures, particularly the assembly of protofibrils, and that they give rise to the variations on the common structural theme discussed above. More fundamentally, the composition and sequence of a peptide or protein affects profoundly its propensity to form amyloid structures under given conditions (see below).
Because the formation of stable protein aggregates of amyloid type does not normally occur in vivo under physiological conditions, it is likely that the proteins encoded in the genomes of living organisms are endowed with structural adaptations that militate against aggregation under these conditions. A recent survey involving a large number of structures of β-proteins highlights several strategies through which natural proteins avoid intermolecular association of β-strands in their native states [65]. Other surveys of protein databases indicate that nature disfavours sequences of alternating polar and nonpolar residues, as well as clusters of several consecutive hydrophobic residues, both of which enhance the tendency of a protein to aggregate prior to becoming completely folded [66, 67].
……
Precursors of amyloid fibrils can be toxic to cells
It was generally assumed until recently that the proteinaceous aggregates most toxic to cells are likely to be mature amyloid fibrils, the form of aggregates that have been commonly detected in pathological deposits. It therefore appeared probable that the pathogenic features underlying amyloid diseases are a consequence of the interaction with cells of extracellular deposits of aggregated material. As well as forming the basis for understanding the fundamental causes of these diseases, this scenario stimulated the exploration of therapeutic approaches to amyloidoses that focused mainly on the search for molecules able to impair the growth and deposition of fibrillar forms of aggregated proteins. ….
Structural basis and molecular features of amyloid toxicity
The presence of toxic aggregates inside or outside cells can impair a number of cell functions that ultimately lead to cell death by an apoptotic mechanism [95, 96]. Recent research suggests, however, that in most cases initial perturbations to fundamental cellular processes underlie the impairment of cell function induced by aggregates of disease-associated polypeptides. Many pieces of data point to a central role of modifications to the intracellular redox status and free Ca2+ levels in cells exposed to toxic aggregates [45, 89, 97, 98, 99, 100, 101]. A modification of the intracellular redox status in such cells is associated with a sharp increase in the quantity of reactive oxygen species (ROS) that is reminiscent of the oxidative burst by which leukocytes destroy invading foreign cells after phagocytosis. In addition, changes have been observed in reactive nitrogen species, lipid peroxidation, deregulation of NO metabolism [97], protein nitrosylation [102] and upregulation of heme oxygenase-1, a specific marker of oxidative stress [103]. ….
Results have recently been reported concerning the toxicity towards cultured cells of aggregates of poly(Q) peptides that argue against a disease mechanism based on specific toxic features of the aggregates. These results indicate that there is a close relationship between the toxicity of proteins with poly(Q) extensions and their nuclear localisation. In addition they support the hypotheses that the toxicity of poly(Q) aggregates can be a consequence of altered interactions with nuclear coactivator or corepressor molecules including p53, CBP, Sp1 and TAF130, or of the interaction with transcription factors and nuclear coactivators, such as CBP, endowed with short poly(Q) stretches ([95] and references therein)…..
Concluding remarks
The data reported in the past few years strongly suggest that the conversion of normally soluble proteins into amyloid fibrils and the toxicity of small aggregates appearing during the early stages of the formation of the latter are common or generic features of polypeptide chains. Moreover, the molecular basis of this toxicity also appears to display common features between the different systems that have so far been studied. The ability of many, perhaps all, natural polypeptides to ‘misfold’ and convert into toxic aggregates under suitable conditions suggests that one of the most important driving forces in the evolution of proteins must have been the negative selection against sequence changes that increase the tendency of a polypeptide chain to aggregate. Nevertheless, as protein folding is a stochastic process, and no such process can be completely infallible, misfolded proteins or protein folding intermediates in equilibrium with the natively folded molecules must continuously form within cells. Thus mechanisms to deal with such species must have co-evolved with proteins. Indeed, it is clear that misfolding, and the associated tendency to aggregate, is kept under control by molecular chaperones, which render the resulting species harmless by assisting in their refolding, or by triggering their degradation by the cellular clearance machinery [166, 167, 168, 169, 170, 171, 172, 173, 175, 177, 178].
Misfolded and aggregated species are likely to owe their toxicity to the exposure on their surfaces of regions of proteins that are buried in the interior of the structures of the correctly folded native states. The exposure of large patches of hydrophobic groups is likely to be particularly significant as such patches favour the interaction of the misfolded species with cell membranes [44, 83, 89, 90, 91, 93]. Interactions of this type are likely to lead to the impairment of the function and integrity of the membranes involved, giving rise to a loss of regulation of the intracellular ion balance and redox status and eventually to cell death. In addition, misfolded proteins undoubtedly interact inappropriately with other cellular components, potentially giving rise to the impairment of a range of other biological processes. Under some conditions the intracellular content of aggregated species may increase directly, due to an enhanced propensity of incompletely folded or misfolded species to aggregate within the cell itself. This could occur as the result of the expression of mutational variants of proteins with decreased stability or cooperativity or with an intrinsically higher propensity to aggregate. It could also occur as a result of the overproduction of some types of protein, for example, because of other genetic factors or other disease conditions, or because of perturbations to the cellular environment that generate conditions favouring aggregation, such as heat shock or oxidative stress. Finally, the accumulation of misfolded or aggregated proteins could arise from the chaperone and clearance mechanisms becoming overwhelmed as a result of specific mutant phenotypes or of the general effects of ageing [173, 174].
The topics discussed in this review not only provide a great deal of evidence for the ‘new view’ that proteins have an intrinsic capability of misfolding and forming structures such as amyloid fibrils but also suggest that the role of molecular chaperones is even more important than was thought in the past. The role of these ubiquitous proteins in enhancing the efficiency of protein folding is well established [185]. It could well be that they are at least as important in controlling the harmful effects of misfolded or aggregated proteins as in enhancing the yield of functional molecules.
Nutritional Status is Associated with Faster Cognitive Decline and Worse Functional Impairment in the Progression of Dementia: The Cache County Dementia Progression Study
Nutritional status may be a modifiable factor in the progression of dementia. We examined the association of nutritional status with the rate of cognitive and functional decline in a U.S. population-based sample. The design was an observational longitudinal study with annual follow-ups for up to 6 years of 292 persons with dementia (72% Alzheimer’s disease, 56% female) in Cache County, UT, using the Mini-Mental State Exam (MMSE), Clinical Dementia Rating Sum of Boxes (CDR-sb), and modified Mini Nutritional Assessment (mMNA). mMNA scores declined by approximately 0.50 points/year, suggesting increasing risk for malnutrition. Lower mMNA scores predicted a faster rate of decline on the MMSE at earlier follow-up times but a slower decline at later follow-up times, whereas higher mMNA scores showed the opposite pattern (mMNA × time β = 0.22, p = 0.017; mMNA × time² β = –0.04, p = 0.04). Lower mMNA scores were also associated with greater impairment on the CDR-sb over the course of dementia (β = 0.35, p < 0.001). Assessment of malnutrition may be useful in predicting rates of progression in dementia and may provide a target for clinical intervention.
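Taking the two published interaction coefficients at face value and ignoring every other model term, a small sketch shows why the mMNA effect on the MMSE slope reverses direction during follow-up:

```python
B_T, B_T2 = 0.22, -0.04  # reported mMNA x time and mMNA x time^2 betas

def mmna_slope_term(mmna, t):
    """Contribution of the mMNA interaction terms to the instantaneous
    MMSE slope at year t (all other model terms omitted; illustration only)."""
    return mmna * (B_T + 2 * B_T2 * t)

# The contribution changes sign at t = -B_T / (2 * B_T2) = 2.75 years.
crossover = -B_T / (2 * B_T2)
```

For any positive mMNA score the term is positive before roughly 2.75 years of follow-up and negative after, matching the early-versus-late reversal described in the abstract.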
Shared Genetic Risk Factors for Late-Life Depression and Alzheimer’s Disease
Background: Considerable evidence has been reported for the comorbidity between late-life depression (LLD) and Alzheimer’s disease (AD), both of which are very common in the general elderly population and represent a large burden on the health of the elderly. The pathophysiological mechanisms underlying the link between LLD and AD are poorly understood. Because both LLD and AD can be heritable and are influenced by multiple risk genes, shared genetic risk factors between LLD and AD may exist. Objective: The objective is to review the existing evidence for genetic risk factors that are common to LLD and AD and to outline the biological substrates proposed to mediate this association. Methods: A literature review was performed. Results: Genetic polymorphisms of brain-derived neurotrophic factor, apolipoprotein E, interleukin 1-beta, and methylenetetrahydrofolate reductase have been demonstrated to confer increased risk to both LLD and AD by studies examining either LLD or AD patients. These results contribute to the understanding of pathophysiological mechanisms that are common to both of these disorders, including deficits in nerve growth factors, inflammatory changes, and dysregulation mechanisms involving lipoprotein and folate. Other conflicting results have also been reviewed, and few studies have investigated the effects of the described polymorphisms on both LLD and AD. Conclusion: The findings suggest that common genetic pathways may underlie LLD and AD comorbidity. Studies to evaluate the genetic relationship between LLD and AD may provide insights into the molecular mechanisms that trigger disease progression as the population ages.
Association of Vitamin B12, Folate, and Sulfur Amino Acids With Brain Magnetic Resonance Imaging Measures in Older Adults: A Longitudinal Population-Based Study
Importance Vitamin B12, folate, and sulfur amino acids may be modifiable risk factors for structural brain changes that precede clinical dementia.
Objective To investigate the association of circulating levels of vitamin B12, red blood cell folate, and sulfur amino acids with the rate of total brain volume loss and the change in white matter hyperintensity volume as measured by fluid-attenuated inversion recovery in older adults.
Design, Setting, and Participants The magnetic resonance imaging subsample of the Swedish National Study on Aging and Care in Kungsholmen, a population-based longitudinal study in Stockholm, Sweden, was conducted in 501 participants aged 60 years or older who were free of dementia at baseline. A total of 299 participants underwent repeated structural brain magnetic resonance imaging scans from September 17, 2001, to December 17, 2009.
Main Outcomes and Measures The rate of brain tissue volume loss and the progression of total white matter hyperintensity volume.
Results In the multi-adjusted linear mixed models, among 501 participants (300 women [59.9%]; mean [SD] age, 70.9 [9.1] years), higher baseline vitamin B12 and holotranscobalamin levels were associated with a decreased rate of total brain volume loss during the study period: for each increase of 1 SD, β (SE) was 0.048 (0.013) for vitamin B12 (P < .001) and 0.040 (0.013) for holotranscobalamin (P = .002). Increased total homocysteine levels were associated with faster rates of total brain volume loss in the whole sample (β [SE] per 1-SD increase, –0.035 [0.015]; P = .02) and with the progression of white matter hyperintensity among participants with systolic blood pressure greater than 140 mm Hg (β [SE] per 1-SD increase, 0.000019 [0.00001]; P = .047). No longitudinal associations were found for red blood cell folate and other sulfur amino acids.
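The βs above are expressed per 1-SD increase in each biomarker. A minimal sketch (with a hypothetical mean and SD, not values from the study) of converting a raw measurement into the corresponding predicted effect:

```python
def per_sd_effect(x, mean, sd, beta):
    """Predicted outcome change for raw value x when beta is reported
    per 1-SD increase of the predictor."""
    return beta * (x - mean) / sd

# Hypothetical B12 distribution: mean 350 pmol/L, SD 130; beta = 0.048.
# A value exactly one SD above the mean contributes the full beta.
effect = per_sd_effect(480, 350, 130, 0.048)
```

This is why standardized coefficients from different biomarkers (vitamin B12, holotranscobalamin, homocysteine) can be compared directly even though their raw units differ.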
Conclusions and Relevance This study suggests that both vitamin B12 and total homocysteine concentrations may be related to accelerated aging of the brain. Randomized clinical trials are needed to determine the importance of vitamin B12 supplementation in slowing brain aging in older adults.
Notes from Kurzweil
This vitamin stops the aging process in organs, say Swiss researchers
A potential breakthrough for regenerative medicine, pending further studies
Improved muscle stem cell numbers and muscle function in NR-treated aged mice: Newly regenerated muscle fibers 7 days after muscle damage in aged mice (left: control group; right: fed NR). (Scale bar = 50 μm). (credit: Hongbo Zhang et al./Science) http://www.kurzweilai.net/images/improved-muscle-fibers.png
EPFL researchers have restored the ability of mice organs to regenerate and extend life by simply administering nicotinamide riboside (NR) to them.
NR has been shown in previous studies to be effective in boosting metabolism and treating a number of degenerative diseases. Now, an article by PhD student Hongbo Zhang published in Science also describes the restorative effects of NR on the functioning of stem cells for regenerating organs.
As in all mammals, as mice age, the regenerative capacity of certain organs (such as the liver and kidneys) and muscles (including the heart) diminishes. Their ability to repair them following an injury is also affected. This leads to many of the disorders typical of aging.
Mitochondria → stem cells → organs
To understand how the regeneration process deteriorates with age, Zhang teamed up with colleagues from ETH Zurich, the University of Zurich, and universities in Canada and Brazil. By using several biomarkers, they were able to identify the molecular chain that regulates how mitochondria — the “powerhouse” of the cell — function and how they change with age. “We were able to show for the first time that their ability to function properly was important for stem cells,” said Johan Auwerx, the EPFL professor who leads the laboratory where Zhang works.
Under normal conditions, these stem cells, reacting to signals sent by the body, regenerate damaged organs by producing new specific cells. At least in young bodies. “We demonstrated that fatigue in stem cells was one of the main causes of poor regeneration or even degeneration in certain tissues or organs,” said Zhang.
How to revitalize stem cells
That is why the researchers wanted to “revitalize” stem cells in the muscles of elderly mice, and they did so by precisely targeting the molecules that help the mitochondria function properly. “We gave nicotinamide riboside to 2-year-old mice, which is an advanced age for them,” said Zhang.
“This substance, which is close to vitamin B3, is a precursor of NAD+, a molecule that plays a key role in mitochondrial activity. And our results are extremely promising: muscular regeneration is much better in mice that received NR, and they lived longer than the mice that didn’t get it.”
Parallel studies have revealed a comparable effect on stem cells of the brain and skin. “This work could have very important implications in the field of regenerative medicine,” said Auwerx. This work on the aging process also has potential for treating diseases that can affect, and be fatal to, young people, such as muscular dystrophy (myopathy).
So far, no negative side effects have been observed following the use of NR, even at high doses. But while it appears to boost the functioning of all cells, it could include pathological ones, so further in-depth studies are required.
Abstract of NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice
Adult stem cells (SCs) are essential for tissue maintenance and regeneration yet are susceptible to senescence during aging. We demonstrate the importance of the amount of the oxidized form of cellular nicotinamide adenine dinucleotide (NAD+) and its impact on mitochondrial activity as a pivotal switch to modulate muscle SC (MuSC) senescence. Treatment with the NAD+ precursor nicotinamide riboside (NR) induced the mitochondrial unfolded protein response (UPRmt) and synthesis of prohibitin proteins, and this rejuvenated MuSCs in aged mice. NR also prevented MuSC senescence in the Mdx mouse model of muscular dystrophy. We furthermore demonstrate that NR delays senescence of neural SCs (NSCs) and melanocyte SCs (McSCs), and increases mouse lifespan. Strategies that conserve cellular NAD+ may reprogram dysfunctional SCs and improve lifespan in mammals.
Discriminating the gene target of a distal regulatory element from other nearby transcribed genes is a challenging problem with the potential to illuminate the causal underpinnings of complex diseases. We present TargetFinder, a computational method that reconstructs regulatory landscapes from diverse features along the genome. The resulting models accurately predict individual enhancer–promoter interactions across multiple cell lines with a false discovery rate up to 15 times smaller than that obtained using the closest gene. By evaluating the genomic features driving this accuracy, we uncover interactions between structural proteins, transcription factors, epigenetic modifications, and transcription that together distinguish interacting from non-interacting enhancer–promoter pairs. Most of this signature is not proximal to the enhancers and promoters but instead decorates the looping DNA. We conclude that complex but consistent combinations of marks on the one-dimensional genome encode the three-dimensional structure of fine-scale regulatory interactions.
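The headline comparison above — a false discovery rate up to 15 times smaller than the closest-gene baseline — rests on a simple definition: FDR is the fraction of predicted enhancer–promoter interactions that are not real. The sketch below illustrates only that metric on made-up labels; it is not the TargetFinder pipeline, which trains boosted-tree classifiers on hundreds of genomic features.

```python
import numpy as np

def false_discovery_rate(predicted, actual):
    """FDR = FP / (FP + TP): the fraction of predicted-positive
    enhancer-promoter pairs that are not true interactions."""
    predicted = np.asarray(predicted, dtype=bool)
    actual = np.asarray(actual, dtype=bool)
    tp = int(np.sum(predicted & actual))   # correctly called interactions
    fp = int(np.sum(predicted & ~actual))  # false calls
    return fp / (fp + tp) if (fp + tp) else 0.0

# Toy labels for six candidate pairs (1 = true interaction); both
# prediction vectors are hypothetical, for illustration only.
truth = [1, 0, 1, 0, 0, 1]
closest_gene = [1, 1, 0, 1, 0, 1]  # naive "nearest gene" baseline
model = [1, 0, 1, 0, 0, 1]         # feature-based classifier

print(false_discovery_rate(closest_gene, truth))  # 0.5: 2 of 4 calls wrong
print(false_discovery_rate(model, truth))         # 0.0
```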
Mapping the dreaming brain through neuroimaging and studies of brain damage
By Karen Zusi | March 1, 2016
Prefrontal leucotomies—surgeries to cut a section of white matter in the front of the brain, thus severing the frontal lobe’s connections to other brain regions—were all the rage through the 1950s as treatments for psychoses. The operations drastically altered the mental state of most patients. But along with personality changes, dulled initiative, and reduced imagination came a seemingly innocuous effect of many of these procedures: the patients stopped dreaming.
Mark Solms, a neuropsychologist at the University of Cape Town in South Africa, uncovered the correlation in historical data from around the globe as part of a long-term study to assess the impact, on dreams and dreaming, of damage to different parts of the brain. Between 1985 and 1995, Solms interviewed 332 of his own patients at hospitals in Johannesburg and London who had various types of brain trauma, asking them about their nightly experiences.
Solms identified two brain regions that appeared critical for the experience of dreaming. The first was at the junction of the parietal, temporal, and occipital lobes—a cortical area that supports spatial cognition and mental imagery. The second was the ventromesial quadrant of the frontal lobes, a lump of white matter commonly associated with goal-seeking behavior that links the limbic structures to the frontal cortex. “This lesion site rang a historical bell in my mind—that’s where the prefrontal leucotomy used to be done,” says Solms, adding that the operation controlled the hallucinations and delusions that came with psychosis. “That sort of struck me as, ‘Gosh, that’s what dreaming is.’” Lesions in other areas could intensify or reduce certain aspects of dreams, but damage to either of the regions Solms pinpointed reportedly caused dreaming to cease completely (Psychoanal Q, 64:43-67, 1995).
Advances in neuroimaging have lent more support to Solms’s brain map, and pinned down other areas that researchers now understand play a part in dream development. In 2013, Bill Domhoff, a psychologist from the University of California, Santa Cruz, and colleagues from the University of British Columbia published results that combined neuroimaging scans from separate studies of REM sleep and daydreaming. They discovered that brain regions that light up when there’s a high chance that one is dreaming overlapped with parts of the brain’s default mode network—regions active when the brain is awake but not focused on a specific external task (Front Hum Neurosci, 7:412, 2013). “It very much lines up,” says Domhoff. “It’s just stunning.”
The default mode network allows us to turn our attention inward, and dreaming is the extreme example, explains Jessica Andrews-Hanna, a cognitive scientist at the University of Colorado Boulder. The network takes up a large amount of cortical real estate. Key players are regions on the midline of the brain that support memories and future planning; these brain sections connect to other areas affecting how we process social encounters and imagine other individuals’ thoughts. “When people are sleeping—in particular, when they’re dreaming—the default mode network actually stays very active,” says Andrews-Hanna. With external stimuli largely cut off, the brain operates in a closed loop, and flights of fancy often ensue.
We usually take the bizarre nature of these experiences at face value. “Even in a completely crazy dream, we all think that it’s normal,” says Martin Dresler, a cognitive neuroscientist at Radboud University in the Netherlands. Dresler and many other researchers attribute this blasé acceptance to the deactivation of a brain region called the dorsolateral prefrontal cortex. When we sleep, the dorsolateral prefrontal cortex powers down, and higher executive control—which would normally flag a nonsensical concern, such as running late for a class when you haven’t been in school for a decade, as unimportant—evaporates. “You have this overactive default mode network with no connectivity, with no communication with regions that are important for making sense of the thoughts,” says Andrews-Hanna.
In healthy sleeping subjects, these executive functions can be unlocked in what’s known as lucid dreaming, when the prefrontal cortex reactivates and sleepers gain awareness of and control over their imagined actions. A lucid dreamer can actually “direct” a dream as it unfolds, deciding to fly, for example, or turning a nightmarish monster into a docile pet.
Records of lucid dreaming are limited to REM sleep, the sleep stage where the brain is most active. REM sleep normally induces paralysis to prevent people from acting out their dreams, but the eye muscles are exempt, and this gives skilled lucid dreamers a way to signal their lucidity to researchers.
Dresler’s team is using this phenomenon as a tool to ask specific questions about dreams. Before trained lucid dreamers fall asleep in Dresler’s lab, they agree to flick their eyes from left to right as soon as they realize within a dream that they’re asleep. The dreamed movement causes their actual eyes to move in a similar way under their closed eyelids. Researchers mark this signal as the beginning of a lucid dream, and then track brain patterns associated with specific dreamed actions. Dreaming also occurs in non-REM sleep, but with the brain less active, the eye muscles won’t respond to dream input—so there’s no robust way to tell if lucid dreaming takes place.
When subjects achieved lucidity and consciously dreamed that they performed a predetermined hand movement, Dresler’s research team observed activity in the sensorimotor cortex matching what would occur if the subjects actually moved their hands while awake (Curr Biol, 21:1833-37, 2011). “It’s probably the case that, for most of what we are dreaming about, the very same machinery and the very same brain regions are active compared to wakefulness,” says Dresler. “It’s just that the motor execution is stopped at the spinal level.”
Beyond sleep research, tracking lucid and normal dreaming offers an investigative model to study aspects of psychosis, according to some researchers. “These regions that are activated during lucid dreaming are typically impaired in patients with psychosis,” explains Dresler. “Having insight into your non-normal mental state in dreaming shares neural correlates with having insights into your non-normal state of consciousness in psychosis.” Dresler proposes training patients in early stages of psychosis to dream lucidly, in the hope that it might grant them some therapeutically relevant understanding of their illness.
While executive functions are impaired in many patients suffering from psychosis, their default networks seem to be overactive, says Andrews-Hanna. But how much similarity exists between the brain states of dreaming and psychosis remains controversial. Domhoff emphasizes the unique nature of dreams. “They’re not like schizophrenia, they’re not like meditation, they’re not like any kind of drug trip,” he says. “They’re an enactment of a scenario that is based upon various wishes and concerns.”
Ultimately, says Solms, deciphering dreaming furthers the field’s knowledge of what the brain does, as much as studies conducted during waking hours. “If you’re a clinician, and you understand what the different parts of the brain do in relation to dreaming, then it’s one of the things you can use as a road map for evaluating your patients.”
Dreamed Movement Elicits Activation in the Sensorimotor Cortex
Since the discovery of the close association between rapid eye movement (REM) sleep and dreaming, much effort has been devoted to link physiological signatures of REM sleep to the contents of associated dreams [1, 2, 3 and 4]. Due to the impossibility of experimentally controlling spontaneous dream activity, however, a direct demonstration of dream contents by neuroimaging methods is lacking. By combining brain imaging with polysomnography and exploiting the state of “lucid dreaming,” we show here that a predefined motor task performed during dreaming elicits neuronal activation in the sensorimotor cortex. In lucid dreams, the subject is aware of the dreaming state and capable of performing predefined actions while all standard polysomnographic criteria of REM sleep are fulfilled [5 and 6]. Using eye signals as temporal markers, neural activity measured by functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS) was related to dreamed hand movements during lucid REM sleep. Though preliminary, we provide first evidence that specific contents of REM-associated dreaming can be visualized by neuroimaging.
Highlights
► Eye signals can be used to access dream content with concurrent EEG and neuroimaging
► Dreamed hand movements correspond to activity in the contralateral sensorimotor cortex
Lucid dreaming is a rare but robust state of sleep that can be trained [5]. Phenomenologically, it comprises features of both waking and dreaming [7]: in lucid dreams, the sleeping subject becomes aware of his or her dreaming state, has full access to memory, and is able to volitionally control dreamed actions [6]. Although all standard polysomnographic criteria of rapid eye movement (REM) sleep [8] are maintained and REM sleep muscle atonia prevents overt motor behavior, lucid dreamers are able to communicate their state by predefined volitional eye movements [6], clearly discernable in the electrooculogram (EOG) (Figure 1). Combining the techniques of lucid dreaming, polysomnography, and brain imaging via functional magnetic resonance imaging (fMRI) or near-infrared spectroscopy (NIRS), we demonstrate the possibility to investigate the neural underpinnings of specific dream contents—in this case, dreamed hand clenching. Predecided eye movements served as temporal markers for the onset of hand clenching and for hand switching. Previous studies have shown that muscle atonia prevents the overt execution of dreamed hand movements, which are visible as minor muscle twitches at most [3 and 9].
Figure 1.
Exemplary Lucid REM Sleep as Captured by Polysomnography during Simultaneous fMRI
Note high-frequency electroencephalogram (EEG) and minimal electromyogram (EMG) amplitude due to muscle atonia characteristic of rapid eye movement (REM) sleep (left), with wakefulness for comparison (right). Subjects were instructed to communicate the state of lucidity by quick left-right-left-right (LRLR) eye movements. Filter settings are as follows: EEG, bandpass filter 0.5−70 Hz, with additional notch filter at 50 Hz; electrooculogram (EOG), bandpass filter 0.1–30 Hz; EMG, bandpass filter 16–250 Hz.
Figure 2.
Comparison of Sensorimotor Activation during Wakefulness and Sleep
Functional magnetic resonance imaging (fMRI) blood oxygen level-dependent (BOLD)-response increases were contrasted between left and right hand movements (columns) in the three conditions (rows): executed hand movement during wakefulness (WE) (A), imagined hand movement during wakefulness (WI) (B), and dreamed hand movement during lucid REM sleep (LD) (C). Effects of left (right) hand movements were calculated in a fixed-effects analysis as a contrast “left > right” and “right > left,” respectively. Subpanels depict results in an SPM glass-brain view (sagittal and coronal orientation) to demonstrate the regional specificity of the associated cortical activation, along with sensorimotor activation overlaid on an axial slice of the subject’s T1-weighted anatomical scan (position indicated on the glass brain for condition A). Clusters of activation in the glass-brain views are marked using the numbering given in Table S1. Red outlines in the glass-brain views mark the extent of activation found in the WE condition. This region of interest (ROI) was derived from the respective activation map during executed hand movement (A), thresholded at whole-brain corrected pFWE < 0.005, cluster extent >50 voxels, and served as a ROI for analysis of the WI and LD conditions in (B) and (C), respectively. T values are color-coded as indicated. The time course of the peak voxel inside the ROI is depicted (black) along with the predicted hemodynamic response based on the external pacing (A and B) or the predefined LRLR eye signals in (C). The maximal difference in activation of the peak voxel between conditions is indicated as percentage of BOLD signal fluctuations of the predicted time course (gray).
FMRI results were confirmed by an independent imaging method in a second subject: NIRS data showed a typical hemodynamic response pattern of increased contralateral oxygenation over the sensorimotor region during successful task performance in lucid REM sleep (Figure 3; Figure 4). Notably, during dreaming, the hemodynamic responses were smaller in the sensorimotor cortex but of similar amplitude in the supplementary motor area (SMA) when compared to overt motor performance during wakefulness.
Figure 3.
Near-Infrared Spectroscopy Topography
Concentration changes of oxygenated (Δ[HbO], upper panel) and deoxygenated hemoglobin (Δ[HbR], lower panel) during executed (WE) and imagined (WI) hand clenching in the awake state and dreamed hand clenching (LD). The optical probe array covered an area of ∼7.5 × 12.5 cm² over the right sensorimotor area. The solid box indicates the ROI over the right sensorimotor cortex, with NIRS channels surrounding the C4-EEG electrode position. NIRS channels located centrally over the midline and more anterior compared to the sensorimotor ROI were chosen as the ROI for the supplementary motor area (SMA, dotted box).
Figure 4.
Condition-Related NIRS Time Courses
Time courses of HbO (red traces) and HbR (blue traces) from the right sensorimotor ROI (left panel) and the supplementary motor ROI SMA (right panel) for executed (WE) and imagined (WI) hand clenching in the awake state and dreamed hand clenching (LD). The time courses represent averaged time courses from NIRS channels within the respective ROI (Figure 3). For each condition, 0 s denotes the onset of hand clenching indicated by LRLR signals. Note that the temporal dynamics, i.e., an increase in HbO and a decrease in HbR, are in line with the typical hemodynamic response. Overt movement during wakefulness (dark red/blue traces) showed the strongest hemodynamic response, whereas the motor task during dreaming led to smaller changes (light red/blue traces). In the SMA, the hemodynamic response was stronger during the dreamed task than during imagined movement in the awake state.
Neurophysiological studies suggest that during REM sleep, the brain functions as a closed loop system, in which activation is triggered in pontine regions while sensory input is gated by enhanced thalamic inhibition and motor output is suppressed by atonia generated at the brain stem level [4 and 12].
Efforts have been made to correlate REMs to gaze direction during dreams—the “scanning hypothesis” [1 and 2]—and indeed similar cortical areas are involved in eye movement generation in wake and REM sleep [17]. In a similar vein, small muscle twitches during REM sleep were presumed to signal a change in the dream content [3]. Dream research methodology mostly relies on the evaluation of subjective reports of very diverse dream contents.
During dreaming, activation was much more localized, in small clusters representing either generally weaker activation or focal activation of hand areas only, with signal fluctuations only on the order of 50% of those of the actually executed task during wakefulness. The SMA is involved in the timing, preparation, and monitoring of movements [21], and is linked to the retrieval of a learned motor sequence, especially in the absence of external cues [22]. Our NIRS data speak for an activation of the SMA even during simple movements. This is in line with several PET and fMRI studies reporting SMA activations for simple tasks such as hand clenching, single finger-tapping, and alternated finger-tapping.
While You Were Sleeping
Assessing body position in addition to activity may improve monitoring of sleep-wake periods.
By Ruth Williams | March 1, 2016
Polysomnography—the combined assessment of brain waves, heart rate, oxygen saturation, muscle activity, and other parameters—is the most precise way to track a person’s sleeping patterns. However, the equipment required for such analyses is expensive, bulky, and disruptive to natural behavior.
Researchers are thus searching for ways to improve the accuracy of wearable devices while maintaining user-friendliness. Maria Angeles Rol of the University of Murcia in Spain and her colleagues have now discovered that by using a device strapped to the patient’s upper arm that measures both arm activity and position (the degree of tilt), they can more precisely detect periods of sleep.
The researchers studied just 13 people in this pilot study, notes Barbara Galland of the University of Otago in New Zealand, who adds that it nonetheless “provide[s] an opening for further investigations to demonstrate the value of this novel technique.” (Chronobiol Int, 32:701-10, 2015)
Validation of an innovative method, based on tilt sensing, for the assessment of activity and body position
Since there is less movement during sleep than during wake, the recording of body movements by actigraphy has been used to indirectly evaluate the sleep–wake cycle. In general, most actigraphic devices are placed on the wrist and their measures are based on acceleration detection. Here, we propose an alternative way of measuring actigraphy at the level of the arm for joint evaluation of activity and body position. This method analyzes the tilt of three axes, scoring activity as the cumulative change of degrees per minute with respect to the previous sampling, and measuring arm tilt for the body position inference. In this study, subjects (N = 13) went about their daily routine for 7 days, kept daily sleep logs, wore three ambulatory monitoring devices and collected sequential saliva samples during evenings for the measurement of dim light melatonin onset (DLMO). These devices measured motor activity (arm activity, AA) and body position (P) using the tilt sensing of the arm, with acceleration (wrist acceleration, WA) and skin temperature at wrist level (WT). Cosinor, Fourier and non-parametric rhythmic analyses were performed for the different variables, and the results were compared by the ANOVA test. Linear correlations were also performed between actimetry methods (AA and WA) and WT. The AA and WA suitability for circadian phase prediction and for evaluating the sleep–wake cycle was assessed by comparison with the DLMO and sleep logs, respectively. All correlations between rhythmic parameters obtained from AA and WA were highly significant. Only parameters related to activity levels, such as mesor, RA (relative amplitude), VL5 and VM10 (value for the 5 and 10 consecutive hours of minimum and maximum activity, respectively) showed significant differences between AA and WA records. 
However, when a correlation analysis was performed on the phase markers acrophase, mid-time for the 10 consecutive hours of highest (M10) and mid-time for the five consecutive hours of lowest activity (L5) with DLMO, all of them showed a significant correlation for AA (R = 0.607, p = 0.028; R = 0.582, p = 0.037; R = 0.620, p = 0.031, respectively), while for WA, only acrophase did (R = 0.621, p = 0.031). Regarding sleep detection, WA showed higher specificity than AA (0.95 ± 0.01 versus 0.86 ± 0.02), while the agreement rate and sensitivity were higher for AA (0.76 ± 0.02 versus 0.66 ± 0.02 and 0.71 ± 0.03 versus 0.53 ± 0.03, respectively). Cohen’s kappa coefficient also presented the highest values for AA (0.49 ± 0.04) and AP (0.64 ± 0.04), followed by WT (0.45 ± 0.06) and WA (0.37 ± 0.04). The findings demonstrate that this alternative actigraphy method (AA), based on tilt sensing of the arm, can be used to reliably evaluate the activity and sleep–wake rhythm, since it presents a higher agreement rate and sensitivity for detecting sleep, at the same time allows the detection of body position and improves circadian phase assessment compared to the classical actigraphic method based on wrist acceleration.
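The agreement rate, sensitivity, specificity, and Cohen’s kappa reported above all come from an epoch-by-epoch comparison of device-scored sleep against the sleep logs. A minimal sketch of those four metrics follows; the epoch labels and function name are invented for illustration and are not from the paper.

```python
def sleep_detection_metrics(device, log):
    """Compare per-epoch sleep calls (True = sleep) from a device
    against a reference sleep log."""
    tp = sum(d and l for d, l in zip(device, log))          # sleep called sleep
    tn = sum(not d and not l for d, l in zip(device, log))  # wake called wake
    fp = sum(d and not l for d, l in zip(device, log))      # wake called sleep
    fn = sum(not d and l for d, l in zip(device, log))      # sleep called wake
    n = tp + tn + fp + fn
    agreement = (tp + tn) / n
    sensitivity = tp / (tp + fn)   # fraction of sleep epochs detected
    specificity = tn / (tn + fp)   # fraction of wake epochs detected
    # Cohen's kappa: observed agreement corrected for chance agreement p_e
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (agreement - p_e) / (1 - p_e)
    return agreement, sensitivity, specificity, kappa

# Five epochs: device scores sleep in epochs 0, 1, 4; the log in 0 and 4.
agr, sens, spec, kap = sleep_detection_metrics(
    [True, True, False, False, True], [True, False, False, False, True])
print(agr, sens, round(spec, 3), round(kap, 3))  # 0.8 1.0 0.667 0.615
```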
Sleep’s Kernel
Surprisingly small sections of brain, and even neuronal and glial networks in a dish, display many electrical indicators of sleep.
Sleep is usually considered a whole-brain phenomenon in which neuronal regulatory circuits impose sleep on the brain. This paradigm has its origins in the historically important work of Viennese neurologist Constantin von Economo, who found that people who suffered from brain infections that damaged the anterior hypothalamus slept less. The finding was a turning point in sleep research, as it suggested that sleep was a consequence of active processes within the brain. This stood in stark contrast to the ideas of renowned St. Petersburg physiologist Ivan Pavlov, who believed that sleep resulted from the passive withdrawal of sensory input. Although the withdrawal of sensory input remains recognized as playing a role in sleep initiation, there is now much evidence supporting the idea that neuronal and glial activity in the anterior hypothalamus leads to the inhibition of multiple excitatory neuronal networks that project widely throughout the brain.
But we also know from millions of stroke cases that cause brain damage and from experimentally induced brain damage in animal models that, regardless of where a lesion occurs in the brain, including the anterior hypothalamus, all humans or animals that survive the brain damage will continue to sleep. Further, a key question remains inadequately answered: How does the hypothalamus know to initiate sleep? Unless one believes in the separation of mind and brain, then, one must ask: What is telling the hypothalamus to initiate sleep? If an answer is found, it leads to: What is telling the structure that told the hypothalamus? This is what philosophers call an infinite regress, an unacceptable spiral of logic.
For these reasons, 25 years ago the late Ferenc Obál Jr. of A. Szent-Györgyi Medical University in Szeged, Hungary, and I (J.K.) began questioning the prevailing ideas of how sleep is regulated. The field needed answers to fundamental questions. What is the minimum amount of brain tissue required for sleep to manifest? Where is sleep located? What actually sleeps? Without knowing what sleeps or where sleep is, how can one talk with any degree of precision about sleep regulation or sleep function? A new paradigm was needed.
There is no direct measure of sleep, and no single measure is always indicative of sleep. Quiescent behavior and muscle relaxation usually occur simultaneously with sleep but are also found in other circumstances, such as during meditation or watching a boring TV show. Sleep is thus defined in the clinic and in experimental animals using a combination of multiple parameters that typically correlate with sleep.
The primary tool for assessing sleep state in mammals and birds is the electroencephalogram (EEG). High-amplitude delta waves (0.5–4 Hz) are a defining characteristic of the deepest stage of non–rapid eye movement (non-REM) sleep. However, similar waves are evident in adolescents who hyperventilate for a few seconds while wide awake. Other measures used to characterize sleep include synchronization of electrical activity between EEG electrodes and the quantification of EEG delta wave amplitudes. Within specific sensory circuits, the cortical electrical responses induced by sensory stimulation (called evoked response potentials, or ERPs) are higher during sleep than during waking. And individual neurons in the cerebral cortex and thalamus display action potential burst-pause patterns of firing during sleep.
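Since the delta band is simply a frequency range, “high-amplitude delta waves” can be quantified as the share of EEG spectral power between 0.5 and 4 Hz. The snippet below illustrates that idea on a synthetic signal; it is a sketch, not a validated scoring pipeline (real EEG analysis typically adds Welch averaging and artifact rejection).

```python
import numpy as np

def delta_fraction(eeg, fs, lo=0.5, hi=4.0):
    """Fraction of (non-DC) spectral power in the delta band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    in_band = (freqs >= lo) & (freqs <= hi)
    return power[in_band].sum() / power[1:].sum()  # skip the DC bin

fs = 100.0                        # 100 Hz sampling rate
t = np.arange(0, 10, 1 / fs)      # 10 s of synthetic "EEG"
slow = np.sin(2 * np.pi * 2 * t)  # 2 Hz oscillation: inside the delta band
fast = np.sin(2 * np.pi * 10 * t) # 10 Hz oscillation: outside the delta band

print(delta_fraction(slow, fs) > 0.9)  # True: power concentrated in delta
print(delta_fraction(fast, fs) < 0.1)  # True: almost no delta power
```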
Using such measures, researchers have shown that different parts of the mammalian brain can sleep independently of one another. Well-characterized sleep regulatory substances, or somnogens, such as growth hormone releasing hormone (GHRH) and tumor necrosis factor α (TNF-α), can induce supranormal EEG delta waves during non-REM sleep in the specific half of the rat brain where the molecules were injected. Conversely, if endogenous TNF-α or GHRH production is inhibited, spontaneous EEG delta waves during non-REM sleep are lower on the side receiving the inhibitor. A more natural example of sleep lateralization is found in the normal unihemispheric sleep of some marine mammals. (See “Who Sleeps?”)
Much smaller parts of the brain also exhibit sleep-like cycles. As early as 1949, Kristian Kristiansen and Guy Courtois at McGill University and the Montreal Neurological Institute showed that, when neurons carrying input from the thalamus and surrounding cortical tissue are surgically severed, clusters of neurons called cerebral cortical islands will alternate between periods of high-amplitude slow waves that characterize sleep and low-amplitude fast waves typical of waking, independently of surrounding tissue.1 This suggests that sleep is self-organizing within small brain units.
In 1997, Ivan Pigarev of the Russian Academy of Sciences in Moscow and colleagues provided more-concrete evidence that sleep is a property of local networks. Measuring the firing patterns of neurons in monkeys’ visual cortices as the animals fell asleep while performing a visual task, they found that some of the neurons began to stop firing even while performance persisted. Specifically, the researchers found that, within the visual receptive field being engaged, cells on the outer edges of the field stopped firing first. Then, as the animal progressed deeper into a sleep state, cells in more-central areas stopped firing. This characteristic spatial distribution of the firing failures is likely a consequence of network behavior. The researchers thus concluded that sleep is a property of small networks.2
More recently, David Rector at Washington State University and colleagues provided support for the idea of locally occurring sleep-like states. In a series of experiments, they recorded electrical activity from single cortical columns using a small array of 60 electrodes placed over the rat somatosensory cortex. The sensory input from individual facial whiskers maps onto individual cortical columns. As expected, ERPs in the cortical columns induced by twitching a whisker were higher during sleep than during waking. But looking at the activity of individual columns, the researchers observed that they could behave somewhat independently of each other. When a rat slept, most—but not all—of the columns exhibited the sleep-like high-amplitude ERPs; during waking, most—but not all—of the columns were in a wake-like state. Interestingly, the individual cortical columns also exhibited patterns that resembled a sleep rebound response: the longer a column was in the wake-like state, the higher the probability that it would soon transition into a sleep-like state.3
To test how cortical-column state can affect whole-animal behavior, Rector and his team trained rats to lick a sucrose solution upon the stimulation of a single whisker, then characterized the whisker’s cortical-column state. If the column receiving input from the stimulated whisker was in a wake-like state (low-magnitude ERP), the rats did not make mistakes. But if the column was in the sleep-like state (high-magnitude ERP), the animals would fail to lick the sucrose when stimulated and would sometimes lick it even when their whisker was not flicked.4 Even though the animal was awake, if a cortical column receiving stimulation was asleep, it compromised the animal’s performance. These experiments indicate that even very small neuronal networks sleep and that the performance of learned behavior can depend on the state of such networks.
Given that sleep can manifest in relatively small brain regions, perhaps it should not be too surprising that co-cultures of neurons and glia possess many of the electrophysiological sleep phenotypes that are used to define sleep in intact animal brains. During sleep, cortical and thalamic neurons display bursts of action potentials lasting about 500 ms, followed by periods of hyperpolarization of about the same length. The synchronization of this firing pattern across many neurons is thought to generate the EEG activity characteristic of delta-wave sleep, and undisturbed co-cultures of glia and neurons display periodic bursts of action potentials, suggesting that the culture's default state is sleep-like. In contrast, if neuronal and glial networks are stimulated with excitatory neurotransmitters, the culture's “burstiness”—the fraction of all action potentials found within bursts—is reduced, indicating a transition to a wake-like state. Treatment of co-cultures with excitatory neurotransmitters also converts their gene expression profile from a spontaneous sleep-like pattern to a wake-like pattern.5
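The “burstiness” measure described above is straightforward to compute from a recorded spike train. Below is a minimal sketch in Python; the burst criterion used here (a run of at least three spikes whose consecutive inter-spike intervals are each under 100 ms) is an illustrative convention, not the exact definition used in the cited studies.

```python
import numpy as np

def burstiness(spike_times, max_isi=0.1, min_spikes=3):
    """Fraction of all spikes that fall within bursts.

    A burst is defined here (illustrative thresholds, not the cited
    papers' criterion) as a run of at least `min_spikes` spikes whose
    consecutive inter-spike intervals are all <= `max_isi` seconds.
    """
    spike_times = np.sort(np.asarray(spike_times, dtype=float))
    n = len(spike_times)
    if n == 0:
        return 0.0
    in_burst = np.zeros(n, dtype=bool)
    run_start = 0
    for i in range(1, n + 1):
        # close the current run when the gap exceeds max_isi, or at the end
        if i == n or spike_times[i] - spike_times[i - 1] > max_isi:
            if i - run_start >= min_spikes:
                in_burst[run_start:i] = True
            run_start = i
    return in_burst.sum() / n
```

For example, a train with four tightly spaced spikes followed by two isolated ones gives `burstiness([0.0, 0.01, 0.02, 0.03, 1.0, 2.0])` of 4/6; a stimulated, wake-like culture would show a lower value than an undisturbed, sleep-like one.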
Cell cultures also respond to sleep-inducing agents similarly to whole organisms. If a neuronal and glial culture is treated with TNF-α, the synchronization and amplitudes of slow-wave electrical activity increase, indicating a deeper sleep-like state. Moreover, ERPs are of greater magnitude after cultures are treated with TNF-α than during the sleep-like default state, suggesting that the somnogen induces a deeper sleep-like state in vitro as it does in vivo.6
Researchers have even studied the developmental pattern of such sleep phenotypes, using multielectrode arrays to characterize network activity throughout the culture; the emergence of network properties follows a time course similar to that seen in intact mouse pups. Spontaneous action potentials occur during the first few days in culture, but emergent network properties are not evident until after about 10 days. Then, synchronization of electrical potentials begins to emerge, and the network's slow waves begin to increase in amplitude. If the cultures are electrically stimulated, slow-wave synchronization and amplitudes are reduced, suggesting the networks wake up. This is followed the next day by rebound-enhanced slow-wave synchronization and amplitudes, suggesting that sleep homeostasis is also a characteristic of cultured networks.6
Clearly, even small neural networks can exhibit sleep-like behavior, in a dish or in the brain. But the question remains: What is driving the oscillations between sleep- and wake-like states?
Sleep emerges
In the intact brain, communication among neurons and between neurons and other cells is ever changing. Bursts of action potentials trigger the release of multiple substances and changes in gene expression, both of which alter the efficacy of signal transmission. For instance, neural or glial activity induces the release of ATP into the local extracellular space. Extracellular ATP, in turn, induces changes in the expression of TNF-α and other somnogens known to induce a sleep-like state. Because these effects take place in the immediate vicinity of the cell activity, they target sleep to local areas that were active during prior wakefulness.
In 1993, Obál and I (J.K.) proposed that sleep is initiated within local networks as a function of prior activity.7 The following year, Derk-Jan Dijk and Alex Borbely of the University of Zurich provided support for this idea when they had volunteers hold hand vibrators in one hand during waking to stimulate one side of the somatosensory cortex. In subsequent sleep, the side of the brain that received input from the stimulated hand exhibited greater sleep intensity, determined from amplitudes of EEG slow waves, than the opposite side of the brain. And in 2006, Reto Huber, then at the University of Wisconsin, showed that if an arm is immobilized during waking, amplitudes of EEG slow waves from the side of the brain receiving input from that arm are lower in subsequent sleep.
These experiments indicate that local sleep depth is a function of the activity of the local network during waking—an idea that has been confirmed by multiple human and animal studies. Moreover, local network state oscillations strongly indicate that sleep is initiated within local networks such as cortical columns. But how do the states of a population of small networks translate into whole-animal sleep?
Small local clusters of neurons and glia are loosely connected with each other via electrophysiological and biochemical signaling, allowing for constant communication between local networks. Steven Strogatz of Cornell University showed that dynamically coupled entities, including small neuronal circuits, will synchronize with each other spontaneously, without requiring direction by an external actor. Synchronization of loosely coupled entities occurs at multiple levels of complexity in nature, from intact animals to molecules—for example, birds flocking, or the transition from water to ice. The patterns generated by bird flocking, or the hardness of ice, are called emergent properties.
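The spontaneous synchronization Strogatz described is often illustrated with the Kuramoto model of coupled phase oscillators. The sketch below (all parameter values are illustrative, not drawn from the cited work) shows that with sufficient coupling, oscillators with different natural frequencies lock together, as measured by the order parameter r, which runs from roughly 0 (incoherent) to 1 (fully synchronized).

```python
import numpy as np

def kuramoto(n=50, coupling=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate the Kuramoto model of n coupled phase oscillators:

        d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)

    Returns the final order parameter r = |mean(exp(i*theta))|.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.2, n)       # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    for _ in range(steps):
        # pairwise sine coupling, summed over all other oscillators
        pull = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling / n * pull)
    return abs(np.exp(1j * theta).mean())
```

With no coupling the phases drift apart and r stays near zero, while the same population with strong coupling synchronizes on its own, with no external pacemaker, which is the sense in which whole-brain sleep could emerge from loosely coupled local networks.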
Together with Obál and other colleagues, we proposed that whole-brain sleep is an emergent property resulting from the synchronization of local neuronal network states.7,8,9 This would explain why sleep continues to occur after brain damage: the remaining local circuits spontaneously synchronize with one another. This view also makes it easy to envision variations in the depth or degree of sleep and waking, because it allows some parts of the brain to be in sleep-like states while other areas are in wake-like states, just as Rector observed. These independent states of local networks may account for sleep inertia, the minutes-long period of poor cognitive performance and fuzzy-mindedness upon awakening, and may also play a role in the manifestation of dissociated states such as sleepwalking. Most importantly, this paradigm frees sleep regulation from the dualism trap of mind/brain separation: top-down imposition of state is not required for the initiation of local state oscillations or for subsequent whole-organism sleep to ensue.
Our theory is also consistent with the modulation of sleep and wakefulness by sleep regulatory circuits such as those in the hypothalamus. For example, if interleukin-1, a sleep regulatory substance, is applied locally to the surface of the rat cortex, it induces local high-amplitude EEG slow waves indicative of a greater local depth of sleep.10 The responses induced by interleukin-1 in the cortex enhanced neuronal activity in anterior hypothalamic sleep regulatory areas.11 That hypothalamic neuronal activity likely provides information on local sleep- and wake-like states occurring in the cortex to the hypothalamus, where it can modulate the orchestration of the sleep initiated within the smaller brain units.
Finally, our ideas may inform the study of how sleep influences the formation of memories. A fundamental problem a living brain faces is the incorporation of new memories and behaviors while conserving existing ones. We know that cell activity enhances neuronal connectivity and the efficacy of neurotransmission within active circuits, a phenomenon that has been posited to be a mechanism by which memories are formed and solidified. By themselves, however, these use-dependent mechanisms would lead to unchecked growth of connectivity (in response to activity patterns) and positive feedback (since increased connectivity leads to reuse), ultimately resulting in a rigid, non-plastic network.7 Instead, we suggest that biochemical mechanisms—specifically, the use-dependent expression of genes involved in sleep regulation and memory—induce oscillations, representing local wake- and sleep-like states, which serve to stabilize and preserve brain plasticity.7
For more than a century, researchers have struggled to understand how sleep works and what it does. Perhaps this lack of answers stems from a fundamental misconception about what sleeps. By thinking about sleep in smaller units, such as individual networks in the brain, hopefully the field will start to understand what exactly is going on during this enigmatic—but very common—phenomenon.
James M. Krueger is a regents professor of neuroscience and Sandip Roy is an associate professor of electrical engineering at Washington State University.
1. K. Kristiansen, G. Courtois, “Rhythmic electrical activity from isolated cerebral cortex,” Electroencephalogr Clin Neurophysiol, 1:265-72, 1949.
2. I.N. Pigarev et al., “Evidence for asynchronous development of sleep in cortical areas,” Neuroreport, 8:2557-60, 1997.
3. D.M. Rector et al., “Local functional state differences between rat cortical columns,” Brain Res, 1047:45-55, 2005.
4. J.M. Krueger et al., “Sleep: A synchrony of cell activity-driven small network states,” Eur J Neurosci, 38:2199-209, 2013.
5. V. Hinard et al., “Key electrophysiological, molecular, and metabolic signatures of sleep and wakefulness revealed in primary cortical cultures,” J Neurosci, 32:12506-17, 2012.
6. K.A. Jewett et al., “Tumor necrosis factor enhances the sleep-like state and electrical stimulation induces a wake-like state in co-cultures of neurons and glia,” Eur J Neurosci, 42:2078-90, 2015.
7. J.M. Krueger, F. Obál, “A neuronal group theory of sleep function,” J Sleep Res, 2:63-69, 1993.
8. J.M. Krueger et al., “Sleep as a fundamental property of neuronal assemblies,” Nat Rev Neurosci, 9:910-19, 2008.
9. S. Roy et al., “A network model for activity-dependent sleep regulation,” J Theor Biol, 253:462-68, 2008.
10. T. Yasuda et al., “Interleukin-1 beta has a role in cerebral cortical state-dependent electro-encephalographic slow-wave activity,” Sleep, 28:177-84, 2005.
11. K. Yasuda et al., “Unilateral cortical application of interleukin-1β (IL1β) induces asymmetry in Fos- and IL1β-immunoreactivity: Implications for sleep regulation,” Brain Res, 1131:44-59, 2007.
In Dogged Pursuit of Sleep
Unearthing the root causes of narcolepsy keeps Emmanuel Mignot tackling one of sleep science’s toughest questions.
In November 1986, Emmanuel Mignot arrived at Stanford University’s Center for Sleep Sciences and Medicine for a 16-month stint as a research associate. His goal was to find effective drugs to treat narcolepsy; his study subjects belonged to a colony of canines that suffered from the malady. “[When I got there], the dogs were being maintained, but not much was being done with them other than some chemistry studies on known neurotransmitters,” says Mignot, a professor of psychiatry and behavioral sciences at Stanford University and now director of the center. “As a pharmacologist, I wanted to study potential treatments for narcolepsy and understand the molecular biology to improve treatment in humans.”
The first narcoleptic dog, a French poodle named Monique, was brought to Stanford in 1974 by William Dement, the so-called “father of sleep medicine,” who had founded the center in 1970, the first in the world dedicated to the study of sleep. Dement and other researchers there established a full breeding colony in 1977 when dogs with a genetic form of the neurological disorder were discovered—initially, some puppies from a litter of Dobermans and, later, some Labradors. Narcoleptic dogs and humans both exhibit a combination of symptoms: perpetual sleepiness, cataplexy—muscle paralysis attacks triggered by emotions—and abnormal rapid eye movement (REM) sleep. While the condition in humans and dogs is treatable, there is no cure.
To study which narcolepsy drugs increased wakefulness and decreased cataplexy in the dogs, Mignot and psychiatry professor Seiji Nishino used a food-elicited cataplexy test: administration of the drug followed by release into a room with pieces of food on the floor and careful observation. “The dog would rush into the room and be so happy to eat the treats, and then would have an attack and collapse on the floor.” The researchers counted the number and duration of the attacks after treatment with a drug at various doses. In humans, cataplexy episodes are triggered by a positive emotion such as laughter at a joke or pleasant surprise. “For the dogs, it is food or the joy of playing. That is what is great about dogs as a model for this condition. When you give a treatment to a rat or mouse and they stop having cataplexy, you really don’t know if it is because they don’t feel good or if it is a genuine effect. But the dogs show you emotions like humans. I knew all of these dogs by name. They were my friends. I could see if they were worried or didn’t feel well.”
Mignot worked mostly with the Dobermans and Labs, but there were also dogs donated to the colony that seemed to have a sporadic form of narcolepsy: “There was Vern, a miniature poodle; Wally, a big poodle; Tucker, a mutt; and Beau, my beloved dachshund.” Using the cataplexy test in animals along with in vitro studies of the drugs’ chemical properties, Mignot and Nishino found that antidepressants suppress cataplexy by inhibiting adrenergic reuptake, and that amphetamine-like stimulants promote wakefulness in narcoleptics by increasing the availability of dopamine. “We improved the then-current treatments and started to understand the kinds of chemicals important to regulate narcolepsy symptoms.”
But Mignot wanted to understand the molecular mechanism of narcolepsy, so he turned his focus to the genetic basis of the disorder. A lack of genetics training and the absence of a dog genome map to guide him did not deter Mignot. He has tirelessly pursued this previously little-studied disorder, which remains, so far, the only known neurological condition that fundamentally perturbs the nature of sleep states.
Here, Mignot talks about pursuing a master’s, PhD, and MD simultaneously, the paper retraction that has been the most difficult episode in his career so far, and his unexpected devotion to a Chihuahua.
Mignot Motivated
Sir Mix-a-Lot. The youngest of six siblings, Mignot had a penchant for collecting fossils and for conducting chemistry experiments in the bathroom of his family’s home in Paris. “I bought chemicals sold by a Chinese shopkeeper on Rue Saint-Dominique to do all kinds of experiments, mixed them, and occasionally made mistakes. There were burn marks and projections on the walls of my bathroom.” In high school, the self-proclaimed “nerd with glasses” became interested in biology, and, after graduation in 1977, went to study for a medical degree at the René Descartes University Faculty of Medicine in Paris.
Collecting degrees. “In the second year of medical school, I got bored from all of the memorization.” He took the entrance exam for the prestigious École Normale Supérieure (ENS), which gives students freedom to pursue their academic interests at other institutions while providing a stipend, housing, and the support of professor mentors. He passed, and entered the ENS in 1979. Mignot worked towards a master’s in biochemistry, and then a PhD in molecular pharmacology while still continuing his medical studies. “Nothing was set up for MD-PhD programs at the time. It was all in parallel, which was crazy. I had an exam every few weeks,” says Mignot. In 1984, he received both his medical degree and, later, a PhD from Pierre and Marie Curie University.
New to narcolepsy. Mignot became interested in the effects of drugs on the brains of psychiatric patients, studying how different compounds affected the metabolism of neurotransmitters in the brains of rats, and pursued a residency in psychiatry to complement his laboratory research. In 1986, he was offered a professorship in pharmacology at the Paris V University School of Medicine. But first, Mignot needed to complete the mandatory military service that he had deferred. “Instead of going to a former French colony to practice medicine, I convinced the French government to send me to Stanford to study modafinil, a wakefulness-promoting drug created by a French pharmaceutical company called Lafon Laboratories for the treatment of narcolepsy. I had never heard about [narcolepsy] during medical school—it must have been a single line in my textbooks. I discovered that Stanford was doing work on sleep and that Dement had started a colony of narcoleptic dogs there. I thought I could study these animals and figure out how modafinil worked.”
So Mignot came to Stanford for 16 months as part of his military service with financial support from Lafon Laboratories. “The company had claimed modafinil worked by a novel mechanism, unrelated to how stimulants work,” says Mignot. But Mignot found that modafinil bound the dopamine transporter, inhibiting the reuptake of the neurotransmitter, boosting wakefulness. “This is a similar mode of action as Ritalin, but the company was claiming otherwise. It took 10 years for my results to be validated, finally, by Nora D. Volkow, now director of the National Institute on Drug Abuse, who showed . . . that indeed the drug displaces the dopamine transporter at doses that increase wakefulness in humans.”
Mignot Moves
Going to the dogs. At Stanford, Mignot immersed himself in his work with the dog colony. “I worked all the time and came home just to sleep. I was definitely not very successful with girls then, because I smelled like dog all the time. I spent all day with the dogs, going to the facility, hugging, playing, and working with them. When we bred them, sometimes the mothers rejected their puppies so we had to come in every few hours, even in the middle of the night, to bottle-feed the puppies. Even after I took a shower, you could still smell the dogs. It was a strange part of my life.”
From pharmacology to genetics. Mignot kept extending his stay at Stanford. “After a few years I realized our pharmacology studies were never going to lead to narcolepsy’s cause. We needed to find the genetic cause in the dog.” In 1988, he resigned a faculty position in Paris—which was being held for him even as he continued to extend his time at Stanford—deciding to search for the mutated gene responsible for narcolepsy in dogs. In 1993, Mignot became the head of the Center for Narcolepsy at Stanford. A connection between an immune gene, the human leukocyte antigen (HLA) allele HLA-DR2, and narcolepsy in humans had already been identified by Yutaka Honda at the University of Tokyo, so Mignot’s lab tried to ascertain whether the same connection was true in the dogs or if the immune gene was simply a genetic linkage marker. These were the days before the dog or human genome had been sequenced, so the work took Mignot’s lab 10 years, and almost 200 narcoleptic Dobermans and Labradors: years of painstaking chromosome walking experiments, DNA fingerprinting, and the construction of a bacterial artificial chromosome library of dog genomic pieces. “What helped us a lot was that we knew the Dobermans and Labs had the same genetic defect because we interbred and got narcoleptic puppies—what’s called a complementation test.” In 1999, Mignot’s team identified the mutated gene as hypocretin receptor 2, whose protein binds hypocretin (also called orexin), a neuropeptide that regulates arousal and wakefulness. Several weeks later, after seeing these findings, Masashi Yanagisawa’s lab independently published a confirmation, showing that hypocretin knockout mice also have narcolepsy.
In parallel narcolepsy studies across ethnic groups, Mignot’s lab found that it was not the initial HLA-DR2 allele that predisposed humans to narcolepsy, but another, nearby HLA gene, DQB1*0602.
Humans are not like dogs. “After we found the gene, the research went fast. We decided to look at hypocretin itself and see if it’s abnormal in humans.” Mignot’s lab sequenced the genes for the hypocretin receptor and its ligand in narcoleptic patients, expecting mutations in either to be rare because of the known HLA-narcolepsy linkage and the fact that most cases in humans, unlike in dogs, are not familial. Only one documented case, a child who had narcolepsy onset at six months of age, has been found to harbor a hypocretin gene mutation. “I think you need to knock out both receptor 1 and 2 in humans to get the full narcoleptic phenotype,” says Mignot. “Those with just one mutation may be more prone to tiredness but not full narcolepsy.”
In 2000, Mignot’s and Nishino’s groups reported that hypocretin was not present in narcoleptic patients’ cerebrospinal fluid—a test still used diagnostically today. The same year, independent studies from Mignot’s laboratory and that of Jerome Siegel at the University of California, Los Angeles, found that the lack of hypocretin was not due to gene mutations but to the fact that hypocretin-producing cells were missing in the brains of narcoleptic patients. HLA genes were well known to be associated with many autoimmune diseases, and Mignot hypothesized that hypocretin was missing due to an autoimmune attack against hypocretin-secreting neurons. What the abnormality is in those narcolepsy patients with normal hypocretin levels remains a mystery.
Mignot Moves Forward
Still a missing link. “I have been working on this [autoimmunity] hypothesis for 10 years, and we see that this hypothesis is more and more likely, but we cannot find any direct proof. It’s frustrating, but that kind of struggle is the story of my life.” All known autoimmune diseases result in the generation of antibodies in patients, but antibodies against hypocretin or the hypocretin cells have never been detected. So Mignot’s lab tested whether T-cells were the immune component attacking hypocretin. In 2013, his lab published a study identifying the T-cell culprits. But the study was retracted by Mignot himself one year later, when Mignot’s group couldn’t reproduce the results after the scientist who did most of the experiments had left the lab. “It was really painful and the worst time in my career.”
A new lead. “In 2010, a lot of people suddenly started to develop narcolepsy after receiving the Pandemrix vaccine against swine flu. It’s very odd. We still don’t understand why this particular vaccine increased the risk of narcolepsy.” Mignot thinks that a component of the vaccine or the virus itself triggers the immune system to attack hypocretin-producing neurons. “So now I am doing a lot of studies comparing the different vaccines and the wild-type virus to try to understand what could be common to produce this response. I think the vaccine will give us a final clue to isolate the immune T-cells involved in narcolepsy.”
Genetics of sleep. Mignot’s lab is working on a genome-wide association study, which shows that the genetic variants linked to narcolepsy are mostly immune-related, similar to Type 1 diabetes, celiac disease and other autoimmune diseases, further supporting the autoimmune hypothesis. Mignot is also getting a large human study off the ground. “I want to study the genetics of 40,000 people with sleep issues to see if there are genetic traits that cause people to sleep well or not sleep well, to need more sleep or less sleep. This hasn’t been done yet. I think this will help us crack open the mysteries of sleep.”
A new companion. “The dog colony was officially dismantled in 2000 after we found the canine narcolepsy gene. The dogs were adopted and we got Bear, a narcoleptic Schipperke. He passed away over a year ago. I loved that dog and miss him a lot. He was an unusually kind soul. Three months later, a breeder from Vermont called and said he had a narcoleptic Chihuahua. I flew to Vermont and adopted Watson and he’s been with us ever since. I never would have thought to adopt a Chihuahua, but now I can’t think of life without Watson. He is faithful and cuddly. I really think you can bond with any dog.”
The journey continues. “This story of narcolepsy, it’s a difficult story. Finding the gene was very difficult, and finding the autoimmune connection should have been trivial, but it has been an ordeal because there is absolutely no collateral damage. As [Stanford neurologist] Larry Steinman said to me, it’s like a ‘hit and run’—it looks like it was cleaned up and the players disappear. It’s hard, but by learning about this disease, we may discover other diseases where a similar autoimmune destruction happens in the brain but we have never realized it. I wouldn’t be surprised if some forms of depression and schizophrenia have an autoimmune basis in the brain. By experience, the more difficult it is, the more interesting the answer will be.”
Greatest Hits
Identified the gene for hypocretin receptor 2, which, when mutated, causes an inherited form of narcolepsy in Dobermans and Labradors
Identified how antidepressant and stimulant drugs work as treatments for narcolepsy
Identified DQB1*0602 as the main human gene associated with narcolepsy
By genome-wide association, found immune polymorphisms, such as one in the T-cell receptor alpha, that also predispose people to the disease, further suggesting the disease is autoimmune
Found that human narcolepsy, unlike canine narcolepsy, is not caused by mutations in the hypocretin receptor 2 gene but is due to an immune-mediated destruction of hypocretin-producing neurons in the brain
DQB1*0602 and DQA1*0102 (DQ1) are better markers than DR2 for narcolepsy in Caucasian and black Americans.
In the present study, we tested 19 Caucasian and 28 Black American narcoleptics for the presence of the human leucocyte antigen (HLA) DQB1*0602 and DQA1*0102 (DQ1) genes using a specific polymerase chain reaction (PCR)-oligotyping technique. A similar technique was also used to identify DRB1*1501 and DRB1*1503 (DR2). Results indicate that all but one Caucasian patient (previously identified) were DRB1*1501 (DR2) and DQB1*0602/DQA1*0102 (DQ1) positive. In Black Americans, however, DRB1*1501 (DR2) was a poor marker for narcolepsy. Only 75% of patients were DR2 positive, most of them being DRB1*1503, but not DRB1*1501, positive. DQB1*0602 was found in all but one Black narcoleptic patient. The clinical and polygraphic results for this patient were typical, thus confirming the existence of a rare, but genuine, form of DQB1*0602-negative narcolepsy. These results demonstrate that DQB1*0602/DQA1*0102 is the best marker for narcolepsy across all ethnic groups.
Narcolepsy is a chronic neurologic disorder characterized by excessive daytime sleepiness and abnormal manifestations of REM sleep including cataplexy, sleep paralysis, and hypnagogic hallucinations. Narcolepsy is both a significant medical problem and a unique disease model for the study of sleep. Research in human narcolepsy has led to the identification of specific HLA alleles (DQB1*0602 and DQA1*0102) that predispose to the disorder. This has suggested the possibility that narcolepsy may be an autoimmune disorder, a hypothesis that has not been confirmed to date. Genetic factors other than HLA are also likely to be involved. In a canine model of narcolepsy, the disorder is transmitted as a non-MHC single autosomal recessive trait with full penetrance (canarc-1). A tightly linked marker for canarc-1 has been identified, and positional cloning studies are under way to isolate canarc-1 from a newly developed canine genomic BAC library. The molecular cloning of this gene may lead to a better understanding of sleep mechanisms, as has been the case for circadian rhythms following the cloning of frq, per, and Clock.
Sleep consumes almost one-third of any human lifetime, yet its biological function remains unknown. Electrophysiological studies have shown that sleep is physiologically heterogeneous. Sleep onset is first characterized by light nonrapid eye movement (NREM) sleep (stage I and II), followed by deep NREM sleep or slow-wave sleep (stage III and IV) and finally rapid eye movement (REM) sleep. This sleep cycle is ∼90 min long and is repeated multiple times during nocturnal sleep. REM sleep, also called paradoxical sleep, is characterized by low-voltage fast electroencephalogram activity, increased brain metabolism, skeletal muscle atonia, rapid eye movements, and dreaming. Total sleep deprivation and/or REM sleep deprivation are both lethal in animals.
NREM and REM sleep are mainly regulated by circadian and homeostatic processes. Recent studies have suggested that across the animal kingdom, circadian rhythms are regulated by similar negative feedback loops involving the rhythmic expression of RNAs encoding proteins that act to shut off the genes encoding them (Hall 1995; Dunlap 1996; Rosbash et al. 1996; Young et al. 1996). From a genetic perspective, much less progress has been made in the noncircadian aspects of sleep regulation. This review demonstrates that a genetic approach to narcolepsy will in time provide novel insight into the molecular basis of sleep control.
Narcolepsy, a Disorder of REM Sleep Regulation
Narcolepsy most often begins in the second decade of life but may be observed at the age of 5 or younger (Honda 1988). The cardinal symptom in narcolepsy is a persistent and disabling excessive daytime sleepiness. Sleep attacks are unpredictable, irresistible, and may lead to continuing activities in a semiconscious manner, a phenomenon referred to as automatic behavior. Naps are usually refreshing, but the restorative effect vanishes quickly.
Sleepiness is not sufficient to diagnose the disorder. Narcoleptic patients also experience symptoms that are secondary to abnormal transitions to REM sleep (Aldrich 1992; Bassetti and Aldrich 1996). The most important of these symptoms is cataplexy, a pathognomonic symptom for the disorder. In cataplexy, humor, laughter, or anger triggers sudden episodes of muscle weakness ranging from sagging of the jaw, slurred speech, buckling of the knees or transient head dropping, to total collapse to the floor (Aldrich 1992; Bassetti and Aldrich 1996). Patients typically remain conscious during the attack, which may last a few seconds or a few minutes. Reflexes are abolished during the attack, as they are during natural REM sleep atonia. Sleep paralysis, another manifestation of REM sleep atonia, is characterized by an inability to move and speak while falling asleep or upon awakening. Episodes last a few seconds to several minutes and can be very frightening. Hypnagogic hallucinations are vivid perceptual dream-like experiences (generally visual) occurring at sleep onset. Sleep paralysis and hypnagogic hallucinations occasionally occur in normal individuals under extreme circumstances of sleep deprivation or after a change in sleep schedule (Aldrich 1992; Bassetti and Aldrich 1996) and thus have little diagnostic value in isolation.
Nocturnal sleep polysomnography is conducted to exclude other possible causes of daytime sleepiness such as sleep apnea or periodic limb movements (Aldrich 1992). The Multiple Sleep Latency Test (MSLT) is also carried out to demonstrate daytime sleepiness objectively. In this test, patients are requested to take four or five naps at 2-hr intervals, during which time to sleep onset (sleep latency) is measured. Short sleep latencies under 5 min are usually observed in narcoleptic patients, together with abnormal REM sleep episodes, referred to as sleep-onset REM periods (SOREMPs). The combination of a history of cataplexy, short sleep latencies, and two or more SOREMPs during MSLT is diagnostic for narcolepsy (Bassetti and Aldrich 1996; Mignot 1996). Note that many naps consist only of NREM sleep, suggesting that there is also a broader problem of impaired sleep–wake regulation, with indistinct boundaries between sleep and wakefulness in narcolepsy (Broughton et al. 1986; Bassetti and Aldrich 1996).
The disorder has a large psychosocial impact. Two-thirds of patients have fallen asleep while driving, and 78% suffer from reduced performance at work (Broughton et al. 1981). Depression occurs in up to 23% of cases (Roth 1980). Treatment is purely symptomatic and generally involves amphetamine-like stimulants for excessive daytime sleepiness and antidepressive treatment for cataplexy and other symptoms of abnormal REM sleep (Bassetti and Aldrich 1996; Nishino and Mignot 1997).
Familial and Genetic Aspects of Human Narcolepsy
Narcolepsy–cataplexy affects 0.02%–0.18% of the general population across ethnic groups (Mignot 1998). A familial tendency for narcolepsy has long been recognized (Roth 1980). The risk of narcolepsy–cataplexy in a first-degree relative is 0.9%–2.3%, 10–40 times higher than the prevalence in the general population (Mignot 1998).
In a Finnish twin cohort study of 13,888 monozygotic (MZ) and same-sex dizygotic (DZ) twin pairs, three narcoleptic individuals were found; each belonged to a discordant DZ pair and had a negative family history (Hublin et al. 1994). In the literature, 16 MZ pairs with at least one affected twin have been reported, and five of these pairs were concordant for narcolepsy (Mignot 1998). Although narcolepsy is likely to have a genetic predisposition, the low concordance rate in narcoleptic MZ twins indicates that environmental factors play an important role in the development of the disease.
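As a rough check of the numbers above, the reported counts (5 concordant pairs out of 16 MZ pairs with at least one affected twin) give a pairwise concordance of about 31%, far below the near-100% expected for a fully penetrant single-gene disease:

```python
# Pairwise MZ concordance from the counts quoted in the text
concordant_pairs = 5
total_mz_pairs = 16
concordance = concordant_pairs / total_mz_pairs
print(f"MZ pairwise concordance: {concordance:.0%}")  # about 31%
```

This is the arithmetic behind the conclusion that genetic predisposition alone cannot explain the disease and that environmental factors must contribute.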
HLA DQA1*0102 and DQB1*0602 Are Primary Susceptibility Factors for Narcolepsy
Narcolepsy was shown to be associated with the human leukocyte antigen (HLA) DR2 in the Japanese population (Honda et al. 1984; Juji et al. 1984). DR2 is observed in all Japanese patients versus 33% of Japanese controls (Juji et al. 1984; Matsuki et al. 1988a). A similar association is observed in Caucasians, with >85% versus 22% DR2 positivity (Langdon et al. 1984; Billiard et al. 1986; Rogers et al. 1997). Strikingly, however, the DR2 association is much lower in African–Americans (65%–67% in narcoleptic patients vs. 27%–38% in controls) (Neely et al. 1987; Matsuki et al. 1992; Rogers et al. 1997). Further studies have shown that HLA DQ alleles, located ∼80 kb from the DR region, are more tightly associated with narcolepsy than HLA DR subtypes. More than 90% of narcolepsy–cataplexy patients across all ethnic groups carry a specific allele of HLA DQB1, DQB1*0602 (Matsuki et al. 1992; Mignot et al. 1994); this allele is present in 12%–38% of the general population across many ethnic groups (Matsuki et al. 1992; Mignot et al. 1994; Lin et al. 1997). DQB1*0602 is associated almost exclusively with DR2 in Japanese (Lin et al. 1997) and Caucasians (Begovich et al. 1992), whereas it is observed frequently in association with DR2, DR5, or other DR subtypes in African–Americans (Mignot et al. 1994, 1997a). The increased DR–DQ haplotypic diversity in African–Americans explains the low DR2 association observed in this population.
To further characterize the DQB1 region in narcoleptic subjects, novel polymorphic markers were isolated and characterized (Mignot et al. 1997a). The markers tested included six novel microsatellite markers (DQCAR, DQCARII, G51152, DQRIV, T16CAR, and G411624R). DQA1, a DQ gene whose product pairs with DQB1-encoded polypeptides to form the biologically active DQ heterodimer, was also studied. The results are summarized in Figure 1. The association with narcolepsy decreases in the T16CAR–DQB2 region (Mignot et al. 1997a) and in the DRB1 region (Mignot et al. 1994, 1997b). The G411624R and T16CAR microsatellites are complex repeats with drastically different sizes, all of which are frequently observed in narcolepsy susceptibility haplotypes, a result suggesting crossovers in the region. In the DRB1 region, the association with narcolepsy remains tight with DRB1*1501 (DR2) in Caucasians and Asians but is significantly lower in African–Americans, which suggests crossovers in the region among ethnic groups.
Figure 1.
Schematic summary of the narcolepsy susceptibility region within the HLA complex. Genes and markers are depicted by vertical bars; alleles observed in narcoleptic patients are listed above each marker. DQB2, DQB3, DQB1, DQA1, and DRB1 are HLA genes and pseudogenes. QBP and QAP are the promoter regions of DQB1 and DQA1, respectively. G411624R, T16CAR, G51152, DQCAR, and DQCARII are microsatellite CA repeats identified in the HLA DQ region (Mignot et al. 1997a). DQRIV is a compound tandem repeat of 4- and 2-bp units located between DQB1 and G51152. The DQA1*0102 allele is subdivided into 01021 and 01022 based on a codon 109 synonymous substitution. Genomic segments in which frequent recombination was detected are indicated by vertical solid lines. Broken lines indicate rare possible ancestral crossovers detected in the area. Crossovers between T16CAR and G51152 occur within ethnic groups; crossovers between QAP and DRB1 are frequently observed among ethnic groups (Mignot et al. 1997a). Note that the genomic region shared by most narcoleptic patients extends from a region between T16CAR and G51152 to a region between QAP and DRB1. No other genes were found in 86 kb of genomic sequence surrounding the DQB1*0602 gene (Ellis et al. 1997). Additional diversity is also found at the level of G51152 and DQRIV, most likely due to a slippage mechanism rather than crossover (Lin et al. 1997; Mignot et al. 1997a). (+, Δ, *) Frequent alleles found predominantly in Caucasian, Asian, and African–American populations, respectively; (kb) kilobase pairs. Alleles frequently observed in the DQB1*0602/DQA1*0102 haplotype are underlined. DRB1*1501, DRB1*1503, and DRB1*1602 are DR2 subtypes. DRB1*1101 and DRB1*12022 are DR5 subtypes.
The DQA1*0102/DQB1*0602 haplotype is common in narcoleptic patients (Mignot et al. 1994). Other haplotypes carrying DQA1*0102 but not DQB1*0602, such as DQA1*0102/DQB1*0604, are frequent in control populations in all ethnic groups and do not predispose to narcolepsy. DQA1*0102 alone is thus unlikely to confer susceptibility but may act in addition to DQB1*0602 in the development of narcolepsy (Mignot et al. 1994, 1997a).
Microsatellite analysis in the HLA DQ region revealed that only the area surrounding the coding regions of DQB1 and DQA1 is well conserved across all susceptibility haplotypes. Polymorphism can be observed in the microsatellites and/or promoter regions flanking the DQB1*0602 and DQA1*0102 alleles and in the region between these two genes (Mignot et al. 1997a). Mutations by slippage at some loci, and rare ancestral crossovers in a few instances, contribute to this diversity (Mignot et al. 1997a). Sequence analysis of DQ genes from narcoleptic and control individuals has revealed no sequence variation that correlates with the disease (Lock et al. 1988; Uryu et al. 1989; Ellis et al. 1997; Mignot et al. 1997a). No new gene was found in 86 kb of genomic sequence surrounding the HLA DQ genes (Ellis et al. 1997). A study of the dosage effect of the DQB1*0602 allele revealed that DQB1*0602 homozygotes are at two to four times greater risk than heterozygotes of developing narcolepsy (Pelin et al. 1998). Taken together, these results strongly suggest that the DQA1*0102 and DQB1*0602 alleles themselves, rather than an unknown gene in the region, are the actual susceptibility genes for narcolepsy.
HLA DQB1*0602 Is Neither Sufficient nor Necessary for the Development of Narcolepsy
Of the general population, 12%–38% carry HLA DQB1*0602, yet narcolepsy affects only 0.02%–0.18% of the population, and sequence analysis of DQ genes has detected no variation that correlates with the disease. Conversely, a few narcoleptic patients with cataplexy do not carry the DQB1*0602 allele at all (Mignot et al. 1992, 1997a). HLA DQB1*0602 is thus neither sufficient nor necessary for the development of narcolepsy–cataplexy.
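The insufficiency of DQB1*0602 can be made concrete with a back-of-the-envelope Bayes calculation. Using illustrative mid-range values (not exact figures from any single study): a prevalence of 0.05%, a carrier frequency of 25%, and 90% of patients carrying the allele, the probability that a given carrier develops narcolepsy is still well under 1%:

```python
# Illustrative Bayes calculation: P(narcolepsy | DQB1*0602 carrier)
# Assumed mid-range values consistent with the text, not exact figures
prevalence = 0.0005             # ~0.05% of the general population affected
carrier_freq = 0.25             # ~25% carry DQB1*0602
p_carrier_given_disease = 0.90  # >90% of patients carry the allele

# Bayes' rule: P(D|C) = P(C|D) * P(D) / P(C)
p_disease_given_carrier = p_carrier_given_disease * prevalence / carrier_freq
print(f"P(narcolepsy | carrier) ≈ {p_disease_given_carrier:.2%}")  # ≈ 0.18%
```

In other words, even under generous assumptions, fewer than 1 in 500 carriers develop the disease, which is the quantitative content of "not sufficient."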
…….
Canine Narcolepsy as a Model for the Human Disorder
….narcolepsy was identified in numerous canine breeds, including Doberman pinschers, Labrador retrievers, miniature poodles, dachshunds, beagles, and Saint Bernards. All animals display similar symptoms, but the age of onset, severity, and the clinical course vary significantly among breeds (Baker et al. 1982).
….Similar to human narcoleptic patients, animals affected with the disorder display emotionally triggered cataplexy, fragmented sleep, and increased daytime sleepiness. Sleep paralysis and hypnagogic hallucinations cannot be documented because of difficulties in assessing these symptoms in canines. The validity of this model of narcolepsy has also been established through neurophysiological and neuropharmacological similarities with the human disorder. Pharmacological and neurochemical studies suggest abnormal monoaminergic and cholinergic mechanisms in narcolepsy in both humans and canines (Aldrich 1991; Nishino and Mignot 1997, 1998). Interestingly, it is also possible to induce brief episodes of cataplexy in otherwise asymptomatic canarc-1 heterozygous animals using specific drug combinations (Mignot et al. 1993).
……
Narcolepsy is both a significant medical problem and a unique disease model. Research in humans has led to the identification of specific HLA alleles that predispose to the disorder. This has suggested the possibility that narcolepsy may be an autoimmune disorder, a hypothesis that has not been confirmed to date. Cells of the central and peripheral nervous systems and immune systems are known to interact at multiple levels (Morganti-Kossmann et al. 1992; Wilder 1995). For example, peripheral immunity is modulated by the brain via autonomic or neuroendocrinal interactions, whereas the immune system affects the nervous system through the release of cytokines. Cytokines have been shown to modulate sleep directly and have established effects on neurotransmission and neuronal differentiation (Krueger and Karnovsky 1995; Mehler and Kessler 1997). It is therefore possible that neuroimmune interactions that are not autoimmune in nature might be involved in the pathophysiology of narcolepsy.
NREM and REM sleep are mainly regulated by circadian and homeostatic processes. Single-gene circadian mutations have been isolated from species as diverse as Arabidopsis (toc1), Neurospora (frq), Drosophila (per and tim), and mouse (Clock) (Hall 1995). The per and Clock genes, isolated in Drosophila and mouse, respectively, have been shown to belong to the same family, the PAS domain family (Hall 1995; Rosbash et al. 1996; Young et al. 1996; King et al. 1997). Analyses of frq, tim, and per demonstrate that circadian rhythms of diverse species are regulated by similar negative feedback loops in which gene products negatively regulate their own transcripts (Hall 1995; Dunlap 1996; Rosbash et al. 1996; Young et al. 1996). Putative homologs of the per gene have also been isolated in mammals (Albrecht et al. 1997; Tei et al. 1997). In mouse, RNAs for two per homologs are expressed rhythmically within the suprachiasmatic nucleus (SCN), a brain region with an established role in generating mammalian circadian rhythms (Shearman et al. 1997; Shigeyoshi et al. 1997; Tei et al. 1997).
Much less progress has been made in the noncircadian aspect of sleep regulation. Sleep can only be recognized and characterized electrophysiologically in mammals and birds, and single gene mutants for this behavior have not been described in the mouse. Canine narcolepsy is the only known single gene mutation affecting sleep state organization as opposed to circadian control of behavior. The molecular cloning of this gene may lead to a better understanding of the molecular basis and biological role of sleep, as has been the case for circadian rhythms following the cloning of frq, per, and Clock.
New insomnia drugs are coming on the market, but drug-free therapy remains the most durable treatment.
Selectively driving cholinergic fibers optically in the thalamic reticular nucleus promotes sleep
Kun-Ming Ni and Xiao-Jun Hou (contributed equally)
Department of Neurobiology, Institute of Neuroscience, Key Laboratory of Medical Neurobiology of the Ministry of Health of China, Collaborative Innovation Center for Brain Science, Zhejiang University School of Medicine, Hangzhou, China; Fuzhou Children’s Hospital, Fujian, China
Contribution: Acquisition of data; Contributed unpublished essential data or reagents

Ci-Hang Yang, Ping Dong, Yue Li, Ying Zhang, and Ping Jiang
Department of Neurobiology, Institute of Neuroscience, Key Laboratory of Medical Neurobiology of the Ministry of Health of China, Collaborative Innovation Center for Brain Science, Zhejiang University School of Medicine, Hangzhou, China
Contributions: Yang, Jiang — acquisition of data; Dong — acquisition of data, analysis and interpretation of data; Li — acquisition of data, drafting or revising the article; Zhang — drafting or revising the article

Darwin K Berg
Neurobiology Section, Division of Biological Sciences, Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, United States
Contribution: Drafting or revising the article

Shumin Duan
Department of Neurobiology, Institute of Neuroscience, Key Laboratory of Medical Neurobiology of the Ministry of Health of China, Collaborative Innovation Center for Brain Science, Zhejiang University School of Medicine, Hangzhou, China
Contribution: Drafting or revising the article

Xiao-Ming Li
Department of Neurobiology, Institute of Neuroscience, Key Laboratory of Medical Neurobiology of the Ministry of Health of China, Collaborative Innovation Center for Brain Science, Zhejiang University School of Medicine, Hangzhou, China; Soft Matter Research Center, Zhejiang University, Hangzhou, China
Contribution: Conception and design; Analysis and interpretation of data; Drafting or revising the article; Contributed unpublished essential data or reagents

No competing interests declared (all authors)
Zhejiang University School of Medicine, China; Fuzhou Children’s Hospital, China; University of California, San Diego, United States; Zhejiang University, China
Cholinergic projections from the basal forebrain and brainstem are thought to play important roles in rapid eye movement (REM) sleep and arousal. Using transgenic mice in which channelrhodopsin-2 is selectively expressed in cholinergic neurons, we show that optical stimulation of cholinergic inputs to the thalamic reticular nucleus (TRN) activates local GABAergic neurons to promote sleep and protect non-rapid eye movement (NREM) sleep; it does not affect REM sleep. Direct activation of cholinergic input to the TRN shortens the time to sleep onset and generates spindle oscillations that correlate with NREM sleep. It does so by evoking excitatory postsynaptic currents via α7-containing nicotinic acetylcholine receptors and inducing bursts of action potentials in local GABAergic neurons. These findings stand in sharp contrast to previous reports of cholinergic activity driving arousal. Our results provide new insight into the mechanisms controlling sleep.
A poor night’s sleep is enough to put anyone in a bad mood, and although scientists have long suspected a link between mood and sleep, the molecular basis of this connection remained a mystery. Now, new research has found several rare genetic mutations on the same gene that definitively connect the two.
Sleep goes hand-in-hand with mood. People suffering from depression and mania, for example, frequently have altered sleeping patterns, as do those with seasonal affective disorder (SAD). And although no one knows exactly how these changes come about, in SAD sufferers they are influenced by changes in light exposure, the brain’s time-keeping cue. But is mood affecting sleep, is sleep affecting mood, or is there a third factor influencing both? Although a number of tantalizing leads have linked the circadian clock to mood, there is “no definitive factor that proves causality or indicates the direction of the relationship,” says Michael McCarthy, a neurobiologist at the San Diego Veterans’ Affairs Medical Center and the University of California (UC), San Diego.
To see whether they could establish a link between the circadian clock, sleep, and mood, scientists in the new study looked at the genetics of a family that suffers from abnormal sleep patterns and mood disorders, including SAD and a condition called advanced sleep phase, in which people wake earlier and sleep earlier than normal. The scientists screened the family for mutations in key genes involved in the circadian clock and identified two rare variants of the PERIOD3 (PER3) gene in members suffering from SAD and advanced sleep phase. “We found a genetic change in people who have both seasonal affective disorder and the morning lark trait,” says lead researcher Ying-Hui Fu, a neuroscientist at UC San Francisco. When the team tested for these mutations in DNA samples from the general population, they found them to be extremely rare, appearing in less than 1% of samples.
Fu and her team then created mice that carried the novel genetic variants. These transgenic mice showed an unusual sleep-wake cycle and struggled less when handled by the researchers, a typical sign of depression. They also had lower levels of PER2, a protein involved in circadian rhythms, than unmutated mice, providing a possible molecular explanation for the unusual sleep patterns in the family. Fu says this supports the link between the PER3 mutations and both sleep and mood. “PER3’s role in mood regulation has never been demonstrated directly before,” she says. “Our results indicate that PER3 might function in helping us adjust to seasonal changes,” by modifying the body’s internal clock.
To investigate further, the team studied mice lacking a functional PER3 gene. They found that these mice showed symptoms of SAD, exhibiting more severe depression when the duration of simulated daylight in the laboratory was reduced. Because SAD affects between 2% and 9% of people worldwide, the novel variants can’t explain it fully. But understanding the function of PER3 could yield insights into the molecular basis of a wide range of sleep and mood disorders, Fu says.
Together, these experiments show that the PERIOD3 gene likely plays a key role in regulating the sleep-wake cycle, influencing mood and regulating the relationship between depression and seasonal changes in light availability, the team reports today in the Proceedings of the National Academy of Sciences. “The identification of a mutation in PER3 with such a strong effect on mood is remarkable,” McCarthy says. “It suggests an important role for the circadian clock in determining mood.”
The next step will be to investigate how well these results generalize to other people suffering from mood and sleep disorders. “It will be interesting to see if other rare variants in PER3 are found, or if SAD is consistently observed in other carriers,” McCarthy says. That could eventually lead to new drugs that selectively target the gene, which McCarthy says, “could be a strategy for treating mood or sleep disorders.”
Organizers: Robert Martone (St. Jude Children’s Research Hospital) and Sonya Dougal (The New York Academy of Sciences)
Presented by the Brain Dysfunction Discussion Group
Reported by Caitlin McOmish | Posted February 2, 2016
Microtubule-associated protein tau helps maintain the stability and flexibility of microtubules in neuronal axons. Alternative splicing of the tau gene, MAPT, produces 6 isoforms of tau in the brain and many more in the peripheral nervous system. Tau can be phosphorylated at over 30 sites, and it undergoes many posttranslational modifications to operate as a substrate for multiple enzymes. However, tau also mediates pathological functions including neuroinflammatory response, seizure, and amyloid-β (Aβ) toxicity, and tau pathology is a hallmark of conditions including frontotemporal dementia, traumatic brain injury (TBI), Down syndrome, focal cortical dysplasia, and Alzheimer’s disease (AD), as well as some tumors and infections. On September 18, 2015, speakers at the Brain Dysfunction Discussion Group’s Alzheimer’s Disease and Tau: Pathogenic Mechanisms and Therapeutic Approaches symposium discussed the mechanisms by which tau becomes pathological and how the pathology spreads. They also described emerging therapeutic strategies for AD focused on tau.
Microtubule-associated protein tau has a complex biology, including multiple splice variants and phosphorylation sites. Tau is a key component of microtubules, which contribute to neuronal stability. In AD, tau changes, causing microtubules to collapse, and tau proteins clump together to form neurofibrillary tangles. (Image presented by Robert Martone courtesy of the National Institute on Aging)
Tau is ubiquitous in the brain, with widespread effects, but has historically been overlooked as a driving force in AD. In his introduction to the symposium, Robert Martone from St. Jude Children’s Research Hospital highlighted tau’s activity and emergence as a treatment target for this devastating disorder. Hyperphosphorylated tau (p-tau) has long been recognized as a principal component of neurofibrillary tangles in AD; tau monomers are misfolded into oligomers that form tau filaments. As Hartmuth Kolb from Johnson & Johnson explained, the development in 2012 of a tau-specific positron emission tomography (PET) tracer led to important insights into the presence and spread of tau pathology over the course of tauopathies, including AD, in humans. Notably, researchers demonstrated that tau pathology propagates through the brain in a predictable pattern, corresponding to the Braak stages of AD.
Tau pathology spreads through the brain in a predictable pattern. Abnormal tau protein is first observed in the transentorhinal region (stages I and II) and spreads to the limbic regions in stages III and IV, when early signs of AD begin to be observed. Pathology subsequently extends throughout the neocortex, driving fully developed AD. This staging was first described by Braak and Braak in 1991. (Image courtesy of Hartmuth Kolb)
It is likely that the symptoms of AD are produced by the combined effects of tau and Aβ pathologies. George Bloom from the University of Virginia described how Aβ and tau interact to cause mature neurons to reenter the cell cycle, leading to cell death. In a healthy brain, insulin acts as a gatekeeper that maintains adult neurons in the G0 phase after the cells permanently exit the cell cycle. In AD, amyloid oligomers sequester neuronal insulin receptors, causing insulin resistance. In parallel, tau phosphorylation at key sites—pY18 (fyn site), pS409 (PKA site), pS416 (CAM Kinase site), and pS262—drives mTOR signaling at the plasma membrane but not at the lysosome, resulting in cell cycle reentry. In a normal cell, activation of mTOR at the lysosome overrides the cell cycle reentry signal—creating an important regulatory mechanism for maintaining healthy neurons. However, lysosomal activation of mTOR is insulin dependent and thus affected by Aβ-induced insulin insensitivity. Amyloid oligomers, via insulin regulation, release the brakes on a cascade of events driven by p-tau that leads to cell cycle reentry and cell death.
Hallmark dysfunction produced by Aβ is dependent on tau. Pathological Aβ drives the formation of p-tau in the brain, resulting in synaptic dysfunction, cell death, and broad neurocognitive symptoms. This process can be influenced by a range of factors including genetic predisposition, environmental risk factors, and biochemical signaling pathways. (Image courtesy of George Bloom)
Khalid Iqbal from the New York State Institute for Basic Research in Developmental Disabilities described research showing that p-tau spreads through the brain in a rodent model, well beyond the injection site, in a prion-like manner, and that the spread of pathology can be mitigated by the addition of PP2A—a phosphatase known to be decreased in gray and white matter in AD. PP2A regulation is affected in AD, stroke, and brain acidosis, providing a link between these disorders and tau pathology.
Discussion of the pathophysiology of AD commonly focuses on Aβ plaques and neurofibrillary tangles (NFTs) composed of misassembled hyperphosphorylated tau; it has generally been thought that these plaques and tangles are the primary causes of symptoms. However, recent evidence indicates that oligomeric variants of tau are actually far more toxic than the form of tau present in NFTs. Michael Hutton from Eli Lilly and Company studies the properties needed for tau to become pathological. He used animal models to show that the abnormal p-tau “seed,” from which a prion-like spread develops, must be of a high molecular weight (with at minimum three tau units) and highly phosphorylated to induce healthy tau to become pathological. These characteristics are necessary but not sufficient for effective seeding. There is also evidence that tau pathology propagates via an autocatalytic cycle of seeded aggregation and fragmentation.
Propagation, in addition to requiring a large number of p-tau units in aggregates, may be affected by the isomerization of those monomers. Kun Ping Lu from Harvard Medical School provided data suggesting that cis but not trans pT231-tau is a precursor of tauopathy, linking TBI to the later development of neurodegenerative diseases such as chronic traumatic encephalopathy and AD. He demonstrated a role for Pin1, a phosphorylation-specific prolyl isomerase, in this process using animal models of TBI and AD. Pin1, which is regulated in response to stress, prevents the accumulation of toxic cis p-tau by converting it to the trans isoform, but this process is inhibited in AD and TBI. Lu showed that cis p-tau’s ability to cause and spread neurodegeneration can be blocked by a cis p-tau monoclonal antibody in vitro and in animal models, pointing to the therapeutic potential of targeting cis p-tau for treatment of TBI and AD.
Culturing p-tau seeds in vitro produces a broad array of tau aggregate structures. Marc Diamond from the University of Texas Southwestern Medical Center discussed the diverse structures produced by different tau seeds, which his team has studied in a series of experiments using in vitro models, animal models, and human postmortem analyses. His lab showed that distinct conformations of aggregate seeds propagate stably, infecting normal cells and leading them to acquire abnormal tau aggregates with distinct, reproducible structures and different biochemical properties. In another study, the team showed that the morphology of the p-tau aggregates was related to diagnosis. Seeds sourced from postmortem human tissue produced reliable phenotypes in culture, which tracked with different diagnoses, retroactively predicting biological outcome. Thus, the characteristics of the p-tau seed have a large influence on the biological outcome, providing a new prospect for presymptomatic diagnosis.
Tau seeds obtained from postmortem brain tissue from AD, argyrophilic grain disease (AGD), corticobasal degeneration (CBD), Pick’s disease (PiD), and progressive supranuclear palsy (PSP) produce unique aggregate pathologies in cell culture, including toxic, mosaic, ordered, disordered, and speckled. AD-derived seeds largely produce the speckled phenotype. (Images courtesy of Marc Diamond)
With the mechanisms by which p-tau forms, converts healthy tau, and seeds dysfunction established, the question of how p-tau exits the cell and moves through the brain arises. The pattern of spread and the speed with which the pathology progresses suggests that p-tau propagates trans-synaptically. Nicole Leclerc from the University of Montreal provided evidence to support this view. It is likely, her lab has shown, that tau is secreted and taken up by neurons in an active process, in response to neuronal activity. Tau secretion in vitro increases under conditions such as starvation and lysosomal dysfunction, phenomena found in the early stages of AD. Moreover, hyperphosphorylation appears to increase the targeting of tau to the secretory pathway, potentially accelerating the spread of p-tau. Intriguingly, however, the extracellular tau is hypophosphorylated, suggesting large-scale dephosphorylation during the secretory process. This hypo-tau may activate muscarinic acetylcholine receptors, increasing intracellular Ca2+ and promoting cell death.
These findings suggest that the synapse plays a critical role in the development of AD; the extrasynaptic environment is known to be exquisitely regulated by microglia. Studies of neurodegenerative disorders often focus on neurons, but genetic studies have repeatedly identified changes in the expression of microglial genes in AD, including in one of the leading AD candidate genes, TREM2, demonstrating a fundamental contribution of these cells to AD. Richard Ransohoff of Biogen discussed the importance of this cell type. Microglia enter the brain at around embryonic day (E) 9.5 in rodents and are crucially involved in maintaining brain health. During development the cells play a major role in the large-scale synaptic pruning required for effective neural maturation. They are also highly responsive to the environment, and stress in adulthood can reengage microglial synaptic pruning—a process that is adaptive during development but maladaptive in adulthood. The process is regulated by complement cascades: TGF-β expressed by astrocytes drives neurons to express C1q presynaptically, initiating the accumulation of complement elements at the site and ultimately activating microglia to prune the synaptic connection. In AD, inappropriate activation of this cascade may lead to the removal of otherwise healthy connections. Ransohoff described a role for CX3CR1, the fractalkine receptor, in regulating the reactivity of microglia, and thus mitigating pruning of adult synapses. Regulation of microglial reactivity is driven by epigenetically induced changes in inflammatory response genes. Correspondingly, in the absence of CX3CR1, tau pathology is aggravated in htau mice (which express human tau isoforms), suggesting a protective effect of the CX3CR1 pathway. Ransohoff closed with the caveat that microglia are not intrinsically helpful or harmful; their properties are context dependent and must be unraveled by empirical observations in appropriate models.
Peter Davies from the Feinstein Institute for Medical Research discussed the need to better incorporate current knowledge into research model design, particularly to develop monoclonal antibodies for the treatment of AD. Monoclonal antibodies are a promising strategy, but translating preclinical findings into successful clinical outcomes will require careful consideration of the context of the early research. Most transgenic animal models for AD express p-tau in all neurons, but such extensive p-tau spread is not found in human AD brains. There are several hurdles to determine the drugs’ efficacy and safety in humans; it is difficult to assess specificity and find appropriate dosages. In a series of studies with a focus on external reproducibility, Davies presented evidence from animal models showing that immunotherapy can block the spread of p-tau but cannot undo pathology already present in the brain. In the htau mouse model several putative antibodies lacked efficacy and in some cases appeared to worsen pathology. These findings underscore the need for both better models and improved understanding of mechanisms of action before moving drugs to the clinic.
The New York Academy of Sciences. Alzheimer’s Disease and Tau: Pathogenic Mechanisms and Therapeutic Approaches. Academy eBriefings. 2015. Available at: www.nyas.org/Tau2015-eB
One in 15 adults has moderate to severe obstructive sleep apnea, a disorder in which a person’s breathing is frequently interrupted during sleep — as many as 30 times per hour.
People with sleep apnea also often report problems such as poor concentration, difficulty with memory and decision-making, depression, and stress.
According to new research from the UCLA School of Nursing, published online in the Journal of Sleep Research, people with sleep apnea show significant changes in the levels of two important brain chemicals, which could be a reason that many have symptoms that impact their day-to-day lives.
UCLA researchers looked at levels of these neurotransmitters — glutamate and gamma-aminobutyric acid, known as GABA — in a brain region called the insula, which integrates signals from higher brain regions to regulate emotion, thinking and physical functions such as blood pressure and perspiration. They found that people with sleep apnea had decreased levels of GABA and unusually high levels of glutamate.
GABA is a chemical messenger that acts as an inhibitor in the brain, which can slow things down and help to keep people calm — like a brake pedal. GABA affects mood and helps make endorphins.
Glutamate, by contrast, is like an accelerator; when glutamate levels are high, the brain is working in a state of stress, and consequently doesn’t function as effectively. High levels of glutamate can also be toxic to nerves and neurons.
“In previous studies, we’ve seen structural changes in the brain due to sleep apnea, but in this study we actually found substantial differences in these two chemicals that influence how the brain is working,” said Paul Macey, the lead researcher on the study and an associate professor at the UCLA School of Nursing.
Macey said the researchers were taken aback by the differences in the GABA and glutamate levels.
“It is rare to have this size of difference in biological measures,” Macey said. “We expected an increase in the glutamate, because it is a chemical that causes damage in high doses and we have already seen brain damage from sleep apnea. What we were surprised to see was the drop in GABA. That made us realize that there must be a reorganization of how the brain is working.”
Macey said the study’s results are, in a way, encouraging. “In contrast with damage, if something is working differently, we can potentially fix it.”
The link between sleep apnea and changes in the state of the brain is important news for clinicians, Macey said.
“What comes with sleep apnea are these changes in the brain, so in addition to prescribing continuous positive airway pressure, or CPAP — a machine used to help an individual sleep easier, which is the gold standard treatment for sleep disturbance — physicians now know to pay attention to helping their patients who have these other symptoms,” Macey said. “Stress, concentration, memory loss — these are the things people want fixed.”
In future studies, the researchers hope to determine whether treating the sleep apnea — using CPAP or other methods — returns patients’ brain chemicals back to normal levels. If not, they will turn to the question of what treatments could be more effective. They are also studying the impacts of mindfulness exercises to see if they can reduce glutamate levels by calming the brain.
Results of exploratory whole-brain analysis. Parts (a) and (b) illustrate the results of an exploratory whole brain analysis, showing regions (red) where gray matter volume may be associated with fitness percentile or memory accuracy, respectively. Results are depicted within the group average brain. (credit: Andrew S. Whiteman et al./NeuroImage)
Young adults who have greater aerobic fitness also have a greater volume of the entorhinal cortex, an area of the brain responsible for memory, Boston University School of Medicine (BUSM) researchers have found.
While aerobic fitness was not directly associated with performance on a recognition memory task, participants with a larger entorhinal cortex did perform better on that task.
The entorhinal cortex is a brain area known to show early pathology in Alzheimer’s disease, which is characterized by profound memory impairment.
The researchers recruited healthy young adults (ages 18-35 years) who underwent a treadmill test to measure aerobic capacity. During this test, the amount of oxygen and carbon dioxide in the participants’ breath as they walked or ran on a treadmill was measured.
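The breath measurements described above feed a standard calculation of oxygen uptake. As a rough illustration only (not the study's actual analysis pipeline, and omitting the nitrogen correction that real metabolic carts apply), oxygen uptake can be sketched as minute ventilation times the inspired-minus-expired O2 fraction; all numbers below are invented:

```python
# Simplified oxygen-uptake calculation of the kind used in graded
# treadmill tests. This sketch ignores the Haldane (nitrogen)
# correction applied by real metabolic carts; values are invented.
def vo2_ml_per_kg_min(ve_l_min, fio2, feo2, mass_kg):
    """Oxygen uptake from minute ventilation (L/min) and the
    inspired/expired O2 fractions, scaled to body mass."""
    vo2_l_min = ve_l_min * (fio2 - feo2)  # liters of O2 extracted per minute
    return vo2_l_min * 1000.0 / mass_kg   # convert to ml/kg/min

# Invented peak-exercise values for a 70 kg subject:
# 100 L/min ventilation, room-air O2 in, 17% O2 out.
print(round(vo2_ml_per_kg_min(100.0, 0.2093, 0.17, 70.0), 1))  # → 56.1
```

A value in the mid-50s ml/kg/min would correspond to a well-trained young adult, which is why maximal tests push subjects to the ventilation rates assumed here.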
Participants then underwent magnetic resonance imaging and performed a recognition memory task. Entorhinal and hippocampal volumes were determined using a method known as voxel-based morphometry, and regression analysis was then used to examine whether recognition memory and aerobic fitness predicted brain volumes.
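The regression step described above, predicting regional brain volume from fitness and memory scores, can be sketched with ordinary least squares. Everything here (variable names, units, and the simulated data) is hypothetical, not the study's actual data or code:

```python
# Toy sketch of regressing regional volume on fitness and memory.
# Data are simulated; the study used voxel-based morphometry outputs.
import numpy as np

rng = np.random.default_rng(0)
n = 33  # sample size matching the study's N

fitness = rng.uniform(25, 60, n)   # hypothetical VO2 max (ml/kg/min)
memory = rng.uniform(0.5, 1.0, n)  # hypothetical recognition accuracy
# Toy generative model: volume rises with both predictors, plus noise.
volume = 3000 + 8.0 * fitness + 400.0 * memory + rng.normal(0, 5, n)

# Ordinary least squares: volume ~ intercept + fitness + memory
X = np.column_stack([np.ones(n), fitness, memory])
coefs, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(coefs)  # estimates close to the generating coefficients
```

With real voxel-based morphometry data the same model is fit at each voxel or region, and the fitness and memory coefficients are then tested for significance.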
Effects of aerobic exercise
“Our results suggest that aerobic exercise may have a positive effect on the medial temporal lobe memory system (which includes the entorhinal cortex) in healthy young adults. This suggests that exercise training, when designed to increase aerobic fitness, might have a positive effect on the brain in healthy young adults,” explained corresponding author and principal investigator Karin Schon, PhD, BUSM assistant professor of anatomy and neurobiology.
Researchers said this work could support previous studies that suggest aerobic exercise may forestall cognitive decline in older individuals at risk of dementia, and extends the idea that exercise may be beneficial for brain health to younger adults. “This is critical given that obesity, which has recently been linked with cognitive deficits in young and middle-aged adults, and physical inactivity are on the rise in young adults,” Schon said.
These findings appear in the journal NeuroImage.
Abstract of Entorhinal volume, aerobic fitness, and recognition memory in healthy young adults: A voxel-based morphometry study
Converging evidence supports the hypothesis that the effects of aerobic exercise and environmental enrichment are beneficial for cognition, in particular for hippocampus-supported learning and memory. Recent work in humans suggests that exercise training induces changes in hippocampal volume, but it is not known if aerobic exercise and fitness also impact the entorhinal cortex. In animal models, aerobic exercise increases expression of growth factors, including brain derived neurotrophic factor (BDNF). This exercise-enhanced expression of growth factors may boost synaptic plasticity, and neuronal survival and differentiation, potentially supporting function and structure in brain areas including but not limited to the hippocampus. Here, using voxel based morphometry and a standard graded treadmill test to determine cardio-respiratory fitness (Bruce protocol; VO2 max), we examined if entorhinal and hippocampal volumes were associated with cardio-respiratory fitness in healthy young adults (N = 33). In addition, we examined if volumes were modulated by recognition memory performance and by serum BDNF, a putative marker of synaptic plasticity. Our results show a positive association between volume in right entorhinal cortex and cardio-respiratory fitness. In addition, average gray matter volume in the entorhinal cortex, bilaterally, was positively associated with memory performance. These data extend prior work on the cerebral effects of aerobic exercise and fitness to the entorhinal cortex in healthy young adults thus providing compelling evidence for a relationship between aerobic fitness and structure of the medial temporal lobe memory system.
Two comprehensive reviews found little evidence of an intensity threshold for changes in HDL cholesterol, LDL cholesterol, or triglycerides, although most studies did not control for exercise volume, frequency, and/or duration, and were conducted at intensities ≥40% VO2max [17]. The American College of Sports Medicine (ACSM) recommends that most adults engage in moderate-intensity cardio-respiratory exercise for at least 30 min/day, at least 5 days per week, for a total of at least 150 min of exercise per week.
Higher levels of physical exercise are associated with lower total visceral, liver, and intramuscular fat; in twin studies, the physically active twin has on average 50% less visceral fat and 25% less subcutaneous abdominal fat than the inactive co-twin.
[17] Edwards AM, Clark N, Macfadyen AM. Lactate and ventilatory thresholds reflect the training status of professional soccer players where maximum aerobic power is unchanged. J Sports Sci Med 2003; 2: 23-29.
Video games used in the experiment : screenshot of 2-D Angry Birds (left) and Super Mario 3D World (right) (credit: Gregory D. Clemenson and Craig E.L. Stark/The Journal of Neuroscience)
Playing three-dimensional video games can boost the formation of memories, especially for people who lose memory as they age or suffer from dementia, according to University of California, Irvine (UCI) neurobiologists.
Craig Stark and Dane Clemenson of UCI’s Center for the Neurobiology of Learning & Memory recruited non-gamer college students to play either a video game with a passive, two-dimensional environment (“Angry Birds”) or one with an intricate, 3-D setting (“Super Mario 3D World”) for 30 minutes per day over two weeks.
Before and after the two-week period, the students took memory tests that engaged the brain’s hippocampus, the region associated with complex learning and memory. They were given a series of pictures of everyday objects to study. Then they were shown images of the same objects, new ones, and others that differed slightly from the original items and asked to categorize them.
Students playing the 3-D video game improved their scores on the memory test by about 12 percent, roughly the amount by which this ability normally declines between the ages of 45 and 70, while the 2-D gamers did not improve.
Recognition of the slightly altered images requires the hippocampus, Stark said, and his earlier research had demonstrated that the ability to do this clearly declines with age. This is a large part of why it’s so difficult to learn new names or remember where you put your keys as you get older.
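Old/similar/new tasks of the kind described above are commonly scored with a lure discrimination index: the rate at which true lures are correctly called "similar", corrected for any bias to call brand-new items "similar". A minimal sketch, with invented responses rather than the study's data or exact scoring rule:

```python
# Hedged sketch of a lure-discrimination score for an old/similar/new
# recognition task. Responses are invented for illustration; the
# published task and scoring may differ in detail.
def lure_discrimination_index(responses):
    """responses: list of (item_type, answer) pairs, where both
    item_type and answer are one of "old", "similar", "new"."""
    def similar_rate(item_type):
        answers = [a for t, a in responses if t == item_type]
        return sum(a == "similar" for a in answers) / len(answers)
    # "Similar" calls to true lures, minus "similar" calls to
    # brand-new items (a response-bias correction).
    return similar_rate("similar") - similar_rate("new")

demo = [("similar", "similar"), ("similar", "old"),
        ("new", "new"), ("new", "similar")]
print(lure_discrimination_index(demo))  # 0.5 - 0.5 = 0.0
```

A score near zero, as in the toy data, means the subject cannot tell lures from genuinely new items; discriminating the two is the hippocampus-dependent ability said to decline with age.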
In previous studies on rodents, postdoctoral scholar Clemenson and others showed that exploring the environment resulted in the growth of new neurons that became entrenched in the hippocampus’ memory circuit and increased neuronal signaling networks. Stark noted some commonalities between the 3-D game the humans played and the environment the rodents explored — qualities lacking in the 2-D game. “First, the 3-D games have … a lot more spatial information in there to explore. Second, they’re much more complex, with a lot more information to learn,” Stark noted.
Stark added that it’s unclear whether the overall amount of information and complexity in the 3-D game or the spatial relationships and exploration is stimulating the hippocampus. “This is one question we’re following up on,” he said.
Myths of “brain training”
“Results from this study add to the existing literature that playing video games may provide meaningful stimulation to the brain. However, it is important to be cautious when generalizing these results to other instances. Recently, 70 neuroscientists from universities and institutions around the world published a letter discussing the myths of “brain training” (Max Planck Institute for Human Development/Stanford Center on Longevity, 2014. A consensus on the brain training industry from the scientific community. Stanford, CA: Stanford Center on Longevity).
“In contrast to typical brain training, typical video games are not created with specific cognitive processes in mind but rather designed to captivate and immerse the user into characters and adventure. Rather than isolate single brain processes, modern video games can naturally draw on or require many cognitive processes, including visual, spatial, emotional, motivational, attentional, critical thinking, problem solving, and working memory. It’s quite possible that by explicitly avoiding a narrow focus on a single … cognitive domain and by more closely paralleling natural experience, immersive video games may be better suited to provide enriching experiences that translate into functional gains.”
— Gregory D. Clemenson and Craig E.L. Stark. Virtual Environmental Enrichment through Video Games Improves Hippocampal-Associated Memory. The Journal of Neuroscience.
The next step is to determine if environmental enrichment — either through 3-D video games or real-world exploration experiences — can reverse the hippocampal-dependent cognitive deficits present in older populations.
“Can we use this video game approach to help improve hippocampus functioning?” Stark asked. “It’s often suggested that an active, engaged lifestyle can be a real factor in stemming cognitive aging. While we can’t all travel the world on vacation, we can do many other things to keep us cognitively engaged and active. Video games may be a nice, viable route.”
The research is described in a paper published today (Dec. 9) in The Journal of Neuroscience and is funded by a $300,000 Dana Foundation grant.
The positive effects of environmental enrichment and their neural bases have been studied extensively in the rodent (van Praag et al., 2000). For example, simply modifying an animal’s living environment to promote sensory stimulation can lead to (but is not limited to) enhancements in hippocampal cognition and neuroplasticity and can alleviate hippocampal cognitive deficits associated with neurodegenerative diseases and aging. We are interested in whether these manipulations that successfully enhance cognition (or mitigate cognitive decline) have similar influences on humans. Although there are many “enriching” aspects to daily life, we are constantly adapting to new experiences and situations within our own environment on a daily basis. Here, we hypothesize that the exploration of the vast and visually stimulating virtual environments within video games is a human correlate of environmental enrichment. We show that video gamers who specifically favor complex 3D video games performed better on a demanding recognition memory task that assesses participants’ ability to discriminate highly similar lure items from repeated items. In addition, after 2 weeks of training on the 3D video game Super Mario 3D World, naive video gamers showed improved mnemonic discrimination ability and improvements on a virtual water maze task. Two control conditions (passive, and training in the 2D game Angry Birds) showed no such improvements. Furthermore, individual performance in both hippocampal-associated behaviors correlated with performance in Super Mario but not Angry Birds, suggesting that how individuals explored the virtual environment may influence hippocampal behavior.
SIGNIFICANCE STATEMENT The hippocampus has long been associated with episodic memory and is commonly thought to rely on neuroplasticity to adapt to the ever-changing environment. In animals, it is well understood that exposing animals to a more stimulating environment, known as environmental enrichment, can stimulate neuroplasticity and improve hippocampal function and performance on hippocampally mediated memory tasks. Here, we suggest that the exploration of vast and visually stimulating environments within modern-day video games can act as a human correlate of environmental enrichment. Training naive video gamers in a rich 3D, but not 2D, video game, resulted in a significant improvement in hippocampus-associated cognition using several behavioral measures. Our results suggest that modern day video games may provide meaningful stimulation to the human hippocampus.
One quote stood out right away about “brain games”: “… In fact, the notion that performance on a single task cannot stand in for an entire ability is a cornerstone of scientific psychology …”
But playing a challenging 3-D video game or exploring a virtual reality like Second Life is far more “educational” and experiential, involving all sorts of cognitive (and, for that matter, psycho-motor and affective) skill domains. Good for the brain and often good for the spirit too.
As the baby boomers enter their golden years with mounting concerns about the potential loss of cognitive abilities, markets are responding with products promising to allay anxieties about potential decline. Computer-based cognitive-training software, popularly known as brain games, claims a growing share of the marketplace. The promotion of these products reassures and entices a worried public.
Consumers are told that playing brain games will make them smarter, more alert, and able to learn faster and better. In other words, the promise is that if you adhere to a prescribed regimen of cognitive exercise, you will reduce cognitive slowing and forgetfulness, and will fundamentally improve your mind and brain.
It is customary for advertisers to highlight the benefits and overstate the potential advantages of their products. In the brain-game market, advertisements also reassure consumers that claims and promises are based on solid scientific evidence, as the games are “designed by neuroscientists” at top universities and research centers. Some companies present lists of credentialed scientific consultants and keep registries of scientific studies pertinent to cognitive training. Often, however, the cited research is only tangentially related to the scientific claims of the company and to the games they sell. In addition, even published peer-reviewed studies merit critical evaluation. A prudent approach calls for integrating findings over a body of research rather than relying on single studies that often include only a small number of participants.
The Stanford Center on Longevity and the Berlin Max Planck Institute for Human Development gathered many of the world’s leading cognitive psychologists and neuroscientists, people who have dedicated their careers to studying the aging mind and brain, to share their views about brain games and offer a consensus report to the public. What do expert scientists think about these claims and promises? Do they have specific recommendations for effective ways to boost cognition in healthy, older adults? Are there merits to the claimed benefits of the brain games and, if so, do older adults benefit from brain-game learning in the same ways younger people do? How large are the gains associated with computer-based cognitive exercises? Are the gains restricted to specific skills, or does general cognitive aptitude improve? How does playing games compare with other proposed means of mitigating age-related declines, such as physical activity and exercise, meditation, or social engagement?
The search for effective means of mitigating or postponing age-related cognitive declines has taught most of us to recognize the enormous complexity of the subject matter. Like many challenging scientific topics, this is a devil of many details. The consensus of the group is that claims promoting brain games are frequently exaggerated and at times misleading. Cognitive training produces statistically significant improvement in practiced skills that sometimes extends to improvement on other cognitive tasks administered in the lab. In some studies, such gains endure, while other reports document dissipation over time. In commercial promotion, these small, narrow, and fleeting advances are often billed as general and lasting improvements of mind and brain. The aggressive advertising entices consumers to spend money on products and to take up new behaviors, such as gaming, based on these exaggerated claims. As frequently happens, initial findings, based on small samples, generate understandable excitement by suggesting that some brain games may enhance specific aspects of behavior and even alter related brain structures and functions. However, as the findings accumulate, compelling evidence of general and enduring positive effects on the way people’s minds and brains age has remained elusive.
These conclusions do not mean that the brain does not remain malleable, even in old age. Any mentally effortful new experience, such as learning a language, acquiring a motor skill, navigating in a new environment, and, yes, playing commercially available computer games, will produce changes in those neural systems that support acquisition of the new skill. For example, there may be an increase in the number of synapses, the number of neurons and supporting cells, or a strengthening of the connections among them. This type of brain plasticity is possible throughout the life span, though younger brains seem to have an advantage over older ones. It would be appropriate to conclude from such work that the potential to learn new skills remains intact throughout the life span. However, at this point it is not appropriate to conclude that training-induced changes go significantly beyond the learned skills, that they affect broad abilities with real-world relevance, or that they generally promote “brain health”.
As we take a closer look at the evidence on brain games, one issue needs to be kept in mind: It is not sufficient to test the hypothesis of training-induced benefits against the assumption that training brings no performance increases at all. Rather, we need to establish that observed benefits are not easily and more parsimoniously explained by factors that are long known to benefit performance, such as the acquisition of new strategies or changes in motivation. It is well established, for example, that improvements on a particular memory task often result from subtle changes in strategy that reflect improvement in managing the demands of that particular task. Such improvement is rewarding for players (the fun factor) but does not imply a general improvement in memory. In fact, the notion that performance on a single task cannot stand in for an entire ability is a cornerstone of scientific psychology. Claims about brain games often ignore this tenet. In psychology, it is good scientific practice to combine information provided by many tasks to generate an overall index representing a given ability. According to the American Psychological Association, newly developed psychological tests must meet specific psychometric standards, including reliability and validity. The same standards should be extended into the brain game industry, but this is not the state of affairs today.
To date, there is little evidence that playing brain games improves underlying broad cognitive abilities, or that it enables one to better navigate a complex realm of everyday life. Some intriguing isolated reports do inspire additional research, however. For instance, some studies suggest that both non-computerized reasoning and computerized speed-of-processing training are associated with improved driving in older adults and a reduction in the number of accidents. Another study revealed, for a sample of younger adults, that 100 days of practicing 12 different computerized cognitive tasks resulted in small general improvements in the cognitive abilities of reasoning and episodic memory, some of which were maintained over a period of two years. In other studies, older adults have reported that they felt better about everyday functioning after cognitive training, but no objective measures supported that impression. Additional systematic research is needed to replicate, clarify, consolidate, and expand such results. To be fully credible, an empirical test of the usefulness of brain games needs to address the following questions. Does the improvement encompass a broad array of tasks that constitute a particular ability, or does it just reflect the acquisition of specific skills? Do the gains persist for a reasonable amount of time? Are the positive changes noticed in real life indices of cognitive health? What role do motivation and expectations play in bringing about improvements in cognition when they are observed?
In a balanced evaluation of brain games, we also need to keep in mind opportunity costs. Time spent playing the games is time not spent reading, socializing, gardening, exercising, or engaging in many other activities that may benefit cognitive and physical health of older adults. Given that the effects of playing the games tend to be task-specific, it may be advisable to train an activity that by itself comes with benefits for everyday life. Another drawback of publicizing computer games as a fix to deteriorating cognitive performance is that it diverts attention and resources from prevention efforts. The promise of a magic bullet detracts from the message that cognitive vigor in old age, to the extent that it can be influenced by the lives we live, reflects the long-term effects of a healthy and active lifestyle.
We also must keep in mind that studies reporting positive effects of brain games on cognition are more likely to be published than studies with null results (the so-called “file drawer effect”), such that even the available evidence is likely to draw an overly positive picture of the true state of affairs. Statistical methods such as meta-analysis, which integrates the results of many studies in a given field of inquiry, allow estimation of effect magnitude as well as the likelihood of the file-drawer effect. While some meta-analyses report small positive effects of training on cognition, others note substantial disparities in methodological rigor among the studies that cast doubt on any firm conclusion. Further, the problems that haunt individual studies do not simply disappear when results from such studies are summarized in a meta-analysis. In particular, the practice of assessing specific tests rather than broader assays of ability is just as problematic on the level of meta-analytic integration as it is on the level of individual studies.
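The inverse-variance pooling at the core of a fixed-effect meta-analysis is simple to state: each study's effect size is weighted by the reciprocal of its variance, so small, noisy studies count for less. A toy sketch with invented effect sizes (real meta-analyses add random-effects models, heterogeneity statistics, and file-drawer diagnostics on top of this):

```python
# Minimal fixed-effect (inverse-variance) pooling. Effect sizes and
# variances below are invented for illustration only.
def pooled_effect(effects, variances):
    """Return the inverse-variance-weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)  # variance of the pooled estimate
    return est, var

effects = [0.30, 0.10, 0.05]    # hypothetical standardized mean differences
variances = [0.04, 0.01, 0.01]  # larger variance = smaller, noisier study
est, var = pooled_effect(effects, variances)
print(round(est, 3))  # → 0.1
```

Note how the small study with the large effect (0.30) is down-weighted: the pooled estimate lands near the two larger, near-null studies, which is exactly why single small positive trials can be misleading on their own.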
In summary, research on aging has shown that the human mind is malleable throughout life span. In developed countries around the world, later-born cohorts live longer and reach old age with higher levels of cognitive functioning than those who were born in earlier times. When researchers follow people across their adult lives, they find that those who live cognitively active, socially connected lives and maintain healthy lifestyles are less likely to suffer debilitating illness and early cognitive decline in their golden years than their sedentary, cognitively and socially disengaged counterparts. The goal of research on the effectiveness of computer-based cognitive exercise is to provide experimental evidence to support or qualify these observations. Some of the initial results are promising and make further research highly desirable. However, at present, these findings do not provide a sound basis for the claims made by commercial companies selling brain games. Many scientists cringe at exuberant advertisements claiming improvements in the speed and efficiency of cognitive processing and dramatic gains in “intelligence”, in particular when these appear in otherwise trusted news sources. In the judgment of the signatories below, exaggerated and misleading claims exploit the anxiety of adults facing old age for commercial purposes. Perhaps the most pernicious claim, devoid of any scientifically credible evidence, is that brain games prevent or reverse Alzheimer’s disease.
In closing, we offer five recommendations. Some of these recommendations reflect experimental findings in human populations, whereas others are based on a synthesis of correlational evidence in humans and mechanistic knowledge about risks and protective factors.
Much more research needs to be done before we understand whether and what types of challenges and engagements benefit cognitive functioning in everyday life. In the absence of clear evidence, the recommendation of the group, based largely on correlational findings, is that individuals lead physically active, intellectually challenging, and socially engaged lives, in ways that work for them. Before investing time and money on brain games, consider what economists call opportunity costs: If an hour spent doing solo software drills is an hour not spent hiking, learning Italian, making a new recipe, or playing with your grandchildren, it may not be worth it. But if it replaces time spent in a sedentary state, like watching television, the choice may make more sense for you.
Physical exercise is a moderately effective way to improve general health, including brain fitness. Scientists have found that regular aerobic exercise increases blood flow to the brain, and helps to support formation of new neural and vascular connections. Physical exercise has been shown to improve attention, reasoning, and components of memory. All said, one can expect small but noticeable gains in cognitive performance, or attenuation of loss, from taking up aerobic exercise training.
A single study, conducted by researchers with financial interests in the product, or one quote from a scientist advocating the product, is not enough to assume that a game has been rigorously examined. Findings need to be replicated at multiple sites, based on studies conducted by independent researchers who are funded by independent sources. Moreover, participants of training programs should show evidence of significant advantage over a comparison group that does not receive the treatment but is otherwise treated exactly the same as the trained group.
No studies have demonstrated that playing brain games cures or prevents Alzheimer’s disease or other forms of dementia.
Do not expect that cognitively challenging activities will work like one-shot treatments or vaccines; there is little evidence that you can do something once (or even for a concentrated period) and be inoculated against the effects of aging in an enduring way. In all likelihood, gains won’t last long after you stop the challenge.
In summary: We object to the claim that brain games offer consumers a scientifically grounded avenue to reduce or reverse cognitive decline when there is no compelling scientific evidence to date that they do. The promise of a magic bullet detracts from the best evidence to date, which is that cognitive health in old age reflects the long-term effects of healthy, engaged lifestyles. In the judgment of the signatories, exaggerated and misleading claims exploit the anxiety of older adults about impending cognitive decline. We encourage continued careful research and validation in this field.