

Brain, learning and memory

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

March 23, 2016   Exploring long-range communications in the brain
http://www.kurzweilai.net/exploring-long-range-communications-in-the-brain

Red and green dots reveal a region in the brain that is very dense with synapses. An optically activated fluorescent protein allows Ofer Yizhar, PhD, and his group to record the activity of the synapses. (credit: Weizmann Institute of Science)

Weizmann Institute of Science researchers have devised a new way to track long-distance communications between nerve cells in different areas of the brain. They used optogenetic techniques (genetic engineering of neurons combined with laser light delivered through thin optical fibers) to temporarily silence long-range axons, effectively leading to a sustained “disconnect” between two distant brain nodes.

By observing what happens when crucial connections are disabled, the researchers could begin to determine the axons’ role in the brain. Mental and neurological diseases are often thought to result from changes in long-range brain connectivity, so these studies could contribute to a better understanding of the mechanisms behind health and disease in the brain.

The study, published in Nature Neuroscience, “led us to a deeper understanding of the unique properties of the axons and synapses that form the connections between neurons,” said Ofer Yizhar, PhD, in the Weizmann Institute of Science’s Neurobiology Department. “We were able to uncover the responses of axons to various optogenetic manipulations. Understanding these differences will be crucial to unraveling the mechanisms for long-distance communication in the brain.”


Abstract of Biophysical constraints of optogenetic inhibition at presynaptic terminals

We investigated the efficacy of optogenetic inhibition at presynaptic terminals using halorhodopsin, archaerhodopsin and chloride-conducting channelrhodopsins. Precisely timed activation of both archaerhodopsin and halorhodopsin at presynaptic terminals attenuated evoked release. However, sustained archaerhodopsin activation was paradoxically associated with increased spontaneous release. Activation of chloride-conducting channelrhodopsins triggered neurotransmitter release upon light onset. Thus, the biophysical properties of presynaptic terminals dictate unique boundary conditions for optogenetic manipulation.

 

DARPA’s ‘Targeted Neuroplasticity Training’ program aims to accelerate learning ‘beyond normal levels’

The transhumanism-inspired goal: train superspy agents to rapidly master foreign languages and cryptography
New DARPA “TNT” technology will be designed to safely and precisely modulate peripheral nerves to control synaptic plasticity during cognitive skill training. (No mention of NZT.) (credit: DARPA)

DARPA has announced a new program called Targeted Neuroplasticity Training (TNT) aimed at exploring how to use peripheral nerve stimulation and other methods to enhance learning.

DARPA already has research programs underway to use targeted stimulation of the peripheral nervous system as a substitute for drugs to treat diseases and accelerate healing*, to control advanced prosthetic limbs**, and to restore tactile sensation.

But now DARPA plans to take an even more ambitious step: It aims to enlist the body’s peripheral nerves to achieve something that has long been considered the brain’s domain alone: facilitating learning — specifically, training in a wide range of cognitive skills.

The goal is to reduce the cost and duration of the Defense Department’s extensive training regimen, while improving outcomes. If successful, TNT could accelerate learning and reduce the time needed to train foreign language specialists, intelligence analysts, cryptographers, and others.

“Many of these skills, such as understanding and speaking a new foreign language, can be challenging to learn,” says the DARPA statement. “Current training programs are time consuming, require intensive study, and usually require evidence of a more-than-minimal aptitude for eligibility. Thus, improving cognitive skill learning in healthy adults is of great interest to our national security.”

Going beyond normal levels of learning

The program is also notable because it will not just train; it will advance capabilities beyond normal levels — a transhumanist approach.

“Recent research has shown that stimulation of certain peripheral nerves, easily and painlessly achieved through the skin, can activate regions of the brain involved with learning,” by releasing neurochemicals in the brain that reorganize neural connections in response to specific experiences, explained TNT Program Manager Doug Weber.

“This natural process of synaptic plasticity is pivotal for learning, but much is unknown about the physiological mechanisms that link peripheral nerve stimulation to improved plasticity and learning,” Weber said. “You can think of peripheral nerve stimulation as a way to reopen the so-called ‘Critical Period’ when the brain is more facile and adaptive. TNT technology will be designed to safely and precisely modulate peripheral nerves to control plasticity at optimal points in the learning process.”

The goal is to optimize training protocols that expedite the pace of learning and maximize long-term retention of even the most complicated cognitive skills. DARPA intends to take a layered approach to exploring this new terrain:

  • Fundamental research will focus on gaining a clearer and more complete understanding of how nerve stimulation influences synaptic plasticity, how cognitive skill learning processes are regulated in the brain, and how to boost these processes to safely accelerate skill acquisition while avoiding potential side effects.
  • The engineering side of the program will target development of a non-invasive device that delivers peripheral nerve stimulation to enhance plasticity in brain regions responsible for cognitive functions.

Proposers Day

TNT expects to attract multidisciplinary teams spanning backgrounds such as cognitive neuroscience, neural plasticity, electrophysiology, systems neurophysiology, biomedical engineering, human performance, and computational modeling.

To familiarize potential participants with the technical objectives of TNT, DARPA will host a Proposers Day on Friday, April 8, 2016, at the Westin Arlington Gateway in Arlington, Va. (registration closes on Thursday, March 31, 2016). A DARPA Special Notice announces the Proposers Day and describes the specific capabilities sought. A Broad Agency Announcement with full technical details on TNT will be forthcoming. For more information, please email DARPA-SN-16-20@darpa.mil.

* DARPA’s ElectRx program is looking for “demonstrations of feedback-controlled neuromodulation strategies to establish healthy physiological states,” along with “disruptive biological-interface technologies required to monitor biomarkers and peripheral nerve activity … [and] deliver therapeutic signals to peripheral nerve targets, using in vivo, real-time biosensors and novel neural interfaces using optical, acoustic, electromagnetic, or engineered biology strategies to achieve precise targeting with potentially single-axon resolution.”

** DARPA’s HAPTIX (Hand Proprioception and Touch Interfaces) program “seeks to create a prosthetic hand system that moves and provides sensation like a natural hand. … HAPTIX technologies aim to tap in to the motor and sensory signals of the arm, allowing users to control and sense the prosthesis via the same neural signaling pathways used for intact hands and arms. … The system will include electrodes for measuring prosthesis control signals from muscles and motor nerves, and sensory feedback will be delivered through electrodes placed in sensory nerves.”

 

Fading of Epigenetic Memories across Generations Is Regulated
http://www.genengnews.com/gen-news-highlights/fading-of-epigenetic-memories-across-generations-is-regulated/81252537

  • Epigenetic “remembering” is better understood than epigenetic “forgetting,” and so it is an open question whether epigenetic forgetting is, like epigenetic remembering, active—a distinct biomolecular process—or passive—a matter of dilution or decay. New research, however, suggests that epigenetic forgetting is an active process, one in which a feedback mechanism determines the duration of transgenerational epigenetic memories.

    The new research comes out of Tel Aviv University, where researchers have been working with the nematode worm Caenorhabditis elegans to elucidate epigenetic mechanisms. In particular, the researchers, led by Oded Rechavi, Ph.D., have been preoccupied with how the effects of stress, trauma, and other environmental exposures are passed from one generation to the next.

    In previous work, Dr. Rechavi’s team enhanced the state of knowledge of small RNA molecules, short sequences of RNA that regulate the expression of genes. The team identified a “small RNA inheritance” mechanism through which small RNAs are produced in response to the needs of specific cells and regulated across generations.

    “We previously showed that worms inherited small RNAs following the starvation and viral infections of their parents. These small RNAs helped prepare their offspring for similar hardships,” Dr. Rechavi explained. “We also identified a mechanism that amplified heritable small RNAs across generations, so the response was not diluted. We found that enzymes called RdRPs [RNA-dependent RNA polymerases] are required for re-creating new small RNAs to keep the response going in subsequent generations.”

    Most inheritable epigenetic responses in C. elegans were found to persist for only a few generations. This created the assumption that epigenetic effects simply “petered out” over time, through a process of dilution or decay. “But this assumption,” said Dr. Rechavi, “ignored the possibility that this process doesn’t simply die out but is regulated instead.”

    This possibility was explored in the current study, in which C. elegans were treated with small RNAs that target the GFP (green fluorescent protein) gene, a reporter gene commonly used in experiments. “By following heritable small RNAs that regulated GFP—that ‘silenced’ its expression—we revealed an active, tunable inheritance mechanism that can be turned ‘on’ or ‘off,'” declared Dr. Rechavi.

    Details of the work appeared March 24 in the journal Cell, in an article entitled, “A Tunable Mechanism Determines the Duration of the Transgenerational Small RNA Inheritance in C. elegans.” The article shows that exposure to double-stranded RNA (dsRNA) activates a feedback loop whereby gene-specific RNA interference (RNAi) responses “dictate the transgenerational duration of RNAi responses mounted against unrelated genes, elicited separately in previous generations.”

    Essentially, amplification of heritable exo-siRNAs occurs at the expense of endo-siRNAs. Also, a feedback between siRNAs and RNAi genes determines heritable silencing duration.

    “RNA-sequencing analysis reveals that, aside from silencing of genes with complementary sequences, dsRNA-induced RNAi affects the production of heritable endogenous small RNAs, which regulate the expression of RNAi factors,” wrote the authors of the Cell paper. “Manipulating genes in this feedback pathway changes the duration of heritable silencing.”

    The scientists also indicated that specific genes, which they named MOTEK (Modified Transgenerational Epigenetic Kinetics), were involved in turning on and off epigenetic transmissions.

    “We discovered how to manipulate the transgenerational duration of epigenetic inheritance in worms by switching ‘on’ and ‘off’ the small RNAs that worms use to regulate genes,” said Dr. Rechavi. “These switches are controlled by a feedback interaction between gene-regulating small RNAs, which are inheritable, and the MOTEK genes that are required to produce and transmit these small RNAs across generations.

    “The feedback determines whether epigenetic memory will continue to the progeny or not, and how long each epigenetic response will last.”

    Although its research was conducted on worms, the team believes that understanding the principles that control the inheritance of epigenetic information is crucial for constructing a comprehensive theory of heredity for all organisms, humans included.

    “We are now planning to study the MOTEK genes to know exactly how these genes affect the duration of epigenetic effects,” said Leah Houri-Ze’evi, a Ph.D. student in Dr. Rechavi’s lab and first author of the paper. “Moreover, we are planning to examine whether similar mechanisms exist in humans.”

    The current study notes that the active control of transgenerational effects could be adaptive, because ancestral responses would be detrimental if the environments of the progeny and the ancestors were different.

    A Tunable Mechanism Determines the Duration of the Transgenerational Small RNA Inheritance in C. elegans

    Leah Houri-Ze’evi, Yael Korem, Hila Sheftel,…, Luba Degani, Uri Alon, Oded Rechavi
    Cell 24 March 2016; Volume 165, Issue 1, p88–99.  http://dx.doi.org/10.1016/j.cell.2016.02.057
    Highlights
  • New RNAi episodes extend the duration of heritable epigenetic effects
  • Amplification of heritable exo-siRNAs occurs at the expense of endo-siRNAs
  • A feedback between siRNAs and RNAi genes determines heritable silencing duration
  • Modified transgenerational epigenetic kinetics (MOTEK) mutants are identified


In C. elegans, small RNAs enable transmission of epigenetic responses across multiple generations. While RNAi inheritance mechanisms that enable “memorization” of ancestral responses are being elucidated, the mechanisms that determine the duration of inherited silencing and the ability to forget the inherited epigenetic effects are not known. We now show that exposure to dsRNA activates a feedback loop whereby gene-specific RNAi responses dictate the transgenerational duration of RNAi responses mounted against unrelated genes, elicited separately in previous generations. RNA-sequencing analysis reveals that, aside from silencing of genes with complementary sequences, dsRNA-induced RNAi affects the production of heritable endogenous small RNAs, which regulate the expression of RNAi factors. Manipulating genes in this feedback pathway changes the duration of heritable silencing. Such active control of transgenerational effects could be adaptive, since ancestral responses would be detrimental if the environments of the progeny and the ancestors were different.
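To make the idea of a tunable, feedback-controlled duration concrete, here is a toy difference-equation model in Python. It is purely illustrative and not taken from the paper: the variables, rates, and threshold are invented, and it only captures the qualitative idea that amplifying heritable exo-siRNAs at the expense of the endogenous siRNAs that sustain the RNAi machinery can set how many generations silencing lasts.

```python
# Toy model (not from the paper): a feedback between heritable exo-siRNAs and
# the endogenous siRNAs that sustain the RNAi machinery sets the number of
# generations an inherited silencing response persists.

def simulate_silencing(generations=10, exo=1.0, endo=1.0,
                       amp_rate=0.6, competition=0.3):
    """Return the exo-siRNA level in each generation."""
    history = []
    for _ in range(generations):
        history.append(exo)
        machinery = endo                           # RNAi-factor output tracks endo-siRNAs
        new_exo = amp_rate * machinery * exo       # RdRP-driven re-amplification
        new_endo = endo - competition * new_exo    # amplification drains the shared pool
        exo, endo = max(new_exo, 0.0), max(new_endo, 0.0)
    return history

levels = simulate_silencing()
threshold = 0.2                                    # arbitrary "still silenced" cutoff
print("exo-siRNA per generation:", [round(x, 3) for x in levels])
print("silencing persists for", sum(lvl > threshold for lvl in levels), "generations")
```

Raising the competition parameter shortens the heritable response, while lowering it extends it, which is the kind of tunable "forgetting" the study describes, in a deliberately cartoonish form.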

Neurons involved in working memory fire in bursts, not continuously

How we are able to keep several things simultaneously in working memory
Pictured is an artist’s interpretation of neurons firing in sporadic, coordinated bursts. “By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” Earl Miller says. (credit: Jose-Luis Olivares/MIT)

Think of a sentence you just read. Like that one. You’re now using your working memory, a critical brain system that’s roughly analogous to RAM in a computer.

Neuroscientists have believed that as information is held in working memory, brain cells associated with that information must be firing continuously. Not so — they fire in sporadic, coordinated bursts, says Earl Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences.

That makes sense. These different bursts could help the brain hold multiple items in working memory at the same time, according to the researchers. “By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” says Miller, the senior author of a study that appears in the March 17 issue of Neuron.

Bursts of activity, not averaged activity

So why hasn’t anyone noticed this before? Because previous studies averaged the brain’s activity over seconds or even minutes of performing the task, Miller says. “We looked more closely at this activity, not by averaging across time, but from looking from moment to moment. That revealed that something way more complex is going on.”

To do that, Miller and his colleagues recorded neuron activity in animals as they were shown a sequence of three colored squares, each in a different location. Then, the squares were shown again, but one of them had changed color. The animals were trained to respond when they noticed the square that had changed color — a task requiring them to hold all three squares in working memory for about two seconds.

The researchers found that as items were held in working memory, ensembles of neurons in the prefrontal cortex were active in brief bursts, and these bursts only occurred in recording sites in which information about the squares was stored. The bursting was most frequent at the beginning of the task, when the information was encoded, and at the end, when the memories were read out.

The findings fit well with a model that study co-author Mikael Lundqvist had developed as an alternative to the model of sustained activity as the neural basis of working memory. According to the new model, information is stored in rapid changes in the synaptic strength of the neurons. The brief bursts serve to “imprint” information in the synapses of these neurons, and the bursts reoccur periodically to reinforce the information as long as it is needed.

The bursts create waves of coordinated activity at the gamma frequency (45 to 100 hertz), like the ones that were observed in the data. These waves occur sporadically, with gaps between them, and each ensemble of neurons, encoding a specific item, produces a different burst of gamma waves, like a fingerprint.
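To make the averaging point concrete, here is a small illustrative Python sketch (not the study's analysis code): synthetic trials each contain a brief 60 Hz burst at a random time, and averaging the gamma-band envelope across trials smears these transients out, whereas single-trial envelopes show clear, discrete bursts. The frequencies, burst duration, and amplitudes are arbitrary choices for illustration.

```python
# Illustrative sketch: why averaging across trials can make brief gamma bursts
# look like sustained activity, while moment-to-moment, single-trial power
# reveals discrete bursts.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, dur, n_trials = 1000, 2.0, 50               # sampling rate (Hz), seconds, trials
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

trials = []
for _ in range(n_trials):
    lfp = rng.normal(0, 1, t.size)               # background noise
    onset = rng.uniform(0.2, 1.6)                 # burst at a different moment each trial
    burst = (t > onset) & (t < onset + 0.15)
    lfp[burst] += 3 * np.sin(2 * np.pi * 60 * t[burst])   # 150-ms, 60-Hz burst
    trials.append(lfp)
trials = np.array(trials)

b, a = butter(4, [45 / (fs / 2), 100 / (fs / 2)], btype="band")    # gamma band
envelope = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1))  # gamma power over time

avg_power = envelope.mean(axis=0)                 # trial average: transients washed out
single_trial_peaks = envelope.max(axis=1)         # each trial: one clear transient burst
print("mean of trial-averaged gamma power:", round(avg_power.mean(), 2))
print("median single-trial peak gamma power:", round(np.median(single_trial_peaks), 2))
```

The single-trial peaks come out far larger than the trial-averaged power, which is the basic observation that motivates looking "from moment to moment" rather than averaging.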

Implications for other cognitive functions

The findings suggest that it would be worthwhile to look for this kind of cyclical activity in other cognitive functions such as attention, the researchers say. Oscillations like those seen in this study may help the brain to package information and keep it separate so that different pieces of information don’t interfere with each other.

Robert Knight, a professor of psychology and neuroscience at the University of California at Berkeley, says the new study “provides compelling evidence that nonlinear oscillatory dynamics underlie prefrontal dependent working memory capacity.”

“The work calls for a new view of the computational processes supporting goal-directed behavior,” adds Knight, who was not involved in the research. “The control processes supporting nonlinear dynamics are not understood, but this work provides a critical guidepost for future work aimed at understanding how the brain enables fluid cognition.”


editor’s comments: I’m curious how this relates to forgetting things to make space to learn new things. (Turns out the hippocampus works closely with the prefrontal cortex in working memory, as this open-access Nature paper explains.) Also, what’s the latest on how many things we can keep in working memory (it used to be around five)? Is that number limited by forgetting or by the capacity to differentiate different spike trains? Any tricks for keeping more things in working memory?


Abstract of Gamma and Beta Bursts Underlie Working Memory

Working memory is thought to result from sustained neuron spiking. However, computational models suggest complex dynamics with discrete oscillatory bursts. We analyzed local field potential (LFP) and spiking from the prefrontal cortex (PFC) of monkeys performing a working memory task. There were brief bursts of narrow-band gamma oscillations (45–100 Hz), varied in time and frequency, accompanying encoding and re-activation of sensory information. They appeared at a minority of recording sites associated with spiking reflecting the to-be-remembered items. Beta oscillations (20–35 Hz) also occurred in brief, variable bursts but reflected a default state interrupted by encoding and decoding. Only activity of neurons reflecting encoding/decoding correlated with changes in gamma burst rate. Thus, gamma bursts could gate access to, and prevent sensory interference with, working memory. This supports the hypothesis that working memory is manifested by discrete oscillatory dynamics and spiking, not sustained activity.

Gamma and Beta Bursts Underlie Working Memory

Mikael Lundqvist, Jonas Rose, Pawel Herman, Scott L. Brincat, Timothy J. Buschman, Earl K. Miller

Highlights
  • Working memory information in neuronal spiking is linked to brief gamma bursts
  • The narrow-band gamma bursts increase during encoding, decoding, and with WM load
  • Beta bursting reflects a default network state interrupted by gamma
  • The findings support a model of WM based on discrete dynamics, not sustained activity

 


Synaptic Amplifier

Gene discovery reveals mechanism behind how we think

By ELIZABETH COONEY   March 16, 2016
http://hms.harvard.edu/news/synaptic-amplifier

Skyler Jackman and colleagues studied the phenomenon known as synaptic facilitation by using light to turn neuronal connections on and off. The optogenetic protein used in this technique appears yellow. Image: Regehr lab

Our brains are marvels of connectivity, packed with cells that continually communicate with one another. This communication occurs across synapses, the transit points where chemicals called neurotransmitters leap from one neuron to another, allowing us to think, to learn and to remember.
Now Harvard Medical School researchers have discovered a gene that gives synapses a short-term boost by increasing neurotransmitter release, a phenomenon known as synaptic facilitation. And they did so by turning on a light or two.

The gene is synaptotagmin 7 (syt7 for short), which encodes a calcium sensor that dynamically increases neurotransmitter release; each boost strengthens communication between neurons for about a second. These swift releases are thought to be critical for the brain’s ability to perform computations involved in short-term memory, spatial navigation and sensory perception.

A team of researchers who made this discovery was led by Skyler Jackman, a postdoctoral researcher in the lab of Wade Regehr, professor of neurobiology at HMS. They recently reported their findings in Nature.

The calcium sensor synaptotagmin 7 is required for synaptic facilitation
Skyler L. Jackman, Josef Turecek, Justine E. Belinsky & Wade G. Regehr
Nature 529, 88–91 (07 January 2016)    doi:10.1038/nature16507
It has been known for more than 70 years that synaptic strength is dynamically regulated in a use-dependent manner [1]. At synapses with a low initial release probability, closely spaced presynaptic action potentials can result in facilitation, a short-term form of enhancement in which each subsequent action potential evokes greater neurotransmitter release [2]. Facilitation can enhance neurotransmitter release considerably and can profoundly influence information transfer across synapses [3], but the underlying mechanism remains a mystery. One proposed mechanism is that a specialized calcium sensor for facilitation transiently increases the probability of release [2, 4], and this sensor is distinct from the fast sensors that mediate rapid neurotransmitter release. Yet such a sensor has never been identified, and its very existence has been disputed [5, 6]. Here we show that synaptotagmin 7 (Syt7) is a calcium sensor that is required for facilitation at several central synapses. In Syt7-knockout mice, facilitation is eliminated even though the initial probability of release and the presynaptic residual calcium signals are unaltered. Expression of wild-type Syt7 in presynaptic neurons restored facilitation, whereas expression of a mutated Syt7 with a calcium-insensitive C2A domain did not. By revealing the role of Syt7 in synaptic facilitation, these results resolve a longstanding debate about a widespread form of short-term plasticity, and will enable future studies that may lead to a deeper understanding of the functional importance of facilitation.
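As a way to see how a slow calcium sensor could produce facilitation without changing the initial release probability, here is a minimal toy model in Python. It is not the paper's model: the residual-calcium dynamics, parameters, and the "knockout" switch are invented for illustration only.

```python
# Minimal toy model (illustrative, not the paper's): paired-pulse facilitation
# driven by a slow, Syt7-like calcium sensor that integrates residual calcium
# between closely spaced action potentials.
import math

def release_train(n_spikes=5, isi_ms=20.0, p0=0.1, boost=0.5,
                  ca_per_spike=1.0, ca_tau_ms=100.0, syt7=True):
    """Return the release probability for each spike in a train."""
    residual_ca = 0.0
    probs = []
    for _ in range(n_spikes):
        facilitation = boost * residual_ca if syt7 else 0.0    # slow-sensor term
        probs.append(min(p0 * (1 + facilitation), 1.0))
        residual_ca += ca_per_spike                             # calcium entry per spike
        residual_ca *= math.exp(-isi_ms / ca_tau_ms)            # slow decay before next spike
    return probs

wild_type = release_train(syt7=True)
knockout = release_train(syt7=False)      # "Syt7 removed": no facilitation term
print("wild type :", [round(p, 3) for p in wild_type])
print("knockout  :", [round(p, 3) for p in knockout])
# Both trains start at the same initial release probability, but only the
# wild-type train shows growing release across spikes (facilitation).
```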

“We really think one of the most important things the brain can do is change the strength of connections between neurons,” Jackman said. “Now that we have a tool to selectively turn off facilitation, we can test some long-held beliefs about its importance for thinking and working memory.”

Although synaptic facilitation was first described 70 years ago by Te-Pei Feng, known as the father of Chinese physiology, Jackman and colleagues were able to identify the mechanism behind synaptic strengthening by taking advantage of advanced laboratory techniques unavailable to previous generations of scientists.

A dozen years ago, Regehr suspected that syt7 might drive this synaptic strengthening process: the sensor it encodes responds slowly and then ramps up, which would fit a gradual buildup of neurotransmitter release.

About eight years ago scientists in another lab engineered “knockout” mice that lack the syt7 gene, setting the stage for experiments to test Regehr’s speculations. But when grown in a lab dish, neurons from these knockout mice behaved no differently from other neurons, a result that, at the time, dashed hopes that syt7 could explain the synaptic boost.

A year ago Jackman took another tack. He tested synaptic connections in brain tissue taken from the knockout mice but still having intact brain circuits, an experiment more reflective of how neurons and synapses might work in a living animal.

“It was striking. It was amazing,” Jackman said. “As soon as we probed these connections we saw there was a huge deficit, a complete lack of synaptic facilitation in the knockout mice, completely different from their wild-type brothers and sisters.”

To be certain that knocking out syt7 was responsible for this change, Jackman had to find a way to reinsert syt7 and restore its function. He did that by using optogenetics, a genetic manipulation tool that allows neuronal connections to be turned on and off with light. He augmented this technique with bicistronic expression, a method that packages one optogenetic protein and one syt7 protein into a single virus that infects all neurons equally. Using these two techniques, Jackman could selectively study what happened when syt7 was reinserted into a neuron and measure its effects reliably.

 

We need to forget things to make space to learn new things, scientists discover

A mouse study that, if confirmed in people, might help patients forget traumatic experiences
http://www.kurzweilai.net/we-need-to-forget-things-to-make-space-to-learn-new-things-scientists-discover

The three routes into the hippocampus seem to be linked to different aspects of learning: forming memories (green), recalling them (yellow) and forgetting them (red) (credit: John Wood)

While you’re reading this (and learning about this new study), your brain is actively trying to forget something.

We apologize, but that’s what scientists at the European Molecular Biology Laboratory (EMBL) and the University Pablo Olavide in Sevilla, Spain, found in a new study published Friday (March 18) in an open-access paper in Nature Communications.

“This is the first time that a pathway in the brain has been linked to forgetting — to actively erasing memories,” says Cornelius Gross, who led the work at EMBL.

Working with mice, Gross and colleagues studied the hippocampus, a region of the brain known to help form memories. Information enters this part of the brain through three different routes. As memories are formed, connections between neurons along the “main” route become stronger.

When they blocked this main route (dentate gyrus granule cells), the scientists found that the mice were no longer capable of learning (in this case, a specific Pavlovian response).* But surprisingly, blocking that main route  also resulted in its connections weakening, meaning the memory was actually being erased.

Limited space in the brain

Gross proposes one explanation: “There is limited space in the brain, so when you’re learning, you have to weaken some connections to make room for others,” he says.

Interestingly, this active push for forgetting only happens in learning situations. When the scientists blocked the main route into the hippocampus under other circumstances, the strength of its connections remained unaltered.

The findings were made using genetically engineered mice, but the scientists demonstrated that it is possible to produce a drug that activates this “forgetting” route in the brain without the need for genetic engineering. This approach, they say, might help people forget traumatic experiences.

* But if the mice had learned that association before the scientists stopped information flow in that main route, they could still retrieve that memory. This confirmed that this route is involved in forming memories, but isn’t essential for recalling those memories. The latter probably involves the second route into the hippocampus, the scientists surmise.


Abstract of Rapid erasure of hippocampal memory following inhibition of dentate gyrus granule cells

The hippocampus is critical for the acquisition and retrieval of episodic and contextual memories. Lesions of the dentate gyrus, a principal input of the hippocampus, block memory acquisition, but it remains unclear whether this region also plays a role in memory retrieval. Here we combine cell-type specific neural inhibition with electrophysiological measurements of learning-associated plasticity in behaving mice to demonstrate that dentate gyrus granule cells are not required for memory retrieval, but instead have an unexpected role in memory maintenance. Furthermore, we demonstrate the translational potential of our findings by showing that pharmacological activation of an endogenous inhibitory receptor expressed selectively in dentate gyrus granule cells can induce a rapid loss of hippocampal memory. These findings open a new avenue for the targeted erasure of episodic and contextual memories.

 

Rapid erasure of hippocampal memory following inhibition of dentate gyrus granule cells

Noelia Madroñal, José M. Delgado-García, Azahara Fernández-Guizán, Jayanta Chatterjee, Maja Köhn, Camilla Mattucci, et al.

Nature Communications 7, Article number: 10923    http://www.nature.com/ncomms/2016/160318/ncomms10923/full/ncomms10923.html

The hippocampus is an evolutionarily ancient part of the cortex that makes reciprocal excitatory connections with neocortical association areas and is critical for the acquisition and retrieval of episodic and contextual memories. The hippocampus has been the subject of extensive investigation over the last 50 years as the site of plasticity thought to be critical for memory encoding. Models of hippocampal function propose that sensory information reaching the hippocampus from the entorhinal cortex via dentate gyrus (DG) granule cells is encoded in CA3 auto-association circuits and can in turn be retrieved via Schaffer collateral (SC) projections linking CA3 and CA1 (refs 1, 2, 3, 4; Fig. 1a). Learning-associated plasticity in CA3–CA3 auto-associative networks encodes the memory trace, and plasticity in SC connections is necessary for the efficient retrieval of this trace (refs 2, 5, 6, 7, 8, 9, 10). In addition, both CA3 and CA1 regions receive direct, monosynaptic inputs from entorhinal cortex that are thought to convey information about ongoing sensory inputs that could modulate CA3 memory trace acquisition and/or retrieval via SC (refs 11, 12, 13; Fig. 1a). In DG granule cells, sensory information is thought to undergo pattern separation into orthogonal cell ensembles before encoding (or reactivating, in the case of retrieval) memories in CA3 (ref. 14). However, how the hippocampus executes both the acquisition and recall of memories stored in CA3 remains a question of debate, with some models attributing a role for DG inputs in memory acquisition, but not retrieval (refs 2, 15, 16, 17).

Figure 1: Rapid and selective inhibition of DG neurotransmission in vivo.

(a) The hippocampal tri-synaptic circuit receives PP inputs from entorhinal cortex to DG, CA3 and CA1. (b) A stimulating electrode was implanted in the PP and a recording electrode in the CA3 pyramidal layer. (c) Strength of CA3 pyramidal layer fEPSPs evoked in anaesthetized mice by electrical stimulation of PP inputs showed fast and slow latency population spike components corresponding to direct PP-CA3 and indirect PP–DG-CA3 inputs, respectively. Systemic administration of the selective Htr1a agonist, 8-OH-DPAT (0.3 mg kg−1, subcutaneous), to Htr1aDG (Tg) mice caused a rapid and selective decrease in the long-latency component that persisted for several hours. Quantification indicated a significant decrease in DG neurotransmission following agonist treatment of Htr1aDG, but not in Htr1aKO (KO) littermates or vehicle-treated wild-type mice; the decrease reached 80% suppression and persisted for >2 h (mean ± s.e.m.; n = 10; *P < 0.05; two-way analysis of variance followed by Holm–Sidak post hoc test). (d) Representative fEPSPs evoked at the CA3 pyramidal layer after stimulation of PP inputs before and after agonist treatment. The fast and the slow latency population spike components are indicated (black arrow, short; grey arrow, long).
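The legend above reports a two-way ANOVA followed by a Holm–Sidak post hoc test on the fEPSP data. Below is a rough Python sketch of how such an analysis might be set up with pandas and statsmodels; the file name, column names, and the pairwise t-test post hoc step are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch of the kind of analysis the figure legend describes
# (two-way ANOVA followed by Holm-Sidak-corrected comparisons).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from scipy.stats import ttest_ind

df = pd.read_csv("fepsp_suppression.csv")   # hypothetical columns: genotype, timepoint, fepsp_pct

# Two-way ANOVA: genotype (Tg / KO / WT-vehicle) x time after agonist
model = smf.ols("fepsp_pct ~ C(genotype) * C(timepoint)", data=df).fit()
print(anova_lm(model, typ=2))

# Simplified post hoc: pairwise t-tests per timepoint, Holm-Sidak corrected
pvals, labels = [], []
for tp, sub in df.groupby("timepoint"):
    groups = {g: s["fepsp_pct"].values for g, s in sub.groupby("genotype")}
    names = sorted(groups)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            pvals.append(ttest_ind(groups[names[i]], groups[names[j]]).pvalue)
            labels.append(f"{tp}: {names[i]} vs {names[j]}")
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
for lab, p, r in zip(labels, p_adj, reject):
    print(lab, round(p, 4), "significant" if r else "n.s.")
```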

 

Figure 2: Inhibition of DG induces rapid and persistent loss of hippocampal memory and plasticity.

Figure 4: Loss of plasticity depends on entorhinal cortex inputs and local adenosine signalling.

In the present study we examined the contribution of DG granule cells to learning and recall and its associated synaptic plasticity in animals that had previously acquired a hippocampal memory. We found that transient pharmacogenetic inhibition of DG granule cells did not impair conditioned responding to CS presentation nor alter SC synaptic plasticity demonstrating that DG is not required for memory recall (Fig. 3c,d). However, when DG inhibition occurred during paired presentation of CS and US, we observed a rapid loss of SC synaptic plasticity and conditioned responding to CS (Fig. 2d,e and Supplementary Fig. 3). Strikingly, the synaptic plasticity and behavioural impairment persisted in the absence of further stimulus presentation and later relearning occurred at a rate indistinguishable from initial learning, suggesting a loss of the memory trace (Fig. 2f,g).

One possible explanation for the memory loss seen on DG inhibition is that presentation of paired CS–US has a dual effect on CA1 plasticity, on the one hand strengthening SC synapses via a DG-dependent mechanism (indirect inputs to CA1 via the tri-synaptic circuit) and on the other hand weakening SC synapses in a non-DG-dependent manner (direct PP-CA1 inputs). This explanation is consistent with several studies in the literature reporting mechanistic and functional differences between the direct and the indirect inputs to CA1 (refs 12, 13, 30, 31, 32). Furthermore, earlier in vitro (refs 12, 23) and in vivo (ref. 33) electrophysiology studies found that stimulation of PP-CA1 inputs to the hippocampus could depotentiate synaptic plasticity that had been previously acquired at SC synapses, suggesting that the direct PP pathway might promote depotentiation during hippocampal learning. To test this possibility, we used dual, orthogonal pharmacogenetic inhibition of DG and entorhinal cortex to show that the memory loss phenomenon we observed depended on PP inputs (Fig. 4e). Furthermore, one of the earlier studies (ref. 23) had shown that PP stimulation-induced SC depotentiation could be inhibited by blockade of adenosine A1 receptors, but not several other receptors, and we found that bilateral administration of DPCPX to the CA1 region of the hippocampus blocked synaptic depotentiation in our model (Fig. 4g).

Our data lead us to propose a novel function for PP-CA1 inputs to the hippocampus. During CS–US presentation, but not during presentation of unpaired CS–US or CS alone, information arriving via this pathway actively promotes depotentiation of SC synapses, while information arriving via the DG pathway opposes this depotentiation. Thus, in an animal that has successfully acquired a hippocampal-dependent memory, and in which the direct and indirect pathways are intact, SC synaptic strength is stable and memories can be retrieved. However, when the DG pathway is blocked, as we have done artificially in our study, depotentiation is favoured and memory is lost (see scheme, Fig. 6). The precise function of PP-dependent SC depotentiation remains unclear at this point, but we speculate that it may play a role in weakening previously acquired associations to facilitate the encoding of new memories. Existing data show that selective blockade of synaptic activity in entorhinal cortex neurons projecting to CA1 impairs the acquisition of trace fear conditioning (ref. 34) and support our hypothesis of a positive role for this pathway in learning (refs 13, 30, 32, 33). Moreover, our DPCPX experiments suggest that blockade of the depotentiation mechanism promotes SC synaptic plasticity during CS–US presentation in otherwise intact animals (Fig. 4g). However, further loss and gain-of-function manipulations of this pathway coupled with in vivo electrophysiology and learning behaviour are needed to directly test a role of PP-CA1 inputs in memory clearing.

Figure 6: Model for function of PP-CA1 inputs to the hippocampus.


Area CA1 of the hippocampus receives information directly from the entorhinal cortex (direct PP-CA1 pathway) and also indirectly via the tri-synaptic circuit. (a) Presentation of paired CS–US promotes potentiation of SC synapses (+) via the indirect pathway and depotentiation of SC synapses (–) via the PP-CA1 pathway. In an animal having successfully undergone learning, potentiation and depotentiation are balanced, SC synaptic strength is stable and memories can be retrieved. (b) Inhibition of DG during CS–US presentation suppresses potentiation via the indirect pathway, unmasking depotentiation of SC synapses and promoting memory loss.

Our finding that DG granule cells are not required for retrieval of hippocampal memory is consistent with previous data arguing that retrieval of associative information encoded in CA3–CA3 and SC plasticity is achieved via direct PP projections to CA3 (refs 1, 2, 3, 4, 35, 36, 37, 38). However, our data appear to contradict at least one recent study demonstrating a role for DG granule cells in retrieval during contextual fear conditioning (ref. 39). We believe this discrepancy is due to a requirement for DG granule cells in the processing of the contextual CS (ref. 40). However, to rule out the possibility that other methodological differences between the studies underlie the discrepancy, it would be important to determine whether the cell-type specific optogenetic inhibition method used in their study left intact the recall of hippocampal-dependent memories for discrete cues.

Our study raises several questions. First, while we show SC depotentiation is adenosine receptor dependent, the location of adenosine signalling is not clear. Adenosine A1 receptors are expressed highly in CA3 pyramidal cells as well as more modestly in CA1 (ref. 28), and a study in which this receptor was selectively knocked out in one or the other of these structures demonstrated a role for presynaptic CA3, but not postsynaptic CA1, receptors in dampening SC neurotransmission (ref. 41), suggesting a presynaptic mechanism for our effect. The source of adenosine, on the other hand, could involve pre- and/or postsynaptic release as well as release from non-neuronal cells such as astrocytes (refs 27, 42). Second, although our DPCPX experiment pointed to a role for PP-CA1 projections in SC depotentiation, our entorhinal cortex pharmacogenetic inhibition experiment did not allow us to distinguish between contributions of PP-CA1 and PP-CA3 inputs. Although we cannot rule out a contribution of PP-CA3 projections to SC depotentiation, earlier in vitro and in vivo electrophysiology studies clearly demonstrate a role for PP-CA1 in SC depotentiation (refs 12, 22, 33). Third, the method we used to assess SC postsynaptic strength, namely electrical stimulation evoked field potentials, does not allow us to rule out that changes in synaptic plasticity at non-SC inputs underlie our plasticity effects. Experiments using targeted optogenetic stimulation of CA3 efferents could be used to more selectively measure SC synaptic strength. Fourth, our observation that SC depotentiation and memory loss occurred only during paired, but not unpaired, CS–US presentation (Fig. 2d,e) suggests that the memory loss phenomenon we describe is distinct from other well-described avenues for memory degradation, including enhancement of extinction (ref. 43) and blockade of reconsolidation (ref. 44). Finally, our findings demonstrating generalization of DG inhibition-induced memory loss across tasks, coupled with our identification of an endogenous pharmacological target that can induce similar memory loss, raise the possibility that the novel memory mechanism we have uncovered may be useful for erasing unwanted memories in a clinical setting.




Brain-motion mobility enhancement

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

 

 

 

Implantable ‘stentrode’ to allow paralyzed patients to control an exoskeleton with their mind

UC Berkeley spinoff also announces lighter, lower-cost Phoenix exoskeleton
February 10, 2016

The “stentrode,” created by the University of Melbourne’s Vascular Bionics Laboratory, adapts an off-the-shelf self-expanding stent to include a recording electrode array. The device is delivered to the brain through blood vessels in the neck, thus avoiding many of the risks associated with traditional placement of neural implants through open-brain surgery. (credit: University of Melbourne)

A DARPA-funded research team has created a novel minimally invasive brain-machine interface and recording device that can be implanted into the brain through blood vessels, reducing the need for invasive surgery and the risks associated with breaching the blood-brain barrier when treating patients for physical disabilities and neurological disorders.

 


 

The new technology, developed by University of Melbourne medical researchers under DARPA’s Reliable Neural-Interface Technology (RE-NET) program, promises to give people with spinal cord injuries new hope to walk again.

The brain-machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.

The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017.

The research results, published Monday Feb. 8 in Nature Biotechnology, show the device is capable of recording high-quality signals emitted from the brain’s motor cortex without the need for open brain surgery.

“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery,” said Thomas Oxley, principal author and neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne.

Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, with the typical patient a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.

 

https://youtu.be/hB3H3wHwO24

The University of Melbourne | Stentrode in action

 

Stentrode with 8 × 750 micrometer electrode discs (yellow arrow) self-expanding during deployment from catheter (green arrow). Scale bar, 3 mm. (credit: Thomas J. Oxley et al./Nature Biotechnology)

 

“The electrode array self-expands to stick to the inside wall of a vein, enabling the researchers to record local brain activity. By extracting the recorded neural signals, we can use these as commands to control wheelchairs, exoskeletons, prosthetic limbs or computers. In our first-in-human trial, which we anticipate will begin within two years, we are hoping to achieve direct brain control of an exoskeleton for three people with paralysis,” he said.

Thought control

“Currently, exoskeletons are controlled by manual manipulation of a joystick to switch between the various elements of walking — stand, start, stop, turn. The stentrode will be the first device that enables direct thought control of these devices.”
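To illustrate the general idea of turning recorded neural signals into discrete exoskeleton commands, here is a deliberately simplified, hypothetical Python sketch. It is not the stentrode team's decoder: the frequency bands, thresholds, and command set are invented, and a real system would involve calibration, artifact rejection, and far more robust classification.

```python
# Highly simplified, hypothetical sketch of "thought control" of an exoskeleton:
# map band-limited power from a motor-cortex recording to discrete commands.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def decode_command(window, fs=1000):
    """Map a 1-second window of recorded signal to one exoskeleton command."""
    beta = band_power(window, fs, 15, 30)     # beta: movement-related desynchronization
    gamma = band_power(window, fs, 60, 90)    # gamma: movement-related activation
    if gamma > 2 * beta:                      # invented thresholds for illustration
        return "start_walking"
    if beta > 2 * gamma:
        return "stop"
    return "stand"

rng = np.random.default_rng(1)
window = rng.normal(0, 1, 1000)               # stand-in for one second of recording
print(decode_command(window))
```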

Professor Clive May, neurophysiologist at The Florey, said the data from the pre-clinical study highlighted that the implantation of the device was safe for long-term use. “Our study also showed that it was safe and effective to implant the device via angiography, which is minimally invasive compared with the high risks associated with open-brain surgery.”

The authors note that “avoiding direct contact with cortical neurons may mitigate brain trauma and chronic local inflammation,” subject to additional evaluation.

 

https://youtu.be/kYbPb4XtAVI

The University of Melbourne | Stentrode: Moving with the power of thought

 

In addition to DARPA, the research was supported by Australia’s National Health and Medical Research Council, the U.S. Office of Naval Research Global, The Australian Defence Health Foundation, The Brain Foundation, and The Royal Melbourne Hospital Neuroscience Foundation.

Lighter, more agile exoskeleton helps the paralyzed to walk

http://www.kurzweilai.net/images/SuitXPhoenix-exoskeleton.jpg

Steven Sanchez, who was paralyzed from the waist down after a BMX accident, wears SuitX’s light, more agile Phoenix exoskeleton. (credit: SuitX)

 

Meanwhile, in related research (also based on initial funding from DARPA), SuitX, a spinoff of UC Berkeley’s Robotics and Human Engineering Laboratory, introduced last week the Phoenix — a new lighter, more agile and lower-cost manually controlled exoskeleton.

The Phoenix is lightweight and has two motors at the hips and electrically controlled tension settings that tighten when the wearer is standing and swing freely when they’re walking. Users can control the movement of each leg and walk up to 1.1 miles per hour by pushing buttons integrated into a pair of crutches. It’s powered for up to eight hours by a battery pack worn in a backpack.

Developed from the Berkeley Lower Extremity Exoskeleton (BLEEX), the Phoenix is one of the lightest and most accessible exoskeletons available, according to SuitX. It can be adjusted to fit varied weights, heights, and leg sizes and can be used for a range of mobility hindrances. At $40,000, it’s about half the cost of other exoskeletons that help restore mobility.

 

Abstract of Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity

High-fidelity intracranial electrode arrays for recording and stimulating brain activity have facilitated major advances in the treatment of neurological conditions over the past decade. Traditional arrays require direct implantation into the brain via open craniotomy, which can lead to inflammatory tissue responses, necessitating development of minimally invasive approaches that avoid brain trauma. Here we demonstrate the feasibility of chronically recording brain activity from within a vein using a passive stent-electrode recording array (stentrode). We achieved implantation into a superficial cortical vein overlying the motor cortex via catheter angiography and demonstrate neural recordings in freely moving sheep for up to 190 d. Spectral content and bandwidth of vascular electrocorticography were comparable to those of recordings from epidural surface arrays. Venous internal lumen patency was maintained for the duration of implantation. Stentrodes may have wide ranging applications as a neural interface for treatment of a range of neurological conditions.

 

 

 



Physical activity enhances learning

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Can physical activity make you learn better?

Apparently so — at least for speed of recovery of vision after an eye-patch test; may offer hope for people with traumatic brain injury or eye conditions such as amblyopia
An artistic representation of the take-home message of Lunghi and Sale’s “A cycling lane for brain rewiring”: physical activity (such as cycling) is associated with increased brain plasticity. (credit: Dafne Lunghi Art)

Exercise may enhance plasticity of the adult brain — the ability of our neurons to change with experience — which is essential for learning, memory, and brain repair, Italian researchers report in an open-access paper in the Cell Press journal Current Biology.

Their research, which focused on the visual cortex, may offer hope for people with traumatic brain injury or eye conditions such as amblyopia, the researchers suggest. “We provide the first demonstration that moderate levels of physical activity enhance neuroplasticity in the visual cortex of adult humans,” says Claudia Lunghi of the University of Pisa in Italy.

Brain plasticity is generally thought to decline with age, especially in the sensory regions of the brain (such as vision). But previous studies by research colleague Alessandro Sale of the National Research Council’s Neuroscience Institute found that animals performing physical activity — for example, rats running on a wheel — showed elevated levels of plasticity in the visual cortex and had improved recovery from amblyopia compared to more sedentary animals.

Binocular rivalry test

 

http://www.kurzweilai.net/images/binocular-rivaltry-test.jpg

Binocular rivalry before and after “monocular deprivation” (reduced vision due to a patch) for inactive and active groups (credit: Claudia Lunghi and Alessandro Sale/Current Biology)

 

To find out whether the same might hold true for people, the researchers used a simple test of binocular rivalry. When people have one eye patched for a short period of time, the closed eye becomes stronger as the visual brain attempts to compensate for the lack of visual input. This recovered strength (after the eye patch is removed) is a measure of the brain’s visual plasticity.

In the new study, Lunghi and Sale put 20 adults through this test twice. In one test, participants with the dominant eye patched with a translucent material watched a movie while relaxing in a chair. In the other test, participants with one eye patched also watched a movie, but while exercising on a stationary bike for ten-minute intervals during the movie.

Exercise enhances brain plasticity (at least for vision)

Result: brain plasticity in the patched eye was enhanced by the exercise. After physical activity, the patched eye was strengthened more quickly (indicating increased levels of brain plasticity) than it was in the couch-potato condition.
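One simple way to quantify the effect described above is an ocular dominance index computed from binocular-rivalry phase durations before and after deprivation. The Python sketch below is illustrative only: the numbers are invented and the index definition is a common convention, not necessarily the exact measure used in the study.

```python
# Hypothetical sketch of quantifying the binocular-rivalry effect: an ocular
# dominance index from mean dominance phase durations for the deprived and
# non-deprived eye, before and after patching. All numbers are invented.

def dominance_index(deprived_phases, other_phases):
    """Fraction of rivalry time dominated by the (previously) deprived eye."""
    deprived = sum(deprived_phases)
    other = sum(other_phases)
    return deprived / (deprived + other)

# Mean phase durations in seconds (illustrative values, not study data)
before = dominance_index(deprived_phases=[2.1, 1.9, 2.3], other_phases=[2.0, 2.2, 2.1])
after_rest = dominance_index(deprived_phases=[2.9, 3.1, 2.8], other_phases=[1.8, 1.7, 1.9])
after_exercise = dominance_index(deprived_phases=[3.6, 3.4, 3.8], other_phases=[1.5, 1.6, 1.4])

print(f"before deprivation:      {before:.2f}")
print(f"after patch (at rest):   {after_rest:.2f}")       # deprived eye strengthened
print(f"after patch (exercise):  {after_exercise:.2f}")   # larger shift = more plasticity
```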

While further study is needed, the researchers think this stronger vision may have resulted from a decrease in an inhibitory neurotransmitter called GABA caused by exercise, allowing the brain to become more responsive.

The findings suggest that exercise may play an important role in brain health and recovery. This could be especially good news for people with amblyopia (called “lazy eye” because the brain “turns off” the visual processing of the weak eye to prevent double vision) — generally considered to be untreatable in adults.

Lunghi and Sale say they now plan to investigate the effects of moderate levels of physical exercise on visual function in amblyopic adult patients and to look deeper into the underlying neural mechanisms.

Time for a walk or bike ride?

UPDATE Dec. 10, 2015: title wording changed from “smarter” to “learn better.”


Abstract of A cycling lane for brain rewiring

Brain plasticity, defined as the capability of cerebral neurons to change in response to experience, is fundamental for behavioral adaptability, learning, memory, functional development, and neural repair. The visual cortex is a widely used model for studying neuroplasticity and the underlying mechanisms. Plasticity is maximal in early development, within the so-called critical period, while its levels abruptly decline in adulthood. Recent studies, however, have revealed a significant residual plastic potential of the adult visual cortex by showing that, in adult humans, short-term monocular deprivation alters ocular dominance by homeostatically boosting responses to the deprived eye. In animal models, a reopening of critical period plasticity in the adult primary visual cortex has been obtained by a variety of environmental manipulations, such as dark exposure, or environmental enrichment, together with its critical component of enhanced physical exercise. Among these non-invasive procedures, physical exercise emerges as particularly interesting for its potential of application to clinics, though there has been a lack of experimental evidence available that physical exercise actually promotes visual plasticity in humans. Here we report that short-term homeostatic plasticity of the adult human visual cortex induced by transient monocular deprivation is potently boosted by moderate levels of voluntary physical activity. These findings could have a bearing in orienting future research in the field of physical activity application to clinical research.

 



The Neurogenetics of Language – Patricia Kuhl

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Innovation

Series E. 2; 5.7

 

2015 George A. Miller Award

In neuroimaging studies using structural (diffusion weighted magnetic resonance imaging or DW-MRI) and functional (magnetoencephalography or MEG) imaging, my laboratory has produced data on the neural connectivity that underlies language processing, as well as electrophysiological measures of language functioning during various levels of language processing (e.g., phonemic, lexical, or sentential). Taken early in development, electrophysiological measures or “biomarkers” have been shown to predict future language performance in neurotypical children as well as children with autism spectrum disorders (ASD). Work in my laboratory is now combining these neuroimaging approaches with genetic sequencing, allowing us to understand the genetic contributions to language learning.

http://www.ted.com/talks/patricia_kuhl_the_linguistic_genius_of_babies?language=en

http://www.youtube.com/watch?v=G2XBIkHW954

http://www.youtube.com/watch?v=M-ymanHajN8

Patricia Kuhl shares astonishing findings about how babies learn one language over another: by listening to the humans around them.

Kuhl Constructs: How Babies Form Foundations for Language

MAY 3, 2013

by Sarah Andrews Roehrich, M.S., CCC-SLP

Years ago, I was captivated by an adorable baby on the front cover of a book, The Scientist in the Crib: What Early Learning Tells Us About the Mind, written by a trio of research scientists: Alison Gopnik, PhD, Andrew Meltzoff, PhD, and Patricia Kuhl, PhD.

At the time, I was simply interested in how babies learn about their worlds, how they conduct experiments, and how this learning could impact early brain development.  I did not realize the extent to which interactions with family, caretakers, society, and culture could shape the direction of a young child’s future.

Now, as a speech-language pathologist in Early Intervention in Massachusetts, more cognizant of the myriad of factors that shape a child's cognitive, social-emotional, language, and literacy development, I have been absolutely delighted to discover more of the work of Dr. Kuhl, a distinguished speech and hearing scientist at The University of Washington.  So, last spring, when I read that Dr. Kuhl was going to present "Babies' Language Skills" as one part of a 2-part seminar series sponsored by the Mind, Brain, and Behavior Annual Distinguished Lecture Series at Harvard University1, I was thrilled to have the opportunity to attend. Below are some highlights from that experience and the questions it has since sparked for me:

Lip ‘Reading’ Babies
In "Bimodal Perception of Speech in Infancy" (Science, 1982), cited in Paula Bock's 2005 Seattle Times article "Infant Science: How Do Babies Learn to Talk?", Drs. Patricia Kuhl and Andrew Meltzoff showed that babies as young as 18 weeks of age could listen to "Ah ah ah" or "Ee ee ee" vowel sounds and gaze at the correct, corresponding lip shape on a video monitor.
This image from Kuhl’s 2011 TED talk shows how a baby is trained to turn his head in response to a change in such vowel sounds, and is immediately rewarded by watching a black box light up while a panda bear inside pounds a drum.  Images provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Who is Dr. Patricia Kuhl and how has her work re-shaped our knowledge about how babies learn language?

Dr. Kuhl, who is co-director of the Institute for Learning and Brain Sciences at The University of Washington, has been internationally recognized for her research on early language and brain development, and for her studies on how young children learn.  In her most recent research experiments, she’s been using magnetoencephalography (MEG)–a relatively new neuroscience technology that measures magnetic fields generated by the activity of brain cells–to investigate how, where, and with what frequency babies from around the world process speech sounds in the brain when they are listening to adults speak in their native and non-native languages.

A 6-month-old baby sits in a magnetoencephalography machine, which maps brain activity, while listening to various languages in earphones and playing with a toy. Image originally printed in “Brain Mechanisms in Early Language Acquisition” (Neuron review, Cell Press, 2010) and provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Not only does Kuhl’s research point us in the direction of how babies learn to process phonemes, the sound units upon which many languages are built, but it is part of a larger body of studies looking at infants across languages and cultures that has revolutionized our understanding of language development over the last half of the 20th century—leading to, as Kuhl puts it, “a new view of language acquisition, that accounts for both the initial state of linguistic knowledge in infants, and infants’ extraordinary ability to learn simply by listening to their native language.”2

What is neuroplasticity and how does it underlie child development?

Babies are born with 100 billion neurons, about the same as the number of stars in the Milky Way.3 In The Whole-Brain Child, Daniel Siegel, MD, and Tina Payne Bryson, PhD, explain that when we undergo an experience, these brain cells respond through changes in patterns of electrical activity—in other words, they "fire" electrical signals called "action potentials."4

In a child’s first years of life, the brain exhibits extraordinary neuroplasticity, refining its circuits in response to environmental experiences. Synapses—the sites of communication between neurons—are built, strengthened, weakened and pruned away as needed. Two short videos from the Center on the Developing Child at Harvard, “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry”, nicely depict how some of this early brain development happens.5

Since brain circuits organize and reorganize themselves in response to an infant’s interactions with his or her environment, exposing babies to a variety of positive experiences (such as talking, cuddling, reading, singing, and playing in different environments) not only helps tune babies in to the language of their culture, but it also builds a foundation for developing the attention, cognition, memory, social-emotional, language and literacy, and sensory and motor skills that will help them reach their potential later on.
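To make the "wiring and pruning" idea above more concrete, here is a minimal, purely illustrative simulation (not a model from Siegel and Bryson or from Kuhl's lab; all parameter values are invented): synapses that are exercised often by experience gain strength, synapses that are rarely used decay, and connections that fall below a threshold are pruned away.

```python
import random

def simulate_plasticity(n_synapses=10, n_experiences=200,
                        strengthen=0.05, decay=0.01, prune_threshold=0.05):
    """Toy Hebbian-style model: repeated use strengthens a synapse,
    disuse lets it decay, and very weak synapses are pruned.
    All numbers here are invented for illustration only."""
    weights = [0.5] * n_synapses  # every synapse starts at a middling strength
    # Hypothetical 'environment': half the synapses are exercised often, half rarely
    use_probability = [0.9 if i < n_synapses // 2 else 0.1 for i in range(n_synapses)]

    for _ in range(n_experiences):
        for i in range(n_synapses):
            if weights[i] == 0.0:   # already pruned away
                continue
            if random.random() < use_probability[i]:
                weights[i] = min(1.0, weights[i] + strengthen)  # used together -> strengthened
            else:
                weights[i] = max(0.0, weights[i] - decay)       # unused -> weakened
            if weights[i] < prune_threshold:
                weights[i] = 0.0    # pruned
    return weights

if __name__ == "__main__":
    final = simulate_plasticity()
    print("Final synaptic weights:", [round(w, 2) for w in final])
```

Running this sketch typically leaves the frequently used synapses near full strength and the rarely used ones pruned to zero, which is the intuition behind "experiences build brain architecture."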

When and how do babies become “language-bound” listeners?

In her 2011 TED talk, "The Linguistic Genius of Babies," Dr. Kuhl discusses how babies under 8 months of age from different cultures can detect sounds in any language from around the world, but adults cannot do this.6 So when exactly do babies go from being "citizens of the world," as Kuhl puts it, to becoming "language-bound" listeners, specifically focused on the language of their culture?

Between 8 and 10 months of age, when babies are trying to master the sounds used in their native language, they enter a critical period for sound development.1 Kuhl explains that in one set of experiments, she compared a group of babies in America learning to differentiate the sounds "/Ra/" and "/La/" with a group of babies in Japan. Between 6 and 8 months, the babies in both cultures discriminated these sounds equally well. However, by 10-12 months, after multiple training sessions, the babies in Seattle, Washington, were much better at detecting the "/Ra/-/La/" shift than were the Japanese babies.

Kuhl explains these results by suggesting that babies "take statistics" on how frequently they hear sounds in their native and non-native languages. Because the "/Ra/"-"/La/" contrast occurs frequently in English but rarely in Japanese, the American babies accumulated far more evidence for the distinction from their native language than the Japanese babies did. Kuhl believes that the results of this study indicate a shift in brain development during which babies from each culture are preparing for their own languages and becoming "language-bound" listeners.
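Kuhl's notion of babies "taking statistics" can be illustrated with a toy frequency count. The sketch below is a hypothetical, back-of-the-envelope illustration, not her analysis; the syllable inventories and their probabilities are invented for demonstration only. The idea is simply that an English-like input stream contains both /ra/ and /la/ often enough to support keeping the contrast, while a Japanese-like stream does not.

```python
from collections import Counter
import random

def sample_ambient_speech(language, n_syllables=1000):
    """Hypothetical ambient input: made-up syllable frequencies, for illustration only."""
    if language == "English-like":
        inventory = {"ra": 0.08, "la": 0.08, "ba": 0.28, "ma": 0.28, "ka": 0.28}
    else:  # "Japanese-like": the /r/-/l/ contrast is not reinforced by the input
        inventory = {"ra": 0.02, "la": 0.00, "ba": 0.33, "ma": 0.33, "ka": 0.32}
    syllables = list(inventory)
    weights = [inventory[s] for s in syllables]
    return random.choices(syllables, weights=weights, k=n_syllables)

def contrast_support(stream):
    """Evidence for keeping /ra/ and /la/ as distinct categories:
    a contrast is only useful if both of its members actually occur."""
    counts = Counter(stream)
    return min(counts["ra"], counts["la"])

for lang in ("English-like", "Japanese-like"):
    stream = sample_ambient_speech(lang)
    print(lang, "support for /ra/-/la/ contrast:", contrast_support(stream))
```

In this toy model the "support" score is just the smaller of the two category counts; a real distributional-learning account would of course involve far richer acoustic and statistical detail.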

In what ways are nurturing interactions with caregivers more valuable to babies’ early language development than interfacing with technology?

If parents, caretakers, and other children can help mold babies’ language development simply by talking to them, it is tempting to ask whether young babies can learn language by listening to the radio, watching television, or playing on their parents’ mobile devices. I mean, what could be more engaging than the brightly-colored screens of the latest and greatest smart phones, iPads, iPods, and computers? They’re perfect for entertaining babies.  In fact, some babies and toddlers can operate their parents’ devices before even having learned how to talk.

However, based on her research, Kuhl states that young babies cannot learn language from television and that lots of face-to-face interaction is necessary for babies to learn how to talk.1 In one interesting study, Kuhl's team exposed 9-month-old American babies to Mandarin in various forms–in-person interactions with native Mandarin speakers vs. audiovisual or audio recordings of these speakers–and then looked at the impact of this exposure on the babies' ability to make Mandarin phonetic contrasts (not found in English) at 10-12 months of age. Strikingly, twelve laboratory visits featuring in-person interactions with the native Mandarin speakers were sufficient to teach the American babies to distinguish the Mandarin sounds as well as Taiwanese babies of the same age. However, the same number of lab visits featuring the audiovisual or audio recordings made no impact: American babies exposed to Mandarin through these technologies performed the same as a control group of American babies exposed to native English speakers during their lab visits.

This diagram depicts the results of a Kuhl study on American infants exposed to Mandarin in various forms–in-person interactions with native speakers versus television or audio recordings of these speakers. As the top blue triangle shows, the American infants exposed in person to native Mandarin speakers performed just as well on a Mandarin phoneme-distinction task as their age-matched Taiwanese counterparts. However, the American infants exposed to television or audio recordings of the Mandarin speakers performed the same as a control group of American babies exposed to native English speakers during their lab visits. Diagram displayed in Kuhl's TED Talk6, provided courtesy of Dr. Patricia Kuhl's Lab at the University of Washington.

Kuhl believes that this is primarily because a baby's interactions with others engage the social brain, a critical element in helping children learn to communicate in their native and non-native languages.6 In other words, learning language is not simply a technical skill that can be acquired by listening to a recording or watching a show on a screen. Instead, it is a special gift that is handed down from one generation to the next.

Language is learned through talking, singing, storytelling, reading, and many other nurturing experiences shared between caretaker and child.  Babies are naturally curious; they watch every movement and listen to every sound they hear around them.  When parents talk, babies look up and watch their mouth movements with intense wonder.  Parents respond in turn, speaking in “motherese,” a special variant of language designed to bathe babies in the sound patterns and speech sounds of their native language. Motherese helps babies hear the “edges” of sound, the very thing that is difficult for babies who exhibit symptoms of dyslexia and auditory processing issues later on.

Over time, by listening to and engaging with the speakers around them, babies build sound maps that set the stage for them to say words and learn to read later on. In fact, based on years of research, Kuhl has found that a baby's ability to discriminate phonemes at 7 months of age predicts that child's reading skills at age five.7

I believe that educating families about brain development, nurturing interactions, and the benefits and limits of technology is absolutely critical to helping families focus on what is most important in developing their children’s communication skills.  I also believe that Kuhl’s work is invaluable in this regard.  Not only has it focused my attention on how babies form foundations for language, but it has illuminated my understanding of how caretaker-child interactions help set the stage for babies to become language-bound learners.

Sources

(1) Kuhl, P. (April 3, 2012.) Talk on “Babies’ Language Skills.” Mind, Brain, and Behavior Annual Distinguished Lecture Series, Harvard University.

(2) Kuhl, P. (2000). “A New View of Language Acquisition.” This paper was presented at the National Academy of Sciences colloquium “Auditory Neuroscience: Development, Transduction, and Integration,” held May 19–21, 2000, at the Arnold and Mabel Beckman Center in Irvine, CA. Published by the National Academy of Sciences.

(3) Bock, P. (2005.)  “The Baby Brain.  Infant Science: How do Babies Learn to Talk?” Pacific Northwest: The Seattle Times Magazine.

(4) Siegel, D., Bryson, T. (2011.)  The Whole-Brain Child: 12 Revolutionary Strategies to Nurture Your Child’s Developing Mind. New York, NY:  Delacorte Press, a division of Random House, Inc.

(5) Center on the Developing Child at Harvard University. "Experiences Build Brain Architecture" and "Serve and Return Interaction Shapes Brain Circuitry" videos, two parts in the three-part series "Three Core Concepts in Early Development."

http://developingchild.harvard.edu/resources/multimedia/videos

(6) Kuhl, P.  (February 18, 2011.) “The Linguistic Genius of Babies,” video talk on TED.com, a TEDxRainier event.

www.ted.com/talks/patricia_kuhl_the_linguistic_genius_of_babies.html

(7) Lerer, J. (2012.) “Professor Discusses Babies’ Language Skills.”  The Harvard Crimson.

 

Andrew Meltzoff & Patricia Kuhl: Joint attention to mind

Sarah DeWeerdt  11 Feb 2013

Power couple: In addition to a dizzying array of peer-reviewed publications, Andrew Meltzoff and Patricia Kuhl have written a popular book on brain development, given TED talks and lobbied political leaders.

Andrew Meltzoff shares many things with his wife — research dollars, authorship, a keen interest in the young brain — but he does not keep his wife’s schedule.

“It’s one of the agreements we have,” he says, laying out the rule with a twinkle in his eye that conveys both the delights and the complications of working with one’s spouse.

Meltzoff, professor of psychology at the University of Washington in Seattle, and his wife, speech and hearing sciences professor Patricia Kuhl, are co-directors of the university’s Institute for Learning and Brain Sciences, which focuses on the development of the brain and mind during the first five years of life.

Between them, they have shown that learning is a fundamentally social process, and that babies begin this social learning when they are just weeks or even days old.

You could say the couple is attached at the cerebral cortex, but not at the hip: They take equal roles in running the institute, but they each have their own daily rhythms and distinct, if overlapping, scientific interests.

Kuhl studies how infants “crack the language code,” as she puts it — how they figure out sounds and meanings and eventually learn to produce speech. Meltzoff’s work focuses on social building blocks such as imitation and joint attention, or a shared focus on an object or activity. Meltzoff says these basic behaviors help children develop theory of mind, a sophisticated awareness and understanding of others’ thoughts and feelings.

All of these abilities are impaired in children with autism. Most of the couple’s studies have focused on typically developing infants, because, they say, it’s essential to understand typical development in order to appreciate the irregularities in autism.

Both also study autism, which can in turn help explain typical development.

In addition to a dizzying array of peer-reviewed publications, the duo have written a popular book on brain development, The Scientist in the Crib, and promote their ideas through TED talks and by lobbying political leaders.

Geraldine Dawson, chief science officer of the autism science and advocacy organization Autism Speaks and a longtime collaborator, calls Meltzoff and Kuhl “the dynamic duo.” “They’re sort of bigger-than-life type people, who fill the room when they walk into it,” she says.

Making a match:

Meltzoff and Kuhl’s story began with a scientific twist on a standard rom-com meet cute.

It was the early 1980s, and Kuhl, who had recently joined the faculty at the University of Washington, wanted to understand how infants hear and see vowels. But she was having trouble designing an effective experiment.

“I kept running into Andy’s office,” which was near hers, to talk it through, Kuhl recalls.

Meltzoff had done some research on how babies integrate what they see with what they touch, a process called cross-modal matching1. Soon he and Kuhl realized that they could adapt his experimental design to her question, and decided to collaborate.

They showed babies two video screens, each featuring a person mouthing a different vowel sound – “ahhh” or “eeee.” A speaker placed between the two screens played one of those two vowel sounds.

They found that babies as young as 18 to 20 weeks look longer at the face that matches the sound they hear, integrating faces with voices2.

But that wasn’t the only significant result from those experiments.

“Speaking only for myself, I will say I became very interested in the very attractive, smart blonde that I was collaborating with,” Meltzoff says. “Criticizing each other’s scientific writing at the same time the relationship was building was… interesting.”

And effective: Their paper appeared in Science in 1982, and the couple married three years later.

Listening to Meltzoff tell that story, it’s easy to understand why some colleagues say he is funny but they can’t quite explain why. His humor is subtle and wry. More obvious is his passion, not just for science, but for working out the theory underlying empirical results. Even his wife describes his personality as “cerebral.”

“He just has this laser vision for homing in on what is the heart of the issue,” says Rechele Brooks, research assistant professor of psychiatry and behavioral sciences at the University of Washington, who collaborates with Meltzoff on studies of gaze.

For example, in one of his earliest papers, Meltzoff wanted to investigate how babies learn to imitate. He found that infants just 12 to 21 days old can imitate both facial expressions and hand gestures, much earlier than previously thought3.

“It really turned the scientific community on its head,” Brooks says.

Early insights:

Face to face: Meltzoff and Kuhl are developing a method to simultaneously record the brain activity of two people as they interact.

Meltzoff continued to study infants, tracing back the components of theory of mind to their earliest developmental source. That sparked the interest of Dawson, who had gotten to know Meltzoff as a student at the University of Washington in the 1970s, and became the first director of the university’s autism center in 1996.

Meltzoff and Dawson together applied his techniques to study young, often nonverbal, children with autism. In one study, they found that children with autism have more trouble imitating others than do either typically developing children or those with Down syndrome4.

In another study, they found that children with autism are less interested in social sounds such as clapping or hearing their name called than are their typically developing peers5.  They also found that how children with autism imitate and play with toys when they are 3 or 4 years old predicts their communication skills two years later6.

Most previous studies of autism had focused on older children, Dawson says, and this work helped paint a picture of the disorder earlier in childhood.

Kuhl began her career with studies showing that monkeys7 and even chinchillas8 can distinguish the difference between speech sounds, or phonemes, such as “ba” and “pa,” just as human infants can.

“The bottom line was that animals were sharing this aspect of perception,” Kuhl says.

So why are people so much better than animals at learning language? Kuhl has been trying to answer that question ever since, first through behavioral studies and then by measuring brain activity using imaging techniques.

Kuhl is soft-spoken, but a listener wants to lean in to catch every word. Scientists who have worked with her describe her as poised and perfectly put together, a master of gentle yet effective diplomacy.

“She has her sort of magnetic power to pull people together,” says Yang Zhang, associate professor of speech-language-hearing sciences at the University of Minnesota in Rochester, who was a graduate student and postdoctoral researcher in Kuhl’s lab beginning in the late 1990s.

Listen and learn:

At one point, Kuhl turned her considerable powers of persuasion on a famously smooth negotiator, then-President Bill Clinton.

Kuhl had shown that newborns hear virtually all speech sounds, but by 6 months of age they lose the ability to distinguish sounds that aren’t part of their native language9.

At the White House Conference on Early Childhood Development and Learning in 1997, she described how infants learn by listening, long before they can speak.

Clinton, ever the policy wonk, asked her how much babies need to hear in order to learn. Kuhl said she didn’t know — but if Clinton gave her the funds, she would find out. “Even the president could see that research on the effects of language input on the young brain had impact on society,” she says.

Kuhl used the funds Clinton gave her to design a study in which 9-month-old babies in the U.S. received 12 short Mandarin Chinese ‘lessons.’ The babies quickly learned to distinguish speech sounds in the second language, her team found — but only if the speaker was live, not in a video10.

Those results contributed to Kuhl’s ‘social gating’ hypothesis, which holds that social interaction is necessary for picking up on the sounds and patterns of language. “We’re saying that social interaction is a kind of gate to an interest in learning, the kind that humans are completely masters of,” she says.

Her results also suggest that the language problems in children with autism may be the result of their social deficits.

“Children with autism will have a very difficult time acquiring language if language requires the social gate to be open,” she says.

Over the years, Kuhl and Meltzoff have had largely independent research programs, but her recent focus on the social roots of language dovetails with his long-time focus on social interaction.

These days, they are trying to develop ‘face-to-face neuroscience,’ which involves simultaneously recording brain activity from two people as they interact with each other.

This approach would allow researchers to observe, for example, what happens in an infant’s brain when she hears her mother’s voice, and what happens in the mother’s brain as she sees her infant respond to her. “It’s going to be very special to do,” Meltzoff says enthusiastically, even though the effort is more directly related to Kuhl’s work than to his own.

It’s clear that this fervor for each other’s work goes both ways.

“That’s one of the great things about being married to a scientist,” Meltzoff says. “When you come home and think, ‘God, I really nailed this methodologically,’ your wife, instead of yawning, leans forward and says, ‘You did? Tell me about the method, that’s so exciting.’”

News and Opinion articles on SFARI.org are editorially independent of the Simons Foundation.

References:

1: Meltzoff A.N. and R.W. Borton Nature 282, 403-404 (1979) PubMed

2: Kuhl P.K. and A.N. Meltzoff Science 218, 1138-1141 (1982) PubMed

3: Meltzoff A.N. and M.K. Moore Science 198, 75-78 (1977) PubMed

4: Dawson G. et al. Child Dev. 69, 1276-1285 (1998) PubMed

5: Dawson G. et al. J. Autism Dev. Disord. 28, 479-485 (1998) PubMed

6: Toth K. et al. J. Autism Dev. Disord. 36, 993-1005 (2006) PubMed

7: Kuhl P.K. and D.M. Padden Percept. Psychophys. 32, 542-550 (1982) PubMed

8: Kuhl P.K. and J.D. Miller Science 190, 69-72 (1975) PubMed

9: Kuhl P.K. et al. Science 255, 606-608 (1992) PubMed

10: Kuhl P.K. et al. Proc. Natl. Acad. Sci. U.S.A. 100, 9096-9101 (2003) PubMed

 

 

Using genetic data in cognitive neuroscience: from growing pains to genuine insights

Adam E. Green, Marcus R. Munafò, Colin G. DeYoung, John A. Fossella, Jin Fan & Jeremy R. Gray
Nature Reviews Neuroscience 9, 710-720 (September 2008)
http://dx.doi.org/10.1038/nrn2461

Research that combines genetic and cognitive neuroscience data aims to elucidate the mechanisms that underlie human behaviour and experience by way of ‘intermediate phenotypes’: variations in brain function. Using neuroimaging and other methods, this approach is poised to make the transition from health-focused investigations to inquiries into cognitive, affective and social functions, including ones that do not readily lend themselves to animal models. The growing pains of this emerging field are evident, yet there are also reasons for a measured optimism.
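The "intermediate phenotype" logic described in this abstract (genetic variation influencing behaviour by way of measurable variation in brain function) can be sketched as a simple mediation-style analysis. The example below is a generic statistical illustration on simulated data, not the authors' method: a genotype predicts a simulated brain measure, the brain measure predicts behaviour, and the direct genotype-behaviour association shrinks once the brain measure is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: genotype (0/1/2 allele count) -> brain measure -> behaviour.
# In this toy generative model the gene affects behaviour only via the brain measure.
genotype = rng.integers(0, 3, size=n).astype(float)
brain = 0.5 * genotype + rng.normal(0, 1, size=n)      # the 'intermediate phenotype'
behaviour = 0.8 * brain + rng.normal(0, 1, size=n)

def slope(x, y):
    """Ordinary least-squares slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def partial_slope(x, y, covariate):
    """Slope of y on x after adjusting for a covariate."""
    X = np.column_stack([np.ones_like(x), x, covariate])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("gene -> brain slope:      ", round(slope(genotype, brain), 3))
print("brain -> behaviour slope: ", round(slope(brain, behaviour), 3))
print("gene -> behaviour (total):", round(slope(genotype, behaviour), 3))
print("gene -> behaviour | brain:", round(partial_slope(genotype, behaviour, brain), 3))
```

With this simulated data the gene-behaviour slope is substantial on its own but close to zero after adjusting for the brain measure, which is the signature pattern researchers look for when testing whether a brain variable acts as an intermediate phenotype.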

NSF – Cognitive Neuroscience Award

The cross-disciplinary integration and exploitation of new techniques in cognitive neuroscience has generated a rapid growth in significant scientific advances. Research topics have included sensory processes (including olfaction, thirst, multi-sensory integration), higher perceptual processes (for faces, music, etc.), higher cognitive functions (e.g., decision-making, reasoning, mathematics, mental imagery, awareness), language (e.g., syntax, multi-lingualism, discourse), sleep, affect, social processes, learning, memory, attention, motor, and executive functions. Cognitive neuroscientists further clarify their findings by examining developmental and transformational aspects of such phenomena across the span of life, from infancy to late adulthood, and through time.

New frontiers in cognitive neuroscience research have emerged from investigations that integrate data from a variety of techniques. One very useful technique has been neuroimaging, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), optical imaging (near infrared spectroscopy or NIRS), anatomical MRI, and diffusion tensor imaging (DTI). A second class of techniques includes physiological recording such as subdural and deep brain electrode recording, electroencephalography (EEG), event-related electrical potentials (ERPs), and galvanic skin responses (GSRs). In addition, stimulation methods have been employed, including transcranial magnetic stimulation (TMS), subdural and deep brain electrode stimulation, and drug stimulation. A fourth approach involves cognitive and behavioral methods, such as lesion-deficit neuropsychology and experimental psychology. Other techniques have included genetic analysis, molecular modeling, and computational modeling. The foregoing variety of methods is used with individuals in healthy, neurological, psychiatric, and cognitively-impaired conditions. The data from such varied sources can be further clarified by comparison with invasive neurophysiological recordings in non-human primates and other mammals.

Findings from cognitive neuroscience can elucidate functional brain organization, such as the operations performed by a particular brain area and the system of distributed, discrete neural areas supporting a specific cognitive, perceptual, motor, or affective operation or representation. Moreover, these findings can reveal the effect on brain organization of individual differences (including genetic variation), plasticity, and recovery of function following damage to the nervous system.

Read Full Post »