The Neurogenetics of Language – Patricia Kuhl

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Innovation

Series E. 2; 5.7


2015 George A. Miller Award

In neuroimaging studies using structural (diffusion weighted magnetic resonance imaging or DW-MRI) and functional (magnetoencephalography or MEG) imaging, my laboratory has produced data on the neural connectivity that underlies language processing, as well as electrophysiological measures of language functioning during various levels of language processing (e.g., phonemic, lexical, or sentential). Taken early in development, electrophysiological measures or “biomarkers” have been shown to predict future language performance in neurotypical children as well as children with autism spectrum disorders (ASD). Work in my laboratory is now combining these neuroimaging approaches with genetic sequencing, allowing us to understand the genetic contributions to language learning.

Patricia Kuhl shares astonishing findings about how babies learn one language over another — by listening to the humans around them

Kuhl Constructs: How Babies Form Foundations for Language

MAY 3, 2013

by Sarah Andrews Roehrich, M.S., CCC-SLP

Years ago, I was captivated by an adorable baby on the front cover of a book, The Scientist in the Crib: What Early Learning Tells Us About the Mind, written by a trio of research scientists: Alison Gopnik, PhD, Andrew Meltzoff, PhD, and Patricia Kuhl, PhD.

At the time, I was simply interested in how babies learn about their worlds, how they conduct experiments, and how this learning could impact early brain development.  I did not realize the extent to which interactions with family, caretakers, society, and culture could shape the direction of a young child’s future.

Now, as a speech-language pathologist in Early Intervention in Massachusetts, more cognizant of the myriad factors that shape a child’s cognitive, social-emotional, language, and literacy development, I have been absolutely delighted to discover more of the work of Dr. Kuhl, a distinguished professor of speech and hearing sciences at the University of Washington.  So, last spring, when I read that Dr. Kuhl was going to present “Babies’ Language Skills” as one part of a 2-part seminar series sponsored by the Mind, Brain, and Behavior Annual Distinguished Lecture Series at Harvard University,1 I was thrilled to have the opportunity to attend. Below are some highlights from that experience and the questions it has since sparked for me:

Lip ‘Reading’ Babies
In “Bimodal Perception of Speech in Infancy” (Science, 1982), a study cited in the 2005 Seattle Times article “Infant Science: How do Babies Learn to Talk?” by Paula Bock, Drs. Patricia Kuhl and Andrew Meltzoff showed that babies as young as 18 weeks of age could listen to “Ah ah ah” or “Ee ee ee” vowel sounds and gaze at the correct, corresponding lip shape on a video monitor.
This image from Kuhl’s 2011 TED talk shows how a baby is trained to turn his head in response to a change in such vowel sounds, and is immediately rewarded by watching a black box light up while a panda bear inside pounds a drum.  Images provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Who is Dr. Patricia Kuhl and how has her work re-shaped our knowledge about how babies learn language?

Dr. Kuhl, who is co-director of the Institute for Learning and Brain Sciences at The University of Washington, has been internationally recognized for her research on early language and brain development, and for her studies on how young children learn.  In her most recent research experiments, she’s been using magnetoencephalography (MEG)–a relatively new neuroscience technology that measures magnetic fields generated by the activity of brain cells–to investigate how, where, and with what frequency babies from around the world process speech sounds in the brain when they are listening to adults speak in their native and non-native languages.

A 6-month-old baby sits in a magnetoencephalography machine, which maps brain activity, while listening to various languages in earphones and playing with a toy. Image originally printed in “Brain Mechanisms in Early Language Acquisition” (Neuron review, Cell Press, 2010) and provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Not only does Kuhl’s research point us in the direction of how babies learn to process phonemes, the sound units upon which many languages are built, but it is part of a larger body of studies looking at infants across languages and cultures that has revolutionized our understanding of language development over the last half of the 20th century—leading to, as Kuhl puts it, “a new view of language acquisition, that accounts for both the initial state of linguistic knowledge in infants, and infants’ extraordinary ability to learn simply by listening to their native language.”2

What is neuroplasticity and how does it underlie child development?

Babies are born with 100 billion neurons, about the same as the number of stars in the Milky Way.3 In The Whole-Brain Child, Daniel Siegel, MD, and Tina Payne Bryson, PhD, explain that when we undergo an experience, these brain cells respond through changes in patterns of electrical activity—in other words, they “fire” electrical signals called “action potentials.”4

In a child’s first years of life, the brain exhibits extraordinary neuroplasticity, refining its circuits in response to environmental experiences. Synapses—the sites of communication between neurons—are built, strengthened, weakened and pruned away as needed. Two short videos from the Center on the Developing Child at Harvard, “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry”, nicely depict how some of this early brain development happens.5
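The “fire together, wire together” dynamic of strengthening, weakening, and pruning described above can be pictured with a toy Hebbian model. This is a textbook-style illustration, not code from Siegel and Bryson or from Kuhl’s lab; the learning rate, decay factor, and pruning threshold are all invented:

```python
# Toy Hebbian sketch: synapses strengthen when pre- and postsynaptic
# neurons fire together, decay otherwise, and are pruned when weak.
eta = 0.1           # invented learning rate
prune_below = 0.05  # invented pruning threshold

# Synaptic weights from three input neurons onto one output neuron.
weights = [0.2, 0.2, 0.2]

# "Experience": input 2 fires alone (no output response) for a while,
# then inputs 0 and 1 repeatedly fire together with the output neuron.
episodes = [([0, 0, 1], 0)] * 20 + [([1, 1, 0], 1)] * 20

for pre, post in episodes:
    for i in range(3):
        if pre[i] and post:
            # Hebbian strengthening, saturating toward 1.0
            weights[i] += eta * (1 - weights[i])
        else:
            # mild decay of unused or uncorrelated synapses
            weights[i] *= 0.9

# Pruning: weak synapses are removed entirely.
weights = [w if w >= prune_below else 0.0 for w in weights]
```

Synapses driven by correlated activity end up strong, while the synapse whose firing never coincides with the output decays below threshold and is pruned away, mirroring the build/strengthen/weaken/prune cycle described above.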

Since brain circuits organize and reorganize themselves in response to an infant’s interactions with his or her environment, exposing babies to a variety of positive experiences (such as talking, cuddling, reading, singing, and playing in different environments) not only helps tune babies in to the language of their culture, but it also builds a foundation for developing the attention, cognition, memory, social-emotional, language and literacy, and sensory and motor skills that will help them reach their potential later on.

When and how do babies become “language-bound” listeners?

In her 2011 TED talk, “The Linguistic Genius of Babies,” Dr. Kuhl discusses how babies under 8 months of age from different cultures can detect sounds in any language from around the world, but adults cannot do this.6 So when exactly do babies go from being “citizens of the world,” as Kuhl puts it, to becoming “language-bound” listeners, specifically focused on the language of their culture?

Between 8 and 10 months of age, when babies are trying to master the sounds used in their native language, they enter a critical period for sound development.1  Kuhl explains that in one set of experiments, she compared a group of babies in America learning to differentiate the sounds “/Ra/” and “/La/” with a group of babies in Japan.  At 6 to 8 months, the babies in both cultures recognized these sounds equally well.  However, by 10 to 12 months, after multiple training sessions, the babies in Seattle, Washington, were much better at detecting the “/Ra/-/La/” shift than were the Japanese babies.

Kuhl explains these results by suggesting that babies “take statistics” on how frequently they hear sounds in their native and non-native languages.  Because “/Ra/” and “/La/” occur far more frequently in English than in Japanese, the American babies heard many more examples of the contrast and became better at detecting it.  Kuhl believes that the results of this study indicate a shift in brain development, during which babies from each culture are preparing for their own languages and becoming “language-bound” listeners.
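Kuhl’s “taking statistics” idea is, in essence, distributional learning: infants track how acoustic tokens cluster along the dimensions that distinguish sounds. The sketch below is a loose illustration, not Kuhl’s actual model; the numbers are invented stand-ins for an acoustic cue (such as the third-formant onset that separates /r/ from /l/), and a simple one-dimensional 2-means clusterer stands in for the infant’s category learner:

```python
import random
import statistics

random.seed(0)

def sample_input(bimodal, n=500):
    """Simulated acoustic tokens along one invented cue dimension.
    Bimodal input (English-like) mixes /r/-like and /l/-like clusters;
    unimodal input (Japanese-like) has a single central category."""
    if bimodal:
        return [random.gauss(1600 if random.random() < 0.5 else 2600, 150)
                for _ in range(n)]
    return [random.gauss(2100, 250) for _ in range(n)]

def learn_two_prototypes(tokens, iters=25):
    """Toy distributional learner: one-dimensional 2-means clustering."""
    c1, c2 = min(tokens), max(tokens)
    for _ in range(iters):
        g1 = [t for t in tokens if abs(t - c1) <= abs(t - c2)]
        g2 = [t for t in tokens if abs(t - c1) > abs(t - c2)]
        c1, c2 = statistics.fmean(g1), statistics.fmean(g2)
    return sorted((c1, c2))

eng = learn_two_prototypes(sample_input(bimodal=True))   # English-like input
jpn = learn_two_prototypes(sample_input(bimodal=False))  # Japanese-like input
```

With bimodal, English-like exposure the learner’s two prototypes end up far apart, supporting discrimination; with unimodal, Japanese-like exposure they collapse toward the middle. That is one way to picture why the “/Ra/-/La/” contrast fades for the Japanese-learning babies.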

In what ways are nurturing interactions with caregivers more valuable to babies’ early language development than interfacing with technology?

If parents, caretakers, and other children can help mold babies’ language development simply by talking to them, it is tempting to ask whether young babies can learn language by listening to the radio, watching television, or playing on their parents’ mobile devices. I mean, what could be more engaging than the brightly-colored screens of the latest and greatest smart phones, iPads, iPods, and computers? They’re perfect for entertaining babies.  In fact, some babies and toddlers can operate their parents’ devices before even having learned how to talk.

However, based on her research, Kuhl states that young babies cannot learn language from television and that lots of face-to-face interaction is necessary for babies to learn how to talk.1  In one interesting study, Kuhl’s team exposed 9-month-old American babies to Mandarin in various forms–in-person interactions with native Mandarin speakers vs. audiovisual or audio recordings of these speakers–and then looked at the impact of this exposure on the babies’ ability to make Mandarin phonetic contrasts (not found in English) at 10-12 months of age. Strikingly, twelve laboratory visits featuring in-person interactions with the native Mandarin speakers were sufficient to teach the American babies to distinguish the Mandarin sounds as well as Taiwanese babies of the same age. However, the same number of lab visits featuring the audiovisual or audio recordings made no impact: American babies exposed to Mandarin through these technologies performed the same as a control group of American babies exposed to native English speakers during their lab visits.

This diagram depicts the results of a Kuhl study on American infants exposed to Mandarin in various forms–in-person interactions with native speakers versus television or audio recordings of these speakers. As the top blue triangle shows, the American infants exposed in person to native Mandarin speakers performed just as well on a Mandarin phoneme-distinction task as age-matched Taiwanese counterparts. However, the American infants exposed to television or audio recordings of the Mandarin speakers performed the same as a control group of American babies exposed to native English speakers during their lab visits. Diagram displayed in Kuhl’s TED Talk,6 provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Kuhl believes that this is primarily because a baby’s interactions with others engage the social brain, a critical element in helping children learn to communicate in their native and non-native languages.6  In other words, learning language is not simply a technical skill that can be learned by listening to a recording or watching a show on a screen.  Instead, it is a special gift that is handed down from one generation to the next.

Language is learned through talking, singing, storytelling, reading, and many other nurturing experiences shared between caretaker and child.  Babies are naturally curious; they watch every movement and listen to every sound they hear around them.  When parents talk, babies look up and watch their mouth movements with intense wonder.  Parents respond in turn, speaking in “motherese,” a special variant of language designed to bathe babies in the sound patterns and speech sounds of their native language. Motherese helps babies hear the “edges” of sound, the very thing that is difficult for babies who exhibit symptoms of dyslexia and auditory processing issues later on.

Over time, by listening to and engaging with the speakers around them, babies build sound maps that set the stage for them to be able to say words and learn to read later on.  In fact, based on years of research, Kuhl has discovered that a baby’s ability to discriminate phonemes at 7 months old predicts that child’s future reading skills at age 5.7
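The predictive claim above amounts to a longitudinal correlation between an early discrimination score and a later reading score. The sketch below shows what such a relationship looks like; every number in it is invented purely for illustration (the 0.6 effect size is an arbitrary choice, and these are not Kuhl’s data):

```python
import math
import random

random.seed(1)

# Hypothetical cohort: disc = phoneme-discrimination score at 7 months
# (standardized); reading = reading score at age 5, loosely tied to disc.
n = 40
disc = [random.gauss(0.0, 1.0) for _ in range(n)]
reading = [0.6 * d + random.gauss(0.0, 0.8) for d in disc]

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(disc, reading)  # positive r: early skill tracks later skill
```

Correlations of this general shape, often computed from electrophysiological measures rather than behavioral scores, are what underlie statements that an early measure “predicts” a later one.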

I believe that educating families about brain development, nurturing interactions, and the benefits and limits of technology is absolutely critical to helping families focus on what is most important in developing their children’s communication skills.  I also believe that Kuhl’s work is invaluable in this regard.  Not only has it focused my attention on how babies form foundations for language, but it has illuminated my understanding of how caretaker-child interactions help set the stage for babies to become language-bound learners.


(1) Kuhl, P. (April 3, 2012.) Talk on “Babies’ Language Skills.” Mind, Brain, and Behavior Annual Distinguished Lecture Series, Harvard University.

(2) Kuhl, P. (2000). “A New View of Language Acquisition.” This paper was presented at the National Academy of Sciences colloquium “Auditory Neuroscience: Development, Transduction, and Integration,” held May 19–21, 2000, at the Arnold and Mabel Beckman Center in Irvine, CA. Published by the National Academy of Sciences.

(3) Bock, P. (2005.)  “The Baby Brain.  Infant Science: How do Babies Learn to Talk?” Pacific Northwest: The Seattle Times Magazine.

(4) Siegel, D., Bryson, T. (2011.)  The Whole-Brain Child: 12 Revolutionary Strategies to Nurture Your Child’s Developing Mind. New York, NY:  Delacorte Press, a division of Random House, Inc.

(5) Center on the Developing Child at Harvard University. “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry” videos, two parts in the three-part series, “Three Core Concepts in Early Development.”

(6) Kuhl, P.  (February 18, 2011.) “The Linguistic Genius of Babies,” video talk at a TEDxRainier event.

(7) Lerer, J. (2012.) “Professor Discusses Babies’ Language Skills.”  The Harvard Crimson.


Andrew Meltzoff & Patricia Kuhl: Joint attention to mind

Sarah DeWeerdt  11 Feb 2013

Power couple: In addition to a dizzying array of peer-reviewed publications, Andrew Meltzoff and Patricia Kuhl have written a popular book on brain development, given TED talks and lobbied political leaders.

Andrew Meltzoff shares many things with his wife — research dollars, authorship, a keen interest in the young brain — but he does not keep his wife’s schedule.

“It’s one of the agreements we have,” he says, laying out the rule with a twinkle in his eye that conveys both the delights and the complications of working with one’s spouse.

Meltzoff, professor of psychology at the University of Washington in Seattle, and his wife, speech and hearing sciences professor Patricia Kuhl, are co-directors of the university’s Institute for Learning and Brain Sciences, which focuses on the development of the brain and mind during the first five years of life.

Between them, they have shown that learning is a fundamentally social process, and that babies begin this social learning when they are just weeks or even days old.

You could say the couple is attached at the cerebral cortex, but not at the hip: They take equal roles in running the institute, but they each have their own daily rhythms and distinct, if overlapping, scientific interests.

Kuhl studies how infants “crack the language code,” as she puts it — how they figure out sounds and meanings and eventually learn to produce speech. Meltzoff’s work focuses on social building blocks such as imitation and joint attention, or a shared focus on an object or activity. Meltzoff says these basic behaviors help children develop theory of mind, a sophisticated awareness and understanding of others’ thoughts and feelings.

All of these abilities are impaired in children with autism. Most of the couple’s studies have focused on typically developing infants, because, they say, it’s essential to understand typical development in order to appreciate the irregularities in autism.

Both also study autism, which can in turn help explain typical development.

In addition to a dizzying array of peer-reviewed publications, the duo have written a popular book on brain development, The Scientist in the Crib, and promote their ideas through TED talks and by lobbying political leaders.

Geraldine Dawson, chief science officer of the autism science and advocacy organization Autism Speaks and a longtime collaborator, calls Meltzoff and Kuhl “the dynamic duo.” “They’re sort of bigger-than-life type people, who fill the room when they walk into it,” she says.

Making a match:

Meltzoff and Kuhl’s story began with a scientific twist on a standard rom-com meet cute.

It was the early 1980s, and Kuhl, who had recently joined the faculty at the University of Washington, wanted to understand how infants hear and see vowels. But she was having trouble designing an effective experiment.

“I kept running into Andy’s office,” which was near hers, to talk it through, Kuhl recalls.

Meltzoff had done some research on how babies integrate what they see with what they touch, a process called cross-modal matching1. Soon he and Kuhl realized that they could adapt his experimental design to her question, and decided to collaborate.

They showed babies two video screens, each featuring a person mouthing a different vowel sound – “ahhh” or “eeee.” A speaker placed between the two screens played one of those two vowel sounds.

They found that babies as young as 18 to 20 weeks look longer at the face that matches the sound they hear, integrating faces with voices2.

But that wasn’t the only significant result from those experiments.

“Speaking only for myself, I will say I became very interested in the very attractive, smart blonde that I was collaborating with,” Meltzoff says. “Criticizing each other’s scientific writing at the same time the relationship was building was… interesting.”

And effective: Their paper appeared in Science in 1982, and the couple married three years later.

Listening to Meltzoff tell that story, it’s easy to understand why some colleagues say he is funny but they can’t quite explain why. His humor is subtle and wry. More obvious is his passion, not just for science, but for working out the theory underlying empirical results. Even his wife describes his personality as “cerebral.”

“He just has this laser vision for homing in on what is the heart of the issue,” says Rechele Brooks, research assistant professor of psychiatry and behavioral sciences at the University of Washington, who collaborates with Meltzoff on studies of gaze.

For example, in one of his earliest papers, Meltzoff wanted to investigate how babies learn to imitate. He found that infants just 12 to 21 days old can imitate both facial expressions and hand gestures, much earlier than previously thought3.

“It really turned the scientific community on its head,” Brooks says.

Early insights:

Face to face: Meltzoff and Kuhl are developing a method to simultaneously record the brain activity of two people as they interact.

Meltzoff continued to study infants, tracing back the components of theory of mind to their earliest developmental source. That sparked the interest of Dawson, who had gotten to know Meltzoff as a student at the University of Washington in the 1970s, and became the first director of the university’s autism center in 1996.

Meltzoff and Dawson together applied his techniques to study young, often nonverbal, children with autism. In one study, they found that children with autism have more trouble imitating others than do either typically developing children or those with Down syndrome4.

In another study, they found that children with autism are less interested in social sounds such as clapping or hearing their name called than are their typically developing peers5.  They also found that how children with autism imitate and play with toys when they are 3 or 4 years old predicts their communication skills two years later6.

Most previous studies of autism had focused on older children, Dawson says, and this work helped paint a picture of the disorder earlier in childhood.

Kuhl began her career with studies showing that monkeys7 and even chinchillas8 can distinguish between speech sounds, or phonemes, such as “ba” and “pa,” just as human infants can.

“The bottom line was that animals were sharing this aspect of perception,” Kuhl says.

So why are people so much better than animals at learning language? Kuhl has been trying to answer that question ever since, first through behavioral studies and then by measuring brain activity using imaging techniques.

Kuhl is soft-spoken, but a listener wants to lean in to catch every word. Scientists who have worked with her describe her as poised and perfectly put together, a master of gentle yet effective diplomacy.

“She has her sort of magnetic power to pull people together,” says Yang Zhang, associate professor of speech-language-hearing sciences at the University of Minnesota in Rochester, who was a graduate student and postdoctoral researcher in Kuhl’s lab beginning in the late 1990s.

Listen and learn:

At one point, Kuhl turned her considerable powers of persuasion on a famously smooth negotiator, then-President Bill Clinton.

Kuhl had shown that newborns hear virtually all speech sounds, but by 6 months of age they lose the ability to distinguish sounds that aren’t part of their native language9.

At the White House Conference on Early Childhood Development and Learning in 1997, she described how infants learn by listening, long before they can speak.

Clinton, ever the policy wonk, asked her how much babies need to hear in order to learn. Kuhl said she didn’t know — but if Clinton gave her the funds, she would find out. “Even the president could see that research on the effects of language input on the young brain had impact on society,” she says.

Kuhl used the funds Clinton gave her to design a study in which 9-month-old babies in the U.S. received 12 short Mandarin Chinese ‘lessons.’ The babies quickly learned to distinguish speech sounds in the second language, her team found — but only if the speaker was live, not in a video10.

Those results contributed to Kuhl’s ‘social gating’ hypothesis, which holds that social interaction is necessary for picking up on the sounds and patterns of language. “We’re saying that social interaction is a kind of gate to an interest in learning, the kind that humans are completely masters of,” she says.

Her results also suggest that the language problems in children with autism may be the result of their social deficits.

“Children with autism will have a very difficult time acquiring language if language requires the social gate to be open,” she says.

Over the years, Kuhl and Meltzoff have had largely independent research programs, but her recent focus on the social roots of language dovetails with his long-time focus on social interaction.

These days, they are trying to develop ‘face-to-face neuroscience,’ which involves simultaneously recording brain activity from two people as they interact with each other.

This approach would allow researchers to observe, for example, what happens in an infant’s brain when she hears her mother’s voice, and what happens in the mother’s brain as she sees her infant respond to her. “It’s going to be very special to do,” Meltzoff says enthusiastically, even though the effort is more directly related to Kuhl’s work than to his own.

It’s clear that this fervor for each other’s work goes both ways.

“That’s one of the great things about being married to a scientist,” Meltzoff says. “When you come home and think, ‘God, I really nailed this methodologically,’ your wife, instead of yawning, leans forward and says, ‘You did? Tell me about the method, that’s so exciting.’”

News and Opinion articles are editorially independent of the Simons Foundation.


1: Meltzoff A.N. and R.W. Borton Nature 282, 403-404 (1979) PubMed

2: Kuhl P.K. and A.N. Meltzoff Science 218, 1138-1141 (1982) PubMed

3: Meltzoff A.N. and M.K. Moore Science 198, 75-78 (1977) PubMed

4: Dawson G. et al. Child Dev. 69, 1276-1285 (1998) PubMed

5: Dawson G. et al. J. Autism Dev. Disord. 28, 479-485 (1998) PubMed

6: Toth K. et al. J. Autism Dev. Disord. 36, 993-1005 (2006) PubMed

7: Kuhl P.K. and D.M. Padden Percept. Psychophys. 32, 542-550 (1982) PubMed

8: Kuhl P.K. and J.D. Miller Science 190, 69-72 (1975) PubMed

9: Kuhl P.K. et al. Science 255, 606-608 (1992) PubMed

10: Kuhl P.K. et al. Proc. Natl. Acad. Sci. U.S.A. 100, 9096-9101 (2003) PubMed



Using genetic data in cognitive neuroscience: from growing pains to genuine insights

Adam E. Green, Marcus R. Munafò, Colin G. DeYoung, John A. Fossella, Jin Fan & Jeremy R. Gray
Nature Reviews Neuroscience 9, 710-720 (2008)

Research that combines genetic and cognitive neuroscience data aims to elucidate the mechanisms that underlie human behaviour and experience by way of ‘intermediate phenotypes’: variations in brain function. Using neuroimaging and other methods, this approach is poised to make the transition from health-focused investigations to inquiries into cognitive, affective and social functions, including ones that do not readily lend themselves to animal models. The growing pains of this emerging field are evident, yet there are also reasons for a measured optimism.

NSF – Cognitive Neuroscience Award

The cross-disciplinary integration and exploitation of new techniques in cognitive neuroscience has generated a rapid growth in significant scientific advances. Research topics have included sensory processes (including olfaction, thirst, multi-sensory integration), higher perceptual processes (for faces, music, etc.), higher cognitive functions (e.g., decision-making, reasoning, mathematics, mental imagery, awareness), language (e.g., syntax, multi-lingualism, discourse), sleep, affect, social processes, learning, memory, attention, motor, and executive functions. Cognitive neuroscientists further clarify their findings by examining developmental and transformational aspects of such phenomena across the span of life, from infancy to late adulthood, and through time.

New frontiers in cognitive neuroscience research have emerged from investigations that integrate data from a variety of techniques. One very useful technique has been neuroimaging, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), optical imaging (near infrared spectroscopy or NIRS), anatomical MRI, and diffusion tensor imaging (DTI). A second class of techniques includes physiological recording such as subdural and deep brain electrode recording, electroencephalography (EEG), event-related electrical potentials (ERPs), and galvanic skin responses (GSRs). In addition, stimulation methods have been employed, including transcranial magnetic stimulation (TMS), subdural and deep brain electrode stimulation, and drug stimulation. A fourth approach involves cognitive and behavioral methods, such as lesion-deficit neuropsychology and experimental psychology. Other techniques have included genetic analysis, molecular modeling, and computational modeling. The foregoing variety of methods is used with individuals in healthy, neurological, psychiatric, and cognitively-impaired conditions. The data from such varied sources can be further clarified by comparison with invasive neurophysiological recordings in non-human primates and other mammals.

Findings from cognitive neuroscience can elucidate functional brain organization, such as the operations performed by a particular brain area and the system of distributed, discrete neural areas supporting a specific cognitive, perceptual, motor, or affective operation or representation. Moreover, these findings can reveal the effect on brain organization of individual differences (including genetic variation), plasticity, and recovery of function following damage to the nervous system.

