Finding the Rhythm of Literacy
Q&A with Nina Kraus, Ph.D.
Nina Kraus, Ph.D.
Hugh Knowles Professor
Director, Auditory Neuroscience Lab
Northwestern University
Dana Foundation Grantee: 2014-17
For most elementary schoolers, learning how to read seems to occur almost naturally. But for the approximately 5-10 percent of children who are diagnosed with reading disorders, gaining critical literacy skills can be a significant challenge—one that can directly affect learning in other academic subjects, overall educational attainment, and future socioeconomic status. Nina Kraus, Ph.D., director of the auditory neuroscience lab at Northwestern University, has long studied how the brain processes sound. She is currently investigating how an individual’s rhythm synchronization abilities might affect his or her reading skills, work that may offer new insights into how best to help children who are struggling with their ABCs.
Your work has looked quite a bit at music’s effect on the brain, as well as second languages and rhythmic abilities. How do all these things fit together?
I’m a biologist. The overarching theme of my work is sound processing in the brain and the impact that sound has on our lives. I grew up in a household where more than one language was spoken. My mother was a pianist. I must have absorbed the idea early on that sound was important, even though it wasn’t something I consciously considered until later. I originally majored in comparative literature because I knew some languages and liked to read. But then I took a biology class and I was hooked. I eventually decided to pursue a doctorate to learn how the brain processes sound—and, initially, I was interested in the connection with language. I began recording from individual neurons while an animal learned a sound-based task. The animal would hear a tone and then learn to associate a reward or behavior with that sound. It was fascinating to see the brain’s response to sound firsthand: how the same neuron could respond to the same sound in a very different way once a sound-to-meaning connection was made. Language is rooted in sound. So is music. And my lab, Brainvolts, cares about practical applications of how sound affects cognition and learning.
There is evidence that children who make music are better readers. Historically, why did we think that was the case?
It’s an interesting connection—and may not seem so obvious at first. Reading involves your eyes, right? So why would learning to read have anything to do with processing sound? But we learn to speak before we learn to read, and we make sound-to-meaning connections there. As we read the letters on the page, we are connecting those images with the letter sounds. That provides the foundation for later literacy. If there are not good sound-to-meaning connections, if language is not strong, it will be more difficult for a child to learn to read. If we could find a way to strengthen the sound-to-meaning connections in the brain—because, as we know, the brain is very malleable—we might be able to help children learn to read more easily.
Let me tell you about an experiment that made a big impression on me. It was a study done on speakers of tonal languages—languages in which changes in pitch, within a single syllable, signal changes in meaning. Those of us who don’t speak tonal languages might not even notice those differences; we don’t need to. But the investigators in this study discovered that even when tonal language speakers were asleep, their brains responded much more accurately to pitch changes in speech sounds than the brains of non-tonal language speakers did. The brains of tonal language speakers had developed a highly tuned default, an automatic way to respond to these sounds, because the sounds have meaning. If you are not a tonal language speaker, but a musician, you also have this benefit. And even though it may not seem so at first, this really speaks to reading.
In another study we asked whether auditory-visual processing might be enhanced in people with musical training. For example, if a person both heard and saw someone playing a musical instrument, the brain might respond more precisely and vigorously if they had music training. While this turned out to be the case, the more interesting question was: what if that person both heard and saw someone speaking? Would their auditory-visual connections be stronger? The answer is yes: audio-visual sound processing was stronger in musician brains.
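To make "responding more accurately to pitch changes" concrete, here is a toy sketch in Python of one common way pitch tracking can be quantified: correlate the pitch contour of the speech sound with the pitch contour extracted from the neural response. The contours and numbers below are made up for illustration; this is not the analysis pipeline from the study.

import numpy as np

def pitch_tracking_accuracy(stimulus_f0, response_f0):
    """Pearson correlation between stimulus and response pitch contours.

    Inputs are hypothetical F0 (fundamental frequency) estimates in Hz,
    sampled at the same time points; r = 1.0 means perfect tracking.
    """
    if stimulus_f0.shape != response_f0.shape:
        raise ValueError("contours must share the same time points")
    return float(np.corrcoef(stimulus_f0, response_f0)[0, 1])

# Made-up example: a rising, Mandarin-like pitch contour over a 250 ms
# syllable, and a noisy "neural" estimate that tracks it imperfectly.
t = np.linspace(0.0, 0.25, 50)
stimulus = 100.0 + 160.0 * t                        # pitch rises 100 -> 140 Hz
rng = np.random.default_rng(0)
response = stimulus + rng.normal(0.0, 3.0, t.size)  # imperfect tracking
print(f"tracking accuracy r = {pitch_tracking_accuracy(stimulus, response):.2f}")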
Why do you think that rhythm might help with reading acquisition?
Simply put, we know there is rhythm in music. That’s obvious. But it turns out there is also a lot of rhythm in speech. As I speak to you, the rhythm tells you where the emphasis is. Speakers hold certain syllables longer or shorter to show you where important words stop and start. You can pretty easily drum along when someone is speaking. You can follow the beat—and that beat can help you understand and make those crucial sound-to-meaning connections. Rhythm is also important when conversing with others. When you are going back and forth, you are in a rhythm. It’s a pattern of sorts.
Originally, we thought that if you were good at one type of rhythm, you’d be good at any rhythm. But when we tested people’s rhythmic abilities in different ways—pattern skills and beat-keeping skills—we found out that wasn’t the case. The ability to follow rhythmic patterns and the ability to follow the beat are two very different phenomena. When we looked at what was happening in the brain, we measured how the brain responds to sound on different rhythmic time scales. Rhythm ability corresponded not only with specific brain activity but, importantly, with people’s communication skills. In particular, we discovered that synchronization ability—tapping your foot along to a beat, for example—aligns with rapid brain activity linked to reading, language, and phonological skills. The ability to follow a pattern is also important for language, helping to fill in any gaps in sound so you can understand what is being said to you in a noisy place. Understanding children’s rhythmic strengths and bottlenecks is a step toward helping to improve language skills.
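As an illustration of what scoring "synchronization ability" can look like, here is a minimal Python sketch that rates tapping along to a metronome by how early or late each tap falls relative to the nearest beat, and how variable those errors are. The tap timestamps and the scoring function are hypothetical, not the lab’s protocol.

import numpy as np

def synchronization_score(tap_times, tempo_bpm):
    """Return (mean asynchrony, asynchrony SD) in seconds.

    tap_times: hypothetical tap timestamps in seconds.
    tempo_bpm: metronome tempo; beats are assumed to fall on multiples
    of the beat period, starting at t = 0.
    """
    period = 60.0 / tempo_bpm
    nearest_beats = np.round(tap_times / period) * period
    asynchronies = tap_times - nearest_beats   # negative = tapped early
    return asynchronies.mean(), asynchronies.std()

# Made-up example: slightly early but fairly steady tapping at 120 bpm.
taps = np.array([0.48, 0.97, 1.49, 1.96, 2.47, 2.99, 3.46])
mean_async, sd_async = synchronization_score(taps, tempo_bpm=120)
print(f"mean asynchrony {mean_async*1000:.0f} ms, SD {sd_async*1000:.0f} ms")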
What does this mean for language and reading acquisition?
There are huge implications. First, this work helps us understand the biological processes that align with communication skills. Second, it enables us to assess those abilities objectively—so we can measure, for example, how a three-year-old’s brain responds to sound and use that to predict future language skills. If we can figure out which children are going to struggle to read before they go through the pain and suffering of struggling with reading, we could intervene early and help. We know that early intervention works. Third, there are ways to train rhythmic skills. The most straightforward way is through music. But there are also computer-based programs that can train rhythmic skills, including one called the Interactive Metronome. I was initially skeptical about this program. It couldn’t be as easy as, "You can strengthen a child’s rhythmic skills on the computer and it will help him or her learn better!"
We decided to do an in-depth investigation of the brain-behavior relationships of this program. The participant listens to a series of sounds and uses their whole body to synchronize to the rhythm they are hearing. They get feedback when they are too slow or too fast, which they can use to modify their movement. We discovered that children’s interactive rhythmic abilities corresponded to certain strengths and weaknesses in language skills. Importantly, the research gave us insight into the underlying biological processes.
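Here is a minimal sketch of that feedback loop as described: each response is timed against the expected beat, and the timing error drives the "too fast" or "too slow" feedback. This is an illustration of the general idea only, not Interactive Metronome’s actual algorithm; the on-target tolerance window is an assumption.

def rhythm_feedback(response_time_s, beat_time_s, tolerance_s=0.015):
    """Classify one movement relative to the target beat.

    tolerance_s is a hypothetical on-target window (here +/- 15 ms).
    """
    error = response_time_s - beat_time_s
    if abs(error) <= tolerance_s:
        return "on the beat"
    return "too fast (early)" if error < 0 else "too slow (late)"

# Made-up trial: beats every 0.5 s, a few simulated responses.
for i, response in enumerate([0.51, 0.96, 1.502, 2.08]):
    beat = 0.5 * (i + 1)
    print(f"beat at {beat:.2f}s: {rhythm_feedback(response, beat)}")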
How might this work one day be applied in a classroom?
Music training is the best way to develop rhythmic skills. But my expectation is that Interactive Metronome training can strengthen rhythmic skills in any child. From a therapeutic standpoint, if a child is at risk for a language problem, this program may help. It provides a concrete course of treatment. Some of the lab’s other work involves concussion and head injury. We are investigating how head injuries disrupt sound processing in the brain. Making sense of sound is one of the most computationally delicate and complex jobs for the brain. We reasoned that a blow to the head would disrupt this delicate infrastructure, and it turns out we were right. There is some evidence that rhythmic activities, such as Interactive Metronome, might help people who continue to have concussion symptoms. We’d like to investigate this further.
Given all your work in language, music, and sound, what are you learning about sound’s neuroprotective abilities?
It is all coming together. That’s one reason why my lab looks at different ways that sound affects our lives and our nervous system. What happens during development? What happens in aging? What happens in concussion? When you are learning a second language? Learning music? Sound plays a vital role in all of it. Sound is an unrecognized force in our lives. But study after study shows it plays an important role in shaping cognitive, sensory, motor, and reward systems in the brain. Ultimately, from a biological standpoint, we are what we do. We reinforce various brain circuits, for better and for worse. It’s clear that sound plays an important role in how we engage with the world. Understanding this rhythm-reading link is just one more way that we are seeing how sound shapes our brains.
Publications
Bonacina S, Krizman J, White-Schwoch T, Kraus N (2018) Clapping in time parallels literacy and calls upon similar neural mechanisms in early readers. Annals of the New York Academy of Sciences 1423: 338–348.
Kraus N, White-Schwoch T (2017) Neurobiology of everyday communication: what have we learned from music? The Neuroscientist 23(3): 287–298.
Kraus N, Nicol T (2017) The power of sound for brain health. Nature Human Behaviour 1: 700–702.
Woodruff Carr K, Tierney A, White-Schwoch T, Kraus N (2016) Intertrial auditory neural stability supports beat synchronization in preschoolers. Developmental Cognitive Neuroscience 17: 76–82.