Visualizing Hearing

Q&A with John S. Oghalai, M.D.
Brenda Patoine
June 11, 2012

John S. Oghalai, M.D.
Associate Professor, Otolaryngology–Head and Neck Surgery
Director, The Lucile Packard Children’s Hearing Center
Stanford University
Dana Grantee: 2007-2010

Cochlear implants have been called the original brain-machine interfaces. Can you start by explaining what a cochlear implant is and how it works?

John S. Oghalai: The cochlea is an organ of the inner ear that turns the mechanical energy of sound vibrations into electrical signals, which are then carried to the brain by the auditory nerve. If the cochlea stops working, auditory signals don’t reach the brain and hearing loss occurs. Merely making sounds louder with a hearing aid doesn’t solve the problem.

Cochlear malfunction is the most common cause of hearing loss, which is the fourth most common developmental malformation. Roughly two out of every 1,000 children are born deaf, and many more people develop deafness as they get older.

A cochlear implant has a microphone to hear sounds and a processor that converts that mechanical signal to electrical impulses to stimulate the auditory nerve directly. It’s kind of like a bionic cochlea.

Cochlear implants have transformed the treatment of profound hearing loss, but they don’t work for everyone, and children pose special challenges. Why?

One of the limitations of cochlear implants is that they don’t restore normal hearing. They take sound and break it up into different frequency bands, corresponding to high-, medium-, or low-frequency sounds. The implant sends energy in those different frequency ranges to different sections of the cochlea, which stimulates different auditory nerve fibers. The problem is that we don’t really have a clear picture of which auditory nerve fibers are being stimulated.
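As a rough illustration of this band-splitting stage, here is a toy sketch in Python. The three band edges, sampling rate, and test tone are hypothetical examples chosen for clarity; real implant processors use efficient filterbanks and envelope extraction rather than a brute-force DFT.

```python
import math

def band_energies(signal, fs, bands):
    """Crude DFT-based energy per frequency band (illustrative only).

    A toy stand-in for a cochlear implant's filterbank stage: sound is
    split into frequency bands, and each band's energy would drive a
    different electrode along the cochlea.
    """
    n = len(signal)
    energies = [0.0] * len(bands)
    for k in range(1, n // 2):  # DFT bins, skipping DC
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = (re * re + im * im) / n
        for i, (lo, hi) in enumerate(bands):
            if lo <= freq < hi:
                energies[i] += power
    return energies

# Hypothetical 3-band "electrode" map: low, medium, high frequencies (Hz)
bands = [(0, 500), (500, 2000), (2000, 8000)]
fs = 16000  # sampling rate (Hz)
n = 256
tone = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]  # 1 kHz tone

e = band_energies(tone, fs, bands)
# A 1 kHz tone lands in the medium band, so the "medium-frequency
# electrode" (index 1) receives essentially all of the energy.
print(e.index(max(e)))  # → 1
```

In a real implant, each band's energy drives current on the electrode mapped to that region of the cochlea; getting that mapping right for an individual patient is exactly what the programming process described here tries to achieve.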

In order for a patient with a cochlear implant to understand what they’re hearing, the device has to be properly programmed to stimulate the right nerve fibers for each individual. With an adult who was at one point able to hear, programming the device properly is highly dependent upon the patient’s feedback: you ask them how they’re hearing and adjust the programming accordingly.

This is much more complicated with a child who has been deaf since birth: they don’t know what sounds are or what words are supposed to sound like, and they can’t tell you whether the implant is working. Programming is therefore an extremely iterative process: we do the best we can based on what we know and on feedback from the family, the speech therapist, and others involved in the child’s care. If the child is beginning to learn to talk and understand what we’re saying, then we think the implant is probably programmed pretty well. If the child is not learning well, then we might need to adjust the implant’s programming.

This process takes a lot of time. Most concerning, it sometimes takes months to arrive at a good program map. By that time the child may have already missed critical learning periods and may not do well developmentally even with a properly programmed implant.

It would be far better if we could determine if the implant is optimally programmed at the time it is first turned on. That’s why we’re interested in developing this new imaging approach. It will allow us to tackle one obstacle to better care.

What led you to look at Near-Infrared Spectroscopy (NIRS) as a possible solution to this problem?

We want to know if the cochlear implant is stimulating the right nerve fibers to accurately transmit speech information to the brain. Ideally, we would use functional MRI, which would tell us what we need to know with extremely high resolution. Unfortunately, we can’t use MRI because the cochlear implant contains a magnet; it can’t go into an MRI scanner. Even if it could, using fMRI in children requires sedation, because they won’t sit still for the scan.

While we have ways of testing if the implant is stimulating the auditory nerve, these methods don’t tell us if that stimulation enables the child to understand words. We really want to look at the brain. That’s where Near-Infrared Spectroscopy comes in. It allows us to image functional activity in the auditory cortex while the child is awake and listening to sentences being read to them.

This is a big advantage. It means that NIRS can be done in a very short period of time during a clinic visit, and it can be done repetitively. This makes it a good candidate as a diagnostic tool that the audiologist could use in the clinic to improve care.

How does Near-Infrared Spectroscopy work?

NIRS measures changes in blood-oxygen levels in tissue, similar to fMRI, but NIRS works by beaming light into the head and measuring how much comes back out. When a part of the brain is used, more oxygenated blood flows to that region and light is absorbed differently. By mapping those differences, NIRS can detect which part of the brain is activated.
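For readers curious about the underlying arithmetic: NIRS instruments typically convert the measured attenuation changes at two wavelengths into oxy- and deoxy-hemoglobin concentration changes via the modified Beer-Lambert law. The sketch below uses made-up extinction coefficients and path length purely for illustration; real values depend on the instrument's wavelengths and calibration.

```python
# Modified Beer-Lambert law (sketch):
#   dA(wavelength) = (eps_HbO2 * d_HbO2 + eps_HbR * d_HbR) * pathlen
# With attenuation changes measured at two wavelengths, this is a
# 2x2 linear system in the two concentration changes.

def hemoglobin_changes(dA1, dA2, eps1, eps2, pathlen):
    """Recover (d_HbO2, d_HbR) from attenuation changes at two
    wavelengths. eps_i = (eps_HbO2, eps_HbR) at wavelength i."""
    a, b = eps1[0] * pathlen, eps1[1] * pathlen
    c, d = eps2[0] * pathlen, eps2[1] * pathlen
    det = a * d - b * c  # nonzero when the wavelengths are well chosen
    d_hbo2 = (dA1 * d - dA2 * b) / det
    d_hbr = (dA2 * a - dA1 * c) / det
    return d_hbo2, d_hbr

# Illustrative placeholder constants (NOT calibrated instrument values):
eps_760 = (1.4, 3.8)   # extinction coeffs (HbO2, HbR) at ~760 nm
eps_850 = (2.5, 1.8)   # extinction coeffs (HbO2, HbR) at ~850 nm
pathlen = 5.4          # effective optical path length (cm)

# Forward-simulate a typical "activation" signature: HbO2 up, HbR down.
true_hbo2, true_hbr = 0.02, -0.01
dA_760 = (eps_760[0] * true_hbo2 + eps_760[1] * true_hbr) * pathlen
dA_850 = (eps_850[0] * true_hbo2 + eps_850[1] * true_hbr) * pathlen

d_hbo2, d_hbr = hemoglobin_changes(dA_760, dA_850, eps_760, eps_850, pathlen)
print(round(d_hbo2, 6), round(d_hbr, 6))  # → 0.02 -0.01
```

Mapping this rise in oxygenated (and fall in deoxygenated) hemoglobin across many source-detector channels is what lets NIRS localize which part of the cortex is active.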

Is this the first time that NIRS has been applied to patients with cochlear implants?

Yes. One of my collaborators, Heather Bortfeld, a psychologist who studies language development in normal children, has been using NIRS with children for many years. I heard about her work at a meeting and thought it might help improve the care of children with cochlear implants. She would drive down with her machine to test patients in our clinic. Over time, we were able to demonstrate the feasibility of the technique, so we applied for the Dana Foundation grant.

With the Dana funds we were able to purchase our own machine and hire a post-doc to run the experiments every day, enabling us to collect a lot more data from our patients. We published the results in 2010, [i] and the data formed the basis for a successful grant application to the National Institutes of Health [ii] to expand the study using a bigger machine with better resolution.

The seed money provided by the Dana Foundation was critically important to the success of this research. It was generous enough to allow us to take the necessary steps to make significant progress and move ahead to the next funding level.

Where are you now in this research?

Right now we are conducting a follow-up study using the higher-resolution NIRS machine. It has nearly 300 channels versus four in the previous one. That means we can see which brain areas are being activated with much greater precision. It allows us to see not only whether the auditory cortex as a whole is being activated, but which specific part of it is being activated.

This is likely to be important. We want to know if the patient with a cochlear implant can understand what words really are, as opposed to just hearing scrambled, garbled speech. I think we’re going to be able to see the difference on the higher-resolution machine.

You have begun a clinical trial that uses NIRS to track auditory stimulation after cochlear implant in developmentally delayed children. Why this population?

It’s very common to see newborns, especially those born prematurely, with multiple congenital issues in addition to deafness. These are kids who have many other substantial handicaps and may eventually end up with a diagnosis of mental retardation.

Since a “good” outcome from a cochlear implant is defined as learning to talk, many treatment centers won’t use cochlear implants on children who are so severely impaired that it’s unlikely they will ever learn to talk, even if they could hear normally. At the other extreme are facilities that will implant essentially any child who is deaf without regard for whether the implant is going to help the child.

When do you know if it’s really appropriate to implant a child with multiple handicaps or not? What are the risks and benefits in this complex patient population? We’ve struggled with this and so have several other pediatric academic hospitals around the country. We need objective data to better inform evidence-based medicine for these patients and to better counsel parents. This study is designed to provide answers to these difficult questions.

How will this research impact clinical practice?

My hope for this research is that we will end up with a tool that would be a routine part of every audiologist’s or pediatric implant center’s armamentarium. They would use it on every child with a cochlear implant, maybe even at every programming visit, to make sure that when a child is listening to spoken language, the part of the brain that processes those sounds is appropriately stimulated. Right now there is no way to assess that.

The clinical significance is that children who are deaf will learn to talk earlier and will be able to attend regular schools at a higher rate. They’ll be able to lead more normal lives.


[i] Sevy ABG, Bortfeld H, Huppert TJ, Beauchamp MS, Tonini RE, Oghalai JS. Neuroimaging with Near-Infrared Spectroscopy demonstrates speech-evoked activity in the auditory cortex of deaf children following cochlear implantation. Hearing Research 2010 December 1;270(1-2):39-47.

[ii] Translation of near-infrared spectroscopy for use in clinical neuro-imaging of deaf children after cochlear implantation. NIH Grant #R56 DC010164-01A1 (8/1/10-7/31/12). John S. Oghalai, Principal Investigator