The cochlear implant
is a near-miraculous device, widely considered the most effective brain-machine
interface technology yet developed.
It works in the inner
ear, where specialized “hair cells” of the cochlea normally transform sound
waves into nerve impulses. In people whose hair cells are severely damaged or
congenitally absent, the implant feeds tiny currents into an array of
electrodes to stimulate auditory nerve fibers in their place. With the device,
profoundly deaf people can hear well enough to understand speech—and children
who had never heard at all can learn to speak.
It’s far from perfect, however. “Results with
the cochlear implant are astounding,” says Allen
Ryan, professor and director of research in otolaryngology in the
department of surgery at University of California–San Diego. “But it doesn’t
make hearing normal. Children never catch up to their peers in terms of
language perception. They’re not able to follow melodies.”
Researchers are now working to make this very good implant even better.
Among the implant’s key
limitations, Ryan says, is its limited ability to register differences in pitch and in
volume. In the normal cochlea, sounds stimulate hair cells in a highly organized
manner, the highest frequencies at the base and progressively lower frequencies
as the cochlea winds toward its apex. Thousands of hair cells are distributed
along the length of the cochlea; each responds to a narrow frequency band, and
stimulates a single auditory nerve fiber in close proximity to it.
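This tonotopic frequency-to-place mapping is often approximated by the Greenwood function. A minimal sketch, using the commonly cited human-cochlea constants (the formula and values are standard in the auditory literature, not from this article):

```python
def greenwood_frequency(x):
    """Approximate characteristic frequency (Hz) at relative
    cochlear position x (0.0 = apex, 1.0 = base), using the
    Greenwood function with commonly cited human parameters."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Lowest frequencies at the apex, highest at the base:
print(round(greenwood_frequency(0.0)))   # apex: ~20 Hz
print(round(greenwood_frequency(1.0)))   # base: ~20,700 Hz
```

The exponential form captures why equal distances along the cochlea correspond to roughly equal frequency ratios rather than equal frequency differences.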
The implant has a
processor that divides sound into frequency channels and distributes them among
at most 22 electrodes. But the loss of precision only begins there. When hair
cells die or fail to develop, nearby auditory nerve fibers atrophy, creating a
wide “neural gap” between the electrodes and the nerves they are to stimulate.
Across this gap,
“the current spread is very broad,” says Ryan. Impulses from each
electrode “stimulate large areas of the cochlea at one time,” effectively
merging frequencies into five or six channels.
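The processor’s channel split can be pictured as a set of logarithmically spaced frequency bands, one per electrode. A toy sketch, with the band edges and defaults chosen purely for illustration (real devices use proprietary frequency-allocation tables):

```python
def channel_edges(f_low=200.0, f_high=8000.0, n_channels=22):
    """Split [f_low, f_high] Hz into n_channels logarithmically
    spaced bands, one per electrode (illustrative values only)."""
    ratio = (f_high / f_low) ** (1.0 / n_channels)
    return [f_low * ratio ** i for i in range(n_channels + 1)]

edges = channel_edges()
# Each adjacent pair of edges bounds one electrode's frequency band.
print(len(edges) - 1)                      # 22 bands
print(round(edges[0]), round(edges[-1]))   # 200 8000
```

When broad current spread makes adjacent electrodes stimulate overlapping nerve populations, many of these nominal bands blur together, which is how 22 channels can collapse to an effective five or six.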
In recent years,
researchers have tried to encourage auditory nerve fibers, or neurites, to
regrow toward the cochlea, reasoning that closing the neural gap might sharpen
users’ pitch definition and increase dynamic range. Much of this work has used
diverse ways to deliver neurotrophins—nerve growth factors—to the area, with
some success, he says.
One promising approach, reported
in Science Translational Medicine earlier this year, enlists
the implant itself to place genes expressing brain-derived neurotrophic factor
(BDNF) directly in cells of the cochlea.
The researchers, led by Gary
Housley, director of the translational neuroscience facility at University
of New South Wales, Australia, used a process called electroporation: When exposed to electric current, pores in the
cell membrane open, allowing large molecules to enter.
Conventionally, electroporation requires substantial currents and affects a wide swath of
cells. The Housley team found "by chance and discovery" that far smaller
currents delivered by the implant’s electrodes would open up relatively few cells.
Their method, which they termed “close-field
electroporation,” seemed a promising way to target gene therapy to the cochlear
implant, Housley says: “it provided a reliable, controlled way to have DNA
taken up by cells in close proximity to the electrode array." By injecting BDNF-expressing genes into the
cochlea, he hoped to create a population of cells that would stimulate auditory
nerve fibers to regenerate and grow toward the electrodes, narrowing the gap.
In fact, when the procedure was done with
guinea pigs fitted with cochlear implants, within days atrophied auditory nerve fibers
"swelled up and extended peripheral processes to where cells were
producing neurotrophins, where the electrodes were," Housley says.
Functional assessment tests indicated “a very
exciting" change in hearing, he says. In treated animals, it took much
smaller currents through the implant to stimulate brain areas that respond to
sound. As current increased, so did brain activation, suggesting that dynamic
range—the ability to register differences in loudness—was much closer to normal
than the narrow band typical with implants.
Whether the treatment improved pitch sensitivity as well was not determined in this study. “It’s something we hope to measure in
the near future,” he says. In theory, at least, more precise delivery of reduced
current could "lend itself to a denser array of smaller electrodes, which
would improve pitch perception further."
Beyond the cochlear
implant, Housley’s electroporation technique could have broader bionic applications, a commentary in Science Translational Medicine suggests:
Artificial retinas and deep brain stimulation also use electrodes, which might be
similarly recruited to deliver therapeutic genes with precision.
"This is a really
exciting paper," says Allen Ryan. For one thing, "the method of gene
delivery into the ear is so compatible with the device itself, a simple
procedure" that could be included while doing the surgery to place the
A number of issues
remain to be resolved: Would the treatment bring nerve fibers close enough to
electrodes to improve frequency specificity? Could neurotrophin expression be
extended indefinitely to maintain regenerated neurites? (The DNA effects in the
experiment apparently ended after several months.)
That said, Ryan
foresees clinical trials of strategies using neurotrophins within the next few years.
Ryan, who has
collaborated with Housley on earlier research, is currently exploring other
ways to bring nerve fibers and electrodes together, using biodegradable gels
that incorporate microchannels. "Once neurites start to grow, they tend to
grow randomly... we're providing a focused source to guide them toward the electrodes."
The possibility of
combining his approach with Housley's is "something we've talked
about," Ryan says; gene therapy could initiate neurite regrowth, which the
gel might then direct for optimal effect.
Meanwhile, Ryan says,
"there's lots of research going on with other strategies… People are
talking about focusing electrical signals to reduce the area of stimulation.
And processing is an area that's been very successful." Fine-tuning the
way the sound signal is processed, in particular, could make the existing
device more effective.
The distribution of
electrical impulses among frequency channels is a complex business, says John Oghalai, associate
professor of otolaryngology and director of the Children’s Hearing Center at
Stanford University. "The question is how much power each electrode should deliver."
“Because the interface
between electrode and nerve varies within and among patients, you have to
program the device individually to send the proper amount of current, and
sometimes change the frequency bands as well."
The usual programming
method, which relies on patient feedback, is "very nonscientific at this
point," he says. It "works surprisingly well with adults" who
have lost their hearing but can remember well enough to say when it sounds
right; "the real problem is with children who were born deaf and who don't
know what sound is."
Oghalai’s current project, which was funded in
its early stages by the Dana Foundation, aims to program the implant by monitoring
activation patterns in the auditory cortex itself, using a non-invasive imaging
technique called near infrared spectroscopy (NIRS). [See briefing paper “Visualizing
Hearing: Brain Imaging May Improve Outcomes in Deaf Children with Cochlear
Implants.”]
Research at Oghalai’s lab, published this year in Hearing
Research, showed that for normally hearing individuals, auditory cortex
activation was significantly stronger in response to natural speech than to distorted speech.
In subsequent, as yet
unpublished studies, his team found a similar pattern in people with cochlear
implants who could understand speech well. In those whose implants were working
poorly, however, the brain activated equally regardless of whether speech was
natural or distorted.
This kind of
information could, he hopes, point toward an objective way to program the
implant, making adjustments at the time of insertion "according to the
data NIRS is giving us." Ultimately, Oghalai speculates, the process might
use machine learning protocols, so the device would fine-tune itself automatically,
using feedback from the brain. "It could be like programming a voice
recognition dictation device," he says.