In the past decade, scientists and engineers have made great strides in the design and function of brain-machine interfaces (BMIs). The much-lauded BrainGate trial, for example, has demonstrated that stroke survivors who have been unable to move their arms or legs for more than a decade can, with training, develop the ability to move a robotic arm using only their minds. What is happening in the brain while someone learns to use such a BMI, and how does it differ from normal motor learning? New research from the University of Washington suggests that the two types of learning may not be that different.
A cognitive neuroscience approach
To date, clinicians, engineers, and neuroscientists involved with BMIs have focused on the complex computational algorithms that translate brain signals—whether from an electroencephalogram (EEG) cap or a more invasive electrode array implanted beneath the skull—into output that can control a computer cursor or robotic limb. Jeremiah Wander, a doctoral candidate in the University of Washington’s bioengineering program, argues that the cognitive processes surrounding the use of a BMI have been somewhat overlooked.
His lab is taking a different approach. “We are using a cognitive neuroscience lens to try to understand representations in the brain when someone is learning how to use a BMI,” he says.
To better understand how the brain can learn to control a BMI, Wander and colleagues recruited seven people with severe epilepsy who had had a grid of electrodes implanted across their brains to help identify the source of their seizures. The team connected one of those electrodes to a simple BMI linked to a computer cursor, and asked each participant to try to move the cursor in a very simple Pong-like game. Over time, participants learned to move the cursor using only their own brain activity.
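The paper does not publish its decoding code, but the setup described—one electrode driving a cursor along a single axis—can be sketched roughly as a band-power decoder. The function names, the 70–100 Hz band, and the baseline-subtraction scheme below are illustrative assumptions, not the study’s actual pipeline:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean power of `signal` (1-D array, sampled at `fs` Hz)
    within the [low, high] frequency band, via a simple FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def cursor_velocity(window, fs, baseline, gain=1.0):
    """Map high-frequency power above or below a resting baseline
    to an up/down cursor velocity for a 1-D Pong-like task."""
    power = band_power(window, fs, 70, 100)
    return gain * (power - baseline)
```

In a sketch like this, the participant learns to modulate activity at the control electrode: power above the resting baseline pushes the cursor one way, power below it pushes the other.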
As participants practiced, the group recorded brain activity from the various other electrodes in their brains, covering areas such as prefrontal cortex, motor cortex, and somatosensory cortex. They found that learning to control the BMI recruited the same distributed neural network that is activated in everyday motor learning tasks, like learning to swing a golf club or ride a bike. What’s more, the group saw changes in activation patterns once the study participants had mastered the task—with decreased activity over time in areas of the brain involved with attention and learning. The results were published in the June 25 issue of the Proceedings of the National Academy of Sciences.
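The study’s analysis is far more involved, but the basic comparison behind that finding—activation at a non-control site dropping off as the task is mastered—can be sketched simply. The function name and the first-quarter/last-quarter split of trials are assumptions for illustration:

```python
import numpy as np

def learning_effect(trial_power):
    """Compare mean activation between early and late practice.

    `trial_power` is a 1-D array of per-trial band power at one
    recording site (e.g., a prefrontal electrode not driving the
    cursor). Returns (early_mean, late_mean); a drop from early
    to late is consistent with the task becoming automatic."""
    n = len(trial_power)
    q = max(1, n // 4)  # first and last quarter of trials
    return trial_power[:q].mean(), trial_power[-q:].mean()
```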
Eberhard Fetz, one of the co-authors on the paper, says the results aren’t so surprising. “You can anticipate that, during learning, there are many brain areas that are involved in paying attention, improving performance, processing information, and optimizing learning that don’t need to be activated once you learn something,” he says.
But Wander says he was somewhat surprised to see such a similar activation pattern with limited sensory feedback. “While it makes sense that the same kind of learning network would be involved, intrinsically, the use of a BMI is very different. You don’t have the same kinds of proprioceptive feedback you see in normal motor learning; you don’t have sensory feedback,” he says. “In this case, the participants only received visual feedback. The fact that the same class of network was still recruited is certainly intriguing.”
Building a better BMI
Nicholas Hatsopoulos, a professor of computational neuroscience at the University of Chicago, says this study is quite novel—and not only because it looks at human brain activity instead of that in animal models.
“They recorded from multiple brain areas, even those that were not directly controlling the BMI,” he says. “And the fact that they found that the areas that were active in the early exposure to the BMI became less active with learning mirrors what we’ve seen in motor skill acquisition in other tasks. You go from a very deliberate, conscious, cognitive awareness of doing something to a more automatic behavior that doesn’t require the same kind of cognition.”
Fetz argues that this result speaks more to how the brain learns different tasks than it offers specific advice for designing better BMIs. Still, he acknowledges that it could help researchers come up with better training paradigms. Wander agrees.
“This could help us be more intelligent about the way we train up humans to use these BMIs,” he says, “such that we can leverage the changes we’re seeing in the brain to indicate that the patient has acquired a certain skill and has moved on to more of an automatic execution of it. Then we could increase the difficulty or complexity.”
Hatsopoulos argues that building better BMIs will have to involve harnessing the brain’s natural plasticity—and by doing so, scientists may be able to help BMI users learn to use the devices better and faster. “Let’s face it, we’re never going to, at least in the foreseeable future, come up with a BMI that you can control as well as you can control an actual arm as soon as you turn it on,” he says. “But there’s this ability to learn. And that ability to learn will help us use the plasticity in the brain to get you from that initial point to a point where you are proficient. That’s what you need so you can design a BMI that is more usable—something that people will actually want to use.”
Wander agrees. “By taking this cognitive neuroscience approach, we can maybe build a system that’s a little more appropriate for the brain. Current devices aren’t in mainstream clinical use—there’s still a lot of room for improvement. But if we could build the right system that taps into these brain networks, we may be able to get some performance gains so these devices can really start helping people.”