When astronaut Frank Poole, deputy commander of the spaceship Discovery, miraculously wakes after his 1,000-year slumber in deep space, the first thing he notices is that the medical personnel who kindly tend to him rarely speak to one another. In this scene from Arthur C. Clarke’s science-fiction novel 3001: The Final Odyssey, Poole’s attendants utter words—more like whispers—only when they have to communicate with their stunned, severely obsolete patient. Their inaudible speech may be absolutely natural for these men and women of the 31st century. However, for someone whose last enduring memories include an upsetting (and audible) argument with a temperamental supercomputer named HAL while orbiting Jupiter in 2001, this form of communication certainly feels spooky.
In time, with a mix of trepidation and awe, Poole learns that by 3001, thanks to the invention of the so-called Braincap, virtually every Earth inhabitant had acquired the ability to communicate directly with computers—and even with other humans—simply by thinking! Mankind had mastered an amazing range of additional tricks, including uploading gargantuan amounts of information and knowledge directly into brain circuits or, conversely, downloading an entire life’s history into some sort of perpetual storage medium.
When 3001: The Final Odyssey was first published in 1997, Clarke surprised many of his readers by dedicating a significant chunk of his plot to Braincaps and their impact on daily human behavior a thousand years from now. Judging by some of the reviews that followed the book’s publication, many readers and critics thought that Clarke had gone too far, even considering his impressive record as a futurist.
Yet Clarke’s vindication came much sooner than his critics would have expected. After all, just a couple of years after the publication of his book, American and European laboratories started reporting on pioneering experiments that employed real-time links connecting living brain tissue with artificial devices. Such brain-machine interfaces (BMIs), as this new paradigm was named, allowed either animals or severely disabled patients to use the brain’s electrical activity to control the movements of artificial devices in order to execute simple tasks.
For instance, in 1999, John Chapin’s laboratory at Hahnemann University in Philadelphia and my own laboratory at Duke University collaborated in the first experimental demonstrations of a brain-machine interface in animals. In these experiments, rats learned to use the combined electrical activity of a handful of cortical neurons to move a robotic arm in order to obtain a water reward. Around the same time, Niels Birbaumer at the University of Tübingen in Germany reported how completely paralyzed patients learned to use brain-derived signals (recorded through a classic method known as electroencephalography, or EEG) to write messages on a computer screen. Even in its initial version, this brain-computer interface was the only way for these locked-in patients to communicate with the external world. It was an early indication of BMIs’ significant potential as new rehabilitation tools.
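The computational heart of these early BMIs was a decoder that translated the pooled firing of many recorded neurons into a command for an external actuator. The sketch below is not the published algorithm; it is a toy illustration, assuming a simple least-squares linear decoder fit from binned firing rates to a one-dimensional movement command, which captures the spirit of the approach.

```python
import numpy as np

def fit_linear_decoder(rates, targets):
    """Fit weights mapping binned firing rates (trials x neurons)
    to an actuator command, via ordinary least squares."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])  # add a bias column
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return w

def decode(rates, w):
    """Predict actuator commands from new firing-rate observations."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ w

# Synthetic data: 200 trials, 10 "neurons" whose rates linearly encode
# a desired arm displacement, plus a little noise.
rng = np.random.default_rng(0)
true_w = rng.normal(size=10)
rates = rng.poisson(5.0, size=(200, 10)).astype(float)
targets = rates @ true_w + rng.normal(scale=0.1, size=200)

w = fit_linear_decoder(rates, targets)
pred = decode(rates, w)
print(np.corrcoef(pred, targets)[0, 1])  # close to 1 on this synthetic data
```

In real experiments the mapping is refit continuously and the animal's brain also adapts to the decoder, but the basic idea of a population-to-command transform is the same.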
In subsequent years, further animal experiments with BMIs indicated that monkeys could learn to employ the combined electrical activity of hundreds of their cortical neurons to move multiple degree-of-freedom robotic arms, entire humanoid robots, and even avatar limbs and bodies without the need for any overt movement of their own bodies. Soon, initial clinical studies also reported that patients could rely on BMIs to control the movements of computer cursors and robotic arms.
As the BMI field rose to the forefront of modern neuroscience, the possibility of establishing a bidirectional dialogue between brains and artificial devices was also realized. In 2011, through a technique called cortical electrical microstimulation, my laboratory was able to deliver simple “tactile messages” directly into the brains of monkeys. Every time one of our monkeys used its brain activity to move a virtual hand to scan the surface of a virtual sphere, a simple electrical wave, proportional to the virtual texture of the touched object, was immediately delivered to the animal’s primary somatosensory cortex, an area known to be fundamental for the definition of one’s tactile perceptions. After a few weeks of training, by taking advantage of this direct and continuous inflow of tactile information into their brains, a pair of monkeys became capable of discriminating the fine texture of the virtual objects by using their virtual hands, as if they were using their own biological fingertips. We called this new paradigm a brain-machine-brain interface (BMBI, Figure 1).
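The feedback half of this loop can be caricatured as a mapping from virtual texture to a microstimulation pulse train. The sketch below is purely illustrative—the coarseness scale, frequency range, and linear mapping are my assumptions, not the published parameters—but it conveys the principle that a coarser virtual texture produces a denser pulse train delivered to somatosensory cortex.

```python
def texture_to_pulse_train(coarseness, f_min=10.0, f_max=100.0, duration=0.2):
    """Map a normalized virtual-texture coarseness (0..1) to the
    timestamps (in seconds) of a microstimulation pulse train whose
    frequency grows linearly with coarseness."""
    if not 0.0 <= coarseness <= 1.0:
        raise ValueError("coarseness must be in [0, 1]")
    freq = f_min + coarseness * (f_max - f_min)   # pulses per second
    period = 1.0 / freq
    n_pulses = int(duration * freq)
    return [i * period for i in range(n_pulses)]

smooth = texture_to_pulse_train(0.1)   # sparse train for a smooth surface
rough = texture_to_pulse_train(0.9)    # dense train for a rough surface
print(len(smooth), len(rough))
```

The monkeys' task then reduces to discriminating sparse from dense trains arriving at their cortex—a discrimination they learned to make as reliably as with their own fingertips.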
A Major Breakthrough
In 2009, as a direct result of this auspicious first decade of BMI research, the Duke University Center for Neuroengineering and the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN, in Brazil) jointly created a nonprofit research consortium called the Walk Again Project. By the end of 2012, the Walk Again Project received a grant from the Brazilian government to assemble a large international research team of roboticists, neuroscientists, engineers, and computer scientists. This international team joined with a Brazilian multidisciplinary rehabilitation team, composed of physicians, psychologists, and physical therapists, to take on a very ambitious project: designing and implementing the first bipedal robotic exoskeleton whose movements could be controlled directly by human-brain activity. The central goal of the first phase of the project was to allow paraplegic patients suffering from severe spinal-cord lesions to use their EEG activity to control the exoskeleton’s leg movements (Figure 2) and, in so doing, regain lower-limb mobility. In addition to restoring basic locomotion behaviors, the exoskeleton would be the first in its class to provide continuous sensory feedback to the user in the form of artificial tactile and proprioceptive signals.
In December 2013, a group of eight patients suffering from complete and incomplete spinal-cord lesions started the training process required for achieving proficiency in controlling a brain-controlled robotic exoskeleton. Four months later, all eight were capable of commanding the exoskeleton with their brain activity alone, and all had regained the sensation of walking in a laboratory setting. The feeling of walking again was even more realistic in these patients because of the addition of two innovative technologies in the design of our exoskeleton. The first was a new type of artificial tactile sensor known as artificial skin, developed by Gordon Cheng at the Technical University of Munich. These sensors were distributed across key locations of the exo’s legs and feet to detect the device’s movements and contact with the ground. The second was an ingenious haptic display, created by Hannes Bleuler’s laboratory at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, that allowed the tactile feedback signals generated by the arrays of the artificial sensors to be delivered to the skin of a patient’s forearms. For the haptic display to work properly, patients had to wear a special shirt containing a linear array of small vibromechanical elements in the distal half of each sleeve while walking with the brain-controlled exoskeleton.
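The logic of such a haptic display amounts to remapping readings from the artificial skin onto the drive levels of the sleeve's vibrators. The sketch below is a hypothetical illustration—the sensor count, the number of vibromechanical elements, and the peak-pressure pooling rule are all my assumptions, not the EPFL design.

```python
def sensors_to_vibrators(pressures, n_vibrators=3, max_drive=255):
    """Pool an array of artificial-skin pressure readings (each 0..1)
    into drive levels (0..max_drive) for a smaller linear array of
    vibromechanical elements worn on the forearm."""
    n = len(pressures)
    group = max(1, n // n_vibrators)
    drives = []
    for v in range(n_vibrators):
        chunk = pressures[v * group:(v + 1) * group] or [0.0]
        drives.append(round(max(chunk) * max_drive))  # peak pressure per zone
    return drives

# Heel strike: pressure concentrated on the first sensors of the foot sole.
print(sensors_to_vibrators([0.9, 0.8, 0.1, 0.0, 0.0, 0.0]))
```

The patient feels the vibration pattern sweep along the forearm as the exoskeleton's foot rolls from heel to toe, which is what lends the sensation of walking its realism.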
The World Cup Demonstration
To celebrate a major first step toward the development of a new generation of neuroprosthetic devices, one of our patients, Juliano Pinto, who is paralyzed from the mid-chest down, was invited to help our team demonstrate our exoskeleton’s enormous potential before the opening match of the 2014 FIFA World Cup in Brazil on June 12. For the first time in history, a human subject showed that a brain-controlled exoskeleton could be used to initiate the kicking of a soccer ball. The demonstration was witnessed by 70,000 fans at the Itaquerão stadium and an estimated one billion people watching on TV. Seconds after executing this historic kick, Juliano reported to us that he clearly felt his leg moving in the air at the moment the exo’s foot made contact with the ball. According to Juliano’s perception, it was his own body, not the exo, that executed the kick. This was a stunning development.
The effort and complexity required to pull off that World Cup demonstration well exemplifies the current state of the art of BMIs. Over the last 15 years, since our initial study with rats launched the field, progress has been steady. And although the case has been made that BMIs offer concrete hope for the future development of a variety of new neurorehabilitation tools, we are still a few years away from being able to produce neuroprosthetic devices that patients can routinely use outside well-controlled laboratory conditions. Certainly, at this point, we are very far from the Braincaps imagined by science-fiction writers like Clarke. Indeed, we may never get there at all.
Despite the uncertainty, recent experiments combining BMIs with cortical electrical microstimulation effectively open the doors for more daring adventures. Indeed, I could almost bet that Clarke himself would have enjoyed the opportunity to be present when Miguel Pais-Vieira, a Portuguese postdoctoral fellow in my lab, demonstrated the operation of the first brain-to-brain interface designed to link two animals’ brains directly (Figure 3). First proposed in my 2011 book Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives, our brain-to-brain interface (BTBI) paradigm, reported in 2013, allowed a pair of rats to transmit and receive rudimentary mental sensorimotor messages.
In one of the published experiments, the first rat of the pair, known as the encoder, was trained to use its facial whiskers to determine the diameter of a computer-controlled aperture placed inside a behavioral box. From trial to trial, the aperture could assume two distinct diameters, classified as narrow (X mm) or wide (Y mm). The encoder’s job was to use its facial whiskers to correctly judge the aperture’s diameter and then indicate its value by placing its snout in one of two holes located in a nearby chamber. If the encoder nose-poked in the hole corresponding to the correct aperture diameter, it received a water reward.
As the encoder used its facial hair to evaluate the opening’s diameter, electrical activity recorded from neurons located in its somatosensory cortex was combined and transmitted, via cortical electrical microstimulation, to the brain of a second rat, the decoder, located in a different behavioral box. The decoder had no access to an aperture, so its facial whiskers were useless in solving the task and getting water. Yet to receive such a reward, the decoder also had to indicate, by nose poking, the diameter of the aperture touched by the encoder. To do that, the decoder had to rely solely on the simple neural message being transmitted to its brain by electrical microstimulation.
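In broad strokes, the transmitted "message" reduced the encoder's cortical activity on each trial to a binary choice, which was then signaled to the decoder as one of two microstimulation patterns. The sketch below conveys that logic rather than the published implementation: the threshold rule and the specific pulse counts are my assumptions for illustration.

```python
def encode_message(spike_counts, threshold):
    """Collapse the encoder rat's population spike counts for one trial
    into a binary stimulus class ('narrow' vs. 'wide')."""
    return "wide" if sum(spike_counts) >= threshold else "narrow"

def stimulation_pattern(message, pulses_wide=20, pulses_narrow=1):
    """Translate the binary message into a microstimulation pulse count
    delivered to the decoder rat's somatosensory cortex."""
    return pulses_wide if message == "wide" else pulses_narrow

trial = [4, 7, 2, 9, 5]            # spikes from five recorded neurons
msg = encode_message(trial, threshold=20)
print(msg, stimulation_pattern(msg))
```

The decoder rat's entire task is thus to discriminate the two stimulation patterns and nose-poke accordingly—no richer content passes between the brains than this one bit per trial.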
After a bit of training, decoder rats became capable of using our brain-to-brain interface to perform this task well above chance level. This indicated that the brain of a decoder rat could make sense of the messages broadcast by its associated encoder rat. Interestingly, since the encoder received an extra reward allotment every time a decoder was able to correctly indicate the aperture’s diameter, encoder rats adapted their behavior and cortical activity to make it easier for their counterparts to complete the task, particularly after the latter committed a series of errors.
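"Above chance" here has a precise statistical meaning: with two response holes, a decoder guessing randomly succeeds on about half of its trials, so its hit count should fall within a binomial distribution around that rate. A one-sided binomial test quantifies how unlikely an observed success count would be under pure guessing. The numbers below are illustrative, not the study's actual trial counts.

```python
from math import comb

def binomial_p_value(successes, trials, chance=0.5):
    """One-sided p-value: probability of observing >= `successes` in
    `trials` attempts if each succeeds with probability `chance`."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical decoder session: 70 correct out of 100 trials.
p = binomial_p_value(70, 100)
print(p < 0.001)  # far better than coin-flipping could plausibly produce
```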
That further suggested to us that these rat dyads had established a new form of communication, despite the fact that neither animal was aware of its counterpart’s existence. As further proof of the effectiveness of this BTBI, we repeated these tactile-discrimination experiments using an encoder rat placed in a laboratory at the ELS-IINN in Natal, Brazil, while the decoder rat performed its trick in my lab at Duke University in the U.S. Despite the distance and the use of an average Internet connection, the brain-to-brain interface worked as well as it did when the two animals were in the same laboratory.
In a final test of our BTBI, decoder rats used neuronal signals provided by the motor cortex of encoder rats to choose which of two levers to press, without ever seeing the visual cues that instructed the encoders to make the same decision in the first place. In other words, the brain-to-brain connection between the encoder and decoder rats allowed the latter to correctly make a motor decision based on visual cues experienced only by its encoder partner.
During the past year, two other laboratories published studies involving brain-to-brain architectures. Moreover, a press release from a group at the University of Washington indicated that the group had established a functional link between human subjects’ brains by combining two noninvasive techniques: EEG to record brain activity in the first subject (encoder) and transcranial magnetic stimulation (TMS) to deliver an EEG-triggered signal to the second subject’s (decoder’s) brain. Since the group has not yet published a full scientific report, it is difficult to evaluate what was really achieved. In fact, the limited description (and video clip) provided in the press release did not fully support the claim that true functional communication between human brains occurred. This is because, essentially, the encoder’s EEG activity was simply used to trigger a magnetic stimulus in the decoder’s motor cortex.
As expected, every time this magnetic stimulation was delivered, the decoder subject produced an involuntary body movement. Yet the decoder was unable to actually participate in the decision to create the movement. As such, I do not see how two brains shared a true message in this paradigm. On the other hand, the potential methodology for doing so was unveiled, and that was certainly enough to cause a significant media and public response.
A New Type of Computational Architecture?
In my lab at Duke, we continue to experiment with animal BTBIs. We are currently investigating what kinds of social behaviors and global patterns of neuronal activity emerge when groups of animal brains are allowed to collaborate directly, through the employment of different types of brain-to-brain interfaces. I like to refer to these systems as Brainets. So far we have tested Brainets formed by either four rats or three monkeys. The central task of each of these animal Brainets is to optimize the combination of neuronal activity, sampled from multiple brains simultaneously, into a supranervous system that is responsible for attaining a common behavioral goal, such as identifying a complex tactile pattern or moving an elaborate virtual limb. The results of these studies, which are currently under review for publication, mostly focus on how BTBIs can enhance social interactions between animals and whether Brainets could operate as a new type of computational architecture, like some sort of non-Turing biological computer. In addition, these experimental paradigms allow one to study whether, in a still-distant future, artificial interfaces like these may be used to functionally reconnect brain areas where communication may have been disrupted by brain damage, such as that produced by strokes or other neurological disorders.
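One simple way a Brainet can outperform a single brain is by averaging each animal's decoded contribution into one shared command, so that errors uncorrelated across brains tend to cancel. The toy model below illustrates that statistical intuition only—the Gaussian noise model and plain averaging rule are my assumptions, not our laboratory's published method.

```python
import random

def brainet_command(target, n_brains, noise_sd, rng):
    """Each brain contributes a noisy estimate of the shared target
    command; the Brainet's output is their average."""
    estimates = [target + rng.gauss(0.0, noise_sd) for _ in range(n_brains)]
    return sum(estimates) / n_brains

rng = random.Random(42)
target = 1.0
solo = [abs(brainet_command(target, 1, 0.5, rng) - target) for _ in range(2000)]
triple = [abs(brainet_command(target, 3, 0.5, rng) - target) for _ in range(2000)]
# Pooling three "brains" tracks the shared target more accurately than one.
print(sum(solo) / 2000, sum(triple) / 2000)
```

Real Brainets are far richer than this—the brains interact and adapt to one another—but the error-cancellation effect is one reason pooled neural activity can attain a common behavioral goal more reliably than any single participant.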
Right now this latter proposition may sound farfetched. However, seeking such a path has become a hallmark of our laboratory during the past decade. During this period, we have successfully translated similarly abstract basic-science ideas into potential new therapies for untreatable epilepsy, Parkinson’s disease, and disabling paralysis. All of these therapies are currently undergoing clinical testing worldwide.
As exciting as these animal research projects are, none of them come close to competing with the fictional wonders of Clarke’s Braincaps. But that may not be so bad after all. For starters, nobody would ever consider it ethically or medically acceptable to implant nanotubes or other types of electrodes in healthy human subjects for the purpose of testing a BTBI, as suggested by Clarke. And even if, years or decades from now, better noninvasive technology enables us to record large-scale brain activity in real time at the millisecond scale, and an equally efficient, noninvasive method emerges to deliver brain-derived messages to another human brain, it is highly unlikely that such a BTBI would give rise to a fluid and efficient form of human communication, as long as we rely on digital computers to mediate the task.
Nor do I believe that there will ever be a day when Braincap-like technologies allow us to upload vast and complex information packets—like a new language or a large body of scientific knowledge, as Clarke describes in his book—into our brains, or to download all our memories or personal experiences into some sort of digital storage medium. Apart from tasks such as motor control, for which BMIs can become very useful, mimicking higher-order brain functions, such as knowledge acquisition, memory storage, performance of cognitive tasks, and even consciousness, may be beyond the reach of the binary logic on which all digital computers, no matter how simple or elaborate, operate. An interesting corollary of this view is that we need not worry about the forecast that, in the near future, a “really smart” digital computer or machine will supplant human nature or intelligence. In all likelihood, that day will never come because, in a more-than-convenient arrangement, our most intimate neural riddles seem to have been copyright-protected by the very evolutionary history that generated our brains, as well as by the complex emergent properties that make them tick. As such, neither evolution nor neurobiological complexity can be effectively simulated by digital computers and their limited logic.
In the end, this may not be so bad. Like Commander Poole, I would love to take advantage of a brand-new Braincap—minus the nanotubes—to learn a few new intellectual tricks in a hurry. Yet from the perspective of someone living in the early 21st century (not the 31st), it is very difficult to imagine that any of us, at this juncture in our history, would in good faith feel comfortable surrendering our final frontier of individual privacy, knowing that there is a chance, however insignificant, that an uninvited snoop may want to take a peek.
- Carmena, J.M., M.A. Lebedev, R.E. Crist, J.E. O’Doherty, D.M. Santucci, D.F. Dimitrov, P.G. Patil, C.S. Henriquez, and M.A. Nicolelis, Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol, 2003. 1(2): p. E42.
- Nicolelis, M., Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. 1st ed. 2011, New York: Times Books/Henry Holt and Co. 353 p.
- O’Doherty, J.E., M.A. Lebedev, P.J. Ifft, K.Z. Zhuang, S. Shokur, H. Bleuler, and M.A. Nicolelis, Active tactile exploration using a brain-machine-brain interface. Nature, 2011. 479(7372): p. 228-231.
- Pais-Vieira, M., M. Lebedev, C. Kunicki, J. Wang, and M.A. Nicolelis, A brain-to-brain interface for real-time sharing of sensorimotor information. Sci Rep, 2013. 3: p. 1319.