Minding the Brain Machines
Philip M. Boffey
July 1, 2019
Advances in linking brain signals to computers are developing so rapidly that analyses of their ethical implications are straining to keep up. Many bioethicists are warning that boundaries must be established soon to demarcate which research and clinical applications are ethically and morally permissible, and which are not.
There is little doubt that studies of the Brain-Computer Interface (BCI) comprise one of the hottest areas of brain research at the moment. Some of the most intriguing possibilities are laid out in an article in the June 2019 issue of Cerebrum, entitled “Mind Over Matter: Cognitive Neuroengineering,” by Karen Moxon and colleagues at the University of California, Davis. They predict that technology “will soon let us simply think about something we want our computers to do and watch it instantaneously happen.” The BCI approach, they say, “holds the promise of improving the quality of life for everyone on the planet in unimaginable ways.”
Brain activity generates a signal, typically an electrical field, that can be recorded through micro-electrodes, which feed it to a computer whose software, a decoding algorithm, translates the signal to a command that operates a computer or other machine. The task can be as simple as moving a cursor left or right on a computer screen, thus allowing paralyzed patients to tap out messages one letter at a time, or as complex as controlling a robotic arm in three dimensions.
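To make the pipeline concrete, here is a purely illustrative sketch, not any laboratory's actual system: a toy linear decoding algorithm that turns a recorded feature vector into a cursor command. The feature values and weights are hypothetical; real decoders are trained on each user's own recordings.

```python
# Illustrative sketch only: a toy decoding algorithm mapping a
# brain-signal feature vector to a simple machine command.
# The weights below are hypothetical stand-ins for trained parameters.

def decode_command(features, weights, threshold=0.0):
    """Translate recorded signal features into a cursor command."""
    # Weighted sum of the recorded features (a minimal linear decoder)
    score = sum(f * w for f, w in zip(features, weights))
    # Map the decoded score onto one of two commands
    return "MOVE_RIGHT" if score > threshold else "MOVE_LEFT"

# Hypothetical recording with two feature channels
print(decode_command([0.8, 0.1], weights=[1.0, -0.5]))  # prints MOVE_RIGHT
```

A real system repeats this decode step many times per second, which is how a patient can steer a cursor, or a robotic arm, in real time.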
Recent work from the University of Pittsburgh has shown that subjects with amyotrophic lateral sclerosis (ALS) “can control a complex robotic arm—having it pick up a pitcher and pour water into a glass—just by thinking about it.”
The technology is not yet ready for prime time, the UC Davis team says. For best results, it is necessary to implant recording micro-electrodes into the brain, and current versions are not reliable for more than a few years. Moreover, the computer systems that process the data are too large to lug around.
Non-invasive versions of BCI now under development can be placed on the scalp to read electroencephalography (EEG) signals that originate in the brain, but those signals are attenuated when they pass through the skull and the scalp, so they provide much less information.
Electrically stimulating the brain has the potential to improve the treatment of diseases caused by aberrant brain activity, such as Parkinson’s disease and epilepsy. The latest clinical devices can detect brain activity preceding an epileptic seizure and respond with electrical stimulation that prevents the seizure before it develops. Because they operate intermittently, they significantly extend battery life compared to continuous stimulation, the traditional approach.
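The battery-life point can be illustrated with a toy sketch of such a closed-loop ("responsive") device. This is an assumption-laden simplification: a simple amplitude threshold stands in for real seizure-onset detection, and the readings are invented.

```python
# Toy sketch of responsive stimulation: stimulate only when detected
# activity crosses a threshold, rather than stimulating continuously.
# The threshold criterion and readings here are hypothetical.

def responsive_stimulation(signal_amplitudes, onset_threshold):
    """Return a per-sample decision: stimulate only on detected
    pre-seizure activity, unlike continuous (always-on) stimulation."""
    decisions = []
    for amplitude in signal_amplitudes:
        # Hypothetical detection rule for activity preceding a seizure
        decisions.append(amplitude >= onset_threshold)
    return decisions

readings = [0.1, 0.2, 0.9, 1.1, 0.3]   # invented recorded activity
pulses = responsive_stimulation(readings, onset_threshold=0.8)
print(sum(pulses), "stimulations instead of", len(readings))
```

Because the device fires only on the flagged samples rather than on every one, it spends far less energy, which is the source of the extended battery life.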
BCIs might also treat cognitive disorders, such as the deficits in memory, learning, and decision-making that come with age.
Much of the work on BCI is funded by the Defense Advanced Research Projects Agency (DARPA), which raises especially troublesome ethical issues because the mantle of national security can be used to justify reprehensible activities. Witness the willingness of psychologists and other medical professionals to help the Central Intelligence Agency interrogate and torture prisoners over a multi-year period under the Bush administration, starting in 2002.
An article by Charles Munyon, a neurosurgeon at Temple University’s medical school, entitled “Neuroethics of Non-Invasive Brain-Computer Interfaces,” was published online by Frontiers in Neuroscience on October 23, 2018. Dr. Munyon, who is also a major in the U.S. Army Reserve Medical Corps, wrote that as the rate of technological development accelerates, “it is important that neuroethics not lag behind.” He proposed a framework that classifies technologies into one of three categories and then examines the risks to the subjects, with different risk-benefit ratios depending on the category.
His first category was research to restore functions lost by those wounded in battle and to ameliorate post-traumatic memory and mood disturbances. His second was research to augment the functioning of neurologically normal military personnel to increase their lethality, survivability, or efficiency in battle. The goal is to enhance their native faculties, suppress states that interfere with optimal performance, and facilitate communication with teammates.
Finally, there are “disruptive” technologies that can be used for interrogation or pacification. He calls it “clearly impermissible” to use non-consensual, non-invasive stimulation to induce pain without physical trauma, or to induce psychological distress.
A paper published in the Journal of Neural Engineering in January 2018 found a disturbing lack of attention to ethical issues in studies involving human subjects that appeared in neural engineering and biomedical engineering journals between 2000 and 2015. The articles focused on technological improvements while language protecting human subjects was “markedly absent,” being omitted from 31 percent of the studies in neural engineering journals and 59 percent of studies in biomedical engineering journals.
One project where ethical issues were addressed from the start was the ambitious BRAIN Initiative to understand brain functions and disorders, announced by then-President Barack Obama in 2013. A paper published in The Journal of Neuroscience on December 12, 2018 by scientists at the National Institutes of Health describes numerous steps taken to ensure that ethical issues are addressed every step of the way as the technologies advance.
But not everyone is so cautious. Technology entrepreneurs have made bold predictions about surging ahead with brain stimulation projects with scant consideration of the social and ethical consequences. Among the perplexing issues that need to be addressed: Should people be able to keep their neural signals private, as a basic human right, to prevent malefactors, for example, from learning where they bank and what their PINs are? Will tampering with the brain to help heal depression or other ailments alter a patient’s personality and sense of self?
International treaties, laws, and strong codes of conduct to protect individuals are clearly needed, a Herculean challenge that must be met. As the Moxon article concludes, “if we succeed in building an appropriate regulatory framework to keep up with fast-moving technological changes, we will have the opportunity to improve human cognitive abilities and create a better society—perhaps a society of cyborg citizens.”
Phil Boffey is former deputy editor of the New York Times Editorial Board and editorial page writer, primarily focusing on the impacts of science and health on society. He was also editor of Science Times and a member of two teams that won Pulitzer Prizes.
The views and opinions expressed are those of the author and do not imply endorsement by The Dana Foundation.