Ten years ago, a team of neuroscientists at the University of California, Berkeley, began a groundbreaking series of experiments that used functional magnetic resonance imaging (fMRI) to read activity in the visual cortex and reconstruct images from movie clips people watched while lying in the brain scanner. Since then, others have used the same technology for equally impressive feats. In late 2012, Japanese researchers reported using fMRI to accurately decode the visual content of dreams, and at least two groups are now using brain scanning to help apparently unconscious brain-damaged patients communicate.
fMRI and other brain scanning technologies have the potential to be hugely beneficial for patients with a wide variety of neurological conditions. Researchers are using brain scanning and decoding technologies to develop speech synthesizers to help paralyzed patients communicate, and to decode both the amount and type of pain someone is experiencing. Alongside this potential to aid diagnosis and treatment of neurological diseases, these technologies also hold the promise to advance our knowledge of the brain.
In 2016, the Berkeley team reported a significant advance in the ability to decode the brain activity associated with speech. They used fMRI to scan the brains of seven participants while they listened to ten 10- to 15-minute stories taken from The Moth Radio Hour. They used the scans to identify brain activity related to word meanings by comparing the activity patterns to those observed for a set of nearly 1,000 common English words. The similarities enabled them to create semantic “maps” showing where word meanings are encoded in the brain, revealing that related concepts are represented in the same discrete brain areas.
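The core idea behind such semantic maps can be illustrated with a toy encoding model: represent each word as a vector of semantic features, fit a regression from those features to voxel responses, and then decode a new brain pattern by asking which word's predicted pattern it most resembles. The sketch below uses entirely synthetic data and a simple ridge regression as a stand-in for the study's actual modeling pipeline; all dimensions and names are illustrative assumptions, not the researchers' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 50 "words" described by 10 semantic features,
# recorded at 200 voxels. All data here are synthetic.
n_words, n_features, n_voxels = 50, 10, 200

word_features = rng.normal(size=(n_words, n_features))   # e.g. co-occurrence features
true_weights = rng.normal(size=(n_features, n_voxels))   # unknown voxel tuning
voxel_activity = word_features @ true_weights + 0.1 * rng.normal(size=(n_words, n_voxels))

# Fit a ridge-regression encoding model: semantic features -> voxel activity.
lam = 1.0
A = word_features.T @ word_features + lam * np.eye(n_features)
weights = np.linalg.solve(A, word_features.T @ voxel_activity)

def decode(observed):
    """Identify a word by finding whose predicted activity pattern
    best matches the observed one (correlation across voxels)."""
    predicted = word_features @ weights            # predicted pattern per word
    scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))

print(decode(voxel_activity[7]))  # recovers word index 7 in this toy setup
```

The key property Farah highlights, that decoders need not "start from scratch with each brain," corresponds here to the feature space (`word_features`) being shared across people, even when the fitted `weights` differ per brain.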
Martha Farah, Ph.D., director of the Center for Neuroscience and Society at the University of Pennsylvania, says the study is important because it “generalizes across people, or reveals that our brains use similar codes for representing meaning. It establishes the potential for these systems to have practical utility, because it means you won’t need to start from scratch with each brain you want to decode.”
Farah, a Dana Alliance for Brain Initiatives member, adds that although these advances are impressive, they are unlikely to culminate in decoding complex linguistic thought, or what one might call “mind reading.” “If you want to get to the moon and start by climbing a tree, you’ll be making progress in the right direction for a while, but the strategy will fail long before your goal is reached,” she quipped.
Deep Learning Restores Speech
Brain-scanning techniques are already being used to develop brain-computer interfaces, such as speech prostheses that convert brain activity into intelligible speech. Decoding the brain activity associated with speech has been extremely challenging because speech involves complex, precise, and rapid control of the vocal tract muscles in the mouth and throat. Speech prostheses have been under development for years, but the low quality of reconstructed speech has been a severe limitation, and many paralyzed patients rely on other ways of communicating, such as devices that measure head and eye movements, or brain-computer interfaces that enable them to spell out words with a cursor.
Researchers are increasingly applying artificial intelligence (AI) methods, such as machine and deep learning, to more accurately decode the brain activity associated with speech. Two recent studies significantly improved decoding accuracy of activity in the auditory cortex, and the quality of speech reconstructed from that activity.
In one of the studies, a team of neurosurgeons and researchers at Columbia University combined recent advances in deep learning with sophisticated speech synthesis technologies to reconstruct speech from auditory cortical activity. The researchers used a recording technique called electrocorticography to measure brain activity in five patients undergoing brain surgery to treat drug-resistant epilepsy. This involved placing a high-density grid of electrodes directly onto the surface of the left temporal lobe, allowing the researchers to eavesdrop on the patients’ brain activity while they listened to short spoken stories.
The researchers designed a deep neural network algorithm to extract key features from the brain activity, and combined it with an advanced speech synthesis algorithm to reconstruct the speech sounds heard by the patients with significantly greater accuracy, and at a higher quality, than previous attempts.
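At its simplest, this kind of speech reconstruction is a regression problem: map each frame of recorded neural activity to the corresponding frame of the speech spectrogram, then turn the predicted spectrogram back into sound. The sketch below is a heavily simplified stand-in, assuming synthetic data and a plain least-squares decoder in place of the study's deep neural network and vocoder; the dimensions and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 500 time frames of ECoG-like features (64 channels)
# and the 32-bin spectrogram frames of the speech heard at those moments.
n_frames, n_channels, n_bins = 500, 64, 32
mixing = rng.normal(size=(n_channels, n_bins))
neural = rng.normal(size=(n_frames, n_channels))
spectrogram = neural @ mixing + 0.2 * rng.normal(size=(n_frames, n_bins))

# Split the frames into a training set and a held-out test set.
train, test = slice(0, 400), slice(400, 500)

# Least-squares decoder: neural features -> spectrogram frames.
decoder, *_ = np.linalg.lstsq(neural[train], spectrogram[train], rcond=None)
reconstruction = neural[test] @ decoder

# Score: mean per-bin correlation between reconstructed and actual spectrogram.
corrs = [np.corrcoef(reconstruction[:, b], spectrogram[test][:, b])[0, 1]
         for b in range(n_bins)]
print(round(float(np.mean(corrs)), 2))
```

Replacing this linear decoder with a deep network, and the spectrogram inversion with a learned vocoder, is essentially what lifted reconstruction quality in the Columbia work.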
More recently, researchers at the University of California, San Francisco, used similar methods to decode both sound representations and the brain activity controlling the muscles in the vocal apparatus, allowing them to reconstruct speech even more accurately. Although this is a far less complex task than decoding word meanings, the studies represent big leaps forward in the development of speech prostheses, which could eventually restore communication to paralyzed patients at rates approaching natural speech.
Decoding Subjective Pain States
In another significant advance, a research group at the University of Colorado has developed an fMRI-based method for decoding brain activity associated with chronic pain, or pain that persists long after physical injury. Chronic pain is one of the foremost causes of disability worldwide, but it is extremely difficult to manage, and its emotional component is still very poorly understood. Since chronic pain cannot be measured objectively, physicians must rely on patients’ self-reports.
The Colorado group has been using fMRI combined with machine learning to identify chronic pain’s elusive neurological “signature.” In 2013, they reported identifying patterns of brain activity that enabled them to distinguish painful heat from non-painful warmth, and both from the anticipation of pain, with high sensitivity and specificity. Building on these initial findings, they subsequently reported that the same method can decode the amount of hot or sharp pain a person is experiencing, and identified brain activity patterns associated with chronic pain that appear to be distinct from those evoked by injury-induced pain.
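The reported sensitivity and specificity come from a standard pattern-classification setup: train a classifier on labeled brain patterns, then count how often it correctly flags painful trials (sensitivity) and correctly rejects non-painful ones (specificity). The sketch below uses synthetic multi-voxel patterns and a nearest-centroid rule as a stand-in for the group's actual machine-learning pipeline; the "signature" vector and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic multi-voxel patterns: 40 "painful heat" trials and 40
# "non-painful warmth" trials over 100 voxels, differing by a small
# shift along a hypothetical pain-signature direction.
n_trials, n_voxels = 40, 100
signature = rng.normal(size=n_voxels)
pain = rng.normal(size=(n_trials, n_voxels)) + 0.8 * signature
warm = rng.normal(size=(n_trials, n_voxels))

# Train a nearest-centroid classifier on the first half of the trials.
pain_centroid = pain[:20].mean(axis=0)
warm_centroid = warm[:20].mean(axis=0)

def classify(trial):
    """Return True ('painful') if the trial is closer to the pain centroid."""
    return (np.linalg.norm(trial - pain_centroid)
            < np.linalg.norm(trial - warm_centroid))

# Evaluate on the held-out half: sensitivity = true-positive rate,
# specificity = true-negative rate.
sensitivity = np.mean([classify(t) for t in pain[20:]])
specificity = np.mean([not classify(t) for t in warm[20:]])
print(sensitivity, specificity)
```

Note what such a classifier does and does not establish: high held-out accuracy on experimentally evoked pain says nothing by itself about detecting pain, or its absence, in an adversarial setting, which is the point of the caution that follows.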
“This has hugely important legal and ethical potential,” says Farah, “because many lawsuits, disability claims, and even opioid-prescribing problems hinge on who really has pain and who is malingering. We need to rein in irrational exuberance.”
The researchers are well aware of these implications, and have stated that, although their methods should be used to further our understanding of the brain mechanisms underlying pain, the use of their research as a “pain lie detector” is unwarranted.
Will Deep Learning Violate the Privacy of Thought?
Using AI methods, researchers are also decoding visual brain activity and reconstructing it with increasing accuracy. In 2017, researchers at Purdue University published a small study showing they could successfully reconstruct movie clips their participants watched in the brain scanner without having to compare the decoded activity patterns to others generated previously. Last year, the Japanese dream-decoding team reported using a pre-trained deep neural network to decode and reconstruct the neural representations of both seen and imagined visual images.
The application of AI methods compounds the ethical implications of using functional neuroimaging to decode brain activity and raises some new ones.
“There are two ways to think about the advances that are emerging from the use of AI,” says neuroethicist Peter Reiner, Ph.D., a professor of psychiatry at the University of British Columbia in Vancouver. “The first is to consider the implications of what we have at present, and the second is to think about where it might go in the future.”
The primary issue at play with these techniques is that they might impinge upon privacy of thought. If an AI-enabled fMRI machine can glean what you are thinking, then in some meaningful way it has indeed breached the privacy of thought. There’s not much to worry about if this happens in an experimental setting, or under the auspices of a physician, but under other circumstances, it could be troubling.
Many legal systems prohibit professionals from compelling accused criminals to incriminate themselves, Reiner explains, but if these technologies could one day read mental activity in more detail, we would need to consider how to regulate their application.
“Fortunately, the cumbersome nature of the technology and existing norms around the treatment of the accused make such an outcome unlikely,” he adds. “But the arc of technology development—particularly in the field of AI—is such that rapid advances may make it considerably easier to glean such information in the future.”
Indeed, both small companies such as Openwater and large firms with substantial resources such as Facebook have ongoing programs aimed at miniaturizing such technology so that it can be worn as a device. The aspiration is to realize something that has long been a dream of science fiction fans: telepathy. It is an understatement to say that success in such an endeavor would have widespread implications for privacy of thought.