fMRI: Not a Mind Reader (Yet?)
Briefing Paper
Credit: Shinji Nishimoto and Jack L. Gallant, UC Berkeley, 2011
One day, late in 2009, Shinji Nishimoto, Ph.D., a postdoctoral researcher at the University of California at Berkeley, lay still inside a functional magnetic resonance imaging (fMRI) unit—basically a room-sized, tube-shaped electromagnet—and watched a movie. The movie was just nine minutes long, but Nishimoto watched it again and again: ten times in all. While he watched, the fMRI machine recorded patterns of activity in his visual cortex. Later, Nishimoto and his colleagues fed that fMRI data into a computer model—built from data gathered during previous movie-watching sessions—which output a reconstruction of the film. The reconstructed images were hazy, but accurate in their most basic features, and certainly looked like an initial proof of principle. “[D]ynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology,” they wrote in the journal Current Biology.
The feat came just a few years after French researchers described the first fMRI decoding of static images from brain activity. So the field was progressing swiftly, and its aim, apparently, was to devise powerful fMRI-based mind reading applications—or “brain decoding devices,” as Nishimoto and his colleagues termed them.
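To see the logic of such decoding in miniature, here is a toy sketch in Python. It uses simulated data and a generic two-step recipe—fit an "encoding model" that predicts each voxel's response from stimulus features, then identify which candidate clip best matches a new pattern of brain activity. It is only an illustration of the general approach, not the Gallant lab's actual pipeline, which relied on motion-energy features and a Bayesian reconstruction over a large library of natural movies.

```python
# Minimal, illustrative sketch of "decoding by identification" on synthetic data.
# Not the published method; all numbers and features here are made up.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 600, 50, 40, 200  # training/test time points, features, voxels

# Hypothetical training data: stimulus features and the BOLD responses they evoke.
X_train = rng.standard_normal((n_train, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_vox))  # noisy voxel responses

# 1. Fit an encoding model per voxel with ridge regression (closed form).
lam = 10.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat), X_train.T @ Y_train)

# 2. Identification: given new brain activity, pick the candidate clip whose
#    model-predicted response pattern correlates best with the observed pattern.
X_test = rng.standard_normal((n_test, n_feat))                   # candidate clips' features
Y_test = X_test @ W_true + rng.standard_normal((n_test, n_vox))  # observed responses
Y_pred = X_test @ W_hat                                          # predicted responses

correct = 0
for i, y_obs in enumerate(Y_test):
    r = [np.corrcoef(y_obs, y_hat)[0, 1] for y_hat in Y_pred]
    correct += int(np.argmax(r)) == i
print(f"identification accuracy: {correct / n_test:.2f} (chance ≈ {1 / n_test:.2f})")
```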
How long would it take fMRI researchers to demo an app to tap into dreams? Just two more years, as it turned out. This past May, a team from Kyoto, Japan, reported in Science that their fMRI-based “[d]ecoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of [dream] contents.”
Thus, even the dream state—what Freud considered the gateway to the subconscious and what we generally regard as one of the last redoubts of human privacy—now seems to be under siege from fMRI tech.
Should we be worried?
Too-Big Brother
Maybe not just yet. In fairness to their developers, these brain-decoding devices do have a plausible scientific use, namely to help scientists test their theories of how the brain works. Such devices also should be helpful in devising high-tech communications and control systems for people whose disabilities—including “locked-in syndrome”—preclude speech, sign language, and any use of their arms or legs.
It’s also worth bearing in mind that fMRI cannot “read” your private thoughts without a considerable amount of compliance on your part. A typical fMRI unit weighs tons, costs a fortune, and requires a huge electromagnet that must be kept supercooled by hundreds of gallons of liquid helium. Into this noisy, high-tech leviathan you as the subject must consent to enter, lying flat on your back—after leaving any watch, necklace, earring, or other ferromagnetic item far away. Smaller, more portable MRI devices with permanent magnets are feasible, but have lower field strengths that make them less useful for functional neuroimaging—as opposed to the ordinary structural imaging of a typical hospital MRI. “Functional MRI probably is never going to be very portable,” says Jack Gallant, Ph.D., whose laboratory at the University of California at Berkeley accomplished the recent movie-decoding feat.
The primary magnetic field inside an fMRI electromagnet is strong enough—tens of thousands of times stronger than the ambient planetary field—to force the realignment of hydrogen nuclei in your brain, as if they were so many compass needles. The device uses a radio-frequency pulse to, in effect, pluck those aligned particles, like a harpist plucking harp strings, and it maps the resulting tones in all their informative variety. The technique is ingenious—the basic MRI concept earned its inventors a Nobel Prize—but it is so sensitive that it can easily be disrupted. Even after consenting to be fMRI-scanned and being positioned within the machine, you could spoil the scan simply by moving your head a bit during the procedure. In principle, you could muddy the data covertly, for example by thinking of the wrong things, by wiggling a toe or clenching a muscle, or merely by failing to follow the operator’s instructions.
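The back-of-the-envelope numbers behind that "compass needle" picture can be checked in a few lines. The sketch below assumes a common 3-tesla research scanner and a rough 50-microtesla geomagnetic field; these illustrative values are not taken from the article.

```python
# Rough arithmetic: scanner field vs. Earth's field, and the radio frequency at which
# hydrogen nuclei "ring" (the Larmor frequency). Values are illustrative approximations.
GAMMA_H = 42.58e6   # gyromagnetic ratio of 1H, Hz per tesla
B_SCANNER = 3.0     # tesla, a common research field strength (assumed)
B_EARTH = 50e-6     # tesla, rough ambient geomagnetic field (assumed)

print(f"scanner field / Earth's field ≈ {B_SCANNER / B_EARTH:,.0f}x")            # ~60,000x
print(f"1H Larmor frequency at {B_SCANNER} T ≈ {GAMMA_H * B_SCANNER / 1e6:.0f} MHz")
```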
Even when subjects are fully compliant, fMRI cannot offer anything close to a fine-grained, neuron-by-neuron picture of the working brain—which is just one of the reasons fMRI applications have been slow to emerge from the lab.
The usual fMRI technique—blood oxygen level-dependent or “BOLD” fMRI—aims to record the tiny surges of oxygen-laden blood that active brain cells draw from surrounding vessels. (The technique takes advantage of the tiny difference in magnetic properties between incoming, oxygen-carrying red blood cells and those that have delivered their O2 loads.) These surges, even when measured precisely, are imperfect proxies for brain activity. And in practice, the raw BOLD signal they yield is also quite “noisy,” making it likely that a typical large fMRI dataset—in the absence of the right statistical techniques—will show spurious correlations between neural activity, somewhere in the brain, and the behavior of interest. A team of researchers jokingly illustrated this potential noise problem last year by using fMRI to find “neural correlates of interspecies perspective taking”—in a dead fish.
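The statistical point of that dead-fish demonstration—that testing tens of thousands of voxels without correcting for multiple comparisons will "find" activity even in pure noise—can be reproduced with simulated data. The toy sketch below uses made-up numbers and has nothing to do with any real scan; it simply shows how many noise voxels clear an uncorrected 5% threshold and how few survive a standard Bonferroni correction.

```python
# Toy illustration: voxel-wise correlation of pure noise with a "behavior of interest".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_scans, n_voxels = 100, 10_000
behavior = rng.standard_normal(n_scans)                   # the behavior being "explained"
noise_voxels = rng.standard_normal((n_scans, n_voxels))   # no real signal anywhere

p_vals = np.array([stats.pearsonr(behavior, noise_voxels[:, v])[1] for v in range(n_voxels)])

print("uncorrected 'significant' voxels (p < .05):", int((p_vals < 0.05).sum()))  # ~500
print("Bonferroni-corrected survivors:", int((p_vals < 0.05 / n_voxels).sum()))   # usually 0
```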
The spatial resolution of an fMRI scan is not very high; it can typically resolve individual “voxels” of 3D brain space a few millimeters on a side—a volume that often contains hundreds of thousands of neurons and their support cells. The time resolution of fMRI is even worse; the changes in the BOLD signal occur over seconds, which is to say thousands of times more slowly than the changes in neuronal activity that drive them. The content of an fMRI mind-scan movie could never reproduce the rat-a-tat pace of the average Hollywood blockbuster. Even much simpler applications are limited by BOLD signals’ slowness. A Chinese report in 2007 on fMRI-based lie-detection noted plaintively that “all images of fMRI are just the final results of brain changes after lying.”
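The "thousands of times slower" comparison is easy to check with rough, assumed numbers: a hemodynamic response takes several seconds to unfold, while an action potential lasts about a millisecond and a film frame flashes by in under a twentieth of a second.

```python
# Rough, assumed timescales; illustrative only, not measurements from the article.
bold_response_s = 5.0   # seconds for a BOLD response to peak (approx.)
spike_s = 0.001         # ~1 ms per action potential
frame_s = 1 / 24        # one frame of a 24 fps movie

print(f"BOLD vs. spikes: ~{bold_response_s / spike_s:,.0f}x slower")                 # ~5,000x
print(f"movie frames elapsed during one BOLD response: ~{bold_response_s / frame_s:.0f}")
```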
In other words, fMRI provides a big, slow picture of brain activity—a rough sketch, not a microscopically detailed one. “It helps us understand how the brain processes information at a macroscopic level,” says John-Dylan Haynes, Ph.D., who directs the Berlin Center for Advanced Neuroimaging at the Charité Hospital in Berlin.
We know you’re lying
Though it falls well short of perfection, fMRI nevertheless “beats the heck out of every other method we currently have for measuring brain activity in humans,” Gallant says. That has led many researchers to use it not just as a scientific tool but also as a potential applied technology in clinical, legal, and other settings.
Lie detection was one of the first of these “mind-reading” applications to be studied. It was an obvious choice, given the apparent need for better lie-detection technology by lawyers, courts, and various government agencies—and given the apparent simplicity of the task: to distinguish between just two states, “truth” and “deception.”
From the beginning, researchers found evidence that certain prefrontal and associated parietal cortex regions are heavily involved in inhibiting truth and maintaining deception. In general, scientists in this field now model deception—as University of Pennsylvania psychiatrist Daniel Langleben, M.D., described it in a 2008 review paper—“as a working memory-intensive task, mediated to a large extent by the prefrontoparietal systems dedicated to behavioural control and attention.”
How accurate are these fMRI lie-detection models? The handful of studies to date suggest accuracies of 75% and up in simple laboratory settings, which—if such numbers translate to real-world settings—could be useful for many applications, for example in security screening and in civil lawsuits that require a relatively low standard of evidence. Langleben says that he has done a study, not yet published, that directly compared fMRI and polygraph lie-detection, and found the former superior (86% to 72%).
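Accuracy figures like these are typically estimated by training a two-class classifier on per-trial brain-activity patterns and scoring it with cross-validation. The sketch below shows that generic recipe on simulated "lie" and "truth" trials; it is an assumption-laden illustration, not Langleben's analysis or any published pipeline.

```python
# Generic cross-validated lie-vs-truth classification on simulated voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 300
labels = np.repeat([0, 1], n_trials // 2)          # 0 = truth, 1 = lie
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5                  # weak, hypothetical "deception" signal

acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=10)
print(f"cross-validated accuracy: {acc.mean():.0%} (chance = 50%)")
```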
However, it appears that most scientists, including Langleben, and at least one federal court judge, regard the real-world use of fMRI lie-detection as premature. As Langleben says, it needs more “translational” research—more laboratory optimization and more clinical-type trials, like those for a potential new medicine.
Haynes is even more pessimistic. “I find it hard to imagine that we’re going to see a workable fMRI lie detector in the next ten years,” he says.
One potential hurdle has to do with the natural variation in lie vs. truth responses across the human population—including psychopaths and other unusually fluent liars—which isn’t yet known. Possibly an even greater challenge has to do with deliberate countermeasures—cognitive or other tricks with which a subject can mask the difference between “lying” and “truthful” brain activity. In what appears to have been the only study of this so far, the use of such a countermeasure dropped the fMRI lie-detection rate to only 33%, essentially defeating the machine. Counter-countermeasures have been used successfully with other lie-detection technologies, but they haven’t yet been tested for fMRI.
Whether the testing needed to make fMRI a standard tool of lie-detection—or even to rule it out as inaccurate and impractical—will ever be done is an even bigger question. As Langleben notes, the field has become more or less inactive at present, especially in the U.S. Indeed, he senses a “lack of will,” in the academic and research funding communities, to pursue further fMRI lie-detector development.
“It’s a strange topic,” he says. “People just seem to have a problem with lie detection. I hate to sound like a shrink, but I think it goes beyond the rational; people tend to get very emotional about this—and that includes scientists.”
Detecting awareness
There are, of course, potential fMRI applications that aim simply to benefit people. One of the most prominent involves the use of fMRI to assess the level of consciousness in unresponsive hospital patients, based on emerging theories about consciousness’s neural correlates. In most cases this would require not an either/or choice, but a measurement of the degree of neural activity. In a study published in Neurology in 2010, for example, researchers in the laboratory of former Dana grantee Joy Hirsch, Ph.D., at Columbia University found that patients in a “locked-in” state—conscious but physically unable to respond—showed signs of significantly more neural activity in verbal and object-recognition areas during an object-naming task, compared with patients in vegetative or other low-consciousness states.
Similarly, a study in 2009 by researchers at Cambridge University found that fMRI showed advantages over traditional behavioral assessments—such as asking the patient to speak or move—in distinguishing the states of consciousness of unresponsive patients. As they reported:
“We found, contrary to the clinical impression of a specialist team using behavioural assessment tools, that two patients referred to the study with a diagnosis of vegetative state did in fact demonstrate neural correlates of speech comprehension when assessed using functional brain imaging.”
Strikingly, the researchers found that “the level of auditory processing revealed by functional brain imaging” in the patients they studied correlated with the degree of behavioral improvement six months later—which encouraged them to suggest that fMRI could also be used to obtain “valuable prognostic information” in such cases.
Where does the development of this promising technology for clinical use stand today?
“It has stalled out, to some extent,” says Hirsch. The reasons, she suggests, include the high cost of using fMRI; the difficulty of transferring such a sophisticated technology—with its associated analytical techniques and its need for highly skilled personnel—from research use to general clinical use; and the lack of standard fMRI criteria, so far, for assessing the degree of awareness in patients. “fMRI has done a beautiful job of establishing proofs of principles here, but the burden of translating these principles into clinical practice may end up being handed off to other more user-friendly technologies that are still in development,” Hirsch says.
Still in the laboratory
fMRI has been described in numerous research papers as a possible clinical tool for diagnosing or monitoring other neuropsychiatric conditions, including autism-spectrum disorders, schizophrenia, Alzheimer’s disease, and antisocial personality disorder.
Even more ambitious are the prospective brain-decoding applications that would tap into subjects’ detailed thoughts and sensory experiences. Haynes and his group reported in 2008 that they could use fMRI data to predict with high accuracy which of two choices a subject would select—up to ten seconds ahead of the subject’s conscious awareness of the decision. “You think, at the time that you’re making up your mind, that you’re free to take either option,” explains Haynes. “But it seems that your brain has already started preparing this decision.”
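The logic of such "pre-decision" decoding—train and score a classifier separately at each time point leading up to the reported moment of choice—can be sketched on simulated data. The example below mimics that logic only; it is not the Haynes group's actual analysis, and the growing "choice-predictive" signal is planted by hand for illustration.

```python
# Time-resolved decoding of an upcoming two-way choice, on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 100
choice = rng.integers(0, 2, n_trials)           # e.g., left vs. right button press

for t in range(-10, 1):                          # seconds relative to the reported decision
    X = rng.standard_normal((n_trials, n_voxels))
    X[choice == 1, :10] += 0.1 * (10 + t)        # hypothetical signal growing toward t = 0
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, choice, cv=5).mean()
    print(f"t = {t:+3d} s before decision: accuracy ≈ {acc:.0%}")
```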
Similarly, in a 2010 study his team found that they could predict how subjects would value cars by their fMRI responses while watching cars pass by on a video—even when their attention was firmly directed elsewhere in the scene. Will such “neuromarketing” or “brain-marketing” techniques soon render consumers helpless before advertisers? Unlikely, says Haynes. “fMRI doesn’t really tell you much more than traditional behavioral tests do,” he says. “In most cases brain-marketing is just a gimmick.”
The other potential disease-diagnosing and thought-reading applications of fMRI may not be gimmicks, but none has emerged meaningfully from the laboratory into clinical or legal or some other commercialized use. “Researchers are now using simple forms of mental state decoding that weren’t thought possible ten years ago,” Haynes says. “But of course we can’t build a mind-reading machine at the moment with current neuroimaging technology. That has to be clear, even if, in the popular media, this line between research and applied technology is often blurred.”
“In general,” he concludes, “taking fMRI technology out of the lab and showing that it can work in an applied setting is something that has hardly been done so far.”