When you spot someone you know, that subtle “pop” of recognition you feel produces a distinctive response pattern in your brain.
“If you’ve seen something before, it’s reasonable to assume that the brain might have some record of that and process it differently the next time you encounter it,” said Jesse Rissman, a postdoctoral fellow in the department of psychology at Stanford University.
Using functional magnetic resonance imaging (fMRI), Rissman and his colleagues have found a way to decipher the brain’s response to a familiar face. But, they emphasize, what they found should add a sobering note of caution to efforts to develop fMRI as a tool for the investigation and prosecution of crimes.
“We conclude that fMRI, at least as far as we have applied it, is not yet capable of determining the truth about a person’s past experiences,” said Rissman. “All we could identify was a person’s belief that he or she had seen a particular face before, but this belief could sometimes be strong even for faces the person had never encountered. We found that it was challenging to differentiate such false memories from accurate ones.”
Rissman and his colleagues conducted two experiments to find out if the recognition of a face produced a reliable “neural signature” detectable by fMRI. In the first, 16 people were asked to study pictures of 200 faces for 4 seconds each. Then, while they were in the fMRI scanner, they were shown those pictures mixed with another 200 they had never seen and asked to decide whether they had seen each face before. They indicated how confident they were in each answer by rating it on a 5-point scale ranging from sure they had seen the face previously to sure they hadn’t.
The software used to analyze the pattern of brain activation captured by the fMRI data correctly predicted 83 percent of the time whether the subjects had seen each face before.
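The paper doesn’t spell out its analysis pipeline in this article, but the general technique — multi-voxel pattern analysis, in which a classifier is trained on activation patterns and tested on held-out trials — can be sketched on synthetic data. Everything below (voxel counts, signal strength, the nearest-centroid classifier) is an illustrative assumption, not the study’s actual method:

```python
import random

random.seed(0)

N_VOXELS = 50   # hypothetical number of voxels in a region of interest
N_TRIALS = 100  # simulated trials per condition ("old" face vs. "new" face)
SIGNAL = 0.5    # assumed mean activation difference between conditions

def simulate_trial(label):
    # Each trial is a noisy activation pattern; "old" trials (label=1)
    # get a small additive signal in half of the voxels.
    return [random.gauss(SIGNAL * label if v < N_VOXELS // 2 else 0.0, 1.0)
            for v in range(N_VOXELS)]

trials = [(simulate_trial(lab), lab) for lab in (0, 1) for _ in range(N_TRIALS)]

def centroid(patterns):
    return [sum(p[v] for p in patterns) / len(patterns)
            for v in range(N_VOXELS)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Leave-one-out cross-validation with a nearest-centroid classifier:
# train on all-but-one trial, then classify the held-out trial by
# whichever condition's average pattern it sits closer to.
correct = 0
for i, (pattern, label) in enumerate(trials):
    train = trials[:i] + trials[i + 1:]
    c_new = centroid([p for p, lab in train if lab == 0])
    c_old = centroid([p for p, lab in train if lab == 1])
    pred = 1 if dist2(pattern, c_old) < dist2(pattern, c_new) else 0
    correct += (pred == label)

accuracy = correct / len(trials)
print(f"cross-validated accuracy: {accuracy:.2f}")
```

The key point the sketch illustrates is the cross-validation: the classifier never scores a trial it was trained on, so the reported accuracy (like the study’s 83 percent) reflects generalization to unseen patterns rather than memorization.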
In the second experiment, designed to see if the software could detect a similar pattern of activation in the brain of a person not focused on remembering, seven people who didn’t take part in the first experiment were shown the 200 faces. Instead of being asked to remember them, however, they were asked to rate the attractiveness of each face. Later, in the scanner, they were asked to identify whether each of the 400 faces they were shown was male or female. Under these conditions the software accurately predicted recognition only 56 percent of the time—not much better than chance.
“This decline in classification performance provides evidence that our memory decoding method is not able to detect subtle subconscious brain responses that are automatically triggered by stimuli that were previously encountered,” Rissman said. The researchers reported their results in the May 10 issue of the Proceedings of the National Academy of Sciences.
The results suggest that patterns of brain activation do reveal the experience of recognizing a face seen before, and that these patterns are consistent from person to person. However, the accuracy of those patterns depends strongly on how confident people are about what they remember.
“What we were sensitive to was not memory itself, but the subject’s decisions about memory,” Rissman said. “We found that the brain scans are only as good as a person’s memory.”
Nevertheless, the ability to identify the presence or absence of memory patterns in the brain “could have profound implications for forensic investigations and legal proceedings,” Rissman and his co-authors state in their paper.
Cart before the horse?
Efforts to admit brain activity as evidence in a court of law are gaining momentum.
In 2008, a woman in India was convicted of killing her ex-fiancé with arsenic. Admitted as evidence were the results of a Brain Electrical Oscillations Signature (BEOS) test, which used electrodes on the defendant’s scalp to measure her brain waves in an effort to detect whether she had “experiential knowledge” of the crime. Prosecutors argued that the test implicated her by showing that her brain reacted in a distinctive way to descriptions of the crime. She was sentenced to life in prison, although the sentence has been suspended for other reasons.
And recently lawyers in Chicago for Brian Dugan, convicted of the rape and murder of a 10-year-old girl, introduced fMRI scans into his sentencing hearing that purportedly revealed Dugan to be a psychopath with deficits in the paralimbic region of his brain, which may have compromised his ability to regulate impulses and emotions. The judge was not persuaded: Dugan was sentenced to death. (See a story on the case in Nature.)
No such evidence has been admitted into regular court proceedings in the United States, however, though attempts to do so continue.
In 2009, lawyers for a man accused of sexual abuse tried to enter fMRI evidence obtained by a California company, No Lie MRI, which they said supported the truthfulness of their client’s testimony. The prosecution lined up experts to testify about the shortcomings of the technique, and the defense withdrew the motion.
A few months later, a psychologist in Tennessee accused of defrauding Medicare and insurance companies wanted to enter fMRI evidence obtained from Cephos Corp. showing that he was telling the truth, but the judge, in a comprehensive 39-page report, concluded that the evidence did not meet two of the four criteria established by the federal courts for the introduction of scientific evidence. While the Cephos evidence has been subjected to empirical testing and published in peer-reviewed journals, Magistrate Judge Tu Pham concluded, it does not have a known error rate, and it is not generally accepted in the scientific community.
And recently in Brooklyn a woman who worked for a temp agency sued her former employer claiming she was denied good assignments after reporting a case of sexual harassment. Cephos obtained fMRI evidence from a supporting witness, claiming it showed she was telling the truth, but Judge Robert H. Miller ruled that such evidence would violate the jury’s right to determine the credibility of the witness. In his ruling the judge added, “even a cursory review of the scientific literature demonstrates that the plaintiff is unable to establish that the use of the fMRI test to determine truthfulness or deceit is accepted as reliable within the relevant scientific community.”
Steven Laken, the CEO of Cephos, which owns and operates its own fMRI scanner in Framingham, Mass., disagrees. “Thirty-five publications have shown that lying activates more parts of the brain than telling the truth,” said Laken. “The work we’ve been doing for seven years, from the scientific point of view, is moving in the right direction.”
Laken was enthusiastic about Rissman’s work, but not because it might contribute to using fMRI as a form of lie detection. Rather, he saw it as a potential method for determining the quality of eyewitness testimony.
“Eyewitness testimony is almost worse than the flip of a coin,” he said. “When witnesses testify, ‘that’s who I saw,’ it’s almost more likely that they’re wrong than right. The wrongly convicted are usually convicted on the basis of eyewitness testimony. If we could get a quantitative measure of how confident witnesses are that they’ve identified the right person, that could provide a tremendous amount of value.”
But even Rissman acknowledges that the technology is not quite up to that task yet.
“The pattern of activity is consistent with the person’s experience,” he said, “but the process of memory is complex. A face could remind you of a friend, and on the brain scan it will look like you recognize the face. It’s hard to tell those situations apart. I think a much richer understanding of the psychological processes involved in memory will be critical to future experiments.”
The American Academy of Arts & Sciences has published an excellent collection of essays on this subject under the title, “Using Imaging to Identify Deceit: Scientific and Ethical Questions.” The pamphlet contains essays by nine heavy hitters in the field, including law professors Stephen Morse and Henry T. Greely (co-author of Rissman’s paper), and fMRI researchers Marcus E. Raichle and Nancy Kanwisher. It costs $6, and you can download it here: http://www.amacad.org/publications/deceit.aspx.