Blogging from the Neuroethics Society Annual Meeting 2008


November 15, 2008

[First posted on the Dana Press blog]

FRIDAY, NOVEMBER 14, 2008

Truth telling on lie detection

How should we take companies’ claims that their functional magnetic resonance imaging (fMRI) scanners can tell if we are telling lies? With a mighty grain of salt, said panelists during the second day of the Neuroethics Society’s annual meeting in Washington, D.C.

The idea of using fMRI as a lie detector has already permeated society, said Daniel Langleben of the University of Pennsylvania, one of the most frequently cited fMRI researchers in this area.

“Over the past 6 years, the press assumed fMRI was better than lie detection,” he said. “But no one really knows.” One reason is that few controlled or real-world tests of the technology have been done; another is that people tend to confound lie detection (determining whether a person is saying something they believe to be a lie) with mind reading.

In carefully controlled tests with white male college students who had no known medical conditions and were not on drugs, Langleben said, “we could sometimes detect deception,” but not always.

“There is no perfect lie detector,” said Steve Laken of Cephos Corp., which offers fMRI scanning to the public. Like standard lie detectors, fMRI “could be a forensic tool, not a definitive tool but a forensic tool,” he said. Cephos’s method has “78-97 percent accuracy,” according to the company’s research.

But the vast majority of his clients do not wish to prove that someone is lying; they want to show that they themselves are telling the truth, he said. “Prosecuting attorneys and DAs aren’t interested; defenders are.” The company includes a long list of explanations on the informed-consent forms it requires clients to sign before administering tests, including the caveat that the client might not like the outcome. Cephos also ensures that neither the people doing the scanning nor the off-site researchers reading the images know the details of the case.

If the images light up the right way, “we say they believe what they’re saying is true,” Laken said, not that it is factually true.

But telling the truth may not look the same in the brain as telling lies, said Hank Greely of Stanford, who co-published a review of then-current fMRI lie-detection studies in the American Journal of Law and Medicine in 2007. And there are even fewer tests of truth-telling.

Imaging for lie detection is just the first of a variety of neuroscience-based tests that will have legal implications, Greely said. Others include tests that might, in the future, detect levels of pain (for personal injury litigation) and whether we recognize people or crime sites. And interest is high: “The law is really interested in someone’s mental state,” he said.

Greely disagrees that fMRI for lie detection would be just another piece of forensic technology. “It is science saying this person is a liar, this person is telling the truth,” he said. Some studies find that simply presenting information alongside an unrelated picture of a brain scan leads people to think the information is true. “It is reckless, and so unethical, to proceed [with this technology] with so little knowledge if it is good—or good enough.”

What should we do? “It would be nice to have some regulation,” Laken said, but people disagree on what level to recommend and which agency should do it. In the short term, we could regulate the use of fMRI devices under standard medical regulations, just as we do other medical devices, suggested Langleben.

Nicky Penttila