Lie Detection Services Remain Premature, Neuroethicists Say

by Aalok Mehta

January 2009

If television is any guide, then neuroscience-based methods of lie detection have already passed the test of public acceptance—even as the feasibility of the technology remains an open question among scientists. A recent episode of House, for instance, showed doctors casually outing a deceptive patient while testing for blood vessel anomalies.

This growing disparity between public and scientific understanding of “forensic neuroscience” was one of several pressing issues that brought nearly 200 people to Washington, D.C., Nov. 13-14 for the first annual meeting of the Neuroethics Society.

Adding to the urgency of the discussion are two recent developments. In the past two years, two companies, Cephos Corp. and No Lie MRI, have gone fully commercial with functional magnetic resonance imaging (fMRI) systems, the most promising brain-scanning method for lie detection—despite objections that the technology is not ready. And in September, an Indian woman was convicted of murder based largely on a widely criticized electroencephalogram, or EEG, test for detecting familiarity with the details of a crime.

Scientists and philosophers fear that premature use of lie detectors—one of the most advanced of new neuroscience-based technologies with broad social implications—may set a poor precedent.

“I think lie detection is important in and of itself, but it’s also important because it’s the first of a variety of new neuroscience-based tests that will have potential legal significance: detection of lies, detection of bias, detection of sensation of pain, detection of recognition,” Hank Greely, a law professor at Stanford University, said at the meeting.

It’s still far too early to tell whether fMRI-based lie detection will become feasible, Greely and other experts said—and that’s why the commercial applications are premature and so worrisome.

For example, most fMRI lie-detection studies have been done in controlled lab settings with homogeneous and willing sets of participants, said University of Pennsylvania psychiatry professor Daniel Langleben. The studies usually report averaged data for an entire group, which don't translate well to testing individuals.

And many of the variables present in real life—such as stress, illness or lack of foreknowledge about what is true and what is false—are often ignored.

Lies could be differentiated from truth in these studies, Langleben said, but only under specific conditions: “We do not know whether it can be actually done in real life. Though there are some reports about it, it has not been published and it has not been independently studied.”

Furthermore, when researchers take external factors into account, the data that remain are modest. “When we carefully control for salience (relevance), when we carefully control for every other sensory component of those items, and we carefully control who was studied … in the end you are left with very little,” Langleben said.

But Steven Laken, president and chief executive officer of Cephos, argued that commercial services provide useful evidence to courts and offer the real-world empirical data needed to answer many outstanding scientific questions.

“The judicial system makes lots of mistakes, and as an ethical society, should we be driving a way to fix a system that’s broken?” Laken asked. “Accurate lie detection could help that. It could be a forensic tool. It’s not a definite tool, but it could be a forensic tool.”

The company has worked hard to put ethics first, he added, including gathering data for three years before going commercial, using independent scientific consultants and fully outlining the limitations and risks of fMRI to potential customers. In the end, Laken said, the reliability of the company's services rivals that of many medical tests.

“We’ve done over 250 scans, and our accuracy rates range from 78 to 97 percent,” he said. “We don’t need FDA approval for this, but we’re getting the kind of data that would be available for submission if it was an equivalent FDA-type test.”

But fMRI-based lie detection needs to meet a higher standard precisely because of the magnitude of public misconception and the profound consequences of misuse, others countered.

“This is a particularly significant piece of evidence for which we worry not only about reliability but (about) the balance between its probative value and its potential prejudice, and the potential for prejudice here is enormous,” Greely said. “There is already some important evidence that jurors … tend to take seriously—too seriously—any piece of evidence that comes with neuroscience attached to it.”

No one has systematically addressed countermeasures that could fool the system, he added. Studies comparing whether fMRI works better than polygraphy—which detects lies better than chance but has still been deemed inadmissible in many courts—are also lacking.

“I think this is fascinating science,” Greely said. “It may have a place in society, it may not. At this point, we don’t know, and I think it is reckless and hence unethical for us to proceed to the public use of something so important with so little knowledge about whether it’s good or not.”