Truth, Lies, and False Memories: Neuroscience in the Courtroom

Report on Progress
Craig Stark, Ph.D.
October 15, 2014

Since 2008, I have participated in a series of judicial seminars for Federal and State Judges aimed at communicating how emerging issues in neuroscience can and should impact the courts. The seminars are supported by a grant from the Dana Foundation to the American Association for the Advancement of Science (AAAS).

It should come as no surprise that there are many areas in which breakthroughs in neuroscience, and neuroimaging in particular, have informed and will continue to inform the courts. Are there signs of brain activity following trauma even though a person is motionless and unresponsive? Is there a lesion that might explain a person’s sudden change in behavior and might serve as a mitigating factor at sentencing? By being able to peer inside the skull and observe both the structure and function of the brain, we can speak to questions like these with some confidence.

There are many questions, though, that despite all our technology and all our progress, we cannot readily address – at least not with the confidence that would be required to affect an individual’s criminal trial. We all see headlines touting remarkable, definitive-sounding findings such as “Study finds psychopaths have distinct brain structures.” Below the headline we might read about structural brain differences between psychopaths (or whatever group is under study) and the rest of us. Our instinct is to believe that neuroscientists have identified a pathognomonic brain alteration and can now determine whether someone is a psychopath based on a brain scan. Yet this is far from the truth and, in this case, not even what the authors of this solid study claim.

Sure, there are regional brain volume differences between psychopaths and healthy controls that are statistically reliable at a group level. There are height differences between males and females that are statistically reliable as well, but that doesn’t mean you can accurately infer someone’s sex by knowing that they are 5’9” (or even 4’11” or 6’2”). The distributions of male and female stature overlap far too much. And the problem isn’t merely one of overlapping distributions of height or of regional brain structure; other confounding factors, such as age and nationality, would also need to be accounted for. The confounding factors in neuroimaging research are complex and not well understood. We, as neuroimagers, do our best to account for them, and our scientific reports are typically appropriately cautious about our conclusions (the above study is no exception, as it lays no claim to being able to make such a reverse inference), but by the time a finding reaches the popular press, these nuances are often lost.
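To see why a reliable group difference does not license a confident judgment about any one individual, here is a minimal simulation. All of the numbers (the group means, the spread, the sample sizes) are made-up illustrative assumptions, not values from the study discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "regional volume" scores for two groups. The means,
# spread, and sample sizes are assumptions chosen for illustration.
controls = rng.normal(loc=100.0, scale=15.0, size=500)
patients = rng.normal(loc=94.0, scale=15.0, size=500)

# The group difference is easy to detect: a two-sample t statistic
# far beyond conventional significance thresholds.
pooled_sd = np.sqrt((controls.var(ddof=1) + patients.var(ddof=1)) / 2)
t_stat = (controls.mean() - patients.mean()) / (pooled_sd * np.sqrt(2 / 500))
print(f"two-sample t = {t_stat:.1f}")

# ...but labeling any one individual by thresholding at the midpoint
# of the two population means is only modestly better than a coin
# flip, because the two distributions overlap so heavily.
threshold = (100.0 + 94.0) / 2
correct = np.sum(controls > threshold) + np.sum(patients <= threshold)
accuracy = correct / 1000
print(f"individual classification accuracy = {accuracy:.1%}")
```

The same logic applies to height and sex: a highly significant mean difference between groups coexists with near-chance accuracy when classifying a single case.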

We would all like neuroimaging to be this powerful. We would like it to be able to tell us “what’s wrong with somebody” or to diagnose diseases or disorders. To some extent it can, but far from definitively. When a researcher like Dr. James Fallon uses his own MRI scans to describe himself as having the brain of a psychopath, we may well wonder just how much these scans can actually determine. Yet, while he is related to Lizzie Borden and seven other killers, carries the MAO-A gene linked to violence, and describes himself as doing “jerky things that piss people off,” he’s not a murderer or a psychopath. Quite possibly, as he believes, it’s been his environment and experiences that led him down a different path. His office is just across the way from me here at UC Irvine and I am not remotely concerned about safety on this campus. But, what would happen if someone close to him were found dead under suspicious circumstances? I recently served as an expert witness in a murder trial, asked to describe the limitations of memory since hard evidence was scarce for both sides in the courtroom. I couldn’t help but wonder what would happen if Dr. Fallon were sitting in the defendant’s chair, innocent, but with neuroscience data painting so strong a picture of a killer. If this information were entered into evidence, how could the jury not treat it as damning?

These are the problems that neuroscience faces as we look to use its incredible technologies to help society and to help the courts in particular. The developments in our field have been nothing short of spectacular and, as scientists, we owe it to society to apply our findings and our tools to the world outside our labs. But, we have a responsibility to apply them appropriately and judiciously, as the cost of errors is far greater than a published finding that fails to replicate in other labs.

Perhaps nowhere has this played out more clearly than in trying to help the courts decide whether someone is telling the truth. In August 2014, Michael Brown, a young, unarmed man in Ferguson, MO, was fatally shot by a police officer in broad daylight. Witnesses gave dramatically different accounts of the event. Did they perceive the events differently? Do they remember them differently? Are some of them lying?

Whether an account is accurate or not may have nothing to do with whether someone is intentionally trying to deceive. Memory is not perfect and what you retrieve is not a veridical rendition of what you saw. It is open to forgetting, failures to encode, distortions, biases and misattributions of the source. But, the fact that it is imperfect, and that memory for an event may contain both accurate and inaccurate aspects does not mean the person must be intentionally lying. It merely means that some portion of the memory is false. How we perceive the world is driven not only by what enters our brain through our eyes, ears, and other sensory receptors, but also by our expectations. Often called “top-down” influences, these expectations drive what we attend to and what we see and are themselves driven by how our past experiences have shaped us. Two individuals with different backgrounds can therefore honestly witness the same event and “see” it differently.

Even if two people witness an event in the same way, there is no guarantee that they will remember it the same way or that either of them will be accurate when tested later on. In fact, if you ask the same person to recount an event twice, separated by some amount of time, even key details often will be altered. For example, even residents of NYC had significantly different memories of the events of 9/11 after a year had passed, with 40% of the details differing between their original account and a later one.

These issues with memory come as no surprise to memory researchers. Even the first empirical memory researcher, Hermann Ebbinghaus, showed us how over half of the information we initially encode can be lost in the first hour. But, this is not the way most of us – those who aren’t memory researchers – think about memory. In a recent survey, when memory experts and other cognitive neuroscientists were asked if “human memory works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later,” they were absolutely unanimous in their disagreement with this statement. Yet almost two thirds of the general population believes it to be true. This general population is, of course, what largely makes up the juries in our courtrooms.
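Ebbinghaus’s forgetting curve is often summarized as a simple exponential decay. The sketch below uses that form; the functional shape and the stability constant are illustrative assumptions, not Ebbinghaus’s actual fitted data:

```python
import math

def retention(t_hours, stability=1.2):
    """Fraction of studied material retained after t_hours.

    Exponential-decay sketch of a forgetting curve. The form and the
    stability constant are illustrative assumptions, not Ebbinghaus's
    exact fitted function.
    """
    return math.exp(-t_hours / stability)

# With this stability value, more than half the material is gone
# within the first hour, and essentially none of the unrehearsed
# material survives a full day.
for t in (0.0, 1.0, 24.0):
    print(f"after {t:>4.1f} h: {retention(t):.0%} retained")
```

The steep early drop is the point: most loss happens almost immediately, which is exactly the opposite of the “video camera” intuition most jurors carry.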

Science can help us maximize the accuracy of information provided by eyewitnesses. The National Academy of Sciences is currently generating a report on approaches to maximize the validity of eyewitness identification. Eyewitness testimony can, and should, provide a source of information for the court, but even with the best techniques we should not expect eyewitness memory to be perfect and free of distortion. The gist and some number of details may be accurate, but many bits will be lost and others will be inaccurate. Confidence counts for something, but it can be readily inflated and can, at times, bear an inverse relationship to accuracy, particularly when memory is probed with something highly similar to, but not identical to, the studied item. So, if an innocent person in a lineup looks similar to the perpetrator, a witness can identify that person with high confidence. This confidence in their own accuracy will typically grow with time and with repeated reminiscing over the event.

We would all love it if neuroscience could distinguish between these true and false memories, but our basic understanding of how memory operates suggests that false memories are a by-product of the normal operation of the system. These distortions may even be adaptive. Memory is designed to guide our current and future behavior, not to look back and replay life moment by moment. Perhaps this is why functional brain scans cannot tell true from false memories with any kind of confidence. False memories are “memory illusions” akin to visual illusions. The powerful effects of visual illusions demonstrate how our brain takes shortcuts in how we see and false memories show us that our brain takes similar shortcuts in how we remember.

Memory is good enough for the job it evolved to do. It tunes how we perceive the world and allows “a whole active mass of organized past reactions or experiences” (Bartlett, 1932) to help steer us through the world. It’s a remarkable process that works so well, we assume it can be virtually perfect. But it’s not and neuroscience cannot currently tell us (and may never be able to tell us) when it has failed.

What scientists can do, and must do, is communicate this information to the public. Science has rigorously demonstrated that memory’s abilities are not in line with common beliefs, and yet these beliefs persist in the judicial system’s procedures and certainly drive the thought process of jurors. We know what our techniques can and cannot say with confidence. As the CSI Effect has shown, public perception of the capabilities of science influences courtrooms and jury rooms. Our constant exposure to over-inflated claims of what technologies like neuroimaging can do is leading to a form of collective false memory: an unreasonable expectation of what the technology can prove. We cannot, with conviction, tell true memories from false ones, just as we cannot tell honesty from lying or psychopath from “normal” simply on the basis of neuroimaging findings. When a conviction is at stake, we need to apply our imaging tools judiciously and to educate the public well on what they truly can and cannot say.

Further Reading:

The Ethical Brain: The Science of our Moral Dilemmas by Michael Gazzaniga

The Neuroscience of Memory: Implications for the Courtroom