Imagine that you have been arrested. Detectives come to your cell and demand your fingerprints, your blood. You do not want to give them these things, but there is nothing you can do about it. Then the detectives demand your thoughts, your testimony. You resist, and you are right to do so.
According to the U.S. Supreme Court’s Fifth Amendment jurisprudence, self-incriminating evidence that is not “physical,” such as your thoughts and your testimony, is protected (Schmerber v. California, 1966). The brain, if not the body, is safe—but is not the brain a part of the body?
The basic idea of mind-body dualism is that the mental and the physical, while equally real, cannot be reduced to one another. A thread of dualism is evident in U.S. law (Fox & Stein, 2015). Recent advances in neuroscience, however, have influenced thinking on dualism and are precipitating changes in the realms of ethics and law. Specifically, it has been argued that the distinction between body and mind is fallacious (Farah, 2005) and that dualist notions in the law are obsolete (Fox & Stein, 2015). Current neuroscience technology seemingly allows for determinations of whether an individual’s thoughts indicate that her behavior is ethical or unethical, lawful or unlawful, and to what extent. These developments have led some, in turn, to sound ethical alarms concerning potential misuse of the methods.
In this paper, I argue that the dualist landscape has been poorly surveyed and that the significance of neuroscientific advances for legal decision making has been mischaracterized. My claim is that, while brain scans have much epistemic power, they do not collapse all forms of mind-body dualism. Specifically, we do not have the potential for brain scans that yield proof but rather brain scans that yield new forms of evidence. As a result, the gains from neuroscience are less robust than has been widely proclaimed, and the privacy concerns attendant to neuroscientific technology are less dire than has been widely declaimed.
Descartes (1641/1996) famously argued that there are two kinds of substances: matter and mind. This conception, which is called “substance dualism,” is distinct from “property dualism.” For property dualists, matter and mind are not necessarily distinct substances, but they do possess properties that are not coterminous. As Chalmers (1996) suggested, consciousness has a qualitative aspect that seemingly is above and beyond the physical states of the brain. Consider Jackson’s (1982, 1986) Gedankenexperiment: Mary has spent her entire life in a black-and-white room containing only a black-and-white TV. But Mary is a scientist, and she knows all the physical information there is to know about what goes on when a person sees a colored object. She knows how colors stimulate the retina and how they are processed by the central nervous system. The question is, when Mary eventually leaves the room and sees colors for the first time, will she learn anything new? Jackson (1982) argued that she will—“It seems just obvious that she will learn something about the world and our visual experience of it” (p. 130)—implying that there are truths beyond physicalism.
Dualism in the U.S. legal system
A thread of dualism can be traced through U.S. law. As mentioned above, it is evident in the law of compulsion, where, according to Schmerber v. California (1966), “exclusively mental and not physical processes” are protected (Fox & Stein, 2015, p. 122). The thread of dualism also can be found in the treatment of criminal behavior, where conviction on most offenses requires both actus reus (literally, “guilty act”) and mens rea (“guilty mind”). Finally, the U.S. Supreme Court has distinguished between body and mind in the realm of tortious harms. In Metro-North v. Buckley (1997), the Court denied a plaintiff compensation for anxiety and despair caused by asbestos exposure and the subsequent knowledge that he faced an increased risk of developing serious illnesses. The Court’s reasoning rested in part on the notion that emotional injuries, while no less real than physical ones, are “far less susceptible to objective medical proof” (Metro-North v. Buckley, 1997, p. 434, quoting Consolidated Rail Corporation v. Gottshall, 1994, p. 552).
Neuroscience and the law
A primary idea motivating the current interest in neurolaw is the coarse but exciting one evident in popular culture (e.g., see Steven Spielberg’s 2002 film, Minority Report, or George Orwell’s 1949 novel, Nineteen Eighty-Four). The idea is that once the fundamental question in cognitive neuroscience is answered, that is, once we know the neural correlates that code higher-level human cognition, we will be able to circumvent many of the evidentiary difficulties that lie at the heart of the legal system. Through functional magnetic resonance imaging (fMRI) decoding, we could learn defendants’ and witnesses’ thoughts and use this information, first, to establish mens rea and, second, to prevent future unlawful behavior.
Mens rea is a difficult piece of the prosecutorial puzzle, both because mental states typically must be inferred from behavioral data and because the law distinguishes among the several mental states that fall within the mens rea ambit. In the sentencing phase of a 2009 murder trial, fMRI scans were introduced as evidence that the defendant exhibited atypical brain activity (Hughes, 2010), activity similar to what had been observed in individuals prone to psychopathy, with the suggestion that psychopathic behavior may be less intentional than supposed (Harenski, Harenski, Shane, & Kiehl, 2010).
The discussion of mens rea often is a discussion of degrees: was the crime committed “knowingly” or “recklessly”? Vilares and colleagues (2017) used fMRI to scan forty participants who faced the hypothetical decision of whether to carry a suitcase with unspecified “valuable” content across a country border (p. 3223). The researchers were able to distinguish—at higher than chance but far from perfect rates—participants who (in thought) knowingly transported drugs from those who recklessly transported them.
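To make concrete what “higher than chance but far from perfect” decoding amounts to, the following sketch runs a toy nearest-centroid classifier, with leave-one-out cross-validation, on simulated activation patterns. Everything here is hypothetical: the data are random draws rather than fMRI features, and the classifier is far simpler than the multivariate methods used by Vilares and colleagues (2017); the sketch only illustrates the statistical character of such decoding.

```python
# Illustrative sketch only: synthetic "activation patterns," not real fMRI data.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_voxels = 40, 50

# Two simulated mental-state classes ("knowing" vs. "reckless") whose
# mean patterns differ only weakly, so decoding is imperfect by design.
signal = 0.12 * rng.standard_normal(n_voxels)
knowing = rng.standard_normal((n_per_class, n_voxels)) + signal
reckless = rng.standard_normal((n_per_class, n_voxels)) - signal

X = np.vstack([knowing, reckless])
y = np.array([0] * n_per_class + [1] * n_per_class)

def loo_accuracy(X, y):
    """Leave-one-out cross-validated accuracy of a nearest-centroid rule."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i  # hold out pattern i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        hits += int(pred == y[i])
    return hits / len(y)

acc = loo_accuracy(X, y)
print(f"decoding accuracy: {acc:.2f}")  # typically above chance (0.50), well below 1.00
```

The point of the exercise is that such a decoder outputs a statistical tendency across trials, not a readout of any single mental state, which is precisely the evidentiary posture argued for below.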
Lastly, neuroscience is poised to solve the inscrutability of mental harms described by the Metro-North Court: it has rendered mental distress nearly as susceptible to objective medical proof as physical distress. Not only have researchers made progress in identifying the structures of the brain that are responsible for pain perception (Brooks & Tracey, 2005), they have used fMRI to predict pain intensity (Wager, Atlas, Lindquist, Roy, Woo, & Kross, 2013) and to distinguish between painful and non-painful thermal stimulation (Brown, Chatterjee, Younger, & Mackey, 2011). Recently, a team of researchers was able to use fMRI decoding to determine whether someone was experiencing chronic low back pain (Ung, Brown, Johnson, Younger, Hush, & Mackey, 2014).
Ramifications: The obsolescence of dualism and the importance of privacy protections
It is widely thought that soon, given increasingly precise technology, we will be able to move from neural-experiential correlation to neural-experiential causation. The reason we will be able to make this move is that body and mind are composed of the same stuff—physical matter—and the neuroscientific task is the rather clerical one of linking neural activation to subjective experience. This is one ramification of the advances in neuroscience: the proclamation that dualism is an antiquated notion that should be scrubbed from scholarly thought. As Fox and Stein (2015) wrote, “[T]he divorce of mind from body is a fiction that distorts the doctrines of harm, compulsion, and intentionality and that serves no redeeming value sufficient to justify its presence” (p. 107).
Second, and as Justice Broussard of the Supreme Court of California wrote, “If there is a quintessential zone of human privacy it is the mind” (Long Beach City Employees Assn. v. City of Long Beach, 1986, p. 944). Embracing this notion, theorists in both lay (Gorman, 2012) and academic (Boundy, 2012; Tong & Pratte, 2012) settings have posited that brain scanning opens the possibility for grave privacy intrusions. In a Hastings Law Journal article, Boundy (2012) wrote, “Because of the invasiveness of this technology, it is imperative that any use be subject to the most stringent procedural safeguards” (pp. 1643-1644). Such technology has been called a “potential tool for evil” (Gorman, 2012, quoting the developer of a brain scanning device), one that might be exploited by governments and corporations (Sahakian & Gottwald, 2017).
Support for the legal parsing along dualist lines
The idea that all neurolaw has to do is move from correlation to causation is problematic in that it fails to account for an ongoing debate in philosophy of mind. Many of the above scholars frame the discussion as taking place between themselves and substance dualists, whom they characterize as doubting that patterns of neural activation cause experience. As Poldrack (2017) wrote in Nature, “[O]ne of the fundamental problems in lay thinking about neuroscience [is] what I often call folk dualism. This is the idea (crucial in legal applications of neuroimaging) that there is somehow a difference between brain and mind that is relevant to understanding people’s actions” (p. 156).
However, there is a third way besides Poldrack’s and that of substance dualists. As far back as Locke (1688/1959), it has been argued that consciousness—and, by extension, mind reading—requires special forms of knowing and access that are limited to the subject’s internal perspective. As Nagel (1974, 2012) argued, purely physical processes lack the essentially subjective character of conscious experience; and, if conscious experience is veridical, then the physical world includes more than can be described by neural firings, neurochemistry, and other physical processes. These processes characterize a space- and time-bound world, not how that world appears from a particular perspective, which is essential to conscious experience. According to Nagel (2012), these physical processes, while intimate with consciousness, likely are not alone responsible for phenomenological experience.
The current state of neuroscience suggests that those like Nagel (2012) and Chalmers (2010) are correct: if neuroscience continues along its current trajectory, it likely will never yield proof of intentions or pain, as the subjective mind is simply out of reach for non-subjects (see Gilead, 2015). What neuroscience will yield is ever stronger evidence. Take the example of pain and suffering as pleaded in a tort claim. The plaintiff may appear to be in pain: she may walk with a limp, grimace when she sits or stands, and break out in a cold sweat. Her neural activations also may suggest that she is in pain: the patterns may be statistically similar to those observed in others who report pain and suffering. Even though, as with Jackson’s (1982) Mary example, there is no reason to think that these sets of things—limp, grimace, sweat; neural activations—yield the subjective, phenomenological experience of the plaintiff’s pain, they can be used as evidence, albeit with varying degrees of accuracy.
If we accept that neurotechnical testimony is less valuable than widely thought (it is mere evidence2) and also potentially less invasive (the mind is not accessed), two questions emerge. First, how much information are we willing to give up about our neural activations—information that, like all information, is subject to theft and misuse—for the sake of additional evidence? I suspect the answer is not much. At the same time, given that what we care deeply about protecting—the mind, subjective experience, phenomenological experience—is not at hazard, how much information concerning electrical and chemical processes in our brains are we willing to give up? I suspect the answer is quite a bit.
The concerns here are not much different from the privacy concerns in, say, a standard Fourth Amendment search-and-seizure matter or in a Fifth Amendment testimonial matter. Moreover, at trial, neuroscientific evidence must be vetted as all evidence is vetted, with attention paid to relevancy, reliability, validity, false positives, and the standards promulgated in Daubert (for a contextual example, see Miller, 2010), Frye, and Federal Rules of Evidence 403 and 702, among others.
In this paper, I argued that much current thinking on neuroscience’s impact on criminal justice and the ethical concerns it raises is misguided. This is not to say that neuroscience lacks value, just that current emphases should be shifted. Neuroscience shows great potential for providing valuable statistical evidence. For example, scanning of a defendant’s prefrontal cortex (PFC) might indicate a 75% chance that it was 15% more difficult for him to refrain from responding in anger to an insult and striking the victim. This is the type of evidence that jurors and judges are ill equipped to evaluate—indeed, for some time, scholars at the nexus of law and psychology have been aware of the difficulty of evaluating probabilistic evidence (e.g., see Thompson, 1989)—but it is evidence that computers can process fairly and consistently. Along these lines, the future of neuroethics and neurolaw will entail rethinking our notions of legal decision making in the light of these new forms of evidence.
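The arithmetic that makes such probabilistic evidence tractable for a machine, and slippery for a juror, is a simple application of Bayes’ rule. The sketch below uses entirely hypothetical numbers (the base rate and diagnostic rates come from no actual study) to show how a prior belief would be updated by a scan result of stated sensitivity and false-positive rate.

```python
# Illustration with hypothetical numbers; no real base rates are implied.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E): update a prior P(H) on hypothesis H given evidence E,
    using the probability of E under H and under not-H."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Suppose a scan pattern appears in 75% of people with impaired impulse
# control but also in 30% of people without it (both rates invented),
# and we start from an even prior.
p = posterior(prior=0.5, p_e_given_h=0.75, p_e_given_not_h=0.30)
print(round(p, 3))  # 0.714
```

The computation is mechanical, yet human reasoners predictably neglect the false-positive rate and the prior when performing it informally (the difficulty Thompson, 1989, documents), which is why such evidence is better processed algorithmically and then presented transparently.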
2More precisely, neurotechnical testimony is not exceptional from other types of scientific evidence.
Boundy, M. (2012). The government can read your mind: Can the constitution stop it? Hastings Law Journal, 63, 1627–1644.
Brooks, J., & Tracey, I. (2005). From nociception to pain perception: Imaging the spinal and supraspinal pathways. Journal of Anatomy, 207(1), 19-33.
Brown, J. E., Chatterjee, N., Younger, J., & Mackey, S. (2011). Towards a physiology-based measure of pain: Patterns of human brain activity distinguish painful from non-painful thermal stimulation. PLoS ONE, 6(9).
Chalmers, D. (1996). The conscious mind: In search of a fundamental theory. New York: Oxford University Press.
Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17, 7-65.
Consolidated Rail Corporation v. Gottshall, 512 U.S. 532 (1994).
Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993).
Descartes, R. (1996). Meditations on first philosophy. (J. Cottingham, Trans.). Cambridge: Cambridge University Press. (Original work published 1641)
Farah, M. J. (2005). Neuroethics: The practical and the philosophical. Trends in Cognitive Sciences, 9(1), 34–40.
Fox, D., & Stein, A. (2015). Dualism and doctrine. In D. Patterson & M. Pardo (Eds.), Philosophical Foundations of Law and Neuroscience (pp. 105-136). New York: Oxford University Press.
Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).
Gilead, A. (2015). Can brain imaging breach our mental privacy? Review of Philosophy and Psychology, 6, 275–291.
Gorman, C. (2012, July 9). The mind-reading machine: Veritas scientific is developing an EEG helmet that may invade the privacy of the mind. IEEE Spectrum. Retrieved from http://spectrum.ieee.org/biomedical/diagnostics/the-mindreading-machine
Harenski, C. L., Harenski, K. A., Shane, M. S., & Kiehl, K. A. (2010). Aberrant neural processing of moral violations in criminal psychopaths. Journal of Abnormal Psychology, 119(4), 863-874.
Hughes, V. (2010). Science in court: Head case. Nature, 464(7287), 340-342.
Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly, 32(127), 127-136.
Jackson, F. (1986). What Mary didn’t know. The Journal of Philosophy, 83(5), 291-295.
Locke, J. (1959). An essay concerning human understanding. New York: Dover. (Original work published 1688)
Long Beach City Employees Assn. v. City of Long Beach, 41 Cal. 3d 937 (1986).
Miller, G. (2010). Science and the law: fMRI lie detection fails a legal test. Science, 328(5984), 1336-1337.
Metro-North v. Buckley, 521 U.S. 424 (1997).
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–456.
Nagel, T. (2012). Mind and cosmos: Why the materialist neo-Darwinian conception of nature is almost certainly false. New York: Oxford University Press.
Norman, K. A., Polyn, S. M., Detre, G. J., & Haxby, J. V. (2006). Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9), 424-430.
Poldrack, R. (2017). The risks of reading the brain. Nature, 541(7636), 156.
Sahakian, B. J., & Gottwald, J. (2017). Sex, lies, and brain scans: How fMRI reveals what really goes on in our minds. New York: Oxford University Press.
Schmerber v. California, 384 U.S. 757 (1966).
Thompson, W. C. (1989). Are juries competent to evaluate statistical evidence? Law and Contemporary Problems, 52(4), 9–41.
Tong, F., & Pratte, M. S. (2012). Decoding patterns of human brain activity. Annual Review of Psychology, 63(1), 483–509.
Ung, H., Brown, J. E., Johnson, K. A., Younger, J., Hush, J., & Mackey, S. (2014). Multivariate classification of structural MRI data detects chronic low back pain. Cerebral Cortex, 24(4), 1037-1044.
Vilares, I., Wesley, M. J., Ahn, W.-Y., Bonnie, R. J., Hoffman, M., Jones, O. D., … Montague, P. R. (2017). Predicting the knowledge–recklessness distinction in the human brain. Proceedings of the National Academy of Sciences, 114(12), 3222-3227.
Wager, T. D., Atlas, L. Y., Lindquist, M. A., Roy, M., Woo, C.-W., & Kross, E. (2013). An fMRI-based neurologic signature of physical pain. New England Journal of Medicine, 368(15), 1388-1397.