Friday, October 01, 2004

Everyday Neuromorality

By Adina Roskies, Ph.D.

Neuroimaged views of our brains, aside from their medical value, could, in principle, pose challenges to our senses of freedom and moral responsibility, and render us easily manipulable. But we are not so frail, either personally or socially. Self-control, whether exercised in our individual choices or our informed responses to others’ agendas, is likely to continue to prevail.

Increasingly, scientists are observing and recording human emotions through neuroimaging. Such capability is causing widespread uneasiness, writes Adina Roskies, for it has the potential to result in a sort of demystification of the mental that makes freedom of the will seem impossible and threatens to leave us open to manipulation as never before. Relax, the author argues: Moral responsibility is fundamentally social, and self-control is what makes us free. Understanding cognitive function may cause us to revise, but will not force us to abandon, common notions of moral responsibility and control.

Imagine a world in which our every move is monitored by portable “brain cameras” —devices that can measure our brain activity throughout our everyday lives. During the morning commute, the machine documents our neural activity, cataloguing our reactions to our fellow drivers, which range from indifference to annoyance to road rage. When we stop at the bank, it measures brain function during our financial deliberations; at work, it records what goes on in our heads while we engage in the usual banter around the water cooler and when we hear in the board meeting that the other guy got promoted. At the grocery store, the machine measures brain activity correlated with seeing brands we favor, compared with those we do not, and it documents what happens when we deliberate over types of laundry detergent and ultimately pick, say, the generic brand. The data the machine records when we get home reflect the love we feel upon seeing our spouse, or perhaps the lingering resentment from a recent argument. As we lounge on the couch watching TV, the brain camera measures and logs our neural responses to commercials pitching everything from dog biscuits to pharmaceutical male-performance enhancements, and to shows ranging from the homilies of Oprah to the bawdy humor of South Park or the violent evening movie. 

Relax. This world I describe is a science-fiction world; no brain camera exists that can monitor the neural causes of our every move. Nor, despite claims of some panic-mongers, do we stand at the brink of such a world. However, in the past few decades we have witnessed the development of technologies that allow noninvasive imaging of neural activity, and they can be used—and, in experimental settings, already have been— to explore the brain bases of many mental phenomena. These include virtually the full range of human emotions: basic responses such as anger, fear, and happiness; social feelings such as empathy, indignation, and love; personal taste and the effects of marketing; and even high-level capacities such as economic decision-making and moral reasoning. Although the current understanding of these and other phenomena amenable to neuroimaging is crude and superficial, there is reason to suppose that someday we will have a reasonably good understanding of the brain activations underlying, for example, our senses of fairness and justice, feelings of greed and lust, and so on. 

Clearly, pervasive monitoring of brain activity raises a host of ethical issues, privacy among them. But let us remain grounded in the real world and assume that neuroimaging is used only with consent and for the purposes of achieving an understanding of brain function. Suppose that in general we will not know from direct observation what is happening in the brain of a given person when, for example, he or she displays anger in some everyday situation, but that we will know from scientific studies what neural activity typically underlies behavior of that sort. What kinds of ethical issues arise then? The prospect of being able to investigate the brain bases of complex cognitive tasks as well as the emotions that flesh out the contours of everyday life poses the following difficult question: How will scientific understanding of the brain bases of our capacities, traits, and behaviors affect the way we behave as individuals and a society? And how will it affect how we assess behavior in others? 

A widespread uneasiness seems to attend the prospect of understanding the biology of what makes us behave as we do, for such knowledge results in a sort of demystification of the mental. A neuroscientific understanding of behavior would be a functional characterization of its neural causal factors, reading more or less like this: “Behavior b is caused by neural activity of type a in brain regions x, y, and z; this activity subserves processing of types f and g.” Such an understanding of behavior is committed to the view that each of us is a causal physical system— a sort of machine. Insofar as the behaviors that interest us are reflections of our mental capacities, we are committed to the view that our mental life is, or is the result of, the workings of the physical machine that we are. Some fear that this demystification of mind would have terrible consequences. Most prominently, it makes freedom of the will seem impossible: If everything we do is just the result of causal processes, then we are not truly free. 

Let us examine two worries that stem from this perceived lack of freedom. One is the fear that if we are not free, then we are not morally responsible for our actions. The second is that knowledge of the physical processes underlying behavior will leave us open to manipulation.

 

RELIEVED OF MORAL RESPONSIBILITY?

Our folk notion of moral responsibility is closely tied to the twin notions of causation and freedom. It is generally thought that if a person did not cause an action or state of affairs, she cannot be held morally responsible for it. If she was indeed the causative agent but was not free—if she could not have done otherwise—we again believe she cannot be held morally responsible. Thus the worry is that if we are mere physical systems we can never do other than what we do, so moral responsibility is an illusion. Even more troubling, no one and nothing is responsible for our actions: If we do not control our own actions, they are determined by the laws of physics. And physical laws are not the kinds of things that can be held responsible for anything, at least not in any moral sense. Therefore, moral responsibility does not exist. 

Though troubling, this issue neither arises from, nor can it be addressed by, brain imaging. The problem of freedom of the will long predates investigation of brain function, and it hinges in part on whether we believe the physical world is all that exists and not on how the brain, or even physics, works. If we are materialists (that is, if we think that physical stuff is all there is), the problem plagues us whether all physical processes are deterministic or not. The dilemma is that if moral responsibility requires freedom, and if freedom requires the ability to do otherwise, and if we are physically determined systems, then we are not morally responsible for our actions. But if physical processes are not deterministic, then they must be random. Random events are not the kind of thing for which people can be held responsible. So, if our actions are due to random processes, it seems again that we cannot be held morally responsible for them. 

Despite what some may think, neuroimaging cannot provide evidence for or against determinism; it cannot determine the ultimate causes of behavior, nor can it rule out appeals to forces beyond the physical. Neuroimaging cannot even show causation definitively; it can only reveal the correlations between occurrence of certain patterns of brain activity and certain cognitive phenomena. Even with the best neuroscientific evidence imaginable, the question of whether the neural activity we see in brain scans is or is not determined will remain open. Though I believe that neuroscientific evidence does not bear on the question of freedom of the will, neuroimaging might nonetheless contribute to the popular impression that we are determined systems. 

If the person on the street, influenced in part by neuroimaging, becomes convinced that our behavior is determined, will the world devolve into moral anarchy? Such a scenario is quite unlikely. Although the seemingly irreconcilable tension between freedom and determinism has proved to be a longstanding philosophical puzzle, it has not much affected folk intuitions about moral responsibility. These intuitions have a remarkable degree of convergence and, surprisingly, they are usually independent of whether the agent in question could have done otherwise. Recent studies have found that subjects are overwhelmingly likely to maintain that an agent is morally responsible for actions undertaken intentionally, provided that the agent is a person, even in scenarios that are explicitly deterministic. This suggests that our intuitions about moral responsibility are more closely tied to a person’s role in causing an action than they are to freedom. If so, even a widespread and explicit recognition of the causal basis of our behaviors is unlikely to greatly affect our views on moral responsibility. The data also lend some support to the idea that the notion of moral responsibility is fundamentally a social one, one that applies to people in virtue of their role in society and their capacities as agents and not because they exercise some sort of metaphysical freedom. Given this understanding, it is not surprising that people judge agents to be morally responsible despite a stipulation of determinism. It will be interesting to further explore intuitions about responsibility and perhaps use data from psychology and neuroimaging to begin to construct a scientific vision of self-control and agency that is consistent with determinism. 

Worries about freedom also raise the prospect that we might come to view our or others’ behavior as unchangeable or not open to moral criticism. This, too, is unlikely. Instead, a more sophisticated understanding of responsibility, based upon self-control rather than freedom, would enable us to make more fine-grained judgments of responsibility. It could help us recognize that although we are causal systems, it may be within our control, properly understood, to change our responses to certain situations. In addition, although we can recognize that all behaviors are caused by brain activity, in certain situations we may find that people are exempted from responsibility or their culpability is mitigated, because the requisite neural underpinnings for self-control were lacking because of brain damage or malfunction. In summary, a neuroscientific understanding of cognitive function may cause us to revise, but it will not force us to abandon, common notions of moral responsibility and control. 

EASILY SUBJECT TO MANIPULATION?

A second type of worry stemming from the view that we are just physical causal systems is the fear that public recognition that we are such will lead to a debilitating sense of helplessness or loss of control. The prospect of a nation paralyzed by ennui or existential angst brought on by the sense of loss of agency is unrealistic. I have already argued that control and agency can be reclaimed even if freedom cannot. What is more, whether we are really free or not, we must live as if we are—in this we truly have no choice. 

A related and more reasonable fear is that if the workings of the brains behind our minds are understood, then we are sitting ducks for those who want to use that knowledge to control us: We will be left open to the threat of neural manipulation. This worry, I think, stems from a misunderstanding about the sort of information that brain imaging yields. Neuroscientific knowledge is descriptive: It allows us to correlate brain activity with behavior and enables us to figure out what sorts of brain activity are involved in various types of cognitive tasks. Descriptive knowledge, if fine-grained enough, may also be predictive. It may enable us to anticipate that a person exhibiting a certain pattern of brain activity will act in a particular way, or will engage in a particular kind of cognitive processing. But the ability to predict is not the same as the ability to control. Using neuroimaging to aid in understanding brain-behavior relationships does not allow us to control behavior any more than taking pictures at a busy intersection enables us to control the flow of traffic.

However, despite the recognition that knowledge of the neural basis of behavior does not allow us to control behavior, one can still imagine such knowledge to be a powerful tool for influencing it. Suppose executives of the megacorporation Sell-Mart come up with a clever plan: Scan the brains of subjects as they watch a variety of potential advertisements, with the intent of selecting and airing the ad that scores best on some measure—perhaps that it provokes the most activity in brain areas associated with arousal or pleasure. Suppose, in addition, that those areas are in fact correlated with people’s motivation to buy a product. If marketers understand what neural activity is correlated with motivation to buy and design their marketing strategies to elicit this type of activity, then are they not in effect ensuring that their advertising will motivate us to purchase the products they are peddling? In other words, will the understanding of brain function allow some parties, such as companies, to manipulate the average person, thus subjecting us to a novel form of coercion? 

Perhaps such a strategy would be effective, but it is essentially no different from methods already employed in advertising. All advertising attempts to manipulate behavior by manipulating preferences, and both preferences and resulting behavior are downstream effects of brain function. So advertising is just an indirect way of manipulating brain function. What differs in this scenario is the method used for selecting advertisements; with neuroimaging techniques, the marketer can place its bets more directly on neural data instead of on focus groups. 

Still, one may worry that if marketers have access to neural data, we may be somehow more compelled to buy the product advertised. Once we allow Sell-Mart a view into our brains, is there any way we can get out of this bind? We can, and it is the very way we have always managed to deal with advertisements. Sell-Mart is no more in control of our behavior, and we are in no less control, than before the advent of brain imaging. Recognizing that the advertisement, whether neurally savvy or not, is an attempt to manipulate our behavior is the first step in blocking this strategy. By being educated consumers—by knowing that advertisers are attempting to manipulate us—we can stay ahead of the game. Recognizing the ploy and forming the intention not to do what we are expected to do are also just changes in our brains. By thinking, we change the equation that the advertisers rely upon, for by thinking about things differently we literally change the pattern of neural firing in our brains. This point, which is generalizable, ties into the notion of self-control discussed earlier. 

A REAL SUBJECT OF ETHICAL WORRY

So do we have anything to worry about? Indeed we do. Let us return to the analogy of taking pictures at a busy intersection. Photographs do not control their subjects, but nonetheless photography has emerged as a powerful social tool. It can reveal secrets, and it can arouse feelings such as sympathy, patriotism, or horror, depending on the subject matter—say, a scene of war— and how it is portrayed. Similarly, we can expect that images of brain function will be capable of being used to sway opinions and persuade. 

It is here that I think neuroimaging becomes a real subject of ethical worry. Pictures have a persuasive force that other means of communication lack: The old adage “seeing is believing” accurately conveys the sense of certainty we have in what is before our eyes. And indeed the function of pictures, historically, has been to document. Taking pictures at a busy intersection might reveal who ran the red light and caused the accident. However, in this age of digital image-manipulation, no photograph can be assumed to accurately reflect reality. 

A similar skepticism should be the default attitude of all consumers of brain imaging, including the public, because the construction and interpretation of brain images is as much an art as a science. Unlike ordinary photographs, the brain pictures yielded by neuroimaging technologies are anything but a straightforward depiction or snapshot of neural activity. The relations between the brain and the raw data, and between the raw data and the final image, are complicated and flexible: Many inferential steps occur between brain activity and the picture that appears in publication, and numerous subtleties regarding the exact tasks and the analysis procedures used can play a large role in how the data ought to be interpreted. 

Given the complexity of brain function, and the difficulties inherent in conducting and interpreting imaging studies, we should be especially careful about accepting others’ assertions about the alleged take-home message of an imaging study. This is never truer than when the data are being used as part of an agenda—for example, as evidence in court or to divide a population according to predicted ability in some domain. If we take images and their interpretations at face value, we can be easily misled. The greatest ethical danger comes not from understanding the brain bases of behavior but from not understanding them; it comes from becoming too enamored with our ability to investigate the brain while failing to understand sufficiently well what the data mean. However, if we are careful, the knowledge of brain function that neuroscience yields will enhance our lives and will not lead to negative changes in our everyday lives, in the ways we behave, or in the ways we evaluate other people’s behavior. 

Finally, fully recognizing the incredible promise of new scientific technologies should not prevent us from bearing in mind the limits of science. Neuroscience yields only descriptive, not prescriptive, knowledge. It gives us an understanding of how things work but does not address fundamental normative questions about what is best, what we ought to do, or what is morally right. These fundamental ethical questions are not questions science can answer. Yet they are essential questions for us to ask as humans, as members of societies, as citizens of countries and a global economy, and as custodians of the earth. Neuroscience, even the neuroscience of ethics, may one day be part of our everyday life, but it will not replace the need for us to pursue everyday ethics.



About Cerebrum

Bill Glovin, editor
Carolyn Asbury, Ph.D., consultant

Scientific Advisory Board
Joseph T. Coyle, M.D., Harvard Medical School
Kay Redfield Jamison, Ph.D., The Johns Hopkins University School of Medicine
Pierre J. Magistretti, M.D., Ph.D., University of Lausanne Medical School and Hospital
Robert Malenka, M.D., Ph.D., Stanford University School of Medicine
Bruce S. McEwen, Ph.D., The Rockefeller University
Donald Price, M.D., The Johns Hopkins University School of Medicine
