Does the world really need a new term to help cope with the discoveries of brain science? We have a fine old word, ethics, to denote systems of moral principles and the discipline that deals with them. We have the more recent term, bioethics, to denote the field that examines the ethical implications of medicine and of biological research. This issue of Cerebrum highlights an even newer coinage, neuroethics, and illustrates its significance as a call to the public, as well as to physicians, scientists, and policy makers, to address the profound moral dilemmas raised by progress in the brain sciences.
Diverse areas of law, science, and medicine are already taking on some of the issues that might be found within the purview of neuroethics: personal responsibility in the face of some compulsion or mental illness, competence to make decisions or to stand trial, or the coerced use of psychotropic medication, for example. But emerging technologies and new understandings of brain function have raised new issues and put old ones in a new light.
The matters described in this issue, ranging from brain privacy to enhancement of mood or memory, should be matters of vigorous discussion, and ideally these discussions will mature before continued scientific advances force societies to respond.
While advances in many fields in the life sciences raise questions of ethics and policy, the brain has a special status: it is the organ of the mind and the substrate of all our thoughts, emotions, and behavior. Issues raised by progress in brain research bring to the fore concerns about our identities, our sense of agency, and what may be our last bastion of privacy, our own thoughts. Such weighty issues deserve the focus created by the concept of neuroethics; the time for broad ethical discussions related to brain science is upon us.
GETTING AHEAD OF OURSELVES
Increasingly, we can peer into the living, working human brain using the tools of noninvasive brain imaging. Positron emission tomography (PET) uses a radioactive tracer to detect metabolic or circulatory markers of brain activity. Magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) probe the structure and activity of the brain based on strong magnetic fields and radio waves. Magnetoencephalography (MEG) detects the minute magnetic fields produced by the workings of the brain. Optical imaging using infrared light promises to give us a look at the workings of the cerebral cortex and appears to be safe enough to use in studying young children. Transcranial magnetic stimulation (TMS) can activate regions of the brain through the intact skull, thereby eliciting tactile or visual experiences or simple behaviors, such as the movement of a limb.
While advances in these technologies during the past decade have been marvelous, they remain laboratory tools that give meaningful results only in carefully controlled medical or experimental settings. It will be years before imaging-based tests might be developed and validated for use outside these settings. Nonetheless, the desire to employ imaging technologies even before they are validated for other uses may prove irresistible in a world increasingly focused on issues such as security. (One need only reflect upon the widespread use of the polygraph or “lie detector,” a notoriously inaccurate tool, to see the worrisome possibilities.) Without creating a false sense of emergency or hyping what are still technological works in progress, but also without waiting for the broad deployment of imaging technologies outside of labs or clinics, we need to start examining how such technologies might be used and the concerns that they raise.

The problem of premature deployment is only the beginning; some lines of imaging research already raise complex ethical issues. Recent studies suggest that imaging can detect unconscious biases toward other people. Other work has begun to examine the neural substrates of intelligence or specific abilities. Yet other research (discussed in this issue) explores the use of imaging to tell whether someone is telling the truth. The investigations into bias, to take just one example, illustrate profound ethical, social, and policy questions.
During the past two decades, cognitive neuroscience has posited that for some purposes, our brains can be described as modular, composed in part of multiple, interacting cognitive and emotional systems. What is more, the work of many of these systems is inaccessible to consciousness. For example, as I sit at my computer and hit the keys, I am engaged in a voluntary, but relatively automatic, behavior. Indeed, I do better when I do not look at the keys or think about them. At a deeper level, my brain is calculating the necessary forces and trajectories of my fingers, exquisitely controlling the muscles of my hands, while maintaining the posture of the rest of my body. No matter how effortfully I introspect, I cannot intuit the workings of the circuits that are controlling these basic motor behaviors, and that is just as well if my movements are to be rapid and fluid.
Likewise, much processing of emotional reactions is rapid and automatic. For example, at the first sight of danger, fear circuits are preparing the body to respond, even before the situation is consciously recognized. Imagers are now studying unconscious responses to social situations based on a long history of psychological experiments (for an accessible account of applications of this work see Banaji et al., 2003¹). Imaging is taking the salience of this work to a new level. In one of the most controversial examples, experiments in several laboratories have shown that when subjects are shown pictures of unfamiliar faces, many activate a brain structure called the amygdala depending on the race of the person depicted. The amygdala is involved in, among other things, processing emotions such as fear and anger. When combined with other work in psychology, these imaging data suggest a negatively biased, rapid emotional reaction to people of races other than one’s own. Of course, people whose amygdalas activate may not ultimately behave in a biased way. The cerebral cortex may “override” the initial emotional responses before they lead to any action or even rise to the level of consciousness.
What kinds of ethical issues might be raised by such work? First, this type of information could be applied prematurely in a variety of social situations ranging from the workplace to courtrooms where, for example, there might be attempts to apply it to jury selection. If the work stands up, the ethical questions become knottier, leading perhaps to invasions of the privacy of people’s thoughts. Moreover, the experiments themselves raise uneasy questions. The subjects being tested might not be conscious of their own rapid emotional responses that correlate with amygdala activation. Indeed, a subject may not be aware of any bias at all, because of the action of higher brain centers. The results of participation in such an experiment may shake a subject’s self-concept (am I really a good person if my amygdala activates? What is my “true self”? Is there one “true self”?). And one can only imagine the political applications to which such scientific investigation might be put prematurely. Failure to face these issues head-on with good scientific information and with humility for what remains unknown can lead only to misinterpretation of this and many other significant imaging results as they emerge.
Imaging studies to characterize individual differences are just beginning; this is a field that will confront us with challenging ethical concepts. Already the first papers are appearing correlating brain imaging results with human capacities such as fluid intelligence,² while other work has explored issues such as criminal behavior. Neuroethics can profit from recalling the early failures to carefully interpret and ponder the ethical dimension in behavioral genetics, a field in which early results were vastly overinterpreted and used, in part, to justify truly terrible policies and acts, most notably but not exclusively in Nazi Germany in the mid-twentieth century. The revulsion produced by the history of the eugenics movement then impeded the development of the science investigating the genetic contributions to behavior, for example slowing genetic research into serious mental disorders in Germany and perhaps in other countries.
The history of behavioral genetics is too complex to review here, but similar issues touch neuroscience at many points. For example, neuroscientists are now studying the brain’s role in aggression and violence, just as, previously, behavioral genetics sought a gene “for” aggression, or even criminality. Research to find such a gene was often cast in terms of benign preventive interventions, but critics saw a risk of lifelong stigmatization (if not surveillance or preventive detention) or the foreclosure of educational or employment opportunities. It turned out the science was off base: while, in aggregate, genes have a powerful influence on behavior, there is no single gene “for” aggression—or intelligence or schizophrenia. Rather, behavioral traits, including common mental illnesses, appear to result from the interaction of many genes, acting together with developmental and environmental factors. It may even be that the combinations of genes contributing to a given behavioral trait differ from population to population. Thus, even if one day we all find ourselves with our genomes on a chip in our wallets, our genotypes will provide only probabilistic information about our behavior.
This brings us back to neuroethics, because while our genes play a critical role in building our brains, it is ultimately our brains, not our genes, that produce our thoughts, emotions, and actions. As powerful as genetics is likely to prove, especially in our approach to mental illness, it is neuroscience and technologies such as imaging that should ultimately tell us much more about our behavior, desirable or not. For example, an analysis of our genes might one day tell us that we have a certain level of risk for depression or for social anxiety or perhaps for addiction-like behaviors. Ultimately, as technologies mature, more refined probabilities may be detected in our brains. Concerns about misuse of genetic information led the Human Genome Project to make a concerted effort to address ethical, legal, and social issues as it moved forward. It is timely, through focused discussion, through meetings and publications, and through grant programs that encourage research, to engage the problems raised by use and possible misuse of the science of the brain.