Dramatic advances in neuroscience have created fast-rising expectations for improvements in the health and well-being of people around the world. But fulfilling this promise is proving to be neither quick nor easy, and not for scientific reasons alone. Alan Leshner urges that neuroscientists not only address the
significant challenges of their own investigations but also be willing to participate in ethical and legal dialogues with colleagues and non-colleagues alike.
The dramatic progress made by neuroscience has generated both rich promise and quickly rising expectations for improvements in the health and well-being of people throughout the world. However, moving those advances from the research bench to actual clinical practice will require more than just scientific and technological progress. The unique attributes of the brain as an organ system and its centrality to our concept of our own humanity raise an array of ethical issues that must be resolved in an open dialogue involving both the scientific community and the wider public before we will see widespread application of the fruits of neuroscientific progress.
WHAT’S SO SPECIAL ABOUT BRAIN RESEARCH?
At a minimum, the same overarching ethical standards should be applied to brain research as to any other area of clinical work: prevent harm, protect the vulnerable, and ensure fairness and equity of access to the benefits of the research. And, of course, the brain is not the only organ system whose integrity is essential to a healthy life.
However, the brain is the most complex organ in the body. No other system has so many roles and consists of so many interoperating parts—the brain’s billions of interconnected cells and circuits. No part of the brain is an “island”; individual parts of the brain neither act alone nor appear to be involved in only one function. This complexity makes studying and eventually intervening effectively in the operations of the human brain among the most difficult challenges facing the scientific enterprise. The interconnectedness of its parts and the multitasking nature of its individual structures mean that any intervention, however small or precise we try to make it, is unlikely to have a single consequence. Therefore, the decision to in any way alter brain structure or activity involves potentially great cost-benefit tradeoffs.
What makes the brain so special is that it is the seat of the mind. At least, most people think of the mind when they hear or think about the brain. Although the brain does much in addition to generating what we think of as the mind, mental activity is so central to our very humanness that the relationship between brain and mind always haunts any thoughts of normal or abnormal brain function. The brain is the essence of the “self,” and, therefore, doing anything to the brain potentially alters one’s essential being. This close association among brain, mind, and self colors any discussion of real or imagined brain interventions, whether to enhance a normal brain or to correct neural malfunction.
In addition, although behavior is determined by an interaction among one’s genes, one’s personal life history, the environmental context in which the behavior will occur, and other aspects of an individual’s biological state, the brain is the final common path for the experience and expression of all mental activity. For that reason, any intervention in our brains raises the specter not only of causing physical disability, but also of changing our cognition, emotion, or even our personalities.
DO WE REALLY WANT TO KNOW?
The idea that some possible research discoveries might best be left unmade is a concept almost uniquely relevant to the issues surrounding research on human biology and behavior. Donald Kennedy, editor of the journal Science, took up this question in a workshop on neuroethics supported by the Dana Foundation and organized by Stanford University and the University of California San Francisco in 2002 (see readings list). His remarks caught my attention because it is such an unusual question for scientists to ask.
In almost every other area of science, we would immediately answer, “Of course we want to know. We want to know everything!” However, society at large has at times seemed at best ambivalent about what it wants to know about human behavior and its relationship to biology. The best-known cases surround issues such as the neurogenetics of intelligence or of violence. Earlier attempts by scientists to tackle these issues were met with a great hue and cry from many quarters, with people concerned primarily about how the information might be misused to stereotype or stigmatize individuals or groups.
I believe these negative reactions reflect scientists’ failure to accurately communicate the studies and their potential implications as much as they do misuse of or overgeneralization from scientific findings. For example, as I will discuss further, the fact of a possible genetic predisposition to greater or lesser intelligence does not automatically imply that members of one or another racial or ethnic group will be more or less intelligent. Nevertheless, findings on the genetics of intelligence have too frequently been interpreted that way. The same has been true for studies of genetic contributions to levels of aggressiveness or violence.
Another form of the same question was posed in a recent report of the President’s Council on Bioethics titled Beyond Therapy.* The Council members grouped ethical questions around behavioral biology and similar domains into two categories. One set relates to interpersonal issues of preventing harm and protecting vulnerable people (also discussed more below). The second set is of a higher order, having to do with our sense of our own humanity. The Council raised for discussion, without coming to a clear conclusion, the issue that, as we learn much more about genetics and about the brain and how to use our findings to intervene, we may be at risk of “fooling with Mother Nature” or “playing God.” For example, the Council suggested we ponder whether we may be at risk of doing “unnatural things” when we think about brain-based behavioral enhancements.
This issue, of course, can play out at the individual, group, or even societal level. Thus, we also need to think about whether we would be at risk of changing our entire society.
Whatever the origins of past problems or future concerns, I share the belief of the scientific community that we have an obligation to apply the full power of science to solving the toughest problems facing humanity, even if they are potentially contentious. And as scientists and clinicians, we must do whatever we can to relieve pain and suffering. However, I also believe that, when entering these kinds of domains, scientists have a duty to be extremely sensitive to the potential implications and uses of the results of their work, and that they need to engage fully with other members of the public, including philosophers, clergy, and ordinary citizens, in developing a moral consensus and guidelines about how we will proceed.
MOVING FROM ANIMAL MODELS TO HUMANS
Discussions such as this always point out that humans are animals, and therefore much of what we have learned from studies of animal model systems ought to be generalizable to humans. In many cases, that is true. But many animal models of complex human conditions, such as mental illnesses or addictions, are actually quite weak; they only very roughly approximate the human condition. Similarly, many animal models of complex human behaviors yield findings that, at best, generalize only minimally to humans, either because the human behavior the animals are supposed to model is really much more complex or because the models are only superficial approximations of the condition thought to underlie the behavior. The same is true for in vitro models, which are not whole organisms but cells in a Petri dish or test tube.
This inability to generalize readily from animal models to humans raises a set of issues concerning whether and how to proceed from basic knowledge derived in animal or in vitro studies to tests of its relevance to humans. What criteria should we use to decide when basic-science findings are strong enough to be tested in humans? How do we decide the appropriateness or relevance of the models used to prepare for studies in humans? In seeking to replicate animal findings through noninvasive, observational studies of humans—using functional magnetic resonance imaging (fMRI), for example—we might set the bar relatively low, but when modifying brain function might be a part of the study, the decision becomes much more difficult.
An interesting recent article by Vivienne Parry in the British Guardian newspaper illustrates the concern well by pointing out some examples where basic science-derived approaches to neurotherapeutics—mostly developed in animal or in vitro models—were prematurely tested in human subjects, with catastrophic results. Parry argues that the great promise of stem cell research, for example, for alleviating the disability of diseases such as Parkinson’s could lead us to premature testing in humans. But at some point, if we are going to reap the benefits from basic research, we will need to take the leap. I believe Parry’s point is a good one and that clear standards need to be set now, because the basic science advances are coming at a rapid pace.
PITFALLS OF GENERALIZING FROM THE MANY TO ONE
One aim of neuroscience research is to improve our ability to predict future health conditions and behavior and our knowledge of how to modify both. However, scientific conclusions are often drawn from averages characterizing relatively large groups of subjects; they may not hold for individual cases within the group. It thus becomes important to avoid prejudging or stigmatizing a particular person merely because he or she belongs to a distinctive group; the individual may or may not have the characteristic in question.
This is especially true in interpreting studies of the genetics of behavior. Most behavioral traits such as intelligence, emotionality, or aggressiveness vary in intensity along a continuum, and genes predispose people to behave more or less intensely in response to stimuli in their environment. It is not the case that an individual is aggressive or not or that he or she is emotional or not; rather, genes predispose people to be more or less aggressive or more or less emotional. Moreover, genes do not doom an individual to behave in a particular way. They are only one of many things that determine what a person is like, including the individual’s personal history and the environmental context and triggering stimuli.
The same could be said of studies of other kinds of so-called predisposing or risk factors, such as early nutrition, exposure to drugs or alcohol, infections, or parental behaviors. In each case, when one looks at the whole group under study, one can see an overall effect. However, not all individuals within the group will necessarily share the outcome that characterizes the average of the subjects. The effects of prenatal cocaine exposure on later cognitive development are a powerful example that has received much public attention. The latest data show that, when the subjects are averaged together, the effects are not very great, although some individuals are affected very dramatically and others appear to be unaffected. Caution in generalizing from group data to the individual is especially critical in studies of brain and behavior, because any improved ability to predict behavior will likely be of great interest to law enforcement, employers, insurers, and schools. Such entities may use that knowledge in ways not always in the best interests of individuals.
Clinical brain research is subject to the same ethical guidelines and regulations as any other field of research with humans. These include regulations governing conflicts of interest, confidentiality, data and safety monitoring of clinical trials, institutional review boards, and informed consent. These issues have been deeply considered and extensively written about, and a good source of current thinking and information on regulations and guidelines can be found at the Web site of the Office for Human Research Protections of the Department of Health and Human Services (http://www.hhs.gov/ohrp).
But many human subjects in clinical brain research are in one way or another particularly vulnerable, and protecting them therefore requires special consideration. The general rules are not adequate.
Perhaps the most widely discussed issues have to do with informed consent, since many clinical brain research subjects are either cognitively or emotionally impaired. They might not really understand what they are consenting to, or they might be particularly susceptible to inducement or coercion.
It can be extremely difficult to get genuinely informed consent from patients with dementias or other mental illnesses that compromise their intellect and emotions. The same can be true for children too young to really understand the issues at hand. In these cases, researchers often secure consent through a responsible family member or a specially appointed surrogate or proxy for the impaired patient.*
*For more information, see also: http://grants.nih.gov/grants/oer.htm
In my own field of addiction research, we have had substantial discussion about the ethics of administering drugs of abuse to addicted individuals, whether they are currently abstaining or not. The problems are manifold: First, by definition, addicted individuals have severely compromised abilities to control their cravings and thus might be especially susceptible to improper inducements. It also is well known that in abstinent addicts even a very small “taste of the drug” can induce phenomenal cravings and relapse to drug use. Is it right for us to ask the “clean” addict to take that risk? Finally, every clinician’s first priority ought to be to get addicted people into treatment, since addiction is a serious, life-compromising illness. In this case, current best practice is first to work hard to get potential subjects into drug treatment, and only if that fails to include them in experiments where they might be exposed to abusable substances.
A second issue arises from the need to know what drugs of abuse do to naïve subjects—people never before exposed to the substance under study. Given the powerful addictive quality of many drugs of abuse, is it ever ethical to give drugs of abuse to naïve subjects? In this case, both government guidelines and consensus in the field suggest that drugs of abuse should be given to naïve subjects only under the most exceptional circumstances and with the strongest justification.*
* For more information, see also: http://www.drugabuse.gov/Funding/HSGuide.html
Some newly emergent issues affect biomedical research generally but seem of particular concern when dealing with vulnerable populations. A particularly thorny area is testing brain-targeting medications in children or adolescents when those medications have been approved for use in adults but remain untested in younger patients. Historically, the general trend was to consider children and adolescents as simply little adults and assume they would respond the same way adults do. However, in the past few years it has become clear that the brain undergoes very rapid developmental changes all through adolescence; therefore, the effects and risks of a substance on the brain of a preadolescent might be quite different from those for an older adolescent. We need clearer guidelines than we have now about when and how medications—even if approved for similar uses in adults—can be tested in younger people. A similar argument can be made for research on elderly patients, whose brains are in a different state of flux.
BEYOND THE BENCH AND BEDSIDE
All scientists wrestling with the ethical issues embedded in translating basic brain research into life-saving therapies must also be mindful of the nontherapeutic uses to which their work might be applied. I have not spent any time here on the very important questions about how society will use in other settings the dramatic new understanding of the brain that science is providing—other articles in this issue of Cerebrum cover those topics quite thoroughly. However, I believe it is important to make explicit that the roles and obligations of scientists do not end with conducting their studies according to the highest ethical standards, nor with simply publishing their results and communicating them to their colleagues.
Most new discoveries about the brain are both subtle and complex. Interpreting them appropriately for practical and policy use requires a deep understanding of the strengths and weaknesses of the relevant experiments and of the nuances and caveats that surround the scientific findings and theories derived from them. For that reason, many neuroscientists must be willing to go beyond their traditional “bench-based” roles. Any scientist who chooses to work on subjects with such significant ethical and legal ramifications must be willing also to play a central role in the important dialogues that will ensue, involving ethicists, the clergy, policy makers, and the interested public.