Literature’s most infamous feud, between the Capulets and the Montagues, cost the lives of Romeo and Juliet. Each family, having lost its most treasured youth, confronts the cost of vendetta, and vows to seek peace. Escalus, Prince of Verona, has long known of the destructive feud, but has deemed futile any action to stop it. In the end, his wrath falls not just on the Montagues and Capulets, but also on his own inaction:
And I for winking at your discords too / Have lost a brace of kinsmen: all are punish'd... / A glooming peace this morning with it brings.
After more than two decades of discord over brain research involving people with mental impairments, the stars may be aligned at last for a peaceful resolution to questions of when and how such individuals might participate. Past opportunities to produce viable rules have been squandered, the victim of strident rhetoric, short-term thinking, and lack of resolve. That must not happen this time.
The stakes are high. Clinical brain research is one indispensable step toward new treatments, diagnostics, and prevention strategies. These clinical advances are sorely needed for patients with schizophrenia, depression, stroke, epilepsy, Alzheimer’s disease, and a host of other brain disorders. There is real promise for such progress, given new advances in genetics, technology, structural biology, and behavioral science; but these advances will touch few lives if they cannot be translated into new drugs, devices, and public health measures. The research required to do so entails participation by people suffering from psychiatric, neurological, and other conditions that may reduce their ability to give what we call informed consent.
“SUBPART E”: TWO DECADES IN LIMBO
Although often (and often heatedly) discussed, the ethical guidelines for when and how to involve those with impaired brain function in medical research have never been adequately codiﬁed. Part of the explanation for the present murky situation lies a couple of decades back.
For much of this century, those suffering mental illness, especially severe psychoses, were kept in state institutions. Concern about research in such institutions paralleled concerns about research involving prisoners. This focus on the involuntarily institutionalized patient shaped the ﬁrst systematic attempt to codify rules that would protect mentally impaired individuals participating in research. In the United States, the policy story begins in the 1970s with a report from the nation’s ﬁrst bioethics commission, called the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (for short, “the National Commission”).
The National Commission laid the foundations for rules that could govern the involvement of people in medical research—including especially "vulnerable" populations, for whom the Commission recommended special protections. The Commission did so in a series of reports issued between 1975 and 1978. Many of these reports, for example those on children and on prisoners, were translated directly into federal regulations. Research on prisoners came to require specific assurances that it was not linked to better living conditions or financial inducements, and that participation was kept separate from parole review. Children were to be allowed in research only with a "minor increase over minimal risk," except when the research offered them the prospect of direct benefit. They were to be given the opportunity to assent to participation (although legal informed consent was to be provided by parents or guardians). The underlying idea, in all this, was to counter potential coercion, to limit risk, and to ensure more thorough review of the risk vs. benefit balance facing vulnerable individuals who might be approached to participate in research.
The National Commission’s guiding principles—respect for persons, beneficence, and justice—were summarized in its Belmont Report. This Report remains the touchstone of federal protections for involving human subjects in research. Institutions that accept federal funds for such research sign an agreement that obligates them to observe the principles of the Belmont Report and to abide by Title 45, Part 46, of the Code of Federal Regulations, the formal regulations protecting those who volunteer to participate in research. These regulations spell out the principles of the Belmont Report and stipulate a process for reviewing and approving studies before they proceed.
In this regulatory scheme, virtually all responsibility falls on Institutional Review Boards (IRBs). When any university, research institute, or other organization is to receive federal funds for research on human participants, it must sign an agreement with a federal oversight agency: the Office for Human Research Protections, the Food and Drug Administration (for studies of drugs or devices subject to FDA review), or both. That agreement includes appointment of an IRB, which does the work of reviewing and approving any study that involves human participants. The rules and guidelines that set the terms for how IRBs do their job are the crux of the debate over how best to involve those with mental impairments in research.
Unfortunately, those rules speciﬁc to individuals with mental impairments fell victim to historical circumstances. One report of the National Commission was Research Involving Those Institutionalized as Mentally Inﬁrm, which, like other National Commission reports, was translated into proposed federal regulations: to be speciﬁc, into “Subpart E” of those regulations. The recommendations, however, were never formally put into effect, nor were they ever withdrawn, leaving research involving those with mental disabilities in limbo for more than 20 years.
One reason for the lack of action on the recommendations was timing. Research Involving Those Institutionalized as Mentally Inﬁrm appeared in an era of budding optimism about psychiatric treatments made possible by new drugs for manic depression, schizophrenia, and other disorders. State mental hospitals were being emptied, based on the (usually true) assumption that patients would be freer and fare better in the community than in a state facility. The policy, given the ungainly title “deinstitutionalization,” was aimed at emptying state hospitals that were the subject of repeated scandals associated with words like “Bedlam” and “snake pit.”
The unfortunate consequences of deinstitutionalization are well known and not directly relevant to our story, but a brief comment is in order. Although life outside a state mental hospital was better for many people, the policy went only halfway in reducing the toll of severe mental illness. The new drugs restored fully normal function for a few people; for others, they were completely ineffective. Most patients were better, but still disabled by their illness. The policy of deinstitutionalization reduced the number of people in mental hospitals by a factor of 10 from the 1950s through the 1990s, but the social services necessary to support partially disabled people with mental illness were never put in place. A disproportionate share of the homeless and of those who rotate through the criminal justice system suffer from mental illness; figures are notoriously shaky, but an estimated 100,000 to 200,000 homeless people, and more than one million prisoners, are currently believed to suffer from treatable psychiatric conditions.
Back in the 1970s, however, when the National Commission’s recommended protections for mentally impaired participants in research were being considered, they collided with a concern that progress in psychiatry should not be impeded by unduly burdensome research constraints—constraints premised on an “old” model of mental illness. The Young Turks of the then-emergent field of biological psychiatry—those most associated with the powerful new drug treatments—argued vigorously against strictures that treated psychiatry differently from internal medicine or surgery. After all, was not psychiatry entering the medical mainstream? Most treatment was no longer in mental institutions. So was not the National Commission’s report on “those institutionalized” somewhat off target—not because of the Commission, but because of the question it was asked to address? Would not the focus on mental institutions miss most people with mental disorders? Would not singling out psychiatry slow the new treatments advancing at so heartening a pace?
The result was a standstill. If this inaction had been transient—a moratorium while criteria and processes were worked out for protecting a demonstrably vulnerable population, one now largely outside state mental hospitals—the eventual policy could have substantially improved on Subpart E as proposed. Alas, this was not to be. The death of Subpart E (by default) did not breathe life into the policy debate. Instead, it created a toxic political environment, which persists to our day. Consider a report in the May 1999 AAMC Reporter (from the Association of American Medical Colleges) on a recent attempt to revisit and codify protections for the mentally impaired involved in research:
William Carpenter, M.D., director of the Maryland Psychiatric Research Center and professor of Psychiatry and Pharmacology at the University of Maryland School of Medicine, says the Commission “made several fundamental mistakes that created a greatly flawed report—a terrible blow in the fight against stigma of the mentally ill.” Dr. [Adil] Shamoo has his own roster of misgivings. “NBAC [National Bioethics Advisory Commission] is a small step forward, but the report missed the main issues,” he says. “The Commission did not address whether high-risk, non-therapeutic experiments should even be conducted on the decisionally impaired...”
A report that should be a tool for improving policy is, instead, being beaten to death by the old arguments. Given this environment, there is a temptation, to which many have succumbed, to avoid this whole area of policy as nasty and unproductive. In the absence of clear guidance, research institutions have each evolved their own disparate standards for reviewing proposed research studies. The resulting drift in policies has been punctuated by intermittent scandal: psychiatric research has been halted for short periods at major academic medical centers, such as the University of California at Los Angeles and the University of Cincinnati, and a court order in December 1996 stopped research at centers in New York State. Many of the facts of these highly publicized cases remain unresolved, but the stoppages were covered heavily by local media. Opinions remain divided about whether the “scandals” are deserved, but the controversy itself fuels distrust among patients, families, advocacy groups, scientists, and the public.
The federal government, like the Prince of Verona centuries ago, should conclude now that “all are punish’d” sufﬁciently and should move toward “a glooming peace.” The time is ripe.
SEEKING COMMON GROUND
Three recent reports on research involving people with mental disabilities are evidence of new challenges to the status quo. The attorney general’s office in Maryland issued a report in June 1998; the Department of Health in New York issued one in November 1998. At the federal level, the National Bioethics Advisory Commission (NBAC) issued a report in December 1998 (the target of the acerbic comments quoted above). It is no accident that the states stepping forward with task forces were Maryland and New York: people in those states have been organizing to call for special protections for the mentally ill through Citizens for Responsible Care and Research and through civil and legal rights organizations. All three reports stemmed from a perception that there is a problem to be solved. All three cite testimony about ethically controversial practices in psychiatric research. None defends the status quo.
Much debate in academic circles and among policy makers has focused on the differences among the three reports, in particular between the Maryland and New York reports, on one hand, and the NBAC report, on the other. The reports do differ in important ways, but focusing on those differences invites paralysis. Once again, differences can be used to block improvement of the federal regulations and clariﬁcation of rules for IRBs reviewing research protocols. I see the value of these reports in their common diagnosis and the many common elements in their prescriptions. Let me suggest just three:
- Special protections are needed when those with mental disability are involved in research.
- Reviews should involve people who are directly familiar with the conditions under study but have no stake in the study being reviewed.
- Studies that entail risk need to be monitored.
The reports together should lay the groundwork for a solution, and there are some signs of movement in that direction. The National Institute of Mental Health (NIMH) put some studies on hold pending further review. NIMH then convened a series of workshops on some of the most controversial research areas. One such area was studies intended to provoke symptoms under controlled conditions—so-called “challenge” studies. Because these studies deliberately anticipate some harm, however transient and mild, they violate the dictum of medical ethics, “ﬁrst do no harm,” and therefore they require special protections and scrutiny.
WASH OUTS AND PLACEBOS
Other difﬁcult issues arise from clinical trials that seek to establish the efﬁcacy of new drugs or other treatments. The economics of new drug development, and especially clinical trials of drugs that affect the brain, make it essential to resolve these issues wisely.
Across the pharmaceutical industry, clinical trials on average consume 25 to 30 percent of the total estimated costs of drug development. The total cost of these trials for a drug is typically in excess of $100 million, and the average number of clinical trials for an approved drug is 68.
Drugs that affect the brain are generally more expensive to develop than other drugs. Drugs affecting the brain constitute 21 percent of drugs on the market; they accounted for 14 percent of the estimated $91 billion in U.S. sales in 1999; but they consume 25 percent of pharmaceutical research and development budgets. One reason for this disproportionate expense is the unusual difﬁculty of establishing the effectiveness of these drugs.
Some studies involve taking patients off their current treatment (a “washout”) to avoid confounding the effects of the treatments being tested; other studies compare a drug to a placebo (no drug, usually a sugar pill). Comparing a test treatment to a placebo is particularly common in trials of new treatments for mental disorders. There are powerful reasons for this. The very process of diagnosing and monitoring patients, as part of a drug trial, may change the trial’s outcomes. Thus testing a drug or other intervention against a placebo “intervention” can be critical. In fact, many studies show that patients in the non-treatment (placebo) arm of a trial may either improve or experience negative effects, so that without a placebo arm a trial might give a false impression of efficacy, or a false impression that there are harmful side effects.
Trials that do not include use of placebos generally have to be larger, longer, and more complex, increasing costs and the uncertainty of the results. But if using a placebo entails taking research participants off their current treatment, and if withdrawing treatment increases the chance of psychosis or depression, then the risks can be signiﬁcant. The suicide rate of those with severe mental disorders is high, and the symptoms are appalling. Designing a trial involves trade-offs between scientiﬁc value—the likelihood of being able to demonstrate an effect—and the risks for those participating in the trial. Patients are genuinely vulnerable, so the ethical issues are real.
FDA rules for the evidence needed to prove the efﬁcacy of a new drug or other treatment have a strong impact on the design of clinical trials. The FDA and NIMH have been discussing how to establish the criteria and monitoring mechanisms for placebo and washout trials. This focus on placebo and washout trials should provide guidance for investigators, for prospective research participants, and for IRBs. In crafting guidelines, the involvement of nongovernment groups that can speak to the interests of patients and their care givers is essential; without such involvement no real progress is possible.
The prospect of action in New York and Maryland suggests that failure to clarify the federal regulations will invite action at the state level. This would lead almost certainly to different standards or practices in different states, an outcome far less desirable than a standard process and consistent national criteria. Inconsistency is especially difﬁcult in trials that take place at multiple sites; and most large treatment trials, the very trials most likely to offer potential beneﬁt, are in this category. Patient representatives, pharmaceutical ﬁrms, and investigators all have a stake in a sensible set of rules that span the nation.
BRINGING IN THE BIG PICTURE
In addition to speciﬁc concern for research participants with mental disorders, the rules for protecting all research participants, in all biomedical research, are now “in play” at the federal level, and for the ﬁrst time in two decades are ripe for change. In 1998, no fewer than ﬁve reports from the Ofﬁce of the Inspector General, Department of Health and Human Services, pointed to stresses on the system for protecting research participants, a system straining under growing weight and nearing the limits of its capacity. International guidelines for biomedical research are also being revised. At the same time, the NBAC will soon be issuing reports on how well protections for human research participants are working. Finally, the main federal oversight ofﬁce, the Ofﬁce for Protection from Research Risks, has recently been renamed the Ofﬁce for Human Research Protections and has been elevated from the National Institutes of Health to the ofﬁce of the Secretary of Health and Human Services. That change, too, could open the door to renewed attention to how participants in research are protected.
All of this openness to change augurs well for reforming the rules governing research on those with mental disabilities. As the overall reform goes ahead, we need to acknowledge the special nature of severe mental illness, but not segregate individuals with such conditions into a category that makes research on them unnecessarily burdensome. All parties agree on the need for better treatments for severe mental illness; and the special vulnerability of those with mental illness is also probably a point of consensus, despite the overlay of fractious rhetoric.
At the same time, there are key questions of fairness to prospective research participants made vulnerable by other medical conditions. For example, those with Alzheimer’s disease or a recent stroke may be unable to understand the risks and benefits of research; issues will arise around using surrogates to make decisions for them. But someone delirious from a drug reaction or from poor liver function may be just as irrational as someone in acute psychosis. The tradeoffs in placebo trials arise in trials of treatments for cancer, and indeed for any life-threatening condition. Many medical conditions, from impaired heart function to immune dysfunction to infectious disease, can affect the brain, presenting the same issues of informed consent and research design as severe psychotic and mood disorders. Those who will devise the new rules for research must sail between the Scylla of rendering psychiatric patients “research untouchables” by making research too bureaucratic and expensive, and the Charybdis of failing to acknowledge that severe mental disorders really do require special attention.
Scandals about research ethics have been at least as frequent in research on cancer, infectious disease, and other conditions as with mental disorders. If committees had been convened to scrutinize cancer, AIDS, or cardiovascular research, they would have found stories similar to those brought to the New York, Maryland, and NBAC proceedings. Only a small fraction of the problems in clinical research centers on psychiatric conditions. Distrust of research institutions among patient advocacy groups has a more prominent history in psychiatry than most other ﬁelds, but it is by no means unique to it; protection for those with severe mental illness should be part of a more general system of research protection. Brain disorders, however, often raise special problems with the concept of informed consent. What principles can guide IRBs, scientists, and patients and their families and advocacy groups in dealing with this pivotal issue?
THE QUEST FOR “INFORMED CONSENT”
In the evolution of thinking about the ethics of responsible research, Immanuel Kant’s moral dictum of respect for persons—his “categorical imperative” that people must never be treated as mere means to an end—became linked with emerging legal doctrine to produce the concept of informed consent. This concept holds that individuals should be presented with a genuine choice about whether or not to participate in research. Informed consent entails three chief considerations:
- competence to make a decision
- ability to understand the choice and its foreseeable consequences
- freedom from coercion.
All three considerations are called into question when an individual has a brain disorder. First, some people are simply not competent to choose. A person recovering from a stroke or in the late stages of Alzheimer’s disease may be able to signal discomfort, but not to make choices of the complexity needed to decide about participating in a drug trial. In addition to affecting competence to choose, brain conditions that cause confusion or slow down intellectual processes can interfere with understanding the real nature of a choice—the foreseeable risks and beneﬁts of participation—and so limit how informed a choice can be.
Informed consent requires not only that the consequences of a choice be understood intellectually, but also that the resulting choices be reasonable. Someone severely depressed or in acute psychosis may understand the choices available, yet the choices made may be irrational and even dangerous. Such choices do not meet the requirements of informed consent.
Individuals with mental disabilities often depend on others or live in environments that constrain their freedom. This endangers the “freedom from coercion” requirement of informed consent. Our choices lie on a continuum from almost complete freedom (selecting any ﬂavor of ice cream on the menu) to utter coercion (confession under torture). The minimal criterion for informed consent to participate in research is that the choice be reasonably free of coercion. Decisions involving those with mental disabilities are often made by the patient in conjunction with others, not only in connection with medical care but in many other aspects of life. Participating in research, however, should be a more optional choice than many others. When people cannot genuinely make such choices on their own, others can help to make them based on what the individual would decide if not disabled (projecting that person’s values) or what is in the individual’s best interests.
WHO CAN GIVE CONSENT?
In the simplest theoretical formulation, “voluntary” research participation by someone with a mental impairment would be possible only when that person’s values could be asserted on his behalf, values that could be projected:
- if the person had left indications in a written document
- by involving someone intimately familiar with the impaired person’s values
- by having a person formally designated to make research participation (distinct from medical care) decisions in advance.
All groups that have looked at this question of voluntary participation have tried to ensure that research participants themselves give full informed consent when possible and, when not possible, that any feasible steps be taken to ensure the patient’s assent to a decision made by others and the patient’s permanent right to opt out at any point. All the reports recommend involving individuals who know the prospective participant well enough to project his values. Although they differ in some details, the reports also recommend allowing methods by which an individual can designate a research-participation decision-maker in advance. In the end, however, most people do not think about medical care far in advance, let alone medical research, so they will leave no explicit guidance. No matter how much social norms change, this is unlikely to change. If all patients who have not given explicit guidance are excluded from research, the number participating in clinical research will be small. That may be ﬁne for highly speculative treatments or other interventions whose outcome truly is completely unknown, because no large pool of participants is needed to get informative results. It is far less satisfactory, however, for large trials of treatments where there is already some evidence of effectiveness.
THE THORNY ISSUE OF THERAPEUTIC MISCONCEPTION
One subtlety in the issue of informed consent is what is called the “therapeutic misconception.” There is a common—but mistaken—belief among those contemplating participation in medical research (especially trials of new treatments for diseases with serious symptoms and only partially effective treatments) that the new treatment will help them. Many people will leap beyond legitimate hope to an implicit assumption that participating in research is actually medical care. Almost every IRB, therefore, reviews informed consent statements to guard against this “therapeutic misconception” and to ensure that prospective participants do not enroll in a study because they believe they will receive actual medical treatment.
First of all, studies that are truly “research” cannot offer clear benefits to everyone. A clinical trial that compares two treatments is not ethical if it is already clear which treatment will produce the desired effect. Moreover, the treatment in the control group should always meet current standards of good care. So, in theory, participation in research should carry no guarantee of direct benefit.
If any guarantee of beneﬁt is carefully removed from the equation, however, what might justify involvement of a patient in any research? Usually two reasons have been advanced to justify involving patients who themselves cannot choose to participate. One reason is that, in a normal state, the patient would want to contribute to gathering knowledge about his condition or to developing future treatments. The other reason is that an individual has an obligation to promote the collective good. The existence of a duty to participate in research has long been debated within bioethics and may never be resolved. The essence of research protections, in fact, is to thwart just such “collective good” arguments that might be used to justify sacriﬁcing the interests and choice of an individual patient. If a “duty” to participate in research exists at all, it must be relatively weak and certainly not strong enough to overcome more than minimal risk, or even the mildest indications of unwillingness among those with mental impairments.
THE FLIP SIDE: CLAMORING FOR ACCESS
Fortunately, there are also strong reasons to participate in research. True, IRBs must guard against confusing research with medical care, but they should not ignore the real beneﬁts of participation in research. Starting with AIDS activists a decade ago, those concerned for patients’ rights in clinical research have increasingly asserted a right to research participation, turning the traditional approach to protecting participants on its head. “Drugs into bodies” became a rallying cry for gaining access to experimental AIDS therapies in the 1980s, one of the primary policy objectives of AIDS advocates. We are beginning to experience a backlash against IRBs that, in order to avoid the therapeutic misconception, reduce access to clinical research. The therapeutic misconception can itself become a misconception.
Some patients opt into research because medical studies are linked to other medical services. This is of particular interest to many with severe mental disorders, such as schizophrenia or uncontrolled manic depressive illness, who are often left out of the medical system because they lack insurance, have exhausted their limited mental health beneﬁts, or face non-monetary barriers to medical care. Participation in a clinical trial may be the best way to get an accurate diagnosis and access to any treatment, so even the placebo arm of a drug trial might lead to improved management of their symptoms, simply because the symptoms would get medical attention.
In a health care system premised on “equality,” this reason for research participation might not be a factor, but in the United States system of care it certainly is. The situation here is analogous to the difﬁcult dilemma facing studies of medical treatment in developing countries with far fewer resources for health care than the United States. The comparison is unfortunately more than apt, because state-of-the-art mental health services may be as far out of reach for some in the United States as expensive medical treatments are in a poor African nation.
Research can offer participants benefits beyond standard care in another, more subtle sense as well. Rigorous clinical analysis itself can affect the quality of care. Until the 1960s, childhood leukemia was synonymous with impending death; there were no effective treatments. Starting in the 1950s, however, some drugs from the World War II era began to show promise of killing leukemia cells selectively. A small cluster of clinical investigators began systematically testing these drugs and other interventions (such as the use of platelets to stop bleeding) in formal clinical trials. It became the norm for a child with leukemia to be referred to a team of physicians conducting clinical trials of leukemia treatments, since referral into a clinical trial was tantamount to getting the highest standard of care. Treatment outside a trial was apt to be substandard not because a particular physician or hospital was bad, but because the only way to get the best care was through a system tightly coupled to clinical research. The best treatment in any given clinical trial was not known in advance, or there would not have been a trial under way, but participation in a trial was the only ticket into the system that was delivering state-of-the-art treatment.
More than a decade of rigorous clinical research annihilated the therapeutic nihilism that once surrounded childhood leukemia, and today more than 80 percent of children with this disease are cured. The culture of clinical research still pervades pediatric cancer treatment, and further reﬁnements of treatment continue. The amazing shift from ineffective treatment to real expectation of cure is not yet true for other diseases, but the systemic beneﬁts of organized clinical research do generalize well beyond pediatric oncology.
Finally, the benefits of participating in clinical trials differ from study to study. Some kinds of studies, such as small initial trials of untested interventions justified only by animal or laboratory data, offer little prospect of clinical benefit. But as evidence mounts, later trials of the same intervention really are expected to provide direct clinical benefit. Uncertainty remains, or there would be no reason to do the research, but it is much reduced. By the time a drug has reached large phase 3 trials, for example, it typically has a cadre of champions, both physicians and patients, who are "true believers." At that point, some believe it unethical to deny a patient the intervention; others disagree.
The existence of true believers is not a reason to stop doing trials; the history of medicine shows true believers are often wrong. Many patients who had cardiac bypass surgery in its early years did not benefit from it. More than a decade passed before doctors could sort out the clinical situations that predicted better outcomes. Likewise, many who have had bone marrow transplantation and high-dose chemotherapy for breast cancer have gone through a high-risk procedure that, from recent data, appears to deliver measurable clinical benefit only to a small subset of women.
Psychiatry is notably rich with discarded treatments and theories of illness. Mental patients were at one time immersed under water nearly to the point of drowning. “Insulin shock” therapy was common not so long ago, and only a decade ago many women were adjudged “schizophrenogenic mothers” by psychodynamic dogma. The need for evidence in treating psychiatric disease is especially strong; we need clinical trials to temper the tendency to believe that all new treatments will work. Nevertheless, an intervention successful in early trials usually works in later ones, so that participation in a trial may be the only way to get access to what opinion-leading clinicians believe is optimal treatment. A drug with preliminary evidence of good effects in phase 2 trials might, for example, be the best hope for a patient who has not responded to currently available anti-psychotic drugs. Those reviewing clinical studies must temper undue optimism, but they should not quash reasonable hope. Hope is not always a leap of faith; it can be based on evidence—evidence that IRBs should take into account.
Much discussion of the New York, Maryland, and NBAC reports has focused on the differences in how they propose to deal with risk for participants in research and, in particular, on whether there should be two categories of risk (minimal and more than minimal risk), or three (minimal, somewhat more than minimal, and still higher risk). These differences matter chieﬂy because the highest risk category in each plan triggers rigorous extra requirements, making the review process more stringent and adding costs to the research itself because of the need for an independent monitoring board.
In my view, we will have gone off the track if an argument over risk criteria leads to regulations that try to ﬁt all research into a few clean categories. Clinical studies are highly diverse. The problem is not simply risk, but the nature of the risks and beneﬁts and the balance between them. IRBs are inherently in the business of balancing. Certainly they should be given guidance on which tools to use, but cut-and-dried risk thresholds are less important than making sure IRBs are accountable for the decisions they make, and that they monitor the studies they approve.
So what are the issues for accountability and monitoring? All three reports call for appropriate expertise on IRBs that review many studies involving those with mental disabilities. They also call for separation of the self-interest of IRB members from that of prospective research participants. These principles are common sense, and not restricted to brain disorders; they should be true across the board. And if IRBs do not have enough expertise among their regular members, they should bring in ad hoc experts or change their composition. National oversight of local IRBs, along the lines of the proposal to establish a national advisory committee for the Office for Human Research Protections, can also help keep watch on IRB expertise and conflict-of-interest standards.
Investigators and IRBs have many options for monitoring a study. Picking the appropriate option is not just a matter of assessing risk, however; it is a matter of weighing risks and benefits. The NBAC report recommends that any study that has more than minimal risk and that involves those with mental illness should be assigned a monitor to track its progress. For some studies, particularly treatment trials, that makes excellent sense; but for other studies, external monitoring is not the right kind of protection at all.
Instead, modifications to the regulations should direct IRBs to assess benefits and risks and then pursue the measures, including suitable options for monitoring, that protect human participants. The regulations must shy away from mandating specific protections based on this threshold or that. A few examples may suggest why "one size fits all" is not wise policy.
Most studies of “family pedigrees” (genetic information about many generations of one family) are now classiﬁed as presenting more than minimal risk because they could result in violations of privacy. Breaches of conﬁdentiality could cause a person to be labeled as mentally ill, produce stresses within a family or at work, or make it difﬁcult to get health or disability insurance. These risks are not physical risks, like the danger of having a drug reaction or becoming severely symptomatic upon withdrawal of anti-psychotic treatment. The risk of drug reaction, psychosis, or suicide is a good reason to assign someone to monitor how a clinical trial is proceeding. But a monitor is expensive and requires a continual stream of data. In the case of pedigree studies, prevention is much more appropriate than watching for mishaps. Would it not be a better use of resources to train the study staff in special procedures and to pay for special database protections that would ensure privacy and conﬁdentiality?
Some diagnostic studies are classiﬁed as presenting more than minimal risk because they involve small amounts of radiation (for example, studies involving positron emission tomography) or because they involve large and intimidating machines that can provoke anxiety (for example, functional magnetic resonance imaging). A data and safety monitor does not help to reduce such risks, but employing bio-hazard protections or modifying the machines or training staff to anticipate and prevent anxiety can do so.
If the highest risk of a study comes in its early phases (during recruitment) or toward its end (in assessment), then employing a monitor throughout the study makes little sense. The National Cancer Institute (NCI) recently mandated that all large phase 3 clinical trials have data safety and monitoring boards. This is a sensible policy, but not yet a formal regulation, and it should probably not be made formal until its effects are understood. Phase 3 trials constitute a small fraction of all clinical trials, and clinical trials constitute only a fraction of the studies most IRBs must review. Moreover, phase 3 trials necessarily involve large numbers of people, and are done for two main reasons: to corroborate statistically that a promising intervention works and to test the intervention in a large enough population to detect any common side effects. The NBAC recommendation, by contrast, is hard to implement because it would trigger monitoring for all studies above minimal risk that involve those with particular conditions (mental disorders), regardless of the kind of study. The NCI policy mandates monitoring for a relatively homogeneous kind of cancer treatment study that involves large numbers of participants and is far along toward clinical application. Pursuing a similar policy for phase 3 studies involving those with all brain disorders (not just "mental disorders") might well make sense.
Again, the wider issue is that IRBs should be given guidance, and held accountable; they should not be constrained by “one size ﬁts all” mandates. Most clinical studies entail multiple components; IRBs need to assess the risks and beneﬁts of each. It makes good sense to give IRBs guidance on when to consider ongoing monitoring of a study or other special protections, but embedding a speciﬁc monitoring requirement in regulations is apt either to be horrendously complex, because studies are so different (thus making the regulations elaborate and difﬁcult to implement) or to create perverse situations, as in the above examples, if the regulations are speciﬁc but simple. The better tack is to keep the regulations clear in terms of guiding principles, establish minimal criteria for process, and then hold IRBs accountable for their decisions. This accountability need not entail draconian hardships. It can take the form of more explicit guidance about special risks of particular study components and options for dealing with them. Or it can mean increasing resources to educate IRB members about both the studies they are likely to encounter and how other IRBs are dealing with such studies.
A final issue in handling risk is the role of federal oversight. Two models for federal oversight have worked well. One is the process applied to gene therapy trials between 1983 and 1995 under the Recombinant DNA Advisory Committee (RAC) at NIH. RAC reviewed studies that were associated with a new set of genetic technologies and was a powerful force in building trust and keeping the process open. The recent scandal around the death of Jesse Gelsinger in a gene therapy trial has refocused public attention on the importance of RAC's role in the past, and has brought into question the 1995 decision to, in effect, "demote" RAC. I was on record in favor of reducing RAC's role in 1995 but now believe that was a mistake.
The other process is FDA review of clinical studies. At FDA, government ofﬁcials review research protocols before they are approved. The two models differ in that the FDA process is staff-driven; very little information is made public until late in the process, when a drug or device ﬁnally comes up for discussion at a public meeting of an external advisory committee or when a major mishap occurs. RAC review, by contrast, is open, often highly public. Another difference is that FDA has real powers to investigate when things go wrong, whereas RAC and the current Ofﬁce for Human Research Protections have only paltry investigative powers.
In both cases, the decisions of IRBs are reviewed at the national level. This dual review is another way to enhance accountability of IRBs. We do not have a system for dual review of most clinical studies, and we should. It is long overdue. Federal oversight should ideally retain the advantages of RAC’s openness wherever possible (and this should be possible if data monitoring and safety boards are used to buffer raw clinical data from premature public disclosure) but couple it with credible investigations when mishaps occur.
REWRITING THE FINAL SCENE
People with brain disorders, their families and others near to them; health professionals; medical researchers; private firms developing drugs and devices; and research institutions all share a common interest in clearer rules to enable research to advance. After two decades of failure, events have thrown open an opportunity to clarify the rules on involving those with brain disorders in research. Let us keep a few essential points in mind as these rules are refined:
- People with mental disorders, particularly severe psychoses, are vulnerable because of their condition. They are also vulnerable because the gaps in our system of medical care and social support can make the quality of care available through research protocols attractive in itself.
- Brain disorders can affect the ability to give informed consent to participate in research. For high-risk research with little evidence to suggest likely beneﬁt, prior explicit indications of a desire to participate in such research might be required, but for research based on preliminary evidence of beneﬁt, the presumption should shift to ensuring access to clinical trials.
- Those with brain disorders are not the only vulnerable population, but merely one among many. IRBs must be competent to deal with protocols that involve brain disorders, but also expert in many other troublesome areas.
- The best approach for federal regulations is to avoid detailed risk criteria and mandated protection mechanisms and instead to specify goals, together with mechanisms to ensure accountability.
- Reports from New York, Maryland, and the NBAC provide guidance on how to improve criteria and procedures for involving those with brain disorders. Most attention has focused on how the reports differ in setting risk thresholds and in mandating speciﬁc monitoring mechanisms. More important is the strong consensus that a problem exists, that IRBs need to scrutinize protocols more vigorously, that patient advocates can counterbalance institutional self-interest, and that IRBs should monitor research protocols after approval.
- Large trials that entail signiﬁcant risks should be monitored by data safety and monitoring boards.
- Federal oversight of local IRB decisions and a forum for national debate about protecting human volunteers in research are both needed. Recent proposals for a national advisory body to carry out these functions are welcome, indeed long overdue.
Every one of us will at some point confront a brain disorder, in ourselves or in someone we love. We need a system that gets us the research essential to improving prevention, diagnosis, and treatment. But we also need a system that is safe and trustworthy. Research rules are for us. It is time to stop squabbling and start building rules around the consensus embodied in three decades of reports. How much gloom must we endure before we strike a peace?