Ahead of the inaugural meeting of the Neuroethics Society, on Nov. 13 and 14 in Washington, D.C., Dana Press writer Aalok Mehta quizzed some of the experts in the field about the implications of neuroscience and its relevance to everyday life. Judy Illes, a professor of neurology and Canada Research Chair in Neuroethics at the University of British Columbia, will moderate a session on deep brain stimulation during the meeting. She also is an expert on how to deal with unintentional discoveries of medical conditions in volunteer subjects when such discoveries crop up during neuroscience research.
What are incidental findings, and why are they a concern in neuroethics?
Incidental findings are unexpected clinical findings that are identified in research. My work has to do with these kinds of anomalies on brain imaging scans.
What’s really important about these kinds of findings is that when we’re doing research, we’re not doing clinical medicine. We have scans that are optimized for research, not optimized for clinical discovery. We also have people who are acquiring these images for research purposes, not for medical purposes, and are, more often than not, not medical personnel. So we have created a gigantic matrix of brain anomalies, for which we don’t know the clinical significance, that are being noticed on research scans by research people.
In the clinical environment, when we have medical findings that occur apart from what the physician is looking for clinically, there is a duty to report that to the patient. But when we began our research, there was no such protocol or professional guideline for knowing what to do with this kind of discovery in the research setting.
We began by rigorously trying to understand the nature of this challenge—how often are these things identified, how significant are they (that is, how many are medically significant), how are different laboratories dealing with them, was there an accepted standard or were people just winging it, and what do participants themselves expect in terms of being informed about these findings?
Then I brought together an interdisciplinary group of people for a consensus meeting to see if we could establish a first pass of recommendations for people to adopt and for laboratories to adopt.
What were some of the conclusions reached at that meeting?
The conclusion on which we literally had consensus is that laboratories must acknowledge the possibility that there will be incidental findings in their brain imaging research and address that possibility in their research planning. They should inform their respective institutional review boards of their intended plan for handling findings, and inform the subjects themselves. That includes telling subjects that these findings can occur while giving them the right to opt out of dealing with them.
Our recommended pathway at the time of the consensus meeting—the majority suggestion—was that when an anomaly is noticed, the scans should be shown to a specialist for a recommendation as to whether it should be followed up medically. We also acknowledged that there are other options that are morally acceptable, given the researchers’ goals and the research setting—for example, the researcher might be in a psychology department with no access to medical personnel. Some of our group even urged that subjects have the right to forgo being told about a finding even if there is one.
Are these decisions to be made with the knowledge of research subjects or in the background?
Our further work from that consensus meeting—by my colleague Susan Wolf at the University of Minnesota, myself, and others—added a layer of what we call returning results. That group urged that only findings that would have a meaningful impact on participants be given to them, but that, again, subjects be told in the consent form what the process will be.
So it’s not like something’s going on in the background that subjects weren’t informed about. They’re made aware of how the researcher will deal with these kinds of findings.
What’s next? Will there be further consensus meetings? Should there be governmental regulation?
I like to think more in terms of professional self-regulation than government regulation. We hope to gather a group to look at incidental findings in banks of data, data that are put into giant archives. How to handle that has not at all been sorted out.
In addition, we here at the University of British Columbia are also looking at a cost-benefit analysis of different strategies for handling incidental findings. That is, what is the cost to a researcher of sending all of the research scans to a radiologist? What is the cost-benefit to individuals and to their quality of life of being told, on the one hand, of a finding that has no medical significance or, on the other, of a finding that does have medical significance? Can it be treated, can it not be treated, and so forth.
What are some of the ethical concerns being raised over deep brain stimulation, and what conclusions have you come to about when it should be used and when it shouldn’t be used?
I don’t think any hard conclusions exist yet. I think things are still very much in an exploration phase, an experimental phase, with some clinical applications of great promise. We’re definitely still learning how to optimize the technique—how to optimize by disease and by patient, keeping patient benefit foremost in mind.
As to the ethics considerations, we always start with safety at the very top. Then we think about optimizing for different kinds of pathology and for different kinds of people with different value systems in the clinical domain. Then of course we think about off-label uses of technology, particularly one that is invasive. Those are always challenging.