Ahead of the inaugural meeting of the Neuroethics Society, Nov. 13 and 14 in Washington, D.C., Dana Press writer Aalok Mehta quizzed some of the experts in the field about the implications of neuroscience and its relevance to everyday life. Martha Farah, director of the Center for Cognitive Neuroscience at the University of Pennsylvania, will moderate a discussion on the business of neuroscience at the conference. Here she explains how some companies are rushing to cash in on recent neuroscience developments and why that might not be so good for consumers.
What are some of the ways neuroscience is being used in business?
Neuroscience is being applied to all sorts of problems. Some of these are medical applications, but what’s really interesting now are some of the applications of neuroscience outside medicine. These are things like educational training systems or lie-detection systems. Companies are even saying that they can analyze advertising campaigns by imaging potential customers’ brains to tell which campaign is going to be successful. It raises a host of ethical issues as profit motives enter the picture.
What are some of the specific ethical concerns this raises?
We know from laboratory studies of how people evaluate claims and evidence that if you show somebody a brain image along with an explanation for why a certain psychological phenomenon is observed, just the presence of the brain image makes people think, “Wow, this is really scientific. I believe what I’m being told.”
Take that out of the cognitive psychology research lab and put it into the marketplace. School administrators and individuals are making decisions to buy these fairly expensive systems based on claims that neuroscientists have developed this improved way of teaching. Put some brain images in your sales brochure, and you will do good business. People will be inclined to buy your product and assume that there must be a lot of solid science behind it when, generally, there is not.
Should there be regulations on this sort of technology before it’s more widely employed?
For things like these educational systems, it isn’t life-or-death. But there’s certainly potential for people to be wasting their money. When you’re talking about the education of children, you want to make sure you are giving them the best teaching methods available, not something that’s suboptimal just because it looks really high-tech and scientific.
With some other applications of neuroscience in the private sector, you can imagine much worse things than people wasting money. I’m told a big part of the clientele for these [functional magnetic resonance imaging]–based lie detection systems is couples where one is trying to prove his or her faithfulness to the other partner. So that person says, “Look honey, I’ll go into an fMRI lie detector, and I’ll tell you what happened, and the scanner will show you whether I’m telling the truth or not.” That’s the kind of thing that could wreck a relationship if it’s not what it claims to be.
What are some of the commercial applications you see coming out in the next few years?
There are some entertainment devices now that are based on electroencephalographs (EEGs). Some video game companies are selling headsets that detect brain activity to guide actions in the games.
In the not-too-distant future I think we will also have people performing brain scans to assess certain personality traits. Turhan Canli, who’s also involved in the Neuroethics Society, has shown some strong correlations between brain activity and personality traits such as extroversion and neuroticism. He’s shown that if you put somebody in a scanner and show them the right kinds of stimuli, those personality traits do determine what that person’s brain activity looks like in the scanner.
This is just speculation, but I could imagine hiring or personnel screening decisions being made partly on readings like this.
Are people going to use this to find dates online?
There already is one online dating site that uses supposedly neuroscientific information in a loose way to help you find a match—echemistry.com. The site asks you questions designed to assay your levels of specific hormones and neurotransmitters, and it applies some sort of theory it has about which types of women get along with which types of men to pair people up.
What would you like to see happen as these technologies develop and mature?
I don’t feel that at this point in the development of the field of neuroethics we know enough about these technologies to want to issue blanket recommendations of any kind.
I have read that some people say we should legislate against use of brain imaging outside of medical contexts. I think that’s a bit of an overreaction. Why would we do that? Just because you could invade people’s privacy with brain imaging? You could invade people’s privacy with telephones, but we don’t outlaw telephones.
I think it would be a mistake to leap to a broad regulatory response at this point. Let’s watch these technologies develop and make sure that the people developing them are open with us about what their efficacy is. I don’t think there is anything to gain by trying to shut off a whole direction of research.
And by the way, if we did enact repressive regulation, there’d be lots of other countries that would be galloping ahead in the private sector or with government support to develop these technologies. A moratorium on these technologies wouldn’t stop their development, it would just stop the United States from having any piece of the action and therefore from having a role controlling how they’re developed and what kind of applications are pursued.
Some people worry about these technologies being put into use before they’re ready or properly vetted.
I think it’s a huge concern. Of course everybody is thinking about the case in India a couple weeks ago where a woman was convicted of murder partly on the basis of this totally unproven EEG-based system. But I think the cure for that is not just to say nobody should record EEGs unless it’s a medical procedure. The cure is to require that anybody who claims to have such a method produce the data to show that it works and make it available to other people so they can check the conclusions.
I think transparency is key. To some extent we’ve already learned some painful lessons from the pharmaceutical industry about products owned by private companies that have a financial interest in overstating effectiveness and minimizing risks. The same dangers of inaccurate reporting come up with all kinds of neurotechnology.
One concern I have is that, for medical applications, although drug companies are influential in the way drugs are developed and marketed, and medical device companies play a similar role, the public has some control through funding from the National Institutes of Health, various private foundations and so forth. With nonmedical applications, there really isn’t any source of funding other than private companies. So there’s no place where these technologies will be developed without the pressure of the need to make a profit.