If we were to nominate a poster boy for studies on brain and behavior, the winner would be Phineas Gage. In the summer of 1848, Gage was in charge of a crew laying track for the Rutland & Burlington Railroad. One of his tasks was to use gunpowder to blow up rock sections, and, in a tragic error, Gage accidentally tamped the gunpowder with an iron rod—and made history. The explosion propelled the three-foot iron rod upward through Gage’s cheek, then through his brain, and out the top of his head. Gage not only survived; within minutes he was speaking and walking. Within two months, doctors pronounced him healed.
Indeed, Gage seemed physically intact, particularly so for a man who had an iron bar shot through his head. But he was, in one important sense, anything but cured. His personality had changed in a dramatic way, so much so that people who knew him before and after the accident would comment that “Gage was no longer Gage.”
Before the accident, Gage was an amicable fellow, but that changed completely. One of his physicians, Dr. John Harlow, described the changes thus: “…the equilibrium or balance, so to speak, between his intellectual faculty and animal propensities” was gone. He displayed “little deference for his fellows…[and was] capricious and vacillating, devising many plans of future operation, which were no sooner arranged than they were abandoned.” It was as if Gage had lost his ability to plan and act in a reasonable manner, particularly when it came to social interactions. Could a brain injury that by all accounts should have killed Gage instantly really have resulted in such a specific sort of personality problem? All that is known for certain is that the iron rod in question traversed his prefrontal cortices. Is that, then, where our abilities to handle social interactions lie? If so, in what parts of the frontal lobe, and what does this tell us in general about how the brain works?
Before attempting to answer these questions, let us look at another case history. In his wonderfully entertaining and thought-provoking book Descartes’ Error, Antonio Damasio, M.D., Ph.D., tells us of a modern-day Gage, named Elliot. In Elliot’s case, a large tumor wrought havoc in his right frontal lobe, but the resulting changes in his personality were remarkably similar to those noted in Gage. Elliot’s memory was fine, his mathematical abilities were unchanged, his language was intact, and his perceptual skills were seemingly unchanged. Yet, socially, Elliot was a disaster. A battery of tests found that, although Elliot was aware of social conventions—what was viewed as acceptable and unacceptable—in real-life social situations, he was simply unable to decide among the countless options and so failed miserably. For example, if he was asked to describe what might happen if he was given too much money by a bank teller when cashing a check, he responded normally. Or, if asked how he might calm down a friend or spouse whom he angered, Elliot again did fine. But faced with the same circumstances outside the controlled environment of the lab, when any decision he might make would have true social consequences, Elliot was hopeless. He would simply freeze up, incapable of choosing among the myriad options available.
In his report on Elliot’s behavior, Damasio notes that Elliot seemed almost devoid of emotion and spoke with a calculated cold-bloodedness. Damasio and other neuroscientists have noted that when a patient’s lack of social skill is juxtaposed with a complete lack of emotion, the trouble usually stems from problems in the prefrontal cortices of the brain, and, in particular, in the ventromedial area of the prefrontal cortices.
From this work, Damasio has generated a fascinating hypothesis. Although it is often thought that to make rational social decisions emotion needs to be removed from the process, brain science suggests that without emotion people fail utterly in the social realm. Emotion and rational decision making—that is, reason—are not separate phenomena. On the contrary, they appear to be integrally linked in the human brain.
Studies such as those on Gage and Elliot have informed not only neuroscience but also what has been called “the dismal science”—economics. Although some economic decisions are made outside a social context, they are a minority. Social dynamics, many economists believe, are at the core of economic decision making—that is, decision making about resource acquisition and expense allocation. What I decide affects you, what you decide affects me, and, even more to the point, I care how I fare economically compared with how you fare. Thus, over the past five to ten years, economists and brain scientists have come together and created a new subdiscipline—neuroeconomics—that calls both neurobiology and economics home.
Neuroeconomics. The word itself has an ominous futuristic ring, conjuring up images of Big Brother engaged in economic mind reading. Indeed, an element of mind reading, or at least brain reading, is involved. Neuroeconomists want to understand what happens in the human brain during economic decision making.
Imaging and the Ultimatum Game
How is an experiment in neuroeconomics conducted? Typically, subjects who are connected to a functional magnetic resonance imaging (fMRI) or positron emission tomography (PET) scanning machine make some sort of economic decision while scientists observe and record their brain activity. Some of the clever scenarios used in these studies predate brain scanning. Economists, of course, came up with the rational-man model of decision making long before brain scans, and psychologists have investigated learning and decision making for more than a century. But brain-scanning technology affords scientists an unprecedented glimpse into the brain as it makes decisions; without this technology, neuroeconomics would not exist.
The decisions subjects must make are often cast in terms of cooperation, cheating, or punishment, which means the studies tie together brain neurobiology, economics, and social psychology in novel ways. The evolution of cooperation, cheating, and punishment is important in evolutionary biology, too, so neuroeconomics pulls yet another discipline into its fold. Although this new interdisciplinary field today focuses on economic decision making, its approach can be applied to studying all kinds of decisions, such as how people informally estimate probabilities and how people categorize human faces as trustworthy or not. This suggests a future for neuroeconomics in shaping a comprehensive theory of decision making.
A few current studies will give us a feel for what neuroeconomics can tell us about the human brain and decision making. Keep in mind, though, that new studies come out weekly and that new centers for neuroeconomics continue to appear at universities. Here, I can only set the stage for discussion of some of the deep questions neuroeconomics poses.
First, consider what researchers call the Ultimatum Game. Two people are involved in an economic transaction over a resource of value to both (for example, a pot of pretend money), but only one person—called the proposer—controls that resource. The structure of the game is such that if the proposer offers some proportion of the resource to the other person—call him the responder—and that offer is accepted, then both people get to keep their portions. If the offer is rejected, then neither person receives anything. For example, imagine there is $10 in Monopoly money in the pot. Suppose that John (the proposer) offers Jim (the responder) $5. If Jim agrees, each player gets $5. But if he refuses, both John and Jim get nothing.
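For readers who think in code, the rules of the Ultimatum Game can be captured in a few lines. This is a toy sketch in Python; the function name and structure are mine, not part of any study described here:

```python
def ultimatum_payoffs(pot, offer, accepted):
    """Payoffs (proposer, responder) in a one-shot Ultimatum Game.

    The proposer offers `offer` out of `pot` to the responder; if the
    responder rejects the offer, neither player receives anything.
    """
    if accepted:
        return pot - offer, offer
    return 0, 0

# John (proposer) offers Jim (responder) $5 of the $10 pot:
print(ultimatum_payoffs(10, 5, accepted=True))   # (5, 5)
print(ultimatum_payoffs(10, 5, accepted=False))  # (0, 0)
```

Note that rejection is the only move by which the responder can affect the proposer's payoff, which is what makes rejection usable as punishment.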
Let us analyze this game from the traditional perspective of the rational economic man (Homo economicus), which holds that a person ranks potential outcomes and acts, in the lingo of economists, to “maximize some utility function”—in this case, taking as much of the pot of money as possible. From this economic perspective, the proposer should make the smallest offer he believes that the responder will accept, for that will obviously leave the proposer with the most money at the end of the game. The responder should accept that offer; if he does, he gets whatever the proposer offered. Otherwise, he gets nothing.
Yet the results tell another story. Most often, in these sorts of behavioral economics experiments, the proposer offers half the resource, rather than a small slice of it. If the offer is not half the resource, the responder often rejects it, even though by doing so he ends up with nothing. For example, Alan Sanfey, Ph.D., and his colleagues at Princeton University examined the Ultimatum Game with 19 subjects in the role of responder and used fMRI to observe their brain activity. They found that when unfair offers (defined as those of less than half the resource) were made, responders often rejected them. As they did so, the areas of their brains associated with negative emotional states (in this case, the bilateral anterior insula), rather than those associated with complex cognition (in this case, the dorsolateral prefrontal cortex), were most active. The more the offer deviated from fair, the more active was the bilateral anterior insula when such an offer was rejected. Anger at being treated unfairly by other players appeared to override rational economic reasoning. In the minority of cases when the offer was accepted, the dorsolateral prefrontal cortex was most active.
Sanfey and his team took their experiment one step further. They had the same subjects play the Ultimatum Game against a computer that did exactly what the subject’s human partner had done. In a testament to the fine-scale social distinctions that humans make, Sanfey and his colleagues found that subjects were more likely to accept an unfair offer from a computer partner than from a human partner, and activation of the bilateral anterior insula was lower when unfair offers were made by the computer. In other words, although the monetary calculations were exactly the same in both conditions—and hence a rational person should respond similarly in both contexts—subjects were much more likely to view an unfair offer from another person as a violation of social norms, and hence respond emotionally.
In an interesting twist to the Ultimatum Game, Erte Xiao, Ph.D., and Daniel Houser, Ph.D., added the possibility for the responder to attach a written note to be read by the proposer at the same time that the proposer learned of the responder’s decision. Because the note was written after the proposer made his decision (and hence no negotiations were involved), it functioned only as an emotional vent for responders. Xiao and Houser hypothesized that when the responder received an unfair offer, he could release negative emotions in his note and therefore would be less likely to express such emotions in his decision about whether or not to actually accept the monetary offer. Indeed, that is what happened. In cases of unfair offers—when the responder was offered only 10 or 20 percent of the available cash—responders expressed negative emotions in their notes to the proposers. More critical to the hypothesis, however, was the discovery that responders were more likely to accept unfair offers when they could write notes than when notes were not permitted. Given the chance to express negative emotions in a less costly way—that is, accept the unfair offer, but write a nasty note—responders chose this option over the more costly negative emotional response of rejecting the unfair offer and so receiving nothing.
Positive emotions also seem to come into play in social economic interactions. In particular, emotions associated with reward processing can affect decision making. James Rilling, Ph.D., and his colleagues at Emory University had women play the Prisoner’s Dilemma game either with one another or against a computer.
In this game, each player alone does better to cheat than to cooperate, but the combined payoff for mutual cooperation is greater than when both players cheat. If Sally cooperates, but Jane doesn’t, Jane avoids any of the costs of cooperation, but can parasitize Sally’s goodness. As such, Jane receives a higher payoff than Sally and hence is tempted to cheat. But Sally should employ the same logic, as she’d do better if Jane paid the costs while she (Sally) sat back and received the benefit of Jane’s action. The catch is that when Sally and Jane both employ this logic—they both cheat—they do worse than when they both cooperate. The dilemma in the Prisoner’s Dilemma game is how to salvage cooperation when the temptation to cheat is ever present. When people play Prisoner’s Dilemma many times, as they did in the Rilling experiment, one economic strategy is to be conditionally cooperative: cooperate when a partner cooperates, but not when she cheats. This strategy, called tit for tat, allows for the relatively high payoff for mutual cooperation and minimizes the number of times players get nothing.
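The payoff logic and the tit-for-tat strategy can be sketched in a few lines of Python. The $2 (mutual cooperation) and $3 (unilateral cheating) values match those reported for the Rilling experiment; the $1 and $0 entries are my assumed values for the remaining cells of the matrix:

```python
# Payoffs to (me, partner): C = cooperate, D = defect (cheat).
PAYOFF = {
    ("C", "C"): (2, 2),  # mutual cooperation
    ("C", "D"): (0, 3),  # I am exploited; my partner gets the temptation payoff
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),  # mutual cheating: worse than mutual cooperation
}

def tit_for_tat(partner_history):
    """Cooperate on the first move, then copy the partner's previous move."""
    return "C" if not partner_history else partner_history[-1]

def play(rounds=5):
    """Two tit-for-tat players in an iterated Prisoner's Dilemma."""
    a_sees, b_sees = [], []        # each player's record of the other's moves
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = tit_for_tat(a_sees), tit_for_tat(b_sees)
        pa, pb = PAYOFF[(a, b)]
        total_a += pa
        total_b += pb
        a_sees.append(b)
        b_sees.append(a)
    return total_a, total_b

print(play())  # two tit-for-tat players cooperate throughout: (10, 10)
```

Because each tit-for-tat player opens cooperatively and mirrors the other, neither ever cheats first, so the pair locks into the mutually cooperative payoff every round.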
The Prisoner’s Dilemma game again highlights the interdisciplinary nature of neuroeconomics, because the same game is frequently used to analyze the biological evolution of cooperation. In the evolutionary version of the game, which uses computer simulation, a person’s fitness is increased or decreased depending on whether he cooperates; the results are tracked over many simulated computer generations of players. Tit for tat works well in this version of the game; thus the results from the study of Prisoner’s Dilemma in neuroeconomics may shed light on the evolution of human cooperation, as well.
Rilling and his team found that, although the highest monetary reward in this game ($3) was obtained when a player cheated and her partner cooperated, the payoff that was most emotionally rewarding was mutual cooperation (which yielded $2 to each player). Subjects said that they found mutual cooperation the most rewarding outcome, and the mutual cooperation payoff caused the greatest activation of brain sections associated with reward processing (the ventromedial/orbitofrontal cortex, the anterior cingulate cortex, and the nucleus accumbens).
Neuroeconomists have examined emotions and rational behavior with respect to cooperation, but also in terms of what sections of the brain are active when a player punishes others who fail to act cooperatively. Punishing people who violate social norms interests anthropologists and evolutionary biologists who want to understand how social norms evolved before modern legal codes were in place. In particular, such punishment often entails a cost to the people who mete out the punishment and a benefit to others not even involved in the interaction. This is difficult to explain from a “selfish gene” evolutionary perspective, which posits a sort of cost-benefit ledger, favoring those traits that have high benefits and low costs to the person. Dominique de Quervain, Ph.D., and his colleagues at the University of Zurich hypothesized that one mechanism involved in maintaining this sort of punishment lies in the pleasure that people derive from enforcing social norms.
To test this hypothesis, de Quervain and his colleagues used PET scanning to watch the reactions of pairs playing what is called the Trust Game. In this game, both of the players, who are not allowed to communicate with each other, begin with 10 units of money (called monetary units, or MUs, in the experiment). Player A begins the game by deciding whether to give his 10 MUs to Player B or keep them for himself. If A opts to give B his money, then the investigator quadruples the gift to 40 MUs, so that B now has 50 MUs, and A is broke. Then B is given a choice. He can send half of his MUs back to A or keep everything for himself. So, if A correctly trusts B to send the money back, they each end up with 25 MUs; whereas if A opts to give nothing, they both end up with only their original 10 MUs.
De Quervain hypothesized that if A trusts B to play fairly and give him the money, but B turns around and keeps it, A should view this as a violation of trust and social norms and therefore want to punish B. To allow for this possibility, one minute after B makes his decision, A can opt to punish B by revoking up to 40 MUs. (In one variant, A pays a price for doing so, but in others does not.) If A trusts B, who then violates that trust, A indeed punishes B—even if it is costly for A to do so. More to the point, subjects said that they enjoyed punishing players who violated their trust, and PET analysis showed that one section of the brain associated with reward (the dorsal striatum) was most active when A undertook his act of retribution.
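The arithmetic of the Trust Game, including the punishment stage, can be sketched as a small Python function. This is a toy model; the `cost_ratio` parameter is my stand-in for the costly-punishment variant, not a figure from the study:

```python
def trust_game(a_trusts, b_returns_half, punish=0, cost_ratio=0):
    """Payoffs (A, B) in the Trust Game described above.

    Both players start with 10 MUs. If A sends his 10 MUs, the
    investigator quadruples them for B. B then either returns half of
    his 50 MUs or keeps everything. Afterwards, A may revoke up to
    `punish` MUs from B; `cost_ratio` is a hypothetical cost to A per
    MU revoked in the costly-punishment variant.
    """
    a, b = 10, 10
    if a_trusts:
        a -= 10               # A sends his 10 MUs to B...
        b += 40               # ...and the investigator quadruples them
        if b_returns_half:
            back = b // 2     # B sends half of his 50 MUs back
            b -= back
            a += back
    b -= punish               # A revokes MUs from B
    a -= punish * cost_ratio  # A's own cost, in the costly variant
    return a, b

print(trust_game(True, True))               # (25, 25): trust repaid
print(trust_game(True, False))              # (0, 50): trust violated
print(trust_game(True, False, punish=40))   # (0, 10): A evens the score
```

With free punishment, A can wipe out B's entire gain from cheating even though A recovers nothing himself, which is exactly the kind of costly retribution the PET scans captured.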
Clearly, players took pleasure in punishing cheaters. They not only wanted to inflict some sort of economic damage on cheaters, but they also enjoyed evening the score. Results of the experiments also suggested that the more intense the punishment doled out to cheaters, the more active is the dorsal striatum of the players exacting retribution. In one further twist, A was sometimes told that B’s decision was determined by a random device, so the decision was out of B’s hands. In that situation, when B did not send money back to A, A did not view this as a violation of trust and did not respond by punishing B.
This version of the experiment shows that when subjects dole out punishment to partners who violate social norms they do not rely on those sections of their brain associated with calculated actions; they respond out of emotion. In a refinement of the Trust Game, Read Montague, Ph.D., and his colleagues at Baylor College of Medicine examined what happened when a pair of subjects played 10 times, while the brains of both subjects were monitored. Montague and his colleagues found that trust between subjects developed over time and that one brain area in particular—the head of the caudate nucleus, which is associated with prioritizing and ranking rewards—was active when subjects were determining the fairness of a partner’s action and how to repay that action with an act of trust. The question then becomes why humans derive pleasure from punishing cheaters in order to enforce social norms, even when such enforcement is costly. An evolutionary biologist might ask: What are the hidden benefits for survival that people may gain, or our long-ago ancestors might have gained, from acting as enforcer? The answer, which is explained shortly, may center on reputation.
Some Reasonable Emotions
Neuroeconomics seems to demonstrate that the brain is hardwired to handle some economic problems through emotion rather than number crunching. Experiments have shown that specific areas of the brain, known to process specific emotions, are activated during various economic decision-making scenarios. No one suggests that there is a “fairness center” in the brain; in fact, different brain areas are active when the question of fairness arises in social versus asocial situations. But human brains do seem to have evolved to respond with special emotional vehemence to social cheating.
I am not suggesting that humans have evolved a specific emotional response to each economic situation. Instead, people have probably evolved emotional responses that, on average, work well in social situations similar to those our species faced in evolutionary history. In their experiments, neuroeconomists recreate such social situations and observe what parts of our brain are involved in our responses to them. Almost certainly, by the way, our responses occur even when we are not conscious of how our emotions are affecting our social decisions. Michael Raleigh, Ph.D., has found that baboons whose behavior is friendly and cooperative have lots of serotonin-2 receptors in the ventromedial frontal lobes of their brains. Uncooperative, aggressive baboons do not. This may offer a biological explanation for the social behavior of baboons, but that does not mean that baboons are conscious of how their emotions affect their behavior. Likewise, our human brains may be responding to emotions when we make economic decisions about cooperation, cheating, and punishment even if we are not aware of those emotions and how they influence us.
Tweaking the Rational Man Model
Society is left with a paradox. Damasio’s work and other studies of patients with brain disorders suggest that emotions are necessary for rational behavior; yet at least some work in neuroeconomics suggests that emotions often produce seemingly irrational results, such as turning down an unfair offer at the cost of receiving nothing at all. What are we to make of this apparent contradiction?
One could argue that the contradiction is, indeed, apparent, not real. Emotions are clearly necessary for normal social interactions, but not because in themselves they always lead to rational results—if by rational, we mean maximizing some simple gain such as immediate money. But acting rationally even in one-on-one social economic interactions is more complex than that. How I interact with you has implications not only for that interaction but also for my standing in a community, my reputation, and my sense of self-worth—all of which have long-term economic, as well as other, consequences. Perhaps that is where emotions come into play because they facilitate behaving in such a way as to promote reputation, self-worth, and other long-term social assets that are also economic assets.
Do discoveries from neuroeconomics suggest that the rational-man perspective on human behavior should be abandoned? I don’t think so, but the rational-man model needs to be modified by what has been learned. With respect to primarily asocial economic interactions, the rational-man model works well; people behave as predicted, and, what is more, everyone expects them to behave that way. It is when economic interactions are cast in a social context that this model needs to be modified. Scientists need to build emotion and reputation into the model of rational man.
Emotions do not necessarily cause people to behave irrationally in social interactions; this is true even in the studies reviewed here. Instead, once we recognize that reputation has long-term economic consequences, we can see that emotional responses, although they may reduce immediate gains, may foster higher overall long-term gains. If you and I are in some economic interaction that mimics the Ultimatum Game, and I turn down an unfair offer that you make, I walk away with nothing. If that were the only implication of my decision, then I clearly acted in an irrational manner. But, if, as is likely in real-life interactions, by turning down your offer I send a message to others that I will not accept unfair offers, then the gain in reputation may more than make up for any short-term loss.
Emotions may represent an acquired integration of the exceedingly complex calculations that are involved in such decisions. Operating automatically, and almost instantaneously, they produce a response to which the brain-damaged Gage and Elliot had lost access and that they could not replace with any amount of explicit calculation of considerations and options.
Studies of neuroeconomics and human social dynamics are new, but theories about reputation have been around for decades. Economists Robert Frank, Ph.D., and Thomas Schelling, Ph.D., political scientist Robert Axelrod, Ph.D., and evolutionary biologist Randolph Nesse, M.D., have long argued that reputation—and, in particular, a reputation for keeping one’s commitments—is essential to human social dynamics (for an excellent recent synopsis see Nesse’s Evolution and the Capacity for Commitment, Russell Sage Foundation, 2001). In Frank’s terminology, humans have evolved “passions within reason.” He means that the pursuit of self-interest in a socially complex landscape requires the use of emotions to guide people along, and that they may lead to actions we cannot explain without reference to reputation or standing in a community. In his masterly book Passions within Reason: The Strategic Role of the Emotions (W. W. Norton, 1988) Frank offers this hypothetical scenario:
Jones has a $200 leather briefcase that Smith covets. If Smith steals it, Jones must decide whether to press charges. If Jones does, he will have to go to court. He will get his briefcase back and Smith will spend 60 days in jail, but the day in court will cost Jones $300. Since this is more than the briefcase is worth, it would clearly not be in his material interest to press charges… Thus, if Smith knows Jones is a purely rational, self-interested person, he is free to steal the briefcase with impunity. Jones may threaten to press charges, but this threat would be empty. But now suppose that Jones is not a pure rationalist; that if Smith steals his suitcase, he will become outraged, and think nothing of losing a day’s earnings, or even a week’s, in order to see justice done. If Smith knows that Jones will be driven by emotion, not reason, he will let the briefcase be. If people expect us to respond irrationally to the theft of our property, we will seldom need to, because it will not be in their interests to steal it.
Reputation matters, which is not to say that the drive for reputation always yields the sorts of actions that society approves. The Hatfields and the McCoys, Frank notes, killed each other for 40 years, apparently motivated by concern for defending reputation. Either family could have stopped the cycle of violence at any time simply by not retaliating against the latest murder, but that would have led to the reputation that one could kill their clan members without fear of reprisal, and, from their perspective, that would not do.
Mathematical models can help us understand how reputation may have evolved as a survival advantage. Consider a model of the Ultimatum Game developed by Martin Nowak, Ph.D., and his colleagues. They created a computer simulation in which cyber-players find themselves in the role of either proposer or responder. Proposers offer some proportion of the money available, whereas responders have a minimum acceptable offer. Some proposers make fair offers and some do not; some responders have high acceptance thresholds, and others have low acceptance thresholds. How would each fare over the long term?
Nowak’s simulation assigns strategies that define how each cyber-player acts in both the proposer and responder roles: For example, one strategy might be to offer low when in the role of proposer but accept only high offers when in the role of responder. The computer simulation then keeps track of the success of different strategies for some number of “cybergenerations.” Strategies that do well increase their “evolutionary fitness” and their representation in the next generation, thus mimicking the process by which natural selection favors the spread through a species of behavior advantageous to survival and reproduction.
Nowak and his team found that, when no reputation was built into their game, the most favorable evolutionary solution was to offer low as a proposer and accept low as a responder. That is, the evolutionary solution matched the solution from a traditional rational man economic perspective. Once reputation was built into this model by giving a player information about his partner’s previous behavior, however, a different outcome emerged. Now the evolutionary solution was to make a fair offer (half the resource) and only accept fair offers, which is what people practice in modern neuroeconomic studies. In other words, people in the Ultimatum Game behave as the model predicts, but only when reputation is taken into account.
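A drastically simplified, no-reputation version of such a simulation can be written in a few dozen lines of Python. All parameter values here are my own choices for illustration, and Nowak's actual model differs in detail. Each strategy is a pair (p, q): offer the fraction p of the pot as proposer, and as responder accept only offers of at least q:

```python
import random

def evolve(pop_size=100, generations=300, pairings=5, mutation=0.01, seed=0):
    """Evolve Ultimatum Game strategies with no reputation mechanism.

    Each strategy is a pair (p, q): offer fraction p of the pot as
    proposer; as responder, accept only offers of at least q. Payoffs
    accumulate as fitness, and fitter strategies leave more (slightly
    mutated) copies in the next cybergeneration.
    """
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [0.01] * pop_size        # small baseline keeps weights positive
        for i, (p, _q) in enumerate(pop):
            for _ in range(pairings):
                j = rng.randrange(pop_size)
                if j == i:
                    continue
                if p >= pop[j][1]:         # responder j accepts i's offer
                    fitness[i] += 1 - p    # proposer keeps the rest of the pot
                    fitness[j] += p
        # Fitness-proportional reproduction with small mutations.
        parents = rng.choices(range(pop_size), weights=fitness, k=pop_size)
        pop = [
            (min(1.0, max(0.0, pop[k][0] + rng.gauss(0, mutation))),
             min(1.0, max(0.0, pop[k][1] + rng.gauss(0, mutation))))
            for k in parents
        ]
    return pop

final = evolve()
mean_offer = sum(p for p, _ in final) / len(final)
mean_threshold = sum(q for _, q in final) / len(final)
# In this no-reputation model, offers and thresholds tend to drift low.
print(f"mean offer {mean_offer:.2f}, mean threshold {mean_threshold:.2f}")
```

Adding reputation, as Nowak did, means conditioning a proposer's offer on the partner's record of past acceptances; that is the change that shifts the evolutionary outcome toward fair offers.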
Neuroeconomics is a young field, but its potential seems great. With the power of fMRI, PET, and computer simulation, and with the combined expertise of economists, social psychologists, neurobiologists, and evolutionary biologists, the prospects for a more fundamental understanding of human social dynamics are better than ever.