The twentieth century was marked by atrocities and genocide, and is often said to be the most violent era of human history. Beginning in 1915, the Ottoman Empire systematically exterminated 1.5 million Armenians; during World War II, some 6 million Jews perished in the Nazi Holocaust; Stalin’s regime in the Soviet Union was responsible for approximately 20 million deaths, and Mao Zedong’s in China for 40 million; and as the century drew to a close, ethnic cleansing in Rwanda, Bosnia, and elsewhere killed millions more.
The people responsible for these horrific events were, for the most part, seemingly ordinary individuals. What might make ‘normal’ people transform into perpetrators of repetitive, extreme violence? Researchers discussed the potential neurological underpinnings of such behaviors at The Brains that Pull the Triggers conference in Paris last week, convened by Itzhak Fried, a professor of neurosurgery at the University of California, Los Angeles.
Tania Singer, a professor of social neuroscience at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, presented evidence that social emotions such as empathy and compassion are regulated by beliefs and context, and that we can learn, through mental training, to act with more empathy and compassion.
Empathy is defined as the ability to share the feelings of others, and is widely believed to be based on shared neural representations of emotional states, which enable us to put ourselves “in someone else’s shoes.” We do not always show empathy, however; sometimes we take pleasure in other people’s pain and suffering – a phenomenon called schadenfreude – and this may be due in part to how we perceive those people, Singer said.
Singer and her colleagues have been investigating the neurological basis of these emotions. In 2012, they published a brain-scanning study showing that neural responses to suffering predict individual differences in altruistic behavior. They scanned soccer fans’ brains while the fans viewed film clips of a supporter of either their favorite team or a rival team in pain, and then gave them the choice of reducing the other person’s pain by enduring pain themselves.
Seeing a fan of their favorite team in pain was associated with increased activity in a brain region called the insula, which predicted their subsequent helping behavior. In contrast, seeing a fan of a rival team in pain led to increased activation of the brain’s reward circuitry, and this predicted their failure to help.
“Instead of feeling empathy with the out-group, what they now felt was reward, and they kind of rejoiced by seeing the other people suffering,” said Singer. This does not necessarily mean that perpetrators of mass violence rejoice at killing, however. “In-group empathy fosters co-operation and helping, but [increased] reward signalling inhibits empathy and predicts the lack of helping behavior.”
Research from Singer’s group also shows that the brain systems serving empathy and compassion are “plastic,” or malleable, such that their structure and function can be altered by experience. In one early study, the researchers showed that short-term compassion training increased prosocial behavior in a specially designed game. (This free e-book, co-authored by Singer, describes compassion training programs.)
More recently, Singer and her colleagues launched the ReSource Project, a large-scale study in which participants underwent a series of 3-month training modules focusing on attention-based mindfulness, prosocial motivation and compassion, and perspective-taking on themselves and others.
“We see specific effects [for each module], not only for behavior, but also at the level of grey matter plasticity,” said Singer, describing the first results from the study. “We see this huge increase in thickness of grey matter in the insula after three months of compassion training, and this is predictive of behavior, so those with the biggest increases were also the most compassionate and caring after the training.”
A New Model of Morality
Traditionally, antisocial behavior was explained by a failure to inhibit impulses, but Molly Crockett, a neuroscientist at the University of Oxford, offered new evidence for an alternative explanation based on how the brain represents values.
In a study published earlier this month, Crockett and her colleagues recruited 28 pairs of participants. One of each pair was randomly assigned to the role of “decider,” and was asked to choose between giving an electric shock to themselves or to their anonymous partner for financial gain.
Most of the study participants chose to deliver electric shocks to themselves rather than to profit from inflicting pain on others–but about one-third bucked this trend: “We see that people require more money to harm other people than to harm themselves,” said Crockett. “We had participants who refused to deliver shocks to another person, even for a profit of £20, but at the other end of the spectrum, we had people who were willing to deliver 20 shocks for a profit of just 10 pence.”
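The wide range Crockett describes – from refusing shocks even for £20 to accepting 20 shocks for 10 pence – is the kind of trade-off that harm-aversion models in her earlier work capture with a single weighting parameter. A minimal sketch of such a model (the utility form follows her published framework, but the parameter values and function names here are illustrative, not fitted study data):

```python
# Hedged sketch of a harm-aversion trade-off: a "decider" weighs money
# gained against shocks delivered to another person. kappa is the weight
# placed on avoiding harm (near 1 = strongly harm-averse, near 0 = money-
# driven). Values below are illustrative, not the study's fitted estimates.

def subjective_value(money_pounds, n_shocks, kappa):
    """Value of an offer: (1 - kappa) * money - kappa * shocks."""
    return (1 - kappa) * money_pounds - kappa * n_shocks

def accepts(money_pounds, n_shocks, kappa):
    """The decider accepts the offer if its subjective value is positive."""
    return subjective_value(money_pounds, n_shocks, kappa) > 0

# A strongly harm-averse decider turns down £20 for 20 shocks to another:
print(accepts(20.0, 20, kappa=0.6))    # False: 0.4*20 - 0.6*20 = -4

# A money-driven decider accepts 10 pence (£0.10) for 20 shocks:
print(accepts(0.10, 20, kappa=0.004))  # True: 0.0996 - 0.08 = 0.0196
```

The single parameter makes the behavioral spectrum explicit: the same decision rule produces both refusal and acceptance depending only on how heavily harm to the other person is weighted.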
Using functional neuroimaging, the researchers could trace a network of brain regions activated during the decision-making process. In subjects who chose to hurt themselves over others, a brain region called the striatum, which includes the brain’s reward centre, responded less to money gained from harming others than to money gained from delivering shocks to themselves, suggesting that ill-gotten gains were less rewarding to them.
Another brain region, the lateral prefrontal cortex, responded to profits gained from harming others, but not oneself, and was most active when participants delivered shocks to others for a small profit. Thus, it appears to encode the blameworthiness of a choice, and to influence the decision-making process by reducing the rewarding effects of immoral gains.
Crockett also described unpublished data from another set of experiments. These involved asking participants to make exactly the same decisions; this time, however, they were told that they could donate any profits made from inflicting pain on others to charity.
Knowing that they could do so seemed to abolish their moral preferences, making them more willing to harm their anonymous partner. “The very same mechanism that makes us avoid harming others for our own benefit could actually drive us to harm others for a perceived greater good,” said Crockett, “and that could explain why people are willing to inflict harm in the service of an idea, or the notion that they are actually doing good.”