The emerging moral psychology
Experimental results are beginning to shed light on the psychological foundations of our moral beliefs

Long thought to be a topic of enquiry within the humanities, the nature of human morality is increasingly being scrutinised by the natural sciences. This shift is now beginning to provide impressive intellectual returns on investment. Philosophers, psychologists, neuroscientists, economists, primatologists and anthropologists, all borrowing liberally from each other's insights, are putting together a novel picture of morality, a trend that University of Virginia psychologist Jonathan Haidt has described as the "new synthesis in moral psychology." The picture emerging shows the moral sense to be the product of biologically evolved and culturally sensitive brain systems that together make up the human "moral faculty."

Hot morality

A pillar of the new synthesis is a renewed appreciation of the powerful role played by intuitions in producing our ethical judgements. Our moral intuitions, argue Haidt and other psychologists, derive not from our powers of reasoning, but from an evolved and innate suite of “affective” systems that generate “hot” flashes of feelings when we are confronted with a putative moral violation.

This intuitionist perspective marks a sharp break from traditional "rationalist" approaches in moral psychology, which gained a large following in the second half of the 20th century under the stewardship of the late Harvard psychologist Lawrence Kohlberg. In the Kohlbergian tradition, moral verdicts derive from the application of conscious reasoning, and moral development throughout our lives reflects our improving ability to articulate sound reasons for those verdicts. The highest stages of moral development are reached when people can reason about abstract general principles, such as justice, fairness and the Kantian maxim that individuals should be treated as ends and never merely as means.

But experimental studies give cause to question the primacy of rationality in morality. In one experiment, Haidt presented people with a range of peculiar stories, each of which depicted behaviour that was harmless (in that no sentient being was hurt) but which also felt "bad" or "wrong." One involved a son who promised his mother, while she was on her deathbed, that he would visit her grave every week, and then reneged on his commitment because he was busy. Another told of a man buying a dead chicken at the supermarket and having sex with it before cooking and eating it. These weird but essentially harmless acts were nonetheless, by and large, deemed to be immoral.

Further evidence that emotions are in the driving seat of morality surfaces when people are probed on why they take their particular moral positions. In a separate study that asked subjects for their ethical views on consensual incest, most people intuitively felt that incestuous sex is wrong, but when asked why, many simply gave up, saying, "I just know it's wrong!", a phenomenon Haidt calls "moral dumbfounding."

It’s hard to argue that people are rationally working their way to moral judgements when they can’t come up with any compelling reasons—or sometimes any reasons at all—for their moral verdicts. Haidt suggests that the judgements are based on intuitive, emotional responses, and that conscious reasoning comes into its own in creating post hoc justifications for our moral stances. Our powers of reason, in this view, operate more like a lawyer hired to defend a client than a disinterested scientist searching for the truth.

Our rational and rhetorical skill is also recruited from time to time as a lobbyist. Haidt points out that the reasons—whether good or bad—that we offer for our moral views often function to press the emotional buttons of those we wish to bring around to our way of thinking. So even when explicit reasons appear to have the effect of changing people’s moral opinions, the effect may have less to do with the logic of the arguments than their power to elicit the right emotional responses. We may win hearts without necessarily converting minds.

A tale of two faculties

Even if you recognise the tendency to base moral judgements on how moral violations make you feel, you would probably also like to think that you have some capacity to think through moral issues, to weigh up alternative outcomes, and to make a call on what is right and wrong.

Thankfully, neuroscience gives some cause for optimism. Philosopher-cum-cognitive scientist Joshua Greene of Harvard University and his colleagues have used functional magnetic resonance imaging to map the brain as it churns over moral problems, inspired by a classic pair of dilemmas from the annals of moral philosophy called the Trolley Problem and the Footbridge Problem. In the first, an out-of-control trolley is heading down a rail track, ahead of which are five hikers unaware of the looming threat. On the bank where you’re standing is a switch that, if flicked, will send the trolley on to another track on which just one person is walking. If you do nothing, five people die; flick the switch and just one person will die.

To flick or not to flick: what would you do? Like 90 per cent of people, you probably looked at the numbers (saving five and losing one, versus losing five) and decided to hit the switch. Now consider the Footbridge Problem: again, a trolley is heading towards five unsuspecting hikers, but this time there is no switch you can throw to save them. The only way to stop the trolley is to place a heavy weight in its path. Unfortunately, the only sufficiently weighty object nearby is a large man standing on the footbridge with you. Do you push him in front of the trolley, and to his death, to save the five hikers? Or is this beyond the pale? Is inaction now mandated?

Even though the numbers are the same as before (losing one life or losing five), most people feel differently about this dilemma: now a clear majority (70–90 per cent in most studies) say it is not morally permissible to push the man, and those who do say it is permissible tend to take longer to reach their decision than when reflecting on the Trolley Problem.

What is going on in the brain when people mull over these different scenarios? Thinking through cases like the Trolley Problem (what Greene calls an impersonal moral dilemma, as it involves no direct violence against another person) increases activity in regions of the prefrontal cortex associated with deliberative reasoning and cognitive control (so-called executive functions). This pattern of activity suggests that impersonal moral dilemmas such as the Trolley Problem are treated as straightforward rational problems: how to maximise the number of lives saved. By contrast, brain imaging of the Footbridge Problem (a personal dilemma involving up-close, personal violence) tells a rather different story. Along with the regions activated in the Trolley Problem, areas known to process negative emotional responses also crank up their activity. In these more difficult dilemmas, people take much longer to make a decision, and their brains show patterns of activity indicating increased emotional and cognitive conflict as the two appalling options are weighed up.

Greene interprets these different activation patterns, and the relative difficulty of making a choice in the Footbridge Problem, as signs of conflict within the brain. On the one hand is a negative emotional response, elicited by the prospect of pushing a man to his death, saying "Don't do it!"; on the other, cognitive elements saying "Save as many people as possible and push the man!" For most people thinking about the Footbridge Problem, emotion wins out; in a minority, the utilitarian conclusion of maximising the number of lives saved prevails.

To further explore the causal role of emotions in generating a normal pattern of moral judgements, neuroscientist Antonio Damasio of the University of Southern California and colleagues have looked at the effect on moral judgement of damage to a part of the brain called the ventromedial prefrontal cortex (VMPC), a region previously implicated in processing negative social emotions. Faced with the Trolley Problem, patients with VMPC damage chose like most people with intact brains, opting to flick the switch to save five lives at the expense of one. But in the Footbridge Problem they took a coldly rational, utilitarian approach, saying it was morally permissible to push the large man in front of the trolley (applying the same "one for five" calculus).

These findings fit Greene's dual-processing view of competing affective–cognitive systems. Damage to the VMPC, by impairing the emotional system, makes little difference in the Trolley Problem, which involves an impersonal action. But in the Footbridge Problem, patients with VMPC damage have no counterbalancing emotional voice to question the wisdom of rationality's precepts, and the utilitarian calculus carries the day.
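To see the logic of this dual-process account in one place, here is a minimal sketch in Python. It is purely illustrative, not Greene's model or anyone's published code: the function name, the 6.0 aversion weight and the subtraction-based comparison are hypothetical choices of mine, picked only so the toy reproduces the qualitative pattern reported above (impersonal dilemmas decided by the numbers, personal dilemmas vetoed by emotion, and the veto vanishing when the emotional signal is knocked out).

# A toy dual-process model (an illustration only; all names and
# weights below are invented for the example). A verdict emerges from
# competition between a deliberative "save the most lives" signal and
# a fast emotional aversion to up-close, personal harm.

def moral_verdict(lives_saved, lives_lost, personal, vmpc_intact=True):
    """Return a verdict for a sacrificial dilemma under the toy model."""
    # Deliberative signal: the simple utilitarian calculus.
    utilitarian_pull = lives_saved - lives_lost
    # Emotional signal: strong aversion fires only for personal harm;
    # VMPC damage is modelled as silencing this signal entirely.
    emotional_aversion = 6.0 if (personal and vmpc_intact) else 0.0
    return "permissible" if utilitarian_pull > emotional_aversion else "impermissible"

# Trolley Problem: impersonal, so the calculus dominates.
print(moral_verdict(5, 1, personal=False))                    # permissible
# Footbridge Problem: the emotional alarm outweighs the numbers...
print(moral_verdict(5, 1, personal=True))                     # impermissible
# ...unless that alarm is absent, as in the VMPC patients.
print(moral_verdict(5, 1, personal=True, vmpc_intact=False))  # permissible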

Read the rest on Prospect Magazine.

Tags: ethics, moral psychology, morality, morals, psychology


Replies to This Discussion

Check out the videos (and transcripts) of a related Edge conference: http://www.edge.org/3rd_culture/morality10/morality10_index.html

Alex McCullie
Thanks Alex. I think I remember seeing that posted before the actual conference, but I haven't been back to see that they have videos and MP3s. I can't view the vids, but I'll download the MP3s later. Thanks again.
