Friday, December 31, 2010

From the 2010 APA in Boston: Neuropsychology and ethics

This session featured a single speaker, Joshua Greene from Harvard, known for his research on "neuroethics," the neurological underpinnings of ethical decision making in humans. The title of Greene's talk was "Beyond point-and-shoot morality: why cognitive neuroscience matters for ethics."
Greene started out by acknowledging that there is a pretty strong line separating is and ought, but he contended that there are important points of contact, particularly when it comes to evaluating moral intuitions. Still, he was clear that neither neuroscience nor experimental philosophy will solve ethical problems.
What Greene is interested in is finding out which factors moral judgment is sensitive to, and whether those are the relevant factors. He presented his dual process theory of morality, and in this respect proposed an analogy with a camera. Cameras have automatic (point and shoot) settings as well as manual controls. The first mode is good enough for most purposes; the second allows the user to fine-tune the settings more carefully. The two modes allow for a nice combination of efficiency and flexibility.
The idea is that the human brain also has two modes: a set of efficient automatic responses, and a manual mode that makes us more flexible in response to non-standard situations. A non-moral example is our response to potential threats. Here the amygdala is very fast and efficient at focusing on potential threats (e.g., the outline of eyes in the dark), even when there actually is no threat (it's a controlled experiment in a lab, with no lurking predator around).
Delayed gratification illustrates the interaction between the two modes. The brain is drawn to immediate rewards, no matter what kind. However, when larger rewards will eventually become available, other parts of the brain come into play to (sometimes) override the immediate urge.
When it comes to moral judgment, Greene's research shows that our automatic setting is "Kantian," meaning that our intuitive responses are deontological and rule-driven. The manual setting, on the other hand, tends to be more utilitarian / consequentialist. Accordingly, the first mode involves emotional areas of the brain, while the second involves more cognitive areas.
The evidence comes from the (in)famous trolley dilemma and its many variations. I will not detail the experiments here, since they are well known. The short version is that when people refuse to intervene in the footbridge (as opposed to the lever) version of the dilemma, they do so because of a strong emotional response, which contradicts the otherwise utilitarian calculus they apply when considering the lever version.
Interestingly, psychopaths turn out to be more utilitarian than normal subjects, presumably not because consequentialism is inherently pathological, but because their emotional responses are stunted. Mood also affects the results: people exposed to comedy (to enhance their mood), for instance, are more likely to say that it is okay to push the man off the footbridge.
In a more recent experiment, subjects were asked to say which action carried the better consequences, which made them feel worse, and which was overall morally acceptable. The idea was to separate the cognitive, emotional and integrative aspects of moral decision making. Predictably, activity in the amygdala correlated with deontological judgment, activity in more cognitive areas was associated with utilitarianism, and different brain regions became involved in integrating the two.
Another recent experiment used visual vs. verbal descriptions of moral dilemmas. It turns out that more visual people tend to respond emotionally / deontologically, while more verbal people are more utilitarian.
Also, studies show that interfering with moral judgment by engaging subjects in a concurrent cognitive task slows down (though it does not reverse) utilitarian judgment, but has no effect on deontological judgment. Again, this agrees with the conclusion that utilitarian judgment is the result of cognition, while deontological judgment is the result of emotion.
Nice to know, by the way, that when experimenters controlled for the "real world expectations" people have about trolleys, or when they used more realistic scenarios than trolleys and bridges, the results did not vary. In other words, trolley thought experiments are actually informative, contrary to popular criticisms.
What factors affect people's decision making in moral judgment? The main one is proximity: people feel much stronger obligations if they are present at the event posing the dilemma, or even relatively near it (a disaster in a nearby country), as opposed to far away (a country on the other side of the world).
Greene's general conclusion is that neuroscience matters to ethics because it reveals the hidden mechanisms of human moral decision making. However, he says this is interesting to philosophers because it may lead them to question ethical theories that are implicitly or explicitly based on such judgments. But neither philosophical deontology nor consequentialism is in fact based on common moral judgments, it seems to me; they are the result of explicit analysis. (Though Greene raises the possibility that some philosophers engage in rationalizing rather than reasoning, as in Kant's famously convoluted idea that masturbation is wrong because one is using oneself as a means to an end...)
Of course this is not to say that understanding moral decision making in humans isn't interesting, or even helpful in real life cases. An example of the latter is the common moral condemnation of incest, an emotional reaction that probably evolved to avoid genetically diseased offspring. It follows that science can tell us that there is nothing morally wrong in cases of incest where precautions have been taken to avoid pregnancy (and assuming the psychological consequences are also accounted for). Greene puts this in terms of science helping us to transform difficult ought questions into easier ought questions.
A personal question at the end of all this: if emotional ethical judgment is "deontological" and cognitive judgment is utilitarian, could it be that the integration of the two brings us closer to behaving in a way consistent with virtue ethics? Something to ponder, methinks.

4 comments:

  1. http://en.wikipedia.org/wiki/Trolley_problem

  2. It's worth pointing out that many if not most utilitarians are rule-utilitarians: they advocate rules that will in general / on average lead to the greatest good for the greatest number, rather than having everyone constantly engage in a detailed utility calculus before each decision. So the dichotomy between Kantian and utilitarian becomes blurred when we talk about how people do or should make decisions (very different ethical theories could lead to the same or similar decision protocols).

    If human behaviour can be generalized in the way suggested, then this is likely an evolved response, and the basic and persistent nature of the "emotional" ethical rules suggests this mode is adaptive. This further supports the rule-utilitarian point that obeying rules which are not optimal in every single case can be better than trying to calculate an individually optimal strategy each time (at least for optimizing inclusive fitness, if not the true utility function).

    Obviously these sorts of discussions do help inform the debate and matter a great deal to applied ethics, but it's important to keep the nuances in mind.

    As to your question, well Aristotle did say that virtue was the mean between two extremes...

  3. To separate out the disgust component of the distaste for incest, note that couples who have one or more children with Down's Syndrome do not incur society's wrath if they try for more children, even though subsequent children have a higher risk of birth defects (though I'm not sure what the relative magnitude of that risk is compared to the offspring of incestuous unions).

