Friday, December 31, 2010

From the 2010 APA in Boston: Neuropsychology and ethics

This session featured a single speaker, Joshua Greene from Harvard, known for his research on "neuroethics," the neurological underpinnings of ethical decision making in humans. The title of Greene's talk was "Beyond point-and-shoot morality: why cognitive neuroscience matters for ethics."
Greene started out by acknowledging that there is a pretty strong line separating is and ought, but he contended that there are important points of contact, particularly when it comes to evaluating moral intuitions. Still, he was clear that neither neuroscience nor experimental philosophy will solve ethical problems.
What Greene is interested in is finding out which factors moral judgment is sensitive to, and whether it is sensitive to the relevant ones. He presented his dual-process theory of morality, proposing an analogy with a camera. Cameras have automatic (point-and-shoot) settings as well as manual controls. The first mode is good enough for most purposes, while the second allows the user to fine-tune the settings more carefully. Together, the two modes provide a nice combination of efficiency and flexibility.
The idea is that the human brain also has two modes: a set of efficient automatic responses and a manual mode that makes us more flexible in response to non-standard situations. His non-moral example is our response to potential threats. Here the amygdala is very fast and efficient at focusing on potential threats (e.g., the outline of eyes in the dark), even when there actually is no threat (say, in a controlled lab experiment, with no lurking predator around).
Delayed gratification illustrates the interaction between the two modes. The brain is attracted to immediate rewards, no matter the kind. However, when larger rewards will eventually become available, other parts of the brain come into play to override (sometimes) the immediate urge.
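(To make the two-modes idea concrete, here is a toy sketch of a delayed-gratification choice. This is my own illustration, not Greene's model; the exponential discount rate and the override rule are assumptions made up for the example.)

import math

def automatic_choice(reward_now, reward_later, delay_days):
    # Fast, "point-and-shoot" system: grabs whatever is immediate.
    return "take it now"

def deliberate_choice(reward_now, reward_later, delay_days, rate=0.03):
    # Slow, controlled system: weighs the immediate reward against a
    # discounted value of the larger, later one.
    discounted_later = reward_later * math.exp(-rate * delay_days)
    return "wait" if discounted_later > reward_now else "take it now"

def decide(reward_now, reward_later, delay_days, control_engaged=True):
    # When engaged, the controlled system can (sometimes) override the
    # automatic pull toward the immediate reward.
    if control_engaged:
        return deliberate_choice(reward_now, reward_later, delay_days)
    return automatic_choice(reward_now, reward_later, delay_days)

print(decide(10, 50, delay_days=30, control_engaged=False))  # take it now
print(decide(10, 50, delay_days=30, control_engaged=True))   # wait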
When it comes to moral judgment, Greene's research shows that our automatic setting is "Kantian," meaning that our intuitive responses are deontological, rule-driven. The manual setting, on the other hand, tends to be more utilitarian / consequentialist. Accordingly, the first mode involves emotional areas of the brain, while the second involves more cognitive areas.
The evidence comes from the (in)famous trolley dilemma and its many variations. I will not detail the experiments here, since they are well known. The short version is that when people refuse to intervene in the footbridge (as opposed to the lever) version of the dilemma, they do so because of a strong emotional response, which contradicts the otherwise utilitarian calculus they make when considering the lever version.
Interestingly, psychopaths turn out to be more utilitarian than normal subjects, presumably not because consequentialism is inherently pathological, but because their emotional responses are stunted. Mood also affects the results: people exposed to comedy (to enhance mood), for instance, are more likely to say that it is okay to push the guy off the footbridge.
In a more recent experiment, subjects were asked to say which action carried the better consequences, which made them feel worse, and which was overall morally acceptable. The idea was to separate the cognitive, emotional and integrative aspects of moral decision making. Predictably, activity in the amygdala correlated with deontological judgment, activity in more cognitive areas was associated with utilitarianism, and different brain regions became involved in integrating the two.
Another recent experiment used visual vs. verbal descriptions of moral dilemmas. It turns out that more visual people tend to respond emotionally / deontologically, while more verbal people are more utilitarian.
Also, studies show that interfering with moral judgment by engaging subjects in a cognitive task slows down (though it does not reverse) utilitarian judgment, but has no effect on deontological judgment. This again agrees with the conclusion that the former is the result of cognition, the latter of emotion.
Nice to know, by the way, that when experimenters controlled for the "real world expectations" that people have about trolleys, or when they used more realistic scenarios than trolleys and bridges, the results didn't vary. In other words, trolley thought experiments are actually informative, contrary to popular criticisms.
What factors affect people's decision making in moral judgment? The main one is proximity, with people feeling much stronger obligations if they are present at the event posing the dilemma, or even relatively near it (a disaster in a nearby country), as opposed to far away (a country on the other side of the world).
Greene's general conclusion is that neuroscience matters to ethics because it reveals the hidden mechanisms of human moral decision making. However, he says this is interesting to philosophers because it may lead them to question ethical theories that are implicitly or explicitly based on such judgments. But neither philosophical deontology nor consequentialism is in fact based on common moral judgments, it seems to me; they are the result of explicit analysis. (Though Greene raises the possibility that some philosophers engage in rationalizing rather than reasoning, as in Kant's famously convoluted idea that masturbation is wrong because one is using oneself as a means to an end...)
Of course this is not to say that understanding moral decision making in humans isn't interesting, or even helpful in real life cases. An example of the latter is the common moral condemnation of incest, which is an emotional reaction that probably evolved to avoid genetically diseased offspring. It follows that science can tell us that there is nothing morally wrong in cases of incest where precautions have been taken to avoid pregnancy (and assuming psychological reactions are also accounted for). Greene puts this in terms of science helping us to transform difficult ought questions into easier ought questions.
Personal question at the end of all this: if emotional ethical judgment is "deontological" and cognitive judgment is utilitarian, could it be that the integration of the two brings us closer to behaving in a way consistent with virtue ethics? Something to ponder, methinks.

Wednesday, December 29, 2010

From the 2010 APA in Boston: Teleological thinking in scientific explanations

The first talk of this session was by Devin Henry, Western Ontario. Henry examined Plato's and Aristotle's accounts of teleology in light of the concept of optimization. In the Phaedo, Socrates says that we need to inquire into what is the best way for things to be, a research program stemming from the idea that the universe was put together by a mind aiming at what is best (because that mind is supremely good). The universe is the way it is by necessity, because that is the best way for things to be, and identifying that necessity is what explains a given phenomenon.
This idea is seen by the author as the ancestor of Aristotle's ideas on the subject, including the principle that nature does nothing in vain. It also follows that being the best is in accordance with nature. However, there are important differences between Plato and Aristotle. For instance, Socrates makes his argument at the cosmological level: the good is the good of the whole cosmos, not of individuals (indeed, it is the other way around: individuals exist for the good of the cosmos). Aristotle doesn't invoke a cosmological principle: what is good for the organism is good for that organism, not for the broader context of the cosmos.
A second difference is that Plato clearly speaks of an intelligent designer. While Aristotle's language is full of design talk, his personification of nature is only metaphorical, like Darwin's. Aristotle's form of teleology is seen in his analysis of why snakes do not have legs. Nature does nothing in vain while doing the best for the organism: if the length of a snake is a built-in feature, and if no blooded animal can move with more than four points of leverage (as Aristotle thought), then having no legs is better than having some legs (since a centipede-type solution wouldn't work for blooded animals).
Aristotle even criticized what today we would label a Panglossian view of the world: things are the best they can be, not the best they can be conceived to be. (Again, this is close to modern biologists' conception of constraints, with the author citing the Gould & Lewontin paper on spandrels.) So Aristotle's concept of teleology is based on optimality, not perfection.
In his analysis of the testes, for instance, Aristotle claims that we need to understand the function of the organ in order to understand its form. Again, a remarkably modern-sounding connection between form and function. Aristotle was aware that some species of animals (fish) don't have testes, which means that testes cannot be essential for reproduction, and yet must somehow make reproduction work better in the animals that do have them. (Aristotle's specific explanation, that the testes slow down sperm production, is not the correct one, of course, but the general idea still guides functional biology today.)
The second talk was by Jeffrey McDonough, Harvard. A teleological explanation purports to explain something in terms of its outcome. In the ancient and early medieval periods the range of teleological explanations was broad, encompassing not just rational beings but living beings more generally, and even features of the cosmos at large.
For Plato, as well as for Augustine and Aquinas, goodness is prior to being: the universe exists because it is good; it isn't good merely as a consequence of existing. So goodness figures into explanations of why things are. Also, in this view, teleological explanations are just as appropriate as, if not better than, efficient explanations.
This ancient view, however, seemed to commit one to some sort of moral necessitarianism, where god simply has to do what is good, in contradiction with the classic Christian view of divine agency. In later medieval and early modern views, from Scotus to Boyle to Descartes, we see the concept of a libertarian will, where one could choose something that is not best. This means, however, that one can no longer explain what the agent does by considering the outcome. It is the will's efficient decision that becomes central to explanation.
This quickly led philosophers to give up teleological explanations (final causes) for anything that is not a rational agent (god, angels, and human beings), and hence to a mechanistic view of everything else, a la Descartes.
In more modern times, Spinoza is considered the ultimate enemy of teleology and final causes, again with the exception of rational agents. However, Spinoza was also a naturalist, and it is difficult to justify limiting teleology to only a particular subset of natural entities; accordingly, for him there is no sharp distinction between rational and non-rational agents. Spinoza also rejected the idea of objective goodness, which means that one cannot invoke goodness as explanatory. For Spinoza we do not strive toward certain things because we think them valuable; on the contrary, we think certain things valuable because we happen (by our nature) to want them.
Leibniz, on the other hand, presented himself as a strong defender of teleology, in important ways harking back to the Greeks. God here does things because they are good, but he has to consider total goodness, and so chooses whatever maximizes good overall, which may not necessarily be good for each individual. Leibniz therefore opens himself again to the problems of moral determinism (for finite agents) and moral necessitarianism (for god). Hence some of his compatibilist maneuvering when it comes to free will.
Overall, it seems to me that this session was badly titled, as neither talk (and particularly the second one!) had much to do with scientific explanations, certainly not in the modern sense of the term. Oh well.

From the 2010 APA in Boston: Social networking and philosophy

The APA meeting in Boston is turning into a disaster because of the weather: many sessions have been canceled because speakers couldn’t get to this frozen hell, while other sessions are being run by substitute speakers gathered at the last minute, with some presenting talks that only have a vague connection to whatever it was that the original session was supposed to be about.
This particular session was billed as having to do with how Twitter is changing the connectedness of philosophical communities, but turned out to be about social networking more broadly. Neither of the original speakers was present, and neither of the two replacement talks was about Twitter specifically. Oh well.
The first speaker was Casey Haskins (SUNY Purchase), who announced that he was going to talk about aesthetics and interconnected communities (though in the end aesthetics didn't really make much of an appearance, probably a good thing). I find it amazing that someone would give a talk about Twitter, Facebook, and RSS feeds while freely admitting that he doesn't know much about them and has in fact just started using them.
"Small worlds" (the term Haskins uses for social networks) can be thought of as analogous to biological ecosystems that exchange information instead of organic materials. They are media that allow our "extended minds."
The guy was all over the place, using the term "good ideas" to talk about things ranging from Twitter to the evolution of coral reefs (apparently, nature can have ideas too, though what determines whether Twitter and corals are "good" isn't clear).
Reefs are then conceptualized as "platforms," apparently in the engineering sense (like Twitter!), structures that make it possible for other things to happen. He suggests an analogy between information flow on Twitter and material flow in biological ecosystems. I couldn’t be more unconvinced.
Twitter is presented as a "cultural exaptation" of a more primitive text-based system (hence the 140-character limit). Of course this is an example of an intentional exaptation, and hence yet another disanalogy with biology. I wonder what's up with some philosophers' biology envy. Someone should do a sociological study on this.
The second co-opted speaker was Saray Ayala (Universitat Autonoma de Barcelona). She talked about whether the computational theory of mind accounts for the “extended mind” (again!) made possible by environmental inputs, including social networks. Social networks (as environmental structures) may impose constraints on the functioning of our minds, and some of these constraints may not be computable.
She brings up an interesting example of robots that literally "embody" the ability to carry out simple computations, by virtue of the way they are physically put together. A particular morphology of the robot plays the role of the hidden layer in a three-layer system producing a logical XOR function (the other two layers being the input and the output).
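(To see why the hidden layer matters, here is a minimal sketch of my own, not the robot setup itself: XOR is not linearly separable, so a network needs a hidden layer to compute it, and in the robots Ayala described the physical morphology is said to play that role.)

def step(x):
    # Threshold unit: fires (1) only when its input exceeds zero.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit behaves like OR, the other like AND.
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer: OR-but-not-AND, i.e. XOR.
    return step(h_or - h_and - 0.5)

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", xor_net(*pair))   # prints 0, 1, 1, 0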
The author then suggests that a computational theory of mind does not explain the environmental contribution of social networks to the mind, because the theory treats the environment as background, passive with respect to computation, rather than as a structural component of what the mind does. Well, I'm not too sympathetic to computational theories of mind anyway, so I'll need to look into this.