Saturday, October 27, 2012

From the naturalism workshop, part II


by Massimo Pigliucci

Second day of the workshop on “Moving Naturalism forward,” organized by cosmologist Sean Carroll. Today we started with Steven Weinberg (Nobel in physics) introducing his thoughts about morality. Why is a physicist talking about morality, you may ask? Good question, I reply, but let’s see...

The chair of the session was Rebecca Goldstein, who mentioned that she doesn’t find the morality question baffling at all. For her, moral reasoning is something that we have been doing for a long time, and moreover an area where philosophy has clearly made positive and incremental contributions throughout human history. She of course accepts the idea of a naturalistic origin for morality, but immediately added that evolutionary psychological accounts are simply not enough. In the process, she managed to both appreciate and criticize the work of Jonathan Haidt on the different dimensions of liberal vs conservative moral reasoning.

Weinberg agreed with Goldstein’s broad claim that we can reason about morality, but was concerned with the question of whether we can ground morality in science, and particularly in the theory of evolution. He declared that he has been “thoroughly annoyed” by Sam Harris’ book on scientific answers to moral questions. He went on to observe that most people don’t actually have a coherent set of moral principles, nor do they need one. Weinberg said that early on in his life he was essentially a utilitarian, thinking that maximization of happiness was the logical moral criterion. Then he read Huxley’s Brave New World, and he was disabused of such a simplistic notion. Which is yet another reason he didn’t find Harris compelling, considering that the latter is a self-described utilitarian.

Weinberg also criticized utilitarianism by rejecting Peter Singer-style arguments to the effect that more good would be done in the world by living on bare minimum necessities and giving away much of your income to others. Weinberg argued instead that we owe loyalty to our family and friends, and that there is nothing immoral about preferring their welfare to the welfare of strangers. Indeed, although I don’t think he realized it, he was essentially espousing a virtue ethics / communitarian position. Weinberg concluded from his analysis that we “ought to live the unexamined life” instead, because that’s what the human condition leads us to.

Goldstein’s response was that we don’t need grounding postulates to engage in fruitful moral reasoning, and I of course agree. I pointed out that ethics is about developing reasonable ways to think about moral issues, starting with (and negotiating) certain assumptions about human life. In my book, for instance, Michael Sandel’s writings are excellent examples of how to engage in fruitful moral reasoning without having to settle the sort of metaethical issues that worry Weinberg (interestingly, and gratifyingly, I saw Jerry Coyne nodding somewhat vigorously while I was making my points). Dennett added that there are ways of thinking through issues that do not involve fact finding, but rather explore the logical consequences of certain possible courses of action — which is why moral philosophy is informed by facts (even scientific facts), but not determined by them. And for Dennett, of course, we — meaning humanity at large — are the ultimate arbiters of what works and doesn’t work in the ethical realm.

Dawkins agreed with Goldstein that there has been moral progress, and that we live in a significantly improved society in the 21st century compared to even recent times, let alone of course the Middle Ages. Dawkins also mentioned Steven Pinker’s work demonstrating a steady decrease in violence throughout human history (Goldstein humorously pointed out that Pinker got the idea from her). Dawkins also made the good point that we talk about morality as if it were only a human problem because all other species of Homo went extinct. Had that not been the case, we might be having a somewhat different conversation.

Both Weinberg and Goldstein agreed that a significant amount of moral progress comes from literature, and more recently movies. Things like Uncle Tom’s Cabin, or Sidney Poitier’s role in Guess Who’s Coming to Dinner, have the power to help change people’s attitudes about what is right and what is wrong.

Which led to my comment about Hume and Aristotle. I think — with these philosophers — that moral reasoning is grounded in a broadly construed conception of human nature. Aristotle emphasized the importance of community environment, and particularly of one’s family and early education environment; but also of reflection and conscious attempts at improving. Hume agreed that basic human instincts are a mix of selfish and cooperative ones, but also argued that human nature itself can change over time, as a result of personal reflection and community wide conversations.

Carroll noted a surprising amount of agreement in the group about the fact that morality arose naturally because we are large brained social animals with certain needs, emotions and desires; but also about the fact that factual information and deliberate reflection can both improve our lot and the way we engage in moral reasoning. Owen Flanagan, however, pointed out that most people outside of this group do think of morality in a foundational sense, which is untenable from a naturalistic perspective. Owen went on to remind people that David Hume — after the famous passage warning about the logical impossibility of deriving oughts from is — went on to engage in quite a bit of moral reasoning nonetheless, simply doing so without pretending that he was demonstrating things.

Weinberg claimed that he cannot think of a way to change other people’s minds about moral priorities when there is significant disagreement. But Dennett pointed out that we do this all the time: we engage in societal conversations with the aim of persuading others, and in so doing we are changing their nature. That is, for instance, how we made progress on issues such as women’s rights, gay rights, or animal welfare (as Goldstein had already pointed out).

Terrence Deacon remarked that there was an elephant in the room: how is it that this group agrees so broadly about morality, if a good number of them are also fundamental reductionists? Isn’t moral reasoning an emergent property of human societies? That is indeed a good question, and I always wonder how people like Coyne or Rosenberg (or Harris, who was invited but couldn’t make it to the workshop) can at the same time hold essentially nihilistic views about existence and yet talk about good and bad things and what we should (or ought to) do about them. Carroll agreed that we should be using the emergence vocabulary when talking about societies and morality. In his mind, the stories we tell about atoms are different from the stories we tell about ethics; the first ones are descriptive, the latter ones become prescriptive. To use his kind of example, we can use the term “wrong” both when someone denies the existence of quarks and when someone kills an innocent person, but that word indicates different types of judgments that we need to keep distinct.

Simon DeDeo asked what sort of explanation we have for saying that, say, Western society has gotten “better” at ethical issues? (We all agreed that, more or less, it has.) We don’t seem to have anything like, say, the evolutionary explanation of what makes a bird “better” at flying. But Don Ross replied that we do have at least partial explanations, for instance drawing on the resources of game theory. In response to Ross, DeDeo pointed out that game theory can only give an account of morality within a consequentialist framework. Both Ross and (interestingly) Alex Rosenberg disagreed. Dennett helped clarify things here, making a distinction between what he called “second rate” (or naive) consequentialism, which is a bad idea easily criticized on philosophical grounds, and the broader concept that of course consequences matter to human ethical decision making. In general, I think that we are still doing fairly poorly in the area we would need in order to answer DeDeo’s question: a good theory of cultural evolution. But of course that doesn’t mean it cannot be done or will not be done at some point (as is well known, I’m skeptical of memetic-type theories in this respect).

In the second part of the morning session we moved to consider the concept of meaning, with Owen Flanagan giving the opening remarks. He pointed out that the historical realization that we are “just” animals caused problems within the context of the preceding cultural era during which human beings were thought of as special direct creations of gods. Owen brought us back 2,500 years ago, to Aristotle and the ancient Greeks’ concept of eudaimonia, the life that leads to human flourishing. Aristotle noted that people have different ideas of the good life, but also that there are some universals (or nearly so). One of these is that no normal person wishes to have a life without friends. Flanagan thinks — and I agree — that we can use the Aristotelian insight to build a discipline of “eudaimonics,” one that is both descriptive and normative. The good life is about the confluence of the true, the beautiful and the good (all lower case letters, of course).

An example I brought up of modern-day analysis of a concept that Aristotle would have been familiar with is the comparison between people’s day-to-day self-reported happiness vs their overall conception of meaning in their life when it comes to having children. Turns out that having children actually significantly decreases day-to-day happiness, but it also increases the long-term positive meaning that most people attribute to their lives.

Rebecca Goldstein argued that novelists have a unique perspective on the issue of meaning, because of the process involved in devising characters and their stories. She claimed that writing novels taught her that a major component of flourishing and meaning is the idea of an individual mattering to other people. (Again, Aristotle would not have been surprised.) Rebecca connected this to the question that she is often asked about how she can find meaning in life as an atheist. She had a hard time even understanding the question, until she realized that of course for theists meaning is defined universally by an external agency on the basis that we “matter” to the gods. So the atheist is still using the idea that mattering and meaning are connected, she just does away with the external agency.

Dennett suggested that we as atheists need to think of projects and organizations that help secular people feel that they matter in more productive ways than, say, joining a crusade to kill the infidels. Janna Levin brought up the example of a flourishing of science clubs in places like New York City, which provide a community for intellectual kin (and of course there are also a good number of philosophy meetups!). Still, I argued (and Carroll, Goldstein, Coyne, and Flanagan agreed) that attempts in that direction — like the various Societies for Ethical Culture — are largely a failure. Secularists, especially in Europe, find meaning and feel that they matter because they live in a society they feel comfortable in and are active members of. Just like the ancient Greeks’ concept of a polis that citizens could be proud of and contribute to. It’s the old Western idea of civic pride, if you will.

I need to note at this point, that — just as in the case of morality discussed above — the nihilists / reductionists in the group didn’t seem to have any problem meaningfully talking about meaning, so to speak, even though their philosophy would seem to preclude that sort of talk altogether... (The exception was Rosenberg, who stuck to his rather extreme nihilist guns.)

The afternoon session was devoted to free will, with Dennett giving the opening remarks. His first point was that there is a difference between the “manifest image” and the “scientific image” of things. For instance, there is a popular / intuitive conception of time (manifest image), and then there is the philosophical and/or scientific conception of time. But it is still the case that time exists. Why, then, asked Dennett, do so many neuroscientists flat out deny the existence of free will (“it’s an illusion”), rather than replacing the common image with a scientific one?

Free will, for Dennett, is as real as time or, say, colors, but it’s not what some people think it is. And indeed, some views of free will are downright incoherent. He suggested that nothing we have learned from neuroscience shows that we haven’t been wired (by evolution) for free will, which means that we also get to keep the concept of moral responsibility. That said, contra-causal free will would be a miracle, and we can’t help ourselves to miracles in a naturalistic framework.

Citing a Dilbert cartoon, Dennett said that the zeitgeist is such that people think that it follows from naturalism that we are “nothing but moist robots.” But this, for Dennett, is confusing the ideology of the manifest image with the manifest image itself. An analogy might help: one could say that if that is what you mean by color (i.e., what science means by that term), then color doesn’t exist. But we don’t say that, we re-conceptualize color instead. For instance: it makes perfect sense to distinguish between people who have the competence and will to sign a contract, and those who don’t. We have to draw these distinctions because of practical social and political reasons, which however does not imply that we are somehow cutting nature at its joints in a metaphysical sense. Moreover, Dennett pointed out that experiments show that if people are told that there is no free will they cheat more frequently, which means that the conceptualization of free will does have practical consequences. Which in turn puts some responsibility on the shoulders of neuroscientists and others who go around telling people that there is no free will.

Jerry Coyne gave the response to Dennett’s presentation, not buying into the practical dangers highlighted by the latter (Jerry seemed to think that these effects are only short-term; that may be, but I don’t think that undermines Dennett’s point). Coyne declared himself to be an incompatibilist (no surprise there), accusing compatibilists of conveniently redefining free will in order to keep people from behaving like beasts. However, Jerry himself admitted to having changed his definition of free will, and I think in an interesting direction. His old definition was the standard idea that if the tape of the history of the universe were to be played again you would somehow be able to make a different decision, which would violate physical determinism. Then he realized that quantum indeterminacy could, in principle, bring in indeterminism, and could even affect your conscious choices (through quantum effects percolating up to the macroscopic level). So he redefined free will as the idea that you are able to make decisions independently of your genes, your environments and their interactions. To which Dennett objected that that’s a pretty strange definition of free will, which no serious compatibilist philosopher would subscribe to.

Jerry then plunged into his standard worry, the same that motivates authors like Sam Harris: we don’t want to give ground to theologically-informed views of morality, and incompatibilism about free will (“we are the puppets of our genes and our environments”) is the best way to do it. Dennett was visibly shaking his head throughout (so was I, inwardly...).

In the midst of all of this, Jerry mentioned the (in)famous Libet experiments, even though they have been taken apart both philosophically and, more recently, scientifically. Which Dennett, Flanagan, and Goldstein immediately pointed out.

During the follow up discussion Weinberg declared his leaning toward Dennett’s position, despite his (Weinberg’s) acceptance of determinism. We weigh reasons and we arrive at conscious decisions, and we know this by introspection — although he pointed out that of course this doesn’t mean that all our own desires are transparent and introspectively available. Weinberg did indeed paint a picture very similar to Dennett’s: we may never arrive — given the same circumstances — at a different decision, but it is still our decision.

Rosenberg commented that we have evidence that we cannot trust our introspection when it comes to conscious decision making, again citing Libet. Both Dennett and Flanagan once more pointed out that those experiments were taken conceptually apart (by them) decades ago (and, I reminded the group, questioned on empirical grounds more recently). Dennett did agree that introspection is not completely reliable, but he remarked that that’s quite different from claiming that we cannot rely on it at all.

Owen Flanagan discussed experiments about conceptions of free will done on undergraduate students. The students were given a definition of free will and then asked questions about whether the person made the decision and was responsible for her actions. The majority of subjects turned out to be both determinists and compatibilists, which undermines the popular idea that the commonsense concept of free will is contra-causal.

I pointed out, particularly to Jerry and Alex Rosenberg, that incompatibilists seem to discard or bracket out the fact that the human brain evolved to be a decision making, reason-weighing organ. If that is true, then there is a causal story that involves the brain, and my decisions are mine in a very strong sense, despite being the result of my lifelong gene-environment interactions (and the way my conscious and unconscious brain components weigh them).

Sean Carroll also objected to Coyne, using an interesting analogy: if Jerry applied his argument toward incompatibilism to fundamental physics, he would have to conclude for an incompatibility between statistical mechanics and the second law of thermodynamics. But, Sean suggested, that would be a result of confusing language that is appropriate for one level of analysis with language that is appropriate for another level. (Though he didn’t say so, I would go even further, following up on the previous day’s discussion, and suggest that free will is an emergent property of the brain in a sense similar to that in which the second law is an emergent property of statistical mechanics — and on the latter even Steven Weinberg agreed!)

Terrence Deacon asked why we insist on using the term “free” will, and Jerry had previously invited people to drop the darn thing. I suggested, and Owen elaborated on it, that we should instead use the terms that cognitive scientists use, like volition or voluntary vs involuntary decision making. Those terms both capture the scientific meaning of what we are talking about and retain the everyday implication that our decisions are ours (and we are therefore responsible for them). Dropping “free” also avoids generating confusion with contra-causal mystical-theological mumbo jumbo.

Dennett, in response to a question by Coyne about the evolution of free will, pointed out two interesting things. First, if we take free will to be the ability of a complex brain to exercise conscious decision making, then it is a matter of degrees, and other species may have partial free will. Second, and relatedly, human beings themselves are not born with free will: we develop competence to make (morally relevant, among others) decisions during early life, in part as the result of education and upbringing.

Jerry at some point brought up the case of someone who commits a murder because a brain tumor interfered with his brain function. But I commented that it is strange to take those cases — where we agree that there is a malfunction of the brain — and make them into arguments to reject moral responsibility. Dennett agreed, talking about brains being “wired right” or “wired wrong,” which is a matter of degree, and which translates into degrees of moral responsibility (lowest for the guy affected by the tumor, highest for the person who kills for financial or other gain). Jerry, interestingly, brought up the case of a person who becomes a violent adult because of childhood traumas. But Dennett and I had a response that is in line with our conception of the brain as a decision making organ with higher or lower degrees of functionality: the childhood trauma imposes more constraints (reduces free will) on the brain’s development than a normal education, but fewer than a brain tumor. Consequently, the resulting adult bears an intermediate degree of moral responsibility for his actions.

The second session of the afternoon was on consciousness, introduced by David Poeppel. He claimed — as a cognitive scientist — that there are good empirical reasons to reject the conclusion that Libet’s experiments (again!) undermine the idea of conscious decision making. At the same time, he did point to research showing that quite a bit of decision making in our brain is in fact invisible or inaccessible to our consciousness.

Dennett brought up experiments on priming in psychology, where the subjects are told not to say whatever word they are going to be primed for. Turns out that if the priming is too fast for conscious attention to pick it up, the subjects will in fact say the word, contravening the directions of the experimenter. But if the time frame is sufficiently long for consciousness to come in, then people are capable of stopping themselves from saying the priming word. The conclusion is that this is good evidence that conscious decision making is indeed possible, and that we can study its dynamics (and limits).

Rosenberg warned that we have good evidence leading us to think that we cannot trust our conscious judgments about our motives and mental states. Indeed, as Dennett pointed out, of course there is self-deception, rationalization, ideology, and self-fooling. But it is also the case that it is only through conscious reasoning that we get to articulate and reflect on our thoughts. We need consciousness to pay attention to our reasons for doing things. Conscious reasons can be subjected to a sort of “quality control” that unconscious reasons are screened off from. For Dennett human beings are powerful thinking beings because they can submit their own thinking to analysis and quality control.

And of course Daniel Kahneman’s work on type I (fast, unconscious) vs type II (slow, conscious) thinking came up. Poeppel pointed out that sometimes type I thinking is not just faster, but better than type II. To which Dennett replied that if you are about to have brain surgery you might prefer the surgeon to make considered decisions based on his type II system rather than quick type I decisions. Of course, which system does a better job is probably situation dependent, and at any rate is an empirical question.

Carroll asked whether it is actually possible to distinguish conscious from unconscious thoughts, to which both Poeppel and Goldstein replied yes, and we are getting better at it. Indeed, this has important practical applications, as for instance anesthesiologists have to be able to tell whether there is conscious activity in a patient’s brain before an operation begins. However, the best evidence indicates that consciousness is a systemic (emergent?) property, since it disappears below a certain threshold of brain-wide activity.

Dennett brought up the example of the common experience of thinking that we understand something, until we say it out loud and realize we don’t. No mystery there: we are bringing in “more agents” (or, simply, more and more deliberate cognitive resources) into the task, so it isn’t surprising that we get a better outcome as a result.

Rosenberg asked if we were going to talk about the “mysterian” stuff about consciousness, things like qualia, aboutness, and what it is like to be a bat. I commented that the only sensible lesson I could take out of Nagel’s famous bat paper is not that first person experiences are scientifically inexplicable, but that the only way to have them is to actually have them. Dennett, however, remarked that he pointedly asked Nagel: if you had a twin brother who was a philosopher, would you be able to imagine what it is like to be your brother? To which Nagel, unbelievably I think, answered no. Of course we are perfectly capable of imagining what it is like to be another being who is biologically relevantly similar to ourselves.

Flanagan brought up Colin McGinn’s “mysterian” position about consciousness, pointing out that there is no equivalent in neuroscience or philosophy of mind of, say, Gödel’s incompleteness theorems or Heisenberg’s indeterminacy principle. Similarly, Owen was dismissive (rightly, I think) of David Chalmers’ dualism based on mere conceivability (of unconscious zombies who behave like conscious beings, in his case).

I asked, provocatively, if people around the table think that consciousness is an illusion. Jerry immediately answered yes, but the following discussion clarified things a bit. Turns out — and Dennett was of great help here — that when Jerry says that consciousness is an epiphenomenon of brain functioning he actually means something remarkably close to what I mean by consciousness being an emergent property of the brain. We settled on “phenomenon,” which is the result of evolution, and which has functions and effects. This, of course, as opposed to the sense of “epiphenomenon” in which something has no effect at all, and which in this context leads to an incoherent view of consciousness (but one that the “mysterians” really like).

At this point Rosenberg introduced yet another controversial topic: aboutness. How is it possible, from a naturalist’s perspective, to have “Darwinian systems” like our brains that are capable of semantic reference (i.e., meaning)? Terrence Deacon responded that the content of thought, its aboutness, is not a given brain state, but brain states are necessary to make semantic reference possible. Don Ross, in this context, invoked externalism: a brain state in itself doesn’t have a stable meaning or reference; that stable meaning is acquired only by taking into account the larger system that includes objects external to us. Dennett’s example was that externalism is obviously true for, say, money: bank accounts have numbers that represent money, but they are not money, and the system works only because the information internal to the bank’s computers refers to actual money present in the outside world.

Rosenberg seemed bothered by the use of intentional language in describing aboutness. But Dennett pointed out that intentionality is needed to explain a certain category of phenomena, just like — I suggested — teleological talk is necessary (or at the very least very convenient) to refer to adaptations and natural selection. And here I apparently hit the nail on the head: Rosenberg rejects the (naturalistic) concept of teleology, while Dennett and I accept it. That is why Rosenberg has a problem with intentional language and Dennett and I don’t.

And that, as it turns out, was a pretty good place to end the second day. Tomorrow: scientism and the relationship between science and philosophy.
