Wednesday, October 31, 2012

CFP: Philosophy of the Social Sciences (Venice, September 2013)

THE EUROPEAN NETWORK FOR THE PHILOSOPHY OF THE SOCIAL SCIENCES & THE PHILOSOPHY OF SOCIAL SCIENCE ROUNDTABLE

Call for Papers:

First joint European/American Conference University of Venice Ca' Foscari
3-4 September, 2013

The European Network for the Philosophy of the Social Sciences and the Philosophy of Social Science Roundtable invite contributions to their first joint conference. Contributions from all areas within the philosophy of the social sciences, from both philosophers and social scientists, are encouraged.

Keynote speakers:

  *   Cristina Bicchieri (University of Pennsylvania)
  *   Nancy Cartwright (University of Durham / University of California San Diego)

Submissions:

  *   An abstract of no more than 1000 words, suitably prepared for blind reviewing, should be submitted electronically through the EasyChair system at https://www.easychair.org/conferences/?conf=enpossrt2013. Only one abstract per person may be submitted.
  *   Deadline for submission: 27 January, 2013
  *   Date of notification of acceptance: 15 March, 2013

Local organizers:

  *   Eleonora Montuschi, Luigi Perissinotto (University of Venice Ca' Foscari, Dept. of Philosophy and Cultural Heritage, Philosophy Section).

Conference homepage:
For more information about the conference see http://www.enposs.eu

Publication:

  *   Selected papers from the Conference will be published in an annual special issue of the journal Philosophy of the Social Sciences.

ENPOSS:
The purpose of the European Network for the Philosophy of the Social Sciences is to promote, encourage, and facilitate academic discussion and research in the philosophy of the social sciences, broadly conceived.
Steering Committee: Alban Bouvier (Paris), Byron Kaldis (Athens), Thomas Uebel (Manchester), Julie Zahle (Copenhagen), and Jesús Zamora-Bonilla (Madrid).

PSSRT:

The Philosophy of Social Science Roundtable serves as a forum for communication among philosophers and social scientists who share an interest in discussion of epistemology, explanatory paradigms, and methodologies of the social sciences.

Programme Committee: James Bohman (St. Louis), Mark Risjord (Atlanta), Paul Roth (Santa Cruz), Stephen Turner (Tampa), Alison Wylie (Seattle)

Tuesday, October 30, 2012

CFP: Models and Decisions (Munich, April 2013)

***************************************
6th Munich-Sydney-Tilburg conference on

MODELS AND DECISIONS

Munich Center for Mathematical Philosophy

10-12 April 2013

http://www.lmu.de/ModelsAndDecisions2013

****************************************
Mathematical and computational models are central to decision-making in a wide variety of contexts in science and policy: they are used to assess the risk of large investments, to evaluate the merits of alternative medical therapies, and are often key in decisions on international policies – climate policy being one of the most prominent examples. In many of these cases, they assist in drawing conclusions from complex assumptions. While the value of these models is undisputed, their increasingly widespread use raises several philosophical questions: What makes scientific models so important? In which way do they describe, or even explain, their target systems? What makes models so reliable? And: What is the import, and what are the limits, of using models in policy making? This conference will bring together philosophers of science, economists, statisticians and policy makers to discuss these and related questions. Experts from a variety of fields will exchange first-hand experience and insights in order to identify the assets and the pitfalls of model-based decision-making. The conference will also address and evaluate the increasing role of model-based research in scientific practice, both from a practical and from a philosophical point of view.

We invite submissions of extended abstracts of 1000 words by 15 December 2012. Decisions will be made by 15 January 2013.

KEYNOTE SPEAKERS: Luc Bovens (LSE), Itzhak Gilboa (Paris and Tel Aviv),
Ulrike Hahn (Birkbeck), Michael Strevens (NYU), and Claudia Tebaldi (UBC)

ORGANIZERS: Mark Colyvan, Paul Griffiths, Stephan Hartmann, Kaerin
Nickelsen, Roland Poellinger, Olivier Roy, and Jan Sprenger

PUBLICATION: We plan to publish selected papers presented at the
conference in a special issue of a journal or with a major a book
publisher (subject to the usual refereeing process). The submission
deadline is 1 July 2013. The maximal paper length is 7000 words.

GRADUATE FELLOWSHIPS: A few travel bursaries for graduate students are
available (up to 500 Euro). See the website for details.

Special Issue: Kuhnian Perspectives on the Life and Human Sciences

To mark the 50th anniversary of the publication of Thomas Kuhn's The Structure of Scientific Revolutions, a Kuhn-and-revolutions-themed special issue of articles from Studies in History and Philosophy of Biological and Biomedical Sciences is now available for free downloading at the journal's website:

http://www.journals.elsevier.com/studies-in-history-and-philosophy-of-science-part-c-studies-in-history-and-philosophy-of-biological-and-biomedical-sciences/

In the main journal, articles from the current (September 2012) issue include:

* Anna Maerker on Florentine anatomical wax models in eighteenth-century Vienna
* Roberta Millstein on Darwin, race and sexual selection
* Leon Rocha on Needham, Daoism and Science and Civilization in China

Monday, October 29, 2012

From the naturalism workshop, part III

And we have now arrived at the commentary on the final day of the workshop on “Moving Naturalism forward,” organized by cosmologist Sean Carroll. It was my turn to do an introductory presentation on the relationship between science and philosophy, and on the idea of scientism. (Part I of this commentary is here, part II here.)

I began by pointing out that it doesn’t help anyone if we play semantic games with terms like “science” and “philosophy.” In particular, “science” cannot be taken to be simply whatever deals with facts, just as “philosophy” isn’t whatever deals with thinking. So for instance, facts about the planets in the solar system are scientific facts, but the observation that I live in Manhattan near the Queensborough Bridge is just a fact; science has nothing to do with it. Similarly, John Rawls’ book A Theory of Justice, to pick an arbitrary example, is real philosophy, while L. Ron Hubbard’s nonsense about Dianetics isn’t, even though he thought of it as such.

So science becomes a particular type of structured social activity, characterized by empirically driven hypothesis testing about the way the world works, peer review, technical journals, and so on. And philosophy is about deploying logic and general tools of reasoning and argument to reflect on a broad range of subject matters (epistemology, ethics, aesthetics, etc.) and to reflect on other disciplines (“philosophies of”).

Another important thing to get straight: philosophy is not in the business of advancing science. We’ve got science for that, and it works very well. Some philosophy is “continuous” with science, but most is not. Also, philosophy makes progress by exploring logical space, not by making empirical discoveries.

I then brought up the Bad Boy of physics, Richard Feynman, who famously said: “Philosophy of science is about as useful to scientists as ornithology is to birds.” True enough (except when it comes to ornithologists helping to avoid the extinction of some bird species), but surely that does not imply that ornithology is thereby useless.

Next, I moved to a discussion of scientism. I suggested that in the strong sense this is the view that only scientific claims, or only questions that can be addressed by science, are meaningful. In a weaker sense, it is the view that the methods of the natural sciences can and should be applied to any subject matter. I think the first one is indefensible, and that the second one needs to be qualified and circumscribed. For instance, there are plenty of areas where science has little or nothing interesting to say: mathematics, logic, aesthetics, ethics, literature, just to name a few.

It is, of course, true that a number of philosophers have said, and continue to say, bizarro things about science, or even about philosophy itself (Thomas Nagel and Jerry Fodor come to mind as recent examples). But a pretty good number of scientists are on record as having said bizarro things about philosophy, or even about science itself (Lawrence Krauss, and more recently Freeman Dyson).

What I suggested as a way forward is that we should work toward re-establishing the classical notion of scientia, which means knowledge in the broader sense, including contributions from science, philosophy, math, and logic. There is also an even broader concept of understanding, which is relevant to human affairs. And I think that understanding requires not only scientia, but also other human activities such as art, music, literature, and the broader humanities. As you can see, I was trying to be very ecumenical...

In the end, I submitted that skirmishes between scientists and philosophers are not just badly informed and somewhat silly, they are anti-intellectual, and do not help the common cause of moving society toward a more rational and compassionate state than it finds itself in now.

The discussion that followed was very interesting. Alex Rosenberg did stress that philosophers interested in science need to pay close attention to what goes on in the lab, to which both Sean Carroll and Janna Levin responded that there are very good examples of important conceptual contributions made by philosophers to physics, particularly in the area of interpretations of quantum mechanics. Rosenberg also pointed out that some philosophers — for instance Samir Okasha — have contributed to biology, notably in the debates about levels of selection.

We then talked about the issue of division of intellectual labor, with Dennett stressing the ability (and dangers!) of philosophers to take a bird’s eye view of things that is often unavailable to scientists. This, I commented, is because scientists are justifiably busy with writing grant proposals, doing lab work, and interacting tightly with graduate students. That was my own experience as a practicing evolutionary biologist. As a philosopher, I rarely write grant proposals, I don’t have to run a lab or do field work, and my interactions with graduate students are often in the form of visits to coffee houses and wine bars. All of which affords me the “luxury” (really, it’s my job) to read, think and write more broadly now than I could when I was a practicing scientist.

Along similar lines, Sean Carroll remarked — again going back to actual examples from physics — that scientists concern themselves primarily with how to figure things out, postponing the broader question of what those things mean. That’s another area where good philosophy can be helpful. Rebecca Goldstein added that philosophy is hard to do well, and that scientists should be more respectful and less dismissive of what philosophers do. Janna Levin observed that much of the fracas in this area is caused by a few prominent, senior (quasi-senile?) scientists and philosophers, but that in reality most scientists have a healthy degree of respect for philosophy.

At this point Coyne asked a reasonable question: we have talked about contributions that philosophers have made to science, what about the other way around? Several people offered the examples of Einstein, Bell and Feynman (ironically, the same guy responsible for the philosophy-as-ornithology comment mentioned above), the latter for instance on the concept of natural law.

That was it, folks. What did I take from the experience? At the least the following points:

* On naturalism in general: we agreed that there are different shades of philosophical naturalism, and that reasonable people may disagree about the degree of, say, reductionism or determinism that the view entails.

* On determinism: given that even the physicists aren’t sure, yet, whether quantum mechanics is best interpreted deterministically or not (not to mention the interpretation of any more fundamental theory), the question is open.

* On reductionism: Rosenberg’s extreme reductionism-nihilism was clearly, well, extreme within this group. Most participants agreed that one can, indeed should, still talk about morality and responsibility in meaningful terms.

* On emergence: there was, predictably, disagreement here, even among the physicists. Carroll seemed the most sympathetic to the concept, repeatedly talking, for instance, about the emergence of the Second Law of thermodynamics from statistical mechanics. Even Weinberg agreed that there are emergent phenomena in a robust sense of the term, but of course he preferred a “weak” concept of emergence, according to which the reductionist can write a promissory note that “in principle” things could be explained by a fundamental law. It was unclear what such a principle might be, or even why that fundamental law couldn’t itself be considered emergent from something else (the “it’s turtles all the way down” problem).

* On meaning: following Goldstein, most of us agreed that there is meaning in human life, which comes out of the sense that we matter in society and to our fellow human beings. Flanagan’s concept of “eudaimonics” was, I think, most helpful here.

* On free will and moral responsibility: the debate between incompatibilists (Coyne, Rosenberg) and compatibilists (most of the rest, led of course by Dennett) continued. But we agreed that “free will” is far too loaded a concept, with Flanagan’s suggestion that we go back to the ancient Greeks’ categories of voluntary and involuntary action being particularly useful, I think. Even Coyne agreed that there is a Dennett-like sense in which we can think of morally competent vs morally incompetent agents (say, a normal person and one with a brain tumor affecting his behavior), thereby rescuing a societally and legally relevant concept of morality and responsibility.

* Relationship between science and philosophy: people seemed in broad agreement with my presentation (again, including Jerry), from which it follows that science and philosophy are partially continuous and partially independent disciplines, the first one focused on the systematic study of empirical data about the world, the second more concerned with conceptual clarification and meta-analysis (“philosophy of”). We also agreed that there are indeed good examples of philosophers of science playing a constructive role in science, and vice versa of scientists who have contributed to philosophy of science (take that, Krauss and Hawking!).

This, added to the positive effect of meeting one’s intellectual adversaries in person, sharing meals and talking over a beer or a glass of wine, has definitely made the workshop as a whole a stupendous success. Stay tuned for the full video version on YouTube...
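As an aside, Carroll’s favorite example of emergence — the Second Law of thermodynamics arising from statistical mechanics — can be made vivid with a toy simulation. The sketch below is a minimal Ehrenfest urn model in Python (the particle count, step count, and entropy proxy are my own illustrative choices, not anything presented at the workshop): the microscopic dynamics are perfectly reversible, yet a macroscopic entropy-like quantity reliably drifts up from an ordered starting state and stays near its maximum.

```python
import random
from math import lgamma

def entropy_proxy(n_left, n_total):
    # Log of the number of microstates with n_left particles in the left
    # box, i.e. log of the binomial coefficient C(n_total, n_left) —
    # a Boltzmann-style entropy up to the constant k_B.
    return lgamma(n_total + 1) - lgamma(n_left + 1) - lgamma(n_total - n_left + 1)

def simulate(n_particles=100, steps=2000, seed=0):
    # Ehrenfest urn model: start with every particle in the left box;
    # at each step a randomly chosen particle hops to the other box.
    # Each individual hop is reversible, but the macrostate relaxes
    # toward the 50/50 split, where the entropy proxy is maximal.
    rng = random.Random(seed)
    left = set(range(n_particles))
    entropies = []
    for _ in range(steps):
        p = rng.randrange(n_particles)
        if p in left:
            left.remove(p)
        else:
            left.add(p)
        entropies.append(entropy_proxy(len(left), n_particles))
    return entropies

ent = simulate()
# The first entries are small (ordered start); the late-time average
# hovers near the maximum possible entropy.
print(ent[0], sum(ent[-200:]) / 200)
```

The irreversibility here is statistical, not built into the rules: reversing any single hop is just as likely as making it, which is exactly the sense in which the Second Law is an emergent, macro-level regularity.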

Saturday, October 27, 2012

From the naturalism workshop, part II


by Massimo Pigliucci

Second day of the workshop on “Moving Naturalism forward,” organized by cosmologist Sean Carroll. Today we started with Steven Weinberg (Nobel in physics) introducing his thoughts about morality. Why is a physicist talking about morality, you may ask? Good question, I reply, but let’s see...

The chair of the session was Rebecca Goldstein, who mentioned that she doesn’t find the morality question baffling at all. For her, moral reasoning is something that we have been doing for a long time, and moreover where philosophy has clearly made positive and incremental contributions throughout human history. She of course accepts the idea of a naturalistic origin for morality, but immediately added that evolutionary psychological accounts are simply not enough. In the process, she managed to both appreciate and criticize the work of Jonathan Haidt on the different dimensions of liberal vs conservative moral reasoning.

Weinberg agreed with Goldstein’s broad claim that we can reason about morality, but was concerned with the question of whether we can ground morality using science, and particularly the theory of evolution. He declared that he has been “thoroughly annoyed” by Sam Harris’ book on scientific answers to moral questions. He went on to observe that most people don’t actually have a coherent set of moral principles, nor do they need it. Weinberg said that early on in his life he was essentially a utilitarian, thinking that maximization of happiness was the logical moral criterion. Then he read Huxley’s Brave New World, and he was disabused of such a simplistic notion. Which is yet another reason he didn’t find Harris compelling, considering that the latter is a self-described utilitarian.

Weinberg also criticized utilitarianism by rejecting Peter Singer-style arguments to the effect that more good would be done in the world by living on bare minimum necessities and giving away much of your income to others. Weinberg argued instead that we owe loyalty to our family and friends, and that there is nothing immoral about preferring their welfare to the welfare of strangers. Indeed, although I don’t think he realized it, he was essentially espousing a virtue ethics / communitarian type of ethics. Weinberg concluded from his analysis that we “ought to live the unexamined life” instead, because that’s what the human condition leads us to.

Goldstein’s response was that we don’t need grounding postulates to engage in fruitful moral reasoning, and I of course agree. I pointed out that ethics is about developing reasonable ways to think about moral issues, starting with (and negotiating) certain assumptions about human life. In my book, for instance, Michael Sandel’s writings are excellent examples of how to engage in fruitful moral reasoning without having to settle the sort of metaethical issues that worry Weinberg (interestingly, and gratifyingly, I saw Jerry Coyne nodding somewhat vigorously while I was making my points). Dennett added that there are ways of thinking through issues that do not involve fact finding, but rather explore the logical consequences of certain possible courses of action — which is why moral philosophy is informed by facts (even scientific facts), but not determined by them. And for Dennett, of course, we — meaning humanity at large — are the ultimate arbiters of what works and doesn’t work in the ethical realm.

Dawkins agreed with Goldstein that there has been moral progress, and that we live in a significantly improved society in the 21st century compared to even recent times, let alone of course the Middle Ages. Dawkins also mentioned Steven Pinker’s work demonstrating a steady decrease in violence throughout human history (Goldstein humorously pointed out that Pinker got the idea from her). Dawkins also made the good point that we talk about morality as if it were only a human problem because all other species of Homo went extinct. Had that not been the case, we might be having a somewhat different conversation.

Both Weinberg and Goldstein agreed that a significant amount of moral progress comes from literature, and more recently movies. Things like Uncle Tom’s Cabin, or Sidney Poitier’s role in Guess Who’s Coming to Dinner, have the power to help change people’s attitudes about what is right and what is wrong.

Which led to my comment about Hume and Aristotle. I think — with these philosophers — that moral reasoning is grounded in a broadly construed conception of human nature. Aristotle emphasized the importance of community environment, and particularly of one’s family and early education environment; but also of reflection and conscious attempts at improving. Hume agreed that basic human instincts are a mix of selfish and cooperative ones, but also argued that human nature itself can change over time, as a result of personal reflection and community wide conversations.

Carroll noted a surprising amount of agreement in the group about the fact that morality arose naturally because we are large brained social animals with certain needs, emotions and desires; but also about the fact that factual information and deliberate reflection can both improve our lot and the way we engage in moral reasoning. Owen Flanagan, however, pointed out that most people outside of this group do think of morality in a foundational sense, which is untenable from a naturalistic perspective. Owen went on to remind people that David Hume — after the famous passage warning about the logical impossibility of deriving oughts from is — went on to engage in quite a bit of moral reasoning nonetheless, simply doing so without pretending that he was demonstrating things.

Weinberg claimed that he cannot think of a way to change other people’s minds about moral priorities when there is significant disagreement. But Dennett pointed out that we do this all the time: we engage in societal conversations with the aim of persuading others, and in so doing we are changing their nature. That is, for instance, how we made progress on issues such as women’s rights, gay rights, or animal welfare (as Goldstein had already pointed out).

Terrence Deacon remarked that there was an elephant in the room: how is it that this group agrees so broadly about morality, if a good number of them are also fundamental reductionists? Isn’t moral reasoning an emergent property of human societies? That is indeed a good question, and I always wonder how people like Coyne or Rosenberg (or Harris, who was invited but couldn’t make it to the workshop) can at the same time hold essentially nihilistic views about existence and yet talk about good and bad things and what we should (ought?) do about them. Carroll agreed that we should be using the emergence vocabulary when talking about societies and morality. In his mind, the stories we tell about atoms are different from the stories we tell about ethics; the first ones are descriptive, the latter ones become prescriptive. To use his kind of example, we can use the term “wrong” both when someone denies the existence of quarks and when someone kills an innocent person, but that word indicates different types of judgments that we need to keep distinct.

Simon DeDeo asked what sort of explanation we have for saying that, say, Western society has gotten “better” at ethical issues? (We all agreed that, more or less, it has.) We don’t seem to have anything like, say, the evolutionary explanation of what makes a bird “better” at flying. But Don Ross replied that we do have at least partial explanations, for instance drawing on the resources of game theory. In response to Ross, DeDeo pointed out that game theory can only give an account of morality within a consequentialist framework. Both Ross and (interestingly) Alex Rosenberg disagreed. Dennett helped clarify things here, making a distinction between what he called “second rate” (or naive) consequentialism, which is a bad idea easily criticized on philosophical grounds, and the broader point that of course consequences matter to human ethical decision making. In general, I think that we are still doing fairly poorly in the very area we need in order to answer DeDeo’s question: a good theory of cultural evolution. But of course that doesn’t mean it cannot be done or will not be done at some point (as is well known, I’m skeptical of memetic-type theories in this respect).
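Ross’s point about game theory providing at least partial explanations can be illustrated with the textbook case: in the iterated prisoner’s dilemma, a reciprocal strategy like tit-for-tat sustains mutual cooperation, while a pure defector gains once against a reciprocator and then stalls at the low mutual-defection payoff. A minimal sketch in Python (the payoff matrix and round count are standard textbook choices, assumed here purely for illustration — this is not a model anyone presented at the workshop):

```python
# One-shot prisoner's dilemma payoffs, satisfying T > R > P > S
# (temptation 5 > reward 3 > punishment 1 > sucker 0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=50):
    # Each strategy sees only the opponent's past moves.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Mutual reciprocity sustains cooperation over 50 rounds...
print(play(tit_for_tat, tit_for_tat))    # → (150, 150)
# ...while defecting against a reciprocator wins the first round
# and then gets locked into mutual defection.
print(play(tit_for_tat, always_defect))  # → (49, 54)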

In the second part of the morning session we moved to consider the concept of meaning, with Owen Flanagan giving the opening remarks. He pointed out that the historical realization that we are “just” animals caused problems within the context of the preceding cultural era during which human beings were thought of as special direct creations of gods. Owen brought us back 2,500 years ago, to Aristotle and the ancient Greeks’ concept of eudaimonia, the life that leads to human flourishing. Aristotle noted that people have different ideas of the good life, but also that there are some universals (or nearly so). One of these is that no normal person wishes to have a life without friends. Flanagan thinks — and I agree — that we can use the Aristotelian insight to build a discipline of “eudaimonics,” one that is both descriptive and normative. The good life is about the confluence of the true, the beautiful and the good (all lower case letters, of course).

An example I brought up of modern-day analysis of a concept that Aristotle would have been familiar with is the comparison between people’s day-to-day self-reported happiness vs their overall conception of meaning in their life when it comes to having children. Turns out that having children actually significantly decreases day-to-day happiness, but it also increases the long-term positive meaning that most people attribute to their lives.

Rebecca Goldstein argued that novelists have a unique perspective on the issue of meaning, because of the process involved in devising characters and their stories. She claims that writing novels taught her that a major component of flourishing and meaning is the idea of an individual mattering to other people. (Again, Aristotle would not have been surprised.) Rebecca connected this to the question that she is often asked about how she can find meaning in life as an atheist. She had a hard time even understanding the question, until she realized that of course for theists meaning is defined universally by an external agency on the basis that we “matter” to the gods. So the atheist is still using the idea that mattering and meaning are connected, she just does away with the external agency.

Dennett suggested that we as atheists need to think of projects and organizations that help secular people feel that they matter in more productive ways than, say, joining a crusade to kill the infidels. Janna Levin brought up the example of a flourishing of science clubs in places like New York City, which provide a community for intellectual kin (and of course there are also a good number of philosophy meetups!). Still, I argued (and Carroll, Goldstein, Coyne, and Flanagan agreed) that attempts in that direction — like the various Societies for Ethical Culture — are largely a failure. Secularists, especially in Europe, find meaning and feel that they matter because they live in a society they feel comfortable in and are active members of. Just like the ancient Greeks’ concept of a polis that citizens could be proud of and contribute to. It’s the old Western idea of civic pride, if you will.

I need to note at this point that — just as in the case of morality discussed above — the nihilists / reductionists in the group didn’t seem to have any problem meaningfully talking about meaning, so to speak, even though their philosophy would seem to preclude that sort of talk altogether... (The exception was Rosenberg, who stuck to his rather extreme nihilist guns.)

The afternoon session was devoted to free will, with Dennett giving the opening remarks. His first point was that there is a difference between the “manifest image” and the “scientific image” of things. For instance, there is a popular / intuitive conception of time (manifest image), and then there is the philosophical and/or scientific conception of time. But it is still the case that time exists. Why, then, asked Dennett, do so many neuroscientists flat out deny the existence of free will (“it’s an illusion”), rather than replacing the common image with a scientific one?

Free will, for Dennett, is as real as time or, say, colors, but it’s not what some people think it is. And indeed, some views of free will are downright incoherent. He suggested that nothing we have learned from neuroscience shows that we haven’t been wired (by evolution) for free will, which means that we also get to keep the concept of moral responsibility. That said, contra-causal free will would be a miracle, and we can’t help ourselves to miracles in a naturalistic framework.

Citing a Dilbert cartoon, Dennett said that the zeitgeist is such that people think that it follows from naturalism that we are “nothing but moist robots.” But this, for Dennett, is confusing the ideology of the manifest image with the manifest image itself. An analogy might help: one could say that if that is what you mean by color (i.e., what science means by that term), then color doesn’t exist. But we don’t say that, we re-conceptualize color instead. For instance: it makes perfect sense to distinguish between people who have the competence and will to sign a contract, and those who don’t. We have to draw these distinctions because of practical social and political reasons, which however does not imply that we are somehow cutting nature at its joints in a metaphysical sense. Moreover, Dennett pointed out that experiments show that if people are told that there is no free will they cheat more frequently, which means that the conceptualization of free will does have practical consequences. Which in turn puts some responsibility on the shoulders of neuroscientists and others who go around telling people that there is no free will.

Jerry Coyne gave the response to Dennett’s presentation, not buying into the practical dangers highlighted by the latter (Jerry seemed to think that these effects are only short-term; that may be, but I don’t think that undermines Dennett’s point). Coyne declared himself to be an incompatibilist (no surprise there), accusing compatibilists of conveniently redefining free will in order to keep people from behaving like beasts. However, Jerry himself admitted to having changed his definition of free will, and I think in an interesting direction. His old definition was the standard idea that if the tape of the history of the universe were to be played again you would somehow be able to make a different decision, which would violate physical determinism. Then he realized that quantum indeterminacy could, in principle, bring in indeterminism, and could even affect your conscious choices (through quantum effects percolating up to the macroscopic level). So he redefined free will as the idea that you are able to make decisions independently of your genes, your environments and their interactions. To which Dennett objected that that’s a pretty strange definition of free will, which no serious compatibilist philosopher would subscribe to.

Jerry then plunged into his standard worry, the same that motivates authors like Sam Harris: we don’t want to give ground to theologically-informed views of morality, and incompatibilism about free will (“we are the puppets of our genes and our environments”) is the best way to do it. Dennett was visibly shaking his head throughout (so was I, inwardly...).

In the midst of all of this, Jerry mentioned the (in)famous Libet experiments, even though they have been taken apart both philosophically and, more recently, scientifically. Which Dennett, Flanagan, and Goldstein immediately pointed out.

During the follow-up discussion Weinberg declared his leaning toward Dennett’s position, despite his (Weinberg’s) acceptance of determinism. We weigh reasons and we arrive at conscious decisions, and we know this by introspection — although he pointed out that of course this doesn’t mean that all our own desires are transparent and introspectively available. Weinberg did indeed paint a picture very similar to Dennett’s: we may never arrive — given the same circumstances — at a different decision, but it is still our decision.

Rosenberg commented that we have evidence that we cannot trust our introspection when it comes to conscious decision making, again citing Libet. Both Dennett and Flanagan once more pointed out that those experiments have been taken conceptually apart (by them) decades ago (and, I reminded the group, questioned on empirical grounds more recently). Dennett did agree that introspection is not completely reliable, but he remarked that that’s quite different from claiming that we cannot rely on it at all.

Owen Flanagan discussed experiments about conceptions of free will done on undergraduate students. The students were given a definition of free will and then asked questions about whether the person made the decision and was responsible for her actions. The majority of subjects turned out to be both determinists and compatibilists, which undermines the popular idea that the commonsense concept of free will is contra-causal.

I pointed out, particularly to Jerry and Alex Rosenberg, that incompatibilists seem to discard or bracket out the fact that the human brain evolved to be a decision making, reason-weighing organ. If that is true, then there is a causal story that involves the brain, and my decisions are mine in a very strong sense, despite being the result of my lifelong gene-environment interactions (and the way my conscious and unconscious brain components weigh them).

Sean Carroll also objected to Coyne, using an interesting analogy: if Jerry applied his argument for incompatibilism to fundamental physics, he would have to conclude that statistical mechanics and the second law of thermodynamics are incompatible. But, Sean suggested, that would be a result of confusing language that is appropriate for one level of analysis with language that is appropriate for another level. (Though he didn’t say that, I would go even further, following up on the previous day’s discussion, and suggest that free will is an emergent property of the brain in a similar sense to which the second law is an emergent property of statistical mechanics — and on the latter even Steven Weinberg agreed!)

Terrence Deacon asked why we insist on using the term “free” will, and Jerry had previously invited people to drop the darn thing. I suggested, and Owen elaborated on it, that we should instead use the terms that cognitive scientists use, like volition or voluntary vs involuntary decision making. Those terms both capture the scientific meaning of what we are talking about and retain the everyday implication that our decisions are ours (and we are therefore responsible for them). Dropping “free” also avoids generating confusion about contra-causal mystical-theological mumbo jumbo.

Dennett, in response to a question by Coyne about the evolution of free will, pointed out two interesting things. First, if we take free will to be the ability of a complex brain to exercise conscious decision making, then it is a matter of degrees, and other species may have partial free will. Second, and relatedly, human beings themselves are not born with free will: we develop competence to make (morally relevant, among others) decisions during early life, in part as the result of education and upbringing.

Jerry at some point brought up the case of someone who commits a murder because a brain tumor interfered with his brain function. But I commented that it is strange to take those cases — where we agree that there is a malfunction of the brain — and make them into arguments to reject moral responsibility. Dennett agreed, talking about brains being “wired right” or “wired wrong,” which is a matter of degree, and which translates into degrees of moral responsibility (lowest for the guy affected by the tumor, highest for the person who kills for financial or other gain). Jerry, interestingly, brought up the case of a person who becomes a violent adult because of childhood traumas. But Dennett and I had a response that is in line with our conception of the brain as a decision making organ with higher or lower degrees of functionality: the childhood trauma imposes more constraints (reduces free will) on the brain’s development than a normal education, but fewer than a brain tumor. Consequently, the resulting adult bears an intermediate degree of moral responsibility for his actions.

The second session of the afternoon was on consciousness, introduced by David Poeppel. He claimed — as a cognitive scientist — that there are good empirical reasons to reject the conclusion that Libet’s experiments (again!) undermine the idea of conscious decision making. At the same time, he did point to research showing that quite a bit of decision making in our brain is in fact invisible or inaccessible to our consciousness.

Dennett brought up experiments on priming in psychology, where the subjects are told not to say whatever word they are going to be primed for. Turns out that if the priming is too fast for conscious attention to pick it up, the subjects will in fact say the word, contravening the directions of the experimenter. But if the time frame is sufficiently long for consciousness to come in, then people are capable of stopping themselves from saying the priming word. The conclusion is that this is good evidence that conscious decision making is indeed possible, and that we can study its dynamics (and limits).

Rosenberg warned that we have good evidence leading us to think that we cannot trust our conscious judgments about our motives and mental states. Indeed, as Dennett pointed out, of course there is self-deception, rationalization, ideology, and self-fooling. But it is also the case that it is only through conscious reasoning that we get to articulate and reflect on our thoughts. We need consciousness to pay attention to our reasons for doing things. Conscious reasons can be subjected to a sort of “quality control” that unconscious reasons are screened off from. For Dennett human beings are powerful thinking beings because they can submit their own thinking to analysis and quality control.

And of course Daniel Kahneman’s work on type I (fast, unconscious) vs type II (slow, conscious) thinking came up. Poeppel pointed out that sometimes type I thinking is not just faster, but better than type II. To which Dennett replied that if you are about to have brain surgery you might prefer the surgeon to make considered decisions based on his type II system rather than quick type I decisions. Of course, which system does a better job is probably situation dependent, and at any rate is an empirical question.

Carroll asked whether it is actually possible to distinguish conscious from unconscious thoughts, to which both Poeppel and Goldstein replied yes, and we are getting better at it. Indeed, this has important practical applications, as for instance anesthesiologists have to be able to tell whether there is conscious activity in a patient’s brain before an operation begins. However, the best evidence indicates that consciousness is a systemic (emergent?) property, since it disappears below a certain threshold of brain-wide activity.

Dennett brought up the example of the common experience of thinking that we understand something, until we say it out loud and realize we don’t. No mystery there: we are bringing in “more agents” (or, simply, more and more deliberate cognitive resources) into the task, so it isn’t surprising that we get a better outcome as a result.

Rosenberg asked if we were going to talk about the “mysterian” stuff about consciousness, things like qualia, aboutness, and what it is like to be a bat. I commented that the only sensible lesson I could take out of Nagel’s famous bat paper is not that first person experiences are scientifically inexplicable, but that the only way to have them is to actually have them. Dennett, however, remarked that he pointedly asked Nagel: if you had a twin brother who was a philosopher, would you be able to imagine what it is like to be your brother? To which Nagel, unbelievably I think, answered no. Of course we are perfectly capable of imagining what it is like to be another being that is biologically relevantly similar to ourselves.

Flanagan brought up Colin McGinn’s “mysterian” position about consciousness, pointing out that there is no equivalent in neuroscience or philosophy of mind of, say, Gödel’s incompleteness theorems or Heisenberg’s indeterminacy principle. Similarly, Owen was dismissive (rightly, I think) of David Chalmers’ dualism based on mere conceivability (of unconscious zombies who behave like conscious beings, in his case).

I asked, provocatively, if people around the table think that consciousness is an illusion. Jerry immediately answered yes, but the following discussion clarified things a bit. Turns out — and Dennett was of great help here — that when Jerry says that consciousness is an epiphenomenon of brain functioning he actually means something remarkably close to what I mean by consciousness being an emergent property of the brain. We settled on “phenomenon,” which is the result of evolution, and which has functions and effects. This, of course, as opposed to the sense of “epiphenomenon” in which something has no effect at all, and which in this context leads to an incoherent view of consciousness (but one that the “mysterians” really like).

At this point Rosenberg introduced yet another controversial topic: aboutness. How is it possible, from a naturalist’s perspective, to have “Darwinian systems” like our brains that are capable of semantic reference (i.e., meaning)? Terrence Deacon responded that the content of thought, its aboutness, is not a given brain state, but brain states are necessary to make semantic reference possible. Don Ross, in this context, invoked externalism: a brain state in itself doesn’t have a stable meaning or reference; that stable meaning is acquired only by taking into account the larger system that includes objects external to us. Dennett’s example was that externalism is obviously true for, say, money: bank accounts have numbers that represent money, but they are not money, and the system works only because the information internal to the bank’s computers refers to actual money present in the outside world.

Rosenberg seemed bothered by the use of intentional language in describing aboutness. But Dennett pointed out that intentionality is needed to explain a certain category of phenomena, just like — I suggested — teleological talk is necessary (or at the very least very convenient) to refer to adaptations and natural selection. And here I apparently hit the nail on the head: Rosenberg rejects the (naturalistic) concept of teleology, while Dennett and I accept it. That is why Rosenberg has a problem with intentional language and Dennett and I don’t.

And that, as it turns out, was a pretty good place to end the second day. Tomorrow: scientism and the relationship between science and philosophy.

From the naturalism workshop, part I

by Massimo Pigliucci

Well, here we are, in Stockbridge, MA, in the middle of the Berkshires, sitting at a table that features a good number of very sharp minds, and yours truly. This gathering is the brainchild of cosmologist Sean Carroll, entitled “Moving Naturalism Forward,” its point being to see what a bunch of biologists, physicists, philosophers and assorted others think about life, the universe and everything. And we have three days to do it. Participants included: Sean Carroll, Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Dan Dennett, Owen Flanagan, Rebecca Goldstein, Janna Levin, David Poeppel, Alex Rosenberg, Don Ross, Steven Weinberg, and myself.

Note to the gentle reader: although Sean has put together an agenda of broad topics to be discussed, this post and the ones following it will inevitably have the feel of a stream of consciousness. But one that will be interesting nonetheless, I hope!

During the roundtable introductions, Dawkins (as well as the rest of us) was asked what he would be willing to change his mind about; he said he couldn’t conceive of a sensible alternative to naturalism. Rosenberg, interestingly, brought up the (hypothetical) example of finding God’s signature in a DNA molecule (just like Craig Venter has actually done). Dawkins admitted that that would do it, though he immediately raised the more likely possibility that it would be a practical joke played by a superhuman — but not supernatural — intelligence. Coyne then commented that there is no sensible distinction between superhuman and supernatural, in a nod to Clarke’s third law.

There appeared to be some interesting differences within the group. For instance, Rosenberg clearly has no problem with a straightforward functionalist computational theory of the mind; DeDeo accepts it, but feels uncomfortable about it; and Deacon outright rejects it, without thereby embracing any kind of mystical woo. Steven Weinberg asked the question of whether — if a strong version of artificial intelligence is possible — it follows that we should be nice to computers.

The first actual session was about the nature of reality, with an introduction by Alex Rosenberg. His position is self-professedly scientistic, reductionist and nihilist, as presented in his The Atheist’s Guide to Reality. (Rationally Speaking published a critical review of that book, penned by Michael Ruse.) Alex thinks that complex phenomena — including of course consciousness, free will, etc. — are not just compatible with, but determined by and reducible to, the fundamental level of physics. (Except, of course, that there appears not to be any such thing as the fundamental level, at least not in terms of micro-things and micro-bangings.)

The first response came from Don Ross (co-author with James Ladyman of Every Thing Must Go), who correctly pointed out that Rosenberg’s position is essentially a statement of metaphysical faith, given that fundamental physics cannot, in fact, derive the phenomena and explanations of interest to the special sciences (defined here as everything that is not fundamental physics).

Weinberg made the interesting point that when we ask whether X is “real” (where X may be protons or free will) the answer may be yes, qualified by what one means by the term “real.” Protons, in other words (and contra both Rosenberg and Coyne), are as real as free will for Weinberg, but the qualifier means different things when applied to protons than it does when applied to free will.

In response to Weinberg’s example that, say, the American Constitution “exists” not just as a piece of paper made of particles, Rosenberg did admit that the major problem for his philosophical views is the ontological status of abstract concepts, especially mathematical ones as they relate to the physical description of the world (like Schrödinger’s equation, for instance).

Dennett asked Rosenberg if he is concerned about the political consequences of his push for reductionism and nihilism. Rosenberg, to his credit, agreed that he has been very worried about this. But of course from a philosophical and epistemological standpoint nothing hinges on the political consequences of a given view, if such a view is indeed correct.

Following somewhat of a side track, Dennett, Dawkins and Coyne had a discussion about the use of the word “design” when applied to both biological adaptations and human-made objects. Contra Dawkins and Coyne, Dennett defends the use of the term design in biology, because biologists ask the question “what is this [structure, behavior] for?” thus honestly reintroducing talk of function and purpose in science. A broader point made by Dennett, which I’m sure will become relevant to further discussions, is that the appearance on earth of beings capable of reflecting on things makes for a huge break from everything else in the biosphere, a break that ought to be taken seriously when we talk about purpose and related concepts.

Owen Flanagan, talking to Rosenberg, saw no reason to “go eliminativist” on the basic furniture of the universe, which includes a lot more than just fermions qua fermions (see also bosons): it also includes consciousness, thoughts, libraries, and so on. And he also pointed out that, again, Rosenberg’s ontology potentially gets into serious trouble if we decide that things like mathematical objects are real in an interesting sense of the term (because they are not made of fermions). Flanagan pointed out that what we were doing in that room had to do with the meaning of the words being exchanged, not just with the movement of air molecules and the propagation of sounds, and that it is next to impossible to talk about meaning without teleology (not, he was immediately careful to add, in the Cartesian sense of the term).

Again interestingly, even surprisingly, Rosenberg agreed that meaning poses a huge problem for a scientistic account of the world, for a variety of reasons brought up by a number of philosophers, including Dennett and John Searle (the latter arguing along very different lines from the former, of course). He was worried that this will give comfort to anti-naturalists, but I pointed out that not being able to give a scientific (as distinct from a scientistic) account of something — now or ever (after all, there are presumably epistemic limits to human reason and knowledge) — does not give much logical comfort to the super-naturalist, who would simply be arguing from ignorance.

Poeppel asked Rosenberg what he thinks explanations are, I assumed in the context of the obvious fact that fundamental physics does not actually explain the subject matters of the special sciences. Rosenberg’s answer was that explanations are a way to allay “epistemic hitches” that human beings have. At which point Dennett accused Rosenberg of being an essentialist philosopher (à la Parmenides), making a distinction between explanations in the everyday sense of the word and real explanations, such as those provided by science. But, argued Dennett, this is a very old fashioned way of doing philosophy, and it treats science in a more fundamentalist (not Dennett’s term) way than (most) scientists themselves do.

The afternoon session was devoted to evolution, complexity and emergence, with Terrence Deacon giving the introductory remarks. He began by raising the question of how we figure out what does and does not fit within naturalism. His naturalistic ontology is clearly broader than Rosenberg’s, including, for instance, teleology (in the same sense as espoused earlier in the day by Dennett). Deacon rejects what Dennett calls “greedy” reductionism, because there are complex systems, relations, and other things that don’t sit well with extreme forms of reductionism. Relatedly, he suggested (and I agreed) that we need to get rid of talk of both “top-down” and indeed “bottom-up” causality, because it constrains us to think about the world in ways that are not useful. (Of course, top-down causality is precisely the thing rejected by greedy reductionists, while the idea that causality only goes bottom-up is the thing rejected by antireductionists.)

Ross concurred, and proposed that another good thing to do would be to stop talking about “levels” of organization of reality and instead think about the scale of things (the concept of “scale” can be made non-arbitrary by referring to measurable degrees of complexity and/or to scales of energy). Not surprisingly, Weinberg insisted on the word levels, because he wants to say that every higher level does reduce to the lowest one.

Deacon is interested in emergence because of the issue of the origin of life understood (metaphorically speaking) as a “phase transition” of sorts, which is in turn related to the question of how (biological) information “deals with” the constraints imposed by the second law of thermodynamics. In other words: the interesting question here is how did a certain class of information-rich complex systems manage to locally avoid the second law-mandated constant increase in entropy. (Note: Deacon was most definitely not endorsing a form of vitalism according to which life defies — globally — the second principle of thermodynamics. So this discussion is relevant because it sets out a different way of thinking about what it means for complex systems to be compatible with but not entirely determined by the fundamental laws of physics.)
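To be clear about the thermodynamics behind Deacon’s point (my gloss, not something spelled out at the workshop): the second law constrains only the total entropy of an isolated system, so a subsystem like an organism can locally lower its entropy as long as it exports at least as much entropy to its surroundings. A minimal sketch:

```latex
% Second law for an isolated total system: entropy never decreases overall
\Delta S_{\text{total}} \;=\; \Delta S_{\text{organism}} + \Delta S_{\text{environment}} \;\geq\; 0

% A local decrease is therefore permitted,
\Delta S_{\text{organism}} \;<\; 0,
% provided it is compensated by a larger export to the environment:
\Delta S_{\text{environment}} \;\geq\; \left| \Delta S_{\text{organism}} \right|
```

This is why "locally avoiding" the second law involves no violation of it: organisms are open systems that pay for their internal order by dissipating energy into their environment.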

All of the above, said Deacon, is tied up in what we mean by information, and he suggested that the well known Shannon formulation of information — as interesting as it is — is not sufficient to deal with the teleologically-oriented type of information that characterizes living organisms in general, and of course consciously purposeful human beings in particular.

Dennett seemed to have quite a bit of sympathy with Deacon’s ideas, though he focused on pre- or proto-Darwinian processes as a way to generate those information-rich, cumulative, second principle (locally) defying systems that we refer to as biological.

Rosenberg, as usual, didn’t seem to “be bothered by” the fact that we don’t have a good reductionist account of the origin of life. Methinks Rosenberg should be bothered a bit more by things for which reductionism doesn’t have an account and where emergentism seems to be doing better.

At this point I asked Weinberg (who has actually read my blog series on emergence on his way to the workshop!) why he thinks that the behavior of complex systems is “entailed” by the fundamental laws. He conceded two important points, the second one of which is crucial: first, he readily agreed that of course nobody can (and likely will ever be able to) actually reduce, say, biology to physics (or even condensed matter physics to sub-nuclear physics); so, epistemic reduction isn’t the game at all. Second, he said that nobody really knows if ultimate (i.e., ontological) reduction is possible in principle, which was precisely my point; his only argument in favor of greedy reductionism seems to be a (weak) historical induction: physicists have so far been successful in reducing, so there is no reason to think they won’t be able to keep doing it. Even without invoking Hume’s problem of induction, there is actually very good historical evidence that physicists have been able to do so only within very restricted domains of application. It was gratifying that someone as smart and knowledgeable in physics as Weinberg couldn’t back up his reductionism with anything more than this. However, Levin agreed with Weinberg, insisting on the a priori logical necessity of reduction, given the successes of fundamental physics.

Weinberg also agreed that there are features of, say, phase transitions that are independent of the microphysical constituents of a given system; as well as that accounts of phase transitions in terms of lower level principles are only approximate. But he really thinks that the whole research program of fundamental physics would go down the drain if we accepted a robust sense of emergence. Well, maybe it would (though I don’t think so), but do we have any better reason to accept greedy reductionism than fundamental physicists’ amor proprio? (Or, as Coyne commented, the fact that if we start talking about emergence then the religionists are going to jump the gun for ideological purposes? My response to Jerry was: who cares?)

Don Ross argued that fundamental physics just is the discipline that studies patterns and constraints on what happens that apply everywhere at all times. The special sciences, on the contrary, study patterns and constraints that are more spatially or temporally limited. This can be done without any talk of bottom-up causality, which seems to make the extreme reductionist program simply unnecessary.

Flanagan brought up the existence of principles in the special sciences, like natural selection in biology, or operant conditioning in psychology. He then asked whether the people present imagine that it will ever be possible to derive those principles from fundamental physics. Carroll replied — acknowledging Weinberg’s earlier admission — that no, that will likely not be possible in practice, but in principle... But, again, that seems to me to amount to a metaphysical promissory note that will never be cashed.

Dennett: so, suppose we discover intelligent extraterrestrial life that is based on a very different chemistry from ours. Do we then expect them to have the same economics as ours? If lower levels logically entail higher-level phenomena, the answer should be no: a different chemistry should entail a different economics. And yet, one can easily imagine that similar high-level constraints would act on the alien economy, thereby yielding a convergently similar economy “emerging” from a very different biochemical substrate. The same example, I pointed out, applies to the principle of natural selection. Goldstein and DeDeo engaged in an interesting side discussion on what exactly logical entailment, well, entails, as far as this debate is concerned.

Interesting point by Deacon: emergence is inherently diachronic, i.e., emergent properties are behaviors that did not appear up to a certain time in the history of the universe. This goes nicely with his contention that talk of causality (top-down or bottom-up) is unhelpful. In answer to a question from Rosenberg, Deacon also pointed out that this historical emergence may not have been determined by things that happened before, if the universe is not deterministic but contingent (as there are good reasons to believe).

Simon DeDeo took the floor talking about renormalization theory, which we have already encountered as a major way of thinking about the emergence of phase transitions. Renormalization is a general technique that can be used to move from any group/level to any other, not just in going from fundamental to solid state physics. This means that it could potentially be applied to connecting, say, biology with psychology, if all the processes involved finite steps. However, and interestingly, when systems are characterized by effectively infinite steps, mathematicians have shown that this type of group theory is subject to fundamental undecidability (because of the appearance of mathematical singularities). Seems to me that this is precisely the sort of thing we need to operationalize otherwise vague concepts like emergence.

Another implication of what DeDeo was saying is that one could, in practice, reduce thermodynamics (macro-model) to statistical mechanics (micro-model), say. But there is no way to establish (it’s “undecidable”) whether there isn’t another micro-model that is equally compatible with the macro-model, which means that there would be no principled way to establish which micro-model affords the correct reduction. This implies that even synchronic (as opposed to diachronic) reduction is problematic, and that Rosenberg’s refrain that “the physical facts fix all the facts” is not correct. (As a side note, Dennett, Rosenberg and I agreed that DeDeo’s presentation is a way of formalizing the Duhem-Quine thesis in epistemology.)

It occurred to me at this point in the discussion that when reductionists like Weinberg say that higher level phenomena are reducible to lower level laws “plus boundary conditions” (e.g., you derive thermodynamics from statistical mechanics plus additional information about, say, the relationship between temperatures and pressures), they are really sneaking in emergence without acknowledging it. The so-called boundary conditions capture something about the process of emergence, so that it shouldn’t be surprising that the higher level phenomena are describable by a lower level “plus” scenario. After all, nobody here is thinking of emergence as a mystical spooky property.
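As an illustration of what the “plus boundary conditions” move looks like in practice (my textbook example, not one discussed at the workshop), consider the kinetic-theory derivation of the ideal gas law: the micro-model of Newtonian particles bouncing off container walls yields the macro-law only after adding a bridge assumption identifying temperature with mean molecular kinetic energy:

```latex
% Kinetic theory: pressure from molecular collisions with the container walls
P \;=\; \frac{1}{3}\,\frac{N}{V}\, m \langle v^2 \rangle

% Bridge assumption linking micro and macro descriptions
% (not itself derivable from mechanics alone):
\frac{1}{2} m \langle v^2 \rangle \;=\; \frac{3}{2} k_B T

% Together these give the macro-level ideal gas law:
P V \;=\; N k_B T
```

The mechanics alone delivers the first equation; it is the added identification of temperature with average kinetic energy — a “boundary condition” in the sense above — that completes the reduction, which is exactly the kind of extra ingredient that arguably smuggles in the higher-level description.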

And then the discussion veered into evolution, and particularly the relationship between the second law of thermodynamics and adaptation by natural selection. Rosenberg’s claim was that the former requires the latter, but both Dennett and I pointed out that that’s a misleading way of putting it: the second law is required for certain complex systems to evolve (in our type of universe, given its laws of physics). But the mere existence of the second law doesn’t necessitate adaptation. Lots of other boundary conditions (again!) are necessary for that to be the case. And it is this tension — between fundamental physics requiring (in the strong sense of logical entailment) certain complex phenomena and merely being necessary (but not sufficient) for and compatible with them — that captures the major division between the two camps into which the workshop participants are divided (understanding, of course, that there is some porosity between the camps themselves).

Tomorrow: morality, free will, and consciousness!