Tuesday, August 18, 2009

Should Scientific Methods and Data be Public?

At the last Eastern APA meeting in Philly, I attended an excellent session on The Epistemology of Experimental Practices, with Allan Franklin and Marcel Weber. During the discussion, I asked whether scientific methods and data should be public – that is, whether different investigators applying the same methods to the same questions should get the same data.

Franklin argued that publicity is not necessary, because some experiments might be too difficult or expensive to replicate, and different data analyses by different groups count as different experiments. This seems pretty wrong to me.

For one thing, I got the impression that Franklin didn’t fully understand what method publicity amounts to. Publicity does not require that all experiments be replicated; only that it is possible for different investigators to apply the same methods, and if they did, then they would get the same results. (Of course, much hinges on what we mean by “possible” and who counts as an investigator; for some more details, see here.)

For another thing, it’s better to say that actual replication of experiments is often unnecessary, as Marcel Weber said. Weber pointed out that experimentalists are part of a scientific network that shares techniques and materials, so they often feel they already know what was done. Nevertheless, Weber maintained that publicity is essential to science (and is implemented in the network itself, by the sharing of techniques etc.).

In fact, in his own talk, Allan Franklin listed a number of arguments/reasons for believing the results of experiments, along the lines of those listed in his Stanford Encyclopedia of Philosophy article on Experiments in Physics. All of Franklin’s reasons seem to have to do with publicity and the public validation of data.

Does anyone else have opinions on this? Should scientific methods and data be public or is this methodological principle obsolete?

I care about this because some philosophers have argued that introspection is a private yet legitimate method of observation, and that this shows that method publicity is not necessary for science. I think this view is a disaster. If we reject method publicity, it’s not clear on what grounds we should reject all kinds of pseudo-scientific methods.

(And incidentally, I’ve also argued elsewhere that introspection is not a private method of scientific observation; rather, it’s a process of self-measurement by which public data are generated.)

(cross-posted at Brains.)

9 comments:

  1. In the social sciences, or perhaps in all cases where background theories are weak and (in effect) statistical technique dominates empirical research, publicity should be a requirement (if only to ensure that we are not biasing the data toward those that produce 'results'). So it is by no means an outdated methodological demand. But I take it you are interested in a different question.

  2. The question seems to split into two, as you point to two reasons why publicity might fail:

    "Publicity does not require that all experiments be replicated; only that it is possible for different investigators to apply the same methods, and if they did, then they would get the same results."

    Since the requirement contains two independent conjuncts, it could fail for one of two reasons. It could be impossible for different investigators to apply the same methods. Or it could be possible for them to apply the same methods, and yet false that, if they did, they would get the same results.

    The first kind of failure doesn't seem as bad as the second kind. I can think of lots of legitimate experiments that are impossible, either in practice or in principle, to replicate. (For an experiment to be impossible to replicate in principle, it simply needs to be a sufficiently invasive experiment on a sui generis target: "Is this grenade live?")

    But if it is a failure of the second kind, then I'm not sure that what you have is a successful experiment.

    Perhaps the disagreement with Franklin results from a failure to distinguish between these two kinds of failure of publicity.

  3. Clearly data should be public, otherwise science will be an elitist institution controlled by scientists. Public data allow non-mainstream scholars such as myself not just to duplicate but also simply to analyze scientific conclusions, which are, more often than not, spurious.

    NS

  4. Please see the first half of my introduction to a bibliography for science and technology at the Ratio Juris blog* as to why what the late John Ziman called "post-academic" science contains structural features and institutional imperatives that establish and encourage norms contrary to that of publicity, however much some of us believe that scientific methods and data (at the very least) should be unabashedly public. Familiarity with the intellectual property (IP) literature, especially on a global scale, should quickly disabuse one of the belief that contemporary science accords any priority or serious commitment whatsoever to a publicity norm.

    *Please see here: http://ratiojuris.blogspot.com/2009/08/science-technology-basic-bibliography.html

  5. erratum: contains (it's a bit early in the morning here)

  6. It's not just the audience for a published result that suffers from a lack of publicity regarding methods. Some years ago I published on the problems encountered in the CDF collaboration's early experimental results on the top quark. Essentially, the problem was that one subgroup within the collaboration had chosen their data-selection criteria in a way that lacked transparency, leading some of their collaborators to worry that the criteria had been chosen in a biased way. Whether the critics were right or not, their worries could not be alleviated in any convincing way. (As it happened, there were other sources of evidence within the data that were regarded as compensating for any such weaknesses.)

    This was not an issue of actual replication. The data-selection criteria at issue would not have made much sense for a different detector, and this is definitely the kind of case Allan has in mind, where no one would (or should) bother to recreate the exact same experiment. But it WAS an issue of reproducibility in a weaker sense: the worry was that the apparent support was due to a method tied to idiosyncrasies of the particular data set in hand, rather than to stable features of the phenomenon that would recur if one repeated the experiment with the same detector, etc.

    So I think that publicity of methods should extend not just to making clear what data-selection criteria are used (for example) but should be understood as the need to be clear about those decision procedures that are relevant to the reliability of experimental conclusions.

  7. Incidentally, there's a nice treatment of some of the salient issues here in Sissela Bok's book, Secrets: On the Ethics of Concealment and Revelation (1983), ch. 11, "Secrecy and Competition in Science," pp. 153-170. It's fitting that the chapter is bookended by one on "Trade and Corporate Secrecy" and another on "Secrets of State."

    (Bok is now associated with the Harvard Center for Population and Development Studies at the Harvard School of Public Health.)

  8. Everyone, thanks for the comments.

    Eric Schliesser and Kent, I agree.

    Eric, I agree with most of what you say, but I'd like to insist that the first conjunct be satisfied too. It must be *possible* for different investigators to perform the same experiment. That means that others should be able to see whether the grenade explodes, or build another identical particle accelerator, or what have you. Of course, in practice it will not always be done.

    Notedscholar and Patrick, the issue here is not (1) whether the data are made public, as in published in easily accessible ways, or (2) whether generating the data requires relying on theories and assumptions (of course it does). The issue is whether the methods are such that other competent scientists, once the methods have been explained to them, are actually able to apply them to the same questions and obtain the same results.

  9. In the first instance, I was interested in replying to the question in the title of the post, which I didn't take to be restricted to 'publishing in easily accessible ways'; nor was my point that the data 'rely on theories and assumptions' (of course they do). I believe the specific question you asked is, in our world, related in significant ways to that larger question, even if the question of whether "other competent scientists are able to apply the methods to the same question and obtain the same results" is not logically tied to the question of publicity as such.

