It is becoming increasingly hard to deny that values play a role in scientific practice—specifically non-epistemic, non-cognitive, or contextual values, e.g., moral, political, social, and aesthetic values. I will focus on the testing phase, where theories are compared with evidence and certified (or not) as knowledge, as this is the most central arena in the debate over value-free vs. value-laden science. Traditionally, philosophers of science have accepted a role for values in practice because it could be ghettoized into the “context of discovery,” while the “context of justification” could be treated as epistemically pure. Once we turn from the logical context of justification to the actual context of certification in practice, to the testing of hypotheses within concrete inquiries conducted by particular scientists, we can no longer ignore the role of value-judgments.
There are two main arguments in the literature for this claim: the inductive risk argument and an argument based on the underdetermination of theory by evidence that I will call “the gap argument” (Intemann, 2005). While both arguments have been historically important and have established significant roles for values in science, they share a flawed assumption: the lexical priority of evidence over values. There are several problems with this assumption; one is that its plausibility is closely tied to the value-free ideal of science: the best science would be one guided only by considerations of evidence, and wherever this is possible, we should prefer value-free science. Since this situation may be rare or impossible, we must allow value-judgments to play a role where the evidence leaves some uncertainty. The lexical priority assumption thus generates a tension, because it continues to recognize the normative weight of the value-free ideal. While these arguments have allowed their proponents to construct value-laden ideals of science that preserve some version of scientific objectivity, that alternative is rendered unstable by the lexical priority assumption: it may be taken as insisting that the value-free ideal is the real ideal, and that in circumstances where it is impossible to satisfy, we must settle for a pragmatic compromise that comes as close to it as possible.
This is taken from the introduction of a paper I'm working on, and I'm trying to work out how precisely to express this worry and whether it is a real one. I'd appreciate hearing your thoughts.
Hi Matt,
Thanks for an interesting post. I share your concern that, as stated, this might not seem like a real worry, though I think you can fix that. Here's the main problem (as I see it): the paragraph presumes that the reader takes the value-free ideal to be an inherently bad thing. It seems to me that the Science and Values folks frequently put the bulk of their dialectical efforts into arguing that value-laden science is not inherently bad. However, this is still consistent with value-free science being inherently good (indeed, even better than value-laden science, as would be the case if value-freedom were an ideal). That being said, I suspect that there are consequences of the value-free ideal that some find undesirable (I'm blanking on what these might be right now, because it's 5AM), and if these consequences were rendered explicit in your introduction, I think you'd motivate your problem in a more compelling manner. Under this assumption the problem would amount to this: given the lexical priority of evidence over values, there will be contexts in which these pernicious consequences of the value-free ideal ought to be accepted.
This is interesting, but surely the point of value-freeness as an ideal is that it should guide our epistemic choices whenever possible. Saying that it is *sometimes* impossible, or otherwise heavily constrained, does not seem to tell against the ideal as a general principle of practical reasoning. I'm a big fan of ought implies can, but you would need to endorse an unreasonable variant of that principle to justify dismissing the ideal on the basis of its imperfect realisation.
I am tempted to agree and disagree with the claim about priority, read in two different ways.
1. The risk or gap arguments don't apply to pure mathematics. Proofs in Euclidean geometry do not depend on value commitments, because the theorems follow deductively from the postulates. The difference for science is that inferences from evidence are either ampliative or rely on contingent material postulates. So the involvement of values in inference is not a universal feature of inference, but a special feature of contingent inference about the world. So (in this sense) inference is prior to the influence of values.
2. For scientific inference, though, value-free inference does not make sense even as a limiting case. There will always be a trade-off between different risks (e.g. Type I and Type II errors) and which risks are worth taking is always ineliminably a value judgment. So there is no possible ampliative inference without values. One is not prior to the other.
Neither of these seems like a shortcoming of the risk and gap arguments, however. So maybe you have something else in mind?
"There will always be a trade-off between different risks (e.g. Type I and Type II errors) and which risks are worth taking is always ineliminably a value judgment."
Which is perhaps why the "cult of statistical significance P<0.05" is/was entrenched: a convention on controlling Type I error at this arbitrary level removed one entry point for value judgements in study design and interpretation.
In epidemiology and medicine, the hierarchies of strength of evidence (for support of a theory, and for causation) are ordered by their robustness to the effects of the values of both the experimenter and other scientists.
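To make the alpha/beta trade-off concrete, here is a minimal sketch in Python for a one-sided, one-sample z-test with known variance; the effect size and sample size are assumptions chosen purely for illustration, not drawn from any study:

from scipy.stats import norm

# Illustrative assumptions: true mean under H1 (with sigma = 1) and sample size.
effect_size = 0.3
n = 50

for alpha in (0.10, 0.05, 0.01, 0.001):
    z_crit = norm.ppf(1 - alpha)                      # rejection cutoff for the z statistic
    beta = norm.cdf(z_crit - effect_size * n ** 0.5)  # P(miss a real effect | H1 true)
    print(f"alpha={alpha:<6} beta={beta:.3f} power={1 - beta:.3f}")

Tightening alpha from 0.05 to 0.001 here pushes beta from roughly 0.32 to 0.83: the conventional threshold does not remove the value judgment about which error matters more, it only fixes it in advance.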
It's entirely likely that (i) this is obvious OR (ii) totally irrelevant; but...
ReplyDeleteQuestions of value-freedom with respect to hypothesis testing immediately remind me of psychological work on what is called 'confirmation bias'-- there's a large literature (and a lot of debate) on exactly how this works, but one formulation is that we (the royal 'we'... scientists included) are more likely to seek information which would confirm rather than deny a currently considered hypothesis. Two very relevant papers here are Snyder and Swann (1978) and Klayman and Ha (1987).
I'm embarrassed to admit that I'm so out of the philosophy of science loop that I'm unsure of whether the main question here is descriptive or normative-- if I'm correct in understanding that there are philosophers of science who believe that hypothesis testing can take place in a value-free vacuum, citing this (the confirmation bias stuff, I mean) empirical evidence might be really important.
As a matter of curiosity, I wonder-- how much of the philosophical debate makes contact with this kind of empirical evidence?
Mark - The debate generally starts out from empirical evidence, though usually the evidence is historical and sociological rather than psychological. As a descriptive matter, there have been cases where values definitely have played a role in science, even in hypothesis testing, including cases generally regarded as progressive. The main philosophical question is the normative one: whether eliminating values from science is a worthwhile ideal. I'd like to weigh in and say that it is not, and that values must play a role in good science; indeed, a more thorough role than these two arguments would naturally lead to.
There is perhaps a secondary descriptive question here: whether we should interpret the actual role of values in science as a detriment to progress whose influence in the progressive cases is eliminable in principle (or, to put it differently, whether the actual process happily approximates the "right," value-free process), or whether the values were a needed ingredient for the progress to be made.