Introduction

In this essay, I focus on the dialogue between Eric Winsberg and Wendy Parker about whether scientists’ predictions about climate change are ultimately influenced by social values. This dialogue refers back to the question of whether value judgments play a role in the scientific process. Winsberg’s argument is distinctive in that it focuses on the computer simulation models used for climate prediction. In short, Winsberg argues that scientists do include social values when making their predictions, because they often draw on values in choosing how to construct these models. Parker, on the other hand, objects to Winsberg’s argument: she thinks that the current practice of producing precise probabilities for climate change predictions with climate models is misplaced. These precise probabilities, she argues, should be exchanged for coarser estimates, which would in turn avoid the reliance on social values. I provide a challenge for Parker’s proposal: I will show that coarse estimates of the kind Parker proposes are either too subjective to be useful, or involve transforming scientists’ estimates to fit set categories, which, as argued by Steele (2012), involves value judgments on pragmatic grounds. I conclude that Parker’s objection is unsuccessful.

Winsberg’s Argument

I begin by situating and outlining Winsberg’s argument. Winsberg’s argument draws on a discussion about whether value judgments are an integral part of the scientific process. Rudner (1953) argued that ethical values must be part of science: empirical evidence can never establish a hypothesis with certainty, and the decision whether to accept or reject a hypothesis depends on whether the evidence is strong enough. This decision, however, reflects a judgment about the ethical importance of making a mistake in accepting or rejecting the hypothesis (1953: 1-2). Jeffrey (1956) challenged Rudner’s conclusion by arguing that scientists do not accept or reject hypotheses, but only assign probabilities to them. Consequently, scientists’ work remains value-free, and it is only decision-makers, who decide whether to act on the scientists’ probabilities, that make value judgments (Jeffrey 1956: 245; Parker 2014: 24).

Winsberg’s (2012) argument is situated as a response to Jeffrey. Winsberg’s thesis focuses on hypotheses about future climate change in particular, and states that it is impossible for climate scientists to keep social value judgments out when determining probabilities about climate change (2012: 111). The novel part of Winsberg’s argument is that social values are thought to come into play when constructing computer simulation models. Models are used because of the complexity of the processes that shape climate change, and their output provides scientists with specific probability estimates. However, there is great uncertainty about how to build these models, and methodological choices influence the probabilities they produce. Winsberg’s crucial point is that in making these methodological decisions, it is often not epistemic grounds but ethical or social grounds that lead scientists to make a particular choice (Winsberg 2012: 124; Parker 2014: 25-26).

In particular, Winsberg argues that there are two ways in which social values impact methodological choices. First, scientists might be aware that different methodological choices have different inductive risk profiles (i.e. different risks of a false positive or a false negative when accepting or rejecting hypotheses). The value judgments involved here concern which methodological choices are ethically better or worse. The second way involves giving priority to some predictive tasks over others given their social or economic relevance; this prioritisation likewise involves value judgments (Winsberg 2012: 124).

Parker’s Objection

Parker disagrees with Winsberg that climate change predictions must involve value judgments: she does so by objecting to the methods currently used to produce probability estimates about climate change. These probabilities, Parker argues, are “artificially precise” (Parker 2014: 27). In other words, a single probability distribution cannot accurately reflect scientists’ current understanding of the climate, given that scientists do not know enough about the climate system to assign precise probabilities to its future behaviour. Put simply, scientists are not able to determine that the likelihood of an event occurring is “0.38 rather than 0.37” (Parker 2014: 27), for example.

Instead of using methods which produce precise probabilities, Parker argues that we should focus on making coarser predictions, using qualitative terminology such as “likely” or probability intervals. She notes that it is currently unclear what kinds of methods could be used to make these coarser predictions. However, coarser predictions would better reflect scientists’ current level of knowledge about the climate system. Additionally, coarser estimates would be less affected by scientists’ value judgments, the thought being that they would not be as sensitive to changes in scientists’ methodological modelling choices (Parker 2014: 27-28). In this way, Parker aims to weaken the view that social values have an inevitable impact on climate predictions.

Reply to Parker: A Dilemma

I will now argue that either Parker is incorrect in claiming that coarser estimates avoid value judgments, or, if they do avoid them, they are useless for policy-making. I do this in the form of a dilemma. The first horn of the dilemma Parker faces is the following: as mentioned, one way for scientists to make coarser estimates of future climate change is to use language of the form “likely” or “unlikely”, for example. However, such terminology is subjective and will therefore differ across scientists. One scientist might take the term “likely” to represent a probability equal to or above 30 per cent, whilst another might take it to mean a probability above 40 per cent. Given these subjective differences, such coarse estimates will not prove helpful as scientific results that might inform policy decisions: if scientists provide subjective estimates which are not easily comparable, this will hinder their use for effective decision-making on future climate policy. This way of making coarse estimates is therefore not helpful.

One way to avoid this is to calibrate the language that scientists use so that terms such as “likely” take on specific meanings. In other words, scientists’ use of these words would be standardised so that each corresponds to a set probability interval. In fact, this is what is currently done in the IPCC (Intergovernmental Panel on Climate Change) AR4 Synthesis Report (IPCC 2007: 27): likelihood is assessed along a set scale on which “extremely likely” corresponds to more than 95%, “very likely” to more than 90%, and so on. Similarly, scientists’ confidence levels have set intervals: “very high confidence” corresponds to a chance of at least 9 out of 10 of being correct, “high confidence” to about 8 out of 10, and so on (IPCC 2007: 27). By using intervals like these, one avoids the subjectivity of individual scientists’ terminology.
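To make the calibration idea concrete, the following is a minimal sketch (in Python, my own illustration rather than anything provided by the IPCC) of how a precise model-based probability might be reported via calibrated likelihood terms. Only the “more than 95%” and “more than 90%” thresholds are taken from the AR4 scale cited above; the further cut-offs and the function itself are illustrative assumptions.

```python
# Minimal sketch (not IPCC code) of reporting a precise probability estimate
# via a calibrated likelihood term. Only the 0.95 and 0.90 thresholds come
# from the AR4 scale cited in the text; the rest are illustrative assumptions.

LIKELIHOOD_SCALE = [
    (0.95, "extremely likely"),      # cited above: more than 95%
    (0.90, "very likely"),           # cited above: more than 90%
    (0.66, "likely"),                # illustrative further cut-off
    (0.50, "more likely than not"),  # illustrative further cut-off
]

def calibrated_term(probability: float) -> str:
    """Return the calibrated likelihood term for a precise probability estimate."""
    for threshold, term in LIKELIHOOD_SCALE:
        if probability > threshold:
            return term
    return "about as likely as not, or less"  # fallback for lower estimates

# Example: a model output of 0.92 would be reported as "very likely".
print(calibrated_term(0.92))
```

The sketch is only meant to show what the calibration involves: any precise estimate must be placed into one of the set categories, and it is exactly this classification step that becomes relevant in the second horn of the dilemma below.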

However, this move leads to the second horn of the dilemma: as argued by Katie Steele (2012), when scientists are asked to sort their expert beliefs or estimates into another, less complex form (e.g. a set of specific categories or intervals), they are forced to make a decision about how to report their beliefs. In doing so, Steele continues, scientists need to consider (perhaps only implicitly) the consequences that this classification of their expert beliefs might have, and this, crucially, commits them to making value judgments about the desirability of those consequences. When scientists sort their expert beliefs into categories such as “high confidence” or “medium confidence” for the IPCC report, for example, they must consider the consequences that their classifications will have, which in turn requires making value judgments (Steele 2012: 898-899).

Replies to the Dilemma

I will now consider three potential replies to the dilemma. The first reply focuses on the first horn of the dilemma, and the second and third replies focus on the second horn.

1. Subjective estimates are not a problem

One reply in response to the first horn would be to propose an alternative to the solution mentioned in the last section (which was to calibrate scientists’ language): one could argue that, instead of calibrating scientists’ language, having subjective terminology across scientists is unproblematic as long as its meaning (i.e. which probability interval a scientist takes a term such as “likely” to stand for) is made explicit. The idea is that this would make it possible to compare scientists’ estimates with each other, even if this becomes slightly harder than if scientists had a uniform terminology. This reply would thus avoid the first horn whilst also delivering coarse estimates which do not contain value judgments.

I do not think that this reply works: to make scientists’ subjective estimates, i.e. their subjective probability intervals, comparable and usable for policy-making, it will still be necessary to fit different scientists’ estimates into one uniform set of probability intervals (and thus convert them into a consistent terminology). This, however, leads us back to the second horn of the dilemma outlined above: if scientists must convert their probability intervals into a uniform form, this will require them to make value judgments, as argued by Steele (2012). Scientists thus cannot provide subjective estimates that are useful for policy-making without converting them into another form and thereby making value judgments.

2. Disanalogy Between Steele and Parker’s Probability Intervals

A potential reply to the second horn of the dilemma is as follows: the examples that Steele gives involve scientists having to transform their existing beliefs to fit a certain category for the use of policy-makers. Parker’s proposal of cruder categories, meanwhile, differs from Steele’s examples in that the classification of scientists’ estimates takes place internal to science. Unlike in Steele’s cases, the idea is that classifying terminology such as “high confidence” or “likely” as corresponding to certain percentage intervals is not done for the benefit of policy-makers; it is instead used internally within science to ensure consistency among scientists. Given this, one might conclude that we can avoid the second horn of the dilemma: since scientists’ transformation of their estimates into set categories is not done for policy-making, it does not involve considering the consequences of the re-classification.

I do not think that this reply works: even if the re-classification is not meant directly for policy work, but only to ensure consistent terminology within science, scientists will be aware that their results, and by extension their labels for which estimates are “likely”, for instance, will have direct policy implications. This is especially true given the policy focus of work on climate change. Thus, this reply does not succeed in avoiding the dilemma.

3. Intervals Are Not Cruder Than Scientists’ Own Estimates

Another reply to the second horn is to say that scientists’ estimates (i.e. their probability intervals) are not cruder than the set categories (i.e. the IPCC probability intervals) into which they are classifying their estimates: both are probability intervals. As such, one might conclude, Steele’s argument, which concerns scientists deciding how to fit their more complex expert beliefs into less complex categories, does not apply in this situation.

In response, I do not think it matters whether the set categories are less complex than the scientists’ own estimates or beliefs. Even if the scientists’ own beliefs are also in the form of probability intervals, unless these intervals correspond exactly to the set probability intervals that are to be used across the scientific discipline, scientists will still have to make evaluative judgments about which set intervals to fit their own beliefs into. This, as before, will require them to consider the consequences and thus to make value judgments.

Barring further objections to my dilemma, I conclude that Parker’s objection to Winsberg does not succeed in challenging the latter’s argument that model-based assignments of probability to hypotheses about future climate change are influenced by social values.

Conclusion

I have outlined Winsberg’s argument for why climate model predictions involve value judgments, and Parker’s objection that precise estimates should be exchanged for coarser estimates which are less sensitive to value judgments. I have argued that Parker’s objection to Winsberg is unsuccessful because it faces the following dilemma: either scientists’ coarse estimates remain subjective, in which case they are of little use for policy-making, or producing them involves making value judgments on pragmatic grounds.

References

  1. IPCC (Intergovernmental Panel on Climate Change) (2007). Climate Change 2007: Synthesis Report; Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (ed.) Core Writing Team, R. K. Pachauri, and A. Reisinger. Geneva: IPCC, pp.1-104.
  2. Jeffrey, R. C. (1956) ‘Valuation and Acceptance of Scientific Hypotheses’, Philosophy of Science 23(3), pp. 237-246.
  3. Parker, W. (2014) ‘Values and Uncertainties in Climate Prediction, Revisited’, Studies in History and Philosophy of Science 46, pp. 24-30.
  4. Rudner, R. (1953) ‘The Scientist Qua Scientist Makes Value Judgments’, Philosophy of Science 20(1), pp. 1-6.
  5. Steele, K. (2012) ‘The Scientist Qua Policy Advisor Makes Value Judgments’, Philosophy of Science 79(5), pp. 893-904.
  6. Winsberg, E. (2012) ‘Values and Uncertainties in the Predictions of Global Climate Models’, Kennedy Institute of Ethics Journal 22(2), pp. 111-137.