Butler & Kefford pointed out that prior literature describes climate stress as amplifying pre-existing conflict risk, rather than being the "sole cause" that Adams et al. suggest previous studies treat it as (yes, this is getting confusing). We agree with this interpretation, and the evidence seems to back it up pretty clearly. Our analyses indicate fairly consistent percentage changes in conflict risk induced by climatic shifts, so places with high initial risk see larger absolute increases in risk when climatic events hit.
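To make that arithmetic concrete, here's a quick sketch in Python (the baseline risks and the 10% relative increase are made-up numbers for illustration, not estimates from any of the papers discussed): a constant percentage change implies a much larger absolute change wherever the baseline is high.

    # Illustrative only: two hypothetical locations and a hypothetical
    # 10% relative increase in conflict risk from a climatic shift.
    baseline_risks = {"low-risk place": 0.02, "high-risk place": 0.20}
    relative_increase = 0.10  # same percentage change in both places

    for place, p0 in baseline_risks.items():
        p1 = p0 * (1 + relative_increase)
        print(f"{place}: {p0:.3f} -> {p1:.3f} (absolute increase {p1 - p0:.3f})")

    # Same relative effect, but the absolute increase in the high-risk place
    # (0.020) is ten times the increase in the low-risk place (0.002).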
Gleick, Lewandowsky & Kelly argued that the earlier articles oversimplified prior research, and that focusing on locations where conflict occurs is important for tracing out how climate induces conflict. I would think this point would resonate with Adams et al., since some of those authors do actual case studies as part of their research portfolios. If anything, case studies seem like the most extreme case of "selection on the outcome" as Adams et al. define it.
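Since the distinction between studying conflict-prone places and "selecting on the outcome" keeps coming up (here and in the letter below), a small simulation sketch may help. To be clear, this is our own illustration with a made-up data-generating process, not an analysis from any of the papers cited: the true effect of a climate shock on conflict is the same everywhere, and the point is that restricting attention to high-baseline locations leaves the estimated effect intact, while dropping observations based on the outcome itself does not.

    # Illustrative simulation: true climate effect beta = 0.5 in every location;
    # half the locations simply have a higher baseline level of conflict.
    import numpy as np

    rng = np.random.default_rng(0)
    n_loc, n_yr, beta = 200, 50, 0.5

    baseline = np.where(np.arange(n_loc) < n_loc // 2, 2.0, 0.0)   # location baselines
    x = rng.normal(size=(n_loc, n_yr))                             # climate shocks
    y = baseline[:, None] + beta * x + rng.normal(size=x.shape)    # conflict outcome

    def ols_slope(xv, yv):
        """Bivariate OLS slope of y on x."""
        xv, yv = xv.ravel(), yv.ravel()
        return np.cov(xv, yv)[0, 1] / np.var(xv, ddof=1)

    high = baseline > 0              # the "conflict-prone" study sites
    sel = high[:, None] & (y > 2.0)  # keep only observations with high observed conflict

    print("all locations:            ", round(ols_slope(x, y), 2))              # ~0.5
    print("conflict-prone sites only:", round(ols_slope(x[high], y[high]), 2))  # still ~0.5
    print("selected on the outcome:  ", round(ols_slope(x[sel], y[sel]), 2))    # well below 0.5

A real analysis would of course include location and time controls, but the basic contrast is the same: choosing where to look is not the same thing as throwing away the observations where nothing happened.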
Finally, here's what we originally wrote to fit on a single MS Word page:
Claims of Bias in Climate-Conflict Research Lack Evidence
Solomon Hsiang and Marshall Burke
A recent article by Adams et al. [1] and accompanying editorial [2] criticize the field of research studying links between climate and conflict as systematically biased, sowing doubt in prior findings [3,4]. But the underlying analysis fails to demonstrate any evidence of biased results.
Adams et al. claim that because most existing analyses focus on conflict-prone locations, the conclusions of the literature must be biased. This logic is wrong. If it were true, then the field of medicine would be biased because medical researchers spend a disproportionate time studying ill patients rather than studying each of us every day when we are healthy.
Adams et al.’s error arises because they confuse sampling observations within a given study based on the dependent variable (a major statistical violation) with the observation that there are more studies in locations where the average of a dependent variable, the conflict rate, is higher (not a violation). Nowhere do Adams et al. provide evidence that any prior analysis contained actual statistical errors.
We are also concerned about the argument advanced by Adams et al. and repeated in the editorial that it is “undesirable” to study risk factors for populations at high risk of conflict because it may lead to them being “stigmatized.” Such logic would imply that study of cancer risk factors for high-risk patients should not proceed because success of these studies may lead to the patients being stigmatized. We believe that following such recommendations will inhibit scientific research and lead to actual systematic biases in the literature.
Research on linkages between climate and conflict is motivated by the desire to identify causes of human suffering so it may be alleviated. We do not believe that shying away from findings in this field is an effective path towards this goal.
References
1. Adams, C., Ide, T., Barnett, J. & Detges, A. Sampling bias in climate–conflict research. Nature Climate Change (2018).
2. Editorial. Don’t jump to conclusions about climate change and civil conflict. Nature 555, 275–276 (2018).
3. Hsiang, S. M., Burke, M. & Miguel, E. Quantifying the influence of climate on human conflict. Science (2013). doi:10.1126/science.1235367
4. Burke, M., Hsiang, S. M. & Miguel, E. Climate and conflict. Annual Review of Economics (2015). doi:10.1146/annurev-economics-080614-115430