Friday, August 2, 2013

A climate for conflict

Sol, Ted Miguel, and I are excited to have a paper we've been working on for a while finally come out. The title is "Quantifying the impact of climate on human conflict", and it's out in this week's issue of Science.  Seems like this should get Sol's tenure package at Berkeley off to a pretty nice start.  

Our goal in the paper is rather modest, and it's all in the title:  we collect estimates from the rapidly growing number of studies that look at the relationship between some climate variable and some conflict outcome, make sure they are being generated by a common and acceptable methodology (one that can credibly estimate causal effects), and then see how similar these estimates are.  We want to understand whether all these studies spread across different disciplines are actually telling us very different things, as the ongoing debate on climate and conflict might seem to suggest.

We compute things in terms of standardized effects -- percent change in conflict per standard deviation change in climate -- and find that, by and large, estimates across the studies are not all that different.  All the exciting/tedious details are in the paper and the supplement (here's a link to the paper for those without access, and here is a link to all the replication data and code).  We've also gotten some nice (and not so nice) press today that discusses different aspects of what we've done.  See here, here, here, here, here, here, here, and here for a sampler.  Most importantly, see here.

We want to use this space to answer some FAQ about the study.  Or, really, some FHC:  Frequently Heard Criticisms.  A bunch of these have popped up already in the press coverage.  Some are quite reasonable, some (we believe) stem from a misunderstanding or misrepresentation of what we're trying to do in the paper, and some are patently false. 

So, in no particular order, some FHC we've gotten (in bold), and our responses.  We include direct quotes from our critics where possible.  Apologies for the lengthy post, but we are trying to get it all out there. 

UPDATE (Friday Aug 2, 4pm): We have now heard from 3 different folks - Andy Solow, Idean Salehyan, and Cullen Hendrix - who each noted that what they were quoted as saying in these articles wasn't the full story, and that they had said lots of nice things too.  After talking to them, it seems there is much more agreement on many of these issues than the press articles would have you believe, and in many cases it looks like the journalists went far out of their way to highlight points of disagreement. I guess this shouldn't be surprising, but we believe it does the debate a pretty serious disservice.  So our responses below should be interpreted as responses to quotes as they appeared, not necessarily responses to particular individuals' viewpoints on these issues, which the quotes might not adequately represent.  We are not trying to pick fights, just to clarify what our paper does and doesn't do.

1.  You confuse weather and climate. 
(verbatim from Richard Tol, courtesy Google Translate)

This is an old saw that you always get with these sorts of papers.  The implied concern is that most of the historical relationships are estimated using short-run variation in temperature and precip (typically termed "weather"), and then these are used to say something about future, longer-run changes in the same variables ("climate").  So the worry is that people might respond to future changes in climate -- and in particular, slower-moving changes in average temperature or precip -- differently than they have to past short-run variation.

This is a very sensible concern.  However, there are a couple reasons why we think our paper is okay on this front. First, we document in the paper that the relationship between climate variables and conflict shows up at a variety of time scales, from hourly changes in temperature to century-scale changes in temperature and rainfall.  We use the word "climate" in the paper to refer to this range of fluctuations, and we find similar responses across this range, which provides some evidence that today's societies are not that much better at dealing with long-run changes than short-run changes.  This is consistent with evidence on climate effects in other related domains (e.g. agriculture).  

Second, we do not make explicit projections about future impacts - our focus is on the similarity in historical responses.  Nevertheless, the reader is definitely given the tools to do their own back-of-the-envelope projections:  e.g. we provide effects in terms of standard deviations of temperature/rainfall, and then provide a map of the change in temperature by 2050 in terms of standard deviations (which are really really large!).  That way, the reader can assume whatever they want about how historical responses map into future responses.  If you think people will respond in the future just like they did in the past, it's easy multiplication.  If you think they'll only be half as sensitive, multiply the effect size by 0.5, etc etc.  People can adopt whatever view they like on how future long-run responses might differ from past short-run responses; our paper does not take a stand on that. 
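That "easy multiplication" can be written down in a couple of lines.  Everything below is a hypothetical illustration -- the function name and the numbers are placeholders, not estimates from the paper:

```python
# Back-of-the-envelope projection of the kind described above.
# All numbers are hypothetical placeholders, not estimates from the paper.

def projected_conflict_change(effect_per_sd, sd_change_2050, sensitivity=1.0):
    """Percent change in conflict implied by a standardized effect.

    effect_per_sd: % change in conflict per 1 SD change in climate (historical)
    sd_change_2050: projected climate change by 2050, in standard deviations
    sensitivity: assumed ratio of future to historical responsiveness
    """
    return effect_per_sd * sd_change_2050 * sensitivity

# If conflict rose ~10% per 1 SD historically, and a region warms by 2 SD:
print(projected_conflict_change(10.0, 2.0))        # full historical sensitivity -> 20.0
print(projected_conflict_change(10.0, 2.0, 0.5))   # assume half as sensitive -> 10.0
```

The `sensitivity` knob is exactly the assumption the reader is free to make: 1.0 if the future looks like the past, 0.5 if societies become half as responsive, and so on.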

2. Your criteria for study inclusion are inconsistently applied, and you throw out studies that disagree with your main finding.
(Paraphrasing Halvard Buhaug and Jurgen Sheffran)

This one is just not true.  We define an inclusion criterion -- which mainly boils down to studies using standard techniques to account for unobserved factors that could be correlated with both climate and conflict --  and include every study that we could find that meets this criterion. In more technical terms, these are panel or longitudinal studies that include fixed effects to account for time-invariant or time-varying omitted variables.

In a few cases, there were multiple studies that analyzed the exact same dataset and outcomes, and in those cases we included either the study that did it first (if the studies were indistinguishable in their methods), or the study that did the analysis correctly (if, as was sometimes the case, one study met the inclusion criteria and another did not).  

So, the inclusion criterion had nothing to do with the findings of the study, and in the paper we highlight estimates from multiple studies whose results do not appear to agree with the main findings of the meta-analysis (see Figure 5). We did throw out a lot of studies, and all of these were studies that could not reliably identify causal relationships between climate and conflict -- for instance if they only relied on cross-sectional variation.  We also throw out multiple studies that agreed very strongly with our main findings!  We provide very detailed information on the studies that we did and did not include in Section A of the SOM.

Sheffran refers to other recent reviews, and complains that our paper does not include some papers in those reviews.  This is true, for the methodological reasons just stated.  But what he does not mention is that none of these reviews even attempt to define an inclusion criterion, most of them review only a small fraction of the papers we do, and none make any attempt to quantitatively compare results across papers.  This is why our study is a contribution, and presumably why Science was interested in publishing it as a Research Article.

3. You lump together many different types of conflict that shouldn't be lumped together.  Corollary: There is no way that all these types of conflict have the same causal mechanism.  
(Idean Salehyan: "It’s hard to see how the same causal mechanism that would lead to wild pitches would be linked to war and state collapse")

This quote is a very substantial misrepresentation of what we do.  First, nowhere do we make the claim that these types of conflict have the same causal mechanism.  In fact we go to great lengths to state that climate affects many different things that might in turn affect conflict, and the fact that the effect sizes are broadly similar across different types of conflict could be explained by climate's pervasive effect on many different potential intervening variables (economic conditions, food prices, institutional factors, ease of transportation, etc). 

Second, we take great pains to separate out conflicts into different categories, and only make comparisons within each category.  So we calculate effect sizes separately for individual-level conflict (things like assault and murder), and group level conflict (things like civil war).  So, again contra Idean's quote, we are not actually comparing baseball violence (which we term individual-level) with war and state collapse (which are group conflicts).  Read the paper, Idean! 

But the whole point of the paper is to ask whether effect sizes across these different types of conflict are in fact similar!  What we had in the literature was a scattering of studies across disciplines, often looking at different types of conflict and using different methodologies.  This disarray had led to understandable confusion about what the literature as a whole was telling us.  Our goal was to put all the studies on the same footing and ask, are these different studies actually telling us that different types of conflict in different settings respond very differently to climate?  Our basic finding is that there is much more similarity across studies than what is typically acknowledged in the debate.  

Whether this similarity is being driven by a common underlying mechanism, or by multiple different mechanisms acting at the same time, is something we do not know the answer to -- and what we highlight very explicitly as THE research priority going forward.  

4. You cherry pick the climate variables that you report
(paraphrasing Halvard Buhaug).

We try really hard not to do this.  Where possible, we focus on the climate variable that the authors focused on in the original study.  However, the authors themselves in these studies are almost completely unrestricted in how they want to parameterize climate.  You can run a linear model, a quadratic model, you can include multiple lags, you can create binary measures, you can create fancy drought measures that combine temperature and rainfall, etc etc.  Authors do all these things, and often do many of them in the same paper.  Since we can't include all estimates from every single paper, we try to pick out the author's preferred measure or estimate, and report that one.  In the cases where authors tested many different permutations and did not hint at their "preferred" estimate (e.g. in Buhaug's comments on our earlier PNAS paper), we pick the median estimate across all the reported estimates.  Section A2 in the SOM provides extra detail on all of these cases.

5. This paper is not based on any theory of conflict, so we learn nothing.
This is very related to FHC #3 above, and we get this from the political science crew a lot.  The thing is, there are a ton of different theories on the causes of conflict, and empirical work so far has not done a great job of sorting them out.  In some sense, we are being atheoretical in the paper -- we just want to understand whether the different estimates are telling us very different things.  As noted above, though, the fact that they're generally not would seem to be very important to people interested in theory! 

Claire Adida, a political science prof at UCSD (and @ClaireAdida on twitter), put it really nicely in an email:  "I don't understand this ostrich politics. How about saying something like 'we really want to thank these economists for doing a ton of work to show how confident we can be that this relationship exists. It's now our turn - as political scientists - to figure out what might be the causal mechanisms underlying this relationship.' " (Btw, I have no idea what "ostrich politics" means, but I really like it!) 

6. People will adapt, so your results are a poor guide for impacts under climate change. 
(paraphrasing Jurgen Sheffran, courtesy Google Translate; Cullen Hendrix: "I'm optimistic.  Unlike glaciers, humans have remarkable adaptive capacity"; as well as audience members in every seminar we've ever given on this topic).

This is very related to FHC #1 above. It is definitely possible that future societies will become much better at dealing with extreme heat and erratic rainfall.  However, to just assume that this is the case seems to us a dangerous misreading of the existing evidence.  As stated above, available evidence suggests that human societies are remarkably bad at dealing with deviations from average climate, be it a short-lived temperature increase or a long-term one.  See here and here for other evidence on this topic.

And it has to be the case that knowing something about how yesterday's and today's world respond to climate tells us more about future impacts than knowing nothing about how societies have responded to climate.  The alternative - that the present world tells us nothing about the future world - just does not appear consistent with how nearly anybody sees things. 

7.  A lot of your examples - e.g. about civilization collapse - do not pertain to the modern world.

We got this in peer review. It's true that a lot of these collapse stories are from way back.  The Akkadian empire collapsed before 2000 BC, after all!  In a companion paper, forthcoming in Climatic Change, Sol and I look a little more carefully at this, and it actually turns out that per capita incomes in many of these societies, pre-collapse, were remarkably similar to incomes in many developing countries today.  To the extent that economic conditions shape conflict outcomes -- a common belief in economics and political science -- then this provides at least some evidence that these historical events are not completely irrelevant to today. 

More basically, though, it seems like hubris to just assume that "this time things are different".  At the time of their collapse, each of these societies (the Akkadians, the Maya, some of the Chinese dynasties, Angkor Wat) was incredibly advanced by global standards, and they probably also did not figure that climate would play a role in their demise.  Because we don't yet have a firm grasp on why climate affects conflict, it again seems dangerous to assume that things are completely different today -- just as it seems dangerous to conclude that modern societies are going to be completely destroyed by climate change, a claim we make nowhere in the paper. 

However, we do hope that "this time is different"!  It would be quite nice if the Mayan and Angkor Wat examples did not, in fact, pertain to the modern world. 

8. You can't claim that there is an overall significant relationship between climate and conflict if many of the studies you analyze do not show a statistically significant effect. 
(Halvard Buhaug: "I struggle to see how the authors can claim a remarkable convergence of quantitative evidence when one-third of their civil conflict models produce a climate effect statistically indistinguishable from zero, and several other models disagree on the direction of a possible climate effect")

This is a basic misunderstanding of what a meta-analysis does.  The beauty of a meta-analysis is that, by pooling a bunch of different studies, you can dramatically increase statistical power by increasing your sample size. It's even possible to find a statistically significant result across many small studies even if no individual study found a significant result.  This happens in medical meta-analyses all the time, and is why they are so popular in that setting:  each individual study of some expensive drug or procedure often only includes a few individuals, and only by combining across studies do you have enough statistical power to figure out what's going on.

So the fact that some of the individual studies were statistically significant, and others were not, does not necessarily affect the conclusions you draw when you average across studies.  In our case, it did not:  the mean across studies can be estimated very precisely, as we show in Figures 4 and 5, and discuss in detail in the SOM.
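The pooling logic is easy to see in a toy example.  Below is a minimal sketch of a standard inverse-variance ("fixed-effect") meta-analytic mean -- a textbook estimator used for illustration, not necessarily the exact one in our paper -- with three hypothetical studies, none individually significant, that pool to a precise combined estimate:

```python
import math

def pooled_estimate(effects, ses):
    """Inverse-variance (fixed-effect) pooled mean and its standard error."""
    weights = [1.0 / se**2 for se in ses]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Three hypothetical studies, none individually significant at 5% (all |t| < 1.96):
effects = [4.0, 5.0, 5.0]   # % change in conflict per 1 SD of climate
ses     = [3.0, 3.0, 3.0]   # individual t-stats: 1.33, 1.67, 1.67

mean, se = pooled_estimate(effects, ses)
print(round(mean, 2), round(se, 2))   # 4.67 1.73
print(round(mean / se, 2))            # 2.69: significant once pooled
```

No single study clears the significance bar, but the pooled estimate does, which is exactly the point about statistical power made above.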

A final point:  we find a striking consistency in findings in the studies that look at temperature in particular.  Of the 27 modern studies that looked at a relationship between temperature and conflict, all 27 estimated a positive coefficient.  This is extremely unlikely to happen by chance - i.e. very unlikely to happen if there were in fact no underlying relationship between temperature and conflict.  Think of flipping a coin 27 times and getting heads all 27 times.  The chance of that is less than 1 in a million.  This is not a perfect analogy -- coin flips of a fair coin are independent, our studies are not fully independent (e.g. many studies share some of the same data) -- but we show on page 19 in the SOM that even if you assume a very strong dependence across studies, our results are still strongly statistically significant. 
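The coin-flip arithmetic is easy to verify directly (this just reproduces the calculation in the text, and the same independence caveat applies):

```python
# Probability that all 27 studies estimate a positive coefficient if positive
# and negative signs were actually equally likely, and studies were independent:
p_all_positive = 0.5 ** 27
print(p_all_positive)      # ~7.45e-09
print(1 / p_all_positive)  # 134217728.0, i.e. roughly 1 in 134 million
```

So "less than 1 in a million" is actually quite conservative under independence; the dependence-adjusted calculation is the one on page 19 of the SOM.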

9. Conflict has gone down across the world, as temperatures have risen. This undermines the claims about a positive relationship between temperature and conflict.

(Idean Salehyan: "We've seen rising temperatures, but there's actually been a decline in armed conflict".)

There are a couple things wrong with this one. First, many types of conflict that we look at have not declined at all over time.  Here is a plot of civil conflicts and civil wars since 1960 from the PRIO data, summed across the world.  As coded in these data, civil conflicts are conflicts that result in at least 25 battle deaths (light gray in the plot), and civil wars are those that result in at least 1000 deaths (dark gray).  As you can see, both large wars and smaller conflicts peaked in the mid-1990s, and while the incidence of larger wars has fallen somewhat, the incidence of smaller conflicts is currently almost back up to its 1990s peak.  These types of conflicts are examined by many of the papers we study, and have not declined.   

As another check on this, I downloaded the latest version of the Social Conflict in Africa Dataset, a really nice dataset that Idean himself was instrumental in assembling.  This dataset tracks the incidence of protests, riots, strikes, and other social disturbances in Africa.  Below is a plot of event counts over time in these data.  Again, you'd be very hard pressed to say that this type of conflict has declined either.  So I just don't understand this comment.

Second, and more importantly, there are about a bazillion other things that are also trending over this period.  The popularity of the band New Kids On The Block has also fallen fairly substantially since the 1990s, but no one is attributing changes in conflict to changes in NKOTB popularity (although maybe this isn't implausible).  The point is that identifying causal effects from these trends is just about impossible, since so many things are trending over time.  

Our study instead focuses on papers that use detrended data - i.e. those that use variation in climate over time in a particular place.  These papers, for instance, compare what happens to conflict in a hot year in a given country, to what happens in a cooler year in that country, after accounting for any generic trends in both climate and conflict that might be in the data.  Done this way, you are very unlikely to erroneously attribute changes in conflict to changes in climate. 
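The "hot year vs. cool year within the same country" comparison is just the within (demeaning) transformation behind country fixed effects.  Here is a minimal sketch with made-up toy data -- real studies also include time effects and trends:

```python
# Subtracting each country's own mean removes fixed country characteristics,
# so the remaining variation compares hot vs. cool years *within* a country.
# Toy data for illustration only.

def demean_by_group(values, groups):
    """Subtract each group's mean from its observations."""
    by_group = {}
    for g, v in zip(groups, values):
        by_group.setdefault(g, []).append(v)
    means = {g: sum(vs) / len(vs) for g, vs in by_group.items()}
    return [v - means[g] for g, v in zip(groups, values)]

countries = ["A", "A", "B", "B"]
temp      = [20.0, 22.0, 28.0, 30.0]   # country B is always hotter than A
conflict  = [1.0, 3.0, 5.0, 7.0]

t_within = demean_by_group(temp, countries)      # [-1, 1, -1, 1]
c_within = demean_by_group(conflict, countries)  # [-1, 1, -1, 1]

# Within-country slope (OLS through the demeaned data):
beta = sum(t * c for t, c in zip(t_within, c_within)) / sum(t * t for t in t_within)
print(beta)  # 1.0 unit more conflict per degree, within a country
```

Note that the raw cross-country comparison (B is hotter *and* more conflict-prone) could reflect anything fixed about B; the within transformation throws that variation away and keeps only the year-to-year swings.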

10. You don't provide specific examples of conflicts that were caused by climate.  

(Halvard Buhaug: "Surprisingly, the authors provide no examples of real conflicts that plausibly were affected by climate extremes that could serve to validate their conclusion. For these and other reasons, this study fails to provide new insight into how and under what conditions climate might affect violent conflict")

I do not understand this statement.  We review studies that look at civil conflict in Somalia, studies that look at land invasions in Brazil, studies that look at domestic violence in one city in one year in Australia, studies that look at ethnic violence in India, studies that look at murder in a small set of villages in Tanzania.  The historical studies looking at civilization collapses in particular try to match single events to large contemporaneous shifts in climate.  We highlight these examples in both the paper and in the press materials that we released, and they were included in nearly every news piece on our work that we have seen.  So, again, this comment just does not make sense. 

Perhaps implicit in this claim is a belief that we are climate determinists.  But as we say explicitly in the paper, we are not arguing that climate is the only factor that affects conflict, nor even that it is the most important factor affecting conflict.  Our contribution is to quantify its role across a whole host of settings, and we hope our findings will help motivate a bunch more research on why climate should shape conflict so dramatically (see Claire's quote above).

11.  You are data mining. Corollary: What you guys are demonstrating is a severe publication bias problem -- only studies that show a certain result get published.

(Andy Solow: "In the aggregate, if you work the data very hard, you do find relationships like this. But when you take a closer look, things tend to be more complicated." As an aside, Andy sent us a very nice email, noting in reference to the press coverage of our article: "From what I've seen so far, all the nice things I said - that you are smart, serious researchers working on an important and difficult problem, that your paper will contribute to the discussion, that you may well be right - have been lost in favor of concerns I have and that, as I took pains to point out, you are already aware of.")

This is related to FHC #2 and #4 above. We have defined a clear inclusion criterion, and only include studies that meet this criterion.  As detailed in the SOM Section A2, we do not include a number of studies that agree very strongly with our main findings - for instance Melissa Dell's very nice paper on the Mexican Revolution.  Again, our inclusion criterion is based on methodology, not findings. 

The publication bias issue is a tougher one, and one which we explicitly address in the paper -- it even gets its own section, so it's not something we're trying to hide from.  We test formally for it in the SOM (Section C), finding limited evidence that publication bias is behind our results.  We also note that it's not clear where the professional incentives now lie in terms of the sorts of results that are likely to get published or noticed.  The handful of climate/conflict skeptics have garnered a lot of press by very publicly disagreeing with our findings, and this has presumably been good for their careers.  Had they instead published papers that agreed with our findings, it's likely that the press would not have had these folks as their go-to this time around.  Similarly, journals are probably becoming less interested in publishing yet another paper that shows that higher temperatures lead to more conflict.  Because so many of the papers we review are really recent (the median publication date across our studies was 2011), we feel that it is unlikely that all of these results are false positives.
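The intuition behind the formal test in SOM Section C can be sketched with made-up numbers: if there is a real underlying effect, t-statistics should grow roughly with the square root of the sample size, whereas under pure selection for marginal significance they pile up just above the threshold regardless of sample size.  Everything below is hypothetical illustration, not the paper's data:

```python
import math

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# Hypothetical study-level data:
sample_sizes = [100, 400, 900, 1600, 2500]
t_real   = [2.0 * math.sqrt(n / 100) for n in sample_sizes]  # grows with sqrt(N)
t_biased = [2.1, 2.0, 2.2, 2.1, 2.0]                         # stuck near the threshold

root_n = [math.sqrt(n) for n in sample_sizes]
print(round(corr(root_n, t_real), 2))    # 1.0: what a true effect looks like
print(round(corr(root_n, t_biased), 2))  # near zero: what pure selection looks like
```

Finding that larger studies do produce larger t-stats is what you would expect under a real effect, though (as noted above) it does not rule out publication bias entirely.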


  1. Ad 1:
    A short-term elasticity is not a long-term elasticity. Confusing short-term elasticities with long-term elasticities is not okay. Taking an average is worse. Pretending that the average is a long-term elasticity ....

  2. I don't think they're saying that the short-run elasticity IS the long-run elasticity. I think this complaint is appropriately addressed in comment 1. If it's still a concern then we can think of the effects as an upper bound, and as Marshall says make some assumptions about the degree to which the short-run elasticity maps into the long-run elasticity.

  3. Jonathan is right - we are not making any assumptions about the short-run versus long-run elasticities, and folks can make whatever assumptions they like there. However, the two empirical papers we know of on the issue that actually measure the two elasticities using historical data, find that the short- and long-run elasticities are exactly the same. See FHC #6 above.

    Richard, would love to see evidence to the contrary.

    But this is not to say that future societies won't respond differently in the long run. Again, we have no idea - and do not make any firm assumptions about this in the paper.

  4. You use a kernel to average across temporal scales, don't you?

    You use the results to project the impact of climate change, don't you?


  5. No, we do not project the impacts of climate change in this paper. See #1.

  6. Also, it's worth mentioning that Marshall and Ted have an excellent paper with John Dykema, Shankar Satyanath, and David Lobell encouraging researchers to think very carefully about such projections given the associated climatic uncertainty (

    Conditional on working in this field, these guys are the least likely to engage in the sort of extrapolation that you're concerned about, Richard.

  7. Figure 6 exists in my imagination only?

    1. Figure 6 just maps projected temperature change - it says nothing whatsoever about any other impacts.

  8. "... the reader is definitely given the tools to do their own back-of-the-envelope projections."

    "That way, the reader can assume whatever they want about how historical responses map into future responses."

    i.e. you are welcome to assume that historical responses bear zero relation to future climate impacts. I expect that it's somewhere between a zero and a one-to-one relation.

  9. The potential impact of publication bias is perhaps my biggest concern. Your point about the median of the papers being 2011 is hardly reassuring. That could be seen as reflecting a gold rush effect (analysis attracted by funding, visibility, etc.). What leads to your conclusion that "we feel that it is unlikely that all of these results are false positives"?

    You would only need perhaps half of them to be wrong to powerfully affect the overall trajectory of your findings, right?

    Also, with that assertion that there are only a "handful of climate/conflict skeptics," you're implying there is some dominant group of "climate/conflict adherents" and that there is false balance on this particular issue. I don't see that in the literature or IPCC conclusions?

  10. Andy-
    We formally test for publication bias in the paper (Section C in the SOM) by looking at the relationship between the statistical significance in each study and the study sample size. We find that studies with bigger sample sizes do indeed produce results with higher t-stats, which is not what you'd expect if everyone were just running regressions until they found an estimate that was marginally significant and reporting that one. This does not eliminate the possibility of publication bias, but suggests that it is not driving our results.

    Re the skeptics/adherents point: wasn't trying to claim there's a dominant group of anybody. Just that many different researchers have found that climate and conflict are related, and that this often gets missed when the press trots out the same skeptics over and over and presents everything as an ongoing, unsettled debate. Our results show that lots of different people across lots of disciplines are working on this problem, and that most of them have found pretty similar results. This was the contribution of our paper, and I think it got lost a little in some of the coverage.

  11. Dear G-Feed group,
    I have the case of an individual that you might want to know.

    From one of the most remote villages of Wayanad District, Kerala State (India) a poor agricultural labourer named Sasidharan started calling ever since I came to Kerala and took up Disaster Risk Reduction work here. He placed a proposition to me in the first call that he believes 'all major climatic extremes are preceded (please note 'preceded') by one or the other kind of human or conflicts'. He being a layman and me being a 'SCIENTIST' I was a bit taken aback at this story line. I tried to fathom the amount of objectivity that he had used to arrive at his conclusions and to my dismay it was only based on an intuitive feeling - for example a train accident in Italy is related to an extreme weather event in India. However, he claimed he has for the last several years recorded all his such 'predictions' in book and many of it he has shared with local media. I still did not believe. Then he suggested that I can keep a record of his calls and predictions and only when I am satisfied of his claims that I need to accept him. This was an open challenge which was difficult to reject. So for many months I kept penning down his calls & predictions. With all hesitation in accepting such intuitive 'predictions', I am forced to accept that there may be some truth in his story line. Your paper, particularly your acceptance of climate as a 'minute to centuries phenomenon' unlike a climatologist who would debate on climate and weather brings in scope to also test his line of the story I guess. A simple cross correlation exercise with 'human conflicts or extreme anthropogenic accidents' against 'climatic extreme' may be of use to test the proposition of this man. Your paper of course is strong in its own turf, but you may want to check the other lag period too to see if climate extremes are indicated by human conflicts or extreme anthropogenic accidents.

    with regards to the team who prepared and presented such a wonderful paper
    Dr. Sekhar L. Kuriakose
    Member, Kerala State Disaster Management Authority &
    Head (Scientist), HVRA Cell (

  12. Hi Marshall. Saw your op-ed in today's Times. Here's the letter I sent in response. I'm sure it won't see the light of day (except here!).

    "Burke, Hsiang, and Miguel (Weather and violence, September 1, 2013) argue that their analysis of the literature establishes a strong link between weather and the level of violence in human society. As they know, however, this result has been met with some skepticism in specialist circles. On the other hand, there is universal agreement that poverty, inequality, and weak civil institutions are at least as important as weather in explaining levels of violence. Policy-makers must not lose sight of this in all the excitement about climate change."

    Andy Solow