Friday, January 18, 2013

Blame it on the rain?


One of the things I’ve been puzzling over lately is why so many agronomists and others in agriculture seem to have the mantra that July rainfall makes the corn crop. These are generally smart people whose opinion I respect. For example, I’m a fan of Scott Irwin and Darrel Good’s blog over at Farmdoc daily, where they recently discussed the prospects for next July’s rainfall. They are definitely not the only ones to focus on rainfall. Whenever I present empirical work on weather and yield to a group of agronomists, at least one will invariably argue that the strong temperature effects are just an artifact of temperature tending to be high when rainfall is low. One problem with that argument is that a lot of the recent empirical work shows temperature as being important, and not all of it uses datasets with high correlations between temperature and rainfall.

Usually I try to explain the various mechanisms that link temperature to yields. And these discussions can often lead to interesting studies about which mechanisms matter more, such as one paper we have coming out soon. But I never really delved into the reason that rainfall is given so much credit for good or bad years. One likely reason is that it's just a lot easier to see whether or not it rains than to detect a shift in temperatures. So people will tend to remember dry years more than hot years. But there's also some analysis that appears to support the rainfall hypothesis.

A lot of the work on US corn and weather traces back to Louis M. Thompson’s work 30 years ago. These were basically time series models at the state level, looking for instance at how Illinois yields changed over time in relation to weather. More recent work has updated this type of analysis, and emphasizes the role of July rainfall and temperature, but with a bigger role for rainfall. To recreate that type of analysis, I plot below detrended corn yields for Illinois (detrended by fitting a linear slope to represent gradual technology change, and then adjusting all years to 2006 technology) versus July rainfall (prec) and average maximum temperature (Tmax). I also plot prec and Tmax against each other. (Correlation coefficients are given in the bottom panels). Thanks to Wolfram for providing updated data.



You can see that yields are clearly low at low levels of rainfall, and that yields are also low at high Tmax. But you can also see that Tmax and rainfall have a strong negative correlation, which makes it hard to say whether Tmax, rainfall, some combination, or neither is the actual cause of yield loss. I also show three recent years in color (red = 2009, green = 2010, blue = 2011). What's mildly interesting about these years is that they don't follow the normal correlation between Tmax and rainfall: 2009 was especially cool but with medium rainfall, while 2010 and 2011 were both unusually warm for the given amount of rainfall.
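For anyone who wants to play along at home, here is a minimal sketch of that detrending step in R. This isn't the exact code behind the figure; I'm assuming a data frame `il` with one row per year and columns `year`, `yield`, `prec` (July rainfall), and `tmax` (July average maximum temperature), which are just placeholder names.

```r
# Minimal sketch, assuming an Illinois data frame `il` with columns
# year, yield, prec (July rainfall, mm), and tmax (July avg max temp, C).

# 1. Fit a linear time trend to represent gradual technology change
trend <- lm(yield ~ year, data = il)

# 2. Detrend: keep each year's residual, then add back the predicted
#    yield at 2006 technology so all years are on a common footing
il$yield_detrended <- residuals(trend) +
  predict(trend, newdata = data.frame(year = 2006))

# 3. Scatterplots and pairwise correlations, as in the figure above
pairs(il[, c("yield_detrended", "prec", "tmax")])
cor(il[, c("yield_detrended", "prec", "tmax")])
```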

The main point, though, is that the collinearity issue cuts both ways. You can't just decide it's rainfall and say that temperatures are less important, any more than you can decide it's all temperature. Many empirical studies try to move beyond these simple time series specifically because of the collinearity problem. One way to reduce collinearity between Tmax and rainfall is to restrict yourself to a narrow range of one of the variables, so that it is essentially held fixed while the other one varies. This is hard to do with a single time series that is only about 50 years long, because you quickly run into problems with small sample sizes.

But it’s easier to do this if we look at time series from lots of counties at the same time, or a so-called panel analysis. As a simple illustration, the plot below shows all points for 1950-2011 for counties in the “three I” states: Illinois, Iowa, and Indiana. The left-hand plot just shows the same scatter as before between Tmax and rainfall. Notice there is still a strong negative correlation. But we can now select only points that are within a narrow range of rainfall (shown as green points) or a narrow range of temperature (shown as red). Then we can take the red points and see how rainfall matters when holding temperature constant (middle panel). Or we can take the green points and see how temperature matters when holding precipitation constant (right panel). The black lines in the right two panels show local polynomial fits to the data (using loess in R).



What do we see? Well, at least for these particular places and values of Tmax and rainfall, there does not appear to be much effect of changing rainfall when July Tmax is constant (at around 30C). But temperature changes do appear to be important when rainfall is held constant (at around 100mm).
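Here's a rough sketch of what that subsetting looks like in R. Again, this is not the exact code behind the figure: I'm assuming a county-by-year data frame `panel` with the same detrended-yield, rainfall, and Tmax columns as before, and the window widths are just illustrative.

```r
# Rough sketch, assuming a county-by-year data frame `panel` with columns
# yield_detrended, prec (July rainfall, mm), and tmax (July avg max temp, C).

# Hold temperature roughly fixed near 30C, let rainfall vary (the "red" points)
fixed_t <- subset(panel, tmax > 29 & tmax < 31)
fit_p   <- loess(yield_detrended ~ prec, data = fixed_t)

# Hold rainfall roughly fixed near 100mm, let temperature vary (the "green" points)
fixed_p <- subset(panel, prec > 90 & prec < 110)
fit_t   <- loess(yield_detrended ~ tmax, data = fixed_p)

# Plot the loess fits over the raw points
plot(fixed_t$prec, fixed_t$yield_detrended, col = "red",
     xlab = "July rainfall (mm)", ylab = "Detrended yield")
lines(sort(fixed_t$prec), predict(fit_p)[order(fixed_t$prec)])

plot(fixed_p$tmax, fixed_p$yield_detrended, col = "green",
     xlab = "July Tmax (C)", ylab = "Detrended yield")
lines(sort(fixed_p$tmax), predict(fit_t)[order(fixed_p$tmax)])
```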

Now, there are obviously lots of other combinations we could try, and the point isn't that rainfall is always and completely unimportant. For example, if we hold Tmax constant at a higher level, where I'd expect rainfall to be more critical, we do in fact see a bigger effect of rainfall at low levels (see below). But if nothing else, it should be clear that a lot of the credit given to July rainfall for US corn is not necessarily well deserved. Coincidentally, the singers of "Blame It on the Rain" also got a lot of undeserved credit!

In future posts, I’ll try to get more into the reasons that temperature can dominate the effects of rainfall, even in a rainfed system. 


Wednesday, January 2, 2013

Some new - and not that hopeful - evidence on adaptation

Quantitative estimates of the potential impacts of future climate change are an important input to policy discussions, and researchers across a wide range of disciplines are starting to get in on the action.  Most impact studies proceed something like this:

  1. Choose an outcome and location of interest (say, corn yields in the US)
  2. Assemble some historical data and investigate how this outcome has responded to past changes in climate in that location
  3. Use these historical estimates to say something about what might happen in the future, given estimates of future changes in the climate variables you examined in (2). 
A key assumption in this approach is that past responses to climate variables (step 2) tell you something about how future populations might respond to similar changes (step 3).  But one problem is that there is often a mismatch between the types of climatic changes that are used to estimate responses in step 2 and the types of future changes in climate that we are particularly worried about in step 3.

For instance, the historical response of (say) corn yields to climate is often estimated using year-to-year variation in temperature (what we typically call "weather") -- e.g. these estimates ask, if temperatures at a given location were 1C hotter than average in a given year, how much lower were corn yields in that year?  But the future climate changes that we're mainly worried about are gradual changes in temperature and precipitation that will play out over many decades.  The worry is then that if farmers can recognize and adapt to these gradual changes, for instance by changing when they plant or switching the cultivar or crop they grow, and these adaptations offset some or all of the losses they otherwise would have experienced, then estimates of "short-run" responses to climate fluctuations might be a poor guide to the impacts of longer-run, more gradual changes.
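As a stylized illustration of what such a short-run estimate looks like (not the exact specification in any particular paper), a panel regression with county and year fixed effects identifies the temperature response from year-to-year weather fluctuations within a county. Here I'm assuming a county-by-year data frame `d` with columns `fips`, `year`, `yield`, `tavg`, and `prec` for the growing season; the names are placeholders.

```r
# Stylized short-run panel regression, assuming a county-by-year data frame
# `d` with fips (county), year, yield, and growing-season tavg and prec.
# County and year fixed effects absorb fixed local conditions and common
# technology trends, so the temperature coefficient is identified from
# year-to-year weather fluctuations within a county.
short_run <- lm(log(yield) ~ tavg + prec + I(prec^2) +
                  factor(fips) + factor(year), data = d)
summary(short_run)$coefficients["tavg", ]
```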

How can we know whether longer-run responses to gradual changes in climate differ from shorter-run responses to climate fluctuations?  In a new working paper, Kyle Emerick and I try to shed some light on this by exploiting the surprisingly large variation in recent longer-run trends in temperature and precipitation across US agricultural areas.  It turns out that some US counties have warmed up a bunch over the last 30 years, while others have actually cooled slightly.  Why this has happened is an active area of climate research, but it appears tied to some combination of aerosols and natural climate variability (e.g. see the new paper by Meehl et al, or google "U.S. warming hole" for more on this - probably better to have safe search on…).

Below is a plot of changes in average growing season temperature and precipitation between 1980 and 2000 for US counties east of the 100th meridian.  The color scale for each map is given by the histogram beneath it, and as you can see, some places have cooled almost half a degree C while others have warmed 1.5C, and some counties have seen precip fall by 40% while others have seen it rise by 40%.


Change in average growing season temperature, precipitation, and corn yield between 1980 and 2000.


What we do in the paper is estimate how yields of the main US crops (corn and soy) have responded to these gradual, multi-decade changes in climate.  We can then compare these longer-run responses to estimates of shorter-run responses to inter-annual fluctuations in temperature and precipitation.  The difference between these two responses is our quantitative estimate of "adaptation".  If farmers indeed have a lot of options available to them in the long run that are not available in the short run, then we would expect exposure to hot temperatures to be much less damaging in the long run than in the short run.
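For concreteness, here is a simplified sketch of what a long-difference estimate looks like, using the same assumed county-by-year data frame `d` as above. The endpoint windows around 1980 and 2000 and the simple specification are illustrative, not the exact choices in the paper.

```r
# Simplified long-difference sketch, assuming the same county-by-year data
# frame `d` with fips, year, yield, tavg, and prec as in the panel example.

# Average log yield and climate over windows around the two endpoints
avg_by_county <- function(dat) {
  dat$logy <- log(dat$yield)
  aggregate(cbind(logy, tavg, prec) ~ fips, data = dat, FUN = mean)
}
start <- avg_by_county(subset(d, year >= 1978 & year <= 1982))
end   <- avg_by_county(subset(d, year >= 1998 & year <= 2002))

# County-level changes between the two periods
ld <- merge(start, end, by = "fips", suffixes = c("_start", "_end"))
ld$d_logy <- ld$logy_end - ld$logy_start
ld$d_tavg <- ld$tavg_end - ld$tavg_start
ld$d_prec <- ld$prec_end - ld$prec_start

# Cross-section of changes: how did yields respond to the gradual,
# multi-decade changes in climate?
long_run <- lm(d_logy ~ d_tavg + d_prec, data = ld)

# The "adaptation" estimate is the gap between this temperature coefficient
# and the short-run (year-to-year) coefficient from the panel regression.
coef(long_run)["d_tavg"]
```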

We find very little evidence for longer-run adaptation to climate:  yield responses to longer-run increases in temperature are large, negative, and statistically indistinguishable from responses to shorter-run (annual) fluctuations in temperature.  This appears true for both corn and soy, and doesn't seem to depend much on the period over which we measure the changes or how we do the econometrics.

From a future impact perspective this is bad news, because it implies that farmers do not currently appear to have many options for mitigating damages from extreme heat exposure.  Indeed, we can generate impact estimates by combining our estimates of these recent longer-run responses with projections of future climate change, and our median projection is a roughly 15% decline in corn yields by 2050 -- almost on par with yield declines during the well-publicized 2012 heat wave/drought across the US.
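Mechanically, an impact projection like this just combines a response coefficient with a projected change in climate. A toy version, with made-up numbers rather than the estimates from the paper:

```r
# Purely illustrative back-of-envelope with hypothetical numbers (these are
# NOT the paper's estimates): combine a long-run log-yield sensitivity with
# a projected amount of growing-season warming.
beta_longrun <- -0.07   # hypothetical log-yield change per degree C
warming_2050 <- 2.0     # hypothetical projected warming by 2050 (C)
(exp(beta_longrun * warming_2050) - 1) * 100   # approx. percent yield change
```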

Nevertheless, there are some potential concerns with our approach and with our impact estimates.  One is that we are focusing on yield impacts, and it could be the case that farmers are making other adjustments that a narrow focus on yield will not pick up -- for instance they could be switching to other crops that we are not looking at, or they could be getting out of agriculture altogether, planting golf courses instead of corn.  Were this the case, though, we would expect to see substantial declines in area planted to corn in the counties that warmed dramatically, and we find little evidence that this is the case.  

Another concern is that maybe farmers have simply not realized that temperatures are changing, and that if future changes are particularly large or salient, they will be recognized and quickly responded to.  We have a couple of responses to this.  The first is that recognition is an important part of adaptation:  because warming over the past few decades in many counties is roughly on par with warming expected over the next 3-4 decades, it's not immediately clear how near-term future changes in climate will somehow be way more obvious than past changes.  Second, we show in the paper that some factors that you might think would be correlated with recognition of or belief in recent climate change do not predict how counties responded to recent warming.  In particular, neither counties that are better educated (whose residents might have better access to information on climate change, be loyal readers of G-FEED, etc.) nor counties that voted more Democratic in the 2000 Presidential election (whose residents, studies show, are more likely to believe in climate change) were any less harmed by recent warming than less educated or more Republican counties.  These are obviously not perfect tests of "recognition", but they were the best we could think of (other suggestions welcome!).
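A test along these lines can be run as an interaction regression on the long-difference data. The sketch below assumes the county-level data frame `ld` from the earlier example, merged with two hypothetical covariates, `college_share` and `dem_share`, standing in for education and 2000 Democratic vote share; it is not the exact specification in the paper.

```r
# Sketch of a "recognition" test, assuming the long-difference data frame `ld`
# from above plus hypothetical county covariates college_share and dem_share.
# If better-informed counties adapted more, the interaction terms should
# offset part of the main effect of warming on yields.
het <- lm(d_logy ~ d_tavg * college_share + d_tavg * dem_share + d_prec,
          data = ld)
summary(het)$coefficients[c("d_tavg:college_share", "d_tavg:dem_share"), ]
```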

A final concern is that we are again relying on historical relationships to make claims about future impacts.  That is, although we now use historical longer-run responses (instead of short-run responses) to predict impacts under future climate changes, we are still assuming that past responses to climate variables are a guide to how future populations will respond.  While in principle you could specify any relationship you wanted between past and future responses (e.g. future responses to 1C will be half as large as past responses), our "business-as-usual" assumption seems the most relevant to the policy question at hand:  what's a reasonable estimate of what might happen in the absence of new investments in adaptation?  To us, our findings of substantial losses in the face of past long-run changes in climate suggest that it would be dangerous to assume that farmers will easily adapt to future changes.  More likely, adaptation is going to require investment, and without these investments the summer of 2012 is not a bad picture of what the "new normal" might look like.