Tuesday, December 31, 2013

Massetti et al. - Part 1 of 3: Convergence in the Effect of Warming on US Agriculture

Emanuele Massetti has posted a new paper (joint with Robert Mendelsohn and Shun Chonabayashi) that takes another look at the best climate predictor of farmland prices in the United States.  He'll present it at the ASSA meetings in Philadelphia - I have seen him present the paper at the 2013 NBER spring EEE meeting and at the 2013 AERE conference, and wanted to provide a few discussion points for people interested in the material.

A short background: several articles by contributors to this blog have found that temperature extremes are crucial for predicting agricultural output. To name a few: Maximilian Auffhammer and coauthors have shown that rice yields have opposite sensitivities to minimum and maximum temperature, and that this relationship can differ over the growing season (paper). David Lobell and coauthors found a highly nonlinear relationship between corn yields and temperature using data from field trials in Africa (paper), which is comparable to what Michael Roberts and I have found in the United States (paper).  The same relationship was observed by Marshall Burke and Kyle Emerick when looking at yield trends and climate trends over the last three decades (paper).

Massetti et al. argue that average temperatures are a better predictor of farmland values than nonlinear transformations like degree days.  They exclusively rely on cross-sectional regressions (in contrast to the aforementioned panel regressions), re-examining earlier work by Michael Hanemann, Tony Fisher, and me in which we found that degree days are better and more robust predictors of farmland values than average temperature (paper).

Before looking into the differences between the studies, it might be worthwhile to emphasize an important convergence in the sign and magnitude of the predicted effect of a rise in temperature on US agriculture.  There has been an active debate about whether a warmer climate would be beneficial or detrimental. My coauthors and I have usually been on the more pessimistic side, i.e., arguing that warming would be harmful. For example, +2C and +4C increases predicted, respectively, 10.5 and 31.6 percent decreases in farmland values in the cross-section (short-term B1 and long-term B2 scenarios in Table 5) and 14.9 and 35.3 percent decreases in corn yields in the panel regression (Appendix Table A5).

Robert Mendelsohn and various coauthors have consistently found the opposite, and the effects have gotten progressively more positive over time.  For example, their initial innovative AER paper that pioneered the cross-sectional approach in 1994 argued that "[...] our projections suggest that global warming may be slightly beneficial to American agriculture." Their 1999 book added climate variation as an additional control and argued that "Including climate variation suggests that small amount of warming are beneficial," even in the cropland model.  A follow-up paper in 2003 further controls for irrigation and finds that "The beneficial effect of warmer temperatures increases slightly when water availability is included in the model."

Their latest paper finds results that are consistent with our earlier findings, i.e., a +2C warming predicts decreases in farmland values of 20-27 percent (bottom of Table 1), while a +4C warming decreases farmland values by 39-49 percent. These numbers are even more negative than our earlier findings and are rather unaffected by whether average temperatures or degree days are used in the model.  While the authors go on to argue that average temperatures are better than degree days (more on this in future posts), this does not change the predicted negative effect of warming: it is harmful.

Monday, December 23, 2013

The Red Queen strikes again

The weekend before last I attended an interesting CIMMYT meeting on remote sensing in Mexico City. Lots of cool stuff going on in remote sensing for agriculture, including use of drones in breeding programs, and near-term prospects for low-cost or free satellite data with high spatial and temporal resolution. But one of the most interesting parts of the meeting for me was catching up with Dave Hodson, a colleague who used to work in CIMMYT’s GIS group and now works full time on monitoring wheat rusts. He’s part of the Borlaug Global Rust Initiative (BGRI) which was started in the wake of the discovery of the UG99 strain of stem rust in 1999.

A quick review: rusts are a nightmare for wheat growers and breeders. They can decimate a wheat crop and can spread incredibly quickly and far. There are three main types of rust: stem rust, yellow (or stripe) rust, and leaf rust. One of the main precursors of the Green Revolution was improving rust resistance of wheat varieties, part of Norman Borlaug’s claim to fame. Breeders must continually make sure their varieties are not too susceptible to rust, and since rusts evolve over time it is often a race just to avoid going backward. That’s why it is often called Red Queen breeding, named after the scene in Through the Looking Glass where Alice learns she has to run just to stand still.

The same rust resistance genes were successful for a very long time, until the UG99 strain came along and proved to be a major problem for nearly all widely grown varieties. In stepped scores of wheat scientists, who quickly developed new resistant varieties that have since been widely adopted. With the help of Borlaug, and the Gates Foundation, the BGRI was set up to maintain an internationally coordinated system to monitor and respond to any future rust outbreaks.

Ok, now to the interesting part. A few weeks ago, surveyors in Ethiopia uncovered a sizable amount of wheat area (~10,000 ha) that had been wiped out by stem rust. These varieties were resistant to the known UG99 strains, so it seems that a new strain has emerged. It’s too early to know what this will imply, but an update was posted today on their website, including the picture of an affected field below.



As scary as rust is, the news isn’t all bad. The systems put in place by BGRI have already had several successes, though avoiding something bad happening rarely makes the news. For example, a few years ago, in late 2010, there was a big outbreak of yellow rust in Ethiopia. Roughly a third of the entire wheat crop was lost. This year, there were conditions favorable for yellow rust, and heavy incidence was spotted. But it was spotted early, and fungicides were used to contain the outbreak, and impacts were very small. (Why rusts seem to be happening more often is a topic of debate, and some would blame climate change, but that's a topic for another day).


As this new strain of UG99 emerges, you can see the capacity of BGRI and its partners spring into action. Samples of the spores have already been sent to labs around the world to assess what exactly they are dealing with. Fungicides are being targeted to the areas with active outbreaks. Modelers are looking at potential areas where the spores could spread in the near term, as shown in the figure below from their update. (Now is sowing time throughout much of the Middle East and West and South Asia, so spores reaching there could have big impacts). And breeders will likely soon be sending lines to Ethiopia for screening.


To me this is a reminder both of how many things can go wrong when trying to produce food, and of how many hard-working, smart people help bring resilience to modern agriculture. The next time you hear someone talking about “resilience” of agriculture as if it were solely the result of what particular mix of crops or soil biota are in a particular field, you should think about people like Dave Hodson and his colleagues. The resilience of modern agriculture, for better or worse, rests on the tireless but rarely celebrated work of people like them.

Thursday, December 19, 2013

The three wise men (of agriculture)

There’s a new book coming out soon that should be of interest to many readers of this blog. It’s written by Tony Fischer, Derek Byerlee, and Greg Edmeades, and called “Crop yields and global food security: will yield increases continue to feed the world?” At 550 pages, it’s not a quick read, but I found it incredibly well done and worthwhile. I’m not sure yet when the public release will be, but I’m told it will be a free download in early 2014 at the Australian Centre for International Agricultural Research website.

The book starts by laying out the premise that, in order to achieve improvements in global food security without massive land use change, yields of major crops need to increase by about 1.3% of current levels per year for the next 20 years. They explain very clearly how they arrive at this number given trends in demand, with a nice comparison with other estimates. The rest of the book is then roughly in two parts. First is a detailed tour of the world’s cropping systems to assess the progress over the last 20 years, and second is a discussion of the prospects for and changes needed to achieve the target yield gains.

For some, the scope of the book may be too narrow, and the authors fully recognize that yield progress is not alone enough to achieve food security. But for me, the depth is a welcome change from a lot of more superficial studies of yield changes around the world. These are three men who understand the different aspects of agriculture better than just about anyone.

The book is not just a review of available information; the first part presents a lot of new analysis as well. Tony Fischer has dug into the available data on farm and experimental plot yields in each region, with his keen eye for what constitutes a credible study or yield potential estimate (think Warren Buffett reading a financial prospectus). This effort results in estimates of yield potential and the yield gap (the difference between potential and farm yields) by mega-environment, and their linear rates of change for the past 20 years. The authors then express all trends as a percentage of trend yield in 2010, which makes it much easier to compare estimates from various studies that often report in kg/ha or bushels/acre or some other unit.
There are lots of insights in the book, but here is a sample of three that seemed noteworthy:

  1. Yield potential continues to exhibit significant progress for all major crops in nearly all of their mega-environments. This is counter to many claims of stagnating progress in yield potential.
  2. Yield gaps for all major crops are declining at the global scale, and these trends can account for roughly half of farm yield increases globally since 1990. But there’s a lot of variation. I thought it interesting, for example, that maize gaps are declining much faster in regions that have adopted GM varieties (US, Brazil, Argentina) than regions that haven’t (Europe, China). Of course, this is just a simple correlation, and the authors don’t attempt to explain any differences in yield gap trends.
  3. Yield gaps for soy and wheat are both quite small at the global level. Soy in particular has narrowed yield gaps very quickly, and in all major producers it is now at ~30%, which is the lower limit of what is deemed economically feasible with today’s technology. One implication of this is that yield potential increases in soy are especially important. Another is that yield growth in soy could be set to slow, even as demand continues to rise the most of any major crop, setting up a scenario for even more rapid soy area expansion.

Any of these three points could have made for an important paper on their own, and there are others in the book as well. But to keep this post at least slightly shorter than the actual book, I won’t go on about the details. One more general point, though.  The last few years of high food prices have brought a flurry of interest to the type of material covered in this book. For those of us who think issues of food production are important in the long-term, this is generally a welcome change. But one downside is that the attention attracts all sorts of characters who like to write and say things to get attention, but don’t really know much about agriculture or food security. Sometimes they oversimplify or exaggerate. Sometimes they claim as new something that was known long ago. This book is a good example of the complete opposite of that – three very knowledgeable and insightful people homing in on the critical questions and taking an unbiased look at the evidence.


(The downside is that it is definitely not a light and breezy read. I assigned parts of it to my undergrad class, and they commented on how technical and “dense” it was. For those looking for a lighter read, I am nearly done with Howard Buffett’s “40 Chances”. I was really impressed with that one as well – lots of interesting anecdotes and lessons from his journeys around the world to understand food security. It’s encouraging that a major philanthropist has such a good grasp of the issues and possible solutions.)

Tuesday, December 17, 2013

Yet another way of estimating the damaging effects of extreme heat on yields

Following up on Max's post on the damaging effects of extreme heat, here is yet another way of looking at it.  So far, my coauthor Michael Roberts and I have estimated three models that link yields to temperature:

  1. An eighth-order polynomial in temperature
  2. A step function (dummy intervals for temperature ranges)
  3. A piecewise linear function of temperature
Another semi-parametric way to estimate this is to use splines in temperature.  Specifically, I used the daily minimum and maximum temperature data we have on a 2.5 x 2.5 mile grid, fit a sinusoidal curve between the minimum and maximum temperature, and then estimated the temperature at each half-hour interval.  The spline is evaluated for each temperature reading and summed over all half-hour intervals and days of the growing season (March-August).
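As a minimal sketch of that construction in R (the knot locations, the phase of the sinusoid, and the variable names below are illustrative placeholders, not the exact values used in our data):

library(splines)

# one hypothetical day at one grid cell
tmin <- 14; tmax <- 31
hrs  <- seq(0, 23.5, by = 0.5)                    # 48 half-hour readings
temp <- (tmin + tmax)/2 +
        (tmax - tmin)/2 * sin(2*pi*(hrs - 9)/24)  # assumed mid-afternoon peak

# restricted cubic (natural) spline basis evaluated at each half-hourly temperature
knots <- c(0, 5, 10, 15, 20, 25, 30, 35)          # illustrative knot placement
basis <- ns(temp, knots = knots[2:7], Boundary.knots = knots[c(1, 8)])

# daily exposure = column sums of the basis; repeating this for every day in
# March-August and summing gives the season-total spline regressors
exposure <- colSums(basis)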

So what is it good for? Well, it's smoother than the dummy intervals (which by definition assume constant marginal impact within each interval), yet more flexible than the 8th-order polynomial, and doesn't require different bounds for different crops like the piecewise linear function.

Here's the result for corn (the 8 spline knots are shown as red dashed lines), normalized relative to a temperature of 0 degrees Celsius.

The regressions have the same specification as our previous paper, i.e., they regress log yields on the flexible temperature measure, a quadratic in season-total precipitation, and state-specific quadratic time trends, as well as county fixed effects, for 1950-2011.
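As a sketch of that specification (assuming a hypothetical county-year data frame d with log yield, season-total spline exposures s1 through s7 as constructed above, season-total precipitation prec, year, state, and county FIPS codes; the lfe package is one convenient way to absorb the county effects):

library(lfe)
fit <- felm(log_yield ~ s1 + s2 + s3 + s4 + s5 + s6 + s7 +
              prec + I(prec^2) +
              state:year + state:I(year^2) |   # state-specific quadratic trends
              fips,                            # county fixed effects
            data = d)
summary(fit)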

Here's the graph for soybeans:

A few noteworthy results: the slope of the decline is similar to what we found before, and a linear approximation seems appropriate (restricted cubic splines are forced to be linear above the highest knot, but not below). In principle, yields of any type of crop could be regressed on these splines.

Sunday, December 1, 2013

It's not the model. Really it isn't

There is a most lively discussion as to whether climate change will have significant negative impacts on US agriculture. There are a number of papers by my co-bloggers (I am not worthy!) showing that extreme heat days will have significant negative impacts on yields for all major crops except for rice. I will talk about rice another day. For the main crop growing regions in the US, climate models project a significant increase in these extreme heat days. This will likely, short of miraculous adaptation, lead to significant yield losses. To put it simply, this part of the literature has shown a sensitivity of yields to extreme temperatures and linked it with projected increases in these extreme temperature events.

On the other hand, there are a number of papers which argue that climate change will have no significant impacts on US agriculture. Seo, in a recent issue of Climatic Change, essentially argues that the literature projecting big impacts confuses weather ("panel models") and climate ("cross sectional models"), and that using weather instead of climate as a source of identification leads to big impacts. As Wolfram Schlenker and I note in a comment, this is simply not true, for five reasons:

1) Even the very limited number of papers he cites, which use weather as the source of variation to identify a sensitivity, clearly state what this means when interpreting the resulting coefficients. There is no confusion here.

2) He fails to discuss the fact that the bias from adaptation when using weather as a source of variation could go in either direction.

3) It is simply not true that all panel models find big impacts and all Ricardian cross sectional models find small impacts. There are big and small impacts to be found in both camps.

4) There is recent work by Burke and Emerick, which uses the fixed effects identification strategy with climate on the right hand side! I wish I had thought of that. They can compare their "long differences" (a.k.a. climate) sensitivity results to more traditional weather sensitivity results and find no significant difference between the two. This will either enrage both camps or make them very happy, since it suggests that the difference between sources of variation (weather versus climate) in this setting is not huge.

5) The big differences in studies may finally not be due to differences in sensitivities, but differences in the climate model used. Burke et al. point out that uncertainty over future climate is a major driver of variation in impacts. We refer the reader to this excellent study, which discusses a much broader universe of studies and very carefully discusses the sources of uncertainty in impacts estimates.

We are of the humble opinion that the most carefully done studies using both identification strategies yield similar estimates for the Eastern United States.


Wednesday, November 20, 2013

Fixed Effects Infatuation

The fashionable thing to do in applied econometrics, going on 15 years or so, is to find a gigantic panel data set, come up with a cute question about whether some variable x causes another variable y, and test this hypothesis by running a regression of y on x plus a huge number of fixed effects to control for "unobserved heterogeneity" or deal with "omitted variable bias."  I've done a fair amount of work like this myself. The standard model is:

y_i,t = β x_i,t + a_i + b_t + u_i,t

where a_i are fixed effects that span the cross section, b_t are fixed effects that span the time series, and u_i,t is the model error, which we hope is not associated with the causal variable x_i,t  conditional on a_i and b_t.

If you're really clever, you can find geographic or other kinds of groupings of individuals, like counties, and include group-by-year fixed effects:

y_i,t = β x_i,t + a_i + b_g,t + u_i,t
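In R, on a hypothetical panel d with columns y, x, id, year, and group, the two specifications are just (dummy-variable versions shown for clarity; for large panels one would typically absorb the fixed effects instead):

m1 <- lm(y ~ x + factor(id) + factor(year), data = d)   # unit and year fixed effects
d$gy <- interaction(d$group, d$year)                    # group-by-year cells
m2 <- lm(y ~ x + factor(id) + gy, data = d)             # unit and group-by-year fixed effects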

The generalizable point of my lengthy post the other day on storage and agricultural impacts of climate change was that this approach, while useful in some contexts, can have some big drawbacks.  Increasingly, I fear applied econometricians misuse it.  They have found their hammer and now everything is a nail.

What's wrong with fixed effects? 

A practical problem with fixed effects gone wild is that they generally purge the data set of most variation.  This may be useful if you hope to isolate some interesting localized variation that you can argue is exogenous.  But if the most interesting variation derives from a broader phenomenon, then there may be too little variation left over to identify an interesting effect.

A corollary to this point is that fixed effects tend to exaggerate the attenuation bias from measurement error, since measurement errors comprise a much larger share of the overall variation in x after the fixed effects have been removed.
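A quick simulation illustrates the point (the numbers here are made up for illustration): when most of the true variation in x is cross-sectional, demeaning leaves measurement error as a much bigger share of what remains, and the within estimate gets hammered.

set.seed(1)
n <- 1000; t <- 5
id <- rep(1:n, each = t)
a  <- rep(rnorm(n, sd = 2), each = t)      # unit effects, correlated with x
x  <- a + rnorm(n*t, sd = 0.5)             # most variation in x is cross-sectional
y  <- x + a + rnorm(n*t, sd = 0.5)         # true slope on x is 1
xm <- x + rnorm(n*t, sd = 0.5)             # x observed with classical measurement error

within <- function(v) v - ave(v, id)       # remove unit means (fixed effects)
var(x)/var(xm)                             # ~0.94: noise is a small share of raw variation
var(within(x))/var(within(xm))             # ~0.5: noise dominates once means are removed
coef(lm(within(y) ~ within(xm)))[2]        # within estimate around 0.5, badly attenuated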

But there is a more fundamental problem.  To see this, take a step back and think generically about economics.  In economics, almost everything affects everything else, via prices and other kinds of costs and benefits.  Micro incentives affect choices, and those choices add up to affect prices, costs and benefits more broadly, and thus help to organize the ordinary business of life.  That's the essence of Adam Smith's "invisible hand," supply and demand, equilibrium theory, etc.  That insight, a unifying theoretical theme if there is one in economics, implies a fundamental connectedness of human activities over time and space.   It's not just that there are unobserved correlated factors; everything literally affects everything else.  On some level it's what connects us to ecologists, although some ecologists may be loath to admit an affinity with economics.

In contrast to the nature of economics, regression with fixed effects is a tool designed for experiments with repeated measures.  Heterogeneous observational units get different treatments, and they might be mutually affected by some outside factor, but the observational units don't affect each other.  They are, by assumption, siloed, at least with respect to consequences of the treatment (whatever your x is).  This design doesn't seem well suited to many kinds of observational data.

I'll put it another way.  Suppose your (hopefully) exogenous variable of choice is x, and x causes z, and then both x and z affect y.  Further, suppose the effects of x on z spill outside of the confines of your fixed-effects units.  Even if fixed effects don't purge all the variation in x, they may purge much of the path going from x to z and z to y, thereby biasing the reduced form link between x and y. In other words, fixed effects are endogenous.

None of this is to say that fixed effects, with careful accounting for correlated unobserved factors, cannot be very useful in many settings.  But the inferences we draw may be very limited.  And without care, we may draw conclusions that are very misleading.

Monday, November 11, 2013

Can crop rotations cure dead zones?

It is now fairly well documented that much of the water quality problems leading to the infamous "dead zone" in the Gulf of Mexico (pictured above) come from fertilizer applications on corn. Fertilizer on corn is probably a big part of similar challenges in the Chesapeake Bay and Great Lakes.

This is a tough problem.  The Pigouvian solution---taxing fertilizer runoff, or possibly just fertilizer---would help.  But we can't forget that fertilizer is the main source of large crop productivity gains over the last 75 years, gains that have fed the world.  It's hard to see how even a large fertilizer tax would much reduce fertilizer applications on any given acre of corn.

However, one way to boost crop yields and reduce fertilizer applications is to rotate crops. Corn-soybean rotations are the most common, as soybean fixes nitrogen in the soil, which reduces the need for applications on subsequent corn plantings.  Rotation also reduces pest problems.  The yield boost on both crops is remarkable.  More rotation would mean less corn, and less fertilizer applied to remaining corn, at least in comparison to planting corn after corn, which still happens a fair amount.

I've got a new paper (actually, an old one, newly revised), coauthored with Mike Livingston of USDA and Yue Zhang, a graduate student at NCSU, that might provide a useful take on this issue.  This paper has taken forever.  We've solved a fairly complex stochastic dynamic model that takes the variability of prices, yields and agronomic benefits of rotation into account. It's calibrated using the autoregressive properties of past prices and experimental plot data.  All of these stochastic dynamics can matter for rotations. John Rust once told me that Bellman always thought crop rotations would be a great application for his recursive method of solving dynamic problems.
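To give a flavor of the recursion (a drastically simplified sketch with made-up numbers, constant prices, and none of the stochastics in the actual paper), the Bellman problem over last year's crop can be solved by value iteration in a few lines of R:

crops <- c("corn", "soy")
yield <- c(corn = 180, soy = 55)       # bu/acre, illustrative
boost <- 0.10                          # assumed yield boost from rotating
cost  <- c(corn = 450, soy = 250)      # $/acre, illustrative
price <- c(corn = 4.5, soy = 11)       # $/bu, illustrative
beta  <- 0.95                          # discount factor

profit <- function(plant, prev)
  price[plant] * yield[plant] * ifelse(plant != prev, 1 + boost, 1) - cost[plant]

V <- c(corn = 0, soy = 0)              # value as a function of last year's crop
for (i in 1:1000) {
  Vnew <- sapply(crops, function(prev)
    max(sapply(crops, function(plant) profit(plant, prev) + beta * V[plant])))
  if (max(abs(Vnew - V)) < 1e-8) break
  V <- Vnew
}
# optimal planting decision given last year's crop (rotation, with these numbers)
sapply(crops, function(prev)
  crops[which.max(sapply(crops, function(plant) profit(plant, prev) + beta * V[plant]))])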

Here's the gist of what we found:

Always rotating, regardless of prices, is close to optimal, even though economically optimal planting may rotate much less frequently.  One implication is that reduced corn monoculture and lower fertilizer application rates might be achieved with modest incentive payments of $4 per acre or less, and quite possibly less than $1 per acre.

In the past I've been skeptical that even a high fertilizer tax could have much influence on fertilizer use. But given low-cost substitutes like rotation, perhaps it wouldn't cost as much as some think to make substantial improvements in water quality.

Nathan Hendricks and coauthors have a somewhat different approach on the same issue (also see this paper).  It's hard to compare our models, but I gather they are saying roughly similar things.

Friday, November 8, 2013

More fun with MARS

(But not as much fun as watching Stanford dominate Oregon last night).

In a recent post I discussed the potential of multivariate adaptive regression splines (MARS) for crop analysis, particularly because they offer a simple way of dealing with asymmetric and nonlinear relationships. Here I continue from where I left off, so see previous post first if you haven’t already.

Using the APSIM simulations (for a single site) to train MARS resulted in the selection of four variables. One of them was related to radiation, which we don’t have good data on, so here I will just take three of them, which were related to: July Tmax, May-August Tmax, and May-August Precipitation. Now, the key point is that we are not using those variables as the predictors themselves, but instead using hinge functions based on them. The figure below shows specifically what thresholds I am using (based on the MARS results from the previous post) to define the basis hinge functions.



I then compute these predictor values for each county-year observation in a panel dataset of US corn yields, then subtract county means from all variables (equivalent to introducing county fixed effects), and fit three different regression models:

Model 1: Just quadratic year trends (log(Yield) ~ year + year^2). This serves as a reference “no-weather” model.
Model 2: log(Yield) ~  year + year^2 + GDD  + EDD + prec + prec^2. This model adds the predictors we normally use based on Wolfram and Mike’s 2009 paper, with GDD and EDD meaning growing degree days between 8 and 29 °C and extreme degree days (above 29 °C). Note these measures rely on daily Tmin and Tmax data to compute the degree days.
Model 3: log(Yield) ~  year + year^2 + the three predictors shown in the figure above. Note these are based only on monthly average Tmax or total precipitation.
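For concreteness, here is roughly how the hinge predictors and Model 3 could be constructed in R (the thresholds below are placeholders standing in for the values in the figure, and d is a hypothetical county-year panel with log yield, the monthly/seasonal weather variables, year, and county FIPS codes):

hinge_up   <- function(x, k) pmax(0, x - k)   # zero below k, rising above
hinge_down <- function(x, k) pmax(0, k - x)   # rising below k, zero above

d$h1 <- hinge_up(d$tmax_jul, 30)              # July Tmax hinge (placeholder threshold)
d$h2 <- hinge_up(d$tmax_mjja, 26)             # May-Aug Tmax hinge (placeholder threshold)
d$h3 <- hinge_down(d$prec_mjja, 450)          # May-Aug precipitation hinge (placeholder threshold)

demean <- function(v) v - ave(v, d$fips)      # subtract county means (county fixed effects)
m3 <- lm(demean(log_yield) ~ year + I(year^2) +
           demean(h1) + demean(h2) + demean(h3), data = d)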

The table below shows the calibration error as well as the mean out-of-sample error for each model. What’s interesting here is that the model with the three hinge functions performs just as well as (actually even a little better than) the one based on degree day calculations. This is particularly surprising since the hinge functions (1) use only monthly data and (2) were derived from simulations at a single site in Iowa. Apparently they are representative enough to result in a pretty good model for the entire rainfed Corn Belt.

Model   Calibration R2   Avg. RMSE (calibration)   Avg. RMSE (out-of-sample, 500 runs)   % reduction in out-of-sample error
1       0.59             0.270                     0.285                                 --
2       0.66             0.241                     0.259                                 8.9
3*      0.68             0.235                     0.254                                 10.7

*For those interested, the coefficients on the three hinge terms are -0.074, -0.0052, and -0.061, respectively.

The take home here for me is that even a few predictors based on monthly data can tell you a lot about crop yields, BUT it’s important to account for asymmetries. Hinge functions let you do that, and process-based crop models can help identify the right hinge functions (although there are probably other ways to do that too).

So I think this is overall a promising approach – one could use selected crop model simulations from around the world, such as those out of AgMIP, to identify hinge functions for different cropping systems, and then use these to build robust and simple empirical models for actual yields. Alas, I probably won’t have time to develop it much in the foreseeable future, but hopefully this post will inspire something.

Monday, November 4, 2013

Weather, storage and an old climate impact debate.

This somewhat technical post is a belated followup to a comment I wrote with Tony Fisher, Michael Hanemann and Wolfram Schlenker, which was finally published last year in the American Economic Review.  I probably should have done this a long time ago, but I needed to do a little programming.  And I've basically been slammed nonstop.

First the back story:  The comment re-examines a paper by Deschênes and Greenstone (DG) that supposedly estimates a lower bound on the effects of climate change by relating county-level farm profits to weather.  They argue that year-to-year variation in weather is random---a fair proposition---and control for unobserved differences across counties using fixed effects.  This is all pretty standard technique.

The overarching argument was that with climate change, farmers could adapt (adjust their farming practices) in ways they cannot with weather, so the climate effect on farm profits would be more favorable than their estimated weather effect.

Now, bad physical outcomes in agriculture can actually be good for farmers' profits, since demand for most agricultural commodities is pretty steep: prices go up as quantities go down.  So, to control for the price effects they include year fixed effects.  And since farmers grow different crops in different parts of the country and there can be local price anomalies, they go further and use state-by-year fixed effects so as to squarely focus on quantity effects in all locations.

Our comment pointed out a few problems:  (1) there were some data errors, like missing temperature data apparently coded with zeros, and much of the Midwest and most of Iowa dropped from the sample without explanation; (2) in making climate predictions they applied state-level estimates to county-level baseline coefficients, in effect making climate predictions that regress to the state mean (e.g., Death Valley and Mt. Whitney have different baselines but the same future); (3) all those fixed effects wash out over 99 percent of weather variation, leaving only data errors for estimation; (4) the standard errors didn't appropriately account for the panel nature of the spatially correlated errors.

These data and econometric issues got the most attention.  Correct these things and the results change a lot.  See the comment for details.

But, to our minds, there is a deeper problem with the whole approach.  Their measure of profits was really no such thing, at least not in an economic sense: it was reported sales minus a crude estimate of current expenditures.  The critical thing here is that farmers often do not sell what they produce.  About half the country's grain inventories are held on farm.  Farms also hold inventory in the form of capital and livestock, which can be held, divested or slaughtered.  Thus, effects of weather in one year may not show up in profits measured in that year.  And since inventories tend to be accumulated in plentiful times and divested in bad times, these inventory adjustments are going to be correlated with the weather and cause bias.

Although DG did not consider this point originally, they admitted it was a good one, but argued they had a simple solution: just include the lags of weather in the regression. When they attempted this, they found lagged weather was not significant, and thus that this issue was unimportant.  This argument is presented in their reply to our comment.

We were skeptical about their proposed solution to the storage issue.  And so, one day long ago, I proposed to Michael Greenstone that we test his proposed solution. We could solve a competitive storage model, assume farmers store as a competitive market would, and then simulate prices and quantities that vary randomly with the weather.  Then we could regress sales (consumption times price) against our constructed weather and lags of weather, plus price controls. If the lags worked in this instance, where we knew the underlying physical structure, then they might work in reality.

Greenstone didn't like this idea, and we had limited space in the comment, so the storage stuff took a minimalist back seat. Hence this belated post.

So I recently coded a toy storage model in R, which is nice because anyone can download and run this thing  (R is free).  Also, this was part of a problem set I gave to my PhD students, so I had to do it anyway.

Here's the basic set up:

y    is production which varies randomly (like the weather).
q    is consumption, or what's clearly sold in a year.
p    is the market price, which varies inversely with q (the demand curve)
z    is the amount of the commodity on hand (y plus carryover from last year).

The point of the model is to figure out how much production to put in or take out of storage.  This requires numerical analysis (thus, the R code).  Dynamic equilibrium occurs when there is no arbitrage: where it's impossible to make money by storing more or storing less.

Once we've solved the model, which basically gives q and p as functions of z, we can simulate y with random draws and develop a path of q and p.  I chose a demand curve, interest rate and storage cost that can give rise to a fair amount of price variability and autocorrelation, which happens to fit the facts.  The code is here.
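For readers who want something self-contained, here is a bare-bones sketch of the same idea (a toy version of my own, not the code linked above): linear inverse demand, i.i.d. production, function iteration on the price function, and then a simulated path of q and p.

set.seed(1)
a <- 200; b <- 2                    # inverse demand: p = a - b*q
r <- 0.05; k <- 2                   # interest rate and per-unit storage cost
delta <- 1/(1 + r)
ybar <- 50; ysd <- 10               # mean and sd of production y
ynodes <- qnorm((1:20 - 0.5)/20, ybar, ysd)   # nodes for approximating E[.]

zgrid <- seq(30, 160, length.out = 200)       # grid for supply on hand z
pfun  <- pmax(a - b*zgrid, 0)                 # initial guess: price if everything is consumed

for (it in 1:300) {
  Ep <- function(s) mean(approx(zgrid, pfun, xout = ynodes + s, rule = 2)$y)
  pnew <- sapply(zgrid, function(z) {
    p0 <- a - b*z                                  # price if nothing is stored
    if (delta*Ep(0) - k <= p0) return(max(p0, 0))  # storing the marginal unit loses money
    # otherwise store s so that today's price equals the discounted expected price net of cost
    s <- uniroot(function(s) (a - b*(z - s)) - (delta*Ep(s) - k), c(0, z))$root
    a - b*(z - s)
  })
  if (max(abs(pnew - pfun)) < 1e-6) break
  pfun <- pnew
}

# simulate a path of y, q and p under the solved pricing/storage rule
nT <- 500; stock <- 0
y <- rnorm(nT, ybar, ysd); p <- q <- numeric(nT)
for (t in 1:nT) {
  z     <- y[t] + stock
  p[t]  <- approx(zgrid, pfun, xout = z, rule = 2)$y
  q[t]  <- min(z, (a - p[t])/b)     # consumption implied by inverse demand
  stock <- z - q[t]                 # carryover into next period
}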

Now, given our simulated y, q and p, we might estimate:

(1)   q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + error

(the ... means additional lags, as many as you like.  I use five.)

This expression makes sense to me, and might have been what DG had in mind: quantity in any one year is a function of this year's weather and a reasonable number of past years, all of which affect today's output via storage.  For the regression to fully capture the true effect of weather, the sum of the b# coefficients should be one.

Alternatively we might estimate:

(2)   p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + error

This is almost like DG's profit regression, as costs of production in this toy model are zero, so "profit" is just total sales.   But DG wanted to control for price effects in order to account for the pure weather effect on quantity, since in the above relationship the sum of the b# coefficients is likely negative.  So, to do something akin to DG within the context of this toy model we need to control for price.  This might be something like:

(3)  p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + c p_t + error

Or, if you want to be a little more careful, recognizing there is a nonlinear relationship, we might have a more flexible control for p_t, and use a polynomial. Note that we cannot use fixed effects like DG because this isn't a panel.  I'll come back to this later.  In any case, with better controls we get:
 
(4)   p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + c1 p_t  + c2 p_t^2 + c3 p_t^3 +  error
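Given the simulated series (assuming y, q and p are vectors like the ones produced by the toy model above), regressions (1)-(4) are one-liners in R:

lag_k <- function(x, k) c(rep(NA, k), head(x, -k))   # simple lag helper
dat <- data.frame(q = q, p = p, y = y,
                  y1 = lag_k(y, 1), y2 = lag_k(y, 2), y3 = lag_k(y, 3),
                  y4 = lag_k(y, 4), y5 = lag_k(y, 5))

eq1 <- lm(q      ~ y + y1 + y2 + y3 + y4 + y5, data = dat)                # equation (1)
eq2 <- lm(I(p*q) ~ y + y1 + y2 + y3 + y4 + y5, data = dat)                # equation (2)
eq3 <- lm(I(p*q) ~ y + y1 + y2 + y3 + y4 + y5 + p, data = dat)            # equation (3)
eq4 <- lm(I(p*q) ~ y + y1 + y2 + y3 + y4 + y5 + poly(p, 3), data = dat)   # equation (4)

sum(coef(eq1)[-1])   # should approach 1 with enough lags if (1) captures the full weather effect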

At this point you should be worrying about having p_t on both the right and left side.  More on this in a moment.  First, let's take a look at the results:

Equation 1:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)     1.68       1.32    1.28     0.20
y               0.39       0.03   15.62     0.00
l.y             0.23       0.03    9.17     0.00
l2.y            0.10       0.03    3.83     0.00
l3.y            0.07       0.03    2.66     0.01
l4.y            0.07       0.03    2.69     0.01
l5.y            0.06       0.03    2.34     0.02


The sum of the y coefficients is 0.86.  I'm sure if you put in enough lags they would sum to 1. You shouldn't take the Std. Error or t-stats seriously for this or any of the other regressions, but that doesn't really matter for the points I want to make. Also, if you run the code, the exact results will differ because it will take a different random draw of y's, but the flavor will be the same.

Equation 2:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  4985.23     166.91   29.87        0
y             -72.15       3.19  -22.63        0
l.y           -43.67       3.20  -13.64        0
l2.y          -22.52       3.21   -7.03        0
l3.y          -15.61       3.21   -4.87        0
l4.y          -13.58       3.19   -4.26        0
l5.y          -12.26       3.19   -3.85        0


All the coefficients are negative.  As we expected, good physical outcomes for y mean bad news for profits, since prices fall through the floor.  If you know a little about the history of agriculture, this seems about right.  So, let's "control" for price.

Equation 3:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2373.15     167.51   14.17        0
y             -28.12       2.91   -9.66        0
l.y           -17.72       2.10   -8.43        0
l2.y          -11.67       1.63   -7.17        0
l3.y           -8.07       1.57   -5.16        0
l4.y           -5.99       1.56   -3.84        0
l5.y           -5.68       1.54   -3.68        0
p               7.84       0.44   17.65        0


Oh, good, the coefficients are less negative.  But we still seem to have a problem.  So, let's improve our control for price by making it a 3rd order polynomial:

Equation 4:
            Estimate Std. Error       t value Pr(>|t|)
(Intercept)  1405.32          0  1.204123e+15     0.00
y               0.00          0  2.000000e-02     0.98
l.y             0.00          0  3.000000e-02     0.98
l2.y            0.00          0  6.200000e-01     0.53
l3.y            0.00          0 -3.200000e-01     0.75
l4.y            0.00          0 -9.500000e-01     0.34
l5.y            0.00          0 -2.410000e+00     0.02
poly(p, 3)1  2914.65          0  3.588634e+15     0.00
poly(p, 3)2  -716.53          0 -1.795882e+15     0.00
poly(p, 3)3     0.00          0  1.640000e+00     0.10


The y coefficients are now almost precisely zero. 

By DG's interpretation, we would say that weather has no effect on profit outcomes and thus that climate change is likely to have little influence on US agriculture.  Except in this simulation we know the underlying physical reality: one unit of y ultimately has a one-unit effect on output.  DG's interpretation is clearly wrong.

What's going on here? 

The problem comes from an attempt to "control" for price.  Price, after all, is a key (the key?) consequence of the weather. Because storage theory predicts that prices incorporate all past production shocks, whether they are caused by weather or something else, in controlling for price, we remove all weather effects on quantities.  So, DG are ultimately mixing up cause and effect, in their case by using a zillion fixed effects. One should take care in adding "controls" that might actually be an effect, especially when you supposedly have a random source of variation.  David Freedman, the late statistician who famously critiqued regression analysis in the social sciences and provided inspiration to the modern empirical revolution in economics, often emphasized this point.

Now, some might argue that the above analysis is just a single crop, and that it doesn't apply to DG's panel data. I'd argue that if you can't make it work in a simpler case, it's unlikely to work in a case that's more complicated.  More pointedly, this angle poses a catch-22 for the identification strategy: If inclusion of state-by-year fixed effects does not absorb all historic weather shocks, then it implies that the weather shocks must have been crop- or substate-specific, in which case there is bias due to endogenous price movements even after the inclusion of these fixed effects. On the other hand, if enough fixed effects are included to account for all endogenous price movements, then lagged weather by definition does not add any additional information and should not be significant in the regression.  Prices are a sufficient statistic for all past and current shocks.

All of this is to show that the whole DG approach has problems.  However, I think the idea of using lagged weather is a good one if combined with a somewhat different approach.  We might, for example, relate all manner of endogenous outcomes (prices, quantities, and whatever else) to current and past weather. This is the correct "reduced form."  From these relationships, combined with some minimalist economic structure, we might learn all kinds of interesting and useful things, and not just about climate change.   This observation, in my view, is the over-arching contribution of my new article with Wolfram Schlenker in the AER.

I think there is a deeper lesson in this whole episode that gets at a broader conversation in the discipline about data-driven applied microeconomics over the last 20 years.  Following Angrist, Ashenfelter, Card and Krueger, among others, everyone's doing experiments and natural experiments.  A lot of this stuff has led to some interesting and useful discoveries.  And it's helped to weed out some applied econometric silliness.

Unfortunately, somewhere along the way, some folks lost sight of basic theory.   In many contexts we do need to attach our reduced forms to some theoretical structure in order to interpret them.  For example, bad weather causing profits to go up in agriculture actually makes sense, and indicates something bad for consumers and for society as a whole.

And in some contexts a little theory might help us remember what is and isn't exogenous.

Thursday, October 31, 2013

Taking crop analysis to MARS

I couldn’t bear to watch the clock wind down on October without a single post this month on G-FEED. So here goes a shot at the buzzer…

A question I get asked or emailed fairly often by students is whether they should use a linear or quadratic model when relating crop yields to monthly or growing season average temperatures. This question comes from all around the world so I guess it’s a good topic for a post, especially since I rarely get the time to email them back quickly.  If you are mainly interested in posts about broad issues and not technical nerdy topics, you can stop reading now.

The short answer is you can get by with a linear model if you are looking over a small range of temperatures, such as year to year swings at one location. But if you are looking across a bigger range, such as across multiple places, you should almost surely use something that allows for nonlinearity (e.g., an optimum temperature somewhere in the middle of the data).

There are issues that arise if using a quadratic model that includes fixed effects for location, a topic which Wolfram wrote about years ago with Craig McIntosh. Essentially this re-introduces the site mean into the estimation of model coefficients, which creates problems of interpretation relative to a standard panel model with fixed effects.

A bigger issue that this question points to, though, is the assumption by many that the choice is simply between linear and quadratic. Both are useful simple approaches to use, especially if data is scarce. But most datasets we work with these days allow much more flexibility in functional form. One clear direction that people have gone is to go to sub-monthly or even sub-daily measures and use flexible spline or degree day models to compute aggregate measures of temperature exposure throughout the season, then use those predictors in the regression.  I won’t say much about that here, except that it makes a good deal of sense and people who like those approaches should really blog more often.

Another approach, though, is to use more flexible functions with the monthly or seasonal data itself. This can be useful in cases where we have lots of monthly data, but not much daily data, or where we simply want something that is faster and easier than using daily data. One of my favorite methods of all time is multivariate adaptive regression splines, also called MARS. This was developed by Jerome Friedman at Stanford about 20 years ago (and I took a class from him about 10 years ago). This approach is like a three-for-one, in that it allows for nonlinearities, can capture asymmetries, and is a fairly good approach to variable selection. The latter is helpful in cases where you have more months than you think are really important for crop yields.

The basic building block of MARS is the hinge function, which is essentially a piecewise linear function that is zero on one side of the hinge, and linearly increasing on the other side. Two examples are shown below, taken from the Wikipedia entry on MARS.


MARS works by repeatedly trying different hinge functions, and adding whichever one gives the maximum reduction in the sum of squared errors. As it adds hinge functions, you can have each one enter by itself or have it multiply an existing hinge in the model, which allows for interactions (I guess that makes it a four-for-one method). Despite searching all possible hinge functions (which covers all variables and hinges at all observed values of each variable), it is a fairly fast algorithm. And like most data mining techniques, there is some back-pruning at the end so it isn’t too prone to overfitting.
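In R, MARS is implemented in the earth package; a sketch on a hypothetical data frame sim of crop-model output (a yield column plus monthly and seasonal weather columns) would look like:

library(earth)
fit <- earth(yield ~ ., data = sim, degree = 2)  # degree = 2 allows hinges to interact
summary(fit)                                     # selected hinge terms and their coefficients
plot(fit)                                        # model-selection and residual diagnostics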

For a long time I liked MARS but couldn’t figure out how to apply it to data where you want to include spatial fixed effects to account for omitted variables. Unlike models with pre-determined predictors, such as monthly average temperature or squared temperature, MARS has to search for the right predictors. And before you know what the predictors are, you can’t subtract out the site-level means as you would in a fixed-effects model. So you can’t know what the predictors are until you search, but you can’t search if you can’t compute the error of the model correctly (because you haven’t included fixed effects).

One semi-obvious solution would be to just ignore fixed-effects, find the hinge-function predictors, and then rerun the model with the selected predictors but including fixed effects. That seems ok but all the problems of omitted variables would still be affecting the selection process.

Recently, I settled on a different idea – first use a crop simulation model to develop a pseudo-dataset for a given crop/region, then run MARS on this simulated data (where omitted variables aren’t an issue) to find the predictors, and then use those predictors on an actual dataset, but including fixed effects to account for potential omitted variables.

I haven’t had much time to explore this, but here’s an initial attempt. First, I used some APSIM simulations for a site in Iowa that we ran for a recent paper on U.S. maize. Running MARS on this, allowing either monthly or seasonal average variables to enter the model, results in just four variables that are able to explain nearly 90% of the yield variation across years. Notice the response functions (below) show the steepest sensitivity for July Tmax, which makes sense. Also, rainfall is important but only up to about 450mm over the May-August period. In both cases, you can see how the results are definitely not linear and not symmetric. And it is a little surprising that only four variables can capture so much of the simulated variation, especially since they all contain no information at the sub-monthly time scale.


Of course this approach relies on assuming the crop model is a reasonable representation of reality. But recall we aren’t using the crop model to actually define the coefficients, just to define the predictors we will use. The next step is to then compute these predictors for actual data across the region, and see how well it works at predicting crop yields. I actually did that a few months ago but can’t find the results right now, and am heading off to teach. I’ll save that for a post in the near future (i.e. before 2015).

Monday, September 30, 2013

Climate Change and Resource Rents

With the next IPCC report coming out, there's been more reporting on climate change issues.  Brad Plumer over at Wonkblog has a nice summary that helps to illustrate how much climate change is already "baked in," so to speak.

I'd like to comment on one point.  Brad writes "Humans can only burn about one-sixth of their fossil fuel reserves if they want to keep global warming below 2ºC."

I'd guess some might quibble with the measurement a bit, since viable reserves depend on price and technology, plus there are many unknowns about how much fossil fuel there really is down there.  But this is probably in the ballpark, and possibly conservative.

Now imagine you own a lot of oil, coal and/or natural gas, you're reading Brad Plumer, and wondering what might happen to climate policy in the coming years.  Maybe not next year or even in the next five or ten years, but you might expect that eventually governments will start doing a lot more to curb fossil fuel use.  You might then want to sell your fossil fuels now or very soon, while you can.   If many resource owners feel this way, fossil fuel prices could fall and CO2 emissions would increase.

This observation amounts to the so-called "green paradox."  Related arguments suggest that taxing carbon may have little influence on use, and subsidizing renewable fuels and alternative technologies, without taxing or otherwise limiting carbon-based fuels, might make global warming worse, since it could push emissions toward the present.

Research on these ideas, mostly theoretical, is pretty hot in environmental economics right now.  It seems like half the submissions I manage at JEEM touch on the green paradox in one way or another.  

All of it has me thinking about a point my advisor Peter Berck often made when I was in grad school. At the time, we were puzzling over different reasons why prices for non-renewable resources--mostly metals and fossil fuels--were not trending up like Hotelling's rule says they should.  Peter suggested that we may never use the resources up, because if we did, we'd choke on all the pollution.  Resource use would effectively be banned before the resources could be exhausted. If resource owners recognized this, they'd have no incentive to hold or store natural resources, and the resource rent (basically the intrinsic value based on finite supply) would be zero, which could help explain non-increasing resource prices.

For all practical purposes, Peter understood the green paradox some 15-20 years ago.  Now the literature is finally playing catch up.  

Wednesday, September 25, 2013

David is a confirmed genius

The MacArthur Foundation just confirmed what we've known all along: that G-FEED's very own David Lobell is a genius.  Hopefully the $625k that MacArthur is forking over will free up David to do some additional blogging on his preferred choice of tennis racquet.

Big congrats to David!!

Thursday, September 12, 2013

The noise lovers

I try not to use this blog much for rants, but it’s an easy way to post more frequently. So… we’ve been getting some feedback on our recent paper on drought stress in U.S. maize, some of it positive, some not. One thing that comes up, as with a lot of prior work, is doubts about how important temperature is. Agronomists often talk about how important the timing of rainfall is. And about how heat doesn’t hurt nearly as much if soils are very moist. To me this is another way of saying (1) other factors matter too and (2) there are interactions between temperature and other factors. The answer to both of these is “Of course!”

I am struck by the similarity of these discussions to those that Marshall posted about the empirical work on conflict. They go something like this:
Person A: “We’ve looked at the data and see a clear response that people tend to wake up when you turn the lights on”
Person B: “But people also wake up because they went to bed early last night and aren’t tired any more.”
A: “Yeah, ok”
B: “And the lights probably won’t wake them up if they are passed out drunk.”
A: “Ok, great point”
B: “I’ve woken up thousands of times in my 30-year career, and rarely did I get woken because someone turned the light on”
A: “ok”
B: “So then how can you possibly claim that turning on lights causes people to wake up”
A: “What? Are you serious? Did you even read the paper?”
B: “I don’t need to, I am an expert on waking up.”
And so on. I sometimes don’t know whether people seriously don’t understand the difference between explaining some vs. all of the variance, or if they just look for any opportunity to plug their own area of expertise. When we claim to see a clear signal in the data, it is not a claim that there is no noise around that signal. And some of the noise might include interactions with other variables. In fact, if there wasn’t any noise then the signal would have been known long ago.

The other day I was thinking of replacing my old tennis racquet, so I went to Google and typed in “prince thunder” (the name of my current racquet).  Turns out the top results were about a song that the artist Prince wrote called Thunder. Does that mean that Google is entirely useless? No, it means there is some error if you try to predict what someone wants based only on a couple of words they type. But with their enormous datasets, they are pretty good at picking up signals and getting the answer right more often than not. For most people typing those words, they were probably looking for the song.

Back to the crop example. Of course, heat will matter less or more depending on the situation. And of course getting rainfall right before key stages is more important than getting it after. These are both reasons for the scatter in any plot of heat (or any other variable) against yields. But neither of those refutes the fact that higher temperatures tend to result in more water stress, and lower yields. Or in Sol and Marshall’s case, that higher temperatures tend to increase the risk of conflict.


I sometimes think if one of us were to discover some magic combination of predictors that explained 95% of the variance in some outcome, there would be a chorus of people saying “you left out my favorite 5%!” Don’t get me wrong, there are lots of legitimate questions about cause and effect, stationarity, etc. that are worth looking into. But how much time should we really spend having the same old conversation about the difference between a signal and noise?