Wednesday, November 20, 2013

Fixed Effects Infatuation

The fashionable thing to do in applied econometrics, going on 15 years or so, is to find a gigantic panel data set, come up with a cute question about whether some variable x causes another variable y, and test this hypothesis by running a regression of y on x plus a huge number of fixed effects to control for "unobserved heterogeneity" or deal with "omitted variable bias."  I've done a fair amount of work like this myself. The standard model is:

y_i,t = β x_i,t + a_i + b_t + u_i,t

where β is the coefficient of interest, a_i are fixed effects that span the cross section, b_t are fixed effects that span the time series, and u_i,t is the model error, which we hope is not associated with the causal variable x_i,t conditional on a_i and b_t.

If you're really clever, you can find geographic or other kinds of groupings of individuals, like counties, and include group-by-year fixed effects:

y_i,t = β x_i,t + a_i + b_g,t + u_i,t
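
For concreteness, here is a minimal sketch of what these two specifications look like in R. The data frame d and its columns (y, x, id, year, group) are hypothetical:

# Two-way fixed effects in base R: unit dummies plus year dummies.
fe2 <- lm(y ~ x + factor(id) + factor(year), data = d)

# Group-by-year fixed effects: one dummy per group-year cell.
# (Equivalently, with the fixest package: feols(y ~ x | id + group^year, d).)
fegt <- lm(y ~ x + factor(id) + factor(group):factor(year), data = d)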

The generalizable point of my lengthy post the other day on storage and agricultural impacts of climate change was that this approach, while useful in some contexts, can have some big drawbacks.  Increasingly, I fear applied econometricians misuse it.  They found their hammer, and now everything is a nail.

What's wrong with fixed effects? 

A practical problem with fixed effects gone wild is that they generally purge the data set of most variation.  This may be useful if you hope to isolate some interesting localized variation that you can argue is exogenous.  But if the most interesting variation derives from a broader phenomenon, then there may be too little variation left over to identify an interesting effect.

A corollary to this point is that fixed effects tend to exaggerate the attenuation bias from measurement error, since the errors comprise a much larger share of the remaining variation in x once the fixed effects have been removed.
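
A quick simulation illustrates the point.  This is just a sketch with made-up numbers: the true x varies mostly across units, and we observe it with noise.

# Demeaning strips most of the true signal in x but none of the measurement
# noise, so the within estimate is attenuated much more than plain OLS.
set.seed(1)
n <- 500; t <- 10
id <- rep(1:n, each = t)
x.true <- rep(rnorm(n, sd = 3), each = t) + rnorm(n*t)  # mostly cross-sectional
y <- x.true + rnorm(n*t)
x.obs <- x.true + rnorm(n*t)                            # measurement error
within <- function(v) v - ave(v, id)                    # remove unit means
coef(lm(y ~ x.obs))[2]                  # about 0.9: modest attenuation
coef(lm(within(y) ~ within(x.obs)))[2]  # about 0.5: attenuation roughly doubles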

But there is a more fundamental problem.  To see this, take a step back and think generically about economics.  In economics, almost everything affects everything else, via prices and other kinds of costs and benefits.  Micro incentives affect choices, and those choices add up to affect prices, costs and benefits more broadly, and thus help to organize the ordinary business of life.  That's the essence of Adam Smith's "invisible hand," supply and demand, equilibrium theory, and so on.  That insight, a unifying theoretical theme if there is one in economics, implies a fundamental connectedness of human activities over time and space.  It's not just that there are unobserved correlated factors; everything literally affects everything else.  On some level it's what connects us to ecologists, although some ecologists may be loath to admit an affinity with economics.

In contrast to the nature of economics, regression with fixed effects is a tool designed for experiments with repeated measures.  Heterogeneous observational units get different treatments, and they might be mutually affected by some outside factor, but the observational units don't affect each other.  They are, by assumption, siloed, at least with respect to consequences of the treatment (whatever your x is).  This design doesn't seem well suited to many kinds of observational data.

I'll put it another way.  Suppose your (hopefully) exogenous variable of choice is x, and x causes z, and then both x and z affect y.  Further, suppose the effects of x on z spill outside of the confines of your fixed-effects units.  Even if fixed effects don't purge all the variation in x, they may purge much of the path going from x to z and z to y, thereby biasing the reduced form link between x and y. In other words, fixed effects are endogenous.
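
Here is a toy version of that story in R, with made-up numbers.  The common group-level shock w is weather that everyone in a group shares; z (think of a price, or a water table) responds to w; and y responds to both.

# A unit of group-wide weather moves y by 1 (directly) plus 1 (through z).
# Group fixed effects purge the x -> z -> y path, because z varies only at
# the group level, and recover only the direct effect.
set.seed(2)
G <- 100; n <- 20                      # groups and units per group
g <- rep(1:G, each = n)
w <- rep(rnorm(G), each = n)           # common group-level weather shock
x <- w + rnorm(G*n, sd = 0.3)          # observed weather, mostly common
z <- w                                 # z is driven entirely by the common shock
y <- x + z + rnorm(G*n, sd = 0.5)
coef(lm(y ~ x))[2]                     # close to 2: includes the z channel
xw <- x - ave(x, g); yw <- y - ave(y, g)
coef(lm(yw ~ xw))[2]                   # close to 1: the z channel is purged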

None of this is to say that fixed effects can't be useful; with careful account of correlated unobserved factors, they can be very useful in many settings.  But the inferences we draw may be very limited.  And without care, we may draw conclusions that are very misleading.

Monday, November 11, 2013

Can crop rotations cure dead zones?

It is now fairly well documented that much of the water quality problem leading to the infamous "dead zone" in the Gulf of Mexico comes from fertilizer applications on corn.  Fertilizer on corn is probably a big part of similar challenges in the Chesapeake Bay and Great Lakes.

This is a tough problem.  The Pigouvian solution---taxing fertilizer runoff, or possibly just fertilizer---would help.  But we can't forget that fertilizer is the main source of large crop productivity gains over the last 75 years, gains that have fed the world.  It's hard to see how even a large fertilizer tax would much reduce fertilizer applications on any given acre of corn.

However, one way to boost crop yields and reduce fertilizer applications is to rotate crops.  Corn-soybean rotations are the most common, as soybean fixes nitrogen in the soil, which reduces the need for applications on subsequent corn plantings.  Rotation also reduces pest problems.  The yield boost on both crops is remarkable.  More rotation would mean less corn, and less fertilizer applied to remaining corn, at least in comparison to planting corn after corn, which still happens a fair amount.

I've got a new paper (actually, an old one, newly revised), coauthored with Mike Livingston of USDA and Yue Zhang, a graduate student at NCSU, that might provide a useful take on this issue.  This paper has taken forever.  We've solved a fairly complex stochastic dynamic model that takes the variability of prices, yields and agronomic benefits of rotation into account.  It's calibrated using the autoregressive properties of past prices and experimental plot data.  All of these stochastic and dynamic features can matter for rotations.  John Rust once told me that Bellman always thought crop rotations would be a great application for his recursive method of solving dynamic problems.
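
To give a flavor of the recursion (and nothing more), here is a stripped-down deterministic version in R: the state is last year's crop, the choice is this year's planting, and a rotated crop gets a yield boost.  All numbers are placeholders, and unlike the paper's model there are no stochastic, autocorrelated prices or yields here.

# Toy Bellman iteration for the corn-soy rotation choice.
crops <- c("corn", "soy")
yield <- c(corn = 180, soy = 50)     # bu/acre under monoculture (made up)
boost <- 1.10                        # rotation yield boost (made up)
price <- c(corn = 4.5, soy = 11)     # expected prices, $/bu (made up)
cost  <- c(corn = 450, soy = 250)    # production costs, $/acre (made up)
beta  <- 0.95                        # discount factor
payoff <- function(last, ch, V)      # plant ch given last year's crop
  price[ch] * yield[ch] * ifelse(ch != last, boost, 1) - cost[ch] + beta * V[ch]
V <- c(corn = 0, soy = 0)            # value by last year's crop
repeat {
  Vnew <- sapply(crops, function(last) max(sapply(crops, payoff, last = last, V = V)))
  if (max(abs(Vnew - V)) < 1e-8) break
  V <- Vnew
}
# optimal planting rule given last year's crop
sapply(crops, function(last) crops[which.max(sapply(crops, payoff, last = last, V = V))])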

Here's the gist of what we found:

Always rotating, regardless of prices, is close to optimal, even though economically optimal planting may rotate much less frequently.  One implication is that reduced corn monoculture and fertilizer application rates might be implemented with modest incentive payments of $4 per acre or less, and quite possibly less than $1 per acre.

In the past I've been skeptical that even a high fertilizer tax could have much influence on fertilizer use. But given low-cost substitutes like rotation, perhaps it wouldn't cost as much as some think to make substantial improvements in water quality.

Nathan Hendricks and coauthors take a somewhat different approach to the same issue (also see this paper).  It's hard to compare our models, but I gather they are saying roughly similar things.

Friday, November 8, 2013

More fun with MARS

(But not as much fun as watching Stanford dominate Oregon last night).

In a recent post I discussed the potential of multivariate adaptive regression splines (MARS) for crop analysis, particularly because they offer a simple way of dealing with asymmetric and nonlinear relationships.  Here I continue from where I left off, so see the previous post first if you haven't already.

Using the APSIM simulations (for a single site) to train MARS resulted in the selection of four variables.  One of them was related to radiation, which we don't have good data on, so here I will just take the other three, which were related to July Tmax, May-August Tmax, and May-August precipitation.  Now, the key point is we are not using those variables as the predictors themselves, but instead using hinge functions based on them.  The figure below shows the specific thresholds I am using (based on the MARS results from the previous post) to define the basis hinge functions.

[Figure: the three hinge basis functions and their thresholds, for July Tmax, May-August Tmax, and May-August precipitation]

I then compute these predictor values for each county-year observation in a panel dataset of US corn yields, then subtract county means from all variables (equivalent to introducing county fixed effects), and fit three different regression models (a code sketch follows the list):

Model 1: Just quadratic year trends (log(Yield) ~ year + year^2). This serves as a reference “no-weather” model.
Model 2: log(Yield) ~  year + year^2 + GDD  + EDD + prec + prec^2. This model adds the predictors we normally use based on Wolfram and Mike’s 2009 paper, with GDD and EDD meaning growing degree days between 8 and 29 °C and extreme degree days (above 29 °C). Note these measures rely on daily Tmin and Tmax data to compute the degree days.
Model 3: log(Yield) ~  year + year^2 + the three predictors shown in the figure above. Note these are based only on monthly average Tmax or total precipitation.
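
To make the hinge idea concrete, here is a sketch of how Model 3 gets built.  The data frame, column names, thresholds, and hinge orientations below are all placeholders; the real thresholds are the ones from the MARS fit in the figure above.

# d is a hypothetical county-year data frame with columns Yield, year,
# county, julTmax, mjTmax (May-Aug Tmax), and mjPrec (May-Aug precip).
hinge <- function(x, t) pmax(0, x - t)  # MARS also uses the mirror, pmax(0, t - x)
d$h1 <- hinge(d$julTmax, 32)            # hot Julys hurt (threshold made up)
d$h2 <- hinge(d$mjTmax, 27)             # hot seasons hurt (threshold made up)
d$h3 <- pmax(0, 450 - d$mjPrec)         # dry seasons hurt (threshold made up)
d$ly <- log(d$Yield); d$yr2 <- d$year^2
vars <- c("ly", "year", "yr2", "h1", "h2", "h3")
d[vars] <- lapply(d[vars], function(v) v - ave(v, d$county))  # county demeaning
m3 <- lm(ly ~ year + yr2 + h1 + h2 + h3, data = d)            # Model 3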

The table below shows the calibration error as well as the mean out-of-sample error for each model. What’s interesting here is that the model with the three hinge functions performs just as well as (actually even a little better than) the one based on degree day calculations. This is particularly surprising since the hinge functions (1) use only monthly data and (2) were derived from simulations at a single site in Iowa. Apparently they are representative enough to result in a pretty good model for the entire rainfed Corn Belt.

Model                 Calibration R2   Calibration RMSE   Out-of-sample RMSE (500 runs)   % reduction in out-of-sample error
1 (no weather)        0.59             0.270              0.285                           --
2 (degree days)       0.66             0.241              0.259                           8.9
3 (hinge functions)*  0.68             0.235              0.254                           10.7

*For those interested, the coefficients on the three hinge terms are -0.074, -0.0052, and -0.061, respectively.

The take-home here for me is that even a few predictors based on monthly data can tell you a lot about crop yields, BUT it's important to account for asymmetries.  Hinge functions let you do that, and process-based crop models can help identify the right hinge functions (although there are probably other ways to do that too).

So I think this is overall a promising approach – one could use selected crop model simulations from around the world, such as those out of AgMIP, to identify hinge functions for different cropping systems, and then use these to build robust and simple empirical models for actual yields.  Alas, I probably won't have time to develop it much in the foreseeable future, but hopefully this post will inspire something.

Monday, November 4, 2013

Weather, storage and an old climate impact debate.

This somewhat technical post is a belated follow-up to a comment I wrote with Tony Fisher, Michael Hanemann and Wolfram Schlenker, which was finally published last year in the American Economic Review.  I probably should have done this a long time ago, but I needed to do a little programming.  And I've basically been slammed nonstop.

First the back story:  The comment re-examines a paper by Deschênes and Greenstone (DG) that supposedly estimates a lower bound on the effects of climate change by relating county-level farm profits to weather.  They argue that year-to-year variation in weather is random---a fair proposition---and control for unobserved differences across counties using fixed effects.  This is all pretty standard technique.

The overarching argument was that with climate change, farmers could adapt (adjust their farming practices) in ways they cannot with weather, so the climate effect on farm profits would be more favorable than their estimated weather effect.

Now, bad physical outcomes in agriculture can actually be good for farmers' profits, since demand for most agricultural commodities is pretty steep: prices go up as quantities go down.  So, to control for the price effects they include year fixed effects.  And since farmers grow different crops in different parts of the country and there can be local price anomalies, they go further and use state-by-year fixed effects so as to squarely focus on quantity effects in all locations.

Our comment pointed out a few problems:  (1) there were some data errors, like missing temperature data apparently coded as zeros, and much of the Midwest and most of Iowa dropped from the sample without explanation; (2) in making climate predictions they applied state-level estimates to county-level baselines, in effect making climate predictions that regress to the state mean (e.g., Death Valley and Mt. Whitney have different baselines but the same future); (3) all those fixed effects wash out over 99 percent of the weather variation, leaving only data errors for estimation; (4) the standard errors didn't appropriately account for the panel nature of the spatially correlated errors.

These data and econometric issues got the most attention.  Correct these things and the results change a lot.  See the comment for details.

But, to our minds, there is a deeper problem with the whole approach.  Their measure of profits was really no such thing, at least not in an economic sense: it was reported sales minus a crude estimate of current expenditures.  The critical thing here is that farmers often do not sell what they produce.  About half the country's grain inventories are held on farm.  Farms also hold inventory in the form of capital and livestock, which can be held, divested or slaughtered.  Thus, effects of weather in one year may not show up in profits measured in that year.  And since inventories tend to be accumulated in plentiful times and divested in bad times, these inventory adjustments are going to be correlated with the weather and cause bias.

Although DG did not consider this point originally, they admitted it was a good one, but argued they had a simple solution: just include the lags of weather in the regression. When they attempted this, they found lagged weather was not significant, and thus that this issue was unimportant.  This argument is presented in their reply to our comment.

We were skeptical about their proposed solution to the storage issue.  And so, one day long ago, I proposed to Michael Greenstone that we test his proposed solution.  We could solve a competitive storage model, assume farmers store as a competitive market would, and then simulate prices and quantities that vary randomly with the weather.  Then we could regress sales (price times consumption) against our constructed weather and lags of weather, plus price controls.  If the lags worked in this instance, where we knew the underlying physical structure, then it might work in reality.

Greenstone didn't like this idea, and we had limited space in the comment, so the storage stuff took a minimalist back seat. Hence this belated post.

So I recently coded a toy storage model in R, which is nice because anyone can download and run this thing  (R is free).  Also, this was part of a problem set I gave to my PhD students, so I had to do it anyway.

Here's the basic setup:

y    is production, which varies randomly (like the weather).
q    is consumption, or what's actually sold in a year.
p    is the market price, which varies inversely with q (the demand curve).
z    is the amount of the commodity on hand (y plus carryover from last year).

The point of the model is to figure out how much production to put in or take out of storage.  This requires numerical analysis (thus, the R code).  Dynamic equilibrium occurs when there is no arbitrage: where it's impossible to make money by storing more or storing less.

Once we've solved the model, which basically gives q, p as a function of z, we can simulate y with random draws and develop a path of q and p.  I chose a demand curve, interest rate and storage cost that can give rise to a fair amount of price variability and autocorrelation, which happens to fit the facts.  The code is here.
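
For those who don't want to open the full code, here is a compact sketch of the same kind of toy model.  The parameter values are illustrative, not necessarily the ones in the posted code.  It solves for the equilibrium price as a function of supply on hand, z, by iterating on the no-arbitrage condition, then simulates y, q and p.

# Inverse demand p = a - b*q; interest rate r; per-unit storage cost k.
set.seed(123)
a <- 200; b <- 1; r <- 0.05; k <- 2; beta <- 1/(1 + r)
ybar <- 100; ysd <- 10
zgrid <- seq(80, 200, length.out = 201)   # grid for supply on hand
ydraw <- qnorm((1:20)/21, ybar, ysd)      # production draws for computing E[p']
pfun  <- a - b*zgrid                      # initial guess: store nothing
for (it in 1:300) {
  old <- pfun
  Ep <- function(s) {                     # expected price given carryover s
    znext <- pmin(pmax(s + ydraw, min(zgrid)), max(zgrid))
    mean(approx(zgrid, old, znext)$y)
  }
  for (i in seq_along(zgrid)) {
    z <- zgrid[i]
    gap <- function(s) (a - b*(z - s)) - (beta*Ep(s) - k)  # today's price minus return to storing
    pfun[i] <- if (gap(0) >= 0) a - b*z   # corner: storage unprofitable
               else a - b*(z - uniroot(gap, c(0, z))$root)
  }
  if (max(abs(pfun - old)) < 1e-6) break
}
# simulate a long series from the solved model
T <- 5000; y <- rnorm(T, ybar, ysd)
z <- q <- p <- numeric(T); carry <- 0
for (t in 1:T) {
  z[t] <- carry + y[t]
  zc   <- min(max(z[t], min(zgrid)), max(zgrid))
  p[t] <- approx(zgrid, pfun, zc)$y
  q[t] <- (a - p[t]) / b                  # invert the demand curve
  carry <- max(z[t] - q[t], 0)            # carryover into next year
}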

Now, given our simulated y, q and p, we might estimate:

(1)   q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + error

(the ... means additional lags, as many as you like.  I use five.)

This expression makes sense to me, and might have been what DG had in mind: quantity in any one year is a function of this year's weather and a reasonable number of past years' weather, all of which affect today's output via storage.  For the regression to fully capture the true effect of weather, the sum of the b# coefficients should be one.

Alternatively we might estimate:

(2)   p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + error

This is almost like DG's profit regression, as costs of production in this toy model are zero, so "profit" is just total sales.  But DG wanted to control for price effects in order to account for the pure weather effect on quantity, since in the above relationship the sum of the b# coefficients is likely negative.  So, to do something akin to DG within the context of this toy model, we need to control for price.  This might be something like:

(3)  p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + c p_t + error

Or, if you want to be a little more careful, recognizing there is a nonlinear relationship, we might use a more flexible control for p_t, such as a polynomial.  Note that we cannot use fixed effects like DG because this isn't a panel.  I'll come back to this later.  In any case, with better controls we get:
 
(4)   p_t q_t = a + b0  y_t + b1 y_{t-1} + b2 y_{t-2} + b3 y_{t-3} +  ... + c1 p_t  + c2 p_t^2 + c3 p_t^3 +  error
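
Continuing the sketch, all four regressions are easy to run on the simulated series, assuming the y, q and p vectors from the toy model above (the posted code differs in details):

# Build y_t and five lags, aligned with q and p.
L <- embed(y, 6)
colnames(L) <- c("y", "l.y", "l2.y", "l3.y", "l4.y", "l5.y")
d <- data.frame(L, q = q[-(1:5)], p = p[-(1:5)])
eq1 <- lm(q ~ y + l.y + l2.y + l3.y + l4.y + l5.y, data = d)       # equation (1)
eq2 <- lm(I(p*q) ~ y + l.y + l2.y + l3.y + l4.y + l5.y, data = d)  # equation (2)
eq3 <- update(eq2, . ~ . + p)                                      # equation (3)
eq4 <- update(eq2, . ~ . + poly(p, 3))                             # equation (4)
sum(coef(eq1)[-1])   # should be close to 1 with enough lags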

At this point you should be worrying about having p_t on both the right and left side.  More on this in a moment.  First, let's take a look at the results:

Equation 1:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)     1.68       1.32    1.28     0.20
y               0.39       0.03   15.62     0.00
l.y             0.23       0.03    9.17     0.00
l2.y            0.10       0.03    3.83     0.00
l3.y            0.07       0.03    2.66     0.01
l4.y            0.07       0.03    2.69     0.01
l5.y            0.06       0.03    2.34     0.02


The sum of the y coefficients is 0.86.  I'm sure if you put in enough lags they would sum to 1. You shouldn't take the Std. Error or t-stats seriously for this or any of the other regressions, but that doesn't really matter for the points I want to make. Also, if you run the code, the exact results will differ because it will take a different random draw of y's, but the flavor will be the same.

Equation 2:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  4985.23     166.91   29.87        0
y             -72.15       3.19  -22.63        0
l.y           -43.67       3.20  -13.64        0
l2.y          -22.52       3.21   -7.03        0
l3.y          -15.61       3.21   -4.87        0
l4.y          -13.58       3.19   -4.26        0
l5.y          -12.26       3.19   -3.85        0


All the coefficients are negative.  As we expected, good physical outcomes for y mean bad news for profits, since prices fall through the floor.  If you know a little about the history of agriculture, this seems about right.  So, let's "control" for price.

Equation 3:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2373.15     167.51   14.17        0
y             -28.12       2.91   -9.66        0
l.y           -17.72       2.10   -8.43        0
l2.y          -11.67       1.63   -7.17        0
l3.y           -8.07       1.57   -5.16        0
l4.y           -5.99       1.56   -3.84        0
l5.y           -5.68       1.54   -3.68        0
p               7.84       0.44   17.65        0


Oh, good, the coefficients are less negative.  But we still seem to have a problem.  So, let's improve our control for price by making it a 3rd order polynomial:

Equation 4:
            Estimate Std. Error       t value Pr(>|t|)
(Intercept)  1405.32          0  1.204123e+15     0.00
y               0.00          0  2.000000e-02     0.98
l.y             0.00          0  3.000000e-02     0.98
l2.y            0.00          0  6.200000e-01     0.53
l3.y            0.00          0 -3.200000e-01     0.75
l4.y            0.00          0 -9.500000e-01     0.34
l5.y            0.00          0 -2.410000e+00     0.02
poly(p, 3)1  2914.65          0  3.588634e+15     0.00
poly(p, 3)2  -716.53          0 -1.795882e+15     0.00
poly(p, 3)3     0.00          0  1.640000e+00     0.10


The y coefficients are now almost precisely zero. 

By DG's interpretation, we would say that weather has no effect on profit outcomes and thus that climate change is likely to have little influence on US agriculture.  Except in this simulation we know the underlying physical reality: one unit of y ultimately has a one-unit effect on output.  DG's interpretation is clearly wrong.

What's going on here? 

The problem comes from an attempt to "control" for price.  Price, after all, is a key (the key?) consequence of the weather. Because storage theory predicts that prices incorporate all past production shocks, whether they are caused by weather or something else, in controlling for price, we remove all weather effects on quantities.  So, DG are ultimately mixing up cause and effect, in their case by using a zillion fixed effects. One should take care in adding "controls" that might actually be an effect, especially when you supposedly have a random source of variation.  David Freedman, the late statistician who famously critiqued regression analysis in the social sciences and provided inspiration to the modern empirical revolution in economics, often emphasized this point.

Now, some might argue that the above analysis is just a single crop, and that it doesn't apply to DG's panel data.  I'd argue that if you can't make it work in a simpler case, it's unlikely to work in a case that's more complicated.  More pointedly, this angle poses a catch-22 for the identification strategy: If inclusion of state-by-year fixed effects does not absorb all historic weather shocks, then the weather shocks must have been crop- or substate-specific, in which case there is bias due to endogenous price movements even after the inclusion of these fixed effects.  On the other hand, if enough fixed effects are included to account for all endogenous price movements, then lagged weather by definition does not add any additional information and should not be significant in the regression.  Prices are a sufficient statistic for all past and current shocks.

All of this is to show that the whole DG approach has problems.  However, I think the idea of using lagged weather is a good one if combined with a somewhat different approach.  We might, for example, relate all manner of endogenous outcomes (prices, quantities, and whatever else) to current and past weather.  This is the correct "reduced form."  From these relationships, combined with some minimalist economic structure, we might learn all kinds of interesting and useful things, and not just about climate change.  This observation, in my view, is the overarching contribution of my new article with Wolfram Schlenker in the AER.

I think there is a deeper lesson in this whole episode that gets at a broader conversation in the discipline about data-driven applied microeconomics over the last 20 years.  Following Angrist, Ashenfelter, Card and Krueger, among others, everyone's doing experiments and natural experiments.  A lot of this stuff has led to some interesting and useful discoveries.  And it's helped to weed out some applied econometric silliness.

Unfortunately, somewhere along the way, some folks lost sight of basic theory.   In many contexts we do need to attach our reduced forms to some theoretical structure in order to interpret them.  For example, bad weather causing profits to go up in agriculture actually makes sense, and indicates something bad for consumers and for society as a whole.

And in some contexts a little theory might help us remember what is and isn't exogenous.