Monday, September 21, 2015

El Niño is coming, make this time different

Kyle Meng and I published an op-ed in the Guardian today trying to raise awareness of the potential socioeconomic impacts of, and policy responses to, the emerging El Niño.  Forecasts this year are extraordinary.  In particular, for folks who aren't climate wonks and who live in temperate locations, it is challenging to visualize the scale and scope of what might come down the pipeline this year in the tropics and subtropics. Read the op-ed here.

Countries where the majority of the population experience hotter conditions under El Niño are shown in red. Countries that get cooler under El Niño are shown in blue (reproduced from Hsiang and Meng, AER 2015)

Tuesday, August 18, 2015

Daily or monthly weather data?

We’ve had a few really hot days here in California. It won’t surprise readers of this blog to know the heat has made Marshall unusually violent and Sol unusually unproductive. They practice what they preach. Apart from that, it’s gotten me thinking back to a common issue in our line of work - getting “good” measures of heat exposure. It’s become quite popular to be as precise as possible in doing this – using daily or even hourly measures of temperature to construct things like ‘extreme degree days’ or ‘killing degree days’ (I don’t really like the latter term, but that’s beside the point for now).

I’m all for precision when it is possible, but the reality is that in many parts of the world we still don’t have good daily measures of temperature, at least not for many locations. But in many cases there are more reliable measures of monthly than daily temperatures. For example, the CRU has gridded time series of monthly average max and min temperature at 0.5 degree resolution.

It seems a common view is that you can’t expect to do too well with these “coarse” temporal aggregates. But I’m going to go out on a limb and say that sometimes you can. Or at least I think the difference has been overblown, probably because many of the comparisons between monthly and daily weather show the latter working much better. But I think it’s overlooked that most comparisons of regressions using monthly and daily measures of heat have not been a fair fight.

What do I mean? On the one hand, you typically have the daily or hourly measures of heat, such as extreme degree days (EDD) or exposure in individual temperature bins. Then they enter into some fancy pants model that fits a spline or some other flexible function that captures all sorts of nonlinearities and asymmetries. Then on the other hand, for comparison you have a model with a quadratic response to growing season average temperature. I’m not trying to belittle the fancy approaches (I bin just as much as the next guy), but we should at least give the monthly data a fighting chance. We often restrict it to growing season averages rather than monthly averages, often use average daily temperatures rather than average maximums and minimums, and, most importantly, often impose symmetry by using a quadratic. Maybe this is just out of habit, or maybe it’s the soft bigotry of low expectations for those poor monthly data.

As an example, suppose, as we’ve discussed in various other posts, that the best predictor of corn yields in the U.S. is exposure to very high temperatures during July. In particular, suppose that degree days above 30°C (EDD) is the best. Below I show the correlation of this daily measure for a site in Iowa with various growing season and monthly averages. You can see that average season temperature isn’t so good, but July average is a bit better, and July average daily maximum even better. In other words, if a month has a lot of really hot days, then that month's average daily maximum is likely to be pretty high.

You can also see that the relationship isn’t exactly linear. So a model with yields vs. any of these monthly or growing season averages likely wouldn’t do as well as EDD if the monthly data entered in as a linear or quadratic response. But as I described in an old post that I’m pretty sure no one has ever read, one can instead define simple asymmetric hinge functions based on monthly temperature and rainfall. In the case of U.S. corn, I suggested these three based on a model fit to simulated data:
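As a rough sketch of what these hinge functions look like in code (the knot values here are illustrative; the fitted ones are in the linked post, though the 450mm rainfall knot is discussed below): each regressor is flat on one side of a knot and linear on the other, which lets monthly data capture asymmetry that a quadratic cannot.

```python
def hinge_above(x, knot):
    """Increases linearly above the knot, zero below (e.g. harmful heat)."""
    return max(x - knot, 0.0)

def hinge_below(x, knot):
    """Increases linearly below the knot, zero above (e.g. a rainfall deficit)."""
    return max(knot - x, 0.0)

# Example: yield responds to rain up to 450 mm and is flat afterwards --
# equivalently, yield falls with the rainfall *deficit* below 450 mm.
rain_deficit = hinge_below(380.0, 450.0)  # 70 mm short of the knot
heat_excess = hinge_above(33.5, 30.0)     # 3.5 C above a hypothetical knot
print(rain_deficit, heat_excess)
```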

This is now what I’d consider more of a fair fight between daily and monthly data. The table below is from what I posted before. It compares the out-of-sample skill of a model using two daily-based measures (GDD and EDD) to a model using the three monthly-based hinge functions above. Both models include county fixed effects and quadratic time trends. In this particular case, the monthly model (3) even works slightly better than the daily model (2). I suspect the fact it’s even better relates less to the temperature terms than to the fact that model (2) uses a quadratic in growing season rainfall, which is probably less appropriate than the more asymmetric hinge function – which says yields respond positively up to 450mm of rain and are flat afterwards.

(Table: calibration R², average root mean square error for calibration and for out-of-sample data over 500 runs, and % reduction in out-of-sample error, for each model.)

Overall, the point is that monthly data may not be so much worse than daily for many applications. I’m sure we can find some examples where it is, but in many important examples it won’t be. I think this is good news given how often we can’t get good daily data. Of course, there’s a chance the heat is making me crazy and I’m wrong about all this. Hopefully at least I've provoked the others to post some counter-examples. There's nothing like a good old fashioned conflict on a hot day.

Friday, August 7, 2015

US weather and corn yields 2015.

Here's the annual update on weather in the US, averaged over the areas where corn is grown.  The preferred model by Michael Roberts and me [paper] splits daily temperature into beneficial moderate heat (degree days 10-29C, or 50-84F) and harmful extreme heat (degree days above 29C, or 84F). These two variables (especially the one on extreme heat) are surprisingly powerful predictors of annual corn yields.  So what does 2015 look like? Below are the numbers through the end of July.
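For readers who want the mechanics, here is a simplified sketch of the two degree-day variables (computed from daily average temperatures only; the actual model distributes exposure within each day more carefully):

```python
def moderate_dd(t, low=10.0, high=29.0):
    """Beneficial degree days between low and high for one day's temp (C)."""
    return min(max(t - low, 0.0), high - low)

def extreme_dd(t, high=29.0):
    """Harmful degree days above high for one day's temperature (C)."""
    return max(t - high, 0.0)

# Toy daily temperatures for a short stretch of the season
season = [8.0, 15.0, 24.0, 31.0, 35.0]
gdd = sum(moderate_dd(t) for t in season)
edd = sum(extreme_dd(t) for t in season)
print(gdd, edd)  # 57.0 8.0
```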

First, here's the cumulative occurrence of extreme heat for March 1st, 2015 - July 31, 2015. The grey dashed lines are annual time series from 1950-2011, the black line is the average (1950-2010), and the colored lines show the last four years.  2012 (blue) was very hot and had very low yields as predicted by the model.  On the other hand, 2014 (green) had among the lowest number of harmful extreme degree days. The current year, 2015 (magenta line), comes in slightly below normal so far.

Second, beneficial moderate heat is above average. Usually the two are positively correlated (when extreme heat is above normal, so is moderate heat). Lower-than-average harmful extreme heat and above-average beneficial moderate heat suggest that we should see another above-average year in terms of crop yields (qualification: August is still outstanding, and farmers planted late this year in many areas due to the cold winter, suggesting that August might be more important than usual). This is supported by the fact that corn futures have been coming down lately.

Finally, Kyle Meng and Solomon Hsiang have just pointed out to me that the very strong El Niño signal likely suggests that other parts of the globe will see significant production shortfalls, so hopefully some of this can be mitigated by higher-than-average US yields - the power of trade.

Tuesday, August 4, 2015

Answering Matthew Kahn's questions about climate adaptation

Matt has taken the bait and asked me five good questions about my snarky, contrarian post on climate adaptation.  Here are his questions and my answers.

Question 1.  This paper will be published soon by the JPE. Costinot, Arnaud, Dave Donaldson, and Cory B. Smith. Evolving comparative advantage and the impact of climate change in agricultural markets: Evidence from 1.7 million fields around the world. No. w20079. National Bureau of Economic Research, 2014.

It strongly suggests that adaptation will play a key role protecting us. Which parts of their argument do you reject and why?

Answer:  This looks like a solid paper, much more serious than the average paper I get to review, and I have not yet studied it.  I’m slow, so it would take me a while to unpack all the details and study the data and model.  Although, from a quick look, I think there are a couple of points I can make right now.

First, and most importantly, I think we need to be clear about the differences between (i) adaptation; (ii) price response and trade; (iii) innovation that would happen anyway; (iv) climate-change-induced innovation; and (v) price-induced innovation.  I’m pretty sure this paper is mainly about (ii), not about adaptation as conventionally defined within the literature, although there appears to be some adaptation too.  I need to study this much more to get a sense of the different magnitudes of the elasticities they estimate, and whether I think they are plausible given the data.

To be clear: I think adaptation, as conventionally defined, pertains to changing production behavior in response to a changing climate while holding all other factors (like prices, trade, technology, etc.) constant.  My annoyance is chiefly that people are mixing up these concepts.  My second annoyance is that too many are perpetually optimistic--some economists wear it like a badge, and I don’t think evidence or history necessarily backs up that optimism.

Question 2. If farmers know that they face uncertain risks due to climate change, what portfolio choices can they engage in to reduce the variability of their earnings? What futures markets exist to allow for hedging? If a risk averse economic agent knows "that he does not know" what ambiguous risks she faces, she will invest in options to protect herself. Does your empirical work capture this medium term investment rational plan? Or do you embrace the Berkeley behavioral view of economic agents as myopic?

Some farmers have subsidized crop insurance (nearly all in the U.S. do). But I don't think insurance much affects production choices at all. Futures markets seem to “work” pretty well and could be influenced by anticipated climate change.  We actually use a full-blown rational expectations model to estimate how much they might be affected by anticipated climate change right now: about 2% higher than they otherwise would be. 

Do I think people are myopic? Very often, yes.  Do I think markets are myopic?  By and large, no, but maybe sometimes.  I believe less in bubbles than Robert Shiller, even though I'm a great admirer of his work.  Especially for commodity markets (if not the macroeconomy), I think rational expectations models are a good baseline for thinking about commodity prices, very much including food commodity prices.  And I think rational expectations models can have other useful purposes, too.  I actually do think the Lucas enterprise has created some useful tools, even if I find the RBC center of macro more than a bit delusional.

I think climate and anticipated climate change will affect output (for good and bad), which will affect prices, and that prices will affect what farmers plant, where they plant it, and trade.  But none of this, I would argue, is what economists conventionally refer to as adaptation.  A little more on response to price below...

Again, my beef with the field right now is that we are too blasé about the miracle of adaptation.  It’s easy to tell horror stories that the data cannot refute.  Much of the economist tribe won’t look there—it feels taboo.  JPE won’t publish such an article. We have blinders on when uncertainty is our greatest enemy.

Question 3. If specific farmers at specific locations suffer, why won't farming move to a new area with a new comparative advantage? How has your work made progress on the "extensive margin" of where we grow our food in the future?

The vast majority of arable land is already cropped.  That which isn’t is in extremely remote and/or politically difficult parts of Africa.  Yes, there will be substitution and shifting of land.  But these shifts will come about because of climate-induced changes in productivity.  In other words, first-order intensive margin effects will drive second-order extensive margin effects.  The second order effects—some land will move into production, some out--will roughly amount to zero.  That’s what the envelope theorem says.  To a first approximation, adaptation in response to climate change will have zero aggregate effect, not just with respect to crop choice, but with respect to other management decisions as well.  I think Nordhaus himself made this point a long time ago.
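The envelope-theorem argument above can be written in one line. If profit is $\pi(c, x)$, where $c$ is climate and $x$ is the vector of management choices (crop choice, planting dates, and so on), and $x^{*}(c)$ is chosen optimally, then the first-order condition $\partial\pi/\partial x = 0$ kills the adaptation term:

```latex
% To first order, the welfare effect of a climate shift runs entirely
% through the direct productivity channel, not through re-optimized choices.
\[
  \frac{d}{dc}\,\pi\bigl(c, x^{*}(c)\bigr)
  = \underbrace{\frac{\partial \pi}{\partial c}}_{\text{direct effect}}
  + \underbrace{\frac{\partial \pi}{\partial x}}_{=\,0 \text{ at } x^{*}}
    \cdot \frac{d x^{*}}{d c}
  = \frac{\partial \pi}{\partial c}.
\]
```

That is, re-optimization only matters at second order, which is why adaptation is "roughly zero" to a first approximation.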

However, there will also be intensive and extensive margin responses to prices.  Those will be larger than zero.  But I think the stylized facts about commodity prices (from the rational expectations commodity price model, plus other evidence) tell us that supply and demand are extremely inelastic.

Question 4. The key agriculture issue in adapting to climate change appears to be reducing international trade barriers and improving storage and reducing geographic trade costs. Are you pessimistic on each of these margins? Container ships and refrigeration keep getting better, don't they?

I think storage will improve, because almost anyone can do it, and there’s a healthy profit motive.  It’s a great diversification strategy for deep-pocketed investors. I think many are already into this game and more will get into it soon.  Greater storage should quell a good share of the greater volatility, but it actually causes average prices to rise, because there will be more spoilage.  But I’m very “optimistic” if you will, about the storage response.  I worry some that the storage response will be too great.

But I’m pretty agnostic to pessimistic about everything else.  Look what happened in earlier food price spikes.  Many countries started banning exports.  It created chaos and a huge “bubble” (not sure if it was truly rational or not) in rice prices.  Wheat prices, particularly in the Middle East, shot up much more than world prices because governments could no longer maintain the subsidized floors. As times get tougher, I worry that politics and conflict could turn crazy.  It’s the crazy that scares me.  We’ve had a taste of this, no?  The Middle East looks much less stable post food price spikes than before. I don’t know how much food prices are to blame, but I think they are a plausible causal factor.

Question 5. With regards to my Climatopolis work, recall that my focus is the urbanized world. The majority of the world already live in cities and cities have a comparative advantage in adapting to climate conditions due to air conditioning, higher income and public goods investments to increase safety.

To be fair: I’m probably picking on the wrong straw man.  What’s bothering me these days has much less to do with your book and more to do with the papers that come across my desk every day.  I think people are being sloppy and a bit closed-minded, and yes, perhaps even tribal.  I would agree that adaptation in rich countries is easier.  Max Auffhammer has a nice new working paper looking at air conditioning in California: people will use air conditioning more, and people in some areas that don’t currently have air conditioners will install them--that's adaptation.  This kind of adaptation will surely happen, and is surely good for people but bad for energy conservation.  It’s a really neat study backed by billions of billing records.   But the adaptation number—an upper bound estimate—is small.

I thought of you and your book because people at AAEA were making some of the same arguments you made, and because you’re a much bigger fish than most of the folks in my little pond.  Also, I think your book embodies many economists’ perhaps correct, but perhaps gravely naïve, what-me-worry attitude.

Why Science for Climate Adaptation is Difficult

Matthew Kahn, author of the cheeky book Climatopolis: How Our Cities will Thrive in the Hotter Future, likes to compliment our research (Schlenker and Roberts, 2009) on potential climate impacts to agriculture by saying it will cause valuable innovation that will prevent its dismal predictions from ever occurring.

Matt has a point, one that has been made many times in other contexts by economists with Chicago School roots.  Although in Matt’s case (and most all of the others), it feels more like a third stage of denial than a serious academic argument.

It’s not just Matt.  Today, the serious climate economist (or Serious?) is supposed to write about adaptation.  It feels taboo to suggest that adaptation is difficult.  Yet, the conventional wisdom here is almost surely wrong.  Everyone seems to ignore or miscomprehend basic microeconomic theory: adaptation is a second or higher-order effect, probably as ignorable as it is unpredictable.

While the theory is clear, the evidence needs to be judged on a case-by-case basis. Although it seems to me that much of the research so far is either flawed or doesn’t measure adaptation at all.  Instead it confounds adaptation—changes in farming and other activities due to changes in climate—with something else, like technological change that would have happened anyway, response to prices, population growth or other factors.

For example, some farmers may be planting earlier or later due to climate change.   They may also be planting different crops in a few places. But farmers are also changing what, when and where they plant due to innovation of new varieties that would have come about even if Spring weren’t coming a little earlier.  The effects of climate change on farm practices are actually mixed, and in the big picture, look very small to me, at least so far.

The other week at the AAEA meetings in San Francisco, our recent guest blogger Jesse Tack was reminding me of Matt’s optimistic views, and in the course of our ensuing conversations about some of his current research, it occurred to me just why crop science surrounding climate-related factors is so difficult. The reason goes back to the struggles of early modern crop science and the birth of modern statistics and hypothesis testing, all of which probably ushered in the Green Revolution.

How’s all that?  Well, modern statistical inference and experimental design have some earlier roots, but most of it can be traced to two works, Statistical Methods for Research Workers and The Arrangement of Field Experiments, both written by Ronald Fisher in the 1920s. Fisher developed his ideas while working at Rothamsted, one of the oldest crop experiment stations in the world.  In 1919 he was hired to study the vast amount of data collected since the 1840s, and concluded that all the data was essentially useless because all manner of events affecting crop yields (mostly weather) had hopelessly confounded the many experiments, which were unrandomized and uncontrolled. It was impossible to separate signal from noise. Drawing scientific inferences and quantifying uncertainties would require randomized controlled trials, and some new mathematics, which Fisher then developed.  Fisher’s statistical techniques, combined with his novel experimental designs, practically invented modern science. It’s no surprise, then, that productivity growth in agriculture accelerated a decade or two later.

So what does this have to do with adaptation?  Well, the crux of adaptation involves higher-order effects: the interaction of crop varieties, practices and weather.  It’s not about whether strain X has a typically higher yield than strain Y.  It’s about the relative performance of strain X and strain Y across a wide range of weather conditions.

Much like in the early days of modern science, this can be very hard to measure because there’s so much variability in the weather and other factors. Scientists cannot easily intervene to control temperature and CO2 like they can varieties and crop practices.  And when they do, other experimental conditions (like soil moisture) are usually carefully controlled such that no water or pest stresses occur.  Since these other factors are also likely influenced by warming temperatures (like VPD-induced drought, also here), it’s not really clear whether these experiments tell us what we need to know about the effects of climate change.

(An experiment with controlled temperatures and CO2 concentrations)

Then, of course, there is the curse of dimensionality.  Measuring the interactions of practices, temperature and CO2 requires experimentation on a truly grand scale.   If we constrain ourselves to actual weather events, in most parts of the world we have only one crop per year, so the data accumulate slowly, will be noisy, and discerning cause and effect becomes basically impossible. In the end, it’s not much different from Ronald Fisher trying to discern truth from his pre-1919 experiment station data that lacked randomly assigned treatments and controls.
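Some toy arithmetic makes the dimensionality point concrete (all of these counts are hypothetical):

```python
# A fully crossed trial over varieties, practices, temperature and CO2
# levels, with replication, multiplies out quickly -- far beyond what
# one-crop-per-year field experiments can deliver.
varieties, practices, temp_levels, co2_levels, replicates = 20, 5, 4, 3, 4
cells = varieties * practices * temp_levels * co2_levels * replicates
print(cells)  # 4800 plots for a single fully crossed experiment
```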

I would venture to guess that these challenges in the agricultural realm likely apply to other areas as well.

So, given the challenges, the high cost, and basic microeconomic prediction that adaptation is a small deal anyway, how much should we actually spend on adaptation versus prevention?

Tuesday, June 9, 2015

Effect of warming temperatures on US wheat yields (Guest post by Jesse Tack)

This post discusses research from a paper coauthored with Andrew Barkley and Lanier Nalley in the Proceedings of the National Academy of Sciences. The paper can be found here. We utilize Kansas field-trial data for dryland winter wheat yields. A major strength of this data is that we were able to match yield data with daily temperature observations across eleven locations for the years 1985-2013.

So, there is a lot of variation in the data, and we can accurately measure local temperature exposure. Max, Sol, Wolfram, and Adam Sobel have a nice paper on the importance of such accuracy here, and Wolfram has blogged on the importance of daily versus more aggregate (e.g. monthly) measures here.

Although not the main focus of our paper, we find that the frequency at which temperature exposures are measured has a large impact on simulated warming impacts (see the supplementary information here). Any stats geek – myself included – will tell you that accurate identification requires sufficient variation, and the more variation the better! Mike and Wolfram have some great posts on constructing temperature measures here and here.

We follow their prescribed method for interpolating temperature exposures and constructing degree days. However, it is still common in many empirical analyses to use minimum and maximum temperatures to construct a measure of average temperature and call it a day. Don’t do this! You are missing so much important variation in temperature exposure that can be measured using the interpolation approach outlined by Mike and Wolfram.
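As an illustration of why the interpolation matters, here is a sketch of the standard single-sine degree-day formula, which assumes temperature follows a sine curve between the daily minimum and maximum (this captures the spirit of the approach, not necessarily the exact implementation in the linked posts):

```python
import math

def degree_days_above(tmin, tmax, threshold):
    """Degree days above `threshold` for one day, single-sine method."""
    m = (tmax + tmin) / 2.0  # daily mean
    w = (tmax - tmin) / 2.0  # half the diurnal range
    if threshold >= tmax:
        return 0.0
    if threshold <= tmin:
        return m - threshold
    # Threshold crossed within the day: integrate the sine above it
    theta = math.asin((threshold - m) / w)
    return ((m - threshold) * (math.pi / 2 - theta)
            + w * math.cos(theta)) / math.pi

# Averaging tmin and tmax first throws away the extreme exposure entirely:
tmin, tmax = 20.0, 38.0
print(degree_days_above(tmin, tmax, 34.0))  # > 0: part of the day is above 34C
print(max((tmin + tmax) / 2 - 34.0, 0.0))   # = 0: the 29C average misses it
```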

Another consideration not often taken into account in climate change impact studies is that warming temperatures can have both positive and negative yield impacts. Extreme temperatures on both the low (cold) and high (heat) end of the temperature distribution are typically bad for crops. So if we think of warming as a shifting of the distribution to the right, the result is fewer of the former (positive effect) and more of the latter (negative effect).
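A quick sketch with a hypothetical temperature distribution (all numbers invented for illustration) shows both effects of a rightward shift at once:

```python
from statistics import NormalDist

# Illustrative growing-season daily temperatures, before and after a
# uniform +2C warming shift (same spread, higher mean)
baseline = NormalDist(mu=12.0, sigma=9.0)
warmed = NormalDist(mu=14.0, sigma=9.0)

freeze_before = baseline.cdf(0.0)      # share of days below freezing
freeze_after = warmed.cdf(0.0)
heat_before = 1 - baseline.cdf(34.0)   # share of days above 34C
heat_after = 1 - warmed.cdf(34.0)

print(f"freeze share: {freeze_before:.3f} -> {freeze_after:.3f}")  # falls
print(f"heat share:   {heat_before:.4f} -> {heat_after:.4f}")      # rises
```

Fewer freezing days (a benefit) and more extreme-heat days (a harm); the net effect depends on the relative weight of the two tails, which is the empirical question the paper tackles.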

So what? Well, we find that the net warming impact is negative for winter wheat in Kansas (more heat trumps less freeze), but omitting the beneficial effects of freeze reduction leads to vastly overestimated impacts (Figure 1).

Figure 1. Predicted warming impacts under alternative uniform temperature changes across the entire Fall-Winter-Spring growing season. Impacts are reported as the percentage change in yield relative to historical climate. The preferred model includes the effects from a reduction in freezing temperatures, while the alternative holds freeze effects at zero. Bars show 95% confidence intervals using standard errors clustered by year and variety.

The upshot here is that an accurate identification of warming impacts for winter wheat requires accounting for both ends of the temperature distribution. It would be interesting to know if this finding applies to other crops as well.

An additional strength of our data is that we observe 268 wheat varieties in-sample, which allows us to estimate heterogeneous heat resistance. As with other crops, winter wheat has experienced a steady increase in yields over time due to successful breeding efforts. Much of this increase is driven by a lengthened grain-filling stage, which increases yield potential under ideal weather conditions but introduces additional susceptibility to high temperature exposure during this critical period. David has some great posts on evolving weather sensitivities here, here, and here.

Essentially, if this line of reasoning holds, we should expect to see a tradeoff between average yields and heat resistance across varieties. We group varieties by the year in which they were released to the public and allow the effect of extreme heat to vary across this grouping. [Aside: there are practical reasons why we group by release year that are discussed in the paper; we are experimenting with other grouping schemes in ongoing projects.]

We find that there does indeed exist a tradeoff between heat resistance and average yield, with higher yielding varieties less able to resist temperatures above 34°C (Figure 2). If the least resistant variety is switched to the most resistant variety, average yield is reduced by 6.6% and heat resistance is increased by 17.1%. We also find that newer varieties are less heat resistant than older varieties. Linear regressions using estimates for the 268 varieties indicate that these relationships are statistically significant (P-values < 0.05).

Figure 2. Mean (average) yields and heat resistance are summarized by release year. Heat resistance is measured as the percentage impact on mean yield from an additional degree day above 34°C. The smaller the number in absolute value the more heat resistant the variety is.

These findings point to a need for future breeding efforts to focus on heat resistance, and there is currently much work being done in this area. Check out the Kansas State University Wheat Genetics Resource Center (WGRC) and the International Maize and Wheat Improvement Center (CIMMYT) here and here.

From a historical perspective, our results indicate that such advancements will likely come at the expense of higher average yields. However, there is potentially a huge upside to developing a new variety that combines high yields with improved heat resistance. Under such a scenario, reduced freeze exposure could outweigh increased heat, leading to a net positive warming effect.

In the absence of such a silver bullet variety, the average-yield/heat-resistance tradeoff presents an interesting challenge for producer adaptation, which will ultimately be driven by some economic decision-making process. Producers are individuals, or families, and as such they have a certain tolerance for exposing themselves to risk. Much work has been done showing that farmers enjoy smoothing their consumption over time, which is akin to reducing profit variation. Farrell Jensen and Rulon Pope have a nice paper on this here.

So from a climate change adaptation perspective, it is important to ask whether producers prefer a variety that offers high average yield but low heat resistance, or a variety with lower average yields coupled with high resistance. Are there important risk preference differences across producers, or are they a fairly homogeneous group? Currently, we don’t have a firm answer for these pertinent questions.

There has been much work in the agricultural economics literature on risk preference heterogeneity and the extent to which producers will trade off average yield for a reduction in yield variance. However, yield variance captures deviations both above and below the average, which might not be the relevant measure of risk under a warming climate since we are largely concerned with negative (i.e. downside) yield effects.

Martin Weitzman refers to this as fat-tailed uncertainty, and has done some really interesting work in this area (e.g. here). Jean Paul Chavas and John Antle are agricultural economists that seem to be working in this direction using the partial moments framework that John developed, see here, here, and here.

Knowledge about the willingness of producers to trade off yield for risk reduction should clearly be an important focus of future breeding efforts. Historically, plant physiologists and geneticists have worked independent of agricultural economists, but this should change as climate change presents a clear need for well-conceived interdisciplinary research.

In closing, it is worth pointing out that public policy will also likely have a strong effect on the welfare implications for producers under warming. Direct funding support for research provides one linkage, but another often overlooked linkage arrives in the form of subsidized agricultural production. For example, do policies that protect producers against large-scale crop losses provide a disincentive to adopt heat resistant varieties? Wolfram and Francis Annan have looked at this issue here and find that U.S. corn and soybean producers’ adaptation potential is skewed by government programs, in turn implying that producers will choose subsidized yield guarantees over costly adaptation measures.

Thus, even if we come to know what the optimal adaptation path is, it is not clear how we will get there. Economists love to talk of the unintended consequences of public policy. Sometimes it seems that every good policy has a dark side. It’s called the dismal science for a reason ;-)   

Monday, May 11, 2015

Introducing SCYM

In the hopes of figuring out how to raise crop yields or farmer incomes around the world, it would be really nice if we had a quick and accurate way of actually measuring yields for individual fields. That has motivated a lot of work over the years on using satellite data, and we have a paper out this week describing another step in that direction.

As I see it there are three main ingredients needed for yield remote sensing to be successful on a meaningful scale. One is the raw data. As Marshall’s recent post explained, there are several new satellite data providers that are really transforming our ability to track individual fields, even in smallholder areas.

Second is the ability to process the data at scale. Five years ago, for example, I would have to hire a research assistant to download imagery, make sure it was geometrically and radiometrically calibrated (i.e. properly lined up and in meaningful units), and then apply whatever algorithms we had. That just didn’t scale very well, in terms of labor or on-site data storage or processing. When a collaborator would ask “could you produce yield estimates for my study area,” I would have to think about how many weeks or months of work that would entail. But a couple of years ago I was introduced to Google’s Earth Engine, which is “a planetary-scale platform for environmental data & analysis.” In practical terms, it means that they have a lot of geospatial data (including all historical Landsat imagery), a lot of built-in algorithms for processing, and an interface to run your own code on the data and visualize or save the output. Part of why it works is that data providers, like the USGS for Landsat, have gotten better at providing well calibrated data. Earth Engine is very cool, and the more I’ve worked with it, the more I can see how this transforms our ability to extract value out of data already collected.

Third, and arguably the rate-limiting step nowadays, is to have algorithms that can translate satellite data into accurate yield estimates. It’s easy enough to do this if you have lots of ground data to calibrate to for a particular site, but that’s generally not scalable (unless people get clever about crowdsourcing ground “truth”). What seemed to be lacking was a very generic, scalable algorithm. So in the last 8 months or so we’ve been working to develop and test one idea about how to do this. I’m calling it a scalable satellite-based crop yield mapper (SCYM, pronounced “skim”), and a description of it has just been published in Remote Sensing of Environment. Conveniently, SCYM also stands for Steph Curry’s Your MVP.

The basic idea is that if you don’t have lots of ground data to calibrate a model, why not generate lots of fake ground data? Then for whatever combination of observations you actually have (say, for instance, satellite images on 2 or 3 specific days, and measures of daily weather), you can look into your fake data to see what the best-fit model is for predicting the desired variable (“yield”) from the measured predictors. The paper provides more detail, which I won’t bore readers with here. But to give a sense of the type of output, the animation below shows our maize yield estimates over part of Iowa for 2008-2013. Red areas are high yields, blue are low.
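To make the idea concrete, here is a minimal sketch of the train-on-simulations approach, with an entirely made-up toy "crop model" standing in for the real simulations (the variable names, coefficients, and the two-snapshot setup are all illustrative assumptions, not the paper's actual specification):

```python
# Sketch of the SCYM idea: rather than calibrating a yield regression on
# scarce ground data, train it on simulated pseudo-observations from a crop
# model, then apply it to real satellite inputs. The toy model below is
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def toy_crop_model(n):
    """Generate fake training data: (veg. index on two dates, heat) -> yield."""
    vi_day1 = rng.uniform(0.2, 0.9, n)   # vegetation index, early season
    vi_day2 = rng.uniform(0.2, 0.9, n)   # vegetation index, mid season
    heat = rng.uniform(0, 10, n)         # e.g. extreme degree days
    y = (2.0 + 2.0 * vi_day1 + 6.0 * vi_day2 - 0.3 * heat
         + rng.normal(0, 0.2, n))        # t/ha, made-up coefficients
    X = np.column_stack([np.ones(n), vi_day1, vi_day2, heat])
    return X, y

# "Train" on the simulated data via ordinary least squares
X, y = toy_crop_model(5000)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply to a (fake) observed field: two satellite snapshots plus weather
x_obs = np.array([1.0, 0.5, 0.8, 4.0])
predicted_yield = x_obs @ beta
```

The key design point is that the expensive step (the simulations) happens once, offline; prediction for any new field is just a dot product, which is what makes the approach easy to run at scale inside something like Earth Engine.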

The figure below shows a comparison between these and ground “truth” estimates for maize, which we take from the dataset described in a previous post.

The cool thing about this is that it’s quite generic. To illustrate that, we reran the model for soybeans, with results nearly as good as for maize.

Hopefully this type of thing will help make faster progress on understanding yields and farm productivity, and figuring out what actually works for improving them. One general lesson out of this for me is that sometimes making something really scalable requires scrapping an old approach. We had been previously running crop models for specific sites and years, but that wasn't possible within the Earth Engine system. I think SCYM (which trains a regression using simulations over lots of sites and years) is more robust than what we had, and along with the new satellite data and Earth Engine-type systems, it might just provide a way to do yield mapping at scale.

Wednesday, April 22, 2015

The inconsistent farmer

One function of this blog, other than to raise the level of guilt in Sol’s life (we are still waiting for his March post -- from 2013), is to help us work through ideas that are possibly wrong, possibly unoriginal, or very likely both at the same time. So here’s one idea I’d welcome feedback on. Let’s call it the “inconsistent farmer” problem. 

In the early days of work on climate change and agriculture, the notion that modelers were being too pessimistic in how they treated farmers’ ability to adapt was captured by the phrase “dumb farmer.” Of course, the idea was not that farmers are actually dumb, but that modelers were treating them as such. Modelers would simulate a “reference” farmer without climate change, and then that same farmer with the same exact practices and crops but with climate change. The idea of calling this a “dumb farmer” is that a real (i.e. smart) farmer would notice the change in climate and adjust. Obviously, a lot of work since has added simulations with hypothetical adjustments.

But let’s revisit the basic setup, in terms of the “reference” farmer. Generally speaking these were meant to characterize the current crops and practices at the time of the study. But the farmer in question was being exposed to some future climate, say of 2050 or 2080. So even the reference farmer was in some sense “dumb” or “backwards” in that 50 years had passed and they were still using the cropping systems of circa 2000.

All this is probably old hat for anyone who has read the literature. But what seems to go less noticed is that the impact models that then use the yield impacts derived from crop models are generally assuming some exogenous yield trend. For example, the recent AgMIP papers have some scenarios out to 2050, with the assumed yield increases summarized below in the table from Nelson et al.

So on the one hand the crop models assume current farmers, and on the other hand the economic impact models assume the more sophisticated future farmer. A farmer can’t be both things at the same time, so we have the “inconsistent farmer.”

Now why would anyone possibly care about this? I’d say for two big reasons. The first is that future technologies could have a very different sensitivity to climate than current ones. This is the idea behind previous posts here and here and here, so I won’t spend much time on it here. But there is some new evidence along these lines, such as the studies on soybean here and here that show modern cultivars are more sensitive to hot weather, like what we saw for wheat. For example, below is one plot for soybean from Rincker et al. showing genetic yield gains (comparing newer vs. older varieties) as a function of the favorability of the environment. The stronger yield trends in good conditions means that the difference between good and bad growing conditions is bigger for the newer cultivars, at least in absolute terms.

Second, and maybe more important, is that there is potentially a lot of double counting going on when people examine adaptation. Or put differently, there’s a lot of overlap between the types of things that explain “exogenous” yield trends, and the types of things that crop modelers use as “adaptation” in their models. For example, I was recently in Eastern India looking at various strategies to get wheat sown earlier. When you ask farmers what the benefits of sowing early are, they generally tell you it's because wheat doesn't like the hot spring so yields are higher if you sow earlier. If you ask them whether the spring weather has been changing, they generally say it's getting warmer. But if you ask them if they are doing anything different because of that warming, they generally say no, they just get lower yields. They don't view the earlier sowing as a benefit specific to climate change. It's a change that would help them anyway.

This idea of double counting is similar to the notion of adaptation illusions that I wrote about earlier. But it depends on the degree to which these “adaptive” measures are already part of the baseline “exogenous” yield trends. To get at that, it’s important to really understand not just the types of things being considered as adaptations but also the source of recent yield growth and the likely drivers of future yield growth. And if the latter are going to be a big part of the “exogenous” trend, they should probably be out of bounds for modelers to incorporate as adaptations.  

I realize that a lot of this seems like semantic details. But I don’t think it is. My sense is that there are real risks of understating climate change impacts, either because we are specifying a reference scenario that uses cropping systems less sensitive to climate than their future descendants will be, or because we allow technologies to be called on to reduce impacts (i.e. adapt) when in fact they would already have been deployed in the “exogenous” reference. I suppose the first factor could also go the other way, in which case we would be overstating impacts because future crops will be less sensitive (e.g. more widely irrigated).

For the double counting, you can look at the types of adaptations that modelers employ and simply ask whether you really think these aren’t part of what will drive the “exogenous” yield trend. Drought tolerance, shifting sow dates, more irrigation and fertilizer – these are all things that have been important sources of recent yield growth and will continue to play a role in future trends. Below is a quick schematic to try to explain this point. The reference farmer is generally assumed to continue on a trajectory of yield growth, shown here as linear to keep it simple (green line). Climate change can then affect this trajectory, and often impacts are calculated both without and with adaptation (red and blue lines). But if one lists the types of things that are implicit in the "exogenous" trend, and then the things generally invoked as adaptations, there is a lot of overlap. These are good things, but in scoping out the prospects for future supply, we shouldn't count them twice.
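A toy calculation makes the accounting concern explicit. All numbers here are invented purely for illustration; the point is only the arithmetic of counting the same gain in two places:

```python
# Toy illustration of the double-counting concern: if the same technologies
# drive both the "exogenous" yield trend and the modeled "adaptation"
# benefit, adding them in both places overstates future supply.
base = 10.0            # current yield, t/ha (invented)
trend_gain = 3.0       # assumed exogenous gain by 2050 (better cultivars,
                       # sow dates, irrigation, fertilizer...)
climate_loss = 2.0     # modeled climate impact without adaptation
adapt_gain = 1.5       # modeled "adaptation" benefit...
overlap = 1.0          # ...of which this much is already in trend_gain

naive_2050 = base + trend_gain - climate_loss + adapt_gain
consistent_2050 = base + trend_gain - climate_loss + (adapt_gain - overlap)
double_counted = naive_2050 - consistent_2050   # exactly the overlap
```

In this toy example the naive projection comes out half a ton per hectare too optimistic, which is the sense in which inconsistent assumptions about the farmer understate climate change impacts.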

Wednesday, April 1, 2015

Discounting Climate Change Under Secular Stagnation

Ben Bernanke, former Chair of the Federal Reserve, has a new blog.  And he's writing about low interest rates and so-called secular stagnation, a pre-WWII phrase recently resurrected by Larry Summers.

The topic is dismal--hey, they're economists! But for those in the field it's a real hoot to see these titans of economic thought relieved of their official government duties and able to write openly about what they really think.

These two share many views, but Ben has a less dismal outlook than Larry.  Larry thinks we're stuck in a low-growth equilibrium, and low or even negative interest rates are here to stay without large, persistent fiscal stimulus.  Ben thinks this situation is temporary, if long-lived.  He writes:
I generally agree with the recent critique of secular stagnation by Jim Hamilton, Ethan Harris, Jan Hatzius, and Kenneth West. In particular, they take issue with Larry’s claim that we have never seen full employment during the past several decades without the presence of a financial bubble. They note that the bubble in tech stocks came very late in the boom of the 1990s, and they provide estimates to show that the positive effects of the housing bubble of the 2000’s on consumer demand were largely offset by other special factors, including the negative effects of the sharp increase in world oil prices and the drain on demand created by a trade deficit equal to 6 percent of US output. They argue that recent slow growth is likely due less to secular stagnation than to temporary “headwinds” that are already in the process of dissipating. During my time as Fed chairman I frequently cited the economic headwinds arising from the aftermath of the financial crisis on credit conditions; the slow recovery of housing; and restrictive fiscal policies at both the federal and the state and local levels (for example, see my August and November 2012 speeches.)
These are good points. But then Larry has a compelling response, too.  I particularly agree with Larry about the basic economic plausibility of persistent equilibrium real interest rates that are well below zero.  He writes:
Do Real Rates below Zero Make Economic Sense? Ben suggests not– citing my uncle Paul Samuelson’s famous observation that at a permanently zero or subzero real interest rate it would make sense to invest any amount to level a hill for the resulting saving in transportation costs.  Ben grudgingly acknowledges that there are many theoretical mechanisms that could give rise to zero rates. To name a few: credit markets do not work perfectly, property rights are not secure over infinite horizons, property taxes that are explicit or implicit, liquidity service yields on debt, and investors with finite horizons.
Institutional uncertainty seems like a big deal that can't be ignored when thinking about long-run growth and real interest rates (these are closely connected).  People are pessimistic about growth these days, for seemingly pretty good reasons.  Institutional collapse may be unlikely, but far from impossible.  Look at history.  If we think negative growth is possible, savings are concentrated at the top of the wealth distribution, and people are loss averse, it's not hard to get negative interest rates.

Still, I kind of think we'd snap out of this if we had a bit more fiscal stimulus throughout the developed world, combined with a slightly higher inflation target--say 3 or 4 percent.  But keep in mind I'm just an armchair macro guy.

The point I want to make is that these low interest rates, and the possibility of secular stagnation, greatly affect the calculus surrounding optimal investments to curb climate change.  The titans of environmental economics--Weitzman, Nordhaus and Pindyck--have been arguing about the discount rate we should use to weigh distant future benefits against near-future costs of abating greenhouse gas emissions.  They're arguing about this because the right price for emissions is all about the discount rate.  Everything else is chump change by comparison.

Nordhaus and Pindyck argue that we should use a higher discount rate and have a low price on greenhouse gas emissions.  Basically, they claim that curbing greenhouse gas emissions involves a huge transfer of wealth from current, relatively poor people to future supremely rich people.  And a lot of that conclusion comes from assuming 2%+ baseline growth forever. Weitzman counters that there's a small chance that climate change will be truly devastating, causing losses so great that the future may not be as well off as we expect.  Paul Krugman has a great summary of this debate.
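To see why the discount rate swamps everything else, consider a quick back-of-the-envelope present-value calculation. The damage figure and the two rates below are illustrative placeholders, not outputs of any actual integrated assessment model:

```python
# Present value of a fixed damage far in the future is extremely sensitive
# to the discount rate r: PV = D / (1 + r)^t. Numbers are illustrative.
def present_value(damage, rate, years):
    return damage / (1.0 + rate) ** years

damage_2100 = 1.0e12   # hypothetical $1 trillion damage, 85 years out
pv_high = present_value(damage_2100, 0.05, 85)  # higher, Nordhaus-style rate
pv_low = present_value(damage_2100, 0.01, 85)   # near-zero, Weitzman-style rate
ratio = pv_low / pv_high   # roughly a 27-fold difference in today's dollars
```

The same future damage is worth on the order of $16 billion today at 5 percent but over $400 billion at 1 percent, which is why the choice of discount rate, not the damage estimate, dominates the debate over the right emissions price.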

Anyway, it always bothered me that Nordhaus and Pindyck had so much optimism built into baseline projections.  Today's low interest rates and the secular stagnation hypothesis paint a different picture.  Quite aside from climate change, growth and real rates look lower than the 2% baseline many assume, and a lot more uncertain.  And that means Weitzman-like discount rates (near zero) make sense even without fat-tailed uncertainty about climate change impacts.