Wednesday, January 28, 2015

Food Waste Delusions

A couple months ago the New York Times convened a conference "Food for Tomorrow: Farm Better. Eat Better. Feed the World."  Keynotes predictably included Mark Bittman and Michael Pollan.  It featured many food movement activists, famous chefs, and a whole lot of journalists. Folks talked about how we need to farm more sustainably, waste less food, eat more healthfully and get policies in place that stop subsidizing unhealthy food and instead subsidize healthy food like broccoli.

Sounds good, yes? If you're reading this, I gather you're familiar with the usual refrain of the food movement.  They rail against GMOs, large farms, processed foods, horrid conditions in confined livestock operations, and so on.  They rally in favor of small local farms that grow food organically, raise free-range antibiotic-free livestock, diversify their operations, etc.  These are yuppies who, like me, like to shop at Whole Foods and frequent farmers' markets.  

This has been a remarkably successful movement.  I love how easy it has become to find healthy good eats, bread with whole grains and less sugar, and the incredible variety and quality of fresh herbs, fruits, vegetables and meat.  Whole Paycheck Foods Market has proliferated and profited wildly.  Even Walmart is getting into the organic business, putting some competitive pressure on Whole Foods. (Shhhh! --organic isn't necessarily what people might think it is.)

This is all great stuff for rich people like us. And, of course, profits.  It's good for Bittman's and Pollan's book sales and speaking engagements.  But is any of this really helping to change the way food is produced and consumed by the world's 99%?  Is it making the world greener or more sustainable?  Will any of it help to feed the world in the face of climate change?

Um, no.  

Sadly, there were few experts in attendance who could shed scientific or pragmatic light on the issues.  And not a single economist or true policy wonk in sight. Come on guys, couldn't you have at least invited Ezra Klein or Brad Plumer?  These foodie journalists at least have some sense of incentives and policy. Better, of course, would be to have some real agricultural economists who actually know something about large-scale food production and policies around the world. Yeah, I know: BORING!

About agricultural policies: there are a lot of really bad ones, and replacing them with good policies might help.  But a lot less than you might think from listening to foodies.  And, um, we do subsidize broccoli and other vegetables, fruits, and nuts.  Just look at the water projects in the West. 

Let me briefly take on one issue du jour: food waste.  We throw away a heck of a lot of food in this country, even more than in other developed countries.  Why?  I'd argue that it's because food is incredibly cheap in this country relative to our incomes.  We are the world's bread basket.  No place can match California productivity in fruit, vegetables and nuts.  And no place can match the Midwest's productivity in grains and legumes.  All of this comes from a remarkable coincidence of climate, geography and soils, combined with sophisticated technology and gigantic (subsidized) canal and irrigation systems in the West.  

Oh, we're fairly rich too.  

Put these two things together and, despite our waste, we consume more while spending less on food than any other country.  Isn't that a good thing?  Europeans presumably waste (a little) less because food is more scarce there, so people are more careful and less picky about what they eat. Maybe it isn't a coincidence that they're skinnier, too.

What to do? 

First, it's important to realize that there are benefits to food waste.  It basically means we get to eat very high quality food and can almost always find what we want where and when we want it.  That quality and convenience comes at a cost of waste.  That's what people are willing to pay for.  

If anything, foodism probably accentuates the preference for high quality, which in turn probably increases waste.  The food I see Mark Bittman prepare is absolutely lovely, and that's what I want.  Don't you?

Second, let's suppose we implemented a policy that would somehow eliminate a large portion of the waste.  What would happen?  Well, this would increase the supply of food even more.  And since we have so much already, and demand for food is very inelastic, prices would fall even lower than they are already.  And the temptation to substitute toward higher quality--and thus waste more food--would be greater still.  
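A back-of-the-envelope calculation shows how hard inelastic demand bites here (the elasticities and the size of the supply shift below are illustrative numbers I made up, not estimates):

```r
# With constant-elasticity supply and demand, an outward supply shift
# of s percent moves the equilibrium price by roughly
#   dP = -s / (e_s - e_d)  percent.
priceChange <- function(s, e_d, e_s) -s / (e_s - e_d)

# Say recovered waste adds 5% to supply, demand elasticity is -0.1,
# and supply elasticity is 0.2:
priceChange(5, e_d = -0.1, e_s = 0.2)   # about a 17% price drop
```

The more inelastic the demand, the bigger the price drop, and the stronger the temptation to trade up to higher quality.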

Could the right policies help?  Well, maybe.  A little. The important thing here is to have a goal besides simply eliminating waste.  Waste itself isn't a problem. It's not an externality like pollution.  That goal might be providing food for homeless or low-income families.  Modest incentive payments plus tax breaks might entice more restaurants, grocery stores and others to give food that might be thrown out to people who would benefit from it.  This kind of thing happens already, and it probably could be done on a larger scale. Even so, we're still going to have a lot of waste, and that's not all bad. 

What about correcting the bad policies already in place?  Well, water projects in the West are mainly sunk costs.  That happened a long time ago, and water rights, as twisted as they may be, are more or less cemented in the complex legal history.  Today, traditional commodity program support mostly takes the form of subsidized crop insurance, which is likely causing some problems.  The biggest distortions could likely be corrected with simple, thoughtful policy tweaks, like charging higher insurance premiums to farmers who plant corn after corn instead of corn after soybeans.  But mostly crop insurance just hands cash (unjustly, perhaps) to farmers and landowners.  And politicians ceasing to hand cash to farmers is about as likely as Senator James Inhofe embracing a huge carbon tax.  Not gonna happen.

But don't worry too much.  If food really does get scarce and prices spike, waste will diminish, because poorer hungry people will be less picky about what they eat.

Sorry for being so hard on the foodies.  While their hearts and forks are in the right places, I obviously think most everything they say and write is naive.  Still, I think the movement might actually do some good.  I like to see people interested in food and paying more attention to agriculture.  Of course I like all the good eats.  And I think there are some almost reasonable things being said about what's healthy and not (sugar and too much red meat are bad), even if what's healthy has little to do with any coherent strategy for improving environmental quality or feeding the world.  

But perhaps the way to change things is to first get everyone's attention, and I think foodies are doing that better than I ever could.

Saturday, January 17, 2015

The Hottest Year Ever Recorded, But Not in the Corn Belt

Here's Justin Gillis in his usual fine reporting of climate issues, and the map below from NOAA, via the New York Times.

Note the "warming hole" over the Eastern U.S., especially the upper Midwest, the all important corn belt region.  We had a bumper crop this year, and that's because while most of the world was remarkably warm, the corn belt was remarkably cool, especially in summer.

Should we expect the good fortune to continue?  I honestly don't know...

Monday, January 12, 2015

Growth Effects, Climate Policy, and the Social Cost of Carbon (Guest post by Fran Moore)

Thanks to my thesis advisor (David) for this opportunity to write a guest post about a paper published today in Nature Climate Change by my colleague at Stanford, Delavane Diaz, and me. G-FEED readers might be familiar with a number of new empirical studies suggesting that climate change might affect not just economic output in a particular year, but the ability of the economy to grow. Two studies (here and here) find connections between higher temperatures and slower economic growth in poorer countries and Sol has a recent paper showing big effects of tropical cyclones on growth rates. Delavane and I simply take one of these empirical estimates and incorporate it into Nordhaus’ well-known DICE integrated assessment model (IAM) to see how optimal climate policy changes if growth rates are affected by climate change.

The figure below shows why these growth effects are likely to be critical for climate policy. If a temperature shock (left) affects output, then there is a negative effect that year, but the economy rebounds the following year to produce no long-term effect. If growth rates are affected though, there is no rebound after the temperature shock and the economy is permanently smaller than it would otherwise be. So if temperature permanently increases (right), impacts to the growth rate accumulate over time to give very large impacts.
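The distinction is easy to see in a toy simulation (the growth rates and shock sizes here are my own illustrative numbers, not the paper's):

```r
years    <- 0:50
baseline <- 100 * 1.02^years          # economy growing at 2% per year

# Level effect: a one-off 5% output loss in year 10, then back on trend.
level <- baseline
level[years == 10] <- level[years == 10] * 0.95

# Growth effect: growth drops from 2% to 1.5% from year 10 onward.
growth <- baseline
for (t in which(years > 10)) growth[t] <- growth[t - 1] * 1.015

# By year 50 the level-shock economy is back on the baseline path,
# while the growth-shock economy is permanently smaller.
gap <- 1 - growth[length(growth)] / baseline[length(baseline)]
round(100 * gap, 1)   # about 18 percent smaller
```

A half-point growth hit, compounded over forty years, ends up dwarfing a one-time shock several times its size.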

No IAM so far has incorporated climate change impacts to economic growth. Of the three models used by the EPA to determine the social cost of carbon, two (PAGE and FUND) have completely exogenous growth rates. DICE is slightly more complicated because capital stocks are determined endogenously by the savings rate in the model. But any climate impacts on growth rates are very, very small and indirect, so DICE growth rates are effectively exogenous.

We take the 2012 estimate by Dell, Jones and Olken (DJO) as our starting point and modify DICE to incorporate their findings as accurately as we can. We needed to make three major changes: firstly, we split the global DICE model into two regions to represent developed and developing regions, because DJO find big growth effects in poor countries but only modest effects in rich ones; secondly, we allowed temperature to directly affect growth rates by affecting either the growth in total factor productivity or the depreciation of capital, calibrating the model to DJO; and finally, since DJO estimates are the short-run impact of weather fluctuations, we explicitly allow for adaptation in order to get the response to long-term climate change (making some fairly optimistic assumptions about how quick and effective adaptation will be).

The headline result is given in the graph below that shows the welfare-maximizing emissions trajectory for our growth-effects model (blue) and for our two-region version of the standard DICE model (red). DICE-2R shows the classic “climate policy ramp” where mitigation is increased only very gradually, allowing emissions to peak around 2050 and warming of over 3°C by 2100. But our growth-effects model gives an optimal mitigation pathway that eliminates emissions in the very near future in order to stabilize global temperatures well below 2°C.

I think it's worth just pointing out how difficult it is to get a model based on DICE to give a result like this. The “climate policy ramp” feature of DICE output is remarkably stable – lots of researchers have poked and prodded the various components of DICE without much result. Until now, the most widely discussed ways of getting DICE to recommend such rapid mitigation were either using a very low discount rate (a la Stern) or assuming hypothetical, catastrophic damages at high temperatures (a la Weitzman). One of the main reasons I think our result is interesting is that it shows the climate policy ramp finding breaks down in the face of damages calibrated to empirical results at moderate temperatures, even including optimistic adaptation assumptions and standard discounting.

There are a bunch more analyses and some big caveats in the paper, but I won’t go into most of them here in the interests of space. One very important asterisk though is that the reason why poor countries are more sensitive to warming than rich countries has a critical impact on mitigation policy. If poorer countries are more vulnerable because they are poor (rather than because they are hot), then delaying mitigation to allow them time to develop could be better than rapid mitigation today. We show this question to be a big source of uncertainty and I think it’s an area where some empirical work to disentangle the effect of temperature and wealth in determining vulnerability could be pretty valuable.

I’ll just conclude with some quick thoughts that writing this paper has prompted about the connection between IAMs and the policy process. It does seem very surprising to me that these IAMs have been around for about 20 years and only now is the assumption of exogenous economic growth being questioned. Anyone with just a bit of intuition about how these models work would guess that growth-rate impacts would be hugely important (for instance, one of our reviewers called the results of this paper ‘obvious’), yet as far as I can tell the first paper to point out this sensitivity was just published in 2014 by Moyer et al. This is not just an academic question because these models are used directly to inform the US government’s estimate of the social cost of carbon (SCC) and therefore to evaluate all kinds of climate and energy regulations. The EPA tried to capture possible uncertainties in its SCC report but didn’t include impacts to economic growth, and so came up with a distribution over the SCC that has to be too narrow: our estimate of the SCC in 2015 of $220 per ton CO2 is not only 6 times larger than the EPA’s preferred estimate of $37, but is almost twice the “worst case” estimate of $116 (based on the 95th percentile of the distribution). So clearly an important uncertainty has been missing, which seems a disservice both to climate impact science and to the policy process it seeks to inform. Hopefully that is starting to change.

So that’s the paper. Thanks again to the G-FEEDers for this opportunity and I’m happy to answer any questions in the comments or over email.
-Fran

Saturday, January 10, 2015

Searching for critical thresholds in temperature effects: some R code

If Google Scholar is any guide, my 2009 paper with Wolfram Schlenker on the nonlinear effects of temperature on crop outcomes has had more impact than anything else I've been involved with.

A funny thing about that paper: Many reference it, and often claim that they are using techniques that follow that paper.  But in the end, as far as I can tell, very few seem to actually have read through the finer details of that paper or try to implement the techniques in other settings.  Granted, people have done similar things that seem inspired by that paper, but not quite the same.  Either our explication was too ambiguous or people don't have the patience to fully carry out the technique, so they take shortcuts.  Here I'm going to try to make it easier for folks to do the real thing.

So, how does one go about estimating the relationship plotted in the graph above?

Here's the essential idea:  averaging temperatures over time or space can dilute or obscure the effect of extremes.  Still, we need to aggregate, because outcomes are not measured continuously over time and space.  In agriculture, we have annual yields at the county or larger geographic level.  So, there are two essential pieces: (1) estimating the full distribution of temperatures of exposure (crops, people, or whatever) and (2) fitting a curve through the whole distribution.

The first step involves constructing the distribution of weather. This was most of the hard work in that paper, but it has since become easier, in part because finely gridded daily weather is available (see PRISM) and in part because Wolfram has made some STATA code available.  Here I'm going to supplement Wolfram's code with a little bit of R code.  Maybe the other G-FEEDers can chime in and explain how to do this stuff more easily.

First step:  find some daily, gridded weather data.  The finer scale the better.  But keep in mind that data errors can cause serious attenuation bias.  For the lower 48 since 1981, the PRISM data above is very good.  Otherwise, you might have to do your own interpolation between weather stations.  If you do this, you'll want to take some care in dealing with moving weather stations, elevation and microclimatic variations.  Even better, cross-validate interpolation techniques by leaving one weather station out at a time and seeing how well the method works. Knowing the size of the measurement error can also help correct bias.  Almost no one does this, probably because it's very time consuming... Again, be careful, as measurement error in weather data creates very serious problems (see here and here).

Second step:  estimate the distribution of temperatures over time and space from the gridded daily weather.  There are a few ways of doing this.  We've typically fit a sine curve between the minimum and maximum temperatures to approximate the time at each degree in each day in each grid, and then aggregate over grids in a county and over all days in the growing season.  Here are a couple R functions to help you do this:

# This function estimates time (in days) when temperature is
# between t0 and t1 using sine curve interpolation.  tMin and
# tMax are vectors of day minimum and maximum temperatures over
# range of interest.  The sum of time in the interval is returned.
# noGrids is number of grids in area aggregated, each of which
# should have exactly the same number of days in tMin and tMax
days.in.range <- function( t0, t1, tMin, tMax, noGrids ){
  n  <- length(tMin)
  t0 <- rep(t0, n)
  t1 <- rep(t1, n)
  t0[t0 < tMin] <- tMin[t0 < tMin]
  t1[t1 > tMax] <- tMax[t1 > tMax]
  u <- function(z, ind) (z[ind] - tMin[ind])/(tMax[ind] - tMin[ind])
  outside <- t0 > tMax | t1 < tMin
  inside  <- !outside
  # sine-curve time spent in [t0, t1] on days that overlap the interval
  time.at.range <- ( 2/pi )*( asin(u(t1, inside)) - asin(u(t0, inside)) )
  return( sum(time.at.range)/noGrids )
}

# This function calculates all 1-degree temperature intervals for 
# a given row (fips-year combination).  Note that nested objects
# must be defined in the outer environment.
aFipsYear <- function(z){
  afips    = Trows$fips[z]
  ayear    = Trows$year[z]
  tempDat  = w[ w$fips == afips & w$year==ayear, ]
  Tvect = c()
  for ( k in 1:nT ) Tvect[k] = days.in.range(
              t0   = T[k]-0.5, 
              t1   = T[k]+0.5, 
              tMin = tempDat$tMin, 
              tMax = tempDat$tMax,
              noGrids = length( unique(tempDat$gridNumber) )
              )
  c( afips, ayear, Tvect )
}
The first function estimates time in a temperature interval using the sine curve method.  The second function calls the first function, looping through a bunch of 1-degree temperature intervals, defined outside the function.  A nice thing about R is that you can be sloppy and write functions like this that use objects defined outside of the environment. A nice thing about writing the function this way is that it's amenable to easy parallel processing (look up 'foreach' and 'doParallel' packages).

Here are the objects defined outside the second function:

w       # weather data that includes a "fips" county ID, "gridNumber", "tMin" and "tMax".
        #   rows of w span all days, fips, years and grids being aggregated
Trows   # = expand.grid( fips.index, year.index ), rows span the aggregated data set
T       # a vector of integer temperatures.  I'm approximating the distribution with 
        #   the time in each degree in the index T
nT      # = length(T), the number of 1-degree temperature intervals

To build a dataset, call the second function above for each fips-year in Trows and rbind the results.
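As a quick sanity check on the aggregation, here is a self-contained toy version (it re-defines a compact variant of the sine-curve function, here called timeInBin -- my name, not from our code -- and fakes one county-year of weather on two grids).  The times across all bins should add up to the number of days per grid:

```r
# Compact variant of the sine-curve interpolation for one bin [t0, t1].
timeInBin <- function(t0, t1, tMin, tMax, noGrids) {
  lo <- pmin(pmax(t0, tMin), tMax)   # clip the bin to [tMin, tMax]
  hi <- pmax(pmin(t1, tMax), tMin)
  u  <- function(z) (z - tMin) / (tMax - tMin)
  sum((2 / pi) * (asin(u(hi)) - asin(u(lo)))) / noGrids
}

# Fake weather: one county-year, two grids, three days per grid.
tempDat <- data.frame(
  gridNumber = rep(1:2, each = 3),
  tMin = c(10, 12, 11, 9, 13, 12),
  tMax = c(24, 28, 26, 25, 27, 29)
)

T <- 0:45   # 1-degree bins, as in the text
Tvect <- sapply(T, function(k)
  timeInBin(k - 0.5, k + 0.5, tempDat$tMin, tempDat$tMax,
            noGrids = length(unique(tempDat$gridNumber))))

sum(Tvect)  # = 3: the bins together account for every day, per grid
```

If that sum doesn't come out to the number of days per grid, something is wrong with the binning or the clipping.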

Third step:  To estimate a smooth function through the whole distribution of temperatures, you simply need to choose your functional form, linearize it, and then cross-multiply the design matrix with the temperature distribution.  For example, suppose you want to fit a cubic polynomial and your temperature bins run from 0 to 45 C.  The design matrix would be:

D = [   0      0      0
        1      1      1
        2      4      8
       ...    ...    ...
       45   2025  91125 ]

These days, you might want to do something fancier than a basic polynomial, say a spline. It's up to you.  I really like restricted cubic splines, although they can over-smooth around sharp kinks, which we may have in this case. We have found piecewise linear works best for predicting out of sample (hence all of our references to degree days).  If you want something really flexible, just make D an identity matrix, which effectively becomes a dummy variable for each temperature bin (the step function in the figure).  Whatever you choose, you will have a (T x K) design matrix, with K being the number of parameters in your functional form and T=46 (in this case) temperature bins. 
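For the cubic-polynomial case above, building D and cross-multiplying takes only a couple of lines (the frequency matrix here is random filler, just to show the dimensions):

```r
Tbins <- 0:45
D <- cbind(Tbins, Tbins^2, Tbins^3)   # 46 x 3 design matrix
D[46, ]                               # 45  2025  91125

# Fake frequency distributions for 2 county-years: time in each bin.
freq <- matrix(runif(2 * 46), nrow = 2)
X <- freq %*% D                       # 2 x 3 covariate matrix
```

The rows of X are then the regressors for each county-year, exactly as in the spline example below.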

To get your covariates for your regression, simply cross multiply D by your frequency distribution.  Here's a simple example with restricted cubic splines:

library(Hmisc)  # rcspline.eval comes from the Hmisc package
DMat <- rcspline.eval(0:45)
XMat <- as.matrix(TemperatureData[,3:48])%*%DMat
fit <- lm(yield~XMat, data=regData)

Note that regData has the crop outcomes.  Also note that we generally include other covariates, like total precipitation during the season,  county fixed effects, time trends, etc.  All of that is pretty standard.  I'm leaving that out to focus on the nonlinear temperature bit. 

Anyway, I think this is a cool and fairly simple technique, even if some of the data management can be cumbersome.  I hope more people use it instead of just fitting to shares of days with each maximum or mean temperature, which is what most people following our work tend to do.  

In the end, all of this detail probably doesn't make a huge difference for predictions.  But it can make estimates more precise and confidence intervals narrower.  And I think that precision also helps in pinning down mechanisms.  For example, I think this precision helped us to figure out that VPD and associated drought was a key factor underlying observed effects of extreme heat.

Monday, December 22, 2014

Prettiest pictures of 2014

The next person that says "big data" puts a fiver in the "most overused terms in meetings in the year 2014" jar. I am excited about the opportunities of ever larger micro datasets, but even more thrilled by how much thought is going into the visualization of these datasets. One of my favorite macho nerd blogs, Gizmodo, just put up a number of the 2014 best data visualizations. If you also think that these are oh so purdy, come take Sol's class at GSPP where he will teach you how to make graphs worthy of this brave new world.

Source: Gizmodo. 

Predictions, paradigms, and paradoxes

Failure can be good. I don’t mean in the “learn from your mistakes” kind of way. Or in the “fail year after year to even put up a good fight in the Big Game” kind of way. But failure is often the sidekick of innovation and risk-taking. In places with lots of innovation, like Silicon Valley, failure doesn’t have the stigma it has in other places. Because people understand that being new and different is the best way to innovate, but also the best way to fail.

Another area where failure can be really useful is in making predictions. Not that bad predictions are useful in themselves, but they can be useful if they are bad in different ways than other predictions. Then averaging predictions together can result in something quite good. The same way that a chorus can sound good even if each person is singing off key.
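The chorus point is easy to simulate (all numbers here are invented): thirty "models" of a true -5% yield response, each off key in its own direction, still produce a median close to the truth.

```r
set.seed(1)
truth <- -5
bias  <- rnorm(30, mean = 0, sd = 2)   # each model's own systematic bias
preds <- truth + bias + rnorm(30, sd = 1)

median(preds)             # close to -5
mean(abs(preds - truth))  # a typical individual model misses by much more
```

The median only works because the biases point in different directions; if every model shared the same bias, no amount of averaging would help.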

We have a paper out today that provides another example of this, in the context of predicting wheat yield responses to warming. By “we” I mean a group of authors that contributed 30 different models (29 “process-based” models and 1 statistical model by yours truly), led by Senthold Asseng at U Florida. I think a few of the results are pretty interesting. For starters, the study used previously unpublished data that includes experiments with artificial heating, so the models had to predict not only average yields but responses to warming when everything else was held constant. Also, the study was designed in phases so that models were first asked to make predictions knowing only the sow date and weather data for each experiment. Then they were allowed to calibrate to phenology (flowering and maturity dates), then to some of the yield data. Not only was the multi-model median better than any individual model, but it didn’t really improve much as the models were allowed to calibrate more closely (although the individual model predictions themselves improved). This suggests that using lots of models makes it much less important to have each model try as hard as they can to be right.

A couple of other aspects were particularly interesting to me. One was how well the one statistical model did relative to the others (but you’ll have to dig in the supplement to see that). Another was that, when the multi-model median was used to look at responses to both past and projected warming at 30 sites around the world, 20 of the 30 sites showed negative impacts of past warming (for 1981-2010, see figure below). That agrees well with our high level statement in the IPCC report that negative impacts of warming have been more common than positive ones. (I actually pushed to add this analysis after I got grilled at the plenary meeting on our statement being based on too few data points. I guess this is an example of policy-makers feeding back to science).

Add this recent paper to a bunch of others showing how multi-model medians perform well (like here, here, and here), and I think the paradigm for prediction in the cropping world has shifted to using at least 5 models, probably more. So what’s the paradox part? From my perspective, the main one is that there’s not a whole lot of incentive for modeling groups to participate in these projects. Of course there’s the benefit of access to new datasets, and insights into processes that can be had by comparing models. But they take a lot of time, groups generally are not receiving funds to participate, there is not much intellectual innovation to be found in running a bunch of simulations and handing them off, and the resulting publications have so many authors that most of them get very little credit. In short, I was happy to participate, especially since I had a wheat model ready from a previous paper, but it was a non-trivial amount of work and I don’t think I could advise one of my students to get heavily involved.

So here’s the situation: making accurate predictions is one of the loftiest goals of science, but the incentives to pursue the best way to make predictions (multi-model ensembles) are slim in most fields. The main areas I know of with long-standing examples of predictions using large ensembles are weather (including seasonal forecasts) or political elections. In both cases the individual models are run by agencies or polling groups with specific tasks, not scientists trying to push new research frontiers. In the long-term, I’d guess the benefit of better crop predictions on seasonal to multi-decadal time scales would probably be worth the investment by USDA and their counterparts around the world to operationalize multi-model approaches. But relying on the existing incentives for research scientists doesn’t seem like a sustainable model.

Wednesday, December 10, 2014

Adapting to extreme heat

Since we are nearing the holidays, I figured I should write something a bit more cheerful and encouraging than my standard line on how we are all going to starve.  My coauthor Michael Roberts and I have emphasized for a while the detrimental effect of extreme heat on corn yields and the implications for a warming planet.  When we looked at the sensitivity to extreme heat over time, we found an improvement (i.e., less susceptibility) roughly around the time hybrids were introduced in the 30s, but that improvement vanished again around the 1960s.  Corn is as susceptible to heat now as it was in 1930.  Our study simply allowed the effect of extreme heat to vary smoothly across time, but wasn't tied to a particular event.

David Popp has been working a lot on innovation and he suggested looking at the effect of hybrid corn adoption on the sensitivity to extreme heat in more detail.  Richard Sutch had a nice article on how hybrid corn was adopted slowly across states, but fairly quickly within each state.  David and I thought we could use the fairly rapid rollout within state but slow rollout across states as a source of identification of the role of extreme heat. Here's a new graph of the rollout by state:
The first step was to extend the daily weather data back to 1901 to take a look at the effect of extreme heat on corn yields over time.  We wanted a pre-period to rule out that crappy weather data in the early 1900s results in a lot of attenuation bias, and indeed we get significant results with comparable coefficients when we use data from the first three decades of the 20th century.

In a second step we interact the weather variables with the fraction of the planted area that is hybrid corn. We find evidence that the introduction of hybrid corn reduced the sensitivity of yields to extreme heat from -0.53 to -0.33 in the most flexible specification in column (3b) below, which is an almost 40% reduction. Furthermore, the sensitivity to precipitation fluctuations seems to diminish as well. (Disclaimer: these are new results, so they might change a bit once I get rid of my coding errors.)
The table regresses state-level yields in 41 states on weather outcomes.  All regressions include state-fixed effects as well as quadratic time trends. Columns (b) furthermore include year fixed effects to pick up common shocks (e.g., global corn prices). Columns (1a)-(1b) replicate the standard regression equation we have been estimating before, columns (2a)-(2b) allow the effect of extreme heat to change with the fraction of hybrid corn that is planted, while columns (3a)-(3b) allow the effect of all four weather variables to change in the fraction of hybrid corn.
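For readers who want a feel for the interaction specification, here's a sketch in R on simulated data (the variable names and data-generating numbers are mine, chosen so that adoption moves the heat coefficient from -0.5 toward -0.3; it is not our actual estimation code, and I stand in state fixed effects for the fuller set of controls):

```r
set.seed(42)
n      <- 2000
state  <- factor(sample(1:41, n, replace = TRUE))
hybrid <- runif(n)   # fraction of planted area in hybrid corn
eheat  <- rnorm(n)   # extreme heat exposure
prec   <- rnorm(n)   # precipitation
logYield <- -0.5 * eheat + 0.2 * eheat * hybrid + 0.1 * prec +
            rnorm(n, sd = 0.2)

fit <- lm(logYield ~ eheat * hybrid + prec + state)
coef(fit)["eheat:hybrid"]   # recovers roughly +0.2
```

The positive interaction coefficient is what "hybrid adoption reduces heat sensitivity" looks like in a regression table.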

In summary: there is evidence that, at least for the time period when hybrid corn was adopted, innovation in crop varieties led to an improvement in heat tolerance, which would be extremely useful as climate change is increasing the frequency of these harmful temperatures.  On that (slightly more upbeat) note: happy holidays.

Wednesday, November 26, 2014

Feeding 9... er... 11 billion people

Demographers have been telling us for a while that global population will level off at about 9 billion, and this "9 billion" number has indeed become the conventional wisdom -- so much so that one of your trusty G-FEED bloggers actually teaches a (presumably excellent) class called "Feeding Nine Billion".

With the current global population at just over 7 billion, the belief that population might level off at 9 billion has given some solace to folks worried about the "pile of grain" problem, i.e. the general concern that feeding a bunch of extra mouths around the world might prove difficult.  9 billion people by 2100 implies a much slower population growth rate over the coming century than was observed over the last century, and while a scandalous 800 million people in the world continue to go to bed hungry every night, there has been notable success in reducing the proportion of the world population who don't have enough to eat even as populations have skyrocketed.  This success, if you can call it that, has in part to do with the ability of the world's agricultural producers to so far "keep up" with the growing demand for food induced by growing incomes and populations, as evidenced by the general decline in real food prices over the last half century (the large food price spikes in the last 5-7 years notwithstanding).
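To put a number on "much slower", you can back out the implied average annual growth rates (using rough round figures of 1.65 billion in 1900 and 7.2 billion today, my approximations):

```r
implied_rate <- function(p0, p1, years) (p1 / p0)^(1 / years) - 1

# Last century: ~1.65 billion (1900) to ~7.2 billion (2014).
round(100 * implied_rate(1.65, 7.2, 114), 2)   # about 1.3% per year

# This century: 7.2 billion (2014) to 9 billion by 2100.
round(100 * implied_rate(7.2, 9, 86), 2)       # about 0.26% per year
```

So the 9 billion figure bakes in growth at roughly a fifth of last century's rate, which is why slower-than-expected fertility declines matter so much.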

But a paper last month by Gerland et al in Science (gated version here), straightforwardly titled "World population stabilization unlikely this century", provides some uncomfortable evidence that the magic 9 billion number might be a substantial underestimate of the population we're likely to see by the end of this century.  Turns out that fertility rates have not fallen as fast in Africa as expected:  while the total fertility rate has fallen, the decline has only been about a quarter as fast as what was observed in the 1970s and 80s in Latin America and Asia.  This is apparently due both to slow declines in African families' desired family sizes, as well as a substantial unmet need for contraception. Here's a plot from this paper showing the relatively slow decline in African fertility:

So run the world forward for 85 years taking these slower-than-expected fertility declines into account, and you get population projections much higher than 9 billion.  The mean estimate in the Gerland et al paper of population in 2100 is 11 billion, with their 95% confidence interval barely scraping 9 billion on the low end and reaching 13 (!) billion on the high end.  In fact, even their 95% confidence interval for 2050 barely contains 9 billion. Here's the relevant plot (R users will appreciate the near-unadulterated use of the ggplot defaults):

Figure 1 from Gerland et al 2014, Science

So perhaps David should retitle his class, "11 is the new 9", or, "Feeding 9 billion in the next 20 years", or, "Feeding 11 billion (95% CI, 9 billion to 13 billion)".  In any case, these 2+ billion extra mouths are not entirely welcome news for those worried about the global pile of grain.  These much larger numbers imply that even greater progress needs to be made on improving agricultural yields if we want to (a) keep prices at reasonable levels and (b) not have to massively expand agricultural land use to do it.  Thanks, Gerland et al!  
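To see why slower-than-expected fertility decline moves the endpoint by a full 2 billion, here's a back-of-envelope compounding sketch in Python. The annual growth rates below are round numbers I made up to illustrate the point; Gerland et al's actual projections come from probabilistic cohort-component models, not a constant growth rate.

```python
# Back-of-envelope only: the growth rates below are hypothetical round
# numbers chosen to illustrate compounding, not Gerland et al.'s estimates
# (their model projects fertility and mortality cohort by cohort).

def project(pop_billions, annual_growth_rate, years):
    """Geometric growth: pop * (1 + r)^years."""
    return pop_billions * (1.0 + annual_growth_rate) ** years

base, horizon = 7.2, 85   # ~2015 world population (billions), years to 2100

low = project(base, 0.0026, horizon)   # ~0.26%/yr average growth
high = project(base, 0.0050, horizon)  # ~0.50%/yr average growth

# A quarter of a percentage point per year, compounded over 85 years,
# is roughly the gap between 9 and 11 billion.
print(round(low, 1), round(high, 1))   # -> 9.0 11.0
```

The punchline: small persistent differences in fertility assumptions compound into billions of people by century's end, which is why the African fertility data matter so much.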

Thursday, November 20, 2014

Estimating the impacts of CO2 Fertilization...

While David is shaking in his boots about Saturday's matchup of the Cal Bears against Stanford (both teams have a shameful 5-5 record so far), I have been spending time perusing NASA's recent explosion of multimedia offerings. The video that caught my attention comes from a recent paper displaying the transport of CO2 across the globe. That got me thinking...

My illustrious co-bloggers have documented extensive evidence that extreme heat is bad for crops. We also know that rainfed crops do not appreciate too little or too much rainfall. How do we know this? The G-FEED crowd likes to use econometric methods to estimate dose-response functions between yields/output and temperature/precipitation. In order to attach a causal interpretation to the estimated coefficients of these dose-response functions, one needs "exogenous" (read: roughly random) sources of variation in temperature and rainfall. While we know that the distribution of crops across climate zones is not random, day-to-day changes in weather can be interpreted as random if one controls carefully for other confounders. We first made this point in a PNAS paper in 2006, and this has since become standard practice, subject to a number of well-understood caveats.
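For readers who want the mechanics, here is a minimal simulated sketch of that identification idea (this is not the actual PNAS analysis, and every number in it is made up): absorb unit and year fixed effects, and the leftover year-to-year weather shocks identify the dose-response slope even when average climate is badly confounded with productivity.

```python
# A toy illustration of panel fixed-effects identification with simulated
# data. All parameters are invented for the example; true_beta = -0.5 plays
# the role of the "extreme heat is bad" dose-response coefficient.

import numpy as np

rng = np.random.default_rng(42)
n_units, n_years = 200, 30
true_beta = -0.5

# Average climate is correlated with unit productivity: a classic confounder.
climate = rng.normal(25, 3, n_units)
unit_fe = 0.8 * climate + rng.normal(0, 1, n_units)   # hot places differ systematically
year_fe = rng.normal(0, 1, n_years)                   # prices, technology, etc.

# Realized weather = climate + random annual shocks (the "exogenous" part).
temp = climate[:, None] + rng.normal(0, 2, (n_units, n_years))
yields = (true_beta * temp + unit_fe[:, None] + year_fe[None, :]
          + rng.normal(0, 1, (n_units, n_years)))

def within_2way(m):
    """Two-way within transformation: subtract unit means and year means."""
    return m - m.mean(axis=1, keepdims=True) - m.mean(axis=0, keepdims=True) + m.mean()

xd, yd = within_2way(temp), within_2way(yields)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()           # recovers ~ -0.5

# A naive pooled regression is badly biased by the climate/productivity link.
beta_naive = np.polyfit(temp.ravel(), yields.ravel(), 1)[0]
print(f"FE estimate: {beta_fe:.2f} (true {true_beta}); naive pooled: {beta_naive:.2f}")
```

The fixed-effects estimate lands near the true -0.5, while the naive pooled slope can even flip sign. The CO2 question discussed below is whether there is analogous plausibly random within-unit variation in CO2 to feed into this kind of design.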

So we know: Extreme heat = Bad. Too much or too little water = Bad.

What we do not yet understand well are the impacts of CO2 on crop yields using non-experimental data. There are plenty of studies that pump CO2 into open-top chambers over field plots and measure the difference in yields between carbon-fertilized and control plots. What we do not have is a good measure of carbon fertilization in a field setting that incorporates farmer behavior. What has prevented me, and arguably many others, from attacking this problem empirically is the assumption that CO2 mixes roughly uniformly across space, so that any variation in CO2 is variation over time. This variation is not useful, as one cannot empirically separate the impacts of CO2 from other factors that vary over time, such as prices and business cycles.

The video link above makes me want to question that assumption. The model-based patterns it shows display tremendous spatial and temporal variability in CO2 within a year. This is the same type of variation we use to identify the impacts of temperature and precipitation on yields. While I understand that we do not have a great historical dataset of ground-level CO2 measurements, I wonder if an interdisciplinary team of rockstars could come up with a meaningful identification strategy that would allow us to measure the impacts of CO2 on yields. Not much good will come from global climate change, but we cannot simply measure the bads and ignore the goods. If anyone has any good ideas, I am interested. I got lots of great suggestions on my climate data post, so here's hoping...

Tuesday, November 18, 2014

The hunger strawman

A few questions are almost guaranteed to come up from an audience whenever I give a public talk, regardless of what I talk about. Probably the most persistent question is something like “Don’t we already produce more than enough food to feed everyone?” or its close relative “Isn’t hunger just a poverty or distribution problem?”

Some students recently pointed me to an op-ed by Mark Bittman in the NY Times called “Don’t ask how to feed the 9 billion” that rehashes this question/argument. It probably caught their attention because I teach a class called “Feeding 9 billion”, and they’re wondering why I’d organize a class around a question they supposedly shouldn’t even be asking. The op-ed has some catchy lines such as “The solution to malnourishment isn’t to produce more food. The solution is to eliminate poverty.” Or “So we should not be asking, ‘How will we feed the world?,’ but ‘How can we help end poverty?’" My first reaction to these kinds of statements is usually “Gee, why didn’t anyone think of reducing poverty before -- we should really get some people working on that!” But more seriously, I think it’s really quite a ludicrous and potentially dangerous view, for several reasons. Here are three:
  1. To talk about poverty and food production as if they are two separate things is to forget that in most parts of the world, the poorest people earn their livelihoods in agriculture. Increasing the productivity of agriculture is almost always poverty-reducing in rural areas. The 2008 World Development Report explains this well. Of course, the poor in urban areas are a different story, but that doesn’t change the critical global link between underperforming agriculture and poverty.
  2. Food prices matter, even if they are low enough that many of us barely notice when they change. If you go to a market, you’d of course rather have hundreds of dollars in your pocket than a few bucks. But if you are there with a few bucks, and you’re spending half or more of your income on food, it makes a big difference whether food prices are up or down by, say, 20%. If you could magically eliminate poverty that’d be great, but for a given level of poverty, small changes in prices matter. And if productivity of agriculture slows down, then (all else equal) prices tend to rise.
  3. Maybe most importantly, there’s no guarantee that past progress on keeping productivity rising and prices low will continue indefinitely, especially if we lose sight of its importance. There’s a great deal of innovation and hard work that goes into simply maintaining current productivity, much less continuing to improve it. Just because many remain hungry doesn’t mean we should treat past successes as failures, or take past successes for granted. And just because we have the technology and environment to feed 7 billion, it doesn’t mean we have it to feed 9 billion (at least not on the current amount of cropland, with some fraction of land going to bioenergy, etc.).
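The price-sensitivity point is easy to make concrete. Here's a hedged sketch of the arithmetic, using illustrative budget shares of my own choosing (they are not numbers from the op-ed or from any dataset):

```python
# Illustrative arithmetic only: budget shares below are assumptions chosen
# for the example. The point is that the same 20% food price rise hits a
# poor household several times harder than a rich one.

def real_income_change(food_share, food_price_change):
    """Approximate % change in purchasing power when only food prices move."""
    # Cost of the old consumption bundle, relative to an income of 1.0:
    new_cost = food_share * (1 + food_price_change) + (1 - food_share)
    return 1 / new_cost - 1

poor = real_income_change(0.5, 0.20)    # spends half of income on food
rich = real_income_change(0.1, 0.20)    # spends a tenth of income on food
print(f"{poor:+.1%} {rich:+.1%}")       # -> -9.1% -2.0%
```

A 20% food price rise costs the poor household roughly 9% of its purchasing power but the rich household only about 2%, which is why productivity-driven prices matter even "for a given level of poverty."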

When Stanford had Andrew Luck, we didn’t go undefeated. The football team still had some weaknesses and ended up losing a couple of games, sometimes because of a key turnover or because we gave up too many points. Nobody in their right mind, though, concluded that “the solution to winning football games isn’t to have a good quarterback, it’s to have a good defense.” That would be the wrong lesson to learn from the Andrew Luck era. In other words, it’s possible for more than one thing to matter at the same time. (Incidentally, this year Stanford football has produced more than enough points to be a great team; they just haven't distributed them evenly across the games.)

Similarly, nobody that I know is actually claiming that the only thing we have to worry about for reducing hunger is increasing crop production. That would be idiotic. So it’s a complete strawman to say that the current strategy to reduce malnourishment is simply to raise yields in agriculture. It’s part of a strategy, and an important part, but not the whole thing.

I’m not sure why this strawman persists. I can think of a few cynical reasons, but I’m not really sure. To paraphrase a joke a student told me the other day: there’s really only one good use for a strawman. To drink, man.