Wednesday, October 29, 2014

Fanning the flames, unnecessarily


This post is a convolution of David's post earlier today and Sol's from a few days ago.  Our debate with Buhaug et al. that Sol blogged about has dragged on for a while now, and engendered a range of press coverage, the most recent by a news reporter at Science.  "A debate among scientists over climate change and conflict has turned ugly", the article begins, and it goes on to catalog the complaints on either side while doing little to engage with the content.

Perhaps it should be no surprise that press outlets prefer to highlight the mudslinging, but this sort of coverage is not really helpful.  And what was lost in this particular coverage were the many things I think we've actually learned in the protracted debate with Buhaug.

We've been having an ongoing email dialog with ur-blogger and statistician Andrew Gelman, who often takes it upon himself to clarify or adjudicate these sorts of public statistical debates, which is a real public service.  Gelman writes in an email:
In short, one might say that you and Buhang are disagreeing on who has the burden of proof.  From your perspective, you did a reasonable analysis which holds up under reasonable perturbations and you feel it should stand, unless a critic can show that any proposed alternative data inclusion or data analytic choices make a real difference.  From their perspective, you did an analysis with a lot of questionable choices and it’s not worth taking your analysis seriously until all these specifics are resolved.
I'm sympathetic with this summary, and am actually quite sympathetic to Buhaug and colleagues' concern about our variable selection in our original Science article.  Researchers have a lot of choice over how and where to focus their analysis, which is a particular issue in our meta-analysis since there are multiple climate variables to choose from and multiple ways to operationalize them.  Therefore our original effort to bring each researcher's "preferred" specification into our meta-analysis might have doubly amplified any publication bias -- with researchers of the individual studies we reviewed emphasizing the few significant results, and Sol, Ted, and me picking the most significant one out of those.  Or perhaps the other researchers are not to blame, and the problem could simply have been the choices Sol, Ted, and I made about what to focus on.
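To make the concern concrete, here is a minimal simulation of how that kind of specification selection can manufacture an aggregate effect even when none exists. All the numbers are hypothetical illustration, not values from our meta-analysis:

```python
# Toy illustration (hypothetical numbers): if each of 30 studies tries 5
# candidate climate variables and reports the most hypothesis-consistent
# estimate, the pooled "preferred" effect is inflated even when the true
# effect is exactly zero.
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_specs = 30, 5          # assumed counts, for illustration only
true_effect = 0.0                   # no real climate-conflict effect here

# Each study's n_specs estimates, standardized so SE = 1 (i.e., t-stats).
estimates = rng.normal(true_effect, 1.0, size=(n_studies, n_specs))

# "Preferred" specification = the most positive (hypothesis-consistent) one.
preferred = estimates.max(axis=1)

print(f"mean across all specifications: {estimates.mean():+.2f}")   # ~ 0.0
print(f"mean of 'preferred' specs:      {preferred.mean():+.2f}")   # ~ +1.2
```

The largest of five standard-normal draws has an expected value around 1.16, so a pooled estimate built only from "preferred" specifications can look strongly significant even with no signal at all.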

How journalism works (or doesn't)

One reason we started this blog was the frustration of being misrepresented in media coverage of topics we work on. It can be hard for outsiders to grasp just how often this happens. You spend time talking to a journalist until they seem to get what you're saying, they go off and write the story, and then only about half the time do they check back to see whether the quotes they attribute to you are right.  Having words put in your mouth is often compounded by other problems, like a "he said, she said" framing that can make issues appear much more contentious than they should be (see Sol's last post).

Another case in point: the other day I took a call from a reporter for the Guardian who said she was working on a story about which crops are threatened by climate change. I thought I was pretty clear that when we talk about impacts we are never talking about complete eradication of the crop. But today I see their story is "8 foods you're about to lose due to climate change"!

When she asked about CO2, I said it's absolutely clear that it has a benefit; the question is whether that benefit is enough to counteract the bad stuff that happens with climate change. That turned into:
One major issue is carbon dioxide, or CO2. Plants use the gas to fuel photosynthesis, a fact that has led some analysts to argue that an increase in CO2 is a good thing for farming. Lobell disagrees, noting that CO2 is only one of many factors in agriculture. “There’s a point at which adding more and more CO2 doesn’t help,” he says. Other factors – like the availability of water, the increasing occurrence of high and low temperature swings and the impact of stress on plant health – may outweigh the benefits of a CO2 boost.
What happens over time is you learn to be a little more aggressive with reporters, but that only helps so much. And also you learn to stop answering your phone so much, and to stick with the handful of reporters you think do a really good job. It's sad but true.

What's especially annoying, though, is when people see these stories and start attributing everything they say to you, as if you wrote the article, picked the headline, etc.  (I see some tweets today saying I'm trying to spread fear about climate change.)  The irony is that when I give talks or speak on panels I'm more often than not accused of being a techno-optimist, both about climate change and food security in general. I actually am quite optimistic. About food. Just not about journalism.

Monday, October 27, 2014

One effect to rule them all? Our reply to Buhaug et al's climate and conflict commentary

For many years there has been a heated debate about the empirical link between climate and conflict. A year ago, Marshall, Ted Miguel and I published a paper in Science where we reviewed existing quantitative research on this question, reanalyzed numerous studies, synthesized results into generalizable concepts, and conducted a meta-analysis of parameter estimates (watch Ted's TED talk).  Many researchers laid out criticism in the press and blogosphere at the time, which Marshall fielded through G-FEED. In March, Halvard Buhaug posted a comment signed by 26 authors on his website strongly critiquing our analysis, essentially claiming that they had overturned it by replicating it with an unbiased selection of studies and variables. I explained numerous errors in the comment here on G-FEED at the time.

The comment by Buhaug et al. was published today in Climatic Change as a commentary (version here), essentially unchanged from the earlier version, with none of the errors I pointed out addressed.

You can read our reply to Buhaug et al. here. If you don't want to bother with lengthy details, our abstract is short and direct:
Abstract: A comment by Buhaug et al. attributes disagreement between our recent analyses and their review articles to biased decisions in our meta-analysis and a difference of opinion regarding statistical approaches. The claim is false. Buhaug et al.’s alteration of our meta-analysis misrepresents findings in the literature, makes statistical errors, misclassifies multiple studies, makes coding errors, and suppresses the display of results that are consistent with our original analysis. We correct these mistakes and obtain findings in line with our original results, even when we use the study selection criteria proposed by Buhaug et al. We conclude that there is no evidence in the data supporting the claims raised in Buhaug et al.

Friday, October 10, 2014

Will the dry get drier, and is that the right question?

A “drought” can be defined, it seems, in a million different ways. Webster’s dictionary says it’s “a period of dryness especially when prolonged; specifically: one that causes extensive damage to crops or prevents their successful growth.” Wikipedia tells me “Drought is an extended period when a region receives a deficiency in its water supply.” The urban dictionary has a different take.

But nearly all definitions share the concepts of dryness and of damage or deficiency. We’ve talked a lot on this blog about drought from an agricultural perspective, and in particular how droughts in agriculture can (or at least should) often be blamed as much on high temperatures and strong evaporative demand as on low rainfall. At the same time, there’s lots of interesting work going on trying to assess drought from a hydrological perspective. Like this recent summary by Trenberth et al.

The latest is a clever study by Greve et al. that tries to pin down whether and where droughts are becoming more or less common. They looked at lots of combinations of possible data sources for rainfall, evapotranspiration (ET) and potential evapotranspiration (ETp). They then chose the combinations that produced a reasonable relationship between ET/P and ETp/P (the Budyko curve), and used them to calculate trends in dryness for 1948-2005. The figure below shows their estimate of wet and dry areas and the different instances of wet areas getting wetter, wet getting drier, etc. The main point of their paper and the media coverage was that these trends don’t follow the traditional expectation of WWDD (wet get wetter and dry get drier) – the idea that warming increases the water-holding capacity of the air and thus amplifies existing patterns of rainfall.


Also clear in the figure is that the biggest exception to the rule appears to be wet areas getting drier. There don’t seem to be many dry areas getting wetter over the last 50 years.
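For reference, the classic Budyko (1974) formulation of that curve is easy to write down. Greve et al. work with the general relationship, so take the exact functional form below as the standard textbook version rather than their screening rule:

```python
# The classic Budyko (1974) curve: the fraction of precipitation that
# evaporates, ET/P, as a function of the dryness index phi = ETp/P.
import numpy as np

def budyko_et_ratio(phi):
    """ET/P given the dryness index phi = ETp/P (Budyko 1974)."""
    phi = np.asarray(phi, dtype=float)
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

for phi in (0.5, 1.0, 2.0, 4.0):
    print(f"ETp/P = {phi:3.1f}  ->  ET/P = {budyko_et_ratio(phi):.2f}")
```

Energy-limited (wet) regions sit at low ETp/P and water-limited (dry) regions at high ETp/P, which is what makes the curve a natural yardstick for classifying areas as wet or dry in the first place.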

Other than highlighting their nice paper, I wanted to draw attention to something that seems to get lost in all of the back-and-forth in the community looking at trends in dryness and drought, but that I often discuss with agriculture colleagues: it’s not clear how useful any of these traditional measures of drought really are. The main concept of drought is about deficiency, but deficient relative to what? The traditional measures all use a “reference” ET, with the FAO version of Penman-Monteith (PM) the gold standard for most hydrologists. But it’s sometimes forgotten that PM uses an arbitrary reference vegetation of a standard grass canopy. Here’s a description from the standard FAO reference:

“To avoid problems of local calibration which would require demanding and expensive studies, a hypothetical grass reference has been selected. Difficulties with a living grass reference result from the fact that the grass variety and morphology can significantly affect the evapotranspiration rate, especially during peak water use. Large differences may exist between warm-season and cool season grass types. Cool-season grasses have a lower degree of stomatal control and hence higher rates of evapotranspiration. It may be difficult to grow cool season grasses in some arid, tropical climates. The FAO Expert Consultation on Revision of FAO Methodologies for Crop Water Requirements accepted the following unambiguous definition for the reference surface:
"A hypothetical reference crop with an assumed crop height of 0.12 m, a fixed surface resistance of 70 s m-1 and an albedo of 0.23."
The reference surface closely resembles an extensive surface of green grass of uniform height, actively growing, completely shading the ground and with adequate water."
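Those fixed parameters are baked directly into the constants of the FAO-56 equation. For concreteness, here is the standard daily form (the function wrapper and variable names are mine):

```python
# FAO-56 Penman-Monteith reference ET (standard daily form). The constants
# 900 and 0.34 embed the fixed grass reference: 0.12 m height, 70 s/m
# surface resistance, albedo 0.23.
def fao56_reference_et(delta, gamma, rn, g, t_mean, u2, es, ea):
    """Daily reference ET, ET0, in mm/day.

    delta  : slope of the saturation vapor pressure curve (kPa/degC)
    gamma  : psychrometric constant (kPa/degC)
    rn, g  : net radiation and soil heat flux (MJ/m2/day)
    t_mean : mean daily air temperature at 2 m (degC)
    u2     : wind speed at 2 m (m/s)
    es, ea : saturation and actual vapor pressure (kPa)
    """
    radiation_term = 0.408 * delta * (rn - g)
    aerodynamic_term = gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return (radiation_term + aerodynamic_term) / (delta + gamma * (1.0 + 0.34 * u2))
```

Swap in a different reference vegetation and those constants change, which is exactly the point: the "reference" is a modeling convention, not a crop.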

Of course, there are reasons to have a reference that is fixed in space and time – it makes it easier to compare changes in the physical environment. But if the main concern about drought is its agricultural impacts, then you have to ask yourself how well this reference really represents a modern agricultural crop. And, more generally, how relevant is a static reference in agriculture, where the crops and practices are continually changing? It’s a bit like when Dr. Evil talks about “millions of dollars” in Austin Powers.

Here’s a quick example to illustrate the point for those of you still reading. Below is a plot I made for a recent talk that shows USDA-reported corn yields for a county in Iowa where we have run crop model simulations. I then use the simulations (not shown) to define the relationship between yields and water requirements. This is a fairly tight relationship, since water use and total growth are closely linked, and it depends mainly on average maximum temperature. The red line then shows the maximum yield that could be expected (assuming current CO2 levels) in a dry year, defined as the 5th percentile of historical annual rainfall. Note that for recent years, this amount of rainfall is almost always deficient and will lead to large amounts of water stress. But 50 years ago yields were much smaller, and even a dry year provided enough water for typical crop growth (assuming not too much of it was lost to other things like runoff or soil evaporation).
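For those who want the mechanics, a rough sketch of that red-line construction is below; the rainfall record and the yield-to-water-requirement coefficients are invented placeholders, not the values from our simulations:

```python
# Sketch of the dry-year yield ceiling: invert a (hypothetical) linear
# yield -> water-requirement relation at the 5th percentile of rainfall.
import numpy as np

rng = np.random.default_rng(1)
annual_rain_mm = rng.normal(800, 150, size=60)   # stand-in 60-year record

# Hypothetical fit from crop model output: water requirement (mm) rises
# linearly with yield (t/ha).
a, b = 150.0, 25.0                                # mm intercept, mm per t/ha

dry_year_mm = np.percentile(annual_rain_mm, 5)    # a "dry year"
yield_ceiling = (dry_year_mm - a) / b             # max yield that year supports

print(f"dry-year rain: {dry_year_mm:.0f} mm -> ceiling: {yield_ceiling:.1f} t/ha")
```

As actual yields climb past that ceiling, the same 5th-percentile year flips from "enough water" to "deficient," even with no trend in rainfall at all.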


An alternative to the PM approach is to have the reference ET defined by the potential growth of the vegetation. This was described originally, also by Penman, as a “sink strength” alternative to PM, and is tested in a nice recent paper by Tom Sinclair. It would be interesting to see the community focused on trends try to account for trends in sink strength. That way they’d be looking not just at changes in the dryness part of drought, but also the deficiency part.


As someone interested in climate change, it’s nice to see continued progress on measuring trends in the physical environment. But if the question is whether agriculture needs to prepare for more drought, in the sense of more water limitations on crop growth, then I think the answer in many cases is a clear yes, regardless of what’s happening to climate. As yield potential gets higher and higher, the bar for what counts as "enough" water continues to rise.

Saturday, October 4, 2014

Agricultural Economics gets Politico

Update: For the record, I'm actually not against Federal crop insurance.  Like Obamacare, I generally favor it.  But the subsidies are surely much larger than they need to be for maximum efficiency.  And I think premiums could likely be better matched to risk, and that such adjustments would be good for both taxpayers and the environment.



Wow.  Frumpy agricultural economics goes Politico!

Actually, it's kind of strange to see a supposedly scandalous article in Politico in which you know almost every person mentioned. 

At issue is the federal crop insurance program.  The program has been around a long time, but its scope and size--the range of crops and livestock insurable under the program and the degree to which taxpayers subsidize premiums--have grown tremendously over the last 20 years.  And the latest farm bill expands the program and its subsidies to grand new heights.

Nearly all the agricultural economists I know regard the crop insurance program (aka Obamacare for the corn) as overly subsidized.  But the issue here is not the subsidies but the huge contracts received by agricultural economists moonlighting as well-paid consultants for USDA's Risk Management Agency (RMA), helping RMA design and run the insurance program.

For full disclosure: I used to work for USDA in the Economic Research Service and did some research on crop insurance, although, strangely, ties between ERS and RMA are thin to nonexistent. I've met and spoken to both Joe Glauber (USDA's Chief Economist) and Bruce Babcock (a leading professor of agricultural economics at Iowa State) a few times, and know and respect their work. And I used to work at NC State as a colleague of Barry Goodwin's.  I also went to Montana State for a master's degree way back, where I took courses from Myles Watts and Joe Atwood, who are mentioned in the article. I know Vince Smith from that time too.

Perhaps most importantly, some of my recent research uses rich data resources that we obtained from RMA. But I have never received any money from RMA.  Believe it or not, my interest is in the science, and despite having no vested financial interest in any of it, I have found myself in the crosshairs of agricultural interests who didn't seem to like my research findings.  Anyway, ag econ is a small, small world...

Okay, disclosures out of the way: What's the big deal here?  So ag economists work for RMA, make some nice cash, and then moonlight for the American Enterprise Institute to bash agricultural subsidies.  Yeah, there are conflicts of interest, but it would seem that there are interests on many sides, and the opportunistic ag economists in question seem willing to work for all of them.  They'll help RMA design crop insurance programs, but that doesn't mean they advocate for the programs or the level of subsidies farmers, insurance companies and program managers receive under them.  We observe the opposite.

I've got some sense of the people involved and their politics.  Most of them are pretty hard-core conservative (Babcock may be an exception, not sure), and my sense is that most are unsupportive of agricultural subsidies in general.  But none are going to turn down a big paycheck to try to make the program as efficient as possible.  I don't see a scandal here.  Really.

Except, I do kind of wonder why all this money is going to Illinois, Texas and Montana when folks at Columbia, Hawai'i, and Stanford could, almost surely, do a much better job for a fraction of taxpayers' cost.  With all due respect (and requisite academic modesty--tongue in cheek), I know these guys' work, and I'm confident folks here at G-FEED could do a much better job.  I personally don't need a penny (okay, twist my arm and I'll take a month of summer salary). Just fund a few graduate students and let us use the data for good science.

 


Wednesday, October 1, 2014

People are not so different from crops, turns out



A decade of strong work by my senior colleagues here at G-FEED has taught us that crops don’t like it hot:
  • Wolfram and Mike have the original go-to paper on US ag [ungated copy here], showing that yields for the main US field crops respond very negatively to extreme heat exposure
  • David, Wolfram, Mike + coauthors have a nice update in Science using even fancier data for the US, showing that while average corn yields have continued to increase in the US, the sensitivity of corn to high temperatures and moisture deficits has not diminished. 
  • And Max et al. have a series of nice papers looking at rice in Asia, showing that hot nighttime temperatures are particularly bad for yields.
The results matter a lot for our understanding of the potential impacts of climate change, suggesting that in the absence of substantial adaptation we should expect climate change to exert significant downward pressure on future growth in agricultural productivity.

But we also know that for many countries of the world, agriculture makes up a small share of the economy.  So if we want to say something meaningful about overall effects of climate change on the economies of these countries (and of the world as a whole), we're also going to need to know something about how non-agricultural sectors of the economy might respond to a warmer climate. 

Thankfully there is a growing body of research on non-agricultural effects of climate -- and there is a very nice summary of some of this research (as well as the ag research) just out in this month's Journal of Economic Literature, by heavyweights Dell, Jones, and Olken. [earlier ungated version here].

I thought it would be useful to highlight some of this research here -- some of it already published (and mentioned elsewhere on this blog), but some of it quite new.  The overall take-home from these papers is that non-agricultural sectors are often also surprisingly sensitive to hot temperatures.

First here are three papers that are already published:

1. Sol's 2010 PNAS paper was one of the first to look carefully at an array of non-agricultural outcomes (always ahead of the game, Sol...), using a panel of Caribbean countries from 1970-2006. Below is the money plot, showing strong negative responses of a range of non-ag sectors to temperature.  The point estimate for non-ag sectors as a whole was -2.4% per +1C, which is larger in magnitude than the comparable estimate for the ag sector (-0.1% per +1C).

From Hsiang (2010)


2. Using a country-level panel, Dell, Jones, and Olken's instaclassic 2012 paper [ungated here] shows that both ag and non-ag output respond negatively to warmer average temperatures -- but only in poor countries. They find, for instance, that growth in industrial output in poor countries falls 2% for every 1C increase in temperature, only slightly smaller than the -2.7% per 1C decline they find for ag. They find no effects in rich countries.

3. Graff Zivin and Neidell (2014) use national time use surveys in the US to show that people work a lot less on hot days.  Below is their money fig:  on really hot days (>90F), people in "exposed" industries (which as they define it includes everything from ag to construction to manufacturing) work almost an hour less (left panel).  The right panels show leisure time.  So instead of working, people sit in their air conditioning and watch TV. 

From Graff Zivin and Neidell 2014.  Left panel is labor supply; right two panels are outdoor and indoor leisure time.

And here are three papers on the topic you might not have seen, all of which are current working papers:

4.  Cachon, Gallino, and Olivares (2012 working paper) show, somewhat surprisingly, that US car manufacturing is substantially affected by the weather.  Using plant-level data from 64 plants, they show that days above 90F reduce output on that day by about 1%, and that production does not catch up in the week following a hot spell (i.e., hot days did not simply displace production).

5. Adhvaryu, Kala, and Nyshadham (2014 working paper) use very detailed production data from garment manufacturing plants to show that hotter temperatures reduce production efficiency (defined as how much a particular production line produces on a given day, relative to how much engineering estimates say it should have produced given the complexity of the garment being produced that day).  I'm not sure I have the units right, but I think they find about a 0.5% decrease in production efficiency on a day that's +1C hotter.

6. Finally, in a related study, Somanathan et al. (2014 working paper) use a nationwide panel of Indian manufacturing firms and show that output decreases by 2.8% per +1C increase in annual average temperature.  They show that this is almost all coming from increased exposure above 25C, again pointing to a non-linear response of output to temperature.  For a subset of firms, they also collect detailed worker-level daily output data, and show that individual-level productivity suffers when temperatures are high -- but that this link is broken when plants are air conditioned.

So apparently it's not just crops that do badly when it's hot.  Most of the studies just mentioned cite the human physiological effects of heat stress as the likely explanation for why non-agricultural output also falls with increased heat exposure, and this seems both intuitive and plausible -- particularly given how similar the effect sizes are across these different settings.  But what we don't yet know is how these mostly micro-level results might aggregate up to the macro level. Do they matter for the projected overall effect of climate change on economies?  This is something Sol and I have been working on and hope to be able to share results on soon.  In the meantime, I will be setting my thermostat to 68F. 



Wednesday, September 17, 2014

An open letter to you climate people

Dear Climate People (yes, I mean you IPCC WG1 types):

I am a lowly social scientist. An economist to be precise. I am the type of person who is greatly interested in projecting impacts of climate change on human and natural systems. My friends and I are pretty darn good at figuring out how human and natural systems responded to observed changes in weather and climate. We use fancy statistics, spend tons of time and effort collecting good data on observed weather/climate and outcomes of interest. Crime? Got it. Yields for any crop you can think of? Got your back. Labor productivity? Please. Try harder.

But you know what's a huge pain in the neck for all of us? Trying to get climate model output in a format that is useable by someone without at least a computer science undergraduate degree. While you make a big deal out of having all of your climate model output in a public repository, we (yes, the lowly social scientists) do not have the skills to read your terabytes and terabytes of netCDF files into our MacBooks and put them in a format we can use.

What do I mean by that? The vast majority of us use daily data on Tmin, Tmax and precipitation at the surface. That's it. We don't really care what's going on high in the sky. If we get fancy, we use wet bulb temperature and cloud cover. But that's really pushing it. For a current project I am trying to get county-level climate model output for the CMIP5 models. All of them. For all 3007 US counties. This should not be hard. But it is. My RA finally got the CMIP5 output from a Swiss server and translated it into a format we can use (yes, ASCII. Laugh if you wish. The "A" in ASCII stands for awesome.) We are now slicing and dicing these data into the spatial units we can use. We had to buy a new computer and bang our heads against the wall for weeks.
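For fellow WG2 types staring down the same wall, here is roughly what the extraction step looks like in Python with xarray; the file name follows the usual CMIP5 naming convention but is a placeholder, as is the county centroid:

```python
# Minimal sketch: pull a daily Tmax series for one county from a CMIP5
# netCDF file and write it to CSV. Requires xarray + netCDF4.
import xarray as xr

# Placeholder file name in the standard CMIP5 pattern:
# variable_frequency_model_experiment_ensemble_dates.nc
ds = xr.open_dataset("tasmax_day_MODEL_rcp85_r1i1p1_20060101-21001231.nc")

# Hypothetical county centroid; CMIP5 longitudes typically run 0-360.
lat, lon = 41.7, 360.0 - 93.6

tmax = ds["tasmax"].sel(lat=lat, lon=lon, method="nearest") - 273.15  # K -> C
tmax.to_dataframe(name="tmax_C").to_csv("county_tmax.csv")
```

Doing this properly for 3007 counties means area-weighting grid cells over county polygons rather than grabbing the nearest cell, which is exactly the sort of preprocessing a shared repository could do once for everyone.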

If you want more people working on impacts in human and natural systems, we need to make climate model output available to them at the spatial and temporal level of resolution they need. For the old climate model output, there was such a tool, which was imperfect, but better than what we have now. I got a preview of the update to this tool, but it chokes on larger requests.

Here's what I'm thinking, IPCC WG1: let's create a data repository that makes climate model output available to WG2 types like me in formats I can understand. I bet you the impacts literature would grow much more rapidly. The most recent AR5 points out that the largest gaps in our understanding are in human systems. I am not surprised. If this human system has trouble getting the climate data into a useful format, I am worried about folks doing good work who are even more computationally challenged than I am.

Call me some time. I am happy to help you figure out what we could do. It could be amazing!

Your friend (really) Max.

Monday, September 8, 2014

Can we measure being a good scientific citizen?

This is a bit trivial, but I was recently on travel, and I often ponder a couple of things when traveling. One is how to use my work time more efficiently. Or more specifically, what fraction of requests to say yes to, and which ones to choose? It’s a question I know a lot of other scientists ask themselves, and it’s a moving target as the number of requests change over time, for talks, reviews, etc.

The other thing is that I usually get a rare chance to sit and watch Sportscenter, and I'm continually amazed by how many statistics are now used to discuss sports. Like “so-and-so has a 56% completion percentage when rolling left on 2nd down” or “she’s won 42% of points on her second serve when playing at night on points that last less than 8 strokes, and when someone in the crowd sneezes after the 2nd stroke.” Ok, I might be exaggerating a little, but not by much.

So it gets me wondering why scientists haven’t been more pro-active in using numbers to measure our perpetual time management issues. Take reviews for journals as an example. It would seem fairly simple for journals to report how many reviews different people perform each year, even without revealing who reviewed which papers. I’m pretty sure this doesn’t exist, but I could be wrong (the closest thing I’ve seen is that Nature sends an email at the end of each year saying something like “thanks for your service to our journal family, you have reviewed 8 papers for us this year”).  It would seem that comparing the number of reviews you perform to the number of reviews your own submissions receive (also something journals could easily report) would be a good measure of whether each person is doing their part.

Or more likely you’d want to share the load with your co-authors, but also account for the fact that a single paper usually requires about 3 reviewers. So we can make a simple “science citizen index” or “scindex” that would be
SCINDEX = A / (B x C / D) = (A x D) / (B x C)
where
A = # of reviews performed
B = # of your submissions that get reviewed (even if the paper ends up rejected)
C = average number of reviews needed per submission (assume = 3)
D = average number of authors per your submitted papers

Note that to keep it simple, none of this counts time spent as an editor of a journal. And it doesn’t adjust for being junior or senior, even though you could argue junior people should do fewer reviews and make up for it when they are senior. And I’m sure some would complain that measuring this will incentivize people to agree to review but then do lousy reviews. (Of course that never happens now.) Anyhow, if this number is equal to 1 then you are pulling your own weight. If it’s more than 1 you are probably not rejecting enough requests. So now I’m curious how I stack up. Luckily I have a folder where I save all reviews and can look at the number saved in a given year. Let’s take 2013. Apparently I wrote 27 reviews, not counting proposal or assessment related reviews. And Google Scholar can quickly tell me how many papers I was an author on in that year (14), and I can calculate the average number of authors per paper (4.2). Let’s also assume that a few of those were first rejected after review elsewhere (I don’t remember precisely, but that’s an educated guess), so that my total submissions were 17. So that makes my scindex 27 x 4.2 / (17 x 3) = 2.2. For 2012 it was 3.3.
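Since the arithmetic is easy to get backwards, here is the index as a small function, with the 2013 numbers from above plugged in:

```python
def scindex(reviews_done, submissions_reviewed, avg_authors,
            reviews_per_submission=3):
    """Science citizen index: reviews performed divided by your fair share,
    i.e. (A x D) / (B x C) in the notation above."""
    return (reviews_done * avg_authors) / (submissions_reviewed * reviews_per_submission)

print(round(scindex(27, 17, 4.2), 1))  # 2013 numbers -> 2.2
```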

Holy cow! I’m doing 2-3 times as many reviews as I should be reasonably expected to. And here I was thinking I had a reasonably good balance of accepting/rejecting requests. It also means that there must be lots of people out there who are not pulling their weight (I’m looking at you, Sol).


It would be nice if the standard citation reports added something like a scindex to the h-index and other standards. Not because I expect scientists to be rewarded for being good citizens, though that would be nice, or because it would expose the moochers, but because it would help us make more rational decisions about how much of the thankless tasks to take on. Or maybe my logic is completely off here. If so, let me know. I’ll blame it on being tired from too much travel and doing too many reviews!

Sunday, August 31, 2014

Commodity Prices: Financialization or Supply and Demand?


I've often panned the idea that commodity prices have been greatly influenced by so-called financialization---the emergence of tradable commodity price indices and growing participation by Wall Street in commodity futures trading. No, Goldman Sachs did not cause the food and oil-price spikes in recent years. I've had good company in this view.  See, for example, Kilian; Knittel and Pindyck; Krugman (also here); Hamilton; Irwin and coauthors; and I expect many others.

I don't deny that Wall Street has gotten deeper into the commodity game, a trend that many connect to Gorton and Rouwenhorst (and much earlier similar findings).  But my sense is that commodity prices derive from more-or-less fundamental factors--supply and demand--and fairly reasonable expectations about future supply and demand.  Bubbles can happen in commodities, but mainly when there is poor information about supply, demand, trade and inventories.  Consider rice, circa 2008.

But most aren't thinking about rice. They're thinking about oil.

The financialization/speculation meme hasn't gone away, and now bigger guns are entering the fray, with some new theorizing and evidence.

Xiong theorizes (also see Cheng and Xiong and Tang and Xiong) that commodity demand might be upward sloping.  A tacit implication is that new speculation about higher prices could feed higher demand, leading to even higher prices and an upward spiral.  A commodity price "bubble" could arise without accumulation of inventories, as many of us have argued.  Tang and Xiong don't actually write this, but I think some readers may infer it (incorrectly, in my view).

It is an interesting and counter-intuitive result.  After all, the Law of Demand is the first thing everybody learns in Econ 101: holding all else the same, people buy less as price goes up.  Tang and Xiong get around this by considering how market participants learn about future supply and demand.  Here it's important to realize that commodity consumers are actually businesses that use commodities as inputs into their production processes.  Think of refineries, food processors, or, further down the chain, shipping companies and airlines.  These businesses are trying to read crystal balls about future demand for their final products.  Tang and Xiong suppose that commodity futures tell these businesses something about future demand.  Higher commodity futures prices may indicate stronger future demand for their finished products, so they buy more raw commodities, not less.

There's probably some truth to this view.  However, it's not clear whether or when demand curves would actually bend backwards.  And more pointedly, even if the theory were true, it doesn't really imply any kind of market failure that regulation might ameliorate. Presumably some traders actually have a sense of the factors causing prices to spike: rapidly growing demand in China and other parts of Asia, a bad drought, an oil prospect that doesn't pan out, conflict in the Middle East that might disrupt future oil exports, and so on.  Demand shifting out due to reasonable expectations of higher future demand for finished product is not a market failure or the makings of a bubble.  I think Tang and Xiong know this, but the context of their reasoning seems to suggest they've uncovered a real anomaly, and I don't think they have.  Yes, it would be good to have more and better information about product supply, demand and disposition.  But we already knew that.

What about the new evidence?

One piece of evidence is that commodity prices have become more correlated with each other, and with stock prices, with a big spike around 2008, and much more so for indexed commodities than off-index commodities.


This spike in correlatedness happens to coincide with the overall spike in commodity prices, especially oil and food commodities.  This fact would seem consistent with the idea that aggregate demand growth--real or anticipated--was driving both higher prices and higher correlatedness.  This view isn't contrary to Tang and Xiong's theory, or really contrary to any of the other experts I linked to above.  And none of this really suggests speculation or financialization has anything to do with it.  After all, Wall Street interest in commodities started growing much earlier, between 2004 and 2007, and we don't see much out of the ordinary around that time.

The observation that common demand factors---mainly China growth pre-2008 and the Great Recession since then---have been driving price fluctuations also helps to explain the changing hedging profiles and risk premiums noted by Tang and Xiong and others.  When idiosyncratic supply shocks drive commodity price fluctuations (e.g., bad weather), we should expect little correlation with the aggregate economy, and risk premiums should be low, and possibly even negative for critical inputs like oil.  But when large demand shocks drive fluctuations, correlatedness becomes positive and so do risk premiums.

None of this is really contrary to what Tang and Xiong write.  But I'm kind of confused about why they see demand growth from China as an alternative explanation for their findings. It all looks the same to me.  It all looks like good old fashioned fundamentals.

Another critical point about correlatedness that Tang and Xiong overlook is the role of ethanol policy.  Ethanol started to become serious business around 2007 and into 2008, making a real if modest contribution to our fuel supply, and drawing a huge share of the all-important US corn crop.


During this period, even without subsidies, ethanol was competitive with gasoline.  Moreover, ethanol concentrations hadn't yet hit the 10% blend wall, above which ethanol might damage some standard gasoline engines.  So, for a short while, oil and corn were effectively perfect substitutes, and this caused their prices to be highly correlated.  Corn prices, in turn, tend to be highly correlated with soybean and wheat prices, since they are substitutes in both production and consumption.

With ethanol effectively bridging energy and agricultural commodities, we got a big spike in correlatedness.  And it had nothing to do with financialization or speculation.

Note that this link effectively broke shortly thereafter. Once ethanol concentrations hit the blend wall, oil and ethanol went from being nearly perfect substitutes to nearly perfect complements in the production of gasoline.  They still shared some aggregate demand shocks, but oil-specific supply shocks and some speculative shocks started to push corn and oil prices in opposite directions.

Tang and Xiong also present new evidence on the volatility of hedgers' positions. Hedgers---presumably commodity sellers who are more invested in commodities and want to shift their risk onto Wall Street---have highly volatile positions relative to the volatility of actual output.



These are interesting statistics.  But it really seems like a comparison of apples and oranges.  Why should we expect hedgers' positions to scale with the volatility of output?  There are two risks for farmers: quantity and price.  For most farmers one is a poor substitute for the other.

After all, very small changes in quantity can cause huge changes in price due to the steep and possibly even backward-bending demand.  And it's not just US output that matters.  US farmers pay close attention to weather and harvest in Brazil, Australia, Russia, China and other places, too.

It also depends a little on which farmers we're talking about, since some farmers have a natural hedge if they are in a region with a high concentration of production (Iowa), while others don't (Georgia).  And farmers also have an ongoing interest in the value of their land that far exceeds the current crop, which they can partially hedge through commodity markets since prices tend to be highly autocorrelated.

Also, today's farmers, especially those engaged in futures markets, may be highly diversified into other non-agricultural investments.  It's not really clear what their best hedging strategy ought to look like.

Anyhow, these are nice papers with a bit of good data to ponder, and a very nice review of past literature.  But I don't see how any of it sheds new light on the effects of commodity financialization. All of it is easy to reconcile with existing frameworks.  I still see no evidence that speculation and Wall Street involvement in commodities is wreaking havoc.

Monday, August 25, 2014

What’s the goal and point of national biofuel regulation?

While preparing a lecture for the 4th Berkeley Summer School in Environmental and Energy Economics, I returned to contemplating the regulation of biofuels as part of a federal strategy to combat climate change and increase energy security. If we review policy approaches for increasing the share of biofuels in the transportation fuels supply across this great land, there are three main approaches. We have subsidies for the production of ethanol and biodiesel, renewable fuels standards (RFS) and low carbon fuels standards (LCFS).
The two main tools employed at the federal level are subsidies, which essentially provide a per-gallon payment for producing a gallon of a certain type of biofuel, and renewable fuels standards, which require the production of different classes and quantities of biofuels over a prescribed time path. California has employed a low carbon fuel standard, whose goal is to decrease the average carbon content of California’s gasoline by prescribed percentages over time. It relies on life cycle calculations of the carbon content of different fuels and allows producers to choose a mix of different fuels that decreases the average carbon content, thus providing more flexibility in terms of fuels compared to the RFS.
If I were elected the social planner, I would recognize that I am most likely not smarter than the market, but also would not trust the market to make the right decisions when it comes to carbon reductions (see the demonstrated record of markets since 1850). The standard way an economist would approach the problem, assuming that we know what the right amount of carbon abatement is, is to set a cap on emissions and issue tradable rights to pollute (a cap and trade). This would in theory lead to the desired level of emissions reductions at least cost. While preparing for my lecture, I was thinking I should set up a simple model where profit-maximizing producers of fuels face different policy constraints (e.g., subsidies, RFS, LCFS or a cap and trade), a reasonable demand curve and my giant computer. As so often happens to many of us environmental and energy economists, EI@Haas’ all-star team captain Chris Knittel (MIT) and coauthors had already written the paper, which is titled “Unintended Consequences of Transportation Carbon Policies: Land-Use, Emission, and Innovation”.
[Skip this paragraph if you are not a fan of wonk.] The paper, which is a great and relatively quick read, simulates the consequences of the 2022 US RFS and current ethanol subsidies, and constructs a fictional national LCFS and cap-and-trade (CAT) system, each calibrated to achieve the same carbon savings as the RFS. The paper assumes profit-maximizing firms that face either no policy, the RFS, subsidies, the LCFS or the CAT. Using an impressive county-level dataset on agricultural production and waste, the authors set out to construct supply curves for corn ethanol and six different types of cellulosic ethanol. Chris’ daisy-chained Mac Pros then maximize profits of the individual firms by choosing plant location, production technology, and output conditional on fuel price, biomass resources, conversion and transportation costs. Changing fuel prices and re-optimizing gets them county-level supply curves. Assuming a perfectly elastic supply of gasoline and a constant elasticity demand curve for transportation fuels, they solve for market equilibria numerically.
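To give a flavor of that last step, here is a toy version of solving for a fuel-market equilibrium with a constant-elasticity demand curve. All the parameters are invented, and the paper's actual model optimizes plant locations and technologies county by county:

```python
# Toy fuel-market equilibrium: constant-elasticity demand vs. a simple
# aggregate supply (gasoline baseline plus an ethanol supply curve).
from scipy.optimize import brentq

def demand(p, a=140e9, eps=-0.35):
    """Transport fuel demanded (gal/yr) at price p ($/gal); eps is a
    hypothetical demand elasticity."""
    return a * p ** eps

def supply(p):
    """Hypothetical aggregate supply: fixed gasoline baseline plus ethanol
    entering above a $1.80/gal break-even price."""
    gasoline = 120e9
    ethanol = max(0.0, 15e9 * (p - 1.80))
    return gasoline + ethanol

# Equilibrium price: where excess supply crosses zero.
p_star = brentq(lambda p: supply(p) - demand(p), 0.5, 10.0)
print(f"p* = ${p_star:.2f}/gal, Q* = {demand(p_star) / 1e9:.0f} billion gal/yr")
```

A policy like the RFS or LCFS enters a model like this as a constraint on the fuel mix, while a CAT enters as a price on the carbon content of each fuel, which is why the same carbon savings can come with very different equilibrium prices and quantities.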
They use the results to compare the consequences of each policy type for a variety of measures we might care about. Here is what happens:
The CAT leads to the greatest increase in gas prices and the largest decrease in fuel consumption. It leads to no additional corn ethanol production and slight increases in second-generation biofuels. The RFS and LCFS both lead to less than half the price increase and fuel reduction of the CAT. Both policies see a four- to ninefold increase in corn ethanol production relative to no policy and a massive ramp-up in second-generation biofuels production. All three measures lead to the same reductions in carbon emissions. The subsidies leave fuel costs constant, do not change fuel consumption and lead to a massive increase in first- and second-generation biofuels, but achieve only two-thirds of the carbon reductions of the other policies (because the authors use current subsidy rates rather than artificially higher ones that would deliver the same carbon savings).
Biofuels lead to lower gas prices and equivalent carbon savings! This is the point where biofuels cheerleaders scream "everything is awesome!" But this ain't a Lego movie. Especially since Legos are not made from corn. The paper evaluates the policies along a number of dimensions. First, compare the abatement cost curves for the CAT and the LCFS. When it comes to marginal abatement cost curves, the flatter, the better. What we see in the paper is a radically steeper marginal abatement cost curve for the LCFS than for the CAT. In equilibrium, the marginal abatement cost of the LCFS is almost five times that of the CAT. What about those emissions reductions? What happens in practice is that the CAT gets more of its emissions reductions from reduced fuel consumption (by driving less or driving more efficient cars) and a little bit from fuel switching. Under the LCFS there is much more fuel switching and not much less driving.
What about land use? Well, since the non-CAT policies incentivize ethanol production, significant amounts of crop and marginal lands will be pulled into production.
[Figure 3 from the paper]
The paper shows that total land use for energy crops goes up about tenfold under the biofuels policies and only by about 30% under the CAT. The paper calculates that damages from erosion and habitat loss under these policies can reach up to 20% of the social cost of carbon, compared to essentially 0% for the CAT.
Further, ethanol policies create the wrong incentives for innovation: in some settings the incentives are too strong and in others they are too weak. A further, incredibly clever aspect of the paper is to show that the cost of being wrong about the carbon intensity of different fuels (e.g., getting the indirect land use effect wrong, which is almost certainly the case) can be massive amounts of uncontrolled emissions. The carbon damage consequences of being wrong by 10% on the emissions intensity of corn ethanol are an order of magnitude (read: 10 times!) larger than for the cap and trade. Before I wonk you to death, I will close with some more general thoughts, but staffers of carbon regulators should read this paper. Now.
What this work shows is that, in the case of biofuels, setting a simple universal policy that lets market participants choose the least-cost ways of finding emissions reductions is vastly preferable to complex renewable fuels or low carbon fuels standards. While I understand that producers of ethanol enjoy their subsidies (much like I enjoy my home mortgage interest deduction), this paper argues that they are a bad deal for society. And so is the RFS, as would be a national LCFS. As we go ahead and design a national carbon policy, I would hope that we take to heart the lessons from this paper and the decades of environmental economics insight it builds upon. This does not mean that first- or second-generation biofuels are a bad idea, but if they want to compete for emissions reductions, they need to be fully cost-competitive with other, currently lower-cost alternatives.
[This post originally appeared on the Energy Institute at Haas Blog]