Sunday, August 31, 2014

Commodity Prices: Financialization or Supply and Demand?


I've often panned the idea that commodity prices have been greatly influenced by so-called financialization---the emergence of tradable commodity price indices and growing participation by Wall Street in commodity futures trading. No, Goldman Sachs did not cause the food and oil-price spikes in recent years. I've had good company in this view.  See, for example, Kilian, Knittel and Pindyck, Krugman (also here), Hamilton, Irwin and coauthors, and I expect many others.

I don't deny that Wall Street has gotten deeper into the commodity game, a trend that many connect to  Gorton and Rouwenhorst (and much earlier similar findings).  But my sense is that commodity prices derive from more-or-less fundamental factors--supply and demand--and fairly reasonable expectations about future supply and demand.  Bubbles can happen in commodities, but mainly when there is poor information about supply, demand, trade and inventories.  Consider rice, circa 2008.

But most aren't thinking about rice. They're thinking about oil.

The financialization/speculation meme hasn't gone away, and now bigger guns are entering the fray, with some new theorizing and evidence.

Xiong theorizes (also see Cheng and Xiong and Tang and Xiong) that commodity demand might be upward sloping.  A tacit implication is that new speculation about higher prices could feed higher demand, leading to even higher prices, and an upward spiral.  A commodity price "bubble" could then arise without the accumulation of inventories that many of us have argued a bubble would require.  Tang and Xiong don't actually write this, but I think some readers may infer it (incorrectly, in my view).

It is an interesting and counter-intuitive result.  After all, The Law of Demand is the first thing everybody learns in Econ 101:  holding all else the same, people buy less as price goes up.  Tang and Xiong get around this by considering how market participants learn about future supply and demand.  Here it's important to realize that commodity consumers are actually businesses that use commodities as inputs into their production processes.  Think of refineries, food processors, or, further down the chain, shipping companies and airlines.  These businesses are trying to read crystal balls about future demand for their final products.  Tang and Xiong suppose that commodity futures tell these businesses something about future demand.  Higher commodity futures prices may indicate stronger future demand for their finished products, so they buy more raw commodities, not less.

There's probably some truth to this view.  However, it's not clear whether or when demand curves would actually bend backwards.  And more pointedly, even if the theory were true, it doesn't really imply any kind of market failure that regulation might ameliorate. Presumably some traders actually have a sense of the factors causing prices to spike: rapidly growing demand in China and other parts of Asia, a bad drought, an oil prospect that doesn't pan out, conflict in the Middle East that might disrupt future oil exports, and so on.  Demand shifting out due to reasonable expectations of higher future demand for finished product is not a market failure or the makings of a bubble.  I think Tang and Xiong know this, but the context of their reasoning seems to suggest they've uncovered a real anomaly, and I don't think they have.  Yes, it would be good to have more and better information about product supply, demand and disposition.  But we already knew that.

What about the new evidence?

One piece of evidence is that commodity prices have become more correlated with each other, and with stock prices, with a big spike around 2008, and much more so for indexed commodities than off-index commodities.


This spike in correlatedness happens to coincide with the overall spike in commodity prices, especially oil and food commodities.  This fact would seem consistent with the idea that aggregate demand growth--real or anticipated--was driving both higher prices and higher correlatedness.  This view isn't contrary to Tang and Xiong's theory, or really contrary to any of the other experts I linked to above.  And none of this really suggests speculation or financialization has anything to do with it.  After all, Wall Street interest in commodities started growing much earlier, between 2004 and 2007, and we don't see much out of the ordinary around that time.

The observation that common demand factors---mainly China growth pre-2008 and the Great Recession since then---have been driving price fluctuations also helps to explain the changing hedging profiles and risk premiums noted by Tang and Xiong and others.  When idiosyncratic supply shocks drive commodity price fluctuations (e.g., bad weather), we should expect little correlation with the aggregate economy, and risk premiums should be low, possibly even negative for critical inputs like oil.  But when large demand shocks drive fluctuations, correlatedness becomes positive and so do risk premiums.

None of this is really contrary to what Tang and Xiong write.  But I'm kind of confused about why they see demand growth from China as an alternative explanation for their findings. It all looks the same to me.  It all looks like good old fashioned fundamentals.

Another critical point about correlatedness that Tang and Xiong overlook is the role of ethanol policy.  Ethanol started to become serious business around 2007 and into 2008, making a real if modest contribution to our fuel supply and drawing a huge share of the all-important US corn crop.


During this period, even without subsidies, ethanol was competitive with gasoline.  Moreover, ethanol concentrations hadn't yet hit the 10% blend wall, above which ethanol might damage some standard gasoline engines.  So, for a short while, oil and corn were effectively perfect substitutes, and this caused their prices to be highly correlated.  Corn prices, in turn, tend to be highly correlated with soybean and wheat prices, since they are substitutes in both production and consumption.
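To see how a substitution link alone can generate a correlation spike, here is a minimal simulation with made-up numbers (not actual price data): oil follows its own shocks, and corn is either driven by its own independent weather shocks or tied to oil's energy value through arbitrage.

```python
# Illustrative simulation: price correlation with and without a substitution
# link. All parameter values are invented for the sketch.
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Oil price driven by its own shocks.
oil = [100 + random.gauss(0, 10) for _ in range(500)]

# Pre-ethanol: corn driven by its own (weather) shocks.
corn_independent = [4 + random.gauss(0, 0.5) for _ in range(500)]

# With the ethanol bridge: arbitrage ties corn to oil's energy value,
# plus small corn-specific shocks.
corn_linked = [0.04 * p + random.gauss(0, 0.1) for p in oil]

print(correlation(oil, corn_independent))  # near zero
print(correlation(oil, corn_linked))       # near one
```

The mechanism is mundane: once two goods are near-perfect substitutes, arbitrage transmits one price's shocks to the other, and measured correlation jumps without any speculative behavior.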

With ethanol effectively bridging energy and agricultural commodities, we got a big spike in correlatedness.  And it had nothing to do with financialization or speculation.

Note that this link effectively broke shortly thereafter. Once ethanol concentrations hit the blend wall, oil and ethanol went from being nearly perfect substitutes to nearly perfect complements in the production of gasoline.  They still shared some aggregate demand shocks, but oil-specific supply shocks and some speculative shocks started to push corn and oil prices in opposite directions.

Tang and Xiong also present new evidence on the volatility of hedgers' positions. Hedgers---presumably commodity sellers who are more invested in commodities and want to shift their risk onto Wall Street---have highly volatile positions relative to the volatility of actual output.



These are interesting statistics.  But it really seems like a comparison of apples and oranges.  Why should we expect hedgers' positions to scale with the volatility of output?  There are two risks for farmers: quantity and price.  For most farmers, one is a poor substitute for the other.

After all, very small changes in quantity can cause huge changes in price due to the steep and possibly even backward-bending demand.  And it's not just US output that matters.  US farmers pay close attention to weather and harvest in Brazil, Australia, Russia, China and other places, too.

It also depends a little on which farmers we're talking about, since some farmers have a natural hedge if they are in a region with a high concentration of production (Iowa), while others don't (Georgia).  And farmers also have an ongoing interest in the value of their land that far exceeds the current crop, which they can partially hedge through commodity markets since prices tend to be highly autocorrelated.

Also, today's farmers, especially those engaged in futures markets, may be highly diversified into other non-agricultural investments.  It's not really clear what their best hedging strategy ought to look like.

Anyhow, these are nice papers with a bit of good data to ponder, and a very nice review of past literature.  But I don't see how any of it sheds new light on the effects of commodity financialization. All of it is easy to reconcile with existing frameworks.  I still see no evidence that speculation and Wall Street involvement in commodities is wreaking havoc.

Monday, August 25, 2014

What’s the goal and point of national biofuel regulation?

While preparing a lecture for the 4th Berkeley Summer School in Environmental and Energy Economics, I returned to contemplating the regulation of biofuels as part of a federal strategy to combat climate change and increase energy security. Reviewing the policies used to increase the share of biofuels in the transportation fuels supply across this great land, there are three main approaches: subsidies for the production of ethanol and biodiesel, renewable fuels standards (RFS) and low carbon fuels standards (LCFS).
The two main tools employed at the federal level are subsidies, which essentially provide a per-gallon payment for producing a gallon of a certain type of biofuel, and renewable fuels standards, which require the production of different classes and quantities of biofuels over a prescribed time path. California has employed a low carbon fuel standard, whose goal is to decrease the average carbon content of California’s gasoline by prescribed percentages over time. It relies on life cycle calculations for the carbon content of different fuels and allows producers to choose a mix of different fuels that decreases the average carbon content, thus providing more flexibility in terms of fuels compared to the RFS.
If I were elected the social planner, I would recognize that I am most likely not smarter than the market, but also would not trust the market to make the right decisions when it comes to carbon reductions (see the demonstrated record of markets since 1850). The standard way an economist would approach the problem, assuming that we know what the right amount of carbon abatement is, is to set a cap on emissions and issue tradable rights to pollute (a cap and trade). This would in theory lead to the desired level of emissions reductions at least cost. While preparing for my lecture, I was thinking I should set up a simple model where profit-maximizing producers of fuels face different policy constraints (e.g., subsidies, RFS, LCFS or a cap and trade), a reasonable demand curve and my giant computer. As so often happens to many of us environmental and energy economists, EI@Haas’ all-star team captain Chris Knittel (MIT) and coauthors had already written the paper, which is titled “Unintended Consequences of Transportation Carbon Policies: Land-Use, Emission, and Innovation”.
[Skip this paragraph if you are not a fan of wonk]. The paper, which is a great and relatively quick read, simulates the consequences of the 2022 US RFS and current ethanol subsidies, and constructs a fictional national LCFS and cap-and-trade (CAT) system, each calibrated to achieve the same savings as the RFS. The paper assumes profit-maximizing firms which face either no policy, the RFS, subsidies, the LCFS or the CAT. Using an impressive county-level dataset on agricultural production and waste, the authors set out to construct supply curves for corn ethanol and six different types of cellulosic ethanol. Chris’ daisy-chained Mac Pros then maximize profits of the individual firms by choosing plant location, production technology, and output conditional on fuel price, biomass resources, conversion and transportation costs. Changing fuel prices and re-optimizing gets them county-level supply curves. Assuming a perfectly elastic supply of gasoline and a constant elasticity demand curve for transportation fuels, they solve for market equilibria numerically.
They use the results to compare the consequences of each policy type for a variety of measures we might care about. Here is what happens:
The CAT leads to the greatest increase in gas prices and the largest decrease in fuel consumption. It leads to no additional corn ethanol production and slight increases in second-generation biofuels. The RFS and LCFS both lead to less than half the price increase and fuel reduction compared to the CAT. Both policies see a four- to ninefold increase in corn ethanol production relative to no policy and a massive ramp-up in second-generation biofuels production. All three measures lead to the same reductions in carbon emissions. The subsidies leave fuel costs constant, do not change fuel consumption and lead to a massive increase in first- and second-generation biofuels, but achieve only two thirds of the carbon reductions compared to the other policies (which is due to the authors using current subsidy rates rather than artificially higher ones that would lead to the same carbon savings).
Biofuels lead to lower gas prices and equivalent carbon savings! This is the point where biofuels cheerleaders scream “everything is awesome!” But this ain’t a Lego movie. Especially since Legos are not made from corn. The paper evaluates the policies along a number of dimensions. First, compare the abatement cost curves for the CAT and the LCFS. When it comes to marginal abatement cost curves, the flatter, the better. What we see in the paper is a radically steeper marginal abatement cost curve for the LCFS compared to the CAT. In equilibrium the marginal abatement cost of the LCFS is almost five times higher than that of the CAT. What about those emissions reductions? What happens in practice is that the CAT leads to higher emissions reductions from reduced fuel consumption (by driving less or driving more efficient cars) and a little bit of fuel switching. For the LCFS there is much more fuel switching and not much less driving.
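The intuition behind the steeper LCFS abatement cost curve can be sketched with a toy model; the numbers below are my own illustrative choices, not the paper's. Two abatement channels with quadratic costs are available: driving less and fuel switching. A cap equalizes marginal costs across both channels, while an LCFS-like constraint forces all abatement through the (here, more expensive) fuel-switching channel.

```python
# Toy comparison of a cap (CAT) vs. a fuel-standard-like constraint (LCFS).
# cost_i(a) = 0.5 * c_i * a**2, so marginal cost is c_i * a. Numbers invented.

c1, c2 = 1.0, 4.0   # driving less (c1) assumed cheaper at the margin than fuel switching (c2)
target = 10.0       # total abatement required

# CAT: equalize marginal costs, c1*a1 = c2*a2, subject to a1 + a2 = target.
a2 = target * c1 / (c1 + c2)
a1 = target - a2
cat_cost = 0.5 * c1 * a1**2 + 0.5 * c2 * a2**2

# LCFS-like constraint: all abatement must come through fuel switching.
lcfs_cost = 0.5 * c2 * target**2

print(cat_cost, lcfs_cost)  # the constrained policy is several times costlier
```

With these invented cost parameters the constrained policy comes out five times more expensive for the same abatement, which is the same qualitative gap the paper reports between the LCFS and the CAT.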
What about land use? Well, since the non-CAT policies incentivize ethanol production, significant amounts of crop and marginal lands will be pulled into production.
[Figure 3 from the paper]
The paper shows that total land use for energy crops goes up about tenfold under the biofuels policies and only by about 30% under the CAT. The paper calculates that damages from erosion and habitat loss from these policies can reach up to 20% of the social cost of carbon, compared to essentially 0% for the CAT.
Further, ethanol policies create the wrong incentives for innovation, where in some settings the incentives are too strong and in others they are too weak. A further aspect of the paper, which is incredibly clever, is that they show that being wrong about the carbon intensity of different fuels (e.g., getting the indirect land use effect wrong, which is almost certainly the case) can lead to massive amounts of uncontrolled emissions. The carbon damage consequences of being wrong by 10% about the emissions intensity of corn ethanol are an order of magnitude (read: 10 times!) larger than the corresponding number for the cap-and-trade. Before I wonk you to death, I will close with some more general thoughts, but staffers of carbon regulators should read this paper. Now.
What this work shows is that in the case of biofuels, setting a simple universal policy, which lets market participants choose the least-cost ways of finding emissions reductions, is vastly preferred to complex renewable fuels or low carbon fuels standards. While I understand that producers of ethanol enjoy their subsidies (much like I enjoy my home mortgage interest deduction), this paper argues that they are a bad deal for society. And so is the RFS, as would be a national LCFS. As we go ahead and design a national carbon policy, I would hope that we take the lessons from this paper and the decades of environmental economics insight it builds upon to heart. This does not say that first or second generation biofuels are a bad idea, but if they want to compete for emissions reductions, they need to be fully cost competitive with other and currently lower cost emissions reductions alternatives.
[This post originally appeared on the Energy Institute at Haas Blog]

Tuesday, August 12, 2014

Big swings and shock absorbers

Two years ago when we started this blog, the Midwest was going through a major drought and ended up eking out just above 123 bushels per acre (bu/ac) in corn yield. Today the USDA released its latest projection for 2014, with a forecast for record corn yields of 167.4 bu/ac, due to really good weather (as Wolfram summarized in the last post).

The difference of 44 bu/ac between two years this close together is bigger than anything experienced in the history of US agriculture. The closest thing was in 1994, when yields were 139 bu/ac after just 101 in the flood year of 1993. When expressed as a percentage swing, to account for the fact that yields overall have been going up, the swing from 2012 to 2014 (36%) is near historic highs but still less than some of the swings seen in the 1980s and early 1990s (see figure below).
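The swing numbers above are simple arithmetic on the two yields quoted in the post, and can be reproduced directly:

```python
# Back-of-envelope swing calculation using the yields quoted in the post.
yield_2012, yield_2014 = 123.0, 167.4   # bu/ac

swing = yield_2014 - yield_2012          # absolute swing in bu/ac
pct = swing / yield_2012 * 100           # percentage swing off the 2012 base

print(round(swing, 1), round(pct))  # 44.4 bu/ac, about a 36% swing
```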


We’ve talked a lot on this blog about what contributes to big swings in production, and why it shouldn’t be surprising to see them increase over time. Partly it’s because really bad years like 2012 become more likely as warming occurs, and partly it’s because farmers are getting even better at producing high yields in good conditions. Sometimes I think of good weather like a hanging curveball – a decent hitter will probably manage a hit, but a really good hitter will probably hit a home run. So the difference between good and bad weather grows as farmers get better at hitting the easy pitches.

Moving on from bad analogies, the point of this post is to describe some of the changes we might see as the world comes to grips with more volatile production. What kind of shock absorbers will farmers and markets seek out? Four come to mind, though I can’t be sure which if any will actually turn out to be right. First, and most obvious, is that all forms of grain storage will increase. There are some interesting reports of new ways farmers are doing this on-site with enormous, cheap plastic bags. We have a working paper coming out soon (hopefully) on accounting for these storage responses in projections of price impacts.

Second will be new varieties that help reduce the sensitivity to drought. Mark Cooper and his team at Pioneer have some really interesting new papers here and here on their new Aquamax seeds, describing the approach behind them and how they perform across a range of conditions.

A third response is more irrigation in areas where it hasn’t been very common. I haven’t seen any great data on recent irrigation for grain crops throughout the Corn Belt, but I’ve heard some anecdotes that suggest it is becoming a fairly widespread insurance strategy to boost yields in dry years.

The fourth is a bit more speculative, but I wouldn’t be surprised to see new approaches to reducing soil evaporation, including cheap plastic mulches. There are some interesting new papers here and here testing this in China showing yield gains of 50% or more in dry years. Even in the Midwest, where farmers practice no-till and crops cover the ground fairly quickly, as much as a third of the water in the soil is commonly lost to evaporation rather than via plant uptake. That represents a big opportunity cost, and if the price of grain is high enough, and the cost of mulch is low enough, it’s possible that it will take hold as a way of raising yields in drier years. So between storage and mulching, maybe “plastics” is still the future.

Sunday, August 3, 2014

2014 - colder and slightly wetter than average US growing conditions

Futures prices for corn have been decreasing a lot - for example, the December 2014 contract for corn has decreased by more than 25% between the beginning of May and the beginning of August (chart from CME), indicating that the market either anticipates increased supply or a drop in demand, likely the former.  Maybe good weather will give the US, which produces 40% of the world's corn, a bumper crop.

Below are the weather updates for 2014 through the end of July.  Similar to earlier posts, I graphed the cumulative degree days above 29C, which have been shown to be very detrimental to corn growth. This measure counts how much temperatures exceed 29C and for how long. For example, being half a day at 33C would result in 2 degree days above 29C (0.5 days x 4C). These are weighted averages over all counties in the United States, where each county's weight is proportional to expected production (expected yield according to trend times last reported growing area).  Areas with higher yields and larger growing areas get weighted more heavily.
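The degree day arithmetic can be sketched in a few lines; the temperatures and durations below are made-up examples, not the USDA data behind the graphs:

```python
# Sketch of the degree-day calculation described above.
# Observations are (temperature_c, fraction_of_day) pairs.

def degree_days_above(threshold_c, observations):
    """Sum of (temperature - threshold) * duration over all observations
    that exceed the threshold; time below the threshold contributes zero."""
    return sum((t - threshold_c) * d for t, d in observations if t > threshold_c)

# Half a day at 33C: (33 - 29) * 0.5 = 2 degree days, as in the post.
print(degree_days_above(29, [(33, 0.5)]))  # 2.0

# A hotter stretch accumulates faster: adding a full day at 35C contributes 6 more.
print(degree_days_above(29, [(33, 0.5), (35, 1.0)]))  # 8.0
```

The production-weighted national series in the graphs would then just be a weighted average of each county's cumulative total, with weights proportional to expected production.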

Grey lines show the historic distribution from 1950-2011, while the last three years are shown in color.  The red line shows how hot July 2012 was - notice its sharp increase in July.  By comparison, 2014 (green line) has the second-lowest end-of-July total we have observed in the last 65 years.  This should be great for corn yields, as there hasn't been much damaging heat.

At the same time, it has also been slightly wetter than average.  Since both too wet and too dry are harmful, having a close-to-average amount of rain is good for crops as well.

Together, these two graphs suggest that 2014 will be a very plentiful harvest.

Wednesday, June 25, 2014

Adaptation Illusions

For some reason, the topic of adaptation can sometimes turn me into a grumpy old man. Maybe it’s because I’ve sat through hours of presentations about how someone’s pet project should be funded under the name of climate adaptation. Or maybe it’s the hundreds of times people say that adaptation will just fix the climate problem, with no actual evidence to support that other than statements like “corn grows just fine in Alabama.” Or maybe it’s just that I’m really turning into a grumpy old man and I like excuses to act that way.

In any case, I was recently asked to write a piece for Global Food Security, a relatively new journal run by Ken Cassman that I’m on the editorial board for. I used it as a chance to articulate some of the reasons I am unconvinced by most studies trumpeting the benefits of adaptation. That’s not to say I don’t think adaptation can be effective, just that most of the “evidence” for it is currently quite weak. In fact, I think some adaptations could be quite effective, which is why it’s so important to not spend time on dead ends. The paper is here, and abstract below.


"Climate change adaptation in crop production: Beware of illusions"
A primary goal of studying climate change adaptation is to estimate the net impacts of climate change. Many potential changes in agricultural management and technology, including shifts in crop phenology and improved drought and heat tolerance, would help to improve crop productivity but do not necessarily represent true adaptations. Here the importance of retaining a strict definition of adaptation – as an action that reduces negative or enhances positive impacts of climate change – is discussed, as are common ways in which studies misinterpret the adaptation benefits of various changes. These “adaptation illusions” arise from a combination of faulty logic, model errors, and management assumptions that ignore the tendency for farmers to maximize profits for a given technology. More consistent treatment of adaptation is needed to better inform synthetic assessments of climate change impacts, and to more easily identify innovations in agriculture that are truly more effective in future climates than in current or past ones. Of course, some of the best innovations in agriculture in coming decades may have no adaptation benefits, and that makes them no less worthy of attention.

Tuesday, June 24, 2014

American Climate Prospectus, feeling the heat

I've had the pleasure of working with a team of brilliant people on the American Climate Prospectus: Economic Risks in the United States, which was released today. It was the independent research report commissioned by the Risky Business initiative, and which underlies that group's conclusions.  I'm excited to share and discuss with G-FEEDers the numerous ideas, questions, challenges and results that arose in the process of the analysis, since until now we haven't had a chance to discuss the work with the larger community. I will try to post on these various issues in the coming weeks, but for now I'd like to [sleep and] simply kick off the social media rodeo with my favorite graphic from the entire report (which has no economics in it!). If we've done our job right, it needs no explaining. The only hint for the non-wonks is that RCP 8.5 means "business as usual" climate change (or "A1B" if your wonk is out of date).


Source: American Climate Prospectus

I win 20 bucks if this goes viral, so please repost it.

Saturday, June 7, 2014

Regulating CO2 emissions and firm competitiveness

EPA announced this week the Clean Power Plan to reduce carbon emissions of coal-fired power plants by 30% by 2030 (relative to 2005 levels).  It has created a lot of discussion, with environmental groups praising it as an overdue step and some businesses pointing to the cost of such regulation.

One thing to note is that EPA was smart to give states a lot of flexibility in meeting the 30% reduction.  In general, more flexibility should allow regulated plants to meet the goal in the most cost-effective way. Max Auffhammer called this the Yoga Theorem, i.e., the "less flexible you are, the more you will suffer," which I can personally relate to (you don't want to see me stretch).

Most of the research we have talked about on this blog has estimated the effects of increasing temperatures that are caused by elevated CO2 levels (or other greenhouse gases like methane). Having spent so much time on the benefit side of greenhouse gas regulation (avoided damages), it might be interesting to look at some research on the cost side of such regulation. As any student of econ 101 remembers, in the optimum one should balance the costs of regulation against the benefits.

Making a factor of production (coal) more expensive will have two effects: first, there will be a shift away from coal, which will be especially felt in regions that rely on coal production (e.g., West Virginia).  While electricity generation had been shifting from coal to natural gas even before the regulation was implemented, the regulation will likely accelerate this transition. So there will be losers compared to the status quo (one could also argue that coal has an unfair advantage in the status quo, as it imposes a sizable unpriced externality, which makes a factor of production artificially cheap).

The best research on the cost of environmental regulation is a recent paper by Reed Walker, where he compares wages of individuals who work in industries in a county that are subject to tougher regulation under the revised ozone standard to workers in the same county in industries that are not covered by the more stringent regulation. The combined losses in wages are sizable ($5 billion in present discounted value), but order(s) of magnitude lower than the estimated benefits.  The largest costs are incurred by people who used to work at firms subject to tougher regulation and lose their jobs---once they find a new job, wages tend to be lower for up to eight years.  Workers who stay at regulated firms see no drop in wages.

Second, regulating coal will make production that uses electricity (pretty much all production) more expensive, as electricity prices might rise as a result of the regulation.  How big is this second effect? We have some evidence from Europe, where the EU Emissions Trading System (ETS) created a market for carbon credits.  Similar to the EPA regulation that only covers coal-fired power plants, the ETS only covered a subset of firms.  This offers a nice way for researchers to compare regulated and unregulated firms.

Sebastian Petrick and Ulrich Wagner have a great new paper that uses firm-level data from Germany and compares what happens to firms regulated under the ETS by matching them to comparable firms that were not regulated.  Not surprisingly, regulated firms reduce emissions by 20% more than unregulated firms (the regulation is effective), but there seems to be no reduction in the competitiveness of regulated firms, i.e., they find no reduction in employment, gross output, or exports.

In summary, my best guess is that the carbon regulation will have very little effect on overall competitiveness or employment in the US as a whole, but will be felt in some regions that heavily rely on coal.

Thursday, June 5, 2014

Adaptation with an Envelope


Economists like to emphasize how people and businesses will adapt to climate change.  On a geological scale the world is warming very fast.  But on a human scale it is warming slowly, so we can easily adjust infrastructure and management decisions to the gradually changing climate.  For example, in agriculture, farmers can gradually adjust planting times, cultivars, locations where they grow crops, and so on.

So how much does adaptation really buy us?  As it turns out, probably very little, at least in most contexts. 

Since it is economists who often emphasize this point, sometimes even intimating that otherwise negative impacts could turn positive with adaptation, perhaps we should pause for a moment to consider what basic microeconomic theory says about it.  And we have a ready-made tool for the job, called the envelope theorem (or here), that provides essential insight.

I'll try to make this intuitive, but it helps to be a little formal.  Suppose agricultural yield is:

y = f(x, r)

where r indicates climate and x represents farmers' decisions.  I'm just using notation from a generic case in the second link above.

Farmers' decisions are not random.  With time and experience, we should expect farmers to optimize decisions for their climate.  Call these optimal decisions x*(r).  So, the outcomes we observe in practice are

y*  =  f(x*(r), r)

Now, to obtain a first-order approximation of the effect of climate change on yield, we need to find dy*/dr, which is just a fancy way of saying the marginal change in observed yield for a small change in climate.  Multiply this marginal change by the total change in climate (the change in r) , and we get a first-order approximation to the total impact.

If you've taken basic calculus, you learned the chain rule, which says that:

dy*/dr  =  (df/dx)(dx/dr) + df/dr

If the farmer is optimizing, however, df/dx=0.  The farmer cannot improve yield outcomes by changing decisions, because s/he's already optimizing.  So

dy*/dr = df/dr

And this gives the heart of the envelope theorem: to a first approximation, we don't need to worry about changes in behavior (dx/dr, or adaptation) to evaluate the effect of climate change on output.  The fact that behavior is already optimized means that behavioral adjustments will be second-order.
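The argument is easy to check numerically. Below is a minimal sketch using a made-up quadratic yield function (my own example, not from any particular study); the point is only that re-optimizing behavior after a small climate change adds a term of order dr squared, exactly as the envelope theorem says.

```python
# Envelope theorem check with an invented yield function
# f(x, r) = -(x - r)**2 + 2*r. The optimal decision is x*(r) = r,
# so the observed (optimized) yield is f*(r) = 2*r.

def f(x, r):
    return -(x - r) ** 2 + 2 * r

def x_star(r):
    return r  # the yield-maximizing decision for climate r

r0, dr = 1.0, 0.01  # baseline climate and a small climate change

# Total effect: the farmer re-optimizes x after the climate shifts.
total = f(x_star(r0 + dr), r0 + dr) - f(x_star(r0), r0)

# Envelope approximation: hold behavior fixed at the old optimum.
fixed_x = f(x_star(r0), r0 + dr) - f(x_star(r0), r0)

print(total, fixed_x)  # the two differ only at order dr**2
```

Here the full re-optimized impact is 2*dr = 0.02, while holding behavior fixed gives 0.02 - dr**2 = 0.0199: the adaptation term is a hundred times smaller than the direct climate effect, and shrinks quadratically as dr shrinks.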

Here's an illustration of the math, lifted from the link above. The black and blue curves hold farmers' decisions fixed at different levels of x, each optimized at a particular r.  The f*(r) (or y*) we observe is the "upper envelope" of all the blue and black curves with their different, optimized levels of x.


Now, if  f*(r) is highly nonlinear, and we are contemplating a very large change in r, then adaptation will come into play. But even then it's not going to be a primary consideration.

I don't expect this basic insight, drilled into every economist during their first year of grad school (and even some undergraduates), will stop some economists from over-emphasizing adaptation.  But our own basic theory nevertheless indicates it is a small deal.  And it seems to me that the evidence so far bears this out just as clearly as the theory does.

Thursday, May 22, 2014

Regressing to the mean

As a general rule, people like to take credit when good things happen to them, but blame bad luck when things go wrong. This probably helps us all get through the day, feeling better about ourselves and other people we like. For example, I’d like to think that the paper rejection I got last month was bad luck, while the award I got last year was totally deserved. But anyone who pays attention to professional sports, or the stock market, knows that success in any single day or even year has a lot to do with luck. It’s not that luck is all you need, but it often makes the difference between very evenly-matched competitors. That’s why people or teams who perform particularly well in one year tend to drop back the next, a.k.a. regressing to the mean.

Take tennis. There’s clearly a skill separation between professionals and amateurs, and between the top four or five professionals and everyone else. But among the top, it’s hard to know who will win on any day, and it’s very hard to sustain a streak of victories against other top players. So even when someone like Rafael Nadal, who has some remarkable winning streaks, talks about how he got a few key bounces, he’s as much being an astute observer as a gracious winner.

Or the stock market. It’s well known that even the best fund managers have a hard time outperforming the market for a long time. Even if someone has beaten it five years in a row, there’s a good chance it was just luck given how many fund managers are out there. I don’t tend to watch interviews with fund managers as much as athletes, but something tells me they might be a little less inclined to turn down the credit.
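The "good chance it was just luck" claim is simple arithmetic.  If beating the market in a given year were a fair coin flip, then even a modest population of managers (the 1,000 below is a made-up number) would produce plenty of five-year streaks by chance:

```python
# if beating the market were a pure coin flip each year, how many of
# n managers would post a five-year winning streak on luck alone?
n_managers = 1000        # illustrative; the real population is far larger
p_streak = 0.5 ** 5      # probability of five wins in a row = 1/32
expected_streaks = n_managers * p_streak
print(expected_streaks)  # 31.25: about 31 "star" managers from pure luck
```

So a five-year track record, viewed in isolation, says surprisingly little about skill.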

So what about agriculture? It’s not a source of entertainment or income for most of us, so we don’t spend much time thinking about it, and you won’t find any posts on fivethirtyeight about it. But one thing that struck me early on working in agriculture is how farmers are just as prone to thinking they are above average as the rest of us. More specifically, if you ask why some fields around them don’t look as good, they will talk about how that farmer is lazy, has another job, doesn’t take care of his soil, etc.

As far as I can tell, this isn’t a purely academic question. Understanding how much farmers vary in their ability to consistently produce a good crop is important if you want to know the best ways of improving agricultural yields. If there really are a bunch of star performers out there, then letting them take over from the laggards by buying up their land, or training other farmers to use best-practices, could be a good source of growth in the next decade. For example, here’s a cool presentation about a new effort in India to have farmers spread videos of best practices through social networks.

There’s a fairly obvious but not perfect link to the idea of yield gaps. People who say that closing yield gaps is a big “low-hanging fruit” for yield improvement often have a vision of better agronomy being able to drastically raise yields. The link isn’t perfect, because it could be that even the best farmers in a region are underperforming, agronomically speaking, for instance if fertilizers are very expensive. And it could be that some farmers consistently out-perform not because of better management, but because they are endowed with better soils (though this can be sorted out partly by using data on soil properties). Even with these caveats, understanding how much better the “best” farmers truly are could help give a more realistic view of what could be achieved with improved agronomy.

The key here is the “how much” part of it. Nobody can argue that some farmers aren’t better than others, just as nobody can say that Warren Buffett isn’t better than an average fund manager. The question is whether this is a big opportunity, or if it’s best to focus efforts elsewhere. I’ve been trying to get a handle on the “how much” over the years by using satellites to track yields over time. I’ve used various ways of trying to display this in a simple way, but nothing was too satisfying. So let me give it another shot, based on some suggestions from Tony Fischer during a visit to Canberra.

The figures below show wheat yield anomalies (relative to average) for fields ranked in the top 10% for the first year we have satellite data (green line). Each panel is for a different area, and soils don’t vary too much within each area. Then we track those fields over the following years to see if those fields are able to consistently outperform their neighbors. Similarly we can follow fields in any other group, and I show both the bottom decile (0-10% in blue) and the fifth (40-50% in orange). The two horizontal lines show the mean yield anomaly in the first group for the year they ranked in the top, and their mean yield anomaly in all the other years. If it was all skill (or something else related to a place like the soil quality) the second line would be on top of the first. If it was all luck, the second line would be at zero.


So what’s the verdict? The top performers in the first year definitely show signs of regressing to the mean, as their mean yield drops much closer to the overall average in other years. Similarly, the worst performers “regress” back up toward the mean. But neither jumps all the way back to zero, which says that some of the yield differences are persistent. In the two left panels, the anomalies are a little more than one-third the size they were in the initial year. In the right panel, the anomalies are about half their original value, indicating relatively more persistence. That makes sense since we know the right panel is a region where some farmers consistently sow too late (see this paper for more detail).
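The pattern in these figures is easy to mimic with synthetic data.  In this sketch (all the variances are arbitrary choices for illustration, not estimates from our data), each field’s yield anomaly is a persistent field effect plus fresh yearly luck, and we track the first-year top decile in later years:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fields, n_years = 500, 6

# persistent field effect (soil, management) plus fresh yearly luck
# (weather, pests); the variances here are purely illustrative
field_effect = rng.normal(0.0, 0.3, n_fields)
anoms = field_effect[:, None] + rng.normal(0.0, 0.6, (n_fields, n_years))

# rank fields on year 1 and keep the top decile
top10 = anoms[:, 0].argsort()[-n_fields // 10:]

first_year = anoms[top10, 0].mean()    # analogous to the upper horizontal line
later_years = anoms[top10, 1:].mean()  # analogous to the lower horizontal line

# the group regresses toward zero, but not all the way
print(first_year, later_years)
```

With these made-up variances the group keeps only part of its initial advantage, which is the qualitative pattern in the panels above; how much persists depends on the ratio of the persistent variance to the noise variance.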

So a naive look at any snapshot of performance over a year or two would be way too optimistic about how big the exploitable yield gap might be. It’s important to remember that performance tends to regress to the mean. At the same time, there are some consistent differences that amount to roughly 10% of average yields in these regions (where mean yield is around 5-6 tons/ha). And with satellites we can pinpoint where the laggards are and target studies on what might be constraining them. At least that's the idea behind some of our current work. 


Whether other regions would look much different than the three examples above, I really don’t know. But it shouldn’t be too hard to find out. With the current and upcoming group of satellites, we now have the ability to track performance on individual fields in an objective way, which should serve as a useful reality check on discussions of how to improve yields in the next decade. 

Tuesday, May 13, 2014

Miraculous growth in Africa?


There has been a lot of chatter over the last few years about the purported African growth "miracle" -- the stylized fact that African economies seem to have really picked up speed over the last decade.  I am not really certified to comment on whether something is a miracle (maybe that certification comes with the genius award?), but reviewing the broader patterns and determinants of recent African growth seemed like a useful exercise to this blogger, and hopefully will be as well to the vaunted G-FEED readership. 

Seems like there are two basic questions:

  1. How has Africa (and here we really mean Sub-Saharan Africa) performed relative to how it has in the past, or to how other countries have recently?
  2. What explains these patterns of performance?
To answer question #1, I pulled the most recent national accounts data on per capita GDP from the World Bank's World Development Indicators.  These are plotted below. On the left is how real GDP per capita has evolved in SSA over the last half century, and on the right is the growth rate in real GDP per capita for SSA (shown in red) as well as a number of other developing and developed regions.  
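For reference, the growth rates in the right plot are just year-over-year changes in the per capita series, which log differences approximate nicely.  A minimal sketch (the numbers below are made up for illustration, not actual WDI values):

```python
import numpy as np

# illustrative real GDP per capita series (constant US$); NOT actual WDI data
gdp_pc = np.array([520.0, 530.4, 541.0, 551.8, 562.8])

# annual growth rates as log differences, which approximate percentage changes
growth = np.diff(np.log(gdp_pc))
print(growth.mean())  # average annual per capita growth, about 0.02 (i.e. 2%)
```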

A few things are immediately apparent.  First, Africa does seem to have had a good decade, at least relative to the few before it. Real per capita GDP growth has averaged around 2% for the entire decade, and average per capita incomes across Sub-Saharan Africa are now higher than they have ever been in history. Second, though, Africa's recent performance does not appear to be a "miracle" relative to what other countries have accomplished, as can be seen by comparing recent African growth rates with growth in other developing regions in the right plot.  Nor is its performance an obvious miracle relative to what it has accomplished during past decades.  The continent has had growth surges before, for instance growing at about 2%/year in per capita terms for most of the 1960s and 1970s.  Indeed, per capita incomes only recovered their previous mid-70s peak after about 2005, and are now only slightly higher than they were 30 years ago. 

Some people do not trust these national accounts data and have tried to use other data sources to get at the same question.  For instance, Alwyn Young uses wage and asset data from the Demographic and Health Surveys to reconstruct how consumption has changed over time, and concludes that the story is even better than what national accounts data would suggest (but these findings have been seriously questioned, for instance here).  Henderson, Storeygard, and Weil (2012) in a really cool recent paper use satellite data on the brightness of nighttime lights as a proxy for economic activity.  Although they do not report a SSA-wide growth number, they show that GDP growth derived from night lights is substantially higher than national accounts estimates for a substantial number of African countries (although not all of them), perhaps again suggesting that the national accounts data understate the improvements.

So the overall conclusion seems to be that while recent African performance is not off-the-charts good, either compared to past history on the continent or to other regions over the last few decades, the continent did seem to have a good decade. 

So on to question 2:  what explains the uptick in recent performance?  Here the rhetoric about miracles seems particularly unhelpful, given that it's not clear that we'd want Africa's recent performance to be a "miracle" in the Merriam-Webster sense of "an extraordinary event manifesting divine intervention in human affairs".  It would be nice if recent growth were instead a function of concerted improvements in some of the suspected constituents of growth:  institutions that protect and support political and economic rights, policies that help capital flow to high-return investments, long term investments in agriculture and education that have started to bear fruit, etc. 

Perhaps not surprisingly, it turns out there is still a lot of disagreement over why Africa has done better in the 2000s relative to the 1980s and 1990s. 

Here's the upbeat news from the consultancy McKinsey in an attractively titled report "Lions on the move": "Africa's economic pulse has quickened, infusing the continent with a new commercial vibrancy....  Africa's growth acceleration resulted from more than a resource boom.  Arguably more important were government actions to end political conflicts, improve macroeconomic conditions, and create better business climates, which enabled growth to accelerate broadly across countries and sectors."

Compare that to the view of long-time Africa hand Michael Lipton:  "Stop kidding ourselves. Faster GDP growth in Africa since 2000 is mainly a mining boom, with dubious benefits. Staples yields (and labour productivity) have not reversed the dismal trends that Peter Timmer diagnosed two decades ago: big, credible rises are seen in only a few African countries (e.g. Rwanda, Ghana). The populous ones (Ethiopia, Nigeria, maybe Kenya, above all DR Congo) tell a sad tale."

National accounts data can again be used to shed some light on this debate.  Below are a couple plots from a periodic Africa-focused World Bank report called "Africa's Pulse".  The top one shows how resource-rich countries performed relative to non-resource rich countries, and the bottom one shows the sectoral composition of growth in the fast-growing African countries.




These data seem to suggest a story in between the Lipton view and the McKinsey view:  growth was substantially higher in resource-rich countries (both oil and non-oil) relative to non resource rich countries, although as the report explains there were some notable performers in the latter group (Rwanda, Ethiopia). And as shown in the bottom plot, all sectors grew over the period since the mid 1990s.  For those interested in agriculture, there is also recent evidence that agricultural productivity was also on the rise in many, though not all, African countries;  total factor productivity (total outputs per total inputs) improved in Eastern and Southern Africa over the last few decades but was flat or declined in much of Central Africa, the Sahel, and the Horn. 

But just looking at broader sectoral patterns of growth does not really answer the fundamental question about what caused growth.  For instance, observed "broad based" patterns of growth -- i.e. the fact that resource sectors and non-resource sectors (services, manufacturing) both appeared to be growing -- could still be consistent with resource booms that increase demand for local non-tradeables.  Indeed, the fact that services grew much faster than manufacturing (and, with resources, grew as a proportion of the economy while ag and manufacturing shrank) is some evidence that this might be the case.  Was it just the case that higher mineral prices gave miners more money and they decided to buy more sandwiches and haircuts?  It's hard to know. 

Similarly, concluding that productivity growth was driven by a declining share of the labor force in agriculture, as argued by a recent NBER paper, does not necessarily tell us what drove this declining share. The fact that most people who left farming ended up in the informal services sector is again not obvious evidence of broad-based or sustainable growth.

Maybe some readers know of better-identified studies of the sources of recent productivity growth in African economies?  In the absence of these studies, it seems right to conclude that Africa is doing better, but it seems pretty hard to know what to make of this improvement -- where it came from, and whether it might last.