Friday, December 21, 2012

The good and bad of fixed effects

If you ever want to scare an economist, the two words "omitted variable" will usually do the trick. I was not trained in an economics department, but I can imagine they drill it into you from the first day. It’s an interesting contrast to statistics, where I got much of my training, and where the focus is much more on out-of-sample prediction skill. In economics, showing causality is often the name of the game, and it’s very important to make sure a relationship is not driven by a “latent” variable. Omitted variables can still be important for out-of-sample skill, but only if their relationships with the model variables change over space or time.

A common way to deal with omitted variable bias is to introduce dummy variables for space or time units. These “fixed effects” greatly reduce (but do not completely eliminate) the chance that a relationship is driven by an omitted variable. Fixed effects are very popular, and some economists seem to like to introduce them to the maximum extent possible. But as any economist can tell you (another lesson on day one?), there are no free lunches. In this case, the cost of reducing omitted variable problems is that you throw away a lot of the signal in the data.

Consider a bad analogy (bad analogies happen to be my specialty). Let’s say you wanted to know whether being taller caused you to get paid more. You could simply look at everyone’s height and income, and see if there was a significant correlation. But someone could plausibly argue that omitted variables related to height are actually causing the income variation. Maybe very young and old people tend to get paid less, and happen to be shorter. And women get paid less and tend to be shorter. And certain ethnicities might tend to be discriminated against, and also be shorter. And maybe living in a certain state that has good water makes you both taller and smarter, and being smarter is the real reason you earn more. And on and on and on we could go. A reasonable response would be to introduce dummy variables for all of these factors (gender, age, ethnicity, location). Then you’d be looking at whether people who are taller than average given their age, sex, ethnicity, and location get paid more than an average person of that age, sex, ethnicity, and location.

In other words, you end up comparing much smaller changes than if you were to look at the entire range of data. This helps calm the person grumbling about omitted variables (at least until they think of another one), and would probably be OK in the example, since all of these things can be measured very precisely. But think about what would happen if we could only measure height and income with 10% error. Taking out the fixed effects means removing a lot of the signal but not any of the noise, which means in statistical terms that the power of the analysis goes down.
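To see how much power you can lose, here's a quick simulation (a sketch with made-up numbers, not real height data): the group fixed effects soak up most of the signal in the predictor but none of the measurement error, and the t-statistic collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 50, n)                        # 50 hypothetical locations
x_true = rng.normal(0, 3, 50)[group] + rng.normal(0, 1, n)  # signal mostly between groups
y = 0.5 * x_true + rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, 1, n)                  # measurement error, untouched by FE

def demean(v, g):
    """Subtract group means, i.e. absorb group fixed effects."""
    out = v.astype(float).copy()
    for k in np.unique(g):
        out[g == k] -= v[g == k].mean()
    return out

def tstat(x, y):
    """t-statistic on the slope of a univariate OLS fit (FE degrees of freedom ignored)."""
    x, y = x - x.mean(), y - y.mean()
    b = (x @ y) / (x @ x)
    e = y - b * x
    return b / np.sqrt((e @ e) / (len(x) - 2) / (x @ x))

t_pooled = tstat(x_obs, y)
t_fe = tstat(demean(x_obs, group), demean(y, group))
print(round(t_pooled, 1), round(t_fe, 1))             # the FE estimate has far less power
```

The pooled regression barely notices the measurement error (most of the variance in x is real signal), while after demeaning the noise makes up half the remaining variance.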

Now to a more relevant example. (Sorry, this is where things may get a little wonkish, as Krugman would say.) I was recently looking at some district-level data on weather and nutritional outcomes in India that colleagues at Stanford and I are analyzing. As in most developing countries, the weather data in India are far from perfect. And as in most regression studies, we are worried about omitted variables. So what is the right level of fixed effects to include? Inspired by a table in a recent paper by some eminent economists (including a couple who have been rumored to blog on G-FEED once in a while), I calculated the standard deviation of residuals from regressions on different levels of fixed effects. The 2nd and 3rd columns in the table below show the results for summer (June-September) average temperatures (T) and rainfall (P). Units are not important for the point, so I’ve left them out:

[Table: rows are raw data (no FE), Year FE, Year + State FE, and Year + District FE; columns are the residual SDs of T and P and the cross-dataset correlations for T and P]

The different rows here correspond to the raw data (no fixed effect), after removing year fixed effects (FE), year + state FE, and year + district FE. Note how including year FE reduces P variation but not T, which indicates that most of the T variation comes from spatial differences, whereas a lot of the P variation comes from year-to-year swings that are common to all areas. Both get further reduced when introducing state FE, but there’s still a good amount of variation left. But when going to district FE, the variation in T gets cut by nearly a factor of 10, from 2.2 to 0.30! That means the typical temperature deviation a regression model would be working with is less than a third of a degree Celsius. 
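For those curious, the calculation behind a table like this is simple: sweep out the group means for each set of fixed effects and take the standard deviation of what's left. A sketch in Python (the column names and the toy data are invented; the real analysis obviously uses the actual India panel):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the district panel; column names are invented
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "year": rng.integers(1990, 2010, n),
    "district": rng.integers(0, 40, n),
})
df["T"] = 25 + 0.1 * df["district"] + rng.normal(0, 0.3, n)  # variation mostly spatial

def resid_sd(df, var, fe):
    """SD of `var` after sweeping out group means for each factor in `fe`.
    (One pass of alternating demeaning; exact multi-way FE residuals would iterate.)"""
    x = df[var] - df[var].mean()
    for f in fe:
        x = x - x.groupby(df[f]).transform("mean")
    return x.std()

for fe in [[], ["year"], ["year", "district"]]:
    print(fe, round(resid_sd(df, "T", fe), 2))
```

Because the toy T varies mostly across districts, the residual SD barely moves under year FE and then collapses under district FE, mirroring the pattern in the table.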

None of this is too interesting, but the 4th and 5th columns are where things get more related to the point about signal to noise. There I’m computing the correlation between two different datasets of T or P (details of which ones are not important). When there is a low correlation between two datasets that are supposed to be measuring the same thing, that’s a good indication that measurement error is a problem. So I’m using this correlation here as an indication of where fixed effects may really cause a problem with signal to noise.

Two things to note. First, the precipitation data seem to have a lot of measurement issues even before taking out any fixed effects. Second, temperature seems OK even once state fixed effects are introduced (a correlation of 0.842 indicates some measurement error, but still more signal than noise). But when district effects are introduced, the correlation plummets by more than half.
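Why does the correlation between two datasets plummet under district fixed effects? Both datasets share the true signal, but their measurement errors are independent, so once the fixed effects absorb the shared signal the noise dominates. A toy simulation (all magnitudes invented) makes the point:

```python
import numpy as np

rng = np.random.default_rng(2)
districts = np.repeat(np.arange(40), 20)              # 40 districts x 20 years
true_T = rng.normal(0, 2.0, 40)[districts] \
       + rng.normal(0, 0.3, districts.size)           # big spatial, small temporal signal
dataA = true_T + rng.normal(0, 0.2, districts.size)   # two datasets, independent errors
dataB = true_T + rng.normal(0, 0.2, districts.size)

def demean(v, g):
    """Absorb group (district) fixed effects by subtracting group means."""
    out = v.copy()
    for k in np.unique(g):
        out[g == k] -= v[g == k].mean()
    return out

corr_raw = np.corrcoef(dataA, dataB)[0, 1]            # high: shared signal dominates
corr_fe = np.corrcoef(demean(dataA, districts),
                      demean(dataB, districts))[0, 1]  # lower: noise share rises
print(round(corr_raw, 2), round(corr_fe, 2))
```

The raw series agree almost perfectly because the big spatial differences swamp the noise; after demeaning, only the small temporal signal is left to compete with the same amount of noise.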

The take-home here is that fixed effects may be valuable, even indispensable, for empirical research. But like turkey at Thanksgiving, or presents at Christmas, more of a good thing is not always better.

UPDATE: If you made it to the end of this post, you are probably nerdy enough to enjoy this related cartoon in this week's Economist.

Wednesday, December 12, 2012

What poop tells us about the social impacts of climate change

A growing literature in paleoclimate and archeology explores the extent to which past fluctuations in climate have shaped the evolution of human societies.  These papers get to tackle pretty sexy topics:  did climate help cause the collapse of the Maya?  Is climate implicated in dynastic transitions in China? How about in the fall of Angkor Wat?

That a lot of these papers are answering "yes" to the question of whether climate is implicated in large historical social upheavals could tell us something important about the impact of future climatic changes on social outcomes.  But there are a few things you might worry about in this literature.  One is whether the studies are actually measuring what they say they're measuring -- i.e. that they're picking up meaningful variation in human activity, and that changes in societies and in climate happened when they say they did.  Most of the published papers you see on these topics spend most of their time convincing you that this is the case, and given my mere hobbyist's understanding of paleolimnology I have to take them at their word.

The second concern is one that is more familiar to folks that are used to running regressions:  can we say with certainty that the variations in climate are causally linked to the socioeconomic variation of interest?  The hard part with these papers is that they're often dealing with one-off events -- e.g. the collapse of the Maya -- that don't give you the repeated observations you need to carry out the typical statistical tests.  Basically, you'd be worried that even though the collapse event you measured was coincident with a large climate shock, by chance something unobserved might also have happened at the same time that in fact caused the collapse.  Given this, you might be worried that these studies are looking under the proverbial lamppost for the proverbial keys: we'd like to observe the universe of all climate events and all collapse events over time, but instead we focus on a few iconic ones.  

A new paper in PNAS helps overcome some of these concerns. D'Anjou and coauthors use coprostanol concentrations (Wikipedia:  chemical compounds found in the fecal matter of higher-order mammals - i.e. poop) that they dug up in a Norwegian lake to estimate the variation in local human activity in the nearby area over the last 2000 or so years.  They then compare this to existing reconstructions of local summertime temperature, which is the time of year when agriculture would have been possible.  The nice thing about their paper is that they have a lot of observations of the same place over time, and so can run some of the basic statistical tests you often want to see in these papers (and can't).  The other nice thing is that poop appears to be a much better indicator of human activity than many of the proxies used in the past, which could have been directly affected by climate (e.g. charcoal from fires, which could have been manmade or could have risen naturally as temperatures changed).

Here is the money plot comparing poop and temperature (their Figure 5):

While there are a couple things you can still complain about -- e.g. you probably want to see Panel C as a plot of the time-detrended data -- this to me is one of the more convincing relationships that has shown up in these Paleo papers.  As in other studies looking at cold regions, they show that human activity responded strongly and positively to warmer temperatures: drops of ~4C caused total abandonment of (poop-related) human activity in the region.

While both the broader welfare effects and the modern implications of this and related studies are not immediately obvious (did people die or just migrate south? what do Iron Age societies' sensitivities to climate imply for modern societies?), the methodological difference between this and most past studies is to me a nice contribution.  And hopefully the grad students who had to dig up the poop got a PNAS paper out of the deal...

Tuesday, December 11, 2012

The summer of 2013

Last week at AGU I gave a talk about the lessons of the US corn harvest in 2011 and 2012, both of which were below trend line (see figure). That got me thinking a little more about what to look for in 2013. The obvious point is that it is likely to be better than 2012, because it can’t get much worse. But that’s not too insightful; it’s like saying that Cal’s football team will be better next year, since they were so bad this year. (By the way, welcome to Max Auffhammer, our newest blogger! With Wolfram’s move to Berkeley that brings our Cal contingent up to 3. I sure hope I don’t say anything to offend them.)

As we’ve talked about in other posts, the summer of 2012 might be considered the normal in a few decades, but not now. And some recent work from Justin Sheffield and colleagues in Nature argues that drought trends globally, and in North America, are not significantly positive if calculated properly (which contradicts some earlier work). We can leave aside for now the question of whether soil moisture trends are the best measure of drought exposure if one cares about corn yields (though a good topic for a future post), and simply say that conditions in 2012 were well below trend.

This means we’d expect next year to be closer to the trend, and that seems to be the overriding sentiment of markets. As Darrell Good over at farmdoc daily explains, “In the past five decades, extreme drought conditions in the U.S., like those experienced in 2012, have been followed by generally favorable growing conditions and yields near trend values.”

But two things work against this tendency to revert to the mean. First, the drought still persists throughout much of the country, as seen at UNL’s drought monitor site.  As Good goes on to say, “current dry soil moisture conditions in much of the U.S. and some recent forecasts that drought conditions could persist well into next year have raised concerns that such a rebound in yields may not occur in 2013.” In other words, if the Corn Belt does not get a wet winter and/or spring, expect prices to start climbing again.

Second, though, is that good initial moisture does not eliminate the chance of drought during the season. There’s an interesting piece by folks at the National Climatic Data Center (NCDC) in the AGU newsletter I got today (it was actually published Nov. 20, but it takes about 3 weeks for me to get it!). They note that 2012 was not like the previous droughts of the 1930s and 1950s, or even 1988, in that it was driven much more by high temperatures than by low starting moisture. As they say:
“For example, at the end of February in both 2011 and 2012 the national PDSI (calculated using the observed monthly mean temperature and precipitation averaged across the contiguous United States) was 1.2 (mildly wet) and –2.5 (moderate drought), respectively, compared to 1934 and 1954 of –5.7 and –4.6, respectively.”
This is also shown pretty effectively in an animation by Climate Central. So the high temperatures in recent years have made drought come on much more quickly than usual. As the NCDC piece says, “By the end of September, every month since June 2011 had above normal average temperatures, a record that is unprecedented.” That's 16 straight months of above-normal temperatures!

So my seat-of-the-pants guess is that next year’s yields are likely to still be below trend line (which would be at around 160 bushels/acre). Obviously lots of things could push it above trend line (including changing the definition of the trend!), and it’s way too early to have much confidence about how 2013 will end up. But following Sol’s lead on the Sandy damage prediction, I’ll go out on a limb (his mean was too low by a factor of two, but the true damage was within the confidence interval!). And to pair a risky bet with a safe one, I’ll also predict Stanford wins big at the Rose Bowl.

Sunday, December 9, 2012

Climate, food prices, social conflict and....Google Hangout?

My coauthor Kyle Meng was asked to participate in this HuffPost Live discussion about climate, food prices and civil conflict. It's an interesting discussion, which gets pretty rowdy at times, with an eclectic group. I am also very impressed by HP's leveraging of Google Hangout to produce a low-cost public, intellectual forum.

David has written about the food-price and conflict linkage before, and we've discussed the association between climate and conflict a few times here.  In general, I don't think a linkage has been demonstrated conclusively with data, but that doesn't seem to get in the way of people referencing it.

The debate is interesting and entertaining, highlighting a few of the differences in how some policy folks, economists and ecologists view these various ideas.

Kyle was asked to participate because he was an author of our 2011 Nature paper on ENSO and conflict.  He also happens to be on the job market right now.

Thursday, December 6, 2012

Climate data and projections at your fingertips

Do you ever get jealous of Wolfram's pretty graphs on this blog or just want to know what March rainfall will look like in New Zealand at midcentury -- but you just don't have the time or energy to sort through all the various climate data sets or learn how to use GIS software?

Lucky for you, the Nature Conservancy has teamed up with scientists at the University of Washington and the University of Southern Mississippi to develop ClimateWizard, a graphical user interface available through your browser that lets you surf real climate model projections and historical data for both the USA and the world. According to the website:
With ClimateWizard you can:
  • view historic temperature and rainfall maps for anywhere in the world
  • view state-of-the-art future predictions of temperature and rainfall around the world
  • view and download climate change maps in a few easy steps 
ClimateWizard enables technical and non-technical audiences alike to access leading climate change information and visualize the impacts anywhere on Earth.  The first generation of this web-based program allows the user to choose a state or country and both assess how climate has changed over time and to project what future changes are predicted to occur in a given area. ClimateWizard represents the first time ever the full range of climate history and impacts for a landscape have been brought together in a user-friendly format. 
The data sets underlying the pictures are well documented on the "about us" page, and the data in each map are easily exportable.

If this had come out four years ago, I probably could have shaved six months off of my PhD...

h/t Bob Kopp

Sunday, November 25, 2012

Apps or agriculture?

In one of the most emailed NYTimes articles from last week, it was reported that there are now more software engineers in the US (about a million) than there are farmers.  A follow-up article on the "Silicon Prairie" suggests that a growing percentage of these software jobs are, ahem, cropping up in farming strongholds like Iowa and Kansas.

The articles point out that while building iPhone apps nobody wants isn't particularly lucrative, building apps that people do want can be (e.g. the guys who thought to launch birds at pigs pulled in over $100 million in revenue in 2011).  And on the whole, getting a job as a software engineer appears to pay pretty well, at least judging by what private shuttle bus access to Silicon Valley has done to rents in the Mission neighborhood in SF where I live. 

This transformation from agriculture to apps is what economists call the "structural transformation" of economies, in which they metamorphose from poorer, rural, and agrarian societies to societies that are rich(er), more urbanized, and more devoted to industry and services.  You start out with everyone poor and farming, and you end with everyone wealthy and making apps for each other.  

Understanding why and how this transition occurs is a central (and old) question in development economics, and one that remains crucial for economic policy in the developing world.  A key debate is to what extent improvements in agricultural productivity help contribute to broader economic development.  That is, do poor countries undergo the structural transformation because they have improved the productivity of their (majority) rural population, or do they undergo it because they have focused their attention and investments on other, seemingly more modern sectors?

Agricultural development and economic growth are clearly related. Below are some plots for African countries (left) and all countries (right) over the period 1961-2008 using annual data from the World Bank, with overall GDP growth on the y-axis and agricultural growth on the x-axis.  On one level this relationship is mechanical:  in countries where agriculture makes up a large percentage of total output, it has to be the case that growth in agricultural productivity increases overall output and makes people better off.  But because agriculture is a relatively small share of total output in most countries outside of Africa, and the relationship is still relatively strong and highly significant using all countries in the world (right plot), there is probably something more interesting going on.  But which way does the causal arrow go?  Does an improving economy lead to agricultural growth, or is agriculture itself an important engine of that growth?
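The mechanical part of the relationship is just a share-weighted identity: overall growth is agriculture's share of output times agricultural growth, plus the rest of the economy's share times its growth. A tiny sketch with invented numbers:

```python
# Overall growth as a share-weighted average of sectoral growth (invented numbers)
s_ag = 0.30    # agriculture's share of GDP
g_ag = 0.05    # agricultural growth rate
g_non = 0.02   # growth rate of the rest of the economy
g_total = s_ag * g_ag + (1 - s_ag) * g_non
print(round(g_total, 3))   # 0.029: ag growth moves the total a lot when its share is big
```

When the agricultural share is 30%, a 5% agricultural growth rate mechanically adds 1.5 percentage points to overall growth; when the share is 3%, the same ag growth adds almost nothing, which is why the strong all-country relationship in the right plot is the more interesting fact.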

There are a few ways that agricultural growth could spur growth in other sectors.  First, a more productive agriculture could provide stuff that other sectors need to grow:  an agricultural surplus generates capital that can be invested in non-agricultural enterprises and frees up labor to work in them.  Second, increased agricultural productivity means better-off farmers which means increased demand for non-agricultural stuff:  rich(er) people spend a smaller and smaller percentage of their income on food, and have extra money to spend on a new widget or that Angry Birds app.  Agricultural growth could harm other sectors too, however, by raising their costs for land and labor:  increases in ag productivity will likely raise wages and land prices for everyone. Furthermore, if capital and labor are mobile, maybe non-agricultural sectors can get what they need from elsewhere.

A couple of recent papers try to shed some causal light on the relationship between agricultural development and broader economic performance.  The basic empirical approach is familiar:  find something that shifts around agricultural productivity in a plausibly random way, and see what this implies for economic performance outside agriculture.  Finding that magical "exogenous" shifter is where the cleverness comes in.

In a new working paper, Hornbeck and Keskin study what happens when changes in technology (better pumps) suddenly allowed farmers in parts of the US Great Plains to tap the Ogallala Aquifer for irrigation.  They show that access to the Ogallala greatly increased agricultural productivity in the counties where it could be accessed, and they then compare how the non-agricultural economy performed in these counties to its performance in nearby counties that were otherwise the same but that could not access the Aquifer.  They show that agricultural productivity gains do not seem to have helped the long-run performance of either industry or services in these counties.  

Hornbeck and Keskin conclude that "a large windfall gain in the agricultural sector does not appear to have encouraged broader economic development of the local non-agricultural economy", and that "public support of the agricultural sector does not appear to generate positive economic spillovers that might justify its distortionary impacts."  Citing other work by Hornbeck and co-authors, they note that the opening of large manufacturing plants in the US appeared to have much larger local spillovers.

Call this support for the "apps not agriculture" view:  if you want to improve outcomes, invest in something besides agriculture.  You hear this a lot.  Here, for instance, is econo-celebrity blogger Chris Blattman a couple weeks ago: "Helping poor women set up a market stall, or small farmers double their profits, is a good and noble goal.... But it is a humanitarian strategy, not a growth strategy. Real economic change will wait for industry."

So, apps not agriculture?  A paper by Nunn and Qian published last year in the QJE, a top econ journal, paints a very different picture.  They study what happened when European sailors brought potatoes back from the Americas to the "Old World" (i.e. Eastern Hemisphere) beginning in the 1500s and 1600s.  They show that relative to what was being grown in much of the Old World at the time, growing potatoes offered a remarkable bang for the buck in terms of both calories and vitamins.  Their empirical approach, similar to Hornbeck and Keskin, compares how outcomes evolved in Old World areas that were able to adopt the potato to how they evolved in nearby areas that were not suitable to potato cultivation.

They find that potato-related increases in agricultural productivity explain a remarkable 25-30% of the (dramatic) increase in population and urbanization that these areas experienced between 1700 and 1900.  That is, agricultural productivity improvements played a driving role in the ensuing structural transformation of these economies.  This finding suggests a very different policy prescription:  if you want to set the structural transformation in motion, you would do well to invest in agricultural productivity improvements. Call this the "from agriculture grows apps" view.

Which view is correct? And what explains the difference between the findings in these two papers? It seems to me that the Hornbeck and Keskin finding describes an economic setting (post WWII United States) that is very different from many rural agricultural settings of today's poor countries:  trade and capital markets in the US were/are fairly well integrated, which would limit local demand spillovers (just buy that widget you want off Amazon) and limit the need for capital to come from local sources (borrow from Chase instead of your buddy down the street).  

The setting in Nunn and Qian is perhaps more relevant for many of today's developing countries -- and in fact includes a lot of them -- but much of their data and anecdotes are from Europe and so the implications for, say, Africa are not automatic.  Some equally rigorous studies on modern developing countries are going to be key for understanding how we get from agriculture to apps  -- in particular, in understanding whether the insights from papers like Nunn and Qian hold for an increasingly globalized world.  As in the two papers above, this is going to require some serious cleverness in identifying exogenous shifters of longer-term agricultural productivity.

(Concluding sidenote:  much like March Madness, apps can have their own decidedly negative economic spillovers...)

Monday, November 12, 2012

Grow, Canada?

A quick post on an interesting article in Bloomberg last week about expansion of corn in Canada. I’ve been keeping an eye out for stories about possible adaptations in the wake of this summer’s poor corn harvest in the U.S. (or, as Stephen Colbert called it, our shucking disaster). As we’ve discussed before on this blog, having crops migrate northward is a commonly cited adaptation response. In addition to being encouraged by the warming trends, the northward migration of corn could be helped by new varieties that have shorter cycles and better cold hardiness.

It’s certainly interesting to see the expansion of corn in areas like Alberta and Manitoba, or for that matter in North Dakota. But too often media stories, or even scientific studies, present the changes as de facto proof of adaptation. The question is not really if crops will move around – they always have moved around in the past, and will continue doing so in the future. And I don’t even think the key question is whether climate change will be an important factor in them moving around – the evidence is still slim on this question but the Canada transition is clearly one made easier by the climate trends. Instead, the most important issue is how much is gained by these adaptive moves relative to the overall impacts of climate trends.

Sol has been analyzing this in some detail for different regions around the world, so he will hopefully have some more to say with numbers to back it up. But let me just point to two relevant questions that were each alluded to in the Bloomberg article. First, is the scale of expansion large relative to the main zones of production? In the case of Canadian corn, the article mentions about 120,000 Ha of corn area sown in three provinces of Canada (Manitoba, Saskatchewan, and Alberta). That is less than one-half of one percent of the U.S. corn area.

Second, what is being displaced by the crop expansion? In most cases, including Canada’s, corn is being grown by farmers that used to grow wheat or barley. So the gain in corn production is offset to a large degree by the loss in wheat production (although not completely, since corn typically produces more grain per hectare than wheat). Modeling studies of adaptation typically assume there is a net expansion of total cropland in cold areas, not just expansion of individual crops. Without net area expansion, it is hard to offset losses incurred at lower latitudes.

There is certainly an argument to be made that big expansions of net area will only be seen with more substantial shifts in climate, since only then will it pay to make the large capital investments to open up new areas for agriculture. But it’s also possible that the constraints on expanding into new areas (poor soils, lack of infrastructure, property rights, rules on foreign investment) are large enough that only a modest amount of expansion will happen.

The main point is that changes in crop area have to be judged not just by whether or not they happen, but by whether their impact is large enough to matter. For most readers of Bloomberg, “large enough to matter” may simply mean that it’s an opportunity to make a lot of money. But for those of us interested in global food supply and price dynamics, the scale of interest is much larger. A 25% drop in U.S. corn production leaves a big hole to fill – it’s not enough to drop a few shovels full of dirt in and call it a day. 

Friday, November 9, 2012

Climate and Conflict in East Africa

Andrew Revkin asked what I thought about this recent PNAS article:

John O’Loughlin, Frank D. W. Witmer, Andrew M. Linke, Arlene Laing, Andrew Gettelman, and Jimy Dudhia
Abstract: Recent studies concerning the possible relationship between climate trends and the risks of violent conflict have yielded contradictory results, partly because of choices of conflict measures and modeling design. In this study, we examine climate–conflict relationships using a geographically disaggregated approach. We consider the effects of climate change to be both local and national in character, and we use a conflict database that contains 16,359 individual geolocated violent events for East Africa from 1990 to 2009. Unlike previous studies that relied exclusively on political and economic controls, we analyze the many geographical factors that have been shown to be important in understanding the distribution and causes of violence while also considering yearly and country fixed effects. For our main climate indicators at gridded 1° resolution (∼100 km), wetter deviations from the precipitation norms decrease the risk of violence, whereas drier and normal periods show no effects. The relationship between temperature and conflict shows that much warmer than normal temperatures raise the risk of violence, whereas average and cooler temperatures have no effect. These precipitation and temperature effects are statistically significant but have modest influence in terms of predictive power in a model with political, economic, and physical geographic predictors. Large variations in the climate–conflict relationships are evident between the nine countries of the study region and across time periods.
Here is my full reply:

This is a useful paper, and the results are important, but the framing by the authors and the press coverage (which I suppose the authors guide) is strange.

The authors report that months with hotter and drier conditions have much more violence. And the size of this effect is *large*: rates of violence climb 29.6% when temperatures are 2 degrees higher than normal, and 30.3% when rainfall declines from wet to normal/dry (a 2 standard deviation change).  These numbers are really big! To get a sense of scale, shifting from a violent hot/dry anomaly to a more peaceful cool/wet anomaly decreases violence by more than 50%, which is similar to the reduction in crime that NYC felt during Rudy Giuliani's tenure! (see here). Whether or not you think Giuliani was responsible for that decline, it is widely recognized that a reduction in NYC violence on that scale had a large effect on the welfare of New Yorkers.

Now, while these effects are large, they are not new. Our 2011 Nature paper found almost the identical result.  And the 2009 PNAS paper by Burke et al. also reported effects of the same size. Burke et al. was at the national scale, and our paper was at the global scale, so I think that the main contribution of O'Loughlin et al. is to demonstrate that the findings of those two earlier papers continue to hold up at the local scale.

What I find surprising about the paper's presentation and the press coverage is that it looks like the authors are trying to bury this finding within the paper by suggesting that these effects are small or unimportant. I don't know why (but it certainly has the feel of Nils Petter Gleditsch's attempt to bury similar findings in his 2012 Special Issue of the Journal of Peace Research here).  Had I found these results, I would have put them front and center in the article rather than reporting them in the text on page 3.

The main way the authors try to downplay their findings is to argue that temperature and precip anomalies don't have a lot of predictive power compared to "other variables", but this is a red herring. The comparison the authors make is not apples-to-apples. The statistical model the team uses has two dimensions, time and space. In modeling violence, the team first tries to model *where* violence will occur, to get a location-specific baseline. Then, conditional on a location's baseline level of violence, they model *when* violence in a specific location is higher or lower than this baseline. Their main finding has to do with the "when" part of the question, showing that violence within a specific location fluctuates in time, reflecting temperature and precip anomalies. But then they go on to compare whether they can predict the timing or the location of violence better, which is not a useful exercise. They conclude that location variables like "population density" or "capital city" are much stronger predictors of violence than timing variables like "temperature" or "presidential election", but spatial variation and temporal variation in violence are completely different in magnitude and dynamics, so it is unclear what this comparison tells us. The authors argue that this shows the doubling of violence brought on by hot/dry conditions is only a "modest" effect, but that claim has no statistical foundation and doesn't jibe with common sense.

To see why, consider the NYC/Giuliani example. If we ran the O'Loughlin et al. model for New York State during 1980-2010 with variables describing "population density" and a trend (to capture the decline in the 1990s), we'd see that on average locations with higher population density have more crime (i.e. NYC has more crime than small towns in Upstate New York) and that violence fell by 50% in the 1990s. But if one used the same measures as O'Loughlin et al. to ask whether location or trend was "more important", we'd find that location is a much more "important" predictor of violence, because the difference between violence in NYC and the rest of the state is much more dramatic than the 50% decline within NYC during the 1990s. Using this logic, we would conclude that the 50% decline in the 1990s was "modest" or "not important," but anyone who's been living in NYC for the last couple of decades will say that conclusion is nuts. Halving violence is a big deal. One reason the statement doesn't make sense is that many rural locations have very low crime rates (in part because they don't have many people), so violence there could double, triple or increase by a factor of ten and those changes would be trivial (in terms of total violent events) compared to a 50% change in NYC. The fallacy of these kinds of apples-to-oranges comparisons (mixing "where" with "when" variables) is why using "goodness of fit" statistics to assess "importance" doesn't make sense when working with data that covers both time and space. An aside: this mistake also shows up as one of the critical errors made by Buhaug in his 2010 PNAS article, which got a lot of press and is widely misunderstood and misinterpreted.

So in short: The paper is important and useful because the authors confirm at the local scale the previously discovered large-scale results linking climatic events and violence. But their conclusion that these effects are "modest" is based on an incorrect interpretation of goodness-of-fit statistics.

In a follow-up email, I sent him a graph that he ended up posting. This is the full explanation:
Marshall Burke and I were looking at the data from this paper and put together this plot (attached).  It displays the main result from the paper, but more clearly than any of the figures or language in the paper does (we think).  Basically, relatively small changes in temperature lead to really large changes in violence, illustrating my earlier point that this study finds a very large effect of climate on violence in East Africa. 
About the plot:
The thin white line is the average response of violence to temperature fluctuations after rainfall, location, season and trend effects are all removed. The red "ink" indicates the probability that the true regression line is at a given location, with more ink reflecting a higher probability (more certainty).  There's a fixed amount of "ink" at each temperature value, it just appears lighter if the range of possible values is more spread out (less certain).  We estimate the amount of uncertainty in the regression by randomly resampling the data 500 times, re-estimating the regression each time and looking at how much the results change (so-called "bootstrapping"). This "watercolor regression" is a new way of displaying uncertainty in these kinds of results, I describe it in more detail here.  A similar and related plot showing the relationship between rape and temperature is here.
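For readers who want to see the mechanics, here is a minimal sketch of the bootstrap idea behind the watercolor regression. The data, the true slope of 0.3, and all variable names below are made-up illustrations of mine, not the paper's data: resample with replacement, refit the line each time, and look at how spread out the refitted slopes are.

```python
# Bootstrap sketch: the spread of re-estimated slopes is the "ink".
# Synthetic data only -- illustrative, not the paper's.
import random

random.seed(0)

n = 200
temp = [random.gauss(0, 1) for _ in range(n)]        # "temperature anomaly"
viol = [0.3 * t + random.gauss(0, 1) for t in temp]  # "violence", true slope 0.3

def ols_slope(x, y):
    """Closed-form OLS slope for a single regressor."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# 500 resamples, re-estimating the regression each time
slopes = []
for _ in range(500):
    idx = [random.randrange(n) for _ in range(n)]
    slopes.append(ols_slope([temp[i] for i in idx], [viol[i] for i in idx]))
slopes.sort()

lo, hi = slopes[12], slopes[487]  # roughly a 95% bootstrap interval
print(round(lo, 2), round(hi, 2))
```

In the watercolor plot, instead of collapsing the 500 slopes into a single interval, the full distribution of fitted lines at each temperature value is painted as ink density.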

Wednesday, November 7, 2012

An American, a Canadian and a physicist walk into a bar with a regression... why not to use log(temperature)

Many of us applied statisticians like to transform our data (prior to analysis) by taking the natural logarithm of variable values. This transformation is clever because it turns regression coefficients into elasticities, which are especially nice because they are unitless. In the regression

log(y) = b*log(x)

b represents the percentage change in y that is associated with a 1% change in x. But this transformation is not always a good idea.  

I frequently see papers that examine the effect of temperature (or control for it because they care about some other factor) and use log(temperature) as an independent variable.  This is a bad idea because a 1% change in temperature is an ambiguous value. 

Imagine an author estimates

log(Y) = b*log(temperature)

and obtains the estimate b = 1. The author reports that a 1% change in temperature leads to a 1% change in Y. I have seen this done many times.

Now an American reader wants to apply this estimate to some hypothetical scenario where the temperature changes from 75 Fahrenheit (F) to 80 F. She computes the change in the independent variable  D:

D_American = log(80) - log(75) = 0.065

and concludes that because temperature is changing 6.5%, then Y also changes 6.5% (since 0.065*b = 0.065*1 = 0.065).

But now imagine that a Canadian reader wants to do the same thing.  Canadians use the metric system, so they measure temperature in Celsius (C) rather than Fahrenheit. Because 80F = 26.67C and 75F = 23.89C, the Canadian computes

D_Canadian = log(26.67) - log(23.89) = 0.110

and concludes that Y increases 11%.

Finally, a physicist tries to compute the same change in Y, but physicists use Kelvin (K) and 80F = 299.82K and 75F = 297.04K, so she uses

D_Physicist = log(299.82) - log(297.04) = 0.009

and concludes that Y increases by a measly 0.9%.

What happened? Usually we like the log transformation because it makes units irrelevant. But here changes in units dramatically changed the prediction of this model, causing it to range from 0.9% to 11%!

The answer is that the log transformation is a bad idea when the value x = 0 is not anchored to a unique [physical] interpretation. When we change from Fahrenheit to Celsius to Kelvin, we change the meaning of "zero temperature" since 0 F does not equal 0 C which does not equal 0 K.  This causes a 1% change in F to not have the same meaning as a 1% change in C or K.   The log transformation is robust to a rescaling of units but not to a recentering of units.

For comparison, log(rainfall) is an okay measure to use as an independent variable, since zero rainfall is always the same, regardless of whether one uses inches, millimeters or Smoots to measure rainfall.
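The whole argument can be checked in a few lines of Python (a sketch; the 75F-to-80F scenario is the one above):

```python
# The three readers' computations, plus the rainfall contrast.
from math import log

f0, f1 = 75.0, 80.0                              # the American's Fahrenheit values
c0, c1 = (f0 - 32) * 5 / 9, (f1 - 32) * 5 / 9    # the Canadian's Celsius values
k0, k1 = c0 + 273.15, c1 + 273.15                # the physicist's Kelvin values

d_american = log(f1) - log(f0)    # 0.065
d_canadian = log(c1) - log(c0)    # 0.110
d_physicist = log(k1) - log(k0)   # 0.009

# Rainfall, by contrast, is ratio-scaled: converting units multiplies every
# value by the same constant, which cancels in the log difference.
d_inches = log(2.0) - log(1.0)            # 1 -> 2 inches of rain
d_mm = log(2 * 25.4) - log(1 * 25.4)      # the same change in millimeters

print(round(d_american, 3), round(d_canadian, 3), round(d_physicist, 3))
```

The temperature log-differences disagree because the conversions shift the zero point; the rainfall log-difference is identical in inches and millimeters because the conversion only rescales.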

Sunday, November 4, 2012

Too close to call?

All of the attention on the presidential election has brought up some issues that are familiar to those of us who work in the world of anticipating and preparing for climate change impacts. In particular, there's been a clear contrast in the election coverage between, on the one hand, a lot of media stories that describe the race as a "toss-up" or "too close to call" and, on the other hand, careful analysis of the actual data on polls in swing states that say the odds are overwhelmingly in favor of another term for President Obama. Nate Silver has become a nerd celebrity for his analysis and daily blog posts (his new book is also really good). But there are many others who come to similar or even stronger conclusions. Like Sam Wang at Princeton who has put Obama's chances at over 98%.

I think there are a few things going on here. One is that the popular media has basically no incentive to report anything but a very close race. It keeps readers checking back frequently, and campaigns may be more likely to spend money on advertising with media outlets if the narrative is of a very close race (although admittedly, they have so much money that the narrative may not make much difference). A more fundamental reason, though, is just a basic misunderstanding of probability. Not being able to entirely rule out something from happening (e.g., Romney winning) is not the same as saying it could easily happen. People mistake the possible for the probable. They want black and white, not shades of gray (at least not fewer than 50 shades of gray).

(Also in the news this week: Hurricane Sandy. Another case where people who understand probabilities, like Mayor Bloomberg, have little trouble seeing the link to global warming, while others continue the silly argument that if it was possible for such things to happen in the past, then global warming can't play a role. In their black and white world, things can either happen or they can't. There is no understanding of probability or risk. I call this the Rava view of the world, based on the episode of Seinfeld when Elaine tries to convince Rava that there are degrees of coincidence:

RAVA: Maybe you think we're in cahoots.
ELAINE: No, no.. but it is quite a coincidence.
RAVA: Yes, that's all, a coincidence!
ELAINE: A big coincidence.
RAVA: Not a big coincidence. A coincidence!
ELAINE: No, that's a big coincidence.
RAVA: That's what a coincidence is! There are no small coincidences and big coincidences!
ELAINE: No, there are degrees of coincidences.
RAVA: No, there are only coincidences! ..Ask anyone! (Enraged, she asks everyone in the elevator) Are there big coincidences and small coincidences, or just coincidences? (Silent) ..Well?! Well?!..)

Back to my point (you have a point!?): when we turn to climate impacts on agriculture, it's still quite common to hear people say that we just don't know what will happen. Usually this comes in some form of a "depends what happens to rainfall, and models aren't good with rainfall" type of argument. It's true that we do not know with complete certainty which direction climate change will push food production or hunger. But we do know a lot about the probabilities. Given what we know about how fast temperature extremes are increasing, and how sensitive crops are to these extremes, it's very probable in many cases, like U.S. corn, that impacts on crop yields will be negative. (For example, a few years back I tried with Claudia Tebaldi to estimate the probabilities that climate change would negatively impact global production of key crops by 2030. For maize, we put the odds at over 95%.) Even in cases where rainfall goes up, the negatives tend to predominate. It's also very likely that in some cases, like potatoes in England, impacts will be positive. In either case we cannot say anything with absolute certainty, but that doesn't mean we should describe impacts as "too close to call."

Us academics can probably learn a thing or two from how Nate Silver is trying to explain risk and probability in his daily posts. But it's also fair to say that our task is a little harder for a couple of reasons. First, there are lots of data on past polls and election results, which people can use to figure out empirically how accurate their methods would have been in past cases. With climate change, we are often talking about changes that have not been seen in the past, or at least not in enough cases to develop a large sample size for testing. A second and, in my view, more critical difference is that climate impacts happen on top of many other changes in society. Elections provide a clear outcome - a candidate wins or loses. But what does a climate impact look like? How do we know if our predictions are right or not? A lot of the entries in this blog are around that question, but the short answer is we can't directly measure impacts; we have to be clever in thinking of ways to pull them out of the data.

So maybe all of the attention to the election forecasts will help the public understand probabilities a little better. If nothing else, people should understand the difference between a 50% chance and an 80% chance of something happening. Reporting the latter as if it were the former is annoying in the context of the election, or as Paul Krugman says "Reporting that makes you stupid". But confusing the two in the case of climate impacts is more than annoying, it can lead to a lot more wishful thinking and a lot fewer smart investments than would otherwise be the case.

One final note: even when people are on board with the meaning of probabilities, it's still not so easy to get them right. Silver has the election at ~85% for Obama. That's high, but his implied chance of Romney winning is about 10 times higher than what Wang has. So just like with climate impacts, smart people can disagree, and it usually comes down to what they assume about model bias (Silver seems to admit a much higher chance that all the polls are wrong in the same direction). But even if smart analysts disagree, very few if any of them think the election results (or climate impacts) are a toss-up.

Monday, October 29, 2012

Probabilistic forecast of direct damage from Hurricane Sandy

These models are pretty preliminary, but Marshall and David convinced me to post this. I've been working with landfall statistics for only a couple of weeks, but I had enough data to put together a simple probabilistic forecast this morning for Sandy's direct damage (the number that will eventually appear on Wikipedia) based on landfall parameters (as they were forecast at around noon).  The distribution of outcomes is pretty wide, but the most likely outcome and expected loss are both at around $20B.  Below is the cumulative distribution function (left) and probability density function (right). 


It will probably take several weeks for official estimates to converge. If I'm anywhere near right, I'll be sure to remind you.  Rather than explaining and caveating, I'm posting now since the power-outage frontier is two blocks away (it's dark south of 24th Street).

Tuesday, October 23, 2012

Bad control

You want to know how X affects Y.  You're worried that some other factor Z might be correlated with both X and Y - i.e. that Z is a potential "confounder" or "omitted variable" - and so you are hesitant to explore the effect of X on Y without accounting for Z.  Imagine that you are also lucky enough to have some data on Z.  So when calculating the effect of X on Y, you "control" for Z - i.e. calculate the effect of X on Y holding Z constant.  

Often this approach makes a lot of sense, and it is intuitively appealing to throw a lot of control variables into your analysis to see if the effect of your main variable of interest (X) is "robust". People do this routinely, and paper referees almost always ask for it in some form.

But there is a particular case where throwing in a bunch of "control" variables might actually be a really bad idea:  when these variables are themselves outcomes of the X variable of interest.   That is, if X affects Y, and X also affects Z, then "controlling" for Z when you estimate the effect of X on Y is probably a mistake.  This type of mistake is generically termed "bad control", and it can lead to dramatic misinterpretations of coefficient estimates.  Unfortunately it's a mistake that gets made a lot. 

Sol, Ted Miguel, and I have been working on a review of the rapidly growing literature on climate and conflict, and the number of times bad controls are included is impressive. Consider the following stylized example:

You want to understand the effect of temperature on conflict.  You figure that temperature is not the only thing that affects conflict, and you're worried that temperature is also correlated with a lot of other stuff that might affect conflict - for instance, per capita GDP levels. So you regress conflict on temperature and GDP, and find that the effect of temperature is insignificant and the effect of GDP is large and significant.   What do you conclude?

A standard conclusion would be that the effect of temperature is "not robust", but in this case that conclusion is likely wrong. The reason is that temperature also affects economic productivity (see here and here), and so GDP is really an outcome variable. This means it doesn't make sense to "hold economic productivity constant" when exploring the relationship between temperature and conflict -- part (or potentially all) of temperature's effect on conflict is through income. At the extreme, if temperature affects conflict only through income, then controlling for income in a regression of conflict on temperature would lead you to draw exactly the wrong conclusion about the relationship between temperature and conflict: that there is no effect of temperature on conflict. (For those scoring at home with access to Stata who need to convince themselves, run the couple of lines of code below.)

The difficulty in this setting is that a growing body of research shows that climatic factors (and particularly temperature) also affect many of the other socioeconomic factors that often get thrown in as control variables - things like crop production, infant mortality, population (via migration or mortality), and even political regime type. To the extent that these show up as controls, studies might be drawing mistaken conclusions about the relationship between climate and conflict.

Studies can do two things to make sure their inferences are not being biased by bad controls. First, show us the reduced form relationship between X and Y without any controls. When X is "as good as randomly assigned" - as it typically is when X is a climate variable and the study is using variation in climate over time - then the reduced form relationship between X and Y tells us most of what we want to know. Second, if you just have to use control variables - or referees make you, as in our 2009 PNAS paper on conflict in Africa - then be clear about the relationship between X and the controls you want to include. Convince the reader that these controls are not themselves outcome variables and that controlling for them is not going to make your inference problem worse rather than better.

Finally, it's worth noting that not all is bad with bad controls: including them can sometimes be useful to illuminate the mechanism through which X affects Y. If X affects both Y and Z, but you're interested in whether X has an effect on Y through some channel other than Z, then "controlling" for Z in a regression of X on Y provides some insight into whether this is true. (Maccini and Yang have a nice example of this in their paper on rainfall and later life outcomes.) Continuing the example above, regressing conflict on temperature and income and finding that temperature still has a significant effect on conflict suggests that temperature's effects on conflict are not only through income. But, to reiterate, finding no effect of temperature in this regression does not tell you much at all, unless you can be sure that temperature does not also affect income.

[For a little more on bad controls, see Angrist and Pischke's nice discussion in Mostly Harmless Econometrics, pp. 64-68.]


* Code to demonstrate "bad control".
* Data generating process:
* income = -temperature + noise
* conflict = -income + noise

clear
set seed 123
set obs 1000

gen temp = rnormal()  //temperature variable
gen e_1 = rnormal()  //noise variable 1
gen e_2 = rnormal()  //noise variable 2, uncorrelated with e_1

gen income = -temp + e_1  //temp and income are negatively related - e.g. Dell et al 2012
gen conflict = -income + e_2  //income and conflict are negatively related - e.g. Miguel et al 2004

reg conflict temp
reg conflict income
reg conflict income temp  //coefficient on income is highly significant; coeff on temp is not, and the point estimate is close to zero
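For readers without Stata, here is a rough Python translation of the same simulation (my translation, not the original code; it hand-rolls OLS so nothing beyond the standard library is needed):

```python
# Same data generating process as the Stata code:
# income = -temperature + noise, conflict = -income + noise.
import random

random.seed(123)
n = 1000
temp = [random.gauss(0, 1) for _ in range(n)]
e1 = [random.gauss(0, 1) for _ in range(n)]
e2 = [random.gauss(0, 1) for _ in range(n)]
income = [-t + e for t, e in zip(temp, e1)]
conflict = [-m + e for m, e in zip(income, e2)]

def solve(a, b):
    """Gauss-Jordan elimination for a small linear system a x = b."""
    k = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(k):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][k] / m[i][i] for i in range(k)]

def ols(y, *xs):
    """OLS coefficients [intercept, b1, b2, ...] via the normal equations."""
    rows = [[1.0] + [x[i] for x in xs] for i in range(len(y))]
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(xtx, xty)

b_temp = ols(conflict, temp)[1]       # reduced form: slope near +1
b_bad = ols(conflict, income, temp)   # the "bad control" regression
print(round(b_temp, 2), round(b_bad[1], 2), round(b_bad[2], 2))
# income soaks up the effect; the temperature coefficient collapses toward zero
```

The reduced form recovers the full effect of temperature, while adding income as a control drives the temperature coefficient to roughly zero, exactly the mistaken "not robust" conclusion described above.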

Saturday, October 13, 2012

Global hunger: down but not out

A recent revision of the FAO's calculations on how many hungry people there are in the world has garnered some attention, not least because the FAO seems to have backed off their earlier headline- and funding-generating claim that high food prices and the global economic downturn had resulted in there being over 1 billion people hungry in the world.  The roundness and bigness of that number was certainly shocking and galvanizing, but what was perhaps more worrying at the time was the implication that earlier gains in reducing the number of hungry were being rapidly reversed - that hunger was "spiking" and that there was a serious crisis underway.

FAO's revised numbers, out in their annual State of Food Insecurity, tell a somewhat different story.  See the plot below, which is pieced together from the last three SOFI reports. The total number of hungry is now about 850 million - below a billion but still a debacle by any normal standard - but the updated numbers (shown in blue) now completely wipe out the highly-publicized food crisis spike of 2008-2010. Instead, it looks like there were more hungry people in the world in the 1990s, but that this has been more or less steadily improving ever since - with some leveling off in the last half-decade. The take home from these numbers:  we had a worse starting point, but much more progress since then and no big spike.

So what happened? Why the progress, and where'd the spike go? Calculating the number of hungry in the world is not an easy task. The way the FAO does it is to combine population estimates for a given country (which we know pretty well) with estimates of dietary requirements for people in that country (based on anthropometrics, which we know decently well) and with estimates of calorie availability. This last part is where things get tough. What the FAO does is (try to) use household survey data to get an estimate of the distribution of consumption within a country, and then, because these data are not available every year, use broader indicators of food availability (e.g. data on country-level production and trade) to shift this distribution around. In this technical note, they suggest that the revision had a little to do with better estimates of calorie distribution across households (which reduced estimates of the number of hungry in 2008-2010 by about 60 million), and a lot to do with better accounting for food losses and wastage (which increased the number of hungry in each period by about 125 million).
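To make that machinery concrete, here is a stylized sketch of an FAO-style prevalence calculation. Everything in it is an illustrative assumption of mine - the lognormal form, the 2,300 kcal mean, the 0.25 coefficient of variation, the 1,800 kcal cutoff - not FAO's actual parameters; the point is just that the head count is quite sensitive to the assumed mean availability, which is what the losses-and-wastage revision changes.

```python
# Stylized prevalence-of-undernourishment calculation: assume per-capita
# calorie consumption is lognormal, with the mean from food availability
# data and the spread from household surveys, then count the share of
# people below a dietary-energy requirement. Illustrative numbers only.
from math import erf, log, sqrt

def undernourished_share(mean_kcal, cv, cutoff_kcal):
    """Share of people below the cutoff if consumption is lognormal with
    the given mean and coefficient of variation across households."""
    sigma2 = log(1 + cv ** 2)            # lognormal variance parameter from the CV
    mu = log(mean_kcal) - sigma2 / 2     # chosen so the mean equals mean_kcal
    z = (log(cutoff_kcal) - mu) / sqrt(sigma2)
    return 0.5 * (1 + erf(z / sqrt(2)))  # lognormal CDF at the cutoff

share = undernourished_share(2300, 0.25, 1800)        # baseline availability
share_lossy = undernourished_share(2150, 0.25, 1800)  # after deducting losses/waste
print(round(share, 3), round(share_lossy, 3))
```

Deducting losses and waste lowers mean availability, which raises the computed share of hungry people even though nobody's actual consumption changed - consistent with the revision raising the hunger count in every period.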

This to me explains why the levels went up, but does not really explain where the spike went.  In the 2012 SOFI, the authors explain:  

"The methodology estimates chronic undernourishment based on habitual consumption of dietary energy and does not fully capture the effects of price spikes, which are typically short-term. As a result, the prevalence of undernourishment (PoU) indicator should not be used to draw definitive conclusions about the effects of price spikes or other short-term shocks. Second, and most importantly, the transmission of economic shocks to many developing countries was less pronounced than initially thought."

This seems a little weird, since basically the same methodology was used to show a huge hunger spike on account of the 2008 price rise. 

In any case, it is likely there was (and is) still a hunger spike. What of course you can't show on that plot is the counterfactual - what hunger numbers would have looked like had there been no economic downturn and food price increase. There is plenty of evidence from other sources, including good micro work by folks at the World Bank, that price spikes in 2008 and again since mid-2010 have pushed 50-100 million people below the $1.25 poverty line. Hunger is of course different than poverty, but they are closely related - and this makes the FAO revision again confusing, since the poverty evidence suggests that things were getting worse for a lot of people.

Good household survey data are a critical component of any adding up of the number of hungry, and if you had these surveys every year in a bunch of countries, you would know a whole lot more about how much people are eating and how much they are hurt by higher food prices. And there are many other (potentially much more clever) ways to use household expenditure data to get at hunger without adding up every single calorie consumed by the household.

The FAO seems to realize this.  In their technical note on the updated numbers, the FAO notes that: 

"If nationally representative surveys collecting reliable data on habitual food consumption were conducted every year and could be processed in a timely and consistent manner throughout the world, then a simple head-count method, based on the classification of individuals, could be used. Until then, a model based estimation procedure, such as FAO’s, is still needed."

What I don't understand is why the FAO is not already doing these surveys. Calculating the number of hungry people in the world (and its different regions) would seem like one of - if not the - most important tasks the FAO has on its annual to-do list, and something that might be worth throwing some money at.

FAO's annual budget is $1 billion USD (which as noted by this website equals the "cost of six days of cat and dog food in nine industrialized countries"). Let's say you wanted to do annual household surveys in 100 poor countries. A good rule of thumb for doing surveys in poor places is that it costs about $25 to survey one person, inclusive of all costs. So for $100k, you could survey 4,000 people, which is a decent-sized national survey. Doing these surveys annually in 100 poor countries would therefore cost $10 million, or 1% of the FAO budget. (Initial survey costs might be higher, but once you've paid the fixed costs of getting together a survey team, costs over time would go down.) And with modern electronic data collection methods, you could collect, aggregate, and analyze the data pretty quickly - which is not to say that doing surveys is easy, but this seems like a fixable problem. Furthermore, the much, much richer World Bank is already doing a bunch of these surveys - the LSMS - so presumably would be willing to go halvsies.
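The back-of-envelope arithmetic above, spelled out (all figures are the post's own rough assumptions):

```python
# Rough survey-budget arithmetic, restating the post's assumptions.
cost_per_person = 25        # dollars per respondent, all-in
people_per_survey = 4000    # a decent-sized national survey
countries = 100
fao_budget = 1_000_000_000  # roughly $1B per year

per_country = cost_per_person * people_per_survey  # dollars per country per year
total = per_country * countries                    # dollars per year, all countries
pct_of_budget = 100 * total / fao_budget           # share of the FAO budget
print(per_country, total, pct_of_budget)           # -> 100000 10000000 1.0
```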

Until they do so - and given the large differences in what the poverty numbers and the hunger numbers seem to say about the food crisis - it's not obvious that we're better off trusting the new estimates of the global number of hungry a whole lot more than the old ones. Either way, there are a whole lot of hungry people in the world, and high food prices do not appear to be doing them any favors.