Monday, December 21, 2015

From the archives: Friends don't let friends add constants before squaring

I was rooting around on my hard drive for a review article when I tripped over this old comment that Marshall, Ted, and I drafted a while back.

While working on our 2013 climate meta-analysis, we ran across an interesting article by Ole Theisen at PRIO, in which he coded up all sorts of violence at a highly local level in Kenya to investigate whether local climatic events, like rainfall and temperature anomalies, appeared to be affecting conflict. Theisen was estimating a model analogous to:

$$\text{conflict} = \beta_0 + \beta_1 T + \beta_2 T^2 + \gamma_1 R + \gamma_2 R^2 + \ldots$$

and reported finding no effect of either temperature or rainfall. I was looking through the replication code of the paper to check the structure of the fixed effects being used when I noticed something: the squared terms for temperature and rainfall were offset by a constant, so that the minimum of the squared terms did not occur at zero:

(Theisen was using standardized temperature and rainfall measures, so they were both centered at zero.) This offset was not apparent in the linear terms of these variables, which got us thinking about whether it matters. Often, when working with linear models, we get used to shifting variables around by a constant, usually out of convenience, and it doesn't matter much. But in non-linear models, adding a constant incorrectly can be dangerous.

After some scratching of pen on paper, we realized that the offset measure is

$$\tilde{T} = T + C$$

for the squared term in temperature (C is a constant), which when squared gives:

$$\tilde{T}^2 = T^2 + 2CT + C^2$$

Because this constant was not added to the linear terms in the model, the actual regression Theisen was running was:

$$y = \tilde{\beta}_0 + \tilde{\beta}_1 T + \tilde{\beta}_2 \tilde{T}^2 + \ldots = \underbrace{\left(\tilde{\beta}_0 + \tilde{\beta}_2 C^2\right)}_{\beta_0} + \underbrace{\left(\tilde{\beta}_1 + 2C\tilde{\beta}_2\right)}_{\beta_1} T + \underbrace{\tilde{\beta}_2}_{\beta_2} T^2 + \ldots$$

which can be converted back to the intended equation by computing linear combinations of the regression coefficients (as indicated by the underbraces). But directly interpreting the beta-tilde coefficients as the linear and squared effects is not right, except for beta-tilde_2, which is unchanged. Weird, huh? If you add a constant prior to squaring for only the measure that is squared, then the coefficient on that squared term is fine, but it messes up all the other coefficients in the model. This didn't seem intuitive to us, which is part of why we drafted up the note.

To check this theory, we swapped out the T-tilde-squared measures for the correct T-squared measures and re-estimated the model in Theisen's original analysis. As predicted, the squared coefficients don't change, but the linear effects do:

This matters substantively, since the linear effect of temperature had appeared to be insignificant in the original analysis, leading Theisen to conclude that Marshall and Ted might have drawn incorrect conclusions in their 2009 paper finding that temperature affected conflict in Africa. But simply removing the offending constant term revealed a large, positive, and significant linear effect of temperature in this new high-resolution data set, agreeing with the earlier work. It turns out that if you compute the correct linear combination of coefficients from Theisen's original regression (the stuff above the brace for beta_1 above), you actually recover the correct marginal effect of temperature (and it is significant).

The error was not at all obvious to us originally, and we guess that lots of folks make similar errors without realizing it. In particular, it's easy to show that a similar effect shows up if you estimate interaction effects incorrectly (after all, temperature-squared is just temperature interacted with itself).
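
The algebra is easy to verify in a quick simulation. This is a hypothetical sketch on made-up data (not Theisen's data or code): offsetting the squared regressor leaves its own coefficient untouched but badly distorts the linear coefficient, which can then be recovered as the linear combination beta-tilde_1 + 2C*beta-tilde_2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from an intended quadratic model:
#   y = b0 + b1*T + b2*T^2 + noise, with T standardized (centered at zero)
n = 10_000
T = rng.standard_normal(n)
b0, b1, b2 = 1.0, 0.5, -0.3
y = b0 + b1 * T + b2 * T**2 + 0.1 * rng.standard_normal(n)

def ols(regressors, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Correct specification: regress y on T and T^2
beta = ols([T, T**2], y)

# Mis-specified version: the squared term is offset by a constant C
C = 2.0
beta_tilde = ols([T, (T + C) ** 2], y)

# The coefficient on the squared term is unchanged...
assert np.allclose(beta[2], beta_tilde[2])
# ...but the "linear" coefficient is badly off: beta_tilde_1 = b1 - 2*C*b2
print(beta[1], beta_tilde[1])                 # ~0.5 vs ~1.7
# The intended linear effect is recoverable as a linear combination:
print(beta_tilde[1] + 2 * C * beta_tilde[2])  # ~0.5 again
```

The same exercise with an interacted pair of variables (rather than a variable times itself) produces the analogous distortion.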

Theisen's construction of this new data set is an important contribution, and when we emailed this point to him he was very gracious in acknowledging the mistake. This comment wasn't widely seen because when we submitted it to the journal that published the original article, we received an email back from the editor stating that the "Journal of Peace Research does not publish research notes or commentaries."

This holiday season, don't let your friends drink and drive or add constants the wrong way in nonlinear models.

Monday, December 14, 2015

The right way to overfit

As the Heisman Trophy voters showed again, it is super easy to overfit a model. Sure, the SEC is good at playing football. But that doesn’t mean that the best player in their league is *always* the best player in the country. This year I don’t think it was even close.

At the same time, there are still plenty of examples of overfitting in the scientific literature. Even as datasets become larger, overfitting remains easy to do, since models often have more parameters than they used to. Most responsible modelers are pretty careful about presenting out-of-sample errors, but even those can be misleading when cross-validation techniques are used to select models, as opposed to just estimating errors.
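
A toy illustration of why selection and evaluation shouldn't share the same data (this is my own made-up example, not from any particular paper): if you pick the "best" of many candidate predictors using the same sample you report performance on, pure noise can look like signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise setup: 200 candidate features, none actually related to y
n, p = 100, 200
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# "Select" the feature with the highest in-sample correlation
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
best = int(np.argmax(np.abs(r)))
print("selected feature, in-sample |r|:", abs(r[best]))  # looks impressive

# Honest check on fresh data: the "signal" evaporates
X2 = rng.standard_normal((n, p))
y2 = rng.standard_normal(n)
print("same feature, fresh-data |r|:", abs(np.corrcoef(X2[:, best], y2)[0, 1]))
```

The fix is to keep a held-out set (or an outer cross-validation loop) that the selection step never touches.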

Recently I saw a talk here by Trevor Hastie, a colleague at Stanford in statistics, which presented a technique he and Brad Efron have recently started using that seems more immune to overfitting. They call it spraygun, which doesn’t seem too intuitive a description to me. But who am I to question two of the giants of statistics.

Anyhow, a summary figure he presented is below. The x-axis shows the degree of model variance or overfitting, with high values on the left-hand side, and the y-axis shows the error on a test dataset. In this case they're trying to predict beer ratings from over 1M samples. (Statistics students will know beer has always played an important role in statistics, since the origin of the "t-test".) The light red dots show the out-of-sample error for a traditional lasso model fit to the full training data. The dark red dots show models fit to subsets of the data, which unsurprisingly tend to overfit sooner and have worse overall performance. But what's interesting is that the average of the predictions from these overfit models does nearly as well as the model fit to the full data, until the tuning parameter is turned up enough that the full model overfits. At that point, the average of the models fit to subsets of the data continues to perform well, with no notable increase in out-of-sample error. This means one doesn't have to be too careful about optimizing the calibration stage. Instead, just (over)fit a bunch of models and take the average.

This obviously relates to the superior performance of ensembles of process-based models, such as I discussed in a previous post about crop models. Even if individual models aren't very good, because they are overfit to their training data or for other reasons, the average model tends to be quite good. But in the world of empirical models, maybe we have also been too guilty of trying to find the ‘best’ model for a given application. This maybe makes sense if one is really interested in the coefficients of the model, for instance if you are obsessed with the question of causality. But often our interest in models, and even in identifying causality, is that we just want good out-of-sample prediction. And for causality, it is still possible to look at the distribution of parameter estimates across the individual models.

Hopefully for some future posts one of us can test this kind of approach on models we’ve discussed here in the past. For now, I just thought it was worth calling attention to. Chances are that when Trevor or Brad have a new technique, it’s worth paying attention to. Just like it’s worth paying attention to states west of Alabama if you want to see the best college football player in the country.

Monday, December 7, 2015

Warming makes people unhappy: evidence from a billion tweets (guest post by Patrick Baylis)

Everyone likes fresh air, sunshine, and pleasant temperatures. But how much do we like these things? And how much would we be willing to pay to gain more of them, or to prevent a decrease in the current amount that we get?

Clean air, sunny days, and moderate temperatures can all be thought of as environmental goods. If you're not an environmental economist, it may seem strange to think about different environmental conditions as "goods". But, if you believe that someone prefers more sunshine to less and would be willing to pay some cost for it, then a unit of sunshine really isn't conceptually much different from, say, a loaf of bread or a Playstation 4.

The tricky thing about environmental goods is that they're usually difficult to value. Most of them are what economists call nonmarket goods, meaning that we don't have an explicit market for them. So unlike a Playstation 4, I can't just go to the store and buy more sunshine or a nicer outdoor temperature (or maybe I can, but it's very, very expensive). This also makes it more challenging to study how much people value these goods. Still, there is a long tradition in economics of using various nonmarket valuation methods to study this kind of problem.

New data set: a billion tweets

Wednesday, December 2, 2015

Renewable energy is not as costly as some think

The other day Marshall and Sol took on Bjorn Lomborg for ignoring the benefits of curbing greenhouse gas emissions.  Indeed.  But Bjorn, among others, is also notorious for exaggerating costs.  The fact is that most serious estimates of the cost of reducing emissions are fairly low, and there is good reason to believe cost estimates are too high, for the simple reason that analysts cannot measure or imagine all the ways we might curb emissions.  Anything analysts cannot model translates into cost exaggeration.

Hawai`i is a good case in point.  Since moving to Hawai`i I've started digging into energy, in large part because the situation in Hawai`i is so interesting.  Here we make electricity mainly from oil, which is super expensive.  We are also rich in sun and wind.  Add these facts to Federal and state subsidies and it spells a remarkable energy revolution.  Actually, renewables are now cost effective even without subsidies.

In the video below Matthias Fripp, who I'm lucky to be working with now, explains how we can achieve 100% renewable energy by 2030 using current technology at a cost that is roughly comparable to our conventional coal and oil system. In all likelihood, with solar and battery costs continuing to fall, this goal could be achieved for a cost that's far less.  And all of this assumes zero subsidies.

One key ingredient:  We need to shift electric loads toward the supply of renewables, and we could probably do this with a combination of smart variable pricing and smart machines that could help us shift loads.  More electric cars could help, too.  I'm sure some could argue with some of the assumptions, but it's hard to see how this could be wildly unreasonable.

Monday, November 23, 2015

In cost-benefit calculation of climate action, Bjorn Lomborg forgets benefits

Sol and I had a letter to the editor in the Wall Street Journal today, responding to an earlier editorial by Bjorn Lomborg.  Below is what we wrote.  The best part is that I am "Prof. Marshall Burke" and Sol is "Solomon Hsiang, PhD" and we are both at Berkeley.  Apparently bylines are not the purview of the WSJ fact-checker (although Sol does have a PhD and is at Berkeley).


Bjorn Lomborg's "Gambling the World Economy on Climate" (op-ed, Nov. 17) argues that emissions reductions are bad investments because of their cost. But he never considers the value of the asset we are buying. Smart policy should carefully weigh the costs and benefits of possible actions and pursue those that yield the strongest return for society. Mr. Lomborg became famous advocating for this approach, but now he seems to forget his own lesson.
Our research shows that the climate is a valuable asset, and paying billions to prevent it from depreciating is a bargain. Our recent study published in Nature shows rising temperatures could cost 23% of global GDP by 2100 -- and that there is a 50-50 chance it could be worse. Mr. Lomborg rightly advocates for lifting up the world's poor, but we calculate that failing to address climate change will cost the poorest 40% of countries three-quarters of their income. By 2030 alone, we show that climate change could reduce annual global GDP by $5 trillion.
These are only the effects of temperature on productivity. Other impacts will add to the price tag. For example, we estimate avoiding intensification of tropical cyclones from climate change is worth about $10 trillion. And warming could increase conflict roughly 30% in 2050; what is that worth?
Mr. Lomborg says that $730 billion a year in 2030 is too much to pay to avoid many trillions in losses. This math is easy.
Prof. Marshall Burke
Solomon Hsiang, Ph.D.
University of California, Berkeley
Berkeley, Calif.

What we know about climate change, conflict, and terrorism

Ever since Bernie Sanders' remarks about climate change causing terrorism, a lot of folks have been asking about what we know on this and related issues. I worked with Marshall and Tamma Carleton to put together this short brief for those interested in knowing what we know quickly. (For those looking for a long answer, see here.)

Summary points:
  1. Research clearly demonstrates that hotter temperatures cause more individual level violence (e.g. homicides in the US) and more large-scale violence (e.g. civil wars in Africa), and that extreme rainfall leads to violence in agrarian contexts.
  2. Climate change to date, via warmer temperatures, has likely increased the risk of conflict, although this has not yet been empirically proven.
  3. Attributing the Syrian conflict to climate change is difficult.  What we can say is that drought and hot temperatures increase the likelihood of these types of conflict.
  4. There is currently little evidence for or against a systematic relationship between climate and terrorism. 

Risky Business and Financial Disclosure

Nature recently requested clarification on whether I should have issued a financial disclosure for my recent paper with Marshall and Ted for a grant originating with the Risky Business Project, based on an inquiry from a concerned reader (at least one is here). For transparency, in case there are other concerned readers out there, our reply is pasted below.

The Risky Business Project provided a 1-year grant to Hsiang that ended in the summer of 2014. The grant was to work on new methods for estimating the economic impact of climate change in the United States with a larger research team comprised of researchers from Rutgers University, Columbia University, Risk Management Solutions, and Rhodium Group. That work was released in the summer of 2014 as the American Climate Prospectus and then subject to peer review and publication as a book by Columbia University Press, "Economic Risks of Climate Change: An American Prospectus," in the summer of 2015. That research program was entirely independent of the diverse views of the members of the Risky Business Project, and none of those funds were used to support any of the work in our recent Nature publication.

At the time when Hsiang’s grant concluded, i.e. when the American Climate Prospectus was released in 2014, The Risky Business Project described itself as:

“The Risky Business Project is a joint partnership of Bloomberg Philanthropies, the Paulson Institute, and TomKat Charitable Trust. All three organizations provided substantive staff input to the Risky Business Project over the past 18 months, and supported the underlying independent research being released today. Additional support for this research was provided by the Skoll Global Threats Fund and the Rockefeller Family Fund. Staff support for the Risky Business Project is provided by Next Generation, an independent 501c3 organization.”

Thus there is no profit-driven element of the organization and Hsiang’s research funding ultimately came from philanthropic foundations, analogous to the Gates Foundation. Further, Hsiang has no financial interest in any of the organizations contributing to the Risky Business Project. For these reasons, Hsiang did not believe it was necessary or appropriate to disclose the grant as a financial conflict of interest at the time of publication. 

Nevertheless, due to this inquiry, Hsiang double checked this logic with members of the UC Berkeley administration to make sure this reasoning was consistent with the University’s view. He received a reply that confirmed this interpretation: 

“Based on the description of Risky Business that you provided, it would appear that it is a philanthropic sponsor of research and therefore would not benefit financially from the results of the research you published in Nature, nor do you have a financial interest in the affairs of Risky Business.  According to Nature's "competing financial interests" policy, authors are required "to declare to the editors any competing financial interests in relation to the work described."  As Risky Business can reap no financial benefit from your research and you have no financial interest in Risky Business, it does not seem to me that you have any obligation to disclose your previous Risky Business grant.  Indeed doing so would be misleading as it would erroneously suggest that you and  Risky Business have such interests.”

For all of these reasons, we do not feel that it is necessary to publish any correction.

Monday, November 9, 2015

El Niño and Global Inequality (guest post by Kyle Meng)

El Niño is here, and in a big way. Recent sea-surface temperatures in the tropical Pacific Ocean, our main indicator of El Niño intensity, are about as high as they were prior to the winter of 1997/98, our last major El Niño (and one of the biggest in recorded history). Going forward, median climate forecasts suggest this intensity will be sustained over the coming months, and the event could even end up stronger than the 1997/98 El Niño. This will have important global consequences over the next 12 months, not just for where food is produced but also for how it is traded around the planet.

First, a quick primer. The El Niño Southern Oscillation (ENSO) is a naturally occurring climatic phenomenon, arguably the most important driver of global annual climate variability. It is characterized by two extreme states: El Niño and La Niña. Warm water piles up along the western tropical Pacific during La Niña. During El Niño, the atmospheric and oceanographic forces that maintain this pool of warm water collapse, resulting in a large release of heat into the atmosphere that is propagated around the planet within a relatively short period. ENSO's impacts on local environmental conditions around the planet are known as its "teleconnections." To a rough first order, El Niño makes much of the tropics (from 30N to 30S latitude, shown in red in the figure below) hotter and drier, and the temperate regions (shown in blue in the figure below) cooler and wetter.

Countries where the majority of the population experience hotter conditions under El Niño are shown in red. Countries that get cooler under El Niño are shown in blue (reproduced from Hsiang and Meng, American Economic Review, May 2015).

There are two features of El Niño that have important implications for global food markets. First, El Niño creates winners and losers across the planet. Sol Hsiang and I have shown, in a paper published in the American Economic Review Papers and Proceedings, that between 1960 and 2010, country-level cereal output in the tropics dropped on average by 3.5% for every degree increase in the winter ENSO index. For a large event like 1997/98, or the one anticipated this winter, we estimate a 7% decrease in cereal output across the tropics. Conversely, the relatively more favorable environmental conditions experienced by temperate countries during the same year result in a 5% increase in cereal output. Interestingly, if you sum up the gains and losses across the world, you end up with a positive number: El Niño actually increases global cereal output.

Second, El Niño impacts are highly spatially correlated, organizing winners and losers into roughly two spatially contiguous blocks across the planet: temperate and tropical countries (which are also mostly just countries south of 30N, because there are few countries south of 30S). This means that under El Niño, countries suffering crop failures deep in the tropics are surrounded by neighbors likely experiencing similar food shortages at the same time. Why is the spatial scale of El Niño impacts important? Basic economics tells us that the primary driver of international trade is productivity differences across countries. When El Niño occurs, tropical neighbors that normally engage in bilateral trade experience similar crop losses and thus may be less likely to trade with each other. To find an exporter that experiences bumper yields under El Niño, tropical countries have to source food from temperate countries (i.e., North America, Europe, North Asia) that are much further away, and for which the cost of trade is higher. The predicted result is that El Niño has two effects on countries in the tropics: it causes direct crop losses, and it limits the ability of imports to offset those losses.

Is this happening? In ongoing work with Sol and Jonathan Dingel, we detect exactly these trade effects. From 1960 to 2010, when an El Niño occurs, cereal output falls in the tropics, with some extra imports arriving. However, these imports do not offset all losses, so countries deep in the tropics experience large spikes in food prices. Stay tuned for that paper.

We think this is important beyond food prices during El Niño. In an article published in Nature in 2011, Sol and I, together with Mark Cane, detected that the likelihood of civil wars breaking out in the tropics doubles during strong El Niño years relative to strong La Niña years. Many have asked us about the mechanisms behind this large effect. We now think that direct crop losses together with the limits of trade during El Niño are important parts of the story.

What can be done? Sol and I recently wrote an op-ed in the Guardian on El Niño and its impact on global inequality, with some policy prescriptions. In the short term, we argue that aid agencies, peacekeeping groups, refugee organizations, and other international institutions should be prepared to send food to the tropics as local conditions deteriorate. In the long term, we argue that investments should be made to better integrate global food markets and to improve access to financial instruments such as crop insurance.

Finally, the spatial nature of El Niño events has similarities with that of anthropogenic climate change, which we know from Marshall, Sol, and Ted's work is expected to generate winners and losers across the planet. As such, adaptation to climate change will involve not just local investments but also global efforts to improve how markets redistribute the unequal effects of climate change.

This is a guest post by Kyle Meng, an Assistant Professor at UC Santa Barbara.

Monday, October 26, 2015

Climate change and the global economy

[image lifted from PBS press coverage]

Sol, Ted Miguel, and I have a new paper out in Nature that looks at how past and future temperature changes might affect global economic output.  In particular, we study the historical relationship between temperature and country-level output with an eye for potential non-linearities in the macro data (which have cropped up everywhere in the micro literature).  We then combine the historical results with global climate model estimates of future warming to come up with some projections of the potential future impacts of warming.  We wrap up by trying to compare our damage estimates to the damage functions currently in the Integrated Assessment Models (IAMs).

We get some big numbers.  Looking historically, we see that output in rich and poor countries alike has been shaped by changes in temperature, and that temperature appears to affect the growth rate of per capita GDP and not just the level of GDP (which matters a whole lot when you do the projections).  Importantly, we don't see big differences between rich and poor countries in how they have responded to changes in temperature historically.  The differences we do see across countries appear driven more by countries' average temperatures than by their average incomes, with cooler countries growing faster on average during years that are warmer than average for them, and hotter countries growing slower.

This non-linear response peaks at an annual average temperature of around 13C, which just so happens to be the annual average temperature of both Palo Alto and New York City.  (For the naysayers: there is nothing mechanical in this fact; we can drop the US from the country-level regressions and we get the same optimum of 13C.)   The effect of temperature on growth rates is pretty flat around this peak, but gets pretty steep as you move away from the optimum in either direction.  We find that for really hot or really cold countries, +1C changes in annual temperature have historically moved growth rates up (for cold countries) or down (for hot countries) by about a percentage point -- i.e., a hot country goes from growing at 2% per year to 1% per year.  That is a big number.  And for poor countries, it's basically what was shown in the seminal Dell, Jones, and Olken piece from a few years ago.  The big difference is that we see a lot more action in the richer, cooler countries than DJO found in their earlier paper -- a difference we spend a lot of time exploring in the supplement to our paper.

We then run the world forward under RCP8.5, a business-as-usual emissions scenario that makes the world pretty hot by the end of the century (+4.3C is the population-weighted temperature increase under RCP8.5 by 2100 that we pull off the models).  Doing this requires three pieces of data, as we describe here [after clicking on a country, scroll down to the "how do we arrive at these numbers" section].  Cool high-latitude countries warm up a bunch (much more than the global average) and so could stand to benefit substantially from climate change -- recall the +1%/C marginal effect from above.  Countries at or beyond the optimum are harmed, and increasingly so as the temperature rises.  Globally, we find that under RCP8.5, the global economy could be more than 20% smaller by 2100 than it would have been had temperatures remained fixed at today's values.  This does not mean that the world will be poorer in 2100.  It almost certainly will not be.  It means that it will be less rich than it would have been had temperatures not warmed.
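
To see why growth-rate effects produce such big numbers by 2100, here is a purely illustrative back-of-the-envelope calculation. Every number in it (the quadratic response, the peak at 13C, the 20C baseline, the warming path) is made up for illustration and is not the paper's estimated damage function; the point is only that small annual growth penalties compound.

```python
# Illustrative only: a made-up quadratic growth response peaking at 13C
def growth(temp, base=0.02, k=0.0005):
    """Hypothetical annual GDP growth rate as a function of temperature (C)."""
    return base - k * (temp - 13.0) ** 2

gdp_fixed = gdp_warm = 1.0
temp_today = 20.0                        # a hot country
for year in range(2015, 2100):
    warming = 4.3 * (year - 2015) / 85   # linear ramp toward +4.3C by 2100
    gdp_fixed *= 1 + growth(temp_today)            # temperatures held fixed
    gdp_warm *= 1 + growth(temp_today + warming)   # business-as-usual warming

print(f"2100 output relative to no-warming baseline: {gdp_warm / gdp_fixed:.2f}")
```

With these made-up numbers the hot country ends the century at roughly a fifth of its no-warming counterfactual output -- not because any single year's growth penalty is large, but because the penalty compounds for 85 years. A level effect of the same annual size would never accumulate this way.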

Importantly, our estimates mechanically do not include the potential future effects of stuff we have not observed historically (e.g. sea level rise), nor do they contain non-marketed things we care about that do not show up in GDP (e.g. polar bears).

We built a little website to take people through the paper and let people play with the country-level results.  We've posted all our data and replication code, and would love people to show us where we went wrong.

In the spirit of earlier blog posts, we again want to use this space to respond to some of the early comments and criticism we've gotten on the paper.  This accompanies a related attempt to answer some "frequently asked questions" about our paper that we got from the press and earlier inquiries.  We imagine that we will be updating this blog post as additional criticism rolls in.   But here are some of the "Frequently Heard Criticisms" (FHCs) so far, and some responses:

1.  These results just don't pass the "sniff test" [Alternate version:  Your impacts are too big, and they just can't be true].  As far as I can tell, "this doesn't pass the sniff test" is just a snarky way of saying, "this disagrees strongly with what I thought I knew about the world, and I am uninterested in updating that view".

For those still deciding whether or not to update their views, it seems worthwhile to explicitly lay out the key assumptions in our projection exercise, and then be explicit about why these might or might not be good assumptions.

  1. Assumptions about how much future climate might change.  Our only assumption here is that the CMIP5 ensemble is in the ballpark for RCP8.5.  But that doesn't really matter that much either, since we can calculate impacts under any amount of warming you want (see Figure 5d).  But people seem less worried about this one anyway.
  2. Assumptions about secular trends in growth.  Again, we pull these from the SSPs and so are just passing the assumption buck, so to speak, onto what the folks who put together the SSPs assume about how countries are going to perform in the future.  But these assumptions don't end up mattering too much for our main headline number (i.e., the impact on global GDP relative to a world without climate change), because that's a relative number.  For comparisons between a high-baseline-growth scenario (SSP5) and a low-growth scenario (SSP3), see Extended Data Table 3.
  3. The assumption that historical responses will be a good guide for understanding future responses.  This is a key one for most folks, and you can see at least two reasons to be uneasy with this assumption.  First, (3a) that our historical estimates are derived from year-to-year variation in temperature, which is potentially hard for agents to anticipate and respond to, whereas future climate change will be a slower-moving, more-predictable, more-anticipate-able shift.  Second, (3b) that there is no way that economies 50 or 85 years from now will look like today's economies, and we can't reasonably expect them to respond similarly. 
We return to (3a) below.  Claim (3b) you hear a lot, and on some level it has to be true:  we have no idea what economies are going to look like in 2100 (I'm still trying to figure out what Snapchat is...).  However, what we do have is the experiment of the last 50 years, in which we can look at countries at very different points in the development process and study how both the really advanced ones and the really poor ones respond to environmental change.  And from our vantage point, the news is just not that good:  sensitivity to temperature fluctuations has not changed over time (Fig 2c in the paper, reproduced below), and rich countries appear only marginally less sensitive -- if at all -- than poor countries (Fig 2b and like half of the supplement).  This latter result, as we highlight in the paper, is very consistent with a crap-ton (technical term) of micro-level studies from rich countries -- e.g., see Sol and Tatyana's nice paper on the US.  Even incredibly technologically advanced countries, and advanced sectors within those countries, are hurt by higher temperatures.  I just don't see how you can look at these data and be sanguine about our ability to adapt. 
Effect of annual average temperature on economic production. a, Global non-linear relationship between annual average temperature and change in log gross domestic product (GDP) per capita (thick black line, relative to optimum) during 1960–2010 with 90% confidence interval (blue, clustered by country, N= 6,584). Model includes country fixed effects, flexible trends, and precipitation controls. Vertical lines indicate average temperature for selected countries. Histograms show global distribution of temperature exposure (red), population (grey), and income (black). b, Comparing rich (above median, red) and poor (below median, blue) countries. Blue shaded region is 90% confidence interval for poor countries. Histograms show distribution of country–year observations. c, Same as b but for early (1960– 1989) and late (1990–2010) subsamples (all countries). d, Same as b but for agricultural income. e, Same as b but for non-agricultural income.
So, yes, the future world might look different than the current world.  But saying that is a cop-out, unless you can tell a convincing story as to exactly why the future is going to look so different than the past.  Our guess is that you are going to have a hard time telling that story with an appeal to the historical record. 

2. You're studying weather, not climate.  An old saw.  In fact, at every single conference related to the economics of climate change that I have ever been to, if 5 minutes passed without someone mentioning weather-versus-climate, then with probability 1 someone else would smirkingly mention that it had been at least 5 min since someone mentioned weather-versus-climate.

Joking aside, this is still a really important concern.  The worry, already stated above, is that our historical estimates are derived from year-to-year variation in temperature ("weather"), which is potentially hard for agents to anticipate and respond to, whereas "climate" change will be a slower-moving, more-predictable, more-anticipate-able shift.  Whether people respond differently to short- versus longer-run changes in temperature is an empirical question, and one that is often tricky to get a handle on in the data.  Kyle Emerick and I have looked at this some in US agriculture, and we find that responses to slow-moving, multi-decadal changes in temperature don't look very different from responses to "weather" (see earlier blog here) -- hot temperatures are bad whether they show up unexpectedly in one year or whether you're exposed to them a little bit more year after year.  Now whether this result in US agriculture extrapolates to aggregate country-level output in the US or anywhere else is unknown, and to us a key area for future work.  But again, just claiming that responses derived from studying "weather" are a bad guide to understanding "climate" is not that satisfactory.  Show us how long-run responses are going to be different.
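The weather-versus-climate distinction above is easy to illustrate with a toy simulation (hypothetical numbers, not the US agriculture data): the "weather" estimator uses within-unit year-to-year temperature variation, while a "climate" (long-difference) estimator compares decadal averages at the start and end of the sample. When agents don't respond differently to slow changes, both recover the same effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic unit-year panel (illustration only). By construction, the true
# effect of temperature on the outcome is the same whether temperature
# shifts for one year or shifts slowly over decades.
n_units, n_years, beta_true = 500, 50, -0.05
unit_fe = rng.normal(0.0, 1.0, n_units)               # unit fixed effects
base_temp = rng.normal(20.0, 3.0, n_units)            # average climates
warming = rng.normal(0.02, 0.01, n_units)             # slow per-unit warming
temp = (base_temp[:, None]
        + warming[:, None] * np.arange(n_years)       # gradual "climate" shift
        + rng.normal(0.0, 1.5, (n_units, n_years)))   # year-to-year "weather"
y = unit_fe[:, None] + beta_true * temp + rng.normal(0.0, 0.3, (n_units, n_years))

def ols_slope(x, z):
    """Slope from a bivariate regression of z on x (with an intercept)."""
    x, z = x - x.mean(), z - z.mean()
    return float((x * z).sum() / (x * x).sum())

# 1) "Weather" estimator: within-unit demeaning (fixed effects), so the
#    identifying variation is year-to-year fluctuation.
beta_weather = ols_slope((temp - temp.mean(1, keepdims=True)).ravel(),
                         (y - y.mean(1, keepdims=True)).ravel())

# 2) "Climate" (long-difference) estimator: change in decadal averages
#    between the first and last ten years of the sample.
d_temp = temp[:, -10:].mean(1) - temp[:, :10].mean(1)
d_y = y[:, -10:].mean(1) - y[:, :10].mean(1)
beta_longdiff = ols_slope(d_temp, d_y)

print(f"weather: {beta_weather:.4f}  long-difference: {beta_longdiff:.4f}")
```

Of course, the empirical question is precisely whether the two estimators *diverge* in real data -- in the simulation they agree by construction; in the US agriculture work they happen to agree in the data too.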

3.  All you're picking up is spurious time trends.  This one is annoying.  Please read the paper carefully, and please look at Extended Data Table 1 in the paper (conveniently packaged with the main pdf, so there is no excuse!).  Countries have been getting richer over time, on average, and the world has also been warming over time, on average.  But since all sorts of other crap has been trending over time as well, it's clearly going to be hard to correctly identify the impact of temperature on economic output just by studying trends over time.  So you have to deal with trends somehow.

So we try all sorts of combinations of year- or continent-year fixed effects, and/or linear or non-linear country time trends, to see how things hold up under different approaches to taking out both common shocks (the year FE) and trending stuff.  If you're worried about "dynamic effects", we also control in some specifications for multiple lags of the dependent variable.  It doesn't end up mattering too much.  As shown in Extended Data Table 1, we still get a similar-looking non-linear response no matter the model.  [And, to be clear for the time series folks, our LHS is differences in log income, not log income.]  If you still think we messed this up, then download our data and show us.  The onus is on you at this point, and just making claims about the potential for spurious trends does a disservice to the debate.
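To see why country trends do the work here, a minimal simulated sketch (made-up numbers; this is not our actual specification or data): growth has a hill-shaped response to temperature, but countries also have spurious income trends that correlate with their warming trends. Without country-specific time trends the quadratic coefficients are biased; adding them recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic country-year panel (illustration only).
n_c, n_t = 50, 50
t = np.arange(n_t)
base = rng.uniform(5.0, 28.0, n_c)                  # country climates
warming = rng.uniform(0.0, 0.04, n_c)               # per-country warming rate
temp = base[:, None] + warming[:, None] * t + rng.normal(0.0, 1.0, (n_c, n_t))

b1, b2 = 0.0127, -0.0005                            # implies an optimum near 13C
spurious = (0.5 * warming)[:, None] * t             # trend correlated with warming
growth = b1 * temp + b2 * temp**2 + spurious + rng.normal(0.0, 0.02, (n_c, n_t))

def fit(country_trends):
    """OLS of growth on temp, temp^2, country FE, year FE, and (optionally)
    country-specific linear time trends. Returns (b1_hat, b2_hat)."""
    country = np.repeat(np.eye(n_c), n_t, axis=0)   # country dummies
    year = np.tile(np.eye(n_t), (n_c, 1))[:, 1:]    # year dummies (drop one)
    X = np.column_stack([temp.ravel(), (temp**2).ravel(), country, year])
    if country_trends:
        trends = country * np.tile(t, n_c)[:, None]  # country_i x time
        X = np.column_stack([X, trends[:, 1:]])      # drop one for collinearity
    beta, *_ = np.linalg.lstsq(X, growth.ravel(), rcond=None)
    return beta[0], beta[1]

naive = fit(country_trends=False)
robust = fit(country_trends=True)
print("true:", (b1, b2))
print("no country trends:", naive)     # picks up the spurious trends
print("with country trends:", robust)  # recovers the quadratic response
```

The point of trying many versions of the trend controls, as in Extended Data Table 1, is that the estimated response barely moves across them -- which is what you'd expect if the identifying variation really is the year-to-year temperature shocks and not the trends.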

4. Who funded you?  The fact that I'm now getting these unsigned emails from anonymous gmail addresses I think means we did something right.   I am mainly funded by Stanford University, which pays my salary.  We had some project support from a $50k grant from the Stanford Institute for Innovation in Developing Economies.  And I thank the Stanford Institute on Economic Policy Research for giving me a place to sit last year while this paper got written.  Sol and Ted both work at Berkeley, so are paid by that fine institution, but neither received additional project support for this work from anyone.  [Edit 11/23/2015: Sol adds more on his situation here].

5. [Added Oct 27].  You do not account for the effects of development.  Or, verbatim from Richard Tol, "Although Burke and co notice that poorer countries are more vulnerable to climate change, they did not think to adjust their future projections for future development".  Richard, please please read the paper before blogging this sort of stuff.  This is EXACTLY what we do in Figure 5 panels b and d.  "Differentiated response" means that rich and poor countries are allowed to respond differently, per Figure 2b, and that poor countries "graduate" to the rich-country response function in the future if their income rises above the historical median income.  This does reduce the global projected impacts from -23% to -15% (see Fig 5b and Extended Data table 3) -- a meaningful difference, to be sure.  But -15% is still about a factor of 5 larger than what's in any IAM.  
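In pseudocode terms, the "graduation" logic is simple. The response functions below are hypothetical stand-ins (the actual curves are estimated in the paper); only the switching rule mirrors the description above.

```python
# Hypothetical response functions: quadratic in temperature, with the
# rich-country curve flatter than the poor-country curve. The coefficients
# are made up for illustration; only the "graduation" rule is the point.
def poor_response(temp_c):
    return -0.002 * (temp_c - 13.0) ** 2

def rich_response(temp_c):
    return -0.0005 * (temp_c - 13.0) ** 2

def differentiated_effect(temp_c, income, median_income):
    """A country responds like a poor country until its projected income
    passes the historical median, after which it 'graduates' to the
    rich-country response function."""
    if income > median_income:
        return rich_response(temp_c)
    return poor_response(temp_c)

# A hot country that crosses the median income mid-century: the damage
# from the same temperature shrinks once it graduates.
print(differentiated_effect(28.0, income=3_000, median_income=5_000))  # poor curve
print(differentiated_effect(28.0, income=8_000, median_income=5_000))  # rich curve
```

Applying this rule country-by-country along the projected income paths is what shrinks the global impact estimate from -23% to -15% -- the adjustment is in there, it just doesn't close the gap with the IAMs.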

Wednesday, October 21, 2015

Paul Krugman on Food Economics

Paul Krugman doesn't typically write about food, so I was a little surprised to see this.  Still, I think he got most things right, at least by my way of thinking.  Among the interesting things he discussed:

1. The importance of behavioral economics in healthy food choices
2. That it's hard to know how many actual farmers are out there, but it's a very small number.
3. That we could clean up farming a lot by pricing externalities [also see], or out-right banning of the most heinous practices, but that doesn't mean we're going to go back to the small farms of the pre-industrial era, or anything close to it.
4. Food labels probably don't do all that we might like them to do (see point 1.)
5. How food issues seem to align with Red/Blue politics just a little too much

There's enough to offend and ingratiate most everyone's preconceived ideas in some small way, but it's mostly on the mark, I think.