Monday, December 4, 2017

New Damage Functions for the Agricultural Sector – Guest Post by Fran Moore

Last week I had a new paper come out with some fantastic co-authors (Delavane Diaz, Tom Hertel, and Uris Baldos) on new damage functions in the agricultural sector. Since this covers a number of topics of interest to G-FEED readers (climate damages, agricultural impacts, SCC etc), I thought I’d dig a bit into what I see as some of the main contributions.
Firstly, what we do. Essentially this is an exercise in trying to link a damage function directly to the scientific consensus on climate impacts as represented by findings in the IPCC. We therefore rely on a database of over 1000 results on the yield response to temperature previously published by Challinor et al. and used to support conclusions in the food security chapter of Working Group 2. We do a bit of reanalysis (using a multivariate regression, adding soybeans, allowing for more geographic heterogeneity in response) to get yield-temperature response functions that can be extrapolated to a global grid to give productivity shocks at 1, 2, and 3 degrees of warming (available for other researchers to use here).
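To give a flavor of that extrapolation step, here is a minimal sketch in Python. Everything in it is hypothetical: the coefficients and the warming grid are invented placeholders, not our estimates (those are in the linked dataset).

```python
# A minimal sketch (not the paper's code) of extrapolating an estimated
# yield-temperature response across a grid of local warming to get
# productivity shocks. Coefficients and the grid are hypothetical.
import numpy as np

def yield_response(warming_c, beta1=-5.0, beta2=-0.8):
    """Percent yield change for local warming in deg C.
    beta1 and beta2 are placeholder coefficients, not the paper's estimates."""
    return beta1 * warming_c + beta2 * warming_c ** 2

# Hypothetical grid of local warming consistent with +2C global mean warming
rng = np.random.default_rng(0)
local_warming = rng.normal(loc=2.4, scale=0.5, size=(4, 4))

productivity_shock = yield_response(local_warming)  # percent change by grid cell
print(productivity_shock.round(1))
```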
As readers of G-FEED well know though, getting productivity shocks is only half the battle because what we really want are welfare changes. Therefore, we introduce our productivity shocks into the 140-region GTAP CGE model, which has a particularly rich representation of the agriculture sector and international trade. This then gives us welfare changes due to agricultural impacts at different levels of warming, which is what we need for an IAM damage function. Finally, we take our findings all the way through to the SCC by replacing the agricultural damages from FUND with our new estimates. Our headline result is that improving damages just in the agricultural sector leads the overall SCC to more than double.
There’s lots more fun stuff in the paper and supplementary information including a comparison with results from AgMIP, some interesting findings around adaptation effectiveness, and a sensitivity analysis of GTAP parameters. But here I want to highlight what I see as three main contributions.
Firstly I think a major contribution is to the literature on improving the scientific validity of damages underlying IAM results. The importance of improving the empirical basis of damage functions has been pointed out by numerous people. This is something Delavane and I have worked on previously using empirically-estimated growth-rate damages in DICE, and something that Sol and co-authors have done a lot of work on. I think what’s new here is using the existing biophysical impacts literature and tracing its implications all the way through to the SCC. There is an awful lot of scientific work out there on the effects of climate change relevant to the SCC, but wrangling the results from a large number of dispersed studies into a form that can be used to support a global damage function is not straightforward (something Delavane and I discuss in a recent review paper). In this sense the agriculture sector is probably one of the more straightforward to tackle – we definitely relied on previous work from the lead authors of the IPCC chapter and from the AgMIP team. I do think this paper provides one template for implementing the recommendation of the National Academy of Sciences SCC report around damage functions – that they be based on current and peer-reviewed science, that they report and quantify uncertainty wherever possible, and that calibrations are transparent and well-documented.
Secondly, an important contribution of this paper is on quantifying the importance of general-equilibrium effects in determining welfare changes. Since so much climate impacts work estimates local productivity changes, it raises the question of what these can or can’t tell us about welfare under global climate change. We address this question directly by decomposing our welfare changes into three components: the direct productivity effect (essentially just the local productivity change multiplied by crop value), the terms-of-trade effect, and the allocative efficiency effect (caused by interactions with existing market distortions). The last one is generally pretty small in our work, so regional changes in welfare boil down to a combination of local productivity changes and the interaction between a region’s trade position and global price changes. This breakdown for 3 degrees of warming is shown below.

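Before digging into the figure, here is a toy version of the decomposition with invented numbers; nothing below comes from the paper except the structure of the sum.

```python
# A toy welfare decomposition, with invented numbers:
# welfare change = productivity effect + terms-of-trade effect
#                + allocative efficiency effect.
productivity_effect = -1.2    # local yield loss x crop value (bn USD, hypothetical)
terms_of_trade_effect = +2.0  # exporter gain from higher world prices (hypothetical)
allocative_efficiency = -0.1  # interaction with existing distortions (hypothetical)

welfare_change = productivity_effect + terms_of_trade_effect + allocative_efficiency
print(f"Net welfare change: {welfare_change:+.1f} bn USD")
# Yield losses but welfare gains: the ToT term dominates, as the paper
# finds for the US, Australia, and Argentina.
```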
For these productivity shocks, the terms-of-trade (ToT) effect is regionally pretty important. In a number of cases (notably the US, Australia, Argentina, and South Africa) it reverses the sign of the welfare change of the productivity effect. In other words, a number of regions experience yield losses but welfare gains because the increasing value of exports more than compensates for the productivity losses (with the opposite in South Africa). There are a few rules-of-thumb for thinking about how important the ToT effects are likely to be. Firstly, the ToT effects cancel out at the global level, so if you are only interested in aggregate global damages, you don’t have to worry about this effect. By similar logic, the higher the level of regional aggregation, the less important ToT effects are likely to be, since larger regions will likely contain both importers and exporters. Secondly, the importance of ToT effects depends on the magnitude of relative price changes which in turn depends on the productivity shocks. If the effect of climate change is to redistribute production around the globe rather than to increase or decrease aggregate production, then ToT effects will be smaller. We see this in our AgMIP results, where losses in the tropics are offset to a greater degree by gains in temperate regions, leading to smaller price changes and correspondingly smaller ToT effects.
A final point I’d like to highlight is the quantitative importance of our findings for the SCC. We knew from previous work by Delavane that agriculture is important in driving the SCC in FUND. Nevertheless, such a large increase in the total SCC from updating just one sector, in a model that contains 14 different impact sectors, is striking and raises the question of what would happen with other sectors. Moreover, there are suggestions that other models might also be underestimating agricultural impacts. Neither PAGE nor DICE models the agricultural sector explicitly, but both include agricultural impacts as part of more aggregate damage functions (market impacts in PAGE09 and non-SLR damages in DICE 2013). By coupling damage functions from these models to standardized socio-economic and climate modules, we are able to make an apples-to-apples comparison of our agriculture-sector damages with these damage functions. The graph below didn’t make it into the paper, but I think it is informative:
What you see is that our estimated agricultural damages (from the meta-analysis of yield estimates) are actually larger than all market damages included in PAGE. This indicates either that other market sectors have very small negative impacts and large offsetting benefits, or that PAGE is substantially under-estimating market damages. Comparing our results to DICE would imply that agricultural impacts constitute 45% of non-SLR damages. This seems high to me, considering this damage function includes both market and non-market (i.e. mortality) damages. For instance, in their analysis of US damages, Sol and co-authors found agricultural impacts to be dwarfed by costs from increased mortality, and to be smaller than effects on labor productivity and energy demand, suggesting to me that agricultural damages might also be currently under-estimated in DICE.

That’s my (excessively-long) take on the paper. Thank you to the GFEEDers for the opportunity. Do get in touch if you have any questions or comments on the paper – fmoore at ucdavis dot edu.

Tuesday, November 7, 2017

[Pretty urgent] Call for papers: Berkeley Climate Economics Workshop

We are currently soliciting papers from PhD students and post-docs for a climate economics workshop at UC Berkeley. Proposals are due on November 20, 2017 by 8:00am PST.


BACKGROUND AND GOALS


We encourage papers by PhD students and post-docs undertaking research in any area related to the economics of climate change. We encourage papers that use empirical methods, theory, or numerical modelling. Papers can be single-authored or co-authored. No restrictions apply to co-authors; i.e., co-authors can be senior researchers.

The workshop will explore recent advances in climate economics, with an emphasis on the linkage between empirical and numerical modeling methods. One goal of the workshop is to bring junior and senior researchers together. The final program will combine presentations from invited leading senior researchers and presentations from the most promising junior researchers (PhD students and postdocs).


HOW TO APPLY


Applications should be submitted online here.

Please include either a full working paper or an extended abstract (1-2 pages). PhD students should include a brief letter of recommendation from their advisor that indicates that the submitted abstract/paper will be ready for a full presentation for the workshop.


WORKSHOP INFORMATION


The workshop will be held at UC Berkeley on Fri 1/19 and Sat 1/20, 2018. All travel and lodging costs will be covered for presenters.

The workshop is organized by David Anthoff, Max Auffhammer, and Solomon Hsiang, in collaboration with the Social Science Matrix at UC Berkeley.

Any questions regarding the workshop should be directed to Eva Seto.

Monday, October 30, 2017

Climate impacts research is getting real

A few recent examples of our research getting out of the ivory tower and into the real world. This came up in a recent seminar I teach, so it seemed like others might appreciate the update.

1. The NYT sent some journalists to India to do a story following up on Tamma Carleton's recent work on climate and economic drivers of suicide in India.  This is powerful and humanizing work by the Times making hard-nosed numbers appropriately real and heartbreaking. Some of the striking one-line quotes they obtained from personal interviews:
"I go without food more days than I eat"
"If I were to take out any more loans, the interest would grow, and my whole family would be forced to kill themselves."
2. Last week the US Government Accountability Office released to Congress the report Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure. The report drew heavily from the American Climate Prospectus that we published a few years ago and our recent paper on US costs of warming. The GAO summary concludes:
Information about the potential economic effects of climate change could inform decision makers about significant potential damages in different U.S. sectors or regions. According to several experts and prior GAO work, this information could help federal decision makers identify significant climate risks as an initial step toward managing such risks. This is consistent with, for example, National Academies leading practices, which call for climate change risk management efforts that focus on where immediate attention is needed. The federal government has not undertaken strategic government-wide planning to manage climate risks by using information on the potential economic effects of climate change to identify significant risks and craft appropriate federal responses. By using such information, the federal government could take an initial step in establishing government-wide priorities to manage such risks.
3. Trevor Houser and I recently estimated the potential long-run economic consequences of Hurricane Maria on the economic growth of Puerto Rico and published an op-ed explaining the issue and putting the event in context. Basically, I ran my LICRICE model to compute wind exposure across the island, with maximum wind speeds averaging 123 mph across the entire territory. For a territory of this size, especially in the Atlantic, this is unprecedented in my data.


Then we applied these numbers to the results of work with Amir Jina on the macro-economic effects of these storms. If you take central estimates from our benchmark model, the picture is a 21% drop in GDP per capita, relative to trend, over the next 15 years. Based on the low pre-Maria growth rate, we estimate that this storm undid roughly 26 years of growth in under 12 hrs. As stated in the op-ed:
"Almost nothing on the planet, short of nuclear weaponry, destroys economic value as rapidly as a mega-hurricane."



Thursday, August 10, 2017

Climate change, crop failure, and suicides in India (Guest post by Tamma Carleton)

[This is a guest post by Tamma Carleton, a Doctoral Fellow at the Global Policy Lab and PhD candidate in the Ag and Resource Econ department here at Berkeley]

Last week, I published a paper in PNAS addressing a topic that has captured the attention of media and policymakers around the world for many years – the rising suicide rate in India. As a dedicated student of G-FEED contributors, I focus on the role of climate in this tragic phenomenon. I find that temperatures during India’s main growing season cause substantial increases in the suicide rate, amounting to around 65 additional deaths if all of India gained a degree day. I show that over 59,000 suicides can be attributed to warming trends across the country since 1980. With a range of different approaches I’ll talk about here, I argue that this effect appears to materialize through an agricultural channel in which crops are damaged, households face economic distress, and some cope by taking their own lives. It’s been a pretty disheartening subject to study for the last couple years, and I’m glad to see the findings out in the world, and now here on G-FEED.

First, a little background on suicides in India. The national suicide rate has approximately doubled since 1980, from around 6 per 100,000 to over 11 per 100,000 (for reference, the rate in the U.S. is about 13 per 100,000). The size of India’s population means this number encompasses many lives – today, about 135,000 are lost to suicide annually. There have been a lot of claims about what contributes to the upward trend, although most focus on increasing risks in agriculture, such as output price volatility, costly hybrid seeds, and crop-damaging climate events like drought and heat (e.g. here, here, and here). While many academic and non-academic sources have discussed the role of the climate, there was no quantitative evidence of a causal effect. I wanted to see if this relationship was in the data, and I wanted to be able to speak to the ongoing public debate by looking at mechanisms, a notoriously thorny aspect of the climate impacts literature.

The first finding in my paper is that growing season temperatures increase the annual suicide rate while also hurting crops (as the G-FEED authors have shown us many times over), yet these same temperatures have no effect on suicides outside the growing season. While the results are much less certain for rainfall (I’m stuck with state-by-year suicide data throughout the analysis), a similar pattern emerges there, with higher growing season rainfall appearing to cause reductions in suicide:


These effects seem pretty large to me. As I said above, a degree day of warming throughout the country during the growing season causes about 65 suicides throughout the year, equivalent to a 3.5% increase in the suicide rate per standard deviation increase in growing season degree days. 

The fact that the crop response functions are mirrored by the suicide response functions is consistent with an agricultural mechanism. However, this isn’t really enough evidence. Like in other areas of climate impacts research, it’s difficult to find exogenous variation that turns on or shuts off the hypothesized mechanism here –  I don’t have an experiment where I randomly let some households’ farm income be unaffected by temperature, as others’ suffer. Therefore, aspects of life that are different in the growing and non-growing seasons could possibly be driving heterogeneous response functions between temperature and suicide.  Because this mechanism is so important to policy, I turn to a couple additional tests.

I first show that there are substantial lagged effects, which would be unlikely if non-economic, direct links between the climate and suicidal behavior were at work (like the psychological channels linking temperature to violence discussed in Sol and Marshall’s work). I also estimate spatial heterogeneity in both the suicide response to temperature and the yield response, and find that the locations where suicides are most sensitive to growing season degree days also tend to be the locations where yields are most sensitive:



The fact that higher temperatures mean more suicides is troubling as we think about warming unfolding in the next few decades. However, I’m an economist, so I should expect populations to reallocate resources and re-optimize behaviors to adapt to a gradually warming climate, right? Sadly, after throwing at the data all of the main adaptation tests that I’m aware of from the literature, I find no evidence of adaptation, looking both across space (e.g. are places with hotter average climates less sensitive?) and across time (e.g. has the response function flattened over time? What about long differences?):


Keeping in mind that there is no evidence of adaptation, my last calculation is to estimate the total number of deaths that can be attributed to warming trends observed since 1980. Following the method in David, Wolfram, and Justin Costa-Roberts’ 2011 article in Science, I find that by the end of my sample in 2013, over 4,000 suicides per year across the country can be attributed to warming. Integrating from 1980 to today and across all states in India, I estimate that over 59,000 deaths in total can be attributed to warming. With spatially heterogeneous warming trends and population density, these deaths are distributed very differently across space:



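For those curious about the mechanics, here is a stylized sketch of the attribution calculation. Only the 65-deaths-per-degree-day response comes from the paper; the warming trend below is an invented placeholder, so the printed numbers land near the paper's figures only roughly and by construction.

```python
# Stylized attribution sketch (after the Lobell, Schlenker & Costa-Roberts
# method): combine an estimated response with the observed warming trend.
import numpy as np

response = 65.0  # additional suicides per +1 growing-season degree day, nationally
years = np.arange(1980, 2014)
# Hypothetical cumulative gain in growing-season degree days since 1980
dday_gain = 60.0 * (years - 1980) / (2013 - 1980)

attributable = response * dday_gain  # attributable suicides in each year
print(f"Attributable in 2013: {attributable[-1]:,.0f} per year")
print(f"Total 1980-2013: {attributable.sum():,.0f}")
```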
While the tools I use are not innovative by any means (thanks to the actual authors of this blog for developing most of them), I think this paper is valuable to our literature for a couple reasons. First, while we talk a lot about integrating our empirical estimates of the mortality effects of climate change into policy-relevant metrics like the SCC, this is a particular type of death I think we should be incredibly concerned about. Suicide indicates extreme distress and hardship, and since we care about welfare, these deaths mean something distinct from the majority of the deaths driving the mortality rate responses that we often study. 

Second, mechanisms really matter. The media response to my paper has been shockingly strong, and everyone wants to talk about the mechanism and what it means for preventative policy. While I have by no means nailed the channel down perfectly here, a focus on testing the agricultural mechanism has made my findings much more tangible for people battling the suicide epidemic on the ground in India. I look forward to trying to find ways to improve the tools at our disposal for identifying mechanisms in this context and in others.

Finally, as climate change progresses, I think we could learn a lot from applying David, Wolfram, and Justin’s method more broadly. While attribution exercises have their own issues (e.g. we can’t, of course, attribute with certainty the entire temperature trend in any location to anthropogenic warming), I think it’s much easier for many people to engage with damages being felt today, as opposed to those likely to play out in the future. 

Thursday, July 6, 2017

Yesterday's maximum temperature is... today's maximum temperature? (Guest post by Patrick Baylis)

[For you climate-data-wrangling nerds out there, today we bring you a guest post by Patrick Baylis, current Stanford postdoc and soon-to-be assistant prof at UBC this fall. ]
You may not know this, but Kahlil Gibran was actually thinking about weather data when he wrote that yesterday is but today’s memory, and tomorrow is today’s dream. (Okay, not really.)
Bad literary references aside, readers of this blog know that climate economists project the impacts of climate change by observing the relationships between historical weather realizations and economic outcomes. Fellow former ARE PhD Kendon Bell alerted me to an idiosyncrasy in one of the weather datasets we frequently use in our analyses. Since many of us (myself included) rely on high-quality daily weather data to do our work, I investigated. This post is a fairly deep dive into what I learned, so if you happen to not be interested in the minutiae of daily weather data, consider yourself warned.
The PRISM AN81-d dataset provides daily minimum and maximum temperatures, precipitation, and minimum and maximum vapor pressure deficit for the continental United States from 1981 to present. It is created by the PRISM Climate Group at Oregon State, and it is really nice. Why? It’s a gridded data product: it is composed of hundreds of thousands of 4km by 4km grid cells, where the values for each cell are determined by a complex interpolation method from weather station data (GHCN-D) that accounts for topographic factors. Importantly, it’s consistent: there are no discontinuous jumps in the data (see figure below) and it’s a balanced panel: the observations are never missing.
[Figure: PRISM 30-year normals. Source: PRISM Climate Group]
These benefits are well-understood, and as a result many researchers use the PRISM dataset for their statistical models. However, there is a particularity of these data that may be important to researchers making use of the daily variation in the data: most measurements of temperature maximums, and some measurements of temperature minimums, actually refer to the maximum or minimum temperature of the day before the date listed.
To understand this, you have to understand that daily climate observations are actually summaries of many within-day observations. The reported maximum and minimum temperature are just the maximum and minimum temperature observations within a given period, like a day. The tricky part is that stations define a “day” as “the 24 hours since I previously recorded the daily summary”, but not all stations record their summaries at the same time. While most U.S. stations record in the morning (i.e, “morning observers”), a hefty proportion of stations are either afternoon or evening observers. PRISM aggregates data from these daily summaries, but in order to ensure consistency tries to only incorporate morning observers. This leads to the definition of a “PRISM day”. The PRISM documentation defines a “PRISM day” as:
Station data used in AN81d are screened for adherence to a “PRISM day” criterion. A PRISM day is defined as 1200 UTC-1200 UTC (e.g., 7 AM-7AM EST), which is the same as [the National Weather Service’s hydrologic day]. Once-per day observation times must fall within +/- 4 hours of the PRISM day to be included in the AN81d tmax and tmin datasets. Stations without reported observation times in the NCEI GHCN-D database are currently assumed to adhere to the PRISM day criterion. The dataset uses a day-ending naming convention, e.g., a day ending at 1200 UTC on 1 January is labeled 1 January.
This definition means that generally only morning observers should be included in the data. The last sentence is important: because a day runs from 4am-4am PST (or 7am-7am EST) and because days are labeled using the endpoint of that time period, most of the observations from which the daily measures are constructed for a given date are taken from the day prior. A diagram may be helpful here:
[Diagram: hypothetical within-day temperature trace over two days, with the PRISM day cutoff marked]
The above is a plot of temperature over about two days, representing a possible set of within-day monitor data. Let’s say that this station takes a morning reading at 7am PST (10am EST), meaning that this station would be included in the PRISM dataset. The top x-axis is the actual date, while the bottom x-axis shows which observations are used under the PRISM day definition. The red lines are actual midnights, the dark green dotted line is the PRISM day definition cutoff, and the orange (blue) dots in the diagram are the observations that represent the true maximums (minimums) of that calendar day. Because of the definition of a PRISM day, the maximum temperatures (“tmax”s from here on out) given for Tuesday and Wednesday (in PRISM) are actually observations recorded on Monday and Tuesday, respectively. On the other hand, the minimum temperature (“tmin”) given for Tuesday (in PRISM) is indeed drawn from Tuesday, but the tmin given for Wednesday (in PRISM) is also from Tuesday.
To see this visually, I pulled the GHCN data and plotted a histogram of the average reporting time by station for the stations that report observation time (66% in the United States). The histogram below shows the average observation time by stations for all GHCN-D stations in the continental United States in UTC, colored by whether or not they would be included in PRISM according to the guidelines given above.
[Histogram: average observation time (UTC) by station, colored by PRISM inclusion]
This confirms what I asserted above: most, but not all, GHCN-D stations are morning observers, and the PRISM day definition does a good job capturing the bulk of that distribution. On average, stations fulfilling the PRISM criterion report at 7:25am or so.
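As a concrete illustration of the screen, here is a minimal check of the PRISM day criterion. The function name and examples are mine, not PRISM's code.

```python
# A station's once-daily observation time must fall within +/- 4 hours
# of 1200 UTC to meet the PRISM day criterion described above.
from datetime import time

def meets_prism_day_criterion(obs_time_utc: time) -> bool:
    """True if an observation time falls in the 0800-1600 UTC window."""
    minutes = obs_time_utc.hour * 60 + obs_time_utc.minute
    return 8 * 60 <= minutes <= 16 * 60

print(meets_prism_day_criterion(time(12, 25)))  # 7:25am EST observer -> True
print(meets_prism_day_criterion(time(23, 0)))   # evening observer -> False
```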
The next step is to look empirically at how many minimum and maximum temperature observations are likely to fall before or after the observation time cutoff. To do that, we need some raw weather station data, which I pulled from NOAA’s Quality Controlled Local Climatological Data (QCLCD). To get a sense for which extreme temperatures would be reported as occurring on the actual day they occurred, I assumed that all stations would report at 7:25am, the average observation time in the PRISM dataset. The next two figures show histograms of observed maximum and minimum temperatures.
[Histograms: times of observed daily maximum and minimum temperatures]
I’ve colored the histograms so that all extremes (tmins and tmaxes) after 7:25am are red, indicating that extremes after that time will be reported as occurring during the following day. As expected, the vast majority of tmaxes (>94%) occur after 7:25am. But surprisingly, a good portion (32%) of tmins do as well. If you’re concerned about the large number of minimum temperature observations around midnight, remember that a midnight-to-midnight summary is likely to have this sort of “bump”, since days with a warmer-than-usual morning and a colder-than-usual night will have their lowest temperature at the end of the calendar day.
As a more direct check, I compared PRISM leads of tmin and tmax to daily aggregates (that I computed using a local calendar date definition) of the raw QCLCD data described above. The table below shows the pairwise correlations between the PRISM day-of observations, leads (next day), and the QCLCD data for both maximum and minimum daily temperature.
Measure            PRISM day-of    PRISM lead
tmin (calendar)    0.962           0.978
tmax (calendar)    0.934           0.992
As you can see, the PRISM leads, i.e., observations from the next day, correlate more strongly with my aggregated data. The difference is substantial for tmax, as expected. The result for tmin is surprising: it also correlates more strongly with the PRISM tmin lead. I’m not quite sure what to make of this - it may be that the stations who fail to report their observation times and the occasions when the minimum temperature occurs after the station observation time are combining to make the lead of tmin correlate more closely with the local daily summaries I’ve computed. But I’d love to hear other explanations.
So who should be concerned about this? Mostly, researchers with econometric models that use daily variation in temperature on the right-hand side, and fairly high-frequency variables on the left-hand side. The PRISM group isn’t doing anything wrong, and I’m sure that folks who specialize in working with weather datasets are very familiar with this particular limitation. Their definition matches a widely used definition of how to appropriately summarize daily weather observations, and presumably they’ve thought carefully about the consequences of this definition and of including more data from stations that don’t report their observation times. But researchers who, like me, are not specialists in using meteorological data and who, like me, use PRISM to examine daily relationships between weather and economic outcomes should tread carefully.
As is, using the PRISM daily tmax data amounts to estimating a model that includes lagged rather than day-of temperature. A quick fix, particularly for models that include only maximum temperature, is to simply use the leads, i.e., the observed weather for the next day, since it will almost always reflect the maximum temperature for the day of interest. A less-quick fix is to make use of the whole distribution using the raw monitor data, but then you would lose the nice gridded quality of the PRISM data. Models with average or minimum temperature should, at the very least, be tested for robustness using the lead values; a minimal sketch of the lead fix follows below. Let’s all do Gibran proud.
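Here is what that lead-based quick fix might look like in pandas; the data and column names are hypothetical.

```python
# Treat the tmax that PRISM labels with tomorrow's date as today's
# calendar-day tmax.
import pandas as pd

df = pd.DataFrame(
    {"tmax": [31.0, 33.5, 29.8, 30.2]},
    index=pd.date_range("2017-07-01", periods=4, name="date"),
)
# shift(-1) takes the lead: the value PRISM labels July 2 mostly reflects July 1
df["tmax_calendar"] = df["tmax"].shift(-1)
print(df)
```

One cost of this fix is that the last day of the sample becomes missing.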

Friday, June 30, 2017

Building a better damage function

Bob Kopp, Amir Jina, James Rising, and our partners at the Climate Impact Lab, Princeton, and RMS have a new paper out today.  Our goal was to construct a climate damage function for the USA that is "micro-founded," in the sense that it is built up from causal relationships that are empirically measured using real-world data (if you're feeling skeptical, here are two videos where Michael Greenstone and I explain why this matters).

Until now, the "damage function" has been a theoretical concept. The idea is that there should be some function that links global temperature changes to overall economic costs, and it was floated in the very earliest economic models of climate change, such as the original DICE model by Nordhaus where, in 1992, he described the idea while outlining his model:

[Image: damage function equation from Nordhaus (1992)]

The "extremely elusive" challenge was figuring out what this function should look like, e.g. what should theta_1 and theta_2 be? Should there something steeper than quadratic to capture really catastrophic outcomes? Many strong words have been shared between environmental economists at conferences about the shape and slope of this function, but essentially all discussions have been heuristic or theoretical.  We took a different approach, instead setting out to try and use the best available real world empirical results to figure out what the damage function looks like for the USA.  Here's what we did.

We started out by recognizing that a lot of work has already gone into modeling climate projections for the world, by dozens of teams of climate modelers around the world. So we took advantage of all those gazillions of processor-hours that have already been used and simply took all the CMIP5 models off the shelf, and systematically downscaled them to the county level.

Then we indexed each model against the global mean surface temperature change that it exhibits. Not all models agree on what warming will happen given a certain level of emissions. And even among models that exhibit the same global mean temperature change, not all models agree on what will happen for specific locations in the US.  So it's important that we keep track of all possible experiences that the US might have in the future for each possible level of global mean temperature change.  Here's the array of actual warming patterns in maps, where each map is located on the horizontal axis based on the projected warming under RCP8.5 ("business as usual"). As you can see, the US may experience many different types of outcomes for any specific level of global mean warming.


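A stylized version of this indexing step might look like the following; all data here are invented, and the point is only the bookkeeping of local outcomes at each warming level.

```python
# Each model contributes one global mean warming level and one map of
# local anomalies; we track the spread of local outcomes at each level.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_counties = 116, 3143
gmst = rng.uniform(1.5, 5.5, n_models)  # global mean warming by model (deg C, hypothetical)
local = gmst[:, None] + rng.normal(0.0, 0.7, (n_models, n_counties))  # local anomalies

# For a given level of global warming, the US still faces many possible patterns
near_3c = np.abs(gmst - 3.0) < 0.25
print(f"{near_3c.sum()} models near +3C; spread of local anomalies: "
      f"{local[near_3c].std():.2f} C")
```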
Then, for each possible future warming scenario, we build a projection of what impacts will be in a whole bunch of sectors, using studies that meet a high empirical standard (pretty much the same one Marshall and I used in our conflict meta-analysis, plus a few additional criteria). This relied on a lot of previous work done by colleagues here, like Mike/Wolfram/David's crop results and Max's electricity results. We project impacts in agriculture, energy, mortality, labor and crime using empirical response functions. For energy, this got a little fancy because we hooked up the empirical model to NEMS and ran it as a process model. For coastal damages, we partnered with RMS and restructured their coastal cyclone model to take Bob's probabilistic SLR projections and Kerry Emanuel's cyclone projections as inputs---their model is pretty cool since it models thousands of coastal flood scenarios, and explicitly models damages for every building along the Atlantic coast. The energy and coastal models were each big lifts, using process models with empirical calibration, as were the reduced-form impacts, since we resampled daily weather and statistical uncertainty for each impact in each RCP in each climate model; this amounted to tracking 15 impacts across 3,143 counties across 29,000 possible states of the world for every day during 2000-2099. These are maps of the median scenarios for the different impacts:


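Schematically, the uncertainty propagation for a single sector looks something like the sketch below; the shapes and coefficients are invented, and the real pipeline works at the daily level across all 15 impacts.

```python
# Resample econometric parameter uncertainty and weather realizations,
# then apply a response per state of the world (all numbers invented).
import numpy as np

rng = np.random.default_rng(2)
n_draws, n_counties = 1000, 3143
beta = rng.normal(0.5, 0.1, n_draws)  # resampled response coefficient (% income per deg C)
anomaly = rng.normal(3.0, 0.7, (n_draws, n_counties))  # resampled local warming

damage = beta[:, None] * anomaly  # % county income lost, per state of the world
total = damage.mean(axis=1)       # aggregate loss in each state of the world
print(f"median {np.median(total):.2f}%, 5th-95th: "
      f"{np.percentile(total, 5):.2f}-{np.percentile(total, 95):.2f}%")
```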
Then, within each possible state of the world, we added up the costs across these different sectors to get a total cost. Doing this addition first is important because it accounts for any cross-sector correlations that might emerge in the future due to the spatial correlations in economic activity (across sectors) and their joint spatial correlation with the future climate anomaly (a bad realization for energy, ag, and mortality all might happen at the same time).  We then take these total costs and plot them against the global mean temperature change that was exhibited by the climate model that generated them. There ended up being 116 climate models that we could use, so there are only 116 different global temperature anomalies, but each model generated a whole distribution of possible outcomes due to weather and econometric uncertainty. Plotting these 116 distributions gives us a sense of the joint distribution between overall economic losses and global temperature changes: 


We can then just use normal statistics on these data to describe this joint distribution succinctly, getting out some equations that other folks can plug into their cost calculations or IAMs.  Below are the 5th-95th percentile intervals for the probability mass, as well as the median. To our knowledge, this is basically the first micro-founded damage function:


It turns out that Nordhaus was right about the functional form: it is quadratic. In the paper we try a bunch of other forms, but this thing is definitely quadratic. And if you are happy with the conditional average damage, we can get you the thetas:

E[damage | T] = 0.283 x T + 0.146 x T^2
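The quadratic coefficients above are from the paper; evaluating them at a few warming levels is just a quick check (my code, damages in percent of GDP):

```python
# Evaluate the fitted conditional-mean damage function at a few warming levels.
def expected_damage(T):
    """Expected US damages, percent of GDP, at global mean warming T (deg C)."""
    return 0.283 * T + 0.146 * T ** 2

for T in (1, 2, 3, 4):
    print(f"+{T}C -> {expected_damage(T):.2f}% of GDP")
```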

Now, of course, as we say several times in the paper, this function will change as we learn more about the different parts of the economy that the climate influences (for example, since we submitted the paper, we've learned that sleep is affected by climate). So for any new empirical study, as long as it meets our basic criteria, we can plug it in and crank out a new and updated damage function.

Beyond the damage function, there is one other finding that might interest the G-FEED crowd. Because the South tends to be hotter, it is disproportionately damaged by nonlinear climate impacts where high temperatures impose higher marginal damages (ag, mortality, energy & labor). Also, along the Gulf and southern Atlantic coast, coastal damages get large. The South also happens to be poorer than the North, which is impacted less heavily (or benefits on net, in many cases). This means that damages are negatively correlated with incomes, so the poor are hit hardest and the rich lose less (or gain). On net, this will increase current patterns of economic inequality (a point the press has emphasized heavily). Here are whisker plots showing the distribution of total damage for each county, where counties are ordered by their rank in the current income distribution:


Note that nothing about this calculation takes into account the possibility that poor counties have fewer resources with which to cope; this is just about the interaction of geography and the structure of the dose-response function.

This widening of inequality probably should matter for all sorts of reasons, including the possibility that it induces strong migration or social conflict (e.g. think about current rural-to-urban migration, the last election, or the Dust Bowl). But it also should matter for thinking about policy design and calculations of the social cost of carbon (SCC).  Pretty much all SCC calculations (e.g. DICE, FUND, PAGE) think about climate damages in welfare terms, but they compute damages for a representative agent that either represents the entire world, or enormous regions (e.g. the USA is one region in FUND). This made sense, since most of the models were primarily designed to think about the inter-temporal mitigation-as-investment problem, so collapsing the problem in the spatial dimension made it tractable in the inter-temporal dimension.  But it makes it really hard, or impossible, to resolve any inequality of damages among contemporary individuals within a region (in the case of FUND) or on the planet (in the case of DICE). Our analysis shows that there are highly unequal impacts within a single country, and this inequality of damages can be systematically incorporated into the damage function above, which, as shown, is simply aggregate losses (treating national welfare as equal to average GDP only).  David Anthoff and others have thought about accounting for inequality between the representative agents of different FUND regions, and shown that it matters a lot.  But as far as I know, nobody has accounted for it within a country, and this seems to matter a lot too.

In the online appendix [Section K] (space is short at Science) we show how we can account for both inequality and risk, capturing both in a welfare-based damage function. Using our data and a welfare function that is additive in CRRA utilities, we compute inequality-neutral certainty-equivalent damage functions. These are the income losses that, if shared equally across the entire US population with certainty, would have the same welfare impact as the uncertain and unequal damages that we estimate (i.e. shown in the dot-whisker plot above).  Two things to note about this concept. First, this adjustment could theoretically make damages appear smaller if climate changes were sufficiently progressive (i.e. hurting the wealthy and helping the poor). Second, there are two ways to compute this that are not equivalent; one could compute either (i) the inequality in risks borne by different counties or (ii) the risk of inequality across counties. We chose the first option, which involves first computing the certainty-equivalent damage for each county, then computing the inequality-neutral equivalent damage for that cross-sectional distribution of risk. (We thought it was a little too difficult for actual people to reasonably imagine all possible unequal states of the future world before integrating out the uncertainty.)
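Here is a compact sketch of that two-step calculation under CRRA utility. Everything in it is illustrative (invented incomes, invented parameter values); it just shows the order of operations we chose.

```python
# Option (i) above: integrate out risk within each county first, then
# apply inequality aversion across counties.
import numpy as np

def u(c, eta):
    """CRRA utility."""
    return np.log(c) if eta == 1 else c ** (1 - eta) / (1 - eta)

def u_inv(x, eta):
    """Inverse of CRRA utility."""
    return np.exp(x) if eta == 1 else ((1 - eta) * x) ** (1 / (1 - eta))

rng = np.random.default_rng(3)
income = rng.lognormal(3.5, 0.4, size=(500, 100))  # states of the world x counties

risk_aversion, inequality_aversion = 2.0, 2.0
# Step 1: certainty-equivalent income in each county, over states of the world
ce = u_inv(u(income, risk_aversion).mean(axis=0), risk_aversion)
# Step 2: equally-distributed equivalent income across counties
ede = u_inv(u(ce, inequality_aversion).mean(), inequality_aversion)
print(f"EDE income: {ede:.1f} vs. mean income: {income.mean():.1f}")
```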

We compute these adjusted damages for a range of parameters that separately describe risk aversion and inequality aversion, since these are value judgements and we don't have strong priors on what the right number ought to be. Below is a graph of what happens to the damage function as you raise both these parameters above one (values of one just give you back the original damage function, which is the dashed line below). Each colored band is for a single inequality aversion value, where the top edge is for risk aversion = 8 and the lower edge is risk aversion = 2:

National inequality-neutral certainty-equivalent loss equal in value to direct damages under different assumptions regarding coefficients of inequality aversion and risk aversion. Shaded regions span results for risk aversion values between 2 and 8 (lower and upper bounds).  Dashed line is same as median curve above.

What we see is that adjustment for inequality starts to matter a lot pretty quickly, more so than risk aversion, but the two actually interact to create huge welfare losses as temperatures start to get high. For a sense of scale, note that in the original DICE model, Nordhaus defined "catastrophic outcomes" as possible events that might lower incomes by 20%.

Bob, David Anthoff and I have debated a bit what the right values for these parameters are, and I'll be the first to say I don't know what they should be. There are several estimates out there, but I think we really don't talk about inequality aversion much so there's not a ton to draw on. But, just like the discount rate (which has received a lot of attention/thought/debate), these ethical parameters have a huge influence on how we think about these damages. And looking at this figure, my guess is that inequality aversion may be just as influential on the SCC as the discount rate---especially once we start having global estimates with this kind of spatial resolution.  I think this is one of the most important directions for research to go: figuring out how we are supposed to value the inequality caused by climate change and accounting for it appropriately in the SCC.