Thursday, August 23, 2018

Let there be light? Estimating the impact of geoengineering on crop productivity using volcanic eruptions as natural experiments (Guest post by Jonathan Proctor)

[This is a guest post by Jonathan Proctor, a Doctoral Fellow at the Global Policy Lab and PhD candidate in the Ag and Resource Econ department here at Berkeley]

On Wednesday I, and some notorious G-FEEDers, published a paper in Nature exploring whether solar geoengineering – a proposed technology to cool the Earth by reflecting sunlight back into space – might be able to mitigate climate-change damages to agricultural production. We find that, as intended and previously described, the cooling from geoengineering benefits crop yields. We also find, however, that the shading from solar geoengineering makes crops less productive. On net, the damages from reduced sunlight wash out the benefits from cooling, meaning that solar geoengineering is unlikely to be an effective tool to mitigate the damages that climate change poses to global agricultural production and food security. Put another way, if we imagine SRM as an experimental surgery, our findings suggest that the side effects are as bad as the cure.

Zooming out, solar geoengineering is an idea to cool the Earth by injecting reflective particles – usually precursors to sulfate aerosols – into the high atmosphere. The idea is that these particles would bounce sunlight back into space and thus cool the Earth, similarly to how you might cool yourself down by standing in the shade of a tree on a hot day. The idea of such sulfate-based climate engineering was, in part, inspired by the observation that the Earth tends to cool following massive volcanic eruptions such as that of Pinatubo in 1991, which cooled the Earth by about half a degree C in the years following the eruption.

Our visualization of the stratospheric aerosols (blue) that scattered light and shaded the planet after the eruption of Mount Pinatubo in 1991. Each frame is one month of data. Green on the surface indicates global crop lands. (The distance between the aerosol cloud and the surface is much larger than in real life.) 

A major challenge in learning the consequences of solar geoengineering is that we can’t do a planetary-scale experiment without actually deploying the technology. (Sol’s questionably-appropriate analogy is that you can’t figure out if you want to be a parent through experimentation.) An innovation here was realizing that we could learn about the impacts of solar geoengineering without incurring the risks of an outdoor experiment by using giant volcanic eruptions as natural experiments. While these eruptions are not perfect proxies for solar geoengineering in every way, they give us the variation we need in high-atmosphere aerosol concentrations to study some of the key effects on agriculture. (We expand on how we account for the important differences between the impacts of volcanic eruptions and solar geoengineering on agricultural production later in this post.) This approach builds on previous work in the earth science community that has used these eruptions to study solar geoengineering’s impact on climate. Here’s what we found:

Result 1: Pinatubo dims the lights


First, we find that the aerosols from Pinatubo had a profound impact on the global optical environment. By combining remotely sensed data on the eruption’s aerosol cloud with globally-dispersed ground sensors of solar radiation (scraped from a Russian website that recommends visitors use Netscape Navigator), we estimate that the Pinatubo eruption temporarily decreased total global surface solar radiation (orange) by 2.5%, reduced direct (i.e. unscattered, shown in yellow) insolation by 20%, and increased diffuse (i.e. scattered, shown in red) sunlight by 20%.
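These three percentages are mutually consistent only for a particular split between direct and diffuse light. A quick back-of-the-envelope check (my own arithmetic, using only the rounded numbers above) recovers the implied pre-eruption direct share:

```python
# If direct insolation falls 20% and diffuse rises 20%, what initial
# direct share f of total sunlight makes total insolation fall 2.5%?
# Solve: -0.20*f + 0.20*(1 - f) = -0.025
f = (0.20 + 0.025) / 0.40
total_change = -0.20 * f + 0.20 * (1 - f)
print(f"implied direct share of sunlight: {f:.3f}")
print(f"implied change in total insolation: {total_change:+.3f}")
```

A direct share of roughly 56% seems plausible for all-sky conditions, but the true split varies with cloudiness, so treat this purely as a consistency check on the rounded percentages.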

Effect of El Chichon (1982) and Mt Pinatubo (1991) on direct (yellow), diffuse (red) and total (orange) insolation for global all-sky conditions.

These global all-sky results (i.e. the average effect on a given day) generalize previous clear-sky estimates (the effect on a clear day) that have been done at individual stations. Like a softbox or diffusing sheet in photography, this increase in diffuse light reduced shadows on a global scale. The aerosol scattering also made sunsets redder (sulfate aerosols cause a spectral shift in addition to a diffusion of light), similar to the volcanic sunsets that inspired Edvard Munch’s “The Scream.” Portraits and paintings aside, we wanted to know: how did these changes in sunlight impact global agricultural production?

Isolating the effect of changes in sunlight, however, was a challenge. First, the aerosols emitted by the eruption alter not only sunlight but also other climatic variables, such as temperature and precipitation, which impact yield. Second, there just so happened to be El Niño events that coincided with the eruptions of both El Chichon and Pinatubo. This unfortunate coincidence has frustrated the atmospheric science community for decades, leading some to suggest that volcanic eruptions might even cause El Niños, as well as the reverse (the former theory seems to have more evidence behind it).

To address the concern that volcanoes affect both light and other climatic conditions, we used a simple “condition on observables” design – by measuring and including potential confounds (such as temperature, precipitation and cloud cover) in the regression, we can account for their effects. To address the concurrent El Niño, we do two things. First, we directly condition on the variables through which an El Niño could impact yields – again, temperature, precipitation and cloud cover. Second, we condition on the El Niño index itself, which captures any effects that operate outside of these directly modeled channels. Essentially, we isolate the insolation effect by partialling out the variation due to everything else – like looking for your keys by pulling everything else out of your purse.
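As a toy illustration of the design (not the paper's actual specification – the real model includes fixed effects and more flexible functional forms, and every variable name and coefficient below is invented), ordinary least squares with the confounds included as controls recovers the aerosol effect in simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated regressors: aerosol optical depth (aod) plus the confounds
aod = rng.uniform(0.0, 0.15, n)
temp = rng.normal(20.0, 3.0, n)
precip = rng.normal(800.0, 150.0, n)
cloud = rng.uniform(0.0, 1.0, n)
enso = rng.normal(0.0, 1.0, n)  # El Nino index

# Simulated log yield with a true optical effect of -0.5 per unit AOD
log_yield = (-0.5 * aod + 0.01 * temp - 1e-4 * precip - 0.05 * cloud
             + 0.02 * enso + rng.normal(0.0, 0.01, n))

# Conditioning on temperature, precipitation, cloud cover and the ENSO
# index isolates the variation in yields driven by aerosol scattering
X = np.column_stack([np.ones(n), aod, temp, precip, cloud, enso])
beta, *_ = np.linalg.lstsq(X, log_yield, rcond=None)
print(f"estimated AOD effect: {beta[1]:.3f}")  # should land near the true -0.5
```

Omitting the controls from `X` would load the climatic effects of the eruption onto the aerosol coefficient, which is exactly the confounding the design is built to avoid.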



The above figure schematically illustrates our strategy. The total effect (blue) is the sum of optical (red) and climatic components (green). By accounting for the change in yields due to the non-optical factors, we isolate the variation in yields due to stratospheric aerosol-induced changes in sunlight.

Result 2: Dimming the lights decreases yields


Our second result, and the main scientific contribution, is the finding that radiative scattering from stratospheric sulfate aerosols decreases yields on net, holding other variables like temperature constant. The magnitude of this impact is substantial – the global average scattering from Pinatubo reduced C4 (maize) yields by 9.3% and C3 (soy, rice and wheat) yields by 4.8%, which is two to three times larger than the change in total sunlight. We reconstruct this effect for each country in the figure below:

Each line represents one crop for one country. These are the reconstructed yield losses due to the estimated direct optical effects of overhead aerosols.

My young cousins dismissed the sign of this effect as obvious – after all, plants need sunlight to grow. But, to their surprise, the prevailing wisdom in the literature tended to be that scattering light should increase crop growth (Sol incessantly points out that David even said this once). The argument is that the reduction in yields from the loss of total light would be more than offset by gains in yield through an increase in diffuse light. The belief that diffuse light is more useful to plants than direct light stems from both the observation that the biosphere breathed in carbon dioxide following the Pinatubo eruption and the accompanying theory that diffusing light increases plant growth by redistributing light from the sun-saturated leaves at the top of the canopy to the light-hungry leaves below. Since each leaf has diminishing photosynthetic returns to each incremental increase in sunlight, the theory argues, a more equal distribution of light should promote growth.
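The diminishing-returns logic can be made concrete with a toy canopy of two leaves and an invented saturating light-response function (my own illustration, not a crop model):

```python
import math

def leaf_photosynthesis(light):
    # Concave, saturating light response (arbitrary units)
    return 1.0 - math.exp(-light)

# Two leaves sharing a fixed total of 4 light units
direct_sky = leaf_photosynthesis(3.9) + leaf_photosynthesis(0.1)   # sunlit top leaf, shaded lower leaf
diffuse_sky = leaf_photosynthesis(2.0) + leaf_photosynthesis(2.0)  # scattering evens out the light
print(direct_sky < diffuse_sky)  # True: at fixed total light, even beats uneven
```

Because the response is concave, spreading a fixed light budget evenly always raises the canopy total; the empirical question is whether that gain survives the simultaneous loss of total light, and for crops we find it does not.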

Aerosols scatter incoming sunlight, which more evenly distributes sunlight across the leaves of the plant. We test whether the loss of total sunlight or the increase in diffuse light from aerosol scattering has a stronger effect on yield.

While this “diffuse fertilization” appears to be strong in unmanaged environments, such as the Harvard Forest where the uptake of carbon increased following the Pinatubo eruption, our results find that, for agricultural yields, the damages from reduced total sunlight outweigh the benefits from a greater portion of the light being diffuse.

We cannot tell for sure, but we think that this difference between forest and crop responses could be due to either their differences in geometric structure (which could affect how deeply scattered light might penetrate the canopy):


Or to a re-optimization towards vegetative growth at the cost of fruit growth in response to the changes in light:

Two radishes grown in normal (left) and low light (right) conditions.  Credit: Nagy Lab, University of Edinburgh

This latter re-optimization may also explain the relatively large magnitude of the estimated effect on crop yield.

Result 3: Dimming damages neutralize cooling benefits from geo


Our final calculation, and the main policy-relevant finding of the paper, is that in a solar geoengineering scenario the damages from reduced sunlight cancel out the benefits from cooling. The main challenge here was figuring out how to apply what we learned from volcanic eruptions to solar geoengineering, since the climate impacts (e.g. changes in temperature, precipitation or cloud cover) of a short-term injection of stratospheric aerosols differ from those of more sustained injections (e.g. see here and here). To address this, we first used an earth system model to calculate the impact of a long-term injection of aerosols on temperature, precipitation, cloud cover and insolation (measured in terms of aerosol optical depth). We then applied the crop model that we trained on the Pinatubo eruption (which accounts for changes in temperature, rainfall, cloud cover, and insolation independently) to calculate how these geoengineering-induced changes in climate impact crop yields. This two-step process allows us to disentangle the effects of solar geoengineering on climate (which we get from the earth system model) and of climate on crops (which we get from Pinatubo). Thus, we can calculate the change in yields under a solar geoengineering scenario even though volcanic eruptions and solar geoengineering have different climatic fingerprints. Still, as with any projection to 2050, caveats abound, such as the role of adaptation, the possibility of optimized particle design, or the possibility that variables other than sunlight, temperature, rainfall and cloud cover could play a substantial role.
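Schematically, the two-step calculation multiplies the Pinatubo-estimated sensitivity of yields to each channel by the earth-system-model change in that channel. All numbers below are invented for illustration (the paper's sensitivities are crop-specific and the climate deltas are scenario-specific); they are chosen only to show how a cooling benefit and a dimming damage can net out:

```python
betas = {   # made-up log-yield sensitivities per unit change in each channel
    "temperature": -0.05,   # warming hurts yields, so cooling helps
    "aod": -0.50,           # aerosol scattering hurts yields
    "cloud": -0.01,
    "precip": 0.0002,
}
deltas = {  # made-up geoengineering-induced changes from an earth system model
    "temperature": -1.0,    # degrees C of cooling
    "aod": 0.10,            # added aerosol optical depth
    "cloud": 0.2,
    "precip": 10.0,         # mm
}

partials = {k: betas[k] * deltas[k] for k in betas}  # per-channel yield effects
total = sum(partials.values())                       # net yield change
print(partials)
print(f"net log-yield change: {total:+.3f}")  # ~0: dimming offsets cooling here
```

In this stylized example the positive temperature term and the negative insolation term cancel, mirroring the paper's qualitative finding.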

Estimated global net effect of a geoengineering scenario on crop yields through four channels (temperature, insolation, cloud cover, precipitation) for four crops. The total effect is the sum of these four partial effects.

So, what should we do? For agriculture, our findings suggest that sulfate-based solar geoengineering might not work as well as previously thought to limit the damaging effects of climate change. However, there are other sectors of the economy that could potentially benefit substantially from geoengineering (or be substantially damaged – we just don’t know). To continue the metaphor from earlier, just because the first test of an experimental surgery had side effects for a specific part of the human body does not mean that the procedure is always immediately abandoned. There are many illnesses that are so harmful that procedures known to cause side effects are sometimes still worth the risk. Similarly, research into geoengineering should not be entirely abandoned because our analysis demonstrated one adverse side effect; there may remain good reasons to eventually pursue such a strategy despite some known costs. With careful study, humanity will eventually gain a better understanding of this powerful technology. We hope that the methodology developed in this paper might be extended to study the effects of sulfate aerosol injection on ecosystem or human health, and we would be open to collaborating on future studies. Thanks for reading, and I’m excited to hear any thoughts the community may have.

Tuesday, April 3, 2018

Claims of Bias in Climate-Conflict Research Lack Evidence [Uncut Version]

Marshall and I had an extremely brief Correspondence published in Nature last week. We were reacting to an article "Sampling bias in climate–conflict research" by Adams et al. in Nature Climate Change and an Editorial published in Nature discussing and interpreting some of the statements by Adams et al.  Below is the full 300-word unedited "director's cut" of our original submission, which was edited down by the journal. Two other short comments in the same issue (actually same page!) provided other perspectives.

Butler & Kefford pointed out that prior literature describes climate stress as amplifying pre-existing conflict risk, rather than being the "sole cause" that Adams et al. attribute to previous studies (yes, this is getting confusing). We agree with this interpretation, and the evidence seems to back it up pretty clearly. Our analyses indicate fairly consistent percentage changes in conflict risk induced by climatic shifts, so places with high initial risk get more of a boost from climatic events.

Gleick, Lewandowsky & Kelly argued that the earlier articles were an oversimplification of prior research, and that focusing on locations where conflict occurs is important for helping to trace out how climate induces conflict. I would think that this point would resonate with Adams et al., since some of those authors do actual case studies as part of their research portfolio. It does kind of seem to me that case studies would be the most extreme case of "selection on the outcome" as Adams et al. define it.

Finally, here's what we originally wrote to fit on a single MS Word page:

Claims of Bias in Climate-Conflict Research Lack Evidence 
Solomon Hsiang and Marshall Burke 
A recent article by Adams et al. [1] and accompanying editorial [2] criticize the field of research studying links between climate and conflict as systematically biased, sowing doubt in prior findings [3,4]. But the underlying analysis fails to demonstrate any evidence of biased results. 
Adams et al. claim that because most existing analyses focus on conflict-prone locations, the conclusions of the literature must be biased. This logic is wrong. If it were true, then the field of medicine would be biased because medical researchers spend a disproportionate time studying ill patients rather than studying each of us every day when we are healthy. 
Adams et al.’s error arises because they confuse sampling observations within a given study based on the dependent variable (a major statistical violation) with the observation that there are more studies in locations where the average of a dependent variable, the conflict rate, is higher (not a violation). Nowhere does Adams et al. provide evidence that any prior analysis contained actual statistical errors. 
We are also concerned about the argument advanced by Adams et al. and repeated in the editorial that it is “undesirable” to study risk factors for populations at high risk of conflict because it may lead to them being “stigmatized.” Such logic would imply that study of cancer risk factors for high risk patients should not proceed because success of these studies may lead to the patients being stigmatized.  We believe that following such recommendations will inhibit scientific research and lead to actual systematic biases in the literature. 
Research on linkages between climate and conflict is motivated by the desire to identify causes of human suffering so it may be alleviated. We do not believe that shying away from findings in this field is an effective path towards this goal. 
References 
1. Adams, Ide, Barnett, Detges. Sampling bias in climate–conflict research. Nature Climate Change (2018).
2. Editorial. Don’t jump to conclusions about climate change and civil conflict. Nature, 555, 275-276 (2018).
3. Hsiang, Burke, Miguel. Science (2013) doi:10.1126/science.1235367
4. Burke, Hsiang, Miguel. Ann. Rev. Econ. (2015) doi:10.1146/annurev-economics-080614-115430



Monday, December 4, 2017

New Damage Functions for the Agricultural Sector – Guest Post by Fran Moore

Last week I had a new paper come out with some fantastic co-authors (Delavane Diaz, Tom Hertel, and Uris Baldos) on new damage functions in the agricultural sector. Since this covers a number of topics of interest to G-FEED readers (climate damages, agricultural impacts, SCC etc), I thought I’d dig a bit into what I see as some of the main contributions.
Firstly what we do. Essentially this is an exercise in trying to link a damage function directly to the scientific consensus on climate impacts as represented by findings in the IPCC. We therefore rely on a database of over 1000 results on the yield response to temperature previously published by Challinor et al. and used to support conclusions in the food security chapter of Working Group 2. We do a bit of reanalysis (using a multi-variate regression, adding soybeans, allowing for more geographic heterogeneity in response) to get yield-temperature response functions that can be extrapolated to a global grid to give productivity shocks at 1, 2, and 3 degrees of warming (available for other researchers to use here).
As readers of G-FEED well know though, getting productivity shocks is only half the battle because what we really want are welfare changes. Therefore, we introduce our productivity shocks into the 140-region GTAP CGE model, which has a particularly rich representation of the agriculture sector and international trade. This then gives us welfare changes due to agricultural impacts at different levels of warming, which is what we need for an IAM damage function. Finally, we take our findings all the way through to the SCC by replacing the agricultural damages from FUND with our new estimates. Our headline result is that improving damages just in the agricultural sector leads the overall SCC to more than double.
There’s lots more fun stuff in the paper and supplementary information including a comparison with results from AgMIP, some interesting findings around adaptation effectiveness, and a sensitivity analysis of GTAP parameters. But here I want to highlight what I see as three main contributions.
Firstly I think a major contribution is to the literature on improving the scientific validity of damages underlying IAM results. The importance of improving the empirical basis of damage functions has been pointed out by numerous people. This is something Delavane and I have worked on previously using empirically-estimated growth-rate damages in DICE, and something that Sol and co-authors have done a lot of work on. I think what’s new here is using the existing biophysical impacts literature and tracing its implications all the way through to the SCC. There is an awful lot of scientific work out there on the effects of climate change relevant to the SCC, but wrangling the results from a large number of dispersed studies into a form that can be used to support a global damage function is not straightforward (something Delavane and I discuss in a recent review paper). In this sense the agriculture sector is probably one of the more straightforward to tackle – we definitely relied on previous work from the lead authors of the IPCC chapter and from the AgMIP team. I do think this paper provides one template for implementing the recommendation of the National Academy of Sciences SCC report around damage functions – that they be based on current and peer-reviewed science, that they report and quantify uncertainty wherever possible, and that calibrations are transparent and well-documented.
Secondly, an important contribution of this paper is on quantifying the importance of general-equilibrium effects in determining welfare changes. Since so much climate impacts work estimates local productivity changes, it raises the question of what these can or can’t tell us about welfare under global climate change. We address this question directly by decomposing our welfare changes into three components: the direct productivity effect (essentially just the local productivity change multiplied by crop value), the terms-of-trade effect, and the allocative efficiency effect (caused by interactions with existing market distortions). The last one is generally pretty small in our work, so regional changes in welfare boil down to a combination of local productivity changes and the interaction between a region’s trade position and global price changes. This breakdown for 3 degrees of warming is shown below.

For these productivity shocks, the terms-of-trade (ToT) effect is regionally pretty important. In a number of cases (notably the US, Australia, Argentina, and South Africa) it reverses the sign of the welfare change of the productivity effect. In other words, a number of regions experience yield losses but welfare gains because the increasing value of exports more than compensates for the productivity losses (with the opposite in South Africa). There are a few rules-of-thumb for thinking about how important the ToT effects are likely to be. Firstly, the ToT effects cancel out at the global level, so if you are only interested in aggregate global damages, you don’t have to worry about this effect. By similar logic, the higher the level of regional aggregation, the less important ToT effects are likely to be, since larger regions will likely contain both importers and exporters. Secondly, the importance of ToT effects depends on the magnitude of relative price changes which in turn depends on the productivity shocks. If the effect of climate change is to redistribute production around the globe rather than to increase or decrease aggregate production, then ToT effects will be smaller. We see this in our AgMIP results, where losses in the tropics are offset to a greater degree by gains in temperate regions, leading to smaller price changes and correspondingly smaller ToT effects.
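A stylized version of the decomposition (all numbers invented, and the small allocative-efficiency term dropped) shows how the ToT effect can flip regional signs while cancelling globally:

```python
# Regional welfare change = direct productivity effect + terms-of-trade
# (ToT) effect; ToT gains to exporters are losses to importers.
effects = {
    # region: (productivity effect, ToT effect), arbitrary welfare units
    "US":            (-5.0,  8.0),  # yield losses, but export prices rise
    "Argentina":     (-2.0,  4.0),
    "South Africa":  ( 1.0, -3.0),  # yield gains, but the import bill rises
    "Rest of world": (-6.0, -9.0),
}
welfare = {r: prod + tot for r, (prod, tot) in effects.items()}
global_tot = sum(tot for _, tot in effects.values())
print(welfare)        # US and Argentina flip from losses to gains
print(global_tot)     # 0.0 -- ToT washes out in the global aggregate
```

The larger the relative price changes induced by the productivity shocks, the larger these ToT entries become relative to the direct effects.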
A final point I’d like to highlight is the quantitative importance of our findings for the SCC. We knew from previous work by Delavane that agriculture is important in driving the SCC in FUND. Nevertheless, such a large increase in the total SCC from updating just one sector, in a model that contains 14 different impact sectors, is striking, and raises the question of what would happen with other sectors. Moreover, there are suggestions that other models might also be underestimating agricultural impacts. Neither PAGE nor DICE models the agricultural sector explicitly, but both include agricultural impacts as part of more aggregate damage functions (market impacts in PAGE09 and non-SLR damages in DICE 2013). By coupling damage functions from these models to standardized socio-economic and climate modules, we are able to make an apples-to-apples comparison of our agriculture-sector damages with these damage functions. The graph below didn’t make it into the paper, but I think it is informative:
What you see is that our estimated agricultural damages (from the meta-analysis of yield estimates) are actually larger than all market damages included in PAGE. This indicates either that there are very small negative impacts and large off-setting benefits in other market sectors, or that PAGE is substantially under-estimating market damages. Comparing our results to DICE would imply that agricultural impacts constitute 45% of non-SLR damages. This seems high to me, considering this damage function includes both market and non-market (i.e. mortality) damages. For instance, in their analysis of US damages, Sol and co-authors found agricultural impacts to be dwarfed by costs from increased mortality, and to be smaller than effects on labor productivity and energy demand, suggesting to me that agricultural damages might also be currently under-estimated in DICE.

That’s my (excessively-long) take on the paper. Thank you to the GFEEDers for the opportunity. Do get in touch if you have any questions or comments on the paper – fmoore at ucdavis dot edu.

Tuesday, November 7, 2017

[Pretty urgent] Call for papers: Berkeley Climate Economics Workshop

We are currently soliciting papers from PhD students and post-docs for a climate economics workshop at UC Berkeley. Proposals are due on November 20, 2017 by 8:00am PST.


BACKGROUND AND GOALS


We encourage papers by PhD students and post-docs undertaking research in any area related to the economics of climate change. We encourage papers that use empirical methods, theory or numerical modelling. Papers can be single authored or co-authored. No restrictions apply to co-authors, i.e. coauthors can be senior researchers.

The workshop will explore recent advances in climate economics, with an emphasis on the linkage between empirical and numerical modeling methods. One goal of the workshop is to bring junior and senior researchers together. The final program will combine presentations from invited leading senior researchers and presentations from the most promising junior researchers (PhD students and postdocs).


HOW TO APPLY


Applications should be submitted online here.

Please include either a full working paper or an extended abstract (1-2 pages). PhD students should include a brief letter of recommendation from their advisor that indicates that the submitted abstract/paper will be ready for a full presentation for the workshop.


WORKSHOP INFORMATION


The workshop will be held at UC Berkeley on Fri 1/19 and Sat 1/20, 2018. All travel and lodging costs will be covered for presenters.

The workshop is organized by David Anthoff, Max Auffhammer, and Solomon Hsiang, in collaboration with the Social Science Matrix at UC Berkeley.

Any questions regarding the workshop should be directed to Eva Seto.

Monday, October 30, 2017

Climate impacts research is getting real

A few recent examples of our research getting out of the ivory tower and into the real world. This came up in a recent seminar I teach, so it seemed like others might appreciate the update.

1. The NYT sent some journalists to India to do a story following up on Tamma Carleton's recent work on climate and economic drivers of suicide in India.  This is powerful and humanizing work by the Times making hard-nosed numbers appropriately real and heartbreaking. Some of the striking one-line quotes they obtained from personal interviews:
"I go without food more days than I eat"
"If I were to take out any more loans, the interest would grow, and my whole family would be forced to kill themselves."
2. Last week the US Government Accountability Office released to Congress the report Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure. The report drew heavily from the American Climate Prospectus that we published a few years ago and our recent paper on US costs of warming. The GAO summary concludes:
Information about the potential economic effects of climate change could inform decision makers about significant potential damages in different U.S. sectors or regions. According to several experts and prior GAO work, this information could help federal decision makers identify significant climate risks as an initial step toward managing such risks. This is consistent with, for example, National Academies leading practices, which call for climate change risk management efforts that focus on where immediate attention is needed. The federal government has not undertaken strategic government-wide planning to manage climate risks by using information on the potential economic effects of climate change to identify significant risks and craft appropriate federal responses. By using such information, the federal government could take an initial step in establishing government-wide priorities to manage such risks.
3. Trevor Houser and I recently estimated the potential long-run economic consequences of Hurricane Maria on the economic growth of Puerto Rico and published an op-ed explaining the issue and putting the event in context. Basically, I ran my LICRICE model to compute wind exposure across the island, which totals 123 mph max wind speeds on average across the entire territory. For a territory of this size, especially in the Atlantic, this is unprecedented in my data.


Then we applied these numbers to the results of work with Amir Jina on the macro-economic effects of these storms. If you were to take central estimates from our benchmark model, the picture is a 21% drop in GDP per capita, relative to trend, over the next 15 years. Based on the low pre-Maria growth rate, we estimate that this storm undid roughly 26 years of growth in under 12 hours. As stated in the op-ed:
"Almost nothing on the planet, short of nuclear weaponry, destroys economic value as rapidly as a mega-hurricane."



Thursday, August 10, 2017

Climate change, crop failure, and suicides in India (Guest post by Tamma Carleton)

[This is a guest post by Tamma Carleton, a Doctoral Fellow at the Global Policy Lab and PhD candidate in the Ag and Resource Econ department here at Berkeley]

Last week, I published a paper in PNAS addressing a topic that has captured the attention of media and policymakers around the world for many years – the rising suicide rate in India. As a dedicated student of G-FEED contributors, I focus on the role of climate in this tragic phenomenon. I find that temperatures during India’s main growing season cause substantial increases in the suicide rate, amounting to around 65 additional deaths for each additional degree day of warming experienced across all of India. I show that over 59,000 suicides can be attributed to warming trends across the country since 1980. With a range of different approaches I’ll talk about here, I argue that this effect appears to materialize through an agricultural channel in which crops are damaged, households face economic distress, and some cope by taking their own lives. It’s been a pretty disheartening subject to study for the last couple years, and I’m glad to see the findings out in the world, and now here on G-FEED.

First, a little background on suicides in India. The national suicide rate has approximately doubled since 1980, from around 6 per 100,000 to over 11 per 100,000 (for reference, the rate in the U.S. is about 13 per 100,000). The size of India’s population means this number encompasses many lives – today, about 135,000 are lost to suicide annually. There have been a lot of claims about what contributes to the upward trend, although most focus on increasing risks in agriculture, such as output price volatility, costly hybrid seeds, and crop-damaging climate events like drought and heat (e.g. here, here, and here). While many academic and non-academic sources have discussed the role of the climate, there was no quantitative evidence of a causal effect. I wanted to see if this relationship was in the data, and I wanted to be able to speak to the ongoing public debate by looking at mechanisms, a notoriously thorny aspect of the climate impacts literature.

The first finding in my paper is that growing season temperatures, which also damage crops (as the G-FEED authors have shown us many times over), increase the annual suicide rate, whereas temperatures outside the growing season have no effect on suicides. While the results are much less certain for rainfall (I’m stuck with state-by-year suicide data throughout the analysis), a similar pattern emerges there, with higher growing season rainfall appearing to cause reductions in suicide:


These effects seem pretty large to me. As I said above, a degree day of warming across the country during the growing season causes about 65 additional suicides over the year, equivalent to a 3.5% increase in the suicide rate per standard deviation increase in growing season degree days. 
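For readers less familiar with the degree day measure, here is a minimal sketch of the standard calculation. The 20°C threshold and the daily temperatures are illustrative assumptions for the arithmetic, not values taken from the paper.

```python
# Degree days: the cumulative amount by which daily temperature exceeds a
# threshold over the season. Threshold and data below are made up.
def degree_days(daily_mean_temps, threshold=20.0):
    """Sum of daily exceedances of `threshold` (degrees C) over a season."""
    return sum(max(t - threshold, 0.0) for t in daily_mean_temps)

# One hypothetical growing season of daily mean temperatures (degrees C)
season = [18.0, 21.5, 24.0, 26.5, 19.0, 23.0]
print(degree_days(season))  # 1.5 + 4.0 + 6.5 + 3.0 = 15.0
```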

The fact that the crop response functions are mirrored by the suicide response functions is consistent with an agricultural mechanism. However, this isn’t really enough evidence. Like in other areas of climate impacts research, it’s difficult to find exogenous variation that turns the hypothesized mechanism on or off – I don’t have an experiment where I randomly let some households’ farm income be unaffected by temperature while others’ suffer. Aspects of life that differ between the growing and non-growing seasons could therefore be driving the heterogeneous response of suicide to temperature across seasons. Because this mechanism is so important to policy, I turn to a couple of additional tests.

I first show that there are substantial lagged effects, which would be unlikely to arise if direct, non-economic links between the climate and suicidal behavior were at work (like the psychological channels linking temperature to violence discussed in Sol and Marshall’s work). I also estimate spatial heterogeneity in both the suicide and yield responses to temperature, and find that the locations where suicides are most sensitive to growing season degree days also tend to be the locations where yields are most sensitive: 



The fact that higher temperatures mean more suicides is troubling as we think about warming unfolding in the next few decades. However, I’m an economist, so I should expect populations to reallocate resources and re-optimize behaviors to adapt to a gradually warming climate, right? Sadly, after throwing at the data all of the main adaptation tests that I’m aware of from the literature, I find no evidence of adaptation, looking both across space (e.g. are places with hotter average climates less sensitive?) and across time (e.g. has the response function flattened over time? What about long differences?):


Keeping in mind that there is no evidence of adaptation, my last calculation is to estimate the total number of deaths that can be attributed to warming trends observed since 1980. Following the method in David, Wolfram, and Justin Costa-Roberts’ 2011 article in Science, I find that by the end of my sample in 2013, over 4,000 suicides per year across the country can be attributed to warming. Integrating from 1980 to today and across all states in India, I estimate that over 59,000 deaths in total can be attributed to warming. Because warming trends and population density both vary across space, these deaths are distributed very unevenly:
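The attribution logic boils down to: fit a trend to growing season degree days, then scale the trend-implied warming by the estimated marginal effect. Here is my own toy illustration of that arithmetic; the degree day series and the 65-deaths-per-degree-day figure are stand-ins for exposition, not the paper's actual data.

```python
import numpy as np

def attributable_deaths(years, degree_days, deaths_per_dd):
    """Deaths in the final year attributable to the fitted warming trend."""
    slope, _ = np.polyfit(years, degree_days, 1)
    warming = slope * (years[-1] - years[0])  # trend-implied change in DD
    return warming * deaths_per_dd

years = np.arange(1980, 2014)
dd = 100 + 0.5 * (years - 1980)  # toy series trending up 0.5 DD per year
print(attributable_deaths(years, dd, deaths_per_dd=65))  # about 1072.5
```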



While the tools I use are not innovative by any means (thanks to the actual authors of this blog for developing most of them), I think this paper is valuable to our literature for a couple reasons. First, while we talk a lot about integrating our empirical estimates of the mortality effects of climate change into policy-relevant metrics like the SCC, this is a particular type of death I think we should be incredibly concerned about. Suicide indicates extreme distress and hardship, and since we care about welfare, these deaths mean something distinct from the majority of the deaths driving the mortality rate responses that we often study. 

Second, mechanisms really matter. The media response to my paper has been shockingly strong, and everyone wants to talk about the mechanism and what it means for preventative policy. While I have by no means nailed the channel down perfectly here, a focus on testing the agricultural mechanism has made my findings much more tangible for people battling the suicide epidemic on the ground in India. I look forward to trying to find ways to improve the tools at our disposal for identifying mechanisms in this context and in others.

Finally, as climate change progresses, I think we could learn a lot from applying David, Wolfram, and Justin’s method more broadly. While attribution exercises have their own issues (e.g. we can’t, of course, attribute with certainty the entire temperature trend in any location to anthropogenic warming), I think it’s much easier for many people to engage with damages being felt today, as opposed to those likely to play out in the future. 

Thursday, July 6, 2017

Yesterday's maximum temperature is... today's maximum temperature? (Guest post by Patrick Baylis)

[For you climate-data-wrangling nerds out there, today we bring you a guest post by Patrick Baylis, current Stanford postdoc and soon-to-be assistant prof at UBC this fall. ]
You may not know this, but Kahlil Gibran was actually thinking about weather data when he wrote that yesterday is but today’s memory, and tomorrow is today’s dream. (Okay, not really.)
Bad literary references aside, readers of this blog know that climate economists project the impacts of climate change by observing the relationships between historical weather realizations and economic outcomes. Fellow former ARE PhD Kendon Bell alerted me to an idiosyncrasy in one of the weather datasets we frequently use in our analyses. Since many of us (myself included) rely on high-quality daily weather data to do our work, I investigated. This post is a fairly deep dive into what I learned, so if you happen to not be interested in the minutiae of daily weather data, consider yourself warned.
The PRISM AN81-d dataset provides daily minimum and maximum temperatures, precipitation, and minimum and maximum vapor pressure deficit for the continental United States from 1981 to present. It is created by the PRISM Climate Group at Oregon State, and it is really nice. Why? It’s a gridded data product: it is composed of hundreds of thousands of 4km by 4km grid cells, where the values for each cell are determined by a complex interpolation method from weather station data (GHCN-D) that accounts for topographic factors. Importantly, it’s consistent: there are no discontinuous jumps in the data (see figure below) and it’s a balanced panel: the observations are never missing.
PRISM 30 year normals
Source: PRISM Climate Group
These benefits are well-understood, and as a result many researchers use the PRISM dataset for their statistical models. However, there is a particularity of these data that may be important to researchers making use of the daily variation in the data: most measurements of temperature maximums, and some measurements of temperature minimums, actually refer to the maximum or minimum temperature of the day before the date listed.
To understand this, you have to understand that daily climate observations are actually summaries of many within-day observations. The reported maximum and minimum temperature are just the maximum and minimum temperature observations within a given period, like a day. The tricky part is that stations define a “day” as “the 24 hours since I previously recorded the daily summary”, but not all stations record their summaries at the same time. While most U.S. stations record in the morning (i.e., “morning observers”), a hefty proportion of stations are either afternoon or evening observers. PRISM aggregates data from these daily summaries, but in order to ensure consistency tries to only incorporate morning observers. This leads to the definition of a “PRISM day”. The PRISM documentation defines a “PRISM day” as:
Station data used in AN81d are screened for adherence to a “PRISM day” criterion. A PRISM day is defined as 1200 UTC-1200 UTC (e.g., 7 AM-7AM EST), which is the same as [the National Weather Service’s hydrologic day]. Once-per day observation times must fall within +/- 4 hours of the PRISM day to be included in the AN81d tmax and tmin datasets. Stations without reported observation times in the NCEI GHCN-D database are currently assumed to adhere to the PRISM day criterion. The dataset uses a day-ending naming convention, e.g., a day ending at 1200 UTC on 1 January is labeled 1 January.
This definition means that generally only morning observers should be included in the data. The last sentence is important: because a day runs from 4am-4am PST (or 7am-7am EST) and because days are labeled using the endpoint of that time period, most of the observations from which the daily measures are constructed for a given date are taken from the day prior. A diagram may be helpful here:
Diagram
The above is a plot of temperature over about two days, representing a possible set of within-day monitor data. Let’s say that this station takes a morning reading at 7am PST (10am EST), meaning that this station would be included in the PRISM dataset. The top x-axis is the actual date, while the bottom x axis shows which observations are used under the PRISM day definition. The red lines are actual midnights, the dark green dotted line is the PRISM day definition cutoff and the orange (blue) dots in the diagram are the observations that represent the true maximums (minimums) of that calendar day. Because of the definition of a PRISM day, the maximum temperatures (“tmax”s from here on out) given for Tuesday and Wednesday (in PRISM) are actually observations recorded on Monday and Tuesday, respectively. On the other hand, the minimum temperature (“tmin”) given for Tuesday (in PRISM) is actually drawn from Tuesday, but the tmin given for Wednesday (in PRISM) is also from Tuesday.
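The day-ending naming convention itself can be captured in a tiny function. This is my own restatement of the rule above, ignoring the ±4-hour screening for simplicity:

```python
from datetime import datetime, timedelta, date

def prism_day(ts_utc: datetime) -> date:
    """PRISM-day label for a UTC timestamp: periods run 1200 UTC to 1200 UTC
    and are labeled by their endpoint, so afternoon readings roll forward."""
    if ts_utc.hour >= 12:
        return (ts_utc + timedelta(days=1)).date()
    return ts_utc.date()

# A Monday 3 PM EST reading (2000 UTC) gets labeled as Tuesday...
print(prism_day(datetime(2017, 7, 3, 20, 0)))  # 2017-07-04
# ...the same label as a Tuesday 4 AM EST reading (0900 UTC).
print(prism_day(datetime(2017, 7, 4, 9, 0)))   # 2017-07-04
```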
To see this visually, I pulled the GHCN-D data and, for the stations that report observation time (66% in the United States), plotted a histogram of each station’s average observation time in UTC for the continental United States, colored by whether or not the station would be included in PRISM according to the guidelines given above.
Histogram of observation time
This confirms what I asserted above: most, but not all, GHCN-D stations are morning observers, and the PRISM day definition does a good job capturing the bulk of that distribution. On average, stations fulfilling the PRISM criterion report at 7:25am or so.
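The ±4-hour screen itself is simple to express in code; here's a sketch that also handles observation times that wrap around midnight on the 24-hour clock:

```python
def meets_prism_criterion(obs_hour_utc: float) -> bool:
    """True if a once-per-day observation time falls within +/- 4 hours
    of 1200 UTC, the PRISM day boundary."""
    diff = abs(obs_hour_utc - 12.0)
    return min(diff, 24.0 - diff) <= 4.0  # clock distance, wraps at midnight

print(meets_prism_criterion(12.0 + 25 / 60))  # ~1225 UTC: True
print(meets_prism_criterion(0.0))             # midnight UTC: False
```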
The next step is to look empirically at how many minimum and maximum temperature observations are likely to fall before or after the observation time cutoff. To do that, we need some raw weather station data, which I pulled from NOAA’s Quality Controlled Local Climatological Data (QCLCD). To get a sense for which extreme temperatures would be reported as occurring on the actual day they occurred, I assumed that all stations would report at 7:25am, the average observation time in the PRISM dataset. The next two figures show histograms of observed maximum and minimum temperatures.
Histogram of observed maximum temperatures
Histogram of observed minimum temperatures
I’ve colored the histograms so that all extremes (tmins and tmaxes) after 7:25am are red, indicating that extremes after that time will be reported as occurring during the following day. As expected, the vast majority of tmaxes (>94%) occur after 7:25am. But surprisingly, a good portion (32%) of tmins do as well. If you’re concerned about the large number of minimum temperature observations around midnight, remember that a midnight-to-midnight summary is likely to have this sort of “bump”, since days with a warmer-than-usual morning and a colder-than-usual night will have their lowest temperature at the end of the calendar day.
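The check above boils down to asking, for each station-day of sub-daily readings, whether the day's extreme falls after the assumed 7:25am observation time. A stripped-down sketch with toy data:

```python
def extreme_after_cutoff(hours, temps, cutoff=7.0 + 25 / 60, find_max=True):
    """True if the day's max (or min) temperature occurs after `cutoff`,
    where `hours` are fractional local hours for each reading."""
    pick = max if find_max else min
    extreme = pick(temps)
    return hours[temps.index(extreme)] > cutoff

hours = list(range(24))
temps = [10 - 0.5 * abs(h - 15) for h in hours]  # toy diurnal cycle, peak at 3 PM
print(extreme_after_cutoff(hours, temps))                  # True: max at 15:00
print(extreme_after_cutoff(hours, temps, find_max=False))  # False: min at 0:00
```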
As a more direct check, I compared PRISM leads of tmin and tmax to daily aggregates (that I computed using a local calendar date definition) of the raw QCLCD data described above. The table below shows the pairwise correlations between the PRISM day-of observations, leads (next day), and the QCLCD data for both maximum and minimum daily temperature.
Measure            PRISM day-of    PRISM lead
tmin (calendar)    0.962           0.978
tmax (calendar)    0.934           0.992
As you can see, the PRISM leads, i.e., observations from the next day, correlated more strongly with my aggregated data. The difference was substantial for tmax, as expected. The result for tmin is surprising: it also correlates more strongly with the PRISM tmin lead. I’m not quite sure what to make of this - it may be that the stations that fail to report their observation times, combined with the occasions when the minimum temperature occurs after the station observation time, make the lead of tmin correlate more closely with the local daily summaries I’ve computed. But I’d love to hear other explanations.
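For concreteness, here is roughly how that comparison looks in code; the column names and toy data are hypothetical. If PRISM's labeled value for each date is really the previous day's local value, the lead lines up perfectly:

```python
import pandas as pd

def dayof_vs_lead_corr(df, local_col="tmax_local", prism_col="tmax_prism"):
    """Correlation of locally aggregated values with same-day PRISM values
    and with PRISM leads (the next day's PRISM value)."""
    lead = df[prism_col].shift(-1)
    return df[local_col].corr(df[prism_col]), df[local_col].corr(lead)

local = [3.0, 1.0, 4.0, 1.0, 5.0]
df = pd.DataFrame({"tmax_local": local,
                   "tmax_prism": [0.0] + local[:-1]})  # PRISM label lags one day
same_day, lead = dayof_vs_lead_corr(df)
print(round(lead, 3))  # 1.0 in this toy example
```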
So who should be concerned about this? Mostly, researchers with econometric models that use daily variation in temperature on the right-hand side, and fairly high frequency variables on the left-hand side. The PRISM group isn’t doing anything wrong, and I’m sure that folks who specialize in working with weather datasets are very familiar with this particular limitation. Their definition matches a widely used definition of how to appropriately summarize daily weather observations, and presumably they’ve thought carefully about the consequences of this definition and of including more data from stations that don’t report their observation times. But researchers who, like me, are not specialists in using meteorological data and who, like me, use PRISM to examine daily relationships between weather and economic outcomes, should tread carefully.
As is, using the PRISM daily tmax data amounts to estimating a model that includes lagged rather than day-of temperature. A quick fix, particularly for models that include only maximum temperature, is to simply use the leads, or the observed weather for the next day, since it will almost always reflect the maximum temperature for the day of interest. A less-quick fix is to make use of the whole distribution using the raw monitor data, but then you would lose the nice gridded quality of the PRISM data. Models with average or minimum temperature should, at the very least, be tested for robustness with the lead values. Let’s all do Gibran proud.
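In pandas terms, the quick fix is a one-line shift before merging PRISM onto your outcome data (the column names here are hypothetical):

```python
import pandas as pd

prism = pd.DataFrame(
    {"date": pd.date_range("2017-07-03", periods=3, freq="D"),
     "tmax": [30.0, 33.0, 31.0]})
# Use the lead: tomorrow's labeled tmax is (almost always) today's actual tmax
prism["tmax_dayof"] = prism["tmax"].shift(-1)
print(prism["tmax_dayof"].tolist())  # [33.0, 31.0, nan]
```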