
Monday, September 12, 2016

Everything we know about the effect of climate on humanity (or "How to tell your parents what you've been doing for the last decade")

Almost exactly ten years ago, Nick Stern released his famous global analysis of the economics of climate change.  At the time, I was in my first year of grad school trying to figure out what to research, and I remember fiercely debating all aspects of the study with Ram and Jesse in the way only fresh grad students can.  Almost all the public discussion of the analysis revolved around whether Stern had chosen the correct discount rate, but that philosophical question didn't seem terribly tractable to us.  We decided instead that the key question research should focus on was the nature of economic damages from climate change, since that was equally poorly known but nobody seemed to really be paying attention to it.  I remember studying this page of the report for days (literally, days) and being baffled that nobody else seemed concerned that we knew almost nothing about the actual economic impacts of climate, even though this core relationship drove the entire optimal climate management enterprise:

[Figure: the page from the Stern Review discussed above]

So we decided to follow in the footsteps of our Sensei Schlenker and try to figure out the effects of climate on different aspects of the global economy using real-world data and rigorous econometrics. Together with the other G-FEEDers and a bunch of other folks from around the world, we set out to figure out what the climate has been doing and will do to people around the planet. (Regular readers know this.)

On Friday, Tamma Carleton and I published a paper in Science trying to bring this last decade of work all together in one place. We have learned a lot, both methodologically and substantively. There is still a massive amount of work to do, but it seemed like a good idea to try to consolidate and synthesize what we've learned at least once a decade…

Here's one of the figures showing some of the things we've learned from data across different contexts; it's kind of like a 2.0 version of the page from Stern above:


[Figure: estimated climate responses across sectors, from the paper]

Bringing all of this material together into one place led to a few insights. First, there are pretty clear patterns across sectors where adaptation appears either to be very successful (e.g. heat-related mortality or direct losses from cyclones) or surprisingly absent (e.g. temperature losses for maize, GDP-productivity losses to heat). In the latter case, there seem to be “adaptation gaps” that are persistent across time and locations, something that we might not expect if adaptation were costless in the long run (as many people seem to think). We can't say exactly what is causing these adaptation gaps to persist. For example, it might be that all actors are behaving optimally and this is simply the best we can do with current technology and institutions, or alternatively there might be market failures (such as credit constraints) or other disincentives (like subsidized crop insurance) that prevent individuals from adapting. Figuring out (i) whether current adaptation is efficient, or (ii) if it isn't, what's wrong so we can fix it, is a multi-trillion-dollar question and the area where we argue researchers should focus attention.

Eliminating adaptation gaps will have a big payoff today and in the future. To show this, we compute the total economic burden borne by societies today because they are not perfectly adapted today. Even before one accounts for climate change, our baseline climate appears to be a major factor determining human wellbeing around the world.

For example, we compute that on average the current climate

- depresses US maize yields by 48%
- increases US mortality rates 11%
- increases US energy use by 29%
- increases US sexual assault rates 6%
- increases the incidence of civil conflict 29% in Sub-Saharan Africa
- slows global economic growth rates 0.25 percentage points annually

These are all computed by estimating a counterfactual in which climate conditions at each location are set to the most ideal values historically observed at that location.
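To make the flavor of that calculation concrete, here is a tiny, made-up R sketch (the bins, response function, and exposure numbers below are hypothetical placeholders, not values from the paper): it compares an outcome under the observed temperature distribution to a counterfactual in which every day sits at the most beneficial bin observed.

# Hypothetical example: burden of the current climate relative to an "ideal" climate
bins     <- 0:45                             # daily temperature bins (deg C)
f        <- -0.006 * pmax(bins - 29, 0)      # made-up response: log-yield impact per day at each bin
exposure <- dnorm(bins, mean = 22, sd = 6)
exposure <- 180 * exposure / sum(exposure)   # made-up days per bin over a 180-day season

observed <- sum(exposure * f)   # seasonal impact under the observed distribution
ideal    <- 180 * max(f)        # impact if every day sat at the most beneficial bin
burden   <- ideal - observed    # log-yield cost of the current climate
burden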

Our first reaction to some of these numbers was that they were too big. But then we reflected on them more and realized maybe they are pretty reasonable. If we could grow all of US maize in greenhouses where we control the temperature, would our yields really be 48% higher? That's not actually too crazy if you think about cases where we have insulated living organisms much more from their environment and they do a whole lot better because of it. For example, life expectancy has more than doubled in the last few centuries as we started to protect ourselves from all the little health insults that used to plague people. For similar reasons, if you just look at pet cats in the US, indoor cats live about twice as long as more exposed outdoor cats on average (well, at least according to Petco and our vet). Similarly, a lot of little climate insults, each seemingly small, can add up to substantial costs, and apparently they already do.

We then compared these numbers (on the current effect of the current climate) to (i) the effects of climate change that has occurred already (a la David and Wolfram) and (ii) projected effects of climate change.

When it comes to the effects of climate change to date, these numbers are mostly small. The one exception is that warming should already have increased the integrated incidence of civil conflict since 1980 by >11%.

When it comes to future climate change, which we haven't experienced yet, the numbers are generally large and similar-ish in magnitude to the current effect of the current climate. For example, we calculate that future climate change should slow global growth by an additional 0.28 percentage points per year, which is pretty close in magnitude to the 0.25 percentage points per year by which temperatures are already slowing things down. For energy demand in the US, current temperatures are actually doing more work today (+29%) than the additional effect of future warming (+11%), whereas for war in Sub-Saharan Africa, current temperatures are doing less (+29%) than near-term warming (+54%).

All these numbers are organized in a big table in the paper, since I always love a big table. There's also a bit of history and a summary of methods in there, for those of you who, like Marshall, don't want to bother slogging through the sister article detailing all the methods.

Monday, September 21, 2015

El Niño is coming, make this time different

Kyle Meng and I published an op-ed in the Guardian today trying to raise awareness of the potential socioeconomic impacts of, and policy responses to, the emerging El Niño.  Forecasts this year are extraordinary.  In particular, for folks who aren't climate wonks and who live in temperate locations, it is challenging to visualize the scale and scope of what might come down the pipeline this year in the tropics and subtropics. Read the op-ed here.

Countries where the majority of the population experience hotter conditions under El Niño are shown in red. Countries that get cooler under El Niño are shown in blue (reproduced from Hsiang and Meng, AER 2015)

Saturday, January 10, 2015

Searching for critical thresholds in temperature effects: some R code



If Google Scholar is any guide, my 2009 paper with Wolfram Schlenker on the nonlinear effects of temperature on crop outcomes has had more impact than anything else I've been involved with.

A funny thing about that paper: many reference it, and often claim that they are using techniques that follow that paper.  But in the end, as far as I can tell, very few seem to have actually read through the finer details of the paper or tried to implement the techniques in other settings.  Granted, people have done similar things that seem inspired by the paper, but not quite the same.  Either our explication was too ambiguous or people don't have the patience to fully carry out the technique, so they take shortcuts.  Here I'm going to try to make it easier for folks to do the real thing.

So, how does one go about estimating the relationship plotted in the graph above?

Here's the essential idea:  averaging temperatures over time or space can dilute or obscure the effect of extremes.  Still, we need to aggregate, because outcomes are not measured continuously over time and space.  In agriculture, we have annual yields at the county or larger geographic level.  So, there are two essential pieces: (1) estimating the full distribution of temperature exposure (of crops, people, or whatever) and (2) fitting a curve through the whole distribution.

The first step involves constructing the distribution of weather. This was most of the hard work in that paper, but it has since become easier, in part because finely gridded daily weather is available (see PRISM) and in part because Wolfram has made some STATA code available.  Here I'm going to supplement Wolfram's code with a little bit of R code.  Maybe the other G-FEEDers can chime in and explain how to do this stuff more easily.

First step:  find some daily, gridded weather data.  The finer the scale, the better.  But keep in mind that data errors can cause serious attenuation bias.  For the lower 48 since 1981, the PRISM data above are very good.  Otherwise, you might have to do your own interpolation between weather stations.  If you do this, you'll want to take some care in dealing with moving weather stations, elevation, and microclimatic variations.  Even better, cross-validate interpolation techniques by leaving out one weather station at a time and seeing how well the method predicts it. Knowing the size of the measurement error can also help in correcting the resulting bias.  Almost no one does this, probably because it's very time consuming... Again, be careful, as measurement error in weather data creates very serious problems (see here and here).
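For what it's worth, here is a minimal sketch of that leave-one-station-out exercise, using made-up station data and a simple inverse-distance-weighting interpolator as stand-ins for whatever data and method you actually use:

# Leave-one-station-out cross-validation of an interpolation method (toy example)
set.seed(1)
stations <- data.frame(
  lon  = runif(50, -100, -90),
  lat  = runif(50, 35, 45),
  temp = rnorm(50, mean = 20, sd = 3)   # one day's observations, hypothetical
)

# Simple inverse-distance-weighted prediction at a target location
idw <- function(target, obs, p = 2) {
  d <- sqrt((obs$lon - target$lon)^2 + (obs$lat - target$lat)^2)
  w <- 1 / d^p
  sum(w * obs$temp) / sum(w)
}

# Predict each station from all the others and summarize the error
pred <- sapply(1:nrow(stations), function(i) idw(stations[i, ], stations[-i, ]))
rmse <- sqrt(mean((pred - stations$temp)^2))
rmse   # a rough estimate of the interpolation (measurement) error

The same loop works for kriging, splines, or whatever interpolator you prefer; only the idw() call changes.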

Second step:  estimate the distribution of temperatures over time and space from the gridded daily weather.  There are a few ways of doing this.  We've typically fit a sine curve between the minimum and maximum temperatures to approximate the time at each degree in each day in each grid, and then aggregate over grids in a county and over all days in the growing season.  Here are a couple R functions to help you do this:

# This function estimates time (in days) when temperature is
# between t0 and t1 using sine curve interpolation.  tMin and
# tMax are vectors of day minimum and maximum temperatures over
# range of interest.  The sum of time in the interval is returned.
# noGrids is number of grids in area aggregated, each of which 
# should have exactly the same number of days in tMin and tMax
 
days.in.range <- function( t0, t1 , tMin, tMax, noGrids )  {
  n <-  length(tMin)
  t0 <-  rep(t0, n)
  t1 <-  rep(t1, n)
  t0[t0 < tMin] <-  tMin[t0 < tMin]
  t1[t1 > tMax] <-  tMax[t1 > tMax]
  u <- function(z, ind) (z[ind] - tMin[ind])/(tMax[ind] - tMin[ind])  
  outside <-  t0 > tMax | t1 < tMin
  inside <-  !outside
  time.at.range <- ( 2/pi )*( asin(u(t1,inside)) - asin(u(t0,inside)) ) 
  return( sum(time.at.range)/noGrids ) 
}

# This function calculates all 1-degree temperature intervals for 
# a given row (fips-year combination).  Note that nested objects
# must be defined in the outer environment.
aFipsYear <- function(z){
  afips    = Trows$fips[z]
  ayear    = Trows$year[z]
  tempDat  = w[ w$fips == afips & w$year==ayear, ]
  Tvect = c()
  for ( k in 1:nT ) Tvect[k] = days.in.range(
              t0   = T[k]-0.5, 
              t1   = T[k]+0.5, 
              tMin = tempDat$tMin, 
              tMax = tempDat$tMax,
              noGrids = length( unique(tempDat$gridNumber) )
              )
  Tvect
}

The first function estimates time in a temperature interval using the sine curve method.  The second function calls the first, looping through a bunch of 1-degree temperature intervals defined outside the function.  A nice thing about R is that you can be sloppy and write functions like this that use objects defined outside the function's environment. A nice thing about writing the function this way is that it's amenable to easy parallel processing (look up the 'foreach' and 'doParallel' packages; there's a sketch below).

Here are the objects defined outside the second function:

w       # weather data that includes a "fips" county ID, "gridNumber", "tMin" and "tMax".
        #   rows of w span all days, fips, years and grids being aggregated
 
tempDat #  the particular fips/year slice of w being aggregated (built inside the function)
Trows   # = expand.grid( fips = fips.index, year = year.index ); rows span the aggregated data set
T       # a vector of integer temperatures.  I'm approximating the distribution with 
        #   the time in each degree in the index T
nT      # = length(T), the number of 1-degree temperature bins

To build a dataset, call the second function above for each fips-year in Trows and rbind the results.
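Here is a minimal sketch of that assembly step, assuming the objects above are in place (the names binMat and TemperatureData are mine, chosen so that the bin exposures start in column 3, as in the regression example below):

# Sequential version: one row of temperature-bin exposures per fips-year
binMat <- do.call(rbind, lapply(1:nrow(Trows), aFipsYear))
colnames(binMat) <- paste0("bin", T)
TemperatureData  <- cbind(Trows, binMat)   # fips, year, then bin0 ... bin45

# Parallel version with foreach/doParallel (on Windows you may need to pass
# .export = c("w", "Trows", "T", "nT", "days.in.range") to foreach)
library(doParallel)
registerDoParallel(cores = 4)
binMat <- foreach(z = 1:nrow(Trows), .combine = rbind) %dopar% aFipsYear(z)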

Third step:  to estimate a smooth function through the whole distribution of temperatures, you simply need to choose your functional form, linearize it, and then cross-multiply the design matrix with the temperature distribution.  For example, suppose you want to fit a cubic polynomial and your temperature bins run from 0 to 45 C.  The design matrix would be:

D = [  0     0      0
       1     1      1
       2     4      8
      ...   ...    ...
      45  2025  91125 ]

These days, you might want to do something fancier than a basic polynomial, say a spline. It's up to you.  I really like restricted cubic splines, although they can oversmooth around sharp kinks, which we may have in this case. We have found piecewise linear works best for predicting out of sample (hence all of our references to degree days).  If you want something really flexible, just make D an identity matrix, which effectively gives you a dummy variable for each temperature bin (the step function in the figure).  Whatever you choose, you will have a (T x K) design matrix, with K being the number of parameters in your functional form and T = 46 (in this case) temperature bins.
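For the cubic polynomial shown above, constructing D is a one-liner; the identity-matrix (dummy-per-bin) version is included for comparison:

Tbins  <- 0:45
D_poly <- cbind(Tbins, Tbins^2, Tbins^3)   # 46 x 3 cubic polynomial basis
D_step <- diag(length(Tbins))              # 46 x 46: one dummy per 1-degree bin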

To get your covariates for your regression, simply cross-multiply D by your frequency distribution.  Here's a simple example with restricted cubic splines:


library(Hmisc)
# Restricted cubic spline basis over the 0-45 C bins; inclx = TRUE keeps the
# linear term of the basis along with the nonlinear terms
DMat <- rcspline.eval(0:45, inclx = TRUE)

# Columns 3:48 of TemperatureData hold the time in each of the 46 bins
XMat <- as.matrix(TemperatureData[, 3:48]) %*% DMat
fit  <- lm(yield ~ XMat, data = regData)
summary(fit)
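If you want to recover and plot the estimated temperature response itself (the analogue of the curve in the figure above), you can cross-multiply the same basis by the fitted coefficients; a quick sketch, ignoring the intercept and any other covariates:

response <- DMat %*% coef(fit)[-1]   # drop the intercept; effect at each 1-degree bin
plot(0:45, response, type = "l",
     xlab = "Temperature (C)", ylab = "Estimated effect on yield")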

Note that regData has the crop outcomes.  Also note that we generally include other covariates, like total precipitation during the season, county fixed effects, time trends, etc.  All of that is pretty standard.  I'm leaving it out to focus on the nonlinear temperature bit.

Anyway, I think this is a cool and fairly simple technique, even if some of the data management can be cumbersome.  I hope more people use it instead of just fitting to shares of days with each maximum or mean temperature, which is what most people following our work tend to do.  

In the end, all of this detail probably doesn't make a huge difference for predictions.  But it can make estimates more precise and confidence intervals tighter.  And I think that precision also helps in pinning down mechanisms.  For example, I think this precision helped us figure out that VPD and associated drought were a key factor underlying observed effects of extreme heat.

Wednesday, May 15, 2013

Consensus Statements on Sea-Level Rise


In my mailbox from the AGU:
After four days of scientific presentations about the state of knowledge on sea-level rise, the participants reached agreement on a number of important key statements. These statements are the reflection of the participants of the conference and not official positions from the sponsoring societies.
 
- Earth scientists agree that the global sea level is rising at an accelerated rate overall in response to climate change.
- Scientists have a professional responsibility to inform government, the public, and the private sector about the impacts of rising sea levels and extreme events, and the risks they pose.
- The geological record indicates that the current rates of sea-level rise in many regions are unprecedented relative to rates of the last several thousand years.
- Global sea-level rise has changed rapidly in the past and scientific projections show it will continue to rise over the course of this century, altering our coasts.
- Extreme events and their associated impacts will be more damaging and pose higher risks in the immediate future than sea-level rise.
- Increasing human activity, such as land use change and water management practices, adds stress to already fragile ecosystems and can affect coasts just as much as sea-level rise.
- Sea-level rise will exacerbate the impacts of extreme events, such as hurricanes and storms, over the long-term.
- Extreme events have contributed to loss of life, billions of dollars in damage to infrastructure, massive taxpayer funding for recovery, and degradation of our ecosystems.
- In order to secure a sustainable future, society must learn to anticipate, live with, and adapt to the dynamics of a rapidly evolving coastal system.
- Over time feasible choices may change as rising sea level limits certain options. Weighing the best decisions will require the sharing of scientific information, the coordination of policies and actions, and adaptive management approaches.
- Well-informed policy decisions are imperative and should be based upon the best available science, recognizing the need for involvement of key stakeholders and relevant experts.
- As we work to adapt to accelerating sea level rise, deep reductions in emissions remain one of the best ways to limit the magnitude and pace of rising seas and cut the costs of adaptation.