Wednesday, April 16, 2014

Is it foolish to act locally on global problems?

I'm late and out-of-order on my blog posting activities, in part because I've been blogging a little on energy in a different forum (a new schtick for me).  Anyhow, that experience has me thinking more broadly about climate change and policy.  Since I don't have any great new statistics to report, I'm going to change gears and scratch an uncomfortable itch on our climate problem.

Economists see this as a global problem and typically argue that solving it requires global action.  The whole developed world can go carbon neutral, but this means little if China and India don't follow suit.  Yet we have little apparent ability to act on even a national scale, let alone a global one.

Instead, in fits and starts, we're seeing states like California and Hawai'i, and the EU, take action, seemingly despite themselves.  Yesterday I heard Barbara Boxer talk about California's cap and trade program and strict new fuel economy standards.  Hawai'i has the most ambitious clean energy goals in the country, goals we are nevertheless likely to exceed, perhaps by a wide margin.  But what California (and certainly Hawai'i) does or doesn't do to reduce greenhouse gas emissions is trivial on a global scale.

These jurisdictions enact policies to reduce greenhouse gas emissions, some of which may be costly, even though local actions will have little bearing on our global problem.  Worse, by acting locally, some states might put their regions at a competitive disadvantage economically.  So, acting locally appears to be all cost and no benefit.

How foolish is it to act locally on what is truly a global problem?  Quite, some respected economists might say.

I'm a little less cynical, and increasingly believe that local actors might make a difference, and possibly even thrive, by taking unilateral action.  Here are five reasons why:

(1) Acting locally can demonstrate proof of concept.  Curbing greenhouse gas emissions really shouldn't be that costly.  But while IPCC and CBO reports are nice, showing it can really be done without killing an economy is a lot more compelling.  A state, even a tiny one like Hawaii, can do this, which will lower the costs for others to follow suit.

(2) Local successes can be leveraged to provide moral, social and political pressure to invoke action on a larger scale. Prices can motivate behavior.  But positive examples can too.

(3) Early adopters may even gain economically in the short run.  Even if we don't have national or global policies today, we may expect them in the future.  New technologies and businesses need to be developed, and environmental entrepreneurs and startups may gravitate toward places on the cutting edge of going green.  This kind of thing is happening here in Hawaii.  It's small in scale, but could grow.  And these companies, and the economies where they sit, could then be positioned to boom when larger-scale policies are put in place.

(4) Spillover effects from technological development could be tremendous.  Local economies may not gain directly as ideas developed locally are replicated.  But they do gain indirectly by reduced greenhouse gas emissions.  Green technology is not necessarily the intellectual property we want to protect.

(5) For a tourist economy like Hawaii, green branding might have an advertising benefit.



Saturday, April 12, 2014

Daily weather data: original vs knock-off

Any study that focuses on nonlinear temperature effects requires precise estimates of the full temperature distribution.  Unfortunately, most gridded weather data sets only provide monthly aggregates (e.g., CRU, the University of Delaware, and, until recently, PRISM).  Monthly averages can hide extremes, both hot and cold: monthly means don't capture how often, and by how much, temperatures cross a given threshold.
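To make that point concrete, here is a minimal sketch in Python with made-up numbers: two 30-day series with exactly the same monthly mean, but very different exposure above a 29C threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
mild = np.full(30, 25.0)                  # constant 25C every day
swings = 25.0 + rng.normal(0.0, 6.0, 30)  # large day-to-day swings
swings += 25.0 - swings.mean()            # force the monthly means to match

def degree_days_above(t, threshold=29.0):
    """Sum of daily degrees above the threshold (degree days)."""
    return np.maximum(t - threshold, 0.0).sum()

print(mild.mean(), swings.mean())         # both 25.0: identical monthly means
print(degree_days_above(mild))            # 0.0: never crosses the threshold
print(degree_days_above(swings))          # positive: extremes the mean hides
```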

At the time Michael Roberts and I wrote our article on nonlinear temperature effects in agriculture, the PRISM climate group only made its monthly aggregates publicly available for download, not the underlying daily data.  In the end, we reverse-engineered the PRISM interpolation algorithm: we regressed monthly averages at each PRISM grid cell on monthly averages at the (7 or 10, depending on the version) closest publicly available weather stations.  Once we had the regression estimates linking monthly PRISM averages to the stations, we bravely applied them to the daily weather data at those stations to get daily data at the PRISM cells (for more detail, see the paper).  Cross-validation suggested we weren't that far off, but then again, we could only run cross-validation tests in areas that have weather stations.
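In case the two-step logic is hard to picture, here is a stylized sketch in Python.  Everything below is synthetic and the variable names are mine, not the paper's; the point is only the mechanics: fit the interpolation weights on monthly data, then apply them to daily station data.

```python
import numpy as np

# Synthetic stand-ins for one PRISM grid cell and its k closest stations:
#   monthly_grid:     (n_months,)   monthly means at the grid cell
#   monthly_stations: (n_months, k) monthly means at the stations
#   daily_stations:   (n_days,   k) daily values at the same stations
rng = np.random.default_rng(1)
k, n_months, n_days = 7, 120, 3650
monthly_stations = 15 + rng.normal(0, 5, (n_months, k))
true_w = rng.dirichlet(np.ones(k))        # unknown interpolation weights
monthly_grid = monthly_stations @ true_w + rng.normal(0, 0.1, n_months)
daily_stations = 15 + rng.normal(0, 8, (n_days, k))

# Step 1: regress monthly grid-cell averages on monthly station averages.
X = np.column_stack([np.ones(n_months), monthly_stations])
beta, *_ = np.linalg.lstsq(X, monthly_grid, rcond=None)

# Step 2: apply the estimated coefficients to the daily station data.
daily_grid = np.column_stack([np.ones(n_days), daily_stations]) @ beta
print(daily_grid[:5])                     # reconstructed daily grid series
```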

Recently, the PRISM climate group made its daily data available from the 1980s onwards.  I finally got a chance to download it and compare it to the daily data we had previously constructed from monthly averages.  This was quite a nerve-racking exercise: how far off were we, and does it change the results?  Or, in the worst case, did I screw up the code and get garbage for our previous paper?

Below is a table that summarizes PRISM's daily data for the growing season (April-September) in all counties east of the 100th meridian, except Florida, that grow either corn or soybeans, basically the set of counties we used in our study (one small change: our study covered 1980-2005, but since PRISM's daily data is only available from 1981 onwards, the tables below use 1981-2012).  The summary statistics are:

First sigh of relief!  The numbers look rather close.  (Strangely enough, the biggest deviations seem to be for precipitation, yet we used PRISM's monthly aggregates to derive season totals and did not rely on any interpolation there, so the new daily PRISM data simply differs a bit from the old monthly PRISM data.)  Also, recall from a recent post on the NARR data that degrees above 29C can differ a lot between data sets, as small differences in the daily maximum temperature give vastly different results.
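A quick way to see that sensitivity, again with made-up numbers; the 0.5C shift stands in for a disagreement between two data sets:

```python
import numpy as np

rng = np.random.default_rng(2)
tmax = 26.0 + rng.normal(0.0, 3.0, 183)   # synthetic growing-season maxima

def dd_above(t, thresh=29.0):
    """Degree days above the threshold."""
    return np.maximum(t - thresh, 0.0).sum()

base = dd_above(tmax)
shifted = dd_above(tmax + 0.5)            # same days, 0.5C warmer data set
print(base, shifted, shifted / base)      # small shift, large relative gap
```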

Next, I plugged both data sets into a panel of corn and soybean yields to see which one explains yields better (i) in sample and (ii) out of sample.  I estimated models using only the temperature variables (columns a and b) as well as models using the same four weather variables we used before (columns c and d).  PRISM's daily data is used in columns a and c; our re-engineered data is in columns b and d:
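For readers who want the flavor of this horse race, here is a toy version on a simulated county-by-year panel.  The variable names and coefficients are invented, and the real specifications include further controls; this only shows the fixed-effects regression and the R-squared one would compare across the two weather data sets.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: 50 counties x 30 years, with two degree days variables
# (dd_mod = moderate heat, dd_hot = extreme heat) built from one data set.
rng = np.random.default_rng(3)
n_county, n_year = 50, 30
df = pd.DataFrame({
    "county": np.repeat(np.arange(n_county), n_year),
    "year": np.tile(np.arange(1981, 1981 + n_year), n_county),
})
df["dd_mod"] = rng.normal(1800, 150, len(df))
df["dd_hot"] = rng.gamma(2.0, 20.0, len(df))
df["log_yield"] = (0.0003 * df["dd_mod"] - 0.006 * df["dd_hot"]
                   + rng.normal(0, 0.1, len(df)))

# County fixed effects absorb time-invariant differences across counties;
# re-running this with the other data set's regressors gives the comparison.
m = smf.ols("log_yield ~ dd_mod + dd_hot + C(county)", data=df).fit()
print(m.rsquared)
```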

Second sigh of relief: it is rather close again.  In all four comparisons, (1b) to (1a), (1d) to (1c), (2b) to (2a), and (2d) to (2c), our reconstruction, for some strange reason, has the larger in-sample R-square.  The reduction in RMSE is given in the second row of the footer: it is the reduction in out-of-sample prediction error compared to a model with no weather variables.  I draw 1,000 estimation samples of 80% of the data and derive the prediction error on the remaining 20%; the reported number is the average over the 1,000 draws.  For RMSE reductions, the picture is mixed: in the corn models that include only the two degree days variables, the daily PRISM data does slightly better, while the reverse is true for soybeans.  In the models that also include precipitation, season-total precipitation does better when constructed by adding up the monthly PRISM totals (columns d) rather than the new daily PRISM totals (columns c).
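The out-of-sample metric is easy to sketch.  Assuming design matrices X (with the weather variables) and X_null (the same model minus the weather variables), both including an intercept, something like:

```python
import numpy as np

def avg_rmse_reduction(X, X_null, y, n_draws=1000, frac=0.8, seed=0):
    """Average fractional reduction in out-of-sample RMSE of the weather
    model (X) relative to the no-weather benchmark (X_null)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    cut = int(frac * n)
    reductions = []
    for _ in range(n_draws):
        idx = rng.permutation(n)                  # random 80/20 split
        tr, te = idx[:cut], idx[cut:]
        b, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        b0, *_ = np.linalg.lstsq(X_null[tr], y[tr], rcond=None)
        rmse = np.sqrt(np.mean((y[te] - X[te] @ b) ** 2))
        rmse0 = np.sqrt(np.mean((y[te] - X_null[te] @ b0) ** 2))
        reductions.append(1.0 - rmse / rmse0)     # fractional reduction
    return np.mean(reductions)
```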

Finally, since the data we constructed is a knock-off, how can it do better than the original in some cases?  My wild guess (and this is really only speculation) is that we took great care in filling in missing data at the weather stations to get a balanced panel.  That way we ensured that year-to-year fluctuations are not due to the fact that one averages over a different set of stations each year.  I am not aware of how exactly PRISM deals with missing weather station data.
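For concreteness, one simple form of such gap-filling uses a complete neighboring station: regress the gappy station on the neighbor over the days both report, then predict the missing days.  This is only an illustration, not necessarily the exact procedure we used.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 365
neighbor = 15.0 + rng.normal(0.0, 8.0, n)            # complete nearby station
station = 1.5 + 0.9 * neighbor + rng.normal(0.0, 1.0, n)
station[rng.choice(n, 40, replace=False)] = np.nan   # 40 missing days

obs = ~np.isnan(station)                             # days both report
X = np.column_stack([np.ones(obs.sum()), neighbor[obs]])
b, *_ = np.linalg.lstsq(X, station[obs], rcond=None)

filled = station.copy()
filled[~obs] = b[0] + b[1] * neighbor[~obs]          # regression-based fill
print(np.isnan(filled).sum())                        # 0: balanced series
```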