Wednesday, April 16, 2014

Is it foolish to act locally on global problems?

I'm late and out-of-order on my blog posting activities, in part because I've been blogging a little on energy in a different forum (a new schtick for me).  Anyhow, that experience has me thinking more broadly about climate change and policy.  Since I don't have any great new statistics to report, I'm going to change gears and scratch an uncomfortable itch on our climate problem.

Economists see that this is a global problem and typically argue that solving it requires global action.  The whole developed world can go carbon neutral, but this means little if China and India don't follow suit. Yet we have little apparent ability to act on even a national scale, let alone a global one.

Instead, in fits and starts, we're seeing states like California and Hawai'i, and the EU, take action, seemingly despite themselves.  Yesterday I heard Barbara Boxer talk about California's cap and trade program and strict new fuel economy standards.  Hawai'i has the most ambitious clean energy goals in the country, goals we are nevertheless likely to exceed, perhaps by a wide margin.  But what California (and certainly Hawai'i) does or doesn't do to reduce greenhouse gas emissions is simply trivial.

These countries and states enact policies to reduce greenhouse gas emissions, some of which may be costly, even though local actions will have little bearing on our global problem.  Worse, some states, by acting locally, might put their regions at a competitive disadvantage economically.  So, acting locally appears to be all cost and no benefit.

How foolish is it to act locally on what is truly a global problem?  Quite, might say some respected economists.

I'm a little less cynical, and increasingly believe that local actions might make a difference, and that places taking unilateral action might even thrive.  Here are five reasons why:

(1) Acting locally can demonstrate proof of concept.  Curbing greenhouse gas emissions really shouldn't be that costly.  But while IPCC and CBO reports are nice, showing it can really be done without killing an economy is a lot more compelling.  A state, even a tiny one like Hawaii, can do this, which will lower the costs for others to follow suit.

(2) Local successes can be leveraged to provide moral, social and political pressure to invoke action on a larger scale. Prices can motivate behavior.  But positive examples can too.

(3) Early adopters may even gain economically in the short run.  Even if we don't have national or global policies today, we may expect them in the future.  New technologies and businesses need to be developed, and environmental entrepreneurs and startups may gravitate toward places on the cutting edge of going green.  This kind of thing is happening here in Hawaii.  It's small in scale, but could grow.  And these companies, and economies where they sit, could then be positioned to boom when larger scale policies are put in place.

(4) Spillover effects from technological development could be tremendous.  Local economies may not gain directly as ideas developed locally are replicated.  But they do gain indirectly by reduced greenhouse gas emissions.  Green technology is not necessarily the intellectual property we want to protect.

(5) For a tourist economy like Hawaii, green branding might have an advertising benefit.



Saturday, April 12, 2014

Daily weather data: original vs knock-off

Any study that focuses on nonlinear temperature effects requires precise estimates of the temperature distribution.  Unfortunately, most gridded weather data sets give only monthly estimates (e.g., CRU, University of Delaware, and, until recently, PRISM).  Monthly averages can hide extremes, both hot and cold: monthly means don't capture how often, or by how much, temperatures pass a certain threshold.
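To make the point concrete, here is a toy illustration in Python (purely made-up numbers, not any of the data sets above): two series with identical monthly means can have very different exposure above a threshold like 29C.

```python
import numpy as np

# Toy illustration: two 30-day series with the same monthly mean but very
# different exposure above a 29 C threshold. Numbers are made up.
mild = np.full(30, 25.0)              # every day exactly 25 C
swings = np.tile([20.0, 30.0], 15)    # alternates between 20 C and 30 C

def degree_days_above(temps, threshold=29.0):
    """Sum of daily exceedances above the threshold."""
    return float(np.maximum(temps - threshold, 0.0).sum())

print(mild.mean(), swings.mean())     # both 25.0
print(degree_days_above(mild))        # 0.0
print(degree_days_above(swings))      # 15 days at 30 C -> 15.0
```

Both series report a 25C monthly mean, but only the second one racks up any degree days above the threshold, which is exactly the information a monthly aggregate throws away.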

At the time Michael Roberts and I wrote our article on nonlinear temperature effects in agriculture, the PRISM climate group made only its monthly aggregates publicly available for download, not the underlying daily data.  We therefore reverse-engineered the PRISM interpolation algorithm: we regressed monthly averages at each PRISM grid cell on monthly averages at the closest publicly available weather stations (7 or 10, depending on the version).  Once we had the regression estimates linking monthly PRISM averages to the weather stations, we bravely applied them to the daily weather data at the stations to get daily data at the PRISM cells (for more detail, see the paper).  Cross-validation suggested we weren't that far off, but then again, we could only do cross-validation tests in areas that have weather stations.
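For readers curious what this looks like mechanically, here is a minimal sketch of the two-step idea on purely synthetic data; all variable names are invented for illustration, and this is not the actual code from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_months, n_stations = 240, 7   # e.g. 20 years of monthly data, 7 nearby stations

# Synthetic stand-ins for monthly means at the stations and at one grid cell.
station_monthly = rng.normal(15.0, 8.0, size=(n_months, n_stations))
true_weights = rng.dirichlet(np.ones(n_stations))
cell_monthly = station_monthly @ true_weights + rng.normal(0.0, 0.3, n_months)

# Step 1: regress the cell's monthly means on the station monthly means.
X = np.column_stack([np.ones(n_months), station_monthly])
coefs, *_ = np.linalg.lstsq(X, cell_monthly, rcond=None)

# Step 2: apply those monthly coefficients to DAILY station data to obtain
# interpolated daily values at the grid cell.
station_daily = rng.normal(15.0, 10.0, size=(365, n_stations))
cell_daily = np.column_stack([np.ones(365), station_daily]) @ coefs
print(cell_daily.shape)   # (365,)
```

The implicit assumption, of course, is that the weights linking a cell to its neighboring stations at the monthly scale carry over to the daily scale.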

Recently, the PRISM climate group made their daily data available from the 1980s onwards.  I finally got a chance to download it and compare it to the daily data we had previously constructed from monthly averages.  This was quite a nerve-wracking exercise: how far off were we, and does it change the results?  Or, in the worst case, did I screw up the code and get garbage for our previous paper?

Below is a table that summarizes PRISM's daily data for the growing season (April-September) in all counties east of the 100 degree meridian except Florida that either grow corn or soybeans, basically the set of counties we had used in our study (small change: our study used 1980-2005, but since PRISM's daily data is only available from 1981 onwards, the tables below use 1981-2012). The summary statistics are:

First sigh of relief! It looks like the numbers are rather close.  (Strangely enough, the biggest deviations seem to be for precipitation, even though we used PRISM's monthly aggregates to derive season totals and did not rely on any interpolation, so the new daily PRISM data are simply a bit different from the old monthly PRISM data.)  Also, recall from a recent post that looked at the NARR data that degree days above 29C can differ a lot between data sets, as small differences in the daily maximum temperature will give vastly different results.

Next, I plugged both data sets into a panel of corn and soybean yields to see which one explains those yields better (i) in sample; and (ii) out of sample.  I used models using only temperature variables (columns a and b) as well as models using the same four weather variables we used before (columns c and d). PRISM's daily data is used in columns a and c, our re-engineered data are in columns b and d:

Second sigh of relief: it seems to be rather close again. In all four comparisons, (1b) to (1a), (1d) to (1c), (2b) to (2a), and (2d) to (2c), our reconstruction for some strange reason has a larger in-sample R-squared.  The reduction in RMSE is given in the second row of the footer: it is the reduction in out-of-sample prediction error compared to a model with no weather variables. I take 1,000 random draws, each time using 80% of the data as the estimation sample, and derive the prediction error for the remaining 20%; the reported number is the average over the 1,000 draws. For RMSE reductions, the picture is mixed: for the corn models that include only the two degree days variables, the daily PRISM data does slightly better, while the reverse is true for soybeans.  In models that also include precipitation, season-total precipitation seems to do better when constructed by adding up the monthly PRISM totals (columns d) rather than the new daily PRISM precipitation totals (columns c).
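The out-of-sample exercise can be sketched as follows, on synthetic data with fewer draws and a generic OLS model rather than the actual yield regressions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
weather = rng.normal(size=(n, 2))   # stand-in weather regressors
yield_ = 1.0 + weather @ np.array([0.5, -0.3]) + rng.normal(0.0, 1.0, n)

def oos_rmse(X, y, n_draws=200, frac=0.8, rng=rng):
    """Mean out-of-sample RMSE over repeated random 80/20 splits."""
    rmses = []
    for _ in range(n_draws):
        idx = rng.permutation(len(y))
        cut = int(frac * len(y))
        tr, te = idx[:cut], idx[cut:]
        coef, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        resid = y[te] - X[te] @ coef
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(rmses))

ones = np.ones((n, 1))
rmse_base = oos_rmse(ones, yield_)                        # no-weather baseline
rmse_weather = oos_rmse(np.hstack([ones, weather]), yield_)
print(100 * (1 - rmse_weather / rmse_base))               # % RMSE reduction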

Finally, since the data we constructed are a knock-off, how can they do better than the original in some cases?  My wild guess (and this is really only speculation) is that we took great care in filling in missing data for weather stations to get a balanced panel.  That way we ensured that year-to-year fluctuations are not due to the fact that one is averaging over a different set of stations.  I am not aware of how exactly PRISM deals with missing weather station data.

Monday, March 31, 2014

IPCC report

I’m just getting back from the IPCC approval session in Yokohama, Japan. A full copy of the summary for policy makers, which we spent a week approving line by line, is here. The chapter reports are also there.

I’ve been doing some interviews on the report, but not that many given long flights, the different time zone here in Brizzy (yes, that's what they call it), etc. One really good overview story in the NY Times is here.

One of the reasons we started the G-FEED blog was to have a chance to speak in our own words, rather than just through the media. So far some of the stories on the report have been over the top negative. So I thought it worthwhile to offer here a few thoughts:

First, the sky is not falling, and the report doesn’t say that it is. To me, the report has some simple messages: 1) The impacts of climate change are already evident throughout the world, in many different places and types of natural and human systems. 2) The risks of further impacts are very real. 3) There are many things we can do to reduce those risks.

The report is sobering because the facts are sobering. But it also tries to be very constructive by pointing out all the options going forward. It lays out a vision, led by our co-chairs, of a much better world. I think the final summary for policymakers is a very well balanced and thoughtful report, and one that I am very proud to have been a part of.

I also get a lot of questions about the process. What is it like to be an IPCC lead author? Writing the chapter took about 3 years. When we meet we typically work up to 12 hour days, including weekends, often skipping lunch, usually jetlagged from long flights in coach class, and for no pay. But other than that it’s great!

More seriously, and on the positive side, it’s extremely rewarding to work with colleagues that are the best in the world at what they do, and to work hard with them to synthesize evidence and figure out what we feel comfortable saying, and how to say it most clearly. It’s also an honor to represent your country in an international process that is geared to providing the best possible scientific evidence. I typically leave the meetings tired but deeply impressed with the devotion and critical thinking of my fellow scientists. And jealous of those whose countries fly them business class.


As for the plenary approval, the only word I can think of right now to explain it is “exhausting”. Maybe I will have more energy and perspective later on to write about that. But if you want to know why some of the media stories are not completely clear, it may be that they were talking to authors who had slept maybe 5 hours in the past 48 hours, if that. It’s a further testament to the genius and stamina of Chris Field that he chaired endless sessions and still managed to be so articulate and upbeat in the press conference.


Monday, March 24, 2014

Breaking down the pause

The only redeeming thing about long plane rides is the chance to catch up on reading and movies. Alas, my flight to Japan had no movies, and it was only eight hours, so I didn’t have time to finish reading Sol’s last G-FEED post.

But I was able to catch up on various papers I’ve been meaning to read, including a recent collection of short papers that Nature Climate Change put together related to the recent hiatus, or pause, in global warming since 1998. As readers of this blog will surely know, a lot of attention has been given to the lack of significant warming trend since 1998, aka “the pause”. (Not to be confused with an "awkward pause.”) I’ve normally viewed the pause as much ado about nothing, or at least very little, since you’d expect variation around the long-term trend and models have consistently shown a non-small probability for flat trends over a 10 or even 15 year period, even as longer-term trends are positive. So having not followed the conversation too closely, I was interested to catch up with these papers. Here are a couple of interesting lessons:

First, Gavin Schmidt and co-authors explain how a lot of the disparity between expected and observed trends is not necessarily due to natural variability, but instead to the fact that short-term forcings since 2000 are not what models had assumed. As they say “the influence of volcanic eruptions, aerosols in the atmosphere and solar activity all took unexpected turns over the 2000s. The climate model simulations, effectively, were run with the assumption that conditions were broadly going to continue along established trajectories.” But instead, all of these factors deviated in a way that caused climate to be cooler. In other words, the models weren’t the problem, but the assumptions used to force them were. They also emphasize that none of these factors should be expected to continue to cool climate much, and predict that “ENSO will eventually move back into a positive phase and the simultaneous coincidence of multiple cooling effects will cease. Further warming is very likely to be the result.”

Second, an interesting piece led by Sonia Seneviratne shows that trends in daytime extreme temperatures over land, arguably a more relevant measure in terms of climate impacts, haven’t slowed down at all (red line in figure below). Essentially, they object to the entire notion of a “pause.” 



All of these papers on the pause also reminded me of a great book I recently finished – “Thinking Fast and Slow” by Daniel Kahneman. He talks a lot about loss aversion - how humans tend to dislike losses about twice as much as they like gains -  and how even small losses can be very annoying. One consequence is that for an identical putt, professional golfers will tend to try harder and make it more often if it’s for par (to avoid loss) than if for birdie (to get a gain). I’m sure that explains why I always miss birdie putts! Another consequence is that people are usually risk averse when it comes to gains (i.e. prefer to take $100 than flip a coin for a 50/50 chance of winning $200), but risk seeking when it comes to losses (i.e. prefer to flip a coin for a 50/50 chance of losing $200 rather than give $100, because $100 hurts almost as much as $200). This is compounded by another tendency humans have - to way overweight events that have low probability in their decisions. So even if there’s a very small chance of a “no loss” outcome, people will tend to take a chance hoping that outcome will happen, because it will be so much more pleasant than even a small loss, and because they think it’s more likely than it actually is.

All this seems like a pretty good description of many people's reaction to climate change. It might be a small deal or it could be really bad. We could maybe guarantee a small loss if we paid for a lot of mitigation and adaptation. Or we can roll the dice and hope for the best. After reading his book, it hardly seems surprising that people would opt for the latter. They do in all walks of life, and are worse off because of it. It is also hardly surprising that people will latch onto anything that seemingly justifies overweighting the probability of no loss, such as the pause. I'm not saying that either is the appropriate response to the problem, especially by institutions that should be less prone to these behavioral quirks, but it may help to partly explain people's fascination with the pause.

The pause has also highlighted for me how wide the full distribution of potential trends over a 10 year period is, and that includes the potential for very rapid warming. What will happen if warming rates in the next 10 years are at the other end of the distribution – other than, of course, people asking climate scientists why their models are too conservative? It’s a question I’ve been looking at with Claudia Tebaldi in terms of crop implications, which hopefully will be a paper in the not-so-distant future to blog about.


Friday, March 21, 2014

When evidence does not suffice

Halvard Buhaug and numerous coauthors have released a comment titled “One effect to rule them all? A comment on climate and conflict” which critiques research on climate and human conflict that I published in Science and Climatic Change with my coauthors Marshall Burke and Edward Miguel.

The comment does not address the actual content of our papers.  Instead it states that our papers say things they do not say (or that our papers do not say things they actually do say) and then uses those inaccurate claims as evidence that our work is erroneous.

Below the fold is my reaction to the comment, written as the referee report that I would write if I were asked to referee the comment.

(This is not the first time Buhaug and I have disagreed on what constitutes evidence. Kyle Meng and I recently published a paper in PNAS demonstrating that Buhaug’s 2010 critique of an earlier paper made aggressive claims that the earlier paper was wrong without actually providing evidence to support those claims.)

Friday, March 14, 2014

Violence is expensive


We've blogged before (ad nauseam?) about our ongoing research that suggests that changes in climate could substantially affect patterns of human violence.  Imagine, for a moment, that you buy these results.  A natural question is, how much should we care?

One way to answer this question is to try to calculate the added economic cost of a climate-induced change in conflict.  E.g., if temperatures were to rise 1 degree, what would be the economic cost of the ensuing increase in conflict?  Expressing the cost in dollars then allows us to compare it against other things we spend money on to give a sense of how "large" the costs of climate-induced violence might be.  And to the extent that we actually think future changes in climate could increase conflict risk, such a calculation could also inform estimates of the "social cost of carbon" -- essentially, the overall cost of emitting one more ton of CO2 today.

Clearly it is not easy to calculate how much a climate-induced change in conflict would cost. It's going to be some combination of the economic damage wrought by different types of conflict, and the increase in each type of conflict due to a change in climate.   Our paper provides some estimates of the latter, but figuring out the former seems like a real bear.


This is why it was very interesting to see a new report entitled "The Economic Cost of Violence Containment", put out by a group called the Institute for Economics and Peace [ht: Tom Murphy].  As the title suggests, the goal of this report is to calculate the aggregate economic costs of violence and what we spend to contain it.   Violence, they calculate, is very very expensive.  Their headline number is that we spend about $10 trillion a year in "violence containment", which they define as "economic activity that is related to the consequences or prevention of violence… directed against people or property." For those scoring at home, $10 trillion is about 10% of the total value of stuff the world produces (the so-called Gross World Product).  For those of you who think only in Benjamins, it's 100 billion of them. 

Their estimate is the result of a big adding-up exercise where they use existing estimates from the literature on how much each type of violence costs directly -- from what we spend to house and feed conflict refugees, to the economic cost of a homicide -- and add to them estimates of how much we spend to "contain" violence more broadly, which for them includes all military expenditure (more on that below).  The figure below shows their assessment of breakdown of the different costs, with the shares given by the exploding pie chart (yeah!) on the left, and the absolute values on the right.  


There are clearly some questions about what the authors have decided to include and not include.  For instance, it doesn't seem like military expenditure can just be thought of as a cost. Sure, with a fixed budget, increased expenditure on the military necessarily reduces the investments we could otherwise make, many of which could be higher return.  But this doesn't mean that investment in the military has no economic benefit.  Irrespective of one's feelings about military spending, the military employs a lot of people, both directly and indirectly, so this sort of expenditure has benefits as well as costs.

Then as you can see in the table on the right, to get from $5 trillion in direct costs to $10 trillion in total costs, they literally multiply the direct costs by 2.  The claim is that the economic spillover from reduced violence -- e.g. from investments made elsewhere with the money you save, and from people no longer having to protect themselves from violence -- is as large as the direct cost of violence.  This number seems to come out of nowhere.

Nevertheless, if we assume that military expenditure is actually a wash in terms of total costs, and assume that there is no multiplier effect, we still have a cost-of-violence estimate of $2.3 trillion.  If you drop the "private security" and "internal security" categories for similar reasons (e.g. they employ people), you are down closer to $1.3 trillion.  So call it $1 trillion in annual costs of violence.  I.e., to be conservative, let's assume that the report was off by an order of magnitude.

So what of the economic costs of climate-induced increase in conflict?  For a quick back-of-the-envelope, we can combine this $1 trillion estimate with our earlier estimates on how conflict risk responds to increases in temperature.  We had calculated these latter estimates as standardized effects -- i.e. a percentage change in conflict per 1 standard deviation change in temperature or precipitation -- and came up with numbers between about 4% and 14%, depending on the type of violence.  And since we're not used to thinking of temperature changes in terms of standard deviations, we made the map below (Fig 6 in our Science paper) to show the projected change in temperature between 2000 and 2050, expressed as multiples of the historical temperature standard deviation at each location.



So putting this all together:  $1 trillion annual cost of violence, say a 5% increase in violence for every SD increase in temperature to be conservative, and say a 2SD increase in temperature by 2050 (most populated regions are higher than that, as shown in the figure).  Under the assumption that future societies will respond to temperature increases as societies have in the past, then this would give us a $100 billion increase in the annual cost of violence by 2050.  Assuming a linear temperature increase between now and 2050 (and assuming effects stop after 2050), and setting the discount rate at 2%, you can calculate the present value of a future increase in climate-induced conflict by adding up the effects in each year and discounting them back to the present.

The number I get is just over $1.5 trillion, or a little over 1% of current Gross World Product. That is a large number.  As a simple calibration, it is about 1/5th of the total cost of climate change calculated in the Stern Review a few years back (which did not consider costs from violence).
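For anyone who wants to check the arithmetic, here is the discounting step as a few lines of Python. The start year and the exact ramp convention are assumptions, so the total shifts with those choices; with the simple convention below it lands somewhat under the $1.5 trillion figure but in the same ballpark.

```python
# Present value of the climate-induced conflict costs sketched above:
# annual costs ramp linearly from ~0 today to $100 billion/year in 2050,
# stop after 2050, and are discounted at 2%. The start year and ramp
# convention are assumptions, so the exact total depends on those choices.
base_cost = 100.0   # $billion per year, reached in 2050
years = 37          # 2014 through 2050
rate = 0.02

pv = sum(base_cost * t / years / (1 + rate) ** t for t in range(1, years + 1))
print(f"present value: ~${pv:,.0f} billion")   # on the order of $1 trillion+
```

The qualitative conclusion is insensitive to the details: any reasonable ramp and discount rate puts the present value of climate-induced violence costs on the order of a trillion dollars.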

Clearly there are a ton of assumptions that go into these sorts of calculations, but these order-of-magnitude exercises can be useful for getting a basic answer to the "should we care" question.  And even if this new report is off by an order of magnitude about the costs of violence and conflict, I think the answer to whether we should care about the potential costs of climate-induced violence is a simple "Yes".  These are big numbers.

Monday, March 10, 2014

The power of avocado

I woke up last Saturday to several emails and voice mails asking for my view on guacamole. That isn’t usually how my weekend starts. But it turns out that Chipotle issued some statement in their annual report about risks of price increases, and among them was avocado. I presume this is mainly related to the current drought, but then Chipotle wrote something about this being a possible trend and cited a paper we wrote nearly 10 years ago with some projections for avocado.

None of that would have amounted to much, but I guess it was a slow news day and reporters rarely pass up the chance to use “Holy Guacamole” in a headline (nor should they). Today I checked and a search for “avocado Chipotle” on Google News gives over 6000 results, ranging from the predictable to the fairly impressive “Guacapocalypse”.

The study we did looked at state level data and tried to infer climate sensitivity for a range of high value crops. Avocados were one that seemed to suffer with very high late summer temperatures. This was based on only a couple of hot years and so the uncertainties were quite large, as we reported in the study. We also did some follow up work with more data and more fancy statistics in what I consider a better paper. There we decided to focus on crops where the relationships were most robust, and that didn’t include avocado. But it did include some popular crops, namely the four shown below (figure shows distribution of projected impacts in terms of % yield, not including CO2 effects). Which makes me wonder what the best headline for a story on cherries would be? I’m sure Max has already thought of a few good ones.


Maybe I’m over-analyzing (and by maybe, I mean almost definitely) but I think the episode demonstrates a few common things. First, it is very difficult to contrast current trends in crop yields or prices to what would have happened without anthropogenic climate change. Max’s last post discusses this issue, one he and I have been grappling with for years in our service for the IPCC. Should we expect more down years for avocado in the future? That’s not an easy question, certainly not one I’ve looked at enough for this particular crop to offer a firm answer, even if one was possible.

Second, the media has a bit of a tendency to exaggerate things. I assume I’m the first person to ever notice this. (That was sarcasm). Things are either a total non-issue or the end of the world, and nothing in between is newsworthy. That makes it tricky to communicate an issue like climate change where almost everything is somewhere between these two extremes. 

Third, and probably most important, is that people take what businesses say about climate change risks very seriously. The shame is that I know a lot of businesses are convinced of the science and have thought a lot about the risks posed by climate change, but they rarely make these concerns public. I recall sitting on a panel at a large agricultural company and being asked what the company could do to help society prepare for climate change. My answer was that they should not be so silent about the issue. They were the third agricultural company that year to tell me they believe the science, that they are concerned about the risks, but that they don’t dare talk publicly about it for fear of alienating customers who see climate change as a political issue.


As a bit of consolation for US readers, please know it is hard to find an avocado here in Australia for less than $3 a piece. So that extra $2 for guacamole at Chipotle is a real bargain. I hope there’s still some left when I get back!

Sunday, March 9, 2014

Observing Climate Change Impacts

One can tell that David is on sabbatical. He is cracking jokes left and right and they are funny! We will be posting weekly from now on and since my last name starts with A, I start. I also get to be lead author on all papers we write together.

I have spent the last three years with a very talented group of individuals, writing a chapter on detection and attribution of climate change impacts on natural and human systems for the IPCC. The chapter will be released in a few weeks in Yokohama and I will blog live from the meeting. The fancy term detection and attribution can be casually interpreted as "what observable impact has climate change already had on [insert favorite system/outcome here]?" This is harder than I thought. Let me outline some of the issues:

1) Just because your system is changing doesn't mean climate is to blame for negative trends, and climate change can't be dismissed as an issue just because you're experiencing favorable trends. Think about crop yields, for example. As has been pointed out again and again, better management practices, fertilizers, irrigation and pesticides have doubled the yields of many crops over the past 40 years. Growing yields do not mean that climate change cannot be a problem: yields could have grown even faster in its absence. A slowdown in yield growth is possibly consistent with climate change, but could also be due to a worsening of other factors. So in order to blame climate change, you need to show two things: (a) a sensitivity of your system to climate while properly controlling for all confounders, which is really hard (Wolfram is really good at this); and (b) a changed climate in the region under study.

2) That opens the question, what is a changed climate? Many of the systems we are concerned with are localized systems. AR3 and AR4 in their treatment of detection and attribution focused on just "climate change", not necessarily "anthropogenic climate change". A lot of the detection and attribution literature does this. I think that this is good for now. As Marshall and Kyle have pointed out, there is plenty of local climate change happening, which they in turn use to identify a climate sensitivity.

3) The data for most sectors are not very good, and the required weather data are thin at best in many areas. Just because you have a gridded dataset that provides a number doesn't mean that number has anything to do with local temperature. There are large swaths of land and time with no reliable measure of temperature or rainfall. On the outcomes side, AR4 focused largely on detecting changes in phenologies, which was a highly publicized result. But while I love butterflies, frogs and flowers, as a social scientist I am also keenly interested in what is happening to human health, agricultural yields, fisheries and economic growth. These literatures do a decent job of characterizing the sensitivities of these sectors to fluctuations in weather (and sometimes even climate), but the vast majority of them focus on projecting 100 years into the future. A very small number of papers actually turn around and take a look back.

So here is what I think we should do over the next 5-6 years until someone else gets to write the D&A paper for AR6:

1) When you are estimating sensitivities using your fancy econometric models, be clear what the sensitivities are and what they capture. Are they weather sensitivities? Climate sensitivities? What omitted variables should we worry about that you could not control for? Can we expect these sensitivities to remain stable over the next 100 years?

2) When you do your projections use multiple climate models. Relying on a single model is for suckers. OK. I was a sucker until I met Marshall Burke.

3) Don't just download the projections of climate, download the historical values too. They are freely available.

4) Simulate the changes in [insert favourite outcome here] with and without anthropogenic climate change. Daithi Stone and the gang can tell you how to do this. This essentially means you leave in volcanoes and other natural forcings and assume away human emissions; then you turn the humans on. The difference is anthropogenic climate change. Do this for the past 30 years and calculate impacts. Then do this to your heart's delight for the future.

5) Put the words "detection and attribution" in your title, keywords or paper to make sure we can find you when we are looking for you.
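Step 4 above can be cartooned in a few lines, with made-up numbers and a hypothetical sensitivity; nothing here comes from an actual climate-model run.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 30

# Synthetic stand-ins for two temperature series over the past 30 years:
# natural forcings only vs. natural plus anthropogenic forcings.
t_natural = 20.0 + rng.normal(0.0, 0.4, years)
t_all = t_natural + np.linspace(0.0, 1.0, years)   # add a ~1 C human-caused trend

# A hypothetical estimated sensitivity: -5% yield per degree C of warming.
def impact(temps, baseline=20.0, sens=-0.05):
    return sens * (temps - baseline)

# The anthropogenic impact is the difference between the two runs.
attributable = impact(t_all).mean() - impact(t_natural).mean()
print(f"mean attributable yield impact: {attributable:+.1%}")
```

In practice the "with" and "without" temperature series come from the attribution model runs, and the sensitivity comes from the econometric estimates in step 1, but the differencing logic is the same.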

I think these studies are very powerful and important. I am working on four of them now. If you are in the business of projecting climate impacts using multistep models you should do the same. It's not hard.

Tuesday, March 4, 2014

Here a MIP, there a MIP, everywhere a MIP, MIP

A growing number of papers are looking at climate change impacts using multiple models. A few more are out this week in a special issue of PNAS. Mostly I just want to point readers of this blog to them if they are interested. I am generally a big fan of model intercomparisons (MIPs). I talk about them so much in my modeling course that I can usually hear the students’ collective sigh of relief when I move on to another topic.

Most of the strengths of MIPs have been demonstrated clearly with the climate MIPs, now on their 5th rendition. They are useful for estimating uncertainties, they can point to important weaknesses in some models, and most of all they can create something that is more than the sum of its parts – the magical multi-model mean – where independent model errors cancel and estimates become more reliable. A positive externality of these activities is that the experiments and observational data used to rigorously test models tend to improve, since each group isn't in charge of testing its own model ("trust us, it's great, we validated it years ago!").
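The error-cancelling magic is just the usual averaging-of-independent-errors story. A toy illustration with synthetic numbers (the "models" here are nothing but a known true value plus independent noise, so this shows only the statistical mechanism, not anything about real climate models):

```python
import numpy as np

rng = np.random.default_rng(1)

# 10 toy "models", each estimating the same true quantity with
# independent unit-variance errors, across 1000 trials.
truth = 2.0
models = truth + rng.normal(0.0, 1.0, size=(10, 1000))

# Error of a single model vs. error of the multi-model mean.
single_model_rmse = np.sqrt(((models[0] - truth) ** 2).mean())
multi_model_mean = models.mean(axis=0)
ensemble_rmse = np.sqrt(((multi_model_mean - truth) ** 2).mean())
print(single_model_rmse, ensemble_rmse)  # ensemble error is roughly 1/sqrt(10) as large
```

Of course, real model errors are not fully independent, so the gain in practice is smaller than the 1/sqrt(N) ideal, but the direction of the effect is the same.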

I think two reasons the climate MIPs were so successful are that the results were made available to the entire community and, relatedly, that most of the groups performing the comparisons were not simultaneously working on model improvement. I'm not sure yet whether AgMIP will follow this example. From conversations with them, I think they'd like to, but they're not quite there.

With all of the positives going for it, I am still a little puzzled by a few things in the recent MIP papers. For one, it's not clear to me why agricultural studies still do so much comparison of "no CO2" and "with CO2" runs, and conclude that the difference represents some indication of how much more work needs to be done on CO2. I'm not saying that I haven't heard various explanations, but none of them is satisfying. The chance that CO2 has no effect on crops is about the same as the chance that Wolfram will show up to work tomorrow in a dress (that's a very low chance, in case you were wondering). If you are looking at uncertainty from CO2, you should look at various plausible responses to CO2, and zero isn't one of them. I can see reasons to make estimates without CO2 – if your model doesn't treat it, or if you are focused on effects of heat in order to test adaptations – but if you are trying to look at impacts of different emissions scenarios, why keep running a model without CO2? It reeks of trying to make the problem look worse than it is.
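To make the "zero isn't one of them" point concrete, here is a sketch of sampling over a range of plausible CO2 responses rather than comparing against a zero-response run. The 5-15% fertilization range and the baseline yield are illustrative placeholders, not estimates from any study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample yields over a range of plausible CO2 fertilization responses,
# instead of comparing one "with CO2" run against an implausible zero.
baseline_yield = 10.0                          # t/ha, hypothetical
co2_response = rng.uniform(0.05, 0.15, 1000)   # plausible fractional yield gains
yields = baseline_yield * (1 + co2_response)

# The resulting uncertainty band spans the plausible responses;
# the zero-response ("no CO2") case falls outside it by construction.
low, high = yields.min(), yields.max()
print(low, high)
```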

Another niggle is that the experimental design isn't always set up to provide insight into what causes differences between models. I'm sure that will improve with time, but for now they are drawing some big conclusions from fairly weak comparisons. For example, the figure below shows that the huge spread in model results comes largely from two LPJ models and one GAEZ model being very positive. They use this to conclude that models without N limitation project more positive impacts. But there are tons of differences between these and the other models; why not conclude that models whose names start with L or G are more optimistic? The theory is that models with N limits can't respond as much to CO2, but it should also be the case that they can't respond as much to temperature, as work Wolfram and I did a while back concluded. (We also saw that the GAEZ model was way positive in regions that shouldn't have much N stress.) They'd need an experimental design built to demonstrate that it's nitrogen and not something else.

Just to be clear, I really do like the MIPs, and the people involved are high quality and have been very generous with their repeated invitations to participate. Unfortunately, they have more meetings than Australia has poisonous snakes (which is a lot, in case you were wondering). I am participating in a wheat site-level intercomparison, which hopefully will be out this year.


On another note, I am now fully settled into life in Brisbane (on sabbatical) and will try to post a little more often. I'm learning lots of interesting stuff, and not all of it is about cricket (although there has been a curious uptick in the national team's performance since I went to their match my first week – see "Australia's resurgence as a world power in cricket has been swift, ruthless and dangerous"). Mostly I'm deep into crop physiology, which readers of this blog (if there are any left) may or may not care about, but it may be the only thing I have to talk about for a while. Also, the IPCC approval session is coming up in a few weeks, which should be interesting. I think Max will also be there, blogging for one of the other sites he actually writes things for.

Friday, February 21, 2014

An orgy of technical advice for applied economists


Apologies for the tepid pace of G-Feed blogging.  Some of us are on sabbatical, some are looking for post-graduate employment, some of us have a new job and multiple cats, and some are mentally and physically preparing for the upcoming Formula 1 season.

In lieu of actual content, allow me to point folks toward an incredibly useful compendium of methodological advice for those doing applied economics research. It's here, and is a list of the somewhat more technical blog posts that have appeared on the excellent Development Impact blog.
There you will find advice on a range of important topics, from how to account for observer ("Hawthorne") effects in experimental work, to how to design randomized controlled trials to measure spillover effects, to how best to implement various popular strategies for causal inference in non-experimental data.

Also, if you're even remotely interested in development or in these methodological topics, the Development Impact blog is worth a follow.