All of the attention on the presidential election has brought up some issues that are familiar to those of us who work in the world of anticipating and preparing for climate change impacts. In particular, there's been a clear contrast in the election coverage between, on the one hand, a lot of media stories that describe the race as a "toss-up" or "too close to call" and, on the other hand, careful analysis of the actual polling data in swing states that says the odds are overwhelmingly in favor of another term for President Obama. Nate Silver has become a nerd celebrity for his analysis and daily blog posts (his new book is also really good). But there are many others who come to similar or even stronger conclusions, like Sam Wang at Princeton, who has put Obama's chances at over 98%.
I think there are a few things going on here. One is that the popular media has basically no incentive to report anything but a very close race. It keeps readers checking back frequently, and campaigns may be more likely to pay media outlets for advertising if the narrative is of a very close race (although admittedly, they have so much money that the narrative may not make much difference). A more fundamental reason, though, is just a basic misunderstanding of probability. Not being able to entirely rule something out (e.g., Romney winning) is not the same as saying it could easily happen. People mistake the possible for the probable. They want black and white, not shades of gray (at least not fewer than 50 shades of gray).
(Also in the news this week: Hurricane Sandy. Another case where people who understand probabilities, like Mayor Bloomberg, have little trouble seeing the link to global warming, while others continue the silly argument that if it was possible for such things to happen in the past, then global warming can't play a role. In their black and white world, things can either happen or they can't. There is no understanding of probability or risk. I call this the Rava view of the world, based on the episode of Seinfeld in which Elaine tries to convince Rava that there are degrees of coincidence:
RAVA: Maybe you think we're in cahoots.
ELAINE: No, no... but it is quite a coincidence.
RAVA: Yes, that's all, a coincidence!
ELAINE: A big coincidence.
RAVA: Not a big coincidence. A coincidence!
ELAINE: No, that's a big coincidence.
RAVA: That's what a coincidence is! There are no small coincidences and big coincidences!
ELAINE: No, there are degrees of coincidences.
RAVA: No, there are only coincidences! ...Ask anyone! (Enraged, she asks everyone in the elevator) Are there big coincidences and small coincidences, or just coincidences? (Silence) ...Well?! Well?!...)
Back to my point (you have a point!?): when we turn to climate impacts on agriculture, it's still quite common to hear people say that we just don't know what will happen. Usually this comes in some form of a "depends what happens to rainfall, and models aren't good with rainfall" type of argument. It's true that we do not know with complete certainty which direction climate change will push food production or hunger. But we do know a lot about the probabilities. Given what we know about how fast temperature extremes are increasing, and how sensitive crops are to these extremes, it's very probable in many cases, like U.S. corn, that impacts on crop yields will be negative. (For example, a few years back I tried with Claudia Tebaldi to estimate the probabilities that climate change would negatively impact global production of key crops by 2030. For maize, we put the odds at over 95%.) Even in cases where rainfall goes up, the negatives tend to predominate. It's also very likely that in some cases, like potatoes in England, impacts will be positive. In either case we cannot say anything with absolute certainty, but that doesn't mean we should describe impacts as "too close to call."
We academics can probably learn a thing or two from how Nate Silver is trying to explain risk and probability in his daily posts. But it's also fair to say that our task is a little harder for a couple of reasons. First, there are lots of data on past polls and election results, which people can use to figure out empirically how accurate their methods would have been in past cases. With climate change, we are often talking about changes that have not been seen in the past, or at least not in enough cases to develop a large sample size for testing. A second and, in my view, more critical difference is that climate impacts happen on top of many other changes in society. Elections provide a clear outcome - a candidate wins or loses. But what does a climate impact look like? How do we know if our predictions are right or not? A lot of the entries in this blog are around that question, but the short answer is that we can't directly measure impacts; we have to be clever in thinking of ways to pull them out of the data.
So maybe all of the attention to the election forecasts will help the public understand probabilities a little better. If nothing else, people should understand the difference between a 50% chance and an 80% chance of something happening. Reporting the latter as if it were the former is annoying in the context of the election, or as Paul Krugman says, "reporting that makes you stupid." But confusing the two in the case of climate impacts is more than annoying; it can lead to a lot more wishful thinking and a lot fewer smart investments than would otherwise be the case.
One final note: even when people are on board with the meaning of probabilities, it's still not so easy to get them right. Silver has the election at roughly 85% for Obama. That's high, but it puts Romney's chances about 10 times higher than Wang does. So just like with climate impacts, smart people can disagree, and it usually comes down to what they assume about model bias (Silver seems to allow a much higher chance that all polls are wrong in the same direction). But even if smart analysts disagree, very few if any of them think the election results (or climate impacts) are a toss-up.
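To see how an assumption about model bias can turn a Wang-like 98% into a Silver-like 85%, here is a toy Monte Carlo sketch. This is not either analyst's actual model; the lead, noise, and bias numbers are made up purely for illustration. The only point is that allowing a larger chance that all polls share a common error in the same direction widens the distribution of outcomes and pulls the win probability toward 50%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) numbers: a 2-point poll lead, with about
# 0.9 points of independent sampling noise per simulated election.
LEAD = 2.0
NOISE_SD = 0.9

def win_prob(bias_sd, n=200_000):
    """Monte Carlo probability that the leader wins, when all polls
    may share a common bias with standard deviation bias_sd."""
    bias = rng.normal(0.0, bias_sd, n)     # correlated error: shifts every poll together
    noise = rng.normal(0.0, NOISE_SD, n)   # independent sampling error
    margin = LEAD + bias + noise
    return (margin > 0).mean()

# Small allowance for systematic polling error: win prob near 98%.
print(win_prob(bias_sd=0.3))
# Larger allowance for systematic error: the same lead now gives ~82%.
print(win_prob(bias_sd=2.0))
```

The lead never changes between the two calls; only the assumed chance of a shared, same-direction polling error does. That one assumption is enough to move the forecast from near-certainty to merely strong odds, which is roughly the shape of the Silver vs. Wang disagreement.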