This post is a convolution of David's post earlier today and Sol's from a few days ago. Our debate with Buhaug et al that Sol blogged about has dragged on for a while now, and engendered a range of press coverage, the most recent by a news reporter at Science. "A debate among scientists over climate change and conflict has turned ugly", the article begins, and goes on to catalog the complaints on either side while doing little to engage with the content.
Perhaps it should be no surprise that press outlets prefer to highlight the mudslinging, but this sort of coverage is not really helpful. And lost in this particular article were the many things I think we've actually learned in the protracted debate with Buhaug.
We've been having an ongoing email dialog with ur-blogger and statistician Andrew Gelman, who often takes it upon himself to clarify or adjudicate these sorts of public statistical debates, which is a real public service. Gelman writes in an email:
In short, one might say that you and Buhaug are disagreeing on who has the burden of proof. From your perspective, you did a reasonable analysis which holds up under reasonable perturbations and you feel it should stand, unless a critic can show that any proposed alternative data inclusion or data analytic choices make a real difference. From their perspective, you did an analysis with a lot of questionable choices and it's not worth taking your analysis seriously until all these specifics are resolved.

I'm sympathetic with this summary, and am actually quite sympathetic to Buhaug and colleagues' concern about the variable selection in our original Science article. Researchers have a lot of choice over how and where to focus their analysis, which is a particular issue in our meta-analysis, since there are multiple climate variables to choose from and multiple ways to operationalize each of them. It could therefore be that our original effort to bring each researcher's "preferred" specification into our meta-analysis doubly amplified any publication bias: the researchers of the individual studies we reviewed emphasized their few significant results, and then Sol, Ted, and I picked the most significant one out of those. Or perhaps the other researchers are not to blame, and the problem lies only with the choices Sol, Ted, and I made about what to focus on.
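To see why this kind of specification choice matters, here is a toy simulation, not our analysis or Buhaug's, just an illustration of the general statistical point: if a researcher tests several candidate climate variables that in truth have no effect and reports the most significant one, the chance of reporting a "significant" result is far above the nominal 5%. The variable count (`k = 5`) and sample size are arbitrary assumptions for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def min_pvalue(n=100, k=5):
    """Regress a pure-noise outcome on each of k candidate 'climate'
    variables (also noise) and return the smallest of the k p-values,
    mimicking a researcher who reports the best-looking specification."""
    y = rng.standard_normal(n)
    X = rng.standard_normal((n, k))
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(k)]
    return min(pvals)

# Across many simulated "studies", how often does the best of the
# five null specifications look significant at the 5% level?
sims = 2000
family_rate = np.mean([min_pvalue() < 0.05 for _ in range(sims)])
print(family_rate)
```

Under the null, each p-value is roughly uniform, so the chance that the best of five clears 0.05 is about 1 - 0.95**5 ≈ 0.23, more than four times the nominal rate, before any meta-analysis has even aggregated across studies.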