We’ve had a few really hot days here in California. It won’t surprise readers of this blog to know the heat has made Marshall
unusually violent and Sol unusually unproductive. They practice what they
preach. Apart from that, it’s gotten me thinking back to a common issue in our
line of work - getting “good” measures of heat exposure. It’s become quite
popular to be as precise as possible in doing this – using daily or even hourly
measures of temperature to construct things like ‘extreme degree days’ or ‘killing
degree days’ (I don’t really like the latter term, but that’s beside the point
for now).
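For anyone who hasn't computed these before, here's a minimal sketch of the kind of calculation I mean, using daily mean temperatures and a 30°C threshold. In practice people often interpolate within the day from daily max and min temperatures, so treat this as illustrative rather than the "official" recipe:

```python
import numpy as np

def degree_days_above(tmean, threshold=30.0):
    """Degree days above a threshold, from a series of daily mean temperatures (°C).

    Each day contributes max(T - threshold, 0); summing over the season gives an
    extreme-degree-day (EDD) style measure. GDD works the same way with lower
    and upper bounds.
    """
    tmean = np.asarray(tmean, dtype=float)
    return np.maximum(tmean - threshold, 0.0).sum()

# Example: a 10-day stretch with a few very hot days
daily_t = [24, 26, 31, 33, 35, 29, 28, 32, 30, 27]
print(degree_days_above(daily_t, threshold=30.0))  # 1 + 3 + 5 + 2 = 11.0
```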
I’m all for precision when it is possible, but the reality
is that in many parts of the world we still don’t have good daily measures of
temperature, at least not for many locations. But in many cases there are more
reliable measures of monthly temperatures than of daily ones. For example, the CRU has gridded time series
of monthly average max and min temperature at 0.5 degree resolution.
It seems a common view is that you can’t expect to do too
well with these “coarse” temporal aggregates. But I’m going to go out on a limb
and say that sometimes you can. Or at least I think the difference has been
overblown, probably because many of the comparisons between monthly and daily
weather show the latter working much better. But I think it’s overlooked that most
comparisons of regressions using monthly and daily measures of heat have not
been a fair fight.
What do I mean? On the one hand, you typically have the
daily or hourly measures of heat, such as extreme degree days (EDD) or
temperature exposure in individual bins of temperature. Then they enter into
some fancy-pants model that fits a spline or some other flexible function that
captures all sorts of nonlinearities and asymmetries. Then, on the other hand,
for comparison you have a model with a quadratic response to growing season
average temperature. I’m not trying to belittle the fancy approaches (I bin
just as much as the next guy), but we should at least give the monthly data a fighting
chance. We often restrict it to growing season averages rather than monthly averages, often
using average daily temperatures rather than average maximums and minimums, and,
most importantly, we often impose symmetry by using a quadratic. Maybe this is
just out of habit, or maybe it’s the soft bigotry of low expectations for those
poor monthly data.
As an example, suppose, as we’ve discussed in various other
posts, that the best predictor of corn yields in the U.S. is exposure to very
high temperatures during July. In particular, suppose that degree days above 30°C
(EDD) are the best measure. Below I show the correlation of this daily measure for a
site in Iowa with various growing season and monthly averages. You can see that
average season temperature isn’t so good, but July average is a bit better, and
July average daily maximum even better. In other words, if a month has a lot of really hot days, then that month's average daily maximum is likely to be pretty high.
You can also see that the relationship isn’t exactly linear.
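If you want to play with this kind of comparison yourself, here's a rough sketch of the calculation using made-up daily weather for a single site (this is not the Iowa data behind the figure, and the EDD here is computed from daily maximums for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-year daily weather for one site:
# 30 years x 153 days (May-Sep), with July occupying days 61-91.
years, ndays = 30, 153
tmax = 26 + 4 * rng.standard_normal((years, ndays)) + 3 * np.sin(np.linspace(0, np.pi, ndays))
tmin = tmax - 10 - 2 * rng.standard_normal((years, ndays))
tmean = (tmax + tmin) / 2
july = slice(61, 92)

# Daily-based measure: degree days above 30°C in July
edd_july = np.maximum(tmax[:, july] - 30.0, 0).sum(axis=1)

# Coarser candidates to correlate against it
candidates = {
    "season mean T": tmean.mean(axis=1),
    "July mean T": tmean[:, july].mean(axis=1),
    "July mean Tmax": tmax[:, july].mean(axis=1),
}
for name, x in candidates.items():
    r = np.corrcoef(x, edd_july)[0, 1]
    print(f"{name}: r = {r:.2f}")
```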
So a model with yields vs. any of these monthly or growing season averages
likely wouldn’t do as well as EDD if the monthly data entered in as a linear or
quadratic response. But as I described in an old post that
I’m pretty sure no one has ever read, one can instead define simple asymmetric
hinge functions based on monthly temperature and rainfall. In the case of U.S.
corn, I suggested these three based on a model fit to simulated data:
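To give a sense of the general form these take, here's a rough sketch of a hinge transform of monthly data. The knots below are illustrative only: the 450 mm rainfall knot is the one mentioned later in the post, while the temperature knot is made up for the example.

```python
import numpy as np

def hinge_above(x, knot):
    """max(x - knot, 0): response kicks in only above the knot."""
    return np.maximum(np.asarray(x, dtype=float) - knot, 0.0)

def hinge_below(x, knot):
    """min(x, knot): response increases up to the knot, flat afterwards."""
    return np.minimum(np.asarray(x, dtype=float), knot)

# Hypothetical regressors built from monthly data (knots are illustrative):
july_tmax = np.array([29.5, 31.2, 33.0, 30.1])       # July average daily maximum (°C)
season_prcp = np.array([380.0, 470.0, 520.0, 300.0])  # growing-season rainfall (mm)

x_heat = hinge_above(july_tmax, 30.0)    # damage only once July Tmax exceeds the knot
x_rain = hinge_below(season_prcp, 450.0) # yields respond up to 450 mm, flat after
print(x_heat, x_rain)
```

The point of the hinge form is simply that it lets the response be asymmetric around the knot, which a quadratic in the same variable can't do.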
This is now what I’d consider more of a fair fight between
daily and monthly data. The table below is from what I posted before. It
compares the out-of-sample skill of a model using two daily-based measures (GDD
and EDD) with a model using the three monthly-based hinge functions above. Both
models include county fixed effects and quadratic time trends. In this
particular case, the monthly model (3) even works slightly better than the
daily model (2). I suspect this has less to do with the
temperature terms than with the fact that model (2) uses a quadratic in growing
season rainfall, which is probably less appropriate than the more asymmetric
hinge function, which says yields respond to rainfall up to 450 mm and are flat
afterwards.
| Model | Calibration R² | Average root mean square error (calibration) | Average root mean square error (out-of-sample, 500 runs) | % reduction in out-of-sample error |
|-------|----------------|-----------------------------------------------|-----------------------------------------------------------|------------------------------------|
| 1     | 0.59           | 0.270                                         | 0.285                                                     | --                                 |
| 2     | 0.66           | 0.241                                         | 0.259                                                     | 8.9                                |
| 3*    | 0.68           | 0.235                                         | 0.254                                                     | 10.7                               |
Overall, the point is that monthly data may not be so much
worse than daily for many applications. I’m sure we can find some examples
where it is, but in many important examples it won’t be. I think this is good
news given how often we can’t get good daily data. Of course, there’s a chance the heat is making me crazy and I’m wrong about all this. Hopefully at least I've provoked the others to post some counter-examples. There's nothing like a good old-fashioned conflict on a hot day.