This is a bit trivial, but I was recently on travel, and I
often ponder a couple of things when traveling. One is how to use my work time
more efficiently. Or more specifically, what fraction of requests to say yes
to, and which ones to choose? It’s a question I know a lot of other scientists
ask themselves, and it’s a moving target as the number of requests for talks, reviews, etc. changes over time.
The other thing is that I usually get a rare chance to sit and
watch SportsCenter, and I'm continually amazed by how many statistics are now used to
discuss sports. Like “so-and-so has a 56% completion percentage when rolling
left on 2nd down” or “she’s won 42% of points on her second serve
when playing at night on points that last less than 8 strokes, and when someone
in the crowd sneezes after the 2nd stroke.” Ok, I might be exaggerating
a little, but not by much.
So it gets me wondering why scientists haven’t been more proactive
in using numbers to measure our perpetual time-management issues. Take reviews
for journals as an example. It would seem fairly simple for journals to report
how many reviews different people perform each year, even without revealing who
reviewed which papers. I’m pretty sure this doesn’t exist, but I could be wrong. (The closest thing I’ve seen is that Nature sends an email at the end of each year saying something like “thanks for your service to our journal family, you have reviewed 8 papers for us this year.”) It would seem that comparing the number of reviews you perform to the number of your own submissions that get reviewed by others (also something journals could easily report) would be a good measure of whether each person is doing their part.
Or more likely you’d want to share the load with your co-authors,
but also account for the fact that a single paper usually requires about 3
reviewers. So we can make a simple “science citizen index” or “scindex” that would be

SCINDEX = A / (B x C/D) = A x D / (B x C)

where

A = # of reviews performed
B = # of your submissions that get reviewed (even if the paper ends up rejected)
C = average number of reviews needed per submission (assume = 3)
D = average number of authors per your submitted papers
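If you prefer code to algebra, here is a minimal sketch of the same calculation in Python (the function name, argument names, and the default of 3 reviews per submission are just illustrative):

```python
# Minimal sketch of the scindex defined above: A x D / (B x C).
def scindex(reviews_performed, submissions_reviewed,
            avg_authors_per_paper, reviews_per_submission=3):
    """Reviews performed (A) divided by your fair share (B x C / D)."""
    fair_share = (submissions_reviewed * reviews_per_submission
                  / avg_authors_per_paper)
    return reviews_performed / fair_share
```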
Note that to keep it simple, none of this counts time spent
as an editor of a journal. And it doesn’t adjust for being junior or senior,
even though you could argue junior people should do fewer reviews and make up
for it when they are senior. And I’m sure some would complain that measuring
this will incentivize people to agree to reviews but then do a lousy job. (Of course
that never happens now.) Anyhow, if this number is equal to 1 then you are
pulling your own weight. If it’s more than 1 you are probably not rejecting
enough requests. So now I’m curious how I stack up. Luckily I have a folder
where I save all reviews and can look at the number saved in a given year. Let’s
take 2013. Apparently I wrote 27 reviews, not counting proposal- or assessment-related reviews. And Google Scholar can quickly tell me how many papers I was
an author on in that year (14), and I can calculate the average number of
authors per paper (4.2). Let’s also assume that a few of those were first
rejected after review elsewhere (I don’t remember precisely, but that’s an
educated guess), so that my total submissions were 17. So that makes my scindex
27 x 4.2 / (17 x 3) = 2.2. For 2012 it was 3.3.
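As a quick sanity check on that arithmetic, using the numbers straight from the paragraph above:

```python
# 2013 numbers: A = 27 reviews written, B = 17 submissions reviewed,
# C = 3 reviews per submission, D = 4.2 authors per paper.
A, B, C, D = 27, 17, 3, 4.2
print(round(A * D / (B * C), 1))  # prints 2.2
```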
Holy cow! I’m doing 2-3 times as many reviews as I should be
reasonably expected to. And here I was thinking I had a reasonably good balance
of accepting/rejecting requests. It also
means that there must be lots of people out there who are not pulling their
weight (I’m looking at you, Sol).
It would be nice if the standard citation reports added
something like a scindex alongside the h-index and other standard metrics. Not because I
expect scientists to be rewarded for being good citizens, though that would be
nice, or because it would expose the moochers, but because it would help us
make more rational decisions about how many of these thankless tasks to take on. Or
maybe my logic is completely off here. If so, let me know. I’ll blame it on
being tired from too much travel and doing too many reviews!
So David, I'm willing to accept the logic for your index. What I'm wondering is whether a more senior scientist should actually aim to have a higher 'scindex' to make up for earlier years when reviewing was not as much a part of his/her expected activities. You specifically mention this in the 5th paragraph, and this assertion seems defensible.
This still leaves the overall question, though, of how far one should go in this 'repayment' (if we wish to consider this an academic debt). And I would further argue that even if one has personally accounted for their costs to the system from junior efforts and dutifully repaid the debt, there may still be reason to expect more senior scientists to maintain a 'scindex' above 1, because other metrics such as the h-index tend to increase with time even in the face of reduced output. But aside from this particular argument, I do think you should be able to feel as though you've done your share at the levels you've described here.
Now, if I might, what would you think of a different proposal? Let's name reviewers and credit (or discredit) them in some fashion. Several journals already list some reviewers/editors. But I'm wondering if there have been any attempts to quantify review quality (a la impact metrics, for one example). This seems fraught with potential problems on the surface, but I have faith that if intelligent folk kick the notion around a bit, there may be a way to get at review quality.
Just goes to show how even non-nerd sports types can be motivated to use math when winning is on the line. First it was baseball with "sabermetrics," and now practically every winning pro sports team has become data-focused.
In the academic world, "winning" means publishing high-impact papers in high-impact journals. What an editor really needs to know is which reviewers have the most effect on the creation of high-impact papers. This means getting out of the old-boy network method of review assignment and creating a new method of "papermetrics" that collects everything from turnaround time on reviews to number of words of comments to emotional content of word choice, then correlating these with post-publication citation and impact profiles. At the very least, it's yet another publication for your vita; at best, it would make a permanent improvement in the efficiency of the publication process.