Saturday, May 18, 2013

Do journal impact factors distort science?

From my inbox:
An ad hoc coalition of unlikely insurgents -- scientists, journal editors and publishers, scholarly societies, and research funders across many scientific disciplines -- today posted an international declaration calling on the world scientific community to eliminate the role of the journal impact factor (JIF) in evaluating research for funding, hiring, promotion, or institutional effectiveness.
Here's the rest of the story at Science Daily.

And a link to DORA, the "ad hoc coalition" in question.

It seems fairly obvious that impact factors do distort science.  But I wonder how much, and I also wonder if there are realistic alternatives that would do a better job of encouraging good science.

There are delicate tradeoffs here: some literatures seem to become mired in their own dark corners, forming small circles of scholars who speak a common language.  They review each other's work, sometimes because no one else can understand it, and sometimes because no one else cares to understand it.  The circle holds itself in high regard, but the work is pointless to those residing outside of it.

At the same time, people obviously have very different ideas about what constitutes good science.

So, what does the right model for evaluating science look like?
