Marshall posted last week
about our poverty mapping paper in Science. One thing a reporter asked me the
other day was about the origins of the project, and it got me thinking about an
issue that’s been on my mind for a while – how does innovation in science
happen, and how sub-optimal is the current system for funding science?
First the brief back story:
Stefano started as an assistant professor in Computer Science a couple years back, and reached
out to me to introduce himself and discuss potential areas of mutual interest.
I had recently been talking with Marshall about the GiveDirectly approach to finding poor
people, and we had been wondering if there was (a) a better (automated) way to
find thatched vs. metal roofs in satellite imagery and (b) some way to verify whether roof
type is actually a decent predictor of poverty. So we chatted about this once
or twice, and then some students working with Stefano tried to tackle the roof
problem. That didn’t work too well, but then someone (Stefano, I think, but I
can’t really recall) suggested we try to train a neural network to predict
poverty directly without focusing on roofs. Then we talked about how poverty
data was pretty scarce, how maybe we could use something like night lights
instead, and yada yada yada, now we have the paper.
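For the technically curious, the basic recipe can be sketched in a few lines of code. To be clear, this is only an illustration of the general idea (learn image features from a plentiful proxy like night lights, then relate those features to the scarce survey data), not the actual code or data behind the paper; the file names, array shapes, and model choice below are hypothetical.

```python
# Purely illustrative sketch, not the paper's actual pipeline.
# Idea: features are extracted from daytime satellite images by a model
# that was first trained to predict night-light intensity (the plentiful
# proxy), then related to scarce, survey-measured wealth.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical inputs:
features = np.load("daytime_image_features.npy")  # shape: (n_clusters, n_features)
wealth = np.load("survey_wealth_index.npy")       # shape: (n_clusters,)

# Regularized regression from image features to survey-measured wealth,
# evaluated with cross-validated R^2.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, features, wealth, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```

Some kind of regularized regression is a natural choice for the last step, since there are typically far more image features than survey clusters.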
Now the more general issue. I think most people outside of
science, and even many people starting out as scientists, perceive the process
as something like this: you apply for funding, you get it, you or a student in your group does the work, you
publish it. That sounds reasonable, but in my experience that hasn’t been how
it works for the papers I’m most excited about. The story of this paper is
more like: you meet someone and chat about an idea, you do some initial work
that fails, you keep talking and trying, you apply for funding and get rejected
(in this case, twice by NSF and a few times by foundations), you keep doing the
work anyway because you think it’s worthwhile, and eventually you make a
breakthrough and publish a paper.
In this telling, the progress happens despite the funding
incentives, not because of them. Luckily at Stanford we have pretty generous
start-up packages and internal funding opportunities that enable higher risk
research, and we are encouraged by our departments and the general Silicon
Valley culture to take risks. And we have unbelievably good students, many of whom are partially or fully funded or very cheap (a key contributor on the Science paper was an undergrad!). But that only slightly lessens the frustration of
having proposals to federal agencies rejected because (I’m paraphrasing
the last 10+ proposal reviews I’ve gotten) “it would be cool if it worked, but I
don’t think it’s likely to work.” If I weren’t at Stanford, I probably would
have long ago stopped submitting risky ideas, or gotten out of academia
altogether.
I know this is a frustration shared by many colleagues, and
also that there have been a fair number of academic studies on incentives and
innovation in science. One of the most interesting studies I’ve read is this
one, about the effects of receiving a Howard Hughes Medical Institute
(HHMI) investigator grant
on creativity and productivity. The study isn’t all that new, but definitely a
worthwhile read. For those not familiar with the HHMI, it is a fairly
substantial amount of funding given to a person, rather than for a specific
project, with a longer time horizon than most awards. It’s specifically designed
to foster risk taking and transformational work.
The article finds a very large effect of getting an HHMI on
productivity, particularly in output of “top hit” publications. Interestingly,
it also finds an increase in “flops”, meaning papers that get cited much less
than typical for the investigator (controlling for their pre-award
performance). This is consistent with the idea that the awardees are taking
more risks, with both more home runs and more strike outs. Also consistent is
the fact that productivity drops in the first few years after getting an award,
presumably because people start to pursue new directions. Even more interesting
to me was the effect of getting an HHMI on applications to NIH. First, the
number of applications goes way down, presumably because recipients spend less
time seeking funds and more time actually doing science. Second, the average ratings
for their proposals get worse (!), consistent with the idea that federal funds
are biased against risky ideas.
Unfortunately, there aren’t any studies I can find on the “people
not project” types of awards in other areas of science. Personally, I know my
NASA new investigator program award was instrumental in freeing me up to
explore ideas as a young faculty member. I never received an NSF CAREER award
(rejected 3 times because – you guessed it – the reviewers weren’t convinced
the idea would work), but that would be a similar type of thing. I’d like to
see a lot more empirical work in this area. There’s some work on awards, like in this paper,
but awards generally bring attention and prestige, not actual research funds,
and they apply to a fairly small fraction of scientists.
I’d also like to see some experiments set up, where people
are randomly given biggish grants (i.e. enough to support multiple students for
multiple years) and then tracked over time. Then we can test a few hypotheses I
have, such as:
- Scientists spend way too much time writing and reviewing proposals. An optimal system would limit all proposals to five pages, and give money in larger chunks to promote bigger ideas.
- There is little or maybe even zero need to include feasibility as a criterion in evaluating proposals for specific projects. More emphasis should be placed on whether the project will have a big positive impact if it succeeds. Scientists already have enough incentive to make sure they don’t pursue dead ends for too long, since their work will not get published. Trying to eliminate failure, or even trying hard to reduce failure rates, based on a panel of experts is counterproductive. (It may be true that panel ratings are predictive of future project impact, but I think that comes from identifying high potential impact rather than correctly predicting the chance of failure.)
- People who receive HHMI-like grants are more likely to ponder and then pursue bigger and riskier ideas. This will result in more failures and more big successes, with an average return that is much higher than that from a lot of little successes (a toy calculation after this list illustrates the arithmetic). (For me, getting the MacArthur award was, more than anything, a challenge to think about bigger goals. I try to explain this to Marshall when he constantly reminds me that people’s productivity declines after getting awards. I also don’t think he’s read the paper to know it’s only a temporary decline. Temporary!)
- Aversion to risk and failure is especially high for people who do not have experience as researchers, and thus don’t appreciate the need to fail on the way to innovation. One prediction here is that panels or program managers with more successful research histories will tend to pick more high impact projects.
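To make the arithmetic behind the third hypothesis concrete, here is a toy calculation with completely made-up numbers. It is only meant to illustrate why a portfolio of riskier projects can have a higher expected return even with a much higher failure rate.

```python
# Toy illustration with made-up numbers: a "safe" project almost always
# yields a small payoff; a "risky" project usually fails but occasionally
# yields a big one. Compare expected value per project.
p_safe, payoff_safe = 0.9, 1.0      # 90% chance of a modest success
p_risky, payoff_risky = 0.1, 20.0   # 10% chance of a big success

ev_safe = p_safe * payoff_safe      # 0.9
ev_risky = p_risky * payoff_risky   # 2.0

print(f"expected value, safe project:  {ev_safe:.1f}")
print(f"expected value, risky project: {ev_risky:.1f}")
```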
One final thought. On several occasions I have been asked by
foundations or other donors what would be a good “niche” investment in topics
around sustainability. I think they often want to know what specific topics, or
what combination of disciplines, are most ripe for more funding. But my answer
is typically that I don’t know enough about every possible topic to pick
winners. Better to do something like HHMI for our field, i.e. encourage big
thinking and risk taking among people who have good track records or indicators of promise. But that requires a tolerance for failure, and even
foundations in the midst of Silicon Valley seem to struggle with that.