Tuesday, August 23, 2016

Risk Aversion in Science

Marshall posted last week about our poverty mapping paper in Science. The other day a reporter asked me about the origins of the project, and the question got me thinking about an issue that’s been on my mind for a while – how does innovation in science happen, and how sub-optimal is the current system for funding science?

First the brief back story: Stefano started as an assistant professor in Computer Science a couple years back, and reached out to me to introduce himself and discuss potential areas of mutual interest. I had recently been talking with Marshall about the GiveDirectly approach to finding poor people, and we had been wondering if there was (a) a better (automated) way to find thatched vs. metal roofs in satellite imagery and (b) some way to verify whether roof type is actually a decent predictor of poverty. So we chatted about this once or twice, and then some students working with Stefano tried to tackle the roof problem. That didn’t work too well, but then someone (Stefano, I think, but I can’t really recall) suggested we try to train a neural network to predict poverty directly without focusing on roofs. Then we talked about how poverty data was pretty scarce, how maybe we could use something like night lights instead, and yada yada yada, now we have the paper.
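For anyone curious what that idea looks like in practice, here is a toy sketch in Python. To be clear, this is not our actual pipeline; the data are made up, and the model choices and variable names are purely illustrative. The gist is to first learn to predict night lights, which are observed everywhere, from daytime image features, and then reuse that learned representation to predict poverty at the much smaller set of surveyed locations.

    # Toy illustration only: random "image features" stand in for what a
    # convolutional network would extract from satellite imagery.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Per-village daytime image features (synthetic stand-ins).
    n_villages, n_features = 2000, 64
    X = rng.normal(size=(n_villages, n_features))

    # Night lights are available everywhere; poverty surveys cover only a few sites.
    nightlights = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_villages)
    surveyed = rng.choice(n_villages, size=200, replace=False)
    poverty = -0.8 * nightlights[surveyed] + rng.normal(scale=1.0, size=200)

    # Step 1: train a small network on the abundant night-lights proxy.
    proxy_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    proxy_model.fit(X, nightlights)

    # Step 2: reuse its hidden-layer representation (ReLU is MLPRegressor's
    # default activation) as features for the scarce poverty labels.
    hidden = np.maximum(0, X @ proxy_model.coefs_[0] + proxy_model.intercepts_[0])
    scores = cross_val_score(Ridge(alpha=1.0), hidden[surveyed], poverty, cv=5, scoring="r2")
    print(f"cross-validated R^2 on surveyed villages: {scores.mean():.2f}")

The real work, of course, is in getting good image features out of the satellite data; the sketch is just meant to show why an abundant proxy like night lights helps when survey data are scarce.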

Now the more general issue. I think most people outside of science, and even many people starting out as scientists, perceive the process as something like this: you apply for funding, you get it, you or a student in your group does the work, you publish it. That sounds reasonable, but in my experience that hasn’t been how it works for the papers I’m most excited about. The path of this paper was more like: you meet someone and chat about an idea, you do some initial work that fails, you keep talking and trying, you apply for funding and get rejected (in this case, twice by NSF and a few times by foundations), you keep doing the work anyway because you think it’s worthwhile, and eventually you make a breakthrough and publish a paper.

In this telling, the progress happens despite the funding incentives, not because of them. Luckily at Stanford we have pretty generous start-up packages and internal funding opportunities that enable higher-risk research, and we are encouraged by our departments and the general Silicon Valley culture to take risks. And we have unbelievably good students, many of whom are partially or fully funded or very cheap (a key contributor on the Science paper was an undergrad!). But that only slightly lessens the frustration of having proposals to federal agencies rejected because (I’m paraphrasing the last 10+ proposal reviews I’ve gotten) “it would be cool if it worked, but I don’t think it’s likely to work.” If I weren’t at Stanford, I probably would have stopped submitting risky ideas long ago, or gotten out of academia altogether.

I know this is a frustration shared by many colleagues, and also that there have been a fair number of academic studies on incentives and innovation in science. One of the most interesting studies I’ve read is this one, about the effects of receiving a Howard Hughes Medical Institute (HHMI) investigator grant on creativity and productivity. The study isn’t all that new, but it is definitely a worthwhile read. For those not familiar with it, the HHMI investigator award is a fairly substantial amount of funding given to a person, rather than for a specific project, with a longer time horizon than most awards. It’s specifically designed to foster risk taking and transformational work.

The article finds a very large effect of getting an HHMI award on productivity, particularly in output of “top hit” publications. Interestingly, it also finds an increase in “flops”, meaning papers that get cited much less than is typical for the investigator (controlling for their pre-award performance). This is consistent with the idea that the awardees are taking more risks, with both more home runs and more strikeouts. Also consistent is the fact that productivity drops in the few years after getting an award, presumably because people start to pursue new directions. Even more interesting to me was the effect of getting an HHMI award on applications to NIH. First, the number of applications goes way down, presumably because recipients spend less time seeking funds and more time actually doing science. Second, the average ratings for their proposals get worse (!), consistent with the idea that federal funds are biased against risky ideas.

Unfortunately, there aren’t any studies I can find on the “people not project” types of awards in other areas of science. Personally, I know my NASA new investigator program award was instrumental in freeing me up to explore ideas as a young faculty member. I never received an NSF CAREER award (rejected 3 times because – you guessed it – the reviewers weren’t convinced the idea would work), but that would be a similar type of thing. I’d like to see a lot more empirical work in this area. There’s some work on awards, like in this paper, but awards generally bring attention and prestige rather than actual research funds, and they apply to a fairly small fraction of scientists.

I’d also like to see some experiments set up, where people are randomly given biggish grants (i.e. enough to support multiple students for multiple years) and then tracked over time. Then we can test a few hypotheses I have, such as:
  1. Scientists spend way too much time writing and reviewing proposals. An optimal system would limit all proposals to five pages, and give money in larger chunks to promote bigger ideas. 
  2. There is little or maybe even zero need to include feasibility as a criterion in evaluating proposals for specific projects. More emphasis should be placed on whether the project will have a big positive impact if it succeeds. Scientists already have enough incentive to make sure they don’t pursue dead ends for too long, since their work will not get published. Trying to eliminate failure, or even trying hard to reduce failure rates, based on a panel of experts is counterproductive. (It may be true that panel ratings are predictive of future project impact, but I think that comes from identifying high potential impact rather than correctly predicting the chance of failure.)
  3. People who receive HHMI-like grants are more likely to ponder and then pursue bigger and riskier ideas. This will result in more failures and more big successes, with an average return that is much higher than that of a lot of little successes. (For me, getting the MacArthur award was, more than anything, a challenge to think about bigger goals. I try to explain this to Marshall when he constantly reminds me that people’s productivity declines after getting awards. I also don’t think he’s read the paper to know it’s only a temporary decline. Temporary!)
  4. Aversion to risk and failure is especially high for people who do not have experience as researchers, and thus don’t appreciate the need to fail on the way to innovation. One prediction here is that panels or program managers with more successful research histories will tend to pick more high impact projects.

I’m sure some of the above are wrong, but I’m not sure which ones. If anyone has answers, please let me know. It’s an area I’m mostly ignorant about, but one I’m interested to learn more about. I’d apply for some funding to study it, but it’d probably be rejected. I’d rather waste my time blogging than writing more proposals.



One final thought. On several occasions I have been asked by foundations or other donors what would be a good “niche” investment in topics around sustainability. I think they often want to know what specific topics, or what combination of disciplines, are most ripe for more funding. But my answer is typically that I don’t know enough about every possible topic to pick winners. Better to do something like HHMI for our field, i.e. encourage big thinking and risk taking among people who have good track records or indicators of promise. But that requires a tolerance for failure, and even foundations in the midst of Silicon Valley seem to struggle with that.

5 comments:

  1. Although it supports a different stage of academic careers, I think the NSF GRFP functions a bit like the HHMI - seeking to support individuals with potential rather than specific projects. And there is some quasi-experimental analysis of its positive impacts on publication outcomes - https://www.nsf.gov/ehr/Pubs/GRFP_Final_Eval_Report_2014.pdf.

    Out of curiosity, did you feel the NSF application process helped you refine and advance your hypotheses? Would the nightlights/poverty paper have evolved in the same way without the failed NSF applications and foundation conversations? Maybe I'm just looking for reasons to justify the amount of time I'm spending on my first round of NSF/NASA applications :)

  2. Hi Robert

    thanks for the pointer to the NSF GRFP study. i think in this case the paper wasn't affected by the proposal writing. there's definitely some value in thinking through a project, and having to submit a proposal is a common time to do that. but in this case we met weekly or biweekly and thought through the project pretty often regardless of proposals. i don't think proposals should be eliminated, just that proposals could be shorter and evaluated more on potential impact than chances of failing. also, i'm sure you are a much better proposal writer than me, so you'll be fine!

  3. Great blog post, David! I definitely agree with many of your assertions. We learn a lot from our failures in science, and this needs to be better communicated -- both the incidence of failure and the failed investigations themselves.
    I, too, have experienced reviewers arguing that something really ambitious is unlikely to be feasible -- even when we have already made significant progress. The idea of short proposals, evaluated on their potential impact/contribution, is very appealing. Where I differ with you is on the size of the grants. I think 'small is beautiful'! Modest funding for a grad student to work with a faculty member for two years to develop an interesting, high-risk idea can be very productive. Also, students are often the best vehicle for bringing together faculty from different disciplines. I look forward to your next paper in Science! Cheers, Tom Hertel

  4. I was just pointed to this great post by Jerry Shively. HT to Jerry, and thank you David for this excellent description of something that's bugged me for a long time.

    The problem identified in this post is really deep and widespread: in addition to the HHMI study and other findings mentioned, there is an earlier study of agricultural research showing how competitive grants distract researchers and slow the pace of productivity growth:
    http://ajae.oxfordjournals.org/content/88/4/783.short

    Another angle is how short-term impact factors distract researchers from big breakthroughs:
    http://www.nber.org/papers/w22180

    In the end there will have to be a mix of instruments and incentives - so thank you for getting these points out there!

    --Will

  5. There's another paper that looks at what happens when mathematicians win the Fields Medal:

    http://www3.nd.edu/~tjohns20/RePEc/deendus/wpaper/022_Fields.pdf

    They find that the number of papers published declines for winners, but that they increasingly wander into new subfields. This could be consistent with risk-taking, but could also be consistent with a newfound know-it-all-ness (the physics-Nobel-turned-renewable-energy-expert being a common phenotype in this regard...).

    On a separate point: since the poverty paper got written anyway, maybe NSF made the right choice? [I probably shouldn't put this in writing]. This was of course not the announced reason for rejection, but maybe NSF's marginal dollar is not best spent at a wealthy private university. This leads to a related research question, to add to your list: are there constant, increasing, or decreasing returns to scale in research funding? Surely returns are increasing through some part of the domain...
