Sunday, June 28, 2009

Can the grant system be overhauled?

Gina Kolata writes in today's New York Times that $105 billion has been spent on cancer research since 1971, when Richard Nixon declared 'war on cancer', but, measured by death rates, we're not a whole lot better off now than we were then. Indeed, cancer is good business, and cancer specialty clinics are opening or expanding all over the country (and advertising how good they are, to promote sales). Cancer treatments are long and costly to our health care system, so cancer is a serious problem, both economically and for anyone who has it.

Ms Kolata correctly attributes much of this to the conservative nature of the grant system. Researchers don't even apply for money for new ideas because they know the system doesn't reward innovation, so what does get funded is research that attempts nothing beyond incremental progress. And researchers' careers, prestige, and even salaries depend on grants.

Kolata's article is specifically about cancer, but the conservative nature of the grant system holds in every field. It's partly because 'peer review'--the judging of grants by people who do similar work--keeps it that way: people can only evaluate what they already know. It's partly because the system demands that the researcher demonstrate in advance that the work can be done, which requires pilot data. And, as with any large establishment, the system learns how to protect and perpetuate its own interests.

It is not easy to say what to do about it. What kind of accountability for grant recipients would be appropriate? The research questions being asked are tough, so 'cures' cannot be promised in advance, and the more basic the research, the less clear the criteria for success can be. The idea of accountability is that if your research is paid for by a health institute, it should make notable contributions to health, not just produce journal articles or advance the researcher's career. A candid observer could quickly eliminate a high fraction of grant applications on the grounds that, even if they succeeded as promised, their contribution would be very minor, as Kolata illustrates. Perhaps there should be a penalty for making promises that aren't kept--at the least, that could help make the system more honest.

Limits on the size or length of projects, or on an investigator's total grants, would help spread funds around. But what about the role, sometimes legitimate and sometimes based mainly on a love of technology, of very expensive equipment and approaches? Is there a way to identify less technically flashy but perhaps more efficacious work? It's easy to see that such work exists: lifestyle changes could prevent vastly more cancer than, say, identifying genetic susceptibility, yet we spend far more money on cancer genetics research than on environmental change.

Speaking of lifestyles, one cannot order up innovations the way one can order a burger with fries. Might there be 'meta' approaches that would increase the odds that someone, somewhere, will make a key finding or have a penetrating idea? Would that more likely come from someone in a big lab working on long-term projects, or someone in a small lab working in relative obscurity?

Or is it OK to perpetuate the system, assuming some good will come of it here and there, while a lot of people are employed to manage big labs, run the experiments, collect data, make machinery and lab equipment, and sweep the floors of large lab buildings?

These reflections apply to much that is happening in the life (and other) sciences today. The pressures described above drive the system in particular directions, including fads and technologically rather than conceptually based approaches; in that sense some things get studied while other approaches may not even be considered (or, being out of the mainstream, not funded). An example relevant to our blog and work is the way that genetic determinism and, more broadly, a genome-centered focus drive so much of the life and health sciences. By no means irrelevant or all bad! But it is a gravitational force that pulls resources away from other areas that might be equally important.

Clearly major findings are generated by this way of doing science, even if false promises go unsanctioned (indeed, those who make them usually continue to make them, and continue to be funded with major grants). The life sciences certainly do increase our knowledge, in many clearly important ways. Yet disease rates are not dropping in proportion to the grandiose promises.

Is there a solution to all this? Could the system be dramatically overhauled, with, say, research money parceled out equally to anyone a university has deemed worthy of employment? Could the peer review system be changed so that some non-experts sit on review panels, ensuring that the system doesn't simply perpetuate the insider network--or would they not know enough to act independently? Universities encourage and reward grant success not because it allows important work to be done by their brilliant professors but because it brings prestige, score-counting, and 'overhead' money to campus. Can universities' dependence on overhead, or faculty's dependence on grant-paid salaries, be lessened? Is there a way to encourage and reward innovation?
