

Mar 29, 2010

No. 22: Grant Clinic: Impact Score vs. Significance Score


Grant Clinic: Impact Score vs. Significance Score

Reader Question: I just received the summary statement for a grant proposal I submitted to NIH, and I can’t figure it out. I got a really good score for Significance, but my Overall Impact score was mediocre. Isn’t Impact the same as Significance?

Expert Comments:

It’s not surprising that you are confused about the difference between the “Impact” and “Significance” categories. Many NIH reviewers have asked the same question since the review format was changed; in fact, the Center for Scientific Review even published a PDF explaining the difference.

In the NIH’s words, “Significance” is how important the research project would be if everything worked perfectly. “Impact” is the likelihood that the project, as written, will change the relevant field of research and make a difference in human health. Put another way: “Significance” is whether the project is worth doing, while “Impact” is what NIH gets for its money at the end of the project.

A project can’t have impact if it isn’t worth doing, so a high score in both areas matters. But if the research plan is seriously flawed, or if the reviewers don’t think the research team has the experience and resources to complete the proposed experiments, then the project is unlikely to have much impact even if the topic is highly significant.

So the “Impact” score really is a combination of all the review criteria: Significance, Innovation, Investigators, Approach, and Environment.
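
To make that distinction concrete, here is a minimal sketch, in Python, of the scoring arithmetic under the enhanced review criteria: each reviewer gives the five criteria advisory scores of 1-9 and assigns a separate, holistic Overall Impact score of 1-9, and the final score NIH reports is the mean of the panel's Impact scores multiplied by 10. The reviewer and panel numbers below are hypothetical; the point is simply that the final score is never calculated from the criterion scores.

```python
# A minimal sketch of the post-2009 scoring arithmetic, for illustration
# only; the reviewer and panel numbers below are hypothetical.

from statistics import mean

# The five scored criteria each get an advisory 1-9 score (1 = exceptional,
# 9 = poor). Crucially, nothing is computed from these numbers.
criterion_scores = {"Significance": 1, "Innovation": 3, "Investigators": 2,
                    "Approach": 6, "Environment": 2}

# The reviewer's Overall Impact score is a separate, holistic 1-9 judgment.
# Here a weak Approach drags it well below the excellent Significance score.
reviewer_impact = 5

def final_overall_impact(panel_impact_scores):
    """The score NIH reports: mean of the panel's 1-9 Impact scores,
    times 10, rounded -- a range of 10 (best) to 90 (worst)."""
    return round(mean(panel_impact_scores) * 10)

# A panel of three such reviewers yields a mediocre 50 despite the top
# Significance score -- exactly the reader's situation.
print(criterion_scores["Significance"])               # 1 (exceptional)
print(final_overall_impact([reviewer_impact, 4, 6]))  # 50
```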

Comments by Karin Rodland, PhD, Laboratory Fellow, Biological Sciences Division, Pacific Northwest National Laboratory (PNNL), Richland, Wash.

Dr. Rodland, an NIH reviewer since 1998, will also present the upcoming teleconference "New NIH Short Form: Best Tactics" on March 31 at 1 p.m. EST.

The preceding information is of necessity general in nature and may not apply to every case: obtain professional advice for your particular situation.



This eAlert is brought to you as an informational training tool by the Principal Investigators Association, which is an independent organization. Neither the eAlert nor its contents have any connection with the National Institutes of Health (NIH), nor are they endorsed by this agency. All views expressed are those personally held by the author and are not official government policies or opinions.

Comments (18)
...
written by randomreviewer, March 30, 2010
I'm still confused by Dr. Rodland's explanation. As a reviewer myself, I interpret "significance" as the significance of the topic being studied; e.g., I might consider breast cancer a highly significant topic worth investigating. The overall impact score, on the other hand, reflects how well the proposed experiments would make a dent in the topic of breast cancer. If the reviewers perceive flaws or deficiencies in the proposed research, environment, or PI, then the overall impact score will be low even if the topic is highly significant.
...
written by the captain, March 30, 2010
I find randomreviewer's explanation very reasonable and ALSO quite consistent with Dr. Rodland's.
...
written by LPS, March 30, 2010
Randomreviewer and Dr Rodland said exactly the same thing. I am confused as to why randomreviewer would remain confused!
...
written by wwlytton, March 30, 2010
I review frequently and have interpreted 'significance' as follows, in light of what I have been told by SROs and chairs. It is of course typical for reviewers to mention, e.g., cancer, AIDS, or schizophrenia as the area of focus. However, to be significant, a study must at least aim for a major, exciting central issue. To get the highest significance score, a proposal should be what the NSF calls 'transformative' -- able to change, or at least bend, the dominant paradigm. The discussion of impact above seems to me quite complete. In practice it is the summary score, yet it is definitely NOT an average of the other scores, since some aspect may carry greater import (impact) for a particular proposal. Sorry to perhaps muddy the waters further.
...
written by Brer Rabbit, March 30, 2010
Vis-à-vis the Significance vs. Impact question, NIH has met the tar baby and it is stuck.
...
written by jaderesearch, March 30, 2010
I have a slightly different, and less arcane, perspective. Rather than the impact of the research team's experience, my understanding was that "impact" means roughly the same as "broader impacts" for NSF and "relevance" on grants.gov, in which the benefit of the work must be important in a broader sense in addition to the scientific merit. I am not generally an NIH person, however. I think the funding agencies have been trying to "harmonize" some of their terminology and processes, much as the E.U. has tried to "harmonize" the varied laws and regulations of the individual EU states; in light of the confusion, apparently with about the same degree of success!
...
written by long-time reviewer, March 30, 2010
These comments illustrate the broken state of the study section system at NIH. Reviewers are left to interpret the standards as they will and are led to believe they are the experts. The new criteria and shorter grant forms were a valiant effort to improve the review system, but the CSR has failed to adequately choose, vet, train, or monitor the quality of reviewers. The result is the widespread perception that applying for a grant these days is like a craps shoot. Approval and funding are very nearly random.
...
written by Bioscientist101, March 30, 2010
Just an observation about Long-time reviewer's "craps shoot" comment. Your analogy with a dice game caused me to consider the relative probabilities of success. In a fair game of dice, with a single unbiased die, you have a one-sixth, or 16.7%, chance of being successful (i.e., getting the number/score you want). Some of my colleagues argue that this is a better chance than we have with an NIH R01 proposal. But is it the review process that is the hold-up, or the lack of adequate funding for R&D? In other words, would the vagaries of the proposal review process be as much of an issue if one third of all scientifically valid proposals were funded?
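
For readers who want the arithmetic spelled out, a trivial sketch (the one-in-three payline is the commenter's hypothetical, not an actual NIH figure):

```python
# A back-of-the-envelope check of the dice analogy. The one-in-three payline
# is the hypothetical from the comment above, not an actual NIH funding rate.

die_success = 1 / 6           # one winning face on a fair, unbiased die
hypothetical_payline = 1 / 3  # "one third of all scientifically valid proposals"

print(f"Fair die:             {die_success:.1%}")           # 16.7%
print(f"Hypothetical payline: {hypothetical_payline:.1%}")  # 33.3%
```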
...
written by BrainyJoyWoman, March 30, 2010
Unfortunately, we are in the middle of an experiment that is probably failing. Someone, somewhere, failed to appreciate the role of NIH reviewers as mentors for grant applicants. Thus, the bullet point idea. The result is certainly that this is more of a crapshoot than ever. This was probably not the best time to also implement the 2-strikes rule although the shorter applications, I think, are a success. I also struggle, as a reviewer, with the distinction between impact and significance and find myself rereading the definition when I score a grant. Having said that, the truly great grants get funded, at least in my study section. It is at the almost-great level that things get very dicey.
...
written by NIH Panel Member, March 30, 2010
It is clear that the distinction between the two can be a bit of a problem. One should remember that Significance, as others have stated, is importance. Significance doesn't simply equal importance or interestingness, however. The lethal dose of a drug is important, but not significant in terms of grantpersonship. Similarly, why we yawn, and why we can make others yawn just by thinking of yawning (including reading this), is interesting, but likely not important or significant (even though it can be done between species). Impact is how the work would move the field forward if it all worked out: will it propel the field forward and far, or just inch it along?
...
written by FromBothSidesNow, March 30, 2010
Reviewers are being asked to make macro, domain-level judgments about each application relative to many other applications, not to score discrete micro criteria. Judgment does involve measurement (e.g., to what extent does this applicant seem to have taken into account the most important relevant literature?), but it cannot be reduced to measurement (this app has 87 references). So it should neither surprise nor discomfit us that there can be various good ways to describe how different reviewers reach those judgments. My two cents' worth: Significance of the proposed problem (or questions, or hypotheses being tested) versus Impact of the solution (or method of answering, or adequacy and strength of the tests). The NIH and NSF criteria don't match up exactly, but why must they? These agencies have different missions.
...
written by High Roller, March 30, 2010
I have to agree with long-time reviewer that the major reason submitting grants has become a crap shoot is that CSR has failed to adequately choose, vet, train, or monitor the quality of reviewers. I've seen this on review panels and in reviews of my own grants. In the latter, I've seen wildly different scores and completely contradictory opinions among reviews in the same panel and across panels. CSR reviewers seem to have much less expertise than RFA reviewers, sometimes applying an understanding of my field (posttraumatic stress disorder) that seems to be based on TV and movie depictions rather than knowledge of mental disorders. I also see reviewers ignoring the scoring guidelines, giving a score that means "strong, but with numerous minor weaknesses" and writing "no weaknesses" in the review bullet.
...
written by SBIR Center, March 30, 2010
If you think it was bad in the past, just wait until you get the next round of reviews back. CSR has decoupled the scored criteria from the Overall/Priority scores. It may not make any difference whether you get good or bad scores on the main criteria; the score used to place your grant request into the "WIN" or "LOSE" column is the Priority score. We have actually had to develop innovative strategies to deal with this issue, and with the very difficult 6-page limit on Research Strategy (for the SBIR/STTR crowd), for our How To Win SBIR Awards(SM) workshops and NIH/PHS SBIR-ToolKit(TM)s. John Davis, General Manager, SBIR Resource Center(R)
...
written by Moon.Song, March 30, 2010
I agree 100% with the comments made by the long-time reviewer. The current reviewing system is indeed flawed. CSR mostly does not know who the experts in the field are. Unable to recruit the experts, they assign proposals to young investigators who are not familiar with the studies and do not have the knowledge to comment on the proposal. Thus they comment however they wish, which is not proper. The ability to recruit more experienced reviewers with high integrity and expertise is highly desirable. The NIH staff explanations of significance and approach are also not proper. Approach covers the many ways to assess the proposed study, and most know the methods. Significance, however, is different, and it should be weighted most highly.
...
written by Analageezer, March 31, 2010
This confusion is most easily cleared up by considering an analogy: If your postdoc wears an attractive red dress to the department cocktail reception, she will have an IMPACT on everybody. But the SIGNIFICANCE of the outfit depends on how far her relationship has already developed with each different person in the room, and what she is now signaling to each separate person. Grant proposals work exactly the same way.
...
written by frequentreviewer, March 31, 2010
To clarify a point raised above: the Impact score is NOT the average of all of the individual scores; it reflects the reviewer's overall evaluation of the proposal as a composite.
...
written by Anonymous, March 31, 2010
One further concern with the changes, as I understand them, is that still fewer applications will be discussed at study section. This compounds the problem, because a proposal can be triaged based on a fundamental misunderstanding by one reviewer. Without discussion, there is little or no opportunity to save what might be a good, even ground-breaking, proposal sunk by the inexperience or careless reviewing of one person. That's unfair, but more importantly it is a potentially serious hindrance to progress overall and risks the loss of very solid talent from the biomedical research field (the earlier reference to the loss of mentoring in the process is relevant here).
...
written by Fantastic Audio, April 03, 2010
As is often the case in our evolving, overly politically correct, conservatively punctilious, pretentiously virtuous, etc., society (especially in academia), a modicum of practical intelligence, i.e., common sense (which unfortunately isn't all that common), would go a long way toward untangling this issue of plagiarism. It slays me that this issue has largely become one of linguistic gamesmanship, where one's ability to conjure up novel and/or different phraseology becomes the price of admission for being deemed a fair user of another's written material. Yep, this approach really adds to a society's knowledge! Hey, if it's not blatant, for the sake of Adam let it go. Note that I'm not advocating zero referencing. A major related issue that these people have failed to consider is the exponential rate at which technology will spiral the written word throughout the universe, which will significantly diminish the time it takes for published works to become more or less public knowledge. In as little as ten years, this entire issue may become quite trite and largely irrelevant. In this regard I have thought for a number of years now, especially where students' writing and publications are concerned, that academicians should be significantly adjusting their antiquated perspectives and rules on these fronts.
