As the leading supplier of basic research money in the US, the National Institutes of Health likes to get credit for its work. But not too much credit.
For the second time in three years, the NIH’s top official in charge of research grants has pleaded with scientists to stop crediting the agency in their journal publications if the NIH was not actually involved in supporting their work.
The practice, said Michael Lauer, the NIH’s deputy director in charge of extramural research, might “distort the true effect of NIH funding”. Among other things, Dr Lauer said, that made it harder for the NIH to prove its value to Congress and taxpayers.
What might seem a fairly straightforward part of writing a research article, Dr Lauer acknowledged, could actually become fairly complicated. Scientists often received financial support from a number of sources, and it was not always easy to decide how to allocate credit.
“In some cases,” he told the research community in a blog posting, “it may be difficult to identify which awards directly support a specific activity, especially in situations such as for large programme projects with multiple components.”
The NIH required researchers to credit the agency on articles that derived from its funding, Dr Lauer said – but, just as importantly, "overcitation" was prohibited, because accurate attribution was critical to the agency's accountability processes.
Faulty listings of grant support also greatly complicated the work of researchers who volunteered to review NIH grant applications, Dr Lauer said, because they relied on that information to assess applicants' productivity.
Paul Fuchs, a professor of otolaryngology at Johns Hopkins University who has served on multiple NIH grant review panels, said he understood the NIH's concern, but also saw ways the agency could do more to help researchers navigate the complexity of citing their funding.
If researchers listed all their funders and collaborators in their publications, Professor Fuchs said, that could greatly complicate the job of determining a fair allocation of credit. But if researchers gave a shorter list, that could short-change smaller labs that contributed to scientific findings, he said.
Professor Fuchs added that, even as a volunteer reviewer, he took the time to try to figure out who really contributed to a research project when assessing which applications were worthy of more support. That process “at least gives me a way of trying to evaluate how a big powerful lab with lots of collaborations may in fact really stack up to a small lab without collaborations”, he said.
The NIH did not collect data on the frequency of overcitation problems, an agency spokesperson said. But in some cases, they added, the mistake was obvious, such as when a published study on research involving human participants gave credit to a grant that did not allow for human participant research.
Other mistakes appeared to involve researchers trying too hard to head off any perception of hiding financial conflicts of interest, the spokesperson said.
The NIH did not have any indication, they added, that the practice of overcitation could be related to the China Initiative – the Trump-era campaign, which the NIH assisted, to identify and prosecute scientists of Chinese descent, often on the legal grounds that they took money from both US and Chinese sources without fully disclosing it.