Bringing Rigor Back to Science: SciCrunch Supports New NIH Requirements for Biological Citations

By Tiffany Fox

Image: Anita Bandrowski

Is your grandmother a better scientist than you are?

University of California San Diego researcher Anita Bandrowski, a neuroscientist and specialist at the Center for Research in Biological Systems (CRBS) at the Qualcomm Institute, suspects the answer is yes -- particularly for researchers who rely on biological reagents (such as specific cell lines, antibodies, chemical agents or types of mice) for their research.

Consistency and accuracy in scientific citation are hallmarks of the scientific method, but Bandrowski argues that a grandmother writing down a recipe for snickerdoodles is more apt to provide a detailed description of her ingredients than the typical scientist is when citing biological reagents. These reagents often have cumbersome names that differ from laboratory to laboratory or change over time, which makes it difficult to reproduce research results or reuse the data if exact specifications are not given.

“The way that people refer to these reagents is rooted in the scientific publications of the 1930s,” says Bandrowski, “which asked researchers to cite them by including the name of the city and state or country the manufacturer was in. That’s useful if you’re going to call directory assistance, but now there’s this thing called the Internet, and we get antibodies and mice from an online catalog. These manufacturers now have thousands of products they are selling, and if you don’t use the online catalog number, you don’t get that exact antibody or mouse.”

Instead, researchers often truncate the names of reagents in their methods sections, if they name them at all, and Bandrowski says that some researchers even make up names for the reagents they’re using, especially when the official names are long and difficult to remember or pronounce. She cites an example from a 2016 paper whose methods section stated: “NOD.PkSCID.IL2 receptor gamma chain null mice were purchased from Jackson Laboratories (Bar Harbor, ME)”; however, this name is not found on the Jackson Laboratory website.

Although the principal author of the paper was able to provide Bandrowski with the stock number of the mouse within a few hours of her query (suggesting that the information remains attainable for some time after publication), she notes that the correct information does not appear in the paper itself and required the extra step of tracking down the author.

“The analogous thing is calling your colleague ‘Bobby’ in the lab,” says Bandrowski. “However, when you go to publish a manuscript, it is best to find out whether Bobby’s name is Robert, Roberto or Bob. Similarly, with a transgenic organism (an organism that contains artificially introduced genetic material), it is important to check back with a naming authority, like Jackson Labs or FlyBase, before publishing.”

Conversely, says Bandrowski, “recipe sites have links to the ingredients needed so you can find them and replicate this ‘experiment.’ The question is, why aren’t we doing that in science?”

Now researchers are required to do that very thing in science -- at least if they hope to receive funding from the National Institutes of Health (NIH). Last month, as part of its Rigor and Transparency Initiative, NIH began requiring that biological reagents be authenticated. Applicants for federal funding must describe the method they plan to use to authenticate these resources, such as short tandem repeat (STR) profiling for authenticating cell lines.

But the first step in authentication is to nail down what reagent is being used. In other words, jokes Bandrowski, “you now need a UPC for that mouse.”

Mice, of course, don’t come stamped with a UPC, and that’s where SciCrunch comes in. SciCrunch is a free, open-access technology platform developed at the Qualcomm Institute in collaboration with the National Institute of Diabetes and Digestive and Kidney Diseases and CRBS’s Neuroscience Information Framework (NIF). It evolved from an effort to support the Minimal Data Standards required by the scholarly publisher Elsevier for neuroscience research, and it therefore already covers many of the biological reagents that NIH now requires to be authenticated. (In a recent newsletter, the National Institute on Drug Abuse recommended SciCrunch as a good resource for meeting the new NIH guidelines.)

“At publication, a researcher would go to SciCrunch – which aggregates the proper long names and catalog numbers from naming authorities – and would verify that the short names are correct, finding the proper citation format,” explains Bandrowski. “This way, if they run into problems -- such as insufficient information about a reagent to nail down a single product, or incorrect information that results in no matching mice -- it’s before the paper is published, not after. This information is available in the lab that did the experiments and can at that point be provided in the paper. This gets you a heck of a lot closer to that recipe site where you’re able to click on the star anise -- and at least find what you need to buy -- in order to make that roast duck.”
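For readers who want a sense of what that lookup step involves, here is a minimal sketch in Python. It assumes the SciCrunch resolver accepts a “.json” suffix for machine-readable records -- verify that against the current SciCrunch documentation -- and uses the RRID for the Jackson Laboratory’s NSG mouse purely as an example identifier.

```python
# Minimal sketch: resolve a Research Resource Identifier (RRID) through
# the SciCrunch resolver to retrieve the authoritative record for a reagent.
# The ".json" suffix on the resolver URL is an assumption; check the
# current SciCrunch documentation before relying on it.
import json
import urllib.request

# Example RRID: the Jackson Laboratory's NSG mouse (stock no. 005557),
# the kind of strain the 2016 paper described only informally.
rrid = "RRID:IMSR_JAX:005557"
url = f"https://scicrunch.org/resolver/{rrid}.json"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

# Print the start of the record; the proper long name and catalog number
# can then be copied verbatim into the paper's methods section.
print(json.dumps(record, indent=2)[:600])
```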

Cellosaurus, the cell line database that is part of SciCrunch, can also be used to check whether scientists have raised questions about a particular cell line, such as known contamination or cell line drift, in which cells change over time or become exposed to viruses and other variables that can corrupt an experiment.
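Such checks can also be scripted. The sketch below assumes the api.cellosaurus.org endpoint and its plain-text record format, in which “CC” lines carry curator comments (including warnings about problematic lines); consult the current Cellosaurus documentation before depending on either.

```python
# Minimal sketch: pull a Cellosaurus record and surface curator comments,
# which is where contamination and misidentification warnings appear.
# The api.cellosaurus.org endpoint and its "txt" format are assumptions;
# verify them against the current Cellosaurus documentation.
import urllib.request

accession = "CVCL_0030"  # HeLa, as an example; substitute your cell line
url = f"https://api.cellosaurus.org/cell-line/{accession}?format=txt"

with urllib.request.urlopen(url) as response:
    record = response.read().decode("utf-8")

# "CC" lines hold free-text curator comments; "DI" lines list diseases.
for line in record.splitlines():
    if line.startswith(("CC", "DI")):
        print(line)
```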

Maryann Martone, a professor emeritus of neuroscience at UC San Diego, president of the Future of Research Communications and e-Scholarship (FORCE11) and an affiliate of the Qualcomm Institute, says the use of so-called Research Resource Identifiers (RRIDs) also makes it possible “to do something that should be simple, but which currently is not: Find all papers that used mouse X, antibody X, etc.”

Martone continues: “Such information is valuable for funding agencies and resource providers, who would like to track the use of their investments. But more importantly, it also allows information to be propagated when things go wrong. If we find contamination in a spice, for example, we can track down which products use the spice and where they’ve been sold. We should be able to do the same for the scientific literature.”
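That kind of tracking is straightforward to prototype once RRIDs appear in the literature. The sketch below searches Europe PMC, a public index of biomedical papers, for a given RRID string; the endpoint is real, but the exact query syntax shown is our assumption, so check the Europe PMC REST documentation before depending on it.

```python
# Minimal sketch of "find all papers that used mouse X": search Europe PMC
# for an RRID string. The REST endpoint is public; the quoted-phrase query
# syntax used here is an assumption to verify against Europe PMC's docs.
import json
import urllib.parse
import urllib.request

rrid = "RRID:IMSR_JAX:005557"  # example: the Jackson Laboratory NSG mouse
params = urllib.parse.urlencode({
    "query": f'"{rrid}"',  # quoted so the identifier is matched as a phrase
    "format": "json",
    "pageSize": 25,
})
url = f"https://www.ebi.ac.uk/europepmc/webservices/rest/search?{params}"

with urllib.request.urlopen(url) as response:
    hits = json.load(response)

# List the matching papers; funders could aggregate these counts to
# track the use of the resources they paid for.
for paper in hits["resultList"]["result"]:
    print(paper.get("pmid", "n/a"), "-", paper.get("title", ""))
```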

Ensuring research reproducibility, notes Bandrowski, is far from a purely intellectual pursuit: A lack of diligence and consistency can have real-world implications that erode the public’s confidence in scientific research. She points to work done several years ago by oncology researcher C. Glenn Begley, then at Amgen, who led a team that attempted to replicate the results of 53 landmark cancer studies from laboratories around the world. The team succeeded in replicating only six, citing a lack of methodological precision among the flaws that compromised reproducibility.

Academic journals have contributed to the problem, Bandrowski points out, by reducing the space allotted for research methodology and thereby diminishing its importance.

“This has been an ongoing issue,” she adds, “and there’s been a big fallout and a groundswell of enthusiasm for doing something about it. I think there’s going to be a positive shift in terms of how we do science and review science.

“Public money should be going to things in the public good,” she continues. “We’re getting paid as scientists to try and cure cancer and other diseases, and that shouldn’t include wasting time trying to figure out which mouse someone used. The last 30 years have seen a shift away from what science is and should be, which is a lot of hard work. Science has for too long been these cool ideas, these TED talks where people don’t worry about the methods. I think the methods are now coming back to the center of science.”
