
Peer-Reviewed Pro-Intelligent Design Articles and the “Insurrection” Against Journal Impact Factors

Last year, a correspondent e-mailed us to challenge our list of peer-reviewed pro-ID articles, pointing out that some of the journals are apparently not listed in the Thomson Reuters Journal Citation Reports. He claimed that, as a result, our list was “oversold.”

As occasionally happens, his e-mail got lost in my inbox. Recently, while cleaning out the inbox, I stumbled across it and decided to reply. It was perhaps fortunate that I waited until 2013, because just recently there has been an “insurrection” among leading scientists worldwide against the journal impact factor [“JIF”] rankings calculated by Thomson Reuters.

In replying to the commenter, I first explained that our list of papers isn’t “oversold” at all. The page says that it lists “Scientific Publications Supportive of Intelligent Design Published in Peer-Reviewed Scientific Journals, Conference Proceedings, or Academic Anthologies.” And that’s exactly what it does.

But what about the question of journal impact factors? Well, many of the listed papers come from journals with very solid JIFs. For example, the Journal of Molecular Biology has a JIF of 3.981; Annual Review of Genetics has a JIF of 22.233; and the Quarterly Review of Biology’s JIF ranks it 3rd out of 84 journals in biology.

Regarding the journals that my correspondent points out are not listed in the Thomson Reuters Journal Citation Reports: the reason we say those papers were peer-reviewed is that the journals and/or the authors informed us the papers went through peer review. Whether a journal appears in the Thomson Reuters Journal Citation Reports does not determine (a) whether a paper was peer-reviewed, (b) whether it was authored by a credible scientist, (c) whether the paper itself makes valid arguments, or (d) whether the journal is, overall, a credible journal. In fact, all these metrics — journal lists, “citation reports,” “impact factors” — have been widely decried by credible scientists who find them outdated and error-prone when it comes to (a)–(d).

Last month the journal Science published a news article, “In ‘Insurrection,’ Scientists, Editors Call for Abandoning Journal Impact Factors,” noting that:

More than 150 prominent scientists and 75 scientific groups from around the world today took a stand against using impact factors, a measure of how often a journal is cited, to gauge the quality of an individual’s work. They say researchers should be judged by the content of their papers, not where the studies are published.

We covered this story in some detail here at ENV, but it’s worth highlighting again. The Science article explained the origin of JIFs:

Journal impact factors, calculated by the company Thomson Reuters, were first developed in the 1950s to help libraries decide which journals to order. Yet, impact factors are now widely used to assess the performance of individuals and research institutions. The metric “has become an obsession” that “warp[s] the way that research is conducted, reported, and funded,” said a group of scientists organized by the American Society for Cell Biology (ASCB) in a press release. Particularly in China and India, they say, postdocs think that they should try to publish their work in only journals with high impact factors.

The problem, the scientists say, is that the impact factor is flawed. For example, it doesn’t distinguish primary research from reviews; it can be skewed by a few highly cited papers; and it dissuades journals from publishing papers in fields such as ecology that are cited less often than, say, biomedical studies.
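For readers who may be unfamiliar with how the number is computed, the impact factor is, roughly speaking, a mean. A journal’s JIF for a given year is:

    (citations received that year by items the journal published in the previous two years) ÷ (number of citable items the journal published in those two years)

Because it is an average computed over a two-year window, a handful of outlier papers can dominate it, a point the next statement makes explicit.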

In a statement, the American Society for Cell Biology largely agreed:

Since the JIF is based on the mean of the citations to papers in a given journal, rather than the median, a handful of highly cited papers can drive the overall JIF, says Bernd Pulverer, Chief Editor of the EMBO Journal. “My favorite example is the first paper on the sequencing of the human genome. This paper, which has been cited just under 10,000 times to date, single handedly increased Nature’s JIF for a couple of years.”

“The Journal Impact Factor (JIF) was developed to help librarians make subscription decisions, but it’s become a proxy for the quality of research,” says Stefano Bertuzzi, ASCB Executive Director, one of more than 70 institutional leaders to sign the declaration on behalf of their organizations. “Researchers are now judged by where they publish not by what they publish. This is no longer a question of selling subscriptions. The ‘high-impact’ obsession is warping our scientific judgment, damaging careers, and wasting time and valuable work.”
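To make the mean-versus-median point concrete, here is a minimal sketch in Python with hypothetical citation counts (the numbers are invented purely for illustration, not drawn from any real journal):

    from statistics import mean, median

    # Hypothetical citation counts for ten papers from one journal's
    # two-year window -- invented numbers chosen only to show the skew.
    citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 9800]  # one blockbuster paper

    print(f"Mean (what a JIF-style average reports): {mean(citations):.1f}")  # 981.8
    print(f"Median (what a typical paper received):  {median(citations)}")    # 2.0

A single heavily cited paper pulls the mean hundreds of times above what the typical paper actually received, which is exactly the distortion Pulverer describes with the human genome paper.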

Nature also covered the story, noting that the JIF metric “bears little relation to the citations any one article is likely to receive, because only a few articles in a journal receive most of the citations. Focus on the JIF has changed scientists’ incentives, leading them to be rewarded for getting into high-impact publications rather than for doing good science.” The article quoted structural biologist Stephen Curry of Imperial College London, who complained, “I am sick of impact factors and so is science.” Nature added:

Even the company that creates the impact factor, Thomson Reuters, has issued advice that it does not measure the quality of an individual article in a journal, but rather correlates to the journal’s reputation in its field. (In response to DORA, Thomson Reuters notes that it’s the abuse of the JIF that is the problem, not the metric itself.)

These arguments are nothing new. In 2005, an editorial in Nature stated that “Research assessment rests too heavily on the inflated status of the impact factor.” Nature, of course, has an extremely high JIF, but its editorial looked at its own JIF to show how the metric can be a skewed measure of a journal’s true merits:

For example, we have analysed the citations of individual papers in Nature and found that 89% of last year’s figure was generated by just 25% of our papers.

The most cited Nature paper from 2002-03 was the mouse genome, published in December 2002. That paper represents the culmination of a great enterprise, but is inevitably an important point of reference rather than an expression of unusually deep mechanistic insight. So far it has received more than 1,000 citations. Within the measurement year of 2004 alone, it received 522 citations. Our next most cited paper from 2002-03 (concerning the functional organization of the yeast proteome) received 351 citations that year. Only 50 out of the roughly 1,800 citable items published in those two years received more than 100 citations in 2004. The great majority of our papers received fewer than 20 citations.

These figures all reflect just how strongly the impact factor is influenced by a small minority of papers — no doubt to a lesser extent in more specialized journals, but significantly nevertheless.

However, we are just as satisfied with the value of our papers in the ‘long tail’ as with that of the more highly cited work. (emphasis added)

The editors concluded: “Impact factors don’t tell us as much as some people think about the quality of the science that journals are publishing.”

Shortly before he died, Stephen Jay Gould co-wrote a similar comment:

Automatically rejecting dissenting views that challenge the conventional wisdom is a dangerous fallacy, for almost every generally accepted view was once deemed eccentric or heretical … The quality of a scientific approach or opinion depends on the strength of its factual premises and on the depth and consistency of its reasoning, not on its appearance in a particular journal or on its popularity among other scientists.

Unfortunately, many people seem eager to forget Gould’s advice. They would love to be able to automatically reject ID, without considering its evidence and arguments, because it hasn’t received sufficient support in one particular journal, ignoring the fact that ID proponents have published sound research supporting their ideas in other credible journals.

Thus, JIFs aren’t a measure of the merits of intelligent design. Rather, the emphasis placed on JIFs by certain ID-critics is a measure of their own unwillingness to consider the merits of intelligent design.


Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.
