Is Nothing Sacred? Public Science 2.0 Revolutionizes Scientific Practices
The revolution in open access and peer review is demonstrating that scientific practices and standards are not set in stone. In fact, the new openness is dragging some dark secrets of traditional peer review into the light.
It's nostalgic, in a way: science returning to the public arena, instead of being hidden away in academic cloisters. Public scientific lectures and demonstrations were all the rage in the 17th through 18th centuries, as Carsten Könneker and Beatrice Lugger remind readers of Science. By the early 1800s, however, "researchers began to conduct their work away from the public eye." Scientists communicated among themselves in jargon, and "Scientific journals, using peer review, became the central feature of the scientific process that remained largely hidden from the public."
That's all changing. The Internet age is forcing scientists back into the public eye, some kicking and screaming, others celebrating. One lesson has emerged, among others: there was never anything sacred about traditional scientific practices.
When Vitek Tracz blazed a new trail of open-access journals with BioMed Central, and Michael Eisen co-founded the Public Library of Science (PLOS), many Internet users got their first taste of instantly available scientific papers online. Writing for Science, Tania Rabesandratana began her look into open-access publishing with a shocking prediction:
"Nobody reads journals," says science publisher Vitek Tracz, who has made a fortune from journals. "People read papers." Tracz sees a grim future for what has been the mainstay of scientific communication, the peer-reviewed print journal. Within the next 10 years, he says, it will cease to exist. (Emphasis added.)
In its October 4 issue, Science includes a special section called "Communication in Science: Pressures and Predators," focusing on topics that would tend to justify its continuing existence as a print journal using traditional peer review: "the lack of scrutiny at open-access journals, the rarity of published negative studies, and publishing sensitive data."
The lack of scrutiny was demonstrated recently in a spoof reminiscent of Alan Sokal's famous hoax in 1996. This time, not just one, but "more than a hundred lower-tier scientific journals accepted a fake, error-ridden ... study for publication in a spoof organized by Science magazine," says National Geographic News. John Bohannon's "sting operation," according to his Science report, was intended to showcase the lack of rigorous standards in the "wild west" of open-access peer review.
The joke goes both ways, though. Richard Stone and Barbara Jasny, writing in Science, note that some print journals fell for the hoax, too. Martin Eve at The Conversation points out that the top-tier online journals did not fall for it. He and Nature identify serious failings, moreover, in the traditional journals:
Peer review is failing to ensure data quality, finds a study ... The analysis, led by the US National Institute of Standards and Technology (NIST), found that about one-third of papers submitted to five physical-chemistry journals between 2003 and 2013 contained erroneous or incomplete data, which can make it hard to replicate findings and can lead to poor regulatory decisions. Peer review does not have the capacity to evaluate the current flood of data, say co-authors Michael Frenkel and Robert Chirico, chemists at NIST in Boulder, Colorado. "The rate of errors is an elephant in the room," says Frenkel.
That flood of data is undermining another pillar of scientific tradition. "Science is in a reproducibility crisis," Fiona Fidler and Ascelin Gordon write at The Conversation. Among the causes, they say, are the "publish or perish" pressures on scientists in academia, publication bias by journals lusting for "high impact" material, the lack of sufficient audits to catch fraud, and more. It's evident that traditional journals can't throw stones.
Another serious error by traditionalists is the frequent failure to report negative results. Knowing what doesn't work is arguably as important as knowing what does. Yet, as Jennifer Couzin-Frankel points out in the Science series, cherry-picking is rampant in publication. Sometimes a team will publish one positive result out of a slew of negative results, in order to shine with a "breakthrough."
Then there's the lack of peer review for software. Increasingly, scientists use homegrown or off-the-shelf software for data analysis, and that code is not peer reviewed. Nature reported that Mozilla is trying to fix that with a new initiative gathering IT volunteers to check code used for research, offering another tier of peer review. There are doubts, though, about whether the scientists will cooperate, and whether the programmers will understand the science well enough to find flaws.
There's another sacred tradition on the chopping block: the annual meeting. Jeffrey Mervis explains the controversy in Science:
Nearly 22,000 scientists converged on San Francisco last December for a meeting of the American Geophysical Union (AGU). Local hotels and restaurants feasted on the biggest annual gathering of physical scientists on the planet, and AGU turned a tidy profit on what was its largest meeting ever. But in a world in which the main currency of information is now bytes, have such megaconclaves become an endangered species?
If the traditionalists argue that some open-access journals act like predators, what about "predatory conferences," as Jon Cohen writes in Science? Some of these science conclaves "flatter, but don't deliver" scientific value, serving only to attract tourism to their cities.
Traditional peer review itself has come under fire, Rabesandratana shows in her article. Vitek Tracz has more harsh words for it:
In another bold strike, Tracz is taking aim at science's life force: peer review. "Peer review is sick and collapsing under its own weight," he contends. The biggest problem, he says, is the anonymity granted to reviewers, who are often competing fiercely for priority with authors they are reviewing. "What would be their reason to do it quickly?" Tracz asks. "Why would they not steal" ideas or data?
Anonymous review, Tracz notes, is the primary reason why months pass between submission and publication of findings. "Delayed publishing is criminal; it's nonsensical," he says. "It's an artifact from an irrational, almost religious belief" in the peer-review system.
If Tracz can be criticized for saying this because of the money he makes with open-access journals, then why not turn the same criticism against Nature and Science, which justify their profits with arguments for maintaining tradition?
Last month, Science exposed a "colossal problem" in traditional peer review. At least in biomedical journals, a substantial number of published papers were "filled with biased reporting and basic errors." Peer review is good in theory, but "perfectly awful" in practice:
But while many pieces of the publications puzzle are yielding to scrutiny, the peer-review process itself is not. "Nothing much has changed in 25 years," says Ana Marušić of the University of Split in Croatia, who studies research methodologies. "It's always the same story." Interventions to improve peer review fail again and again. Mentorship to train reviewers doesn't make a difference in their ability to spot problems in papers. And there is still scant evidence that peer review makes published papers any stronger.
With so many failings in traditional practices coming to light, open-access advocates are pointing to the advantages of what some call Public Science 2.0: rapid publication, online commenting by other scientists (post-publication peer review), and wide access to the public. The online data revolution is re-opening doors for the "citizen scientist," Könneker and Lugger argue: "With digital options like sharing project ideas and sampling data, citizen science today is potentially available to all."
That raises another issue: who owns research? If the public paid for it, why can't they look at it? Open-access advocates feel so strongly that taxpayer-funded research should be in the public domain that some are taking matters into their own hands. John Bohannon (the spoofer) writes in Science about Michael Eisen of PLOS re-posting some copyrighted papers on the recent Mars mission as a protest against "journal paywalls" that end up making taxpayers pay twice for information:
Eisen says he was "astonished" to discover that the papers were behind Science's paywall, and that NASA should have pushed to make them freely available because many of the authors were government employees. "The research was funded with $2.5 billion of tax money," Eisen says. "It's more than just a missed opportunity for NASA. It should be a scandal."
It's going to be difficult for Science to argue against that; indeed, it may be hopeless to try to stop it. Print journals have, in fact, taken steps to publish their papers online after a time delay, typically a year -- but by then it's old news. The Twitter generation expects rapid information.
As with any revolution, there are new problems to face, some serious. Bohannon's spoof illustrated the need for higher standards of peer review. Even more important, some sensitive research should not be publicly available on grounds of national security. As David Malakoff discussed in Science, even harmless-looking pure research might turn out to have dual-use applications (i.e., for good or evil), such as genetic engineering that could advance biological warfare if published. Archaeologists might invite looting if they publish maps of sensitive sites.
These problems can be solved with some intelligent design, you might say. Indeed, they must be solved, because the juggernaut of open access is too far down the track to stop now. Print journals know this. They're raising legitimate concerns at the same time that they are enlarging their online footprints.
The moral of the story? There is nothing sacred about scientific traditions. Practices can and do change, sometimes radically. As they changed in the 19th century, and then in the 20th (especially because of World War II), scientific practices are "evolving" again (by artificial selection, that is -- a form of intelligent design). Keep that in mind when critics of intelligent design argue that ID is not scientific because (like Darwin himself, they always fail to add) it doesn't publish in peer-reviewed journals.*
*For a response to this false claim, see Casey Luskin's list of peer-reviewed, pro-ID papers.
Image credit: Graham Horn/Wikicommons.