Tech thrives on disruptive innovation, so it comes as no surprise that publishers regard proposals from publishing tech with suspicion: after all, why change something that works? At least part of this resistance can be traced to a tacit assumption that the current system has been in place for a very long time. A look at history disabuses us of this notion.
Consider modern peer review. For anyone even loosely associated with academia, the process of submitting a draft to a journal, where it will be anonymously evaluated by two or three referees, is probably familiar and taken for granted as simply the way things are done. In a recent publication in the history-of-science journal Isis, Melinda Baldwin, a senior editor of Physics Today, argues that this norm of compulsory peer review is more recent than most of us would imagine. Here’s her narrative in brief.
Although sending a submission to experts for their comments can be traced back at least to Henry Oldenburg, the first Secretary of the Royal Society, for centuries peer review was neither systematically carried out nor seen to bestow scientific credibility. A more familiar system appeared in 1831, when William Whewell proposed that two Fellows of the Royal Society comment openly on submissions to the Philosophical Transactions in the new journal Proceedings of the Royal Society of London. Whewell’s proposal to publish referee reports was never picked up, but it became increasingly common to send submissions out to anonymous referees.
Still, even in the mid-20th century, it was not uncommon for editorial decisions to be made entirely in-house, with editors consulting external referees only occasionally, when they deemed it essential. The shift to external peer review came about because of the growing workload editors faced. For example, editors at Science reasoned that “the job of refereeing and suggesting revisions for hundreds of technical papers is neither the best use of their time nor pleasant, satisfying work.” It was simply this mounting burden that made external review popular.
As for the perception that peer review is crucial to scientific legitimacy, Baldwin argues that we need to look at the specific history of the late 20th-century United States. The Cold War led to a ballooning of science spending, and this increase soon drew public scrutiny and skepticism. Under pressure to become more accountable to non-scientific political actors, scientists touted modern peer review as the only mechanism that could ensure both scientific autonomy and public accountability. At the end of this saga, Baldwin argues, it came to be accepted that any [scientific] organization had to rely on external referees in order to judge “good science” properly.
The point of looking at this fascinating history is not to argue simplistically that peer review should be done away with because it is new or because its history is embedded in a particular political culture. After all, any aspect of how things are done can be deconstructed in this manner. What histories like this do make clear, however, is that none of the rules we abide by and the institutions that bind us is eternal or set in stone. Any proposed change, however initially surprising, should be given a fair shot instead of being resisted out of procedural inertia or complacency. Change is always around the corner, however solid our present world may appear to be.