The concept of peer review has been deconstructed and reconstructed by countless others, but I still feel compelled to add my two cents, particularly in light of recent journal articles on how scientists may feel disenchanted with the whole process. Not that many articles actually get rejected outright; rather, they undergo an endless cycle of revisions aimed at adequately fulfilling the demands of random reviewers, who more often than not contradict one another, in a surreal cycle of Catch-22, snarky reviewer commentary, and a system where those who decide whether something ultimately gets printed often have distinguishable ideological or theoretical biases, inadequate knowledge of the subject to offer constructive criticism (this does not prevent them from criticizing), or no apparent need to behave in a professional, reasonable, or at least mannerly fashion [did you notice the length of this grammatically correct sentence? This gives reviewers fits!]. For those of you who have not experienced the joy that is peer review, it is the evaluation of creative work or performance by other people in the same field in order to maintain or enhance the quality of manuscripts submitted for publication. And when you’re a “scholar” of sorts, publication generally determines your future positions, tenure, salary, and countless other things that affect your professional career, your self-esteem, and the affordability of eating.
Did you know that most academic papers are never cited? And let’s not even get into the debate over how often they are read. On the other hand, why not? Let’s face it: even though publication is the pinnacle of academic achievement by which a university and your colleagues will judge you, the vast majority of people in the world, and probably even in your specific field, will never read a word you write. This has to do with the fact that the typical scientific paper is so esoteric, micro-focused, and of dubious value in application that, regardless of the significance of its findings, it has very little to contribute to the world where people work for a living and have contact with study populations that are not college undergraduates. Yeah, I’m looking at you, academics. With more than a million papers published per year and rising, nobody has time to read every paper in any but the narrowest fields, so some selection is essential. It probably does not matter much to anyone except the author whether a weak paper is published in an obscure journal. It goes on the curriculum vitae and is never heard from again.
Referees’ evaluations usually include an explicit recommendation of what to do with the manuscript or proposal, often chosen from options provided by the journal or funding agency. Most recommendations are along the lines of the following:
- to unconditionally accept the manuscript or proposal,
- to accept it in the event that its authors improve it in certain ways,
- to reject it, but encourage revision and invite resubmission,
- to reject it outright.
Reviewers need to be chosen from the right field for the paper, so that their judgmental role is naturally balanced by a genuine interest in welcoming innovations within the specialist area. But I really wonder about the selection of reviewers. I once had three reviewers for a particular article that was very applied in nature. One reviewer was middle of the road, with some good constructive feedback. Another was very complimentary, stating that the work would help them in their everyday practice. The third was just a beast. That reviewer (Ms. Angry-at-the-World, who probably has a Ph.D. and has never really applied her own work) stated that my manuscript and the work described therein added nothing of significant value, then proceeded to berate me, the author, without much constructive feedback. Clearly I had touched some sort of theoretical nerve, or had encountered one of the many academics who sneer at anything that sounds vaguely applied, and they are pretty common in the wild. I took the middle-of-the-road reviewer’s recommendations and improved the manuscript. It got accepted, and the paper has turned out to actually be useful in the field; I have spoken with many people who have read it and used it. Go figure that a paper could actually be applied in the field. I wonder what the beast is doing today: sitting in an ivory tower at an antique writer’s desk, banging out another useless, hardly cited, and hardly read paper on a vintage typewriter.
For another submitted manuscript, I again had three reviewers with distinct feedback. I loved the one who stated that the paper should be rejected because my “language was too flowery” for an academic journal. Lo and behold, when I later wrote a book chapter and the book it appeared in was reviewed by a major newspaper, my chapter was singled out as the best written, precisely because of that flowery language: I made the points understandable and relatable to a wider audience. This is part of the academic disdain for popular articles, books, and information intended for mass communication. It’s like living in the Middle Ages, where you are beholden to a Guild, and god forbid you let anyone know the secrets of the trade. Stonings will ensue. The only people meant to understand most published scientific articles are those who have already been accepted by the Guild and can be entrusted with the secret knowledge, not to mention having served the requisite time as graduate-student slave labor for a sufficiently notable professor. But Mr. Toe-the-Party-Line is probably some curmudgeon who cites every sentence he writes and speaks. Probably one of those people who corrects your silly cocktail-party factoid with “well, actually….” You know who I am talking about. These are the people who decide what is good writing and good science. Bah, humbug! The fact that most papers don’t get cited means that peer reviewers have a vested interest in recommending that every sentence in your manuscript include a citation. Oftentimes, under the guise of blind review and the betterment of science, they will blatantly recommend that their own papers be cited.
I recently had a set of four reviewers who were truly among the most useless one can get. From their comments, I could tell that they were not really peers. I honestly do not think they were in the larger field that I am in, and they did not seem to understand applied work, or for that matter, case-study articles. All one reviewer could keep saying was “please add citations to your sentences.” Another did state that they were more familiar with the field of obesity (different from the one I was writing about); their review was four sentences long. Yet another stated that in a particular section of the case-study paper, “no citations are provided, and therefore most of this seems to be observational data.” That’s right: the horror of someone actually getting out into the real world to collect data and solve real-world problems clearly disturbed them. I can’t make this stuff up.
Authors, in the ostensibly blind review process, are often likely to know who the reviewer is. Even when it is not obvious, most of us try to guess the identity of the reviewers from their comments or recommendations. Should there be an open peer review system in which you know who is providing the feedback? For one thing, reviewers might be more tactful and constructive. Then again, there are some wannabe rock-star academics who would relish being known as harsh, mean-spirited reviewers, the Simon Cowells and Ann Coulters of the review process. Those types make their money by feigning intellectual superiority and validating their existence by being obnoxious, regardless of whether they know what they are talking about. Don’t be fooled into thinking that only media attention-seekers are that way. Academics are just as attention-hungry, but are more often than not frustrated by the fact that the world doesn’t understand their brilliance, although they can temporarily bask in the adulation of students, quite satisfying to many of their ego-stroking needs, until those students begin to surpass the teacher.
I have one final point to make: what about all those research studies with fake or manipulated data? How did those get through the peer review process? Yet reviewers feel the need to pick on the use of “flowery language.” During my tenure in graduate school there was one “famous” instance in the field of social psychology in which a researcher made a quick name for herself by disputing prior work and presenting findings on stigma and gender that no other researcher had found up to that point. At one point her graduate students conducted a CSI-style operation and found that everything, everything, was faked (even the names of the supposed research assistants). This fake data made it into not one “prestigious” journal but several, and landed her a high-profile academic position that she went on to lose. And, I repeat myself: reviewers feel the need to pick on the use of “flowery language”? (Oh, shoot, should I have cited myself there? Nah.)
Can you believe that I wrote this whole entry without a single citation? The horror!