09 September 2018, 05:37 PM
Onyx_TKD
Join Date: 17 December 2007
Location: Los Angeles, CA
Posts: 394

Originally Posted by Ellestar View Post
Fact-checking would be an onerous process for someone just brought in to peer-review, though. I assume it would involve going to the same sources cited in the book, reading all the same resources, and coming to the same conclusions. Basically, fact-checking would mean redoing all the work of the original authors.

I think peer review in the humanities would be the same as, or similar to, peer review in the sciences. You get someone in the same or a similar field to read the presented research to determine whether the authors are citing the right people, using accepted methodologies, and getting what appear to be valid results. In the sciences, we don't expect peer reviewers to replicate the experiments themselves to get the same results; they use their own expertise to judge whether it basically makes sense to run the experiment as described and whether the findings fall in line with what is already known.

It seems with this, as the article says, the peer-review process failed. It's probable that there were no true "peers" (Victorian-era historians familiar with medical cures for hysteria?) to faithfully review the text. Further, it sounds as though there was some pressure to publish because this looked like a book that would sell, which I'm sure is kind of rare for academic publishing.
Exactly. (Coming from the science/engineering side.) Peer review isn't about fact-checking*; it's a logic check and a sanity check that the methodology, reported results, and conclusions appear suitable, plausible, and self-consistent to someone with expertise in the field, and that the overall paper is worth publishing (i.e., it adds value of some sort to the body of knowledge).

For example, in my areas of research, a paper generally ought to:
  • Show that the authors have reviewed the literature for relevant prior results, generally by summarizing relevant information from the literature and/or noting that they were unable to find existing literature on [topic]. Usually this includes identifying a niche where information is lacking that the paper intends to fill. As a reviewer, I'm not going to hunt down and read all of their references myself, but if I see something hinky, e.g., I know there is published literature when they claimed they couldn't find any, or they cite/summarize a paper I've read in a way that doesn't match the actual content, or they have failed to acknowledge a paper that is highly relevant, I'm going to flag it. If I see something that just looks really off (e.g., a claim that "Dewey, Cheatum, and Howe [14] observed that dropping ice into water causes it to boil"), then I'll probably pull up the reference to check. (Overall, a "good-practice" check)
  • Describe the methodology clearly and thoroughly, state assumptions involved, and justify why the methodology was chosen. As a reviewer, I'm not going to replicate the work for the review, but I'm going to be checking that they've listed enough information that I think I could replicate it (completeness check) and that what they say they did makes sense for what they're trying to accomplish (logic check).
  • Describe the results that they are using to draw conclusions in terms of the facts of what was observed rather than the conclusions drawn from the results ("good-practice" check). Again, as a reviewer, I'm not going to "fact-check" by replicating and I'm going to assume they're not lying about their results, but if something seems off-the-wall from what would be expected on current knowledge and the authors haven't adequately addressed that oddity, I'll flag it ("sanity" and logic check). (E.g., if the authors claim their experimental result is that they added ice to water and the water boiled, then they'd better have thoroughly explained how they've controlled for other factors that could have caused the water to heat up and that they have replicated their experiment to ensure it is reproducible.)
  • Present conclusions that are consistent with the data presented in the paper (even if that conclusion is "Our results are inconsistent with prior data, and further investigation is required to determine why"). As a reviewer, I'm checking for logic, not facts. If they say, e.g., "The water in our experiment boiled when ice was added. We attribute this to aliens with heat rays being attracted to ice cubes," then that's not going to fly (at least not unless they've also presented results that provide evidence for the involvement of aliens with heat rays). If they say, "The water in our experiment boiled [...], therefore we conclude that boiling is triggered by decreasing temperature," then that's not going to fly either, because they're jumping to conclusions that ignore other possible factors and ignore the extensive existing body of work outside their experiment. OTOH, if they say, "Our results appear to contradict prior work [3-12] other than that of Dewey, Cheatum, and Howe [14]. There is some indication that the vacuum chamber used can malfunction at low temperature [20], so we hypothesize that a pressure decrease could have lowered the boiling point of water. As future work, we will retest using [additional experimental controls, e.g., a less crappy vacuum chamber]," then that might be OK.

Now, a lot of those checks lean on the presence of a decent pre-existing body of knowledge in the topic area, both in terms of literature to compare against and the reviewer's own knowledge of the field. So a paper with flawed methodology and bogus conclusions might get past peer review in a novel topic area through no fault of the reviewer. That's where replication comes in--later researchers shouldn't treat a single paper as if its results and conclusions are gospel, so if the topic is important enough to inspire further research, the collective findings of additional researchers should eventually overwhelm the original flawed conclusions.

*Keep in mind that (at least in the sciences) peer-reviewers are usually performing the review for free, on top of their normal jobs, as a service to the research field/research community and because it benefits them as researchers to make sure research being published in their field is of good quality. They are not professional editors paid to go through the research with a fine-tooth comb.