Saturday, February 16, 2008

Problems with peer review, Part Two

As usual, I begin with an apology: sorry for taking so long to get to the second part of this intended series of posts on peer review. So, why do I think peer review is busted in the sciences? (Please see the caveats below about this being based on my own personal experience.)

Firstly, and I think this should come as no surprise to anyone, there is simply too much material being sent forward for publication. In computer science, thousands of conferences and workshops are held annually. I don't exaggerate: check the calendar and you will find that on any given day at least three events are being staged somewhere. The reviewing for these events is typically done by the program committee (PC), a group of academics who got together to organize the event; some of them were part of the original plan to put together the workshop or conference, others were invited to serve on the PC for various reasons (sometimes to add heft to the PC, as academics will often judge a meeting's quality by the star rating of its PC, and sometimes, quite simply, to aid in the reviewing). When submissions arrive, the papers are parceled out to the PC for reviewing. Sometimes papers are assigned to more than one member of the PC. More often than not, this stage of the reviewing is single-blind: the reviewer knows the name of the author, but the author does not know the reviewer's. In larger conferences, the reviewing is double-blind.

More often than not, the PC member is over-committed. He has signed up for as many academic invitations as he can, all in a rush to add lines to the CV, to increase his visibility in the community, to network a bit more. But now the papers are in the inbox, and they need to be reviewed. Typically, the PC member is late with the reviews. He then receives reminder emails from the PC chair, and he rushes off to the paper, which is invariably read in perfunctory fashion and then hastily reviewed, summarized, and critiqued.
The effect of this on the quality of the papers accepted at a typical event should be clear. Sometimes, the PC member will sub-contract the reviewing, handing it on either to a Ph.D. student or to a colleague who he thinks might be able to help out. (I should point out that Ph.D. students can be either very harsh or very mild reviewers: the former is eager to show off his talents and knowledge; the latter is still convinced he does not belong in academia, and is very diffident in his reviews.)

There are other problems. Sometimes a workshop or a conference will not receive enough submissions. Then the PC members panic; the event will not be viable if only a minuscule number of papers are accepted. At this stage, other instructions go out to the PC members: "Let's accept papers if they will spark discussion; let's accept them if they show some promise; let's accept them even if the usual standard is not met." So the event floats and all is well. The quality of the papers is uneven, but at least the workshop or conference did not get canceled.

There are problems of authority. Publications in premier conferences carry a great deal of prestige in the community. Paper acceptances are much desired. And the attendance lists look familiar from year to year. Some of this has to do with the quality of the papers, some of it with the established reputations of the authors. Double-blind reviewing sounds very good in theory, but in fact, it's quite easy to make out who the author of a paper is: writing style, subject matter, even the formatting of mathematical symbols (one research group in France insisted on using MS Word to format their papers, as opposed to LaTeX; others used idiosyncratic symbols for logical operators). A not-so-confident reviewer, confronted with a paper written by an 'authority', holds fire. The paper makes it through. Yet another reviewer, knowing that the paper was written by an 'authority', simply lets it go through, because 'it must be good'; others simply support friendly research groups. Peer review responsibility has been abdicated, and because a small group has been picked, there are no other opportunities to correct this.

And often, because paradigms are jostling for first place (as often happened in my field, logics for artificial intelligence), reviewers are not too keen to promote papers that advance rival paradigms (but are keen to promote those that show their own favored paradigm in a good light). A colleague of mine who was trying to suggest an alternative formal framework had great difficulty getting his papers accepted; the reviews of his papers were clearly off-base, prejudiced, and hostile. Finally, another academic advised him to simply forget about the premier conferences and concentrate on journals whose editors would intervene, and who would guarantee him a chance to respond to his referees. So much for the impartiality of the peer review process.

Not much can be done about the volume-of-publication problem. The modern academy demands that everyone get on the writing and publishing treadmill, and like obedient children, we jump on (how else would we get promotion and tenure?). But something can be done about the blind reviewing problem. The solutions are all imperfect, to be sure, but they strike me as offering a better chance of ensuring the quality of what gets through to publication. More on that later. I'll also try to write a bit on grant proposal review.


Blogger Crosbie Fitch said...

One of the papers I once submitted to SIGGRAPH was rejected by one of its reviewers, with one of the reasons being that I was evidently not a native English speaker. I began to suspect that AI bots were being used to pad out the reviews - probably as a riposte to those who submit computer-generated papers.

3:04 PM  
Blogger Samir Chopra said...

Crosbie: Thanks for the comment. One reviewer for a journal paper of mine wrote that "strategize" was not a word in the English language. When I pointed out that it was a transitive verb (present in the OED), he became offended and accused me of being a "lazy scientist".

3:34 PM  
