There are other problems. Sometimes a workshop or a conference will not receive enough submissions. Then the PC members panic; the event will not be viable if only a minuscule number of papers is accepted. At this stage, other instructions go out to the PC members: "let's accept papers if they will spark discussion; let's accept them if they show some promise; let's accept them even if…"
There are problems of authority. Publications in premier conferences carry a great deal of prestige in the community. Paper acceptances are much desired, and the attendance lists are full of familiar names. Some of this has to do with the quality of the papers; some of it has to do with the established nature of the authors. Double-blind reviewing sounds very good in theory, but in fact it's quite easy to make out who the author of a paper is: the writing style, the subject matter, even the formatting of mathematical symbols give it away (one research group in France insisted on using MS Word to format their papers rather than LaTeX; others used idiosyncratic symbols for logical operators). A not-so-confident reviewer, confronted with a paper written by an 'authority', holds fire, and the paper makes it through. Another reviewer, knowing the paper was written by an 'authority', simply lets it go through because 'it must be good'; still others simply support friendly research groups. Peer review responsibility has been abdicated, and because a small group has been picked, there are no other opportunities to correct this. And often, because paradigms are jostling for first place (as frequently happened in my field, logics for artificial intelligence), reviewers are not too keen to promote papers that advance rival paradigms, but are keen to promote those that show their own favored paradigm in a good light. A colleague of mine who was trying to propose an alternative formal framework had great difficulty getting his papers accepted; the reviews of his papers were clearly off-base, prejudiced, and hostile. Finally, another academic advised him to simply forget about the premier conferences and concentrate on journals whose editors would intervene and would guarantee him a chance to respond to his referees. So much for the impartiality of the peer review process.
Not much can be done about the volume-of-publication problem. The modern academy demands that everyone get on the writing and publishing treadmill, and like obedient children, we jump on (how else would we get promotion and tenure?). But something can be done about the blind-reviewing problem. The solutions are all imperfect, to be sure, but they strike me as offering a better chance of ensuring the quality of what gets published. More on that later. I'll also try to write a bit on grant proposal review.
A paper I once submitted to SIGGRAPH was rejected by a reviewer who gave, as one of the reasons, that I was evidently not a native English speaker. I began to suspect that AI bots were being used to pad out the reviews, probably as a riposte to those who submit computer-generated papers.
Crosbie: Thanks for the comment. One reviewer for a journal paper of mine wrote that "strategize" was not a word in the English language. When I pointed out that it was a transitive verb (present in the OED), he became offended and accused me of being a "lazy scientist".