Scientific Reviews: PageRank

Peer review is an established process in scientific communities to evaluate the quality of research. It is used at many levels, but the two most common are the evaluation of scientific publications and of scientific proposals. Both are extremely important areas for the advancement of science and technology. Both are also important for building (or destroying) the careers of aspiring scientists.

In the last decade or so, there has been a trend of conferences ‘bragging’ about their quality by showing how only 1 in X (usually X > 6) papers submitted to the conference is accepted. Usually this process is quite rigorous, and the rigor keeps improving. At least three peer researchers (respected in the narrow community of the subject matter of the paper) analyze the paper and comment on a list of factors. Then a program committee evaluates these evaluations to finalize the selection. All care is taken to make the process as fair as possible. In many cases the process is double blind: the names of the authors are not on the paper, and the reviewers are not known to the authors or to anyone else except the coordinator of the process. Most of the time the reviewers try to help the paper.

So, why am I writing about this here? This is mostly for the benefit of young researchers, like my doctoral students, who start doing research and then get frustrated by the reviews of their papers. Many times the reviews are right and one can definitely learn from them, but the process is also faulty when it comes to novel research areas and topics. Most of the time the reviewers are guided by the established research topics and research culture. This creates a STRONG bias against novel, innovative, and early research ideas. So when one gets her/his paper rejected, it is important to think and analyze carefully why the paper got ‘rejected’ and what the next action should be. Many times the paper gets ‘rejected’ not because the paper failed but because the system, and sometimes the reviewers, failed.

This message was prompted by an e-mail discussion on this topic, in which a particular example from about 10 years ago was cited. The paper on PageRank by the founders of Google (Brin and Page) was rejected by the SIGIR conference. That paper is possibly the most ‘valuable’ paper ever written: it made the most money for its authors and changed the culture of the web through the formation of Google. That ‘reject’ decision shows the limitations of the system. By the way, rumor has it that Berners-Lee’s paper proposing the WWW was also rejected by the conference on hypertext. History is full of such examples.
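For readers who have not seen it, the core idea of PageRank is simple: a page is important if important pages link to it, and a page's importance is divided among the pages it links to. The sketch below is a minimal illustration of that idea using power iteration with a damping factor; it is not the exact formulation from the Brin and Page paper, and the tiny three-page web is a made-up example.

```python
def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        # Every page gets a base share (the "random surfer" teleporting).
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for p, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
            else:
                # A page passes its rank equally to the pages it links to.
                share = damping * rank[p] / len(outlinks)
                for q in outlinks:
                    new_rank[q] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C link to B, B links back to A,
# and C receives no links at all.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
```

In this toy graph, B ends up with the highest rank (two pages link to it), A comes second (it gets all of B's share), and C, which nobody links to, gets only the baseline teleport share. That "votes from voters who themselves have votes" recursion is what made the approach so effective for web search.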

People keep trying to remove the limitations of such review processes, but as with most human organizations, one achieves strengths at the cost of some weaknesses.