    Quantifying the quality of peer reviewers through Zipf's law

    This paper introduces a statistical analysis of peer reviewers in order to approach their "quality" through some quantification measure, thereby leading to quality metrics. Peer reviewer reports for the Journal of the Serbian Chemical Society are examined. The text of each report first has to be adapted to word-counting software in order to avoid jargon-induced confusion when searching for word frequencies: e.g., C must be disambiguated depending on whether it denotes Carbon or Celsius, etc. Thus, every report has to be carefully "rewritten". Thereafter, the quantity, variety and distribution of words are examined in each report and compared to the whole set. Two separate months, according to when reports came in, are distinguished in order to observe any possible hidden spurious effects; coherence is found. An empirical distribution is searched for through a Zipf-Pareto rank-size law. It is observed that peer review reports are very far from usual texts in this respect. Deviations from the usual (first) Zipf's law are discussed. A theoretical suggestion for the "best (or worst) report", and by extension the "good (or bad) reviewer", within this context, is provided from an entropy argument, through the concept of "distance to average" behavior. Another entropy-based measure also allows the journal's reviews (whence its reviewers) to be assessed for further comparison with other journals through their own reviewer reports.

    Comment: 28 pages; 8 Tables; 9 Figures; 39 references; prepared for and to be published in Scientometric
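    The rank-size analysis described above can be sketched as follows. This is a minimal illustration, not the authors' code: it counts word frequencies, ranks them, and fits the Zipf exponent by least squares in log-log coordinates; the function name and sample text are assumptions for illustration only.

    ```python
    # Hedged sketch of a Zipf rank-size fit: rank word frequencies in
    # descending order, then regress log(frequency) on log(rank).
    # A slope near -1 corresponds to the classical (first) Zipf's law.
    import math
    from collections import Counter

    def zipf_exponent(text):
        """Return (freqs, slope): word frequencies sorted by rank and
        the fitted slope of log(freq) versus log(rank)."""
        freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
        xs = [math.log(r) for r in range(1, len(freqs) + 1)]
        ys = [math.log(f) for f in freqs]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return freqs, slope

    # Toy usage on an illustrative string: the most frequent word gets
    # rank 1, and the fitted slope is negative, as Zipf's law predicts.
    freqs, slope = zipf_exponent("a a a b b c")
    ```

    For real reports one would first normalize the text (the "rewriting" step the abstract mentions) before counting, since raw chemical jargon distorts the frequency table.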

    Betweenness and Diversity in Journal Citation Networks as Measures of Interdisciplinarity -- A Tribute to Eugene Garfield --

    Journals were central to Eugene Garfield's research interests. Among other things, journals are considered as units of analysis for bibliographic databases such as the Web of Science (WoS) and Scopus. In addition to disciplinary classifications of journals, journal citation patterns span networks across boundaries to variable extents. Using betweenness centrality (BC) and diversity, we elaborate on the question of how to distinguish and rank journals in terms of interdisciplinarity. Interdisciplinarity, however, is difficult to operationalize in the absence of an operational definition of disciplines; moreover, the diversity of a unit of analysis is sample-dependent. BC can be considered as a measure of multi-disciplinarity. Diversity of co-citation in a citing document has been considered as an indicator of knowledge integration, but an author can also generate trans-disciplinary--that is, non-disciplined--variation by citing sources from other disciplines. Diversity in the bibliographic coupling among citing documents can analogously be considered as diffusion of knowledge across disciplines. Because the citation networks in the cited direction reflect both structure and variation, diversity in this direction is perhaps the best available measure of interdisciplinarity at the journal level. Furthermore, diversity is based on a summation and can therefore be decomposed; differences among (sub)sets can be tested for statistical significance. In an appendix, a general-purpose routine for measuring diversity in networks is provided.
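    The decomposability claim above hinges on diversity being a plain sum over category pairs. The following is a hedged sketch (not the appendix routine) of one common diversity measure of this form, a Rao-Stirling-style sum over pairs of disciplinary categories; the share vector and distance matrix are illustrative assumptions.

    ```python
    # Rao-Stirling-style diversity: sum over category pairs i != j of
    # p_i * p_j * d_ij, where p is the share of citations in each
    # category and d_ij is a distance between categories. Because it
    # is a sum over pairs, it decomposes over subsets of pairs.
    def rao_stirling(shares, distance):
        """Diversity of a citation distribution across categories."""
        n = len(shares)
        return sum(shares[i] * shares[j] * distance[i][j]
                   for i in range(n) for j in range(n) if i != j)

    # Toy usage: citations split evenly over two maximally distant
    # categories give higher diversity than citations concentrated
    # in a single category.
    d = [[0.0, 1.0], [1.0, 0.0]]
    spread = rao_stirling([0.5, 0.5], d)
    concentrated = rao_stirling([1.0, 0.0], d)
    ```

    Testing whether diversity differs significantly between (sub)sets, as the abstract suggests, would then amount to comparing such sums over the corresponding subsets of pairs.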