
    Automatically detecting open academic review praise and criticism

    This is an accepted manuscript of an article published by Emerald in Online Information Review on 15 June 2020. The accepted version may differ from the final published version, accessible at https://doi.org/10.1108/OIR-11-2019-0347.
    Purpose: Peer reviewer evaluations of academic papers are known to vary in content and overall judgement, but they are an important safeguard in academic publishing. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process and in grant funding decision workflows. The initial version of PeerJudge is tailored to reviews from F1000Research's open peer review publishing platform.
    Design/methodology/approach: PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a separate F1000Research test corpus using reviewer ratings.
    Findings: PeerJudge can predict F1000Research judgements from negative evaluations in reviewers' comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within F1000Research's post-publication peer review model, the absence of any detected negative comments is a reliable indicator that an article will be 'approved', but the presence of moderately negative comments can lead to either an 'approved' or an 'approved with reservations' decision.
    Originality/value: PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to flag anomalous reviews whose text does not appear to match the judgement, for individual checks or systematic bias assessments.
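    The lexical approach described in the abstract can be sketched in a few lines. The lexicon entries, weights, and function names below are invented for illustration only; PeerJudge's actual lexicon is human-coded, far larger, and adjusted by machine learning. The decision rule mirrors the reported finding that the absence of detected negative comments reliably indicates approval.

```python
# Illustrative lexicon-based sentiment check for peer-review text.
# All lexicon terms and weights are hypothetical examples.

NEGATIVE_LEXICON = {"unclear": -1.0, "flawed": -2.0, "weak": -1.0, "missing": -1.0}
POSITIVE_LEXICON = {"clear": 1.0, "thorough": 1.5, "novel": 1.0, "rigorous": 1.0}

def score_review(text: str) -> dict:
    """Return summed positive and negative lexicon scores for one review."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return {
        "positive": sum(POSITIVE_LEXICON.get(w, 0.0) for w in words),
        "negative": sum(NEGATIVE_LEXICON.get(w, 0.0) for w in words),
    }

def predict_judgement(text: str) -> str:
    """Mirror the paper's finding: no detected negative terms -> 'approved';
    otherwise the outcome is ambiguous between approval and reservations."""
    if score_review(text)["negative"] == 0.0:
        return "approved"
    return "approved-with-reservations-or-lower"
```

    A real system would also handle negation, phrase-level matches, and weight tuning against a development corpus, as the article describes.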

    On Hochberg et al.'s "The tragedy of the reviewer commons"

    We discuss each of the recommendations made by Hochberg et al. (2009) to prevent the "tragedy of the reviewer commons". Having scientific journals share a common database of reviewers would recreate a bureaucratic organization in which extra-scientific considerations prevail. Pre-reviewing of papers by colleagues is a widespread practice but raises problems of coordination. Revising manuscripts in line with all reviewers' recommendations presupposes that the recommendations converge, which is acrobatic. Signing an undertaking that authors have taken into account all reviewers' comments is both authoritarian and sterilizing. Sending previous comments with subsequent submissions to other journals amounts to creating a cartel and a single all-encompassing journal, which again is sterilizing. Using young scientists as reviewers is highly risky: they might prove very severe, and if they have not yet published themselves, the recommendation violates the principle of peer review. Asking reviewers to be more severe would only create a crisis in the publishing houses and actually increase reviewers' workloads. The criticisms of the behavior of authors looking to publish in the best journals are unfair: it is natural for scholars to try to publish in the best journals and not to resign themselves to being second rate. Punishing lazy reviewers would only lower the quality of reports; instead, we favor the idea of paying reviewers "in kind" with, say, complimentary books or papers.
    Keywords: Reviewer; Referee; Editor; Publisher; Publishing; Tragedy of the Commons; Hochberg

    Ethics and decision making in journal publishing: issues to be taken into account

    One of the first questions about publishing a new journal in fields already covered by many other journals certainly warrants some clarification, and it needs to be addressed in the inaugural issue. A straightforward response to this query is promoting business and management science in the country as well as in the region. The unique aim of IJBMR is to focus on quantitative aspects of business and management research; IJBMR envisions supporting research work centred on the business and management problems of this century from a quantitative viewpoint. In this editorial, ethical issues in publishing journal articles are discussed from the perspectives of editor, author and reviewer. For decision making in journal publication, a new method is proposed, known as the SAFA system, where SAFA stands for the "Standardized Acceptance Factor Average". The SAFA of the articles included in this issue is also analyzed.
    Keywords: Standardized Acceptance Factor Average; the SAFA system; IJBMR; ethics; PR-PR dilemma; Texoplagiarism

    PEER-REVIEWING, FEEDBACK & ASSESSMENT IN ENGINEERING TEACHING

    Presentation

    The effects of change decomposition on code review -- a controlled experiment

    Background: Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted that decomposing change sets into multiple, internally coherent ones could improve the reviewing process, but so far the literature has provided no quantitative analysis of this hypothesis. Aims: (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of number of found defects, wrongly reported issues, suggested improvements, time, and understanding); (2) Qualitatively analyze how subjects approach the review and navigate the code, building knowledge and addressing existing issues, in large vs. decomposed changes. Method: Controlled experiment using the pull-based development model, involving 28 software developers (professionals and graduate students). Results: Change decomposition leads to fewer wrongly reported issues and influences how subjects approach and conduct the review activity (by increasing context-seeking), yet it affects neither understanding of the change rationale nor the number of found defects. Conclusions: Change decomposition not only reduces the noise for subsequent data analyses but also significantly supports the developers in charge of reviewing the changes. As such, commits belonging to different concepts should be separated, adopting this as a best practice in software engineering.

    The Convergence of Digital-Libraries and the Peer-Review Process

    Pre-print repositories have seen a significant increase in use over the past fifteen years across multiple research domains. Researchers are beginning to develop applications capable of using these repositories to assist the scientific community above and beyond the pure dissemination of information. The contribution set forth by this paper emphasizes a deconstructed publication model in which the peer-review process is mediated by an OAI-PMH peer-review service. This peer-review service uses a social-network algorithm to determine potential reviewers for a submitted manuscript and to weight the relative influence of each participating reviewer's evaluations. This paper also suggests a set of peer-review-specific metadata tags that can accompany a pre-print's existing metadata record. The combination of these contributions provides a unique repository-centric peer-review model that fits within the widely deployed OAI-PMH framework.
    Comment: Journal of Information Science [in press]
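    The influence-weighting step of the model above can be sketched as a weighted aggregation of reviewer evaluations. The function name, score scale, and influence values below are invented for illustration; the paper's actual social-network algorithm for deriving influence weights is not reproduced here.

```python
# Hypothetical sketch: combine reviewer evaluations, weighting each by a
# network-derived influence score, as the deconstructed model describes.

def weighted_evaluation(reviews: list[tuple[float, float]]) -> float:
    """reviews: (score, influence_weight) pairs; returns the weighted mean.

    Higher-influence reviewers (e.g. better connected in the co-authorship
    network) pull the aggregate score more strongly toward their rating.
    """
    total_weight = sum(weight for _, weight in reviews)
    if total_weight == 0:
        raise ValueError("no reviewer influence mass to aggregate")
    return sum(score * weight for score, weight in reviews) / total_weight
```

    For example, a high-influence reviewer scoring 4.0 with weight 2.0 and a low-influence reviewer scoring 2.0 with weight 1.0 would yield an aggregate of 10/3 rather than the unweighted mean of 3.0.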

    Scientometric studies in marketing

