
    Automatically detecting open academic review praise and criticism

    This is an accepted manuscript of an article published by Emerald in Online Information Review on 15 June 2020. The accepted version of the publication may differ from the final published version, accessible at https://doi.org/10.1108/OIR-11-2019-0347.
    Purpose: Peer reviewer evaluations of academic papers are known to be variable in content and overall judgements but are important academic publishing safeguards. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process and in grant funding decision workflows. The initial version of PeerJudge is tailored for reviews from F1000Research’s open peer review publishing platform. Design/methodology/approach: PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a different F1000Research test corpus using reviewer ratings. Findings: PeerJudge can predict F1000Research judgements from negative evaluations in reviewers’ comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within the F1000Research mode of post-publication peer review, the absence of any detected negative comments is a reliable indicator that an article will be ‘approved’, but the presence of moderately negative comments could lead to either an ‘approved’ or ‘approved with reservations’ decision. Originality/value: PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to identify anomalous reviews whose text potentially does not match judgements, for individual checks or systematic bias assessments.
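    The lexical sentiment approach described in the abstract can be sketched as a toy example. The lexicon entries, weights, and tokenisation below are illustrative inventions, not PeerJudge's actual resources:

```python
# Minimal sketch of lexicon-based praise/criticism scoring, in the spirit
# of a human-coded sentiment lexicon. All words and weights are invented
# for illustration only.

PRAISE = {"clear": 1, "novel": 2, "rigorous": 2, "well-written": 1}
CRITICISM = {"unclear": -1, "flawed": -3, "missing": -2, "weak": -2}

def score_review(text: str) -> dict:
    """Return separate praise and criticism scores for a review."""
    tokens = text.lower().replace(",", " ").split()
    praise = sum(PRAISE.get(t, 0) for t in tokens)
    criticism = sum(CRITICISM.get(t, 0) for t in tokens)
    return {"praise": praise, "criticism": criticism}

review = "The method is novel but the evaluation is weak and unclear"
print(score_review(review))  # {'praise': 2, 'criticism': -3}
```

    Keeping praise and criticism as separate scores, rather than one net score, mirrors the finding that negative comments are predictive of judgements while positive comments are not.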

    The role of emotions and conflicting online reviews on consumers' purchase intentions

    Drawing on dual-process theories, this paper explains how the systematic and heuristic processing of online reviews containing conflicting information can influence consumers' purchase decision making. The study adopts major assumptions of complexity and configuration theory, employing fuzzy-set qualitative comparative analysis on 680 TripAdvisor users to test the complex interrelationships between emotions and the systematic and heuristic cues used in processing reviews. The results show that the systematic and heuristic processing of online reviews can produce independent impacts on consumer decision making. Both processing routes can interact with each other to affect the domination of one route over the other. In the case of a positive–negative sequence, consumers mainly follow a heuristic processing route. In the reverse sequence, consumers' concerns about the credibility of the reviews lead them to think more deeply (systematic processing) and actively evaluate both the argumentation quality and the helpfulness of the online reviews.

    Dreading and Ranting: The Distinct Effects of Anxiety and Anger in Online Seller Reviews

    This paper explores the effects of the emotions embedded in a seller review on its perceived helpfulness. Drawing on frameworks from the emotion and cognitive processing literatures, the authors propose that although emotional review content is subject to a well-known negativity bias, the effects of discrete emotions will vary, and that one source of this variance is perceptions of reviewers’ cognitive effort. We focus on the roles of two distinct negative emotions common to seller reviews: anxiety and anger. In Study 1, actual seller reviews from Yahoo Shopping websites were collected to determine the effects of anxiety and anger on review helpfulness. In Study 2, an experiment was used to identify and explain the differential impact of anxiety and anger in terms of perceived reviewer effort. Our findings demonstrate the importance of examining discrete emotions in online word-of-mouth, and they also carry important practical implications for consumers and online retailers.

    Comprehension of online consumer-generated product review: a construal level perspective

    This study explores how consumers, who differ in their psychological distance toward the purchasing event (i.e., temporal distance) or toward product review writers (i.e., social distance), comprehend concrete or abstract reviews. Two experiments were conducted. The first examined how a consumer’s perception of temporal distance, near future or distant future, affects his/her comprehension of product reviews of varying abstractness. Results reveal that consumers of near temporal distance perceive concrete reviews to be more helpful and show higher recall than counterparts of distant temporal distance. However, consumers of near temporal distance perceive abstract reviews to be less helpful and show lower recall than those of distant temporal distance. The second experiment investigated how social distance (i.e., whether a review is written by someone perceived to be socially close to the reader) influences comprehension of product reviews of varying abstractness. Results indicate that, with concrete reviews, consumers perceive no significant difference in review helpfulness under near and distant social distance and exhibit comparable recall. With abstract reviews, however, consumers of near social distance recognize the reviews as helpful and recall the product better than those of distant social distance. This study presents a theoretically driven and empirically validated proposition for improving the presentation of product reviews to aid consumer comprehension.

    A comprehensive meta-analysis of money priming

    Research on money priming typically investigates whether exposure to money-related stimuli can affect people's thoughts, feelings, motivations, and behaviors (for a review, see Vohs, 2015). Our study answers the call for a comprehensive meta-analysis examining the available evidence on money priming (Vadillo, Hardwicke & Shanks, 2016). By conducting a systematic search of the published and unpublished literature on money priming, we sought to achieve three key goals. First, we aimed to assess the presence of biases in the available published literature (e.g., publication bias). Second, in the case of such biases, we sought to derive a more accurate estimate of the effect size after correcting for them. Third, we aimed to investigate whether design factors such as prime type and study setting moderated the money priming effects. Our overall meta-analysis included 246 suitable experiments and showed a significant overall effect size estimate (Hedges' g = .31, 95% CI = [0.26, 0.36]). However, publication bias and related biases are likely, given asymmetric funnel plots, Egger's test, and two other tests for publication bias. Moderator analyses offered insight into the variation of the money priming effect, suggesting for various types of study designs whether the effect was present, absent, or biased. We found the largest money priming effect in lab studies investigating a behavioral dependent measure using a priming technique in which participants actively handled money. Future research should use sufficiently powered pre-registered studies to replicate these findings.
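    The reported effect size uses the standard meta-analytic Hedges' g, i.e. Cohen's d with a small-sample correction. A sketch using made-up group statistics (the numbers below are not from the meta-analysis):

```python
import math

# Hedges' g: Cohen's d scaled by the small-sample correction factor J.
# Formulas are the standard meta-analysis ones; the example numbers are
# invented for illustration.

def hedges_g(m1, s1, n1, m2, s2, n2):
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled            # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)     # small-sample correction J
    return j * d

# Hypothetical primed vs. control groups (mean, SD, n)
print(round(hedges_g(5.2, 1.1, 40, 4.8, 1.0, 40), 3))  # 0.377
```

    Averaging such per-study estimates, weighted by their precision, gives the overall estimate; funnel-plot asymmetry and Egger's test then probe whether small studies with null results are missing from the literature.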

    Improving Information Systems Sustainability by Applying Machine Learning to Detect and Reduce Data Waste

    Big data are key building blocks for creating information value. However, information systems are increasingly plagued with useless, waste data that can impede their effective use and threaten sustainability objectives. Using a constructive design science approach, this work first defines digital data waste. It then develops an ensemble artifact comprising two components. The first component comprises 13 machine learning models for detecting data waste. Applying these to 35,576 online reviews in two domains reveals data waste of 1.9% for restaurant reviews compared to 35.8% for app reviews. Machine learning can accurately identify 83% to 99.8% of data waste; deep learning models are particularly promising, with accuracy ranging from 96.4% to 99.8%. The second component comprises a sustainability cost calculator to quantify the social, economic, and environmental benefits of reducing data waste. Eliminating the 5,948 useless reviews in the sample would save 6.9 person-hours, $2.93 in server, middleware, and client costs, and 9.52 kg of carbon emissions. Extrapolating these results to reviews across the internet shows substantially greater savings. This work contributes to design knowledge on sustainable information systems by highlighting data waste as a new class of problem and by designing approaches for addressing it.
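    The per-review rates implied by the abstract's figures (6.9 person-hours, $2.93, and 9.52 kg CO2 saved by removing 5,948 useless reviews) can be turned into a back-of-envelope calculator. The linear extrapolation and the function name are assumptions for illustration, not the paper's actual artifact:

```python
# Back-of-envelope data-waste savings calculator derived from the
# abstract's figures. Assumes savings scale linearly with review count.

WASTE_REVIEWS = 5948
HOURS_PER_REVIEW = 6.9 / WASTE_REVIEWS     # person-hours saved per review
DOLLARS_PER_REVIEW = 2.93 / WASTE_REVIEWS  # server/middleware/client costs
CO2_KG_PER_REVIEW = 9.52 / WASTE_REVIEWS   # kg of carbon emissions

def savings(n_reviews: int) -> dict:
    """Estimated savings from eliminating n useless reviews."""
    return {
        "person_hours": n_reviews * HOURS_PER_REVIEW,
        "dollars": n_reviews * DOLLARS_PER_REVIEW,
        "co2_kg": n_reviews * CO2_KG_PER_REVIEW,
    }

# Extrapolating to one million useless reviews
print(savings(1_000_000))
```

    Plugging the sample size back in reproduces the reported totals, which is a quick sanity check on the per-review rates.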

    Rant or rave: Variation over time in the language of online reviews
