
    A network centrality method for the rating problem

    We propose a new method for aggregating the information of multiple reviewers rating multiple products. Our approach is based on the network relations induced between products by the rating activity of the reviewers. We show that our method is algorithmically implementable even for large numbers of both products and consumers, as is the case for many online sites. Moreover, compared with the simple average, which is the method most used in practice, and with other methods previously proposed in the literature, it performs very well along various dimensions, proving itself to be an optimal trade-off between computational efficiency, accordance with the reviewers' original orderings, and robustness with respect to the inclusion of systematically biased reports.
    Comment: 25 pages, 8 figures
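The abstract does not spell out the algorithm, but the idea of scoring products through the network induced by reviewers' ratings can be illustrated with a PageRank-style sketch. The pairwise "win" construction, the damping factor, and the dangling-node handling below are assumptions for illustration, not the paper's actual method:

```python
from itertools import combinations

def centrality_scores(ratings, iters=50, damping=0.85):
    """ratings: {reviewer: {product: stars}} -> {product: score}, scores sum to 1."""
    products = sorted({p for r in ratings.values() for p in r})
    n = len(products)
    # wins[p][q] counts reviewers who rated p strictly above q.
    wins = {p: {q: 0.0 for q in products} for p in products}
    for r in ratings.values():
        for a, b in combinations(r, 2):
            if r[a] > r[b]:
                wins[a][b] += 1
            elif r[b] > r[a]:
                wins[b][a] += 1
    # Power iteration: each product's score flows to the products that beat it,
    # in proportion to the win counts (a PageRank-style recursion).
    score = {p: 1.0 / n for p in products}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in products}
        for q in products:
            beaters = {p: wins[p][q] for p in products if wins[p][q] > 0}
            total = sum(beaters.values())
            if total == 0:
                for p in products:  # nobody beat q: spread its mass uniformly
                    new[p] += damping * score[q] / n
            else:
                for p, w in beaters.items():
                    new[p] += damping * score[q] * w / total
        score = new
    return score
```

On a toy instance where product A is consistently rated above B, and B above C, the fixed point ranks A first even though individual reviewers use different scales, which is the kind of robustness to reviewer heterogeneity a simple average lacks.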

    Reviews, Reputation, and Revenue: The Case of Yelp.com

    Do online consumer reviews affect restaurant demand? I investigate this question using a novel dataset combining reviews from the website Yelp.com and restaurant data from the Washington State Department of Revenue. Because Yelp prominently displays a restaurant's rounded average rating, I can identify the causal impact of Yelp ratings on demand with a regression discontinuity framework that exploits Yelp's rounding thresholds. I present three findings about the impact of consumer reviews on the restaurant industry: (1) a one-star increase in Yelp rating leads to a 5% to 9% increase in revenue, (2) this effect is driven by independent restaurants; ratings do not affect restaurants with chain affiliation, and (3) chain restaurants have declined in market share as Yelp penetration has increased. This suggests that online consumer reviews substitute for more traditional forms of reputation. I then test whether consumers use these reviews in a way that is consistent with standard learning models. I present two additional findings: (4) consumers do not use all available information and are more responsive to quality changes that are more visible and (5) consumers respond more strongly when a rating contains more information. Consumer response to a restaurant's average rating is affected by the number of reviews and whether the reviewers are certified as "elite" by Yelp, but is unaffected by the size of the reviewers' Yelp friends network.
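The rounding discontinuity the paper exploits can be sketched as follows. Yelp displays ratings rounded to the nearest half star, so restaurants with nearly identical underlying averages can show different displayed ratings; comparing outcomes in a narrow band around a threshold isolates the effect of the displayed rating. The variable names and the naive difference-in-means estimator below are illustrative, not the paper's specification:

```python
def displayed_rating(avg):
    """Round an average rating to the nearest half star, as Yelp displays it.
    Note: Python's round() uses banker's rounding at exact .25 averages;
    the platform's tie-breaking rule is an assumption here."""
    return round(avg * 2) / 2

def rd_estimate(data, threshold=3.25, bandwidth=0.05):
    """Toy regression-discontinuity estimate: mean revenue just above minus
    mean revenue just below a rounding threshold.
    data: list of (average_rating, revenue) pairs."""
    below = [rev for avg, rev in data if threshold - bandwidth <= avg < threshold]
    above = [rev for avg, rev in data if threshold <= avg < threshold + bandwidth]
    return sum(above) / len(above) - sum(below) / len(below)
```

A 3.24 average displays as 3.0 stars while a 3.26 average displays as 3.5; since restaurants on either side of 3.25 are otherwise comparable, any revenue jump at the threshold is attributable to the displayed rating.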

    Reputation Agent: Prompting Fair Reviews in Gig Markets

    Our study presents a new tool, Reputation Agent, to promote fairer reviews from requesters (employers or customers) on gig markets. Unfair reviews, created when requesters consider factors outside of a worker's control, are known to plague gig workers and can result in lost job opportunities and even termination from the marketplace. Our tool leverages machine learning to implement an intelligent interface that: (1) uses deep learning to automatically detect when an individual has incorporated unfair factors into her review (factors outside the worker's control per the policies of the market); and (2) prompts the individual to reconsider her review if she has incorporated unfair factors. To study the effectiveness of Reputation Agent, we conducted a controlled experiment across different gig markets. Our experiment illustrates that across markets, Reputation Agent, in contrast with traditional approaches, motivates requesters to review gig workers' performance more fairly. We discuss how tools that bring more transparency to employers about the policies of a gig market can help build empathy, thus resulting in reasoned discussions around potential injustices towards workers generated by these interfaces. Our vision is that with tools that promote truth and transparency we can bring fairer treatment to gig workers.
    Comment: 12 pages, 5 figures, The Web Conference 2020, ACM WWW 2020
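As a rough illustration of the interface flow only: the paper itself uses deep learning to detect unfair factors, whereas the keyword lookup and the policy phrases below are hypothetical stand-ins invented for this sketch:

```python
# Hypothetical policy list: factors a market's terms place outside the
# worker's control, mapped to an explanation shown to the requester.
# Both the phrases and the explanations are illustrative assumptions.
UNFAIR_FACTORS = {
    "traffic": "road conditions are outside the driver's control",
    "weather": "weather is outside the worker's control",
    "price": "pricing is set by the platform, not the worker",
}

def detect_unfair_factors(review_text):
    """Return policy explanations for any unfair factors found in the review."""
    text = review_text.lower()
    return [reason for phrase, reason in UNFAIR_FACTORS.items() if phrase in text]

def prompt_reconsideration(review_text):
    """Mimic the interface: ask the requester to reconsider, or accept as-is."""
    hits = detect_unfair_factors(review_text)
    if hits:
        return "Please reconsider your review: " + "; ".join(hits)
    return "Review accepted."
```

The design point is the two-step loop from the abstract: detection first, then a prompt that explains the relevant market policy rather than silently blocking the review.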

    Automated Crowdturfing Attacks and Defenses in Online Review Systems

    Malicious crowdsourcing forums are gaining traction as sources for spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks, or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show that these reviews not only evade human detection, but also score high on "usefulness" metrics by users. Finally, we develop novel automated defenses against these attacks by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.
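The defense intuition, that the lossy RNN generation cycle shifts low-level text statistics away from those of authentic reviews, can be sketched with a toy character-frequency detector. The divergence measure and threshold below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter
import math

def char_dist(text):
    """Normalized character-frequency distribution of a text."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def divergence(p, q, eps=1e-6):
    """Symmetrized KL-style (Jeffreys) divergence over the union of characters;
    eps smooths characters missing from one distribution."""
    chars = set(p) | set(q)
    return sum(
        (p.get(c, eps) - q.get(c, eps)) * math.log(p.get(c, eps) / q.get(c, eps))
        for c in chars
    )

def looks_generated(review, reference_corpus, threshold=1.0):
    """Flag a review whose character statistics drift far from an authentic
    reference corpus (threshold is an arbitrary illustrative choice)."""
    return divergence(char_dist(review), char_dist(reference_corpus)) > threshold
```

A real detector would operate on richer features than single-character frequencies, but the shape is the same: measure how far a suspect text's distribution sits from the distribution of known-authentic reviews.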

    Peer review for the evaluation of the academic research: the Italian experience

    Peer review, that is, the evaluation process based on judgments formulated by independent experts, is generally used for different goals: the allocation of research funding, the review of research results submitted for publication in scientific journals, and the assessment of the quality of research conducted by universities and university-related institutes. The paper deals with the latter type of peer review. The aim is to understand how the characteristics of the Italian experience provide useful lessons for improving the effectiveness of peer review in evaluating academic research. More specifically, the paper investigates the peer review process developed within the Three-Year Research Assessment Exercise (VTR) in Italy. Our analysis covers four disciplinary sectors: chemistry, biology, humanities and economics. Thus, the choice includes two “hard science” sectors, which have similar types of research output submitted for the three-year evaluation process, and two sectors with different types of output. The results provide evidence highlighting the important role played by peer review in judging the quality of academic research in different fields of science, and in comparing different institutions’ performance. Moreover, some basic features of the evaluation process are discussed, in order to understand their usefulness for reinforcing the effectiveness of the peers’ final outcome.
    Keywords: Scientific research, Evaluation, Peer review, University, Academic institutions