A network centrality method for the rating problem
We propose a new method for aggregating the information of multiple reviewers
rating multiple products. Our approach is based on the network relations
induced between products by the rating activity of the reviewers. We show that
our method is algorithmically implementable even for large numbers of both
products and consumers, as is the case for many online sites. Moreover,
compared with the simple average, which is the method most used in practice,
and with other methods previously proposed in the literature, it performs very
well along various dimensions, proving itself to be an optimal trade-off
between computational efficiency, accordance with the reviewers' original
orderings, and robustness to the inclusion of systematically biased reports.
Comment: 25 pages, 8 figures
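The abstract does not spell out the aggregation rule, so the following is only a generic sketch of the idea of scoring products through network centrality rather than the authors' actual method: build a co-review graph over products (edge weight = number of reviewers who rated both products) and run power iteration for eigenvector centrality. The graph construction and the shift by the identity matrix are illustrative assumptions.

```python
# Hedged sketch, NOT the paper's exact method: eigenvector centrality
# on a product co-review graph, computed by power iteration.

def co_review_graph(ratings):
    """ratings: {reviewer: {product: score}}.
    Edge weight between two products = number of reviewers rating both."""
    products = sorted({p for scores in ratings.values() for p in scores})
    idx = {p: i for i, p in enumerate(products)}
    n = len(products)
    W = [[0.0] * n for _ in range(n)]
    for scores in ratings.values():
        rated = list(scores)
        for a in rated:
            for b in rated:
                if a != b:
                    W[idx[a]][idx[b]] += 1.0
    return products, W

def eigen_centrality(W, iters=200):
    """Power iteration on I + W (the identity shift avoids oscillation
    on bipartite-like co-review graphs); returns scores summing to 1."""
    n = len(W)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [x[i] + sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)
        x = [v / s for v in y]
    return x
```

In an aggregation scheme of this flavor, centrality could down-weight products (or reviewers) whose averages rest on little shared rating activity; how exactly centrality enters the final score is left open here.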
Reviews, Reputation, and Revenue: The Case of Yelp.com
Do online consumer reviews affect restaurant demand? I investigate this question using a novel dataset combining reviews from the website Yelp.com and restaurant data from the Washington State Department of Revenue. Because Yelp prominently displays a restaurant's rounded average rating, I can identify the causal impact of Yelp ratings on demand with a regression discontinuity framework that exploits Yelp's rounding thresholds. I present three findings about the impact of consumer reviews on the restaurant industry: (1) a one-star increase in Yelp rating leads to a 5% to 9% increase in revenue, (2) this effect is driven by independent restaurants; ratings do not affect restaurants with chain affiliation, and (3) chain restaurants have declined in market share as Yelp penetration has increased. This suggests that online consumer reviews substitute for more traditional forms of reputation. I then test whether consumers use these reviews in a way that is consistent with standard learning models. I present two additional findings: (4) consumers do not use all available information and are more responsive to quality changes that are more visible and (5) consumers respond more strongly when a rating contains more information. Consumer response to a restaurant's average rating is affected by the number of reviews and whether the reviewers are certified as "elite" by Yelp, but is unaffected by the size of the reviewers' Yelp friends network.
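The regression discontinuity hinges on the fact that a displayed rating jumps at rounding thresholds while the underlying average moves continuously. A minimal sketch of that rounding step (the tie-breaking behavior at exact quarter-star boundaries is an assumption, not taken from the paper):

```python
# Hedged sketch: Yelp displays the average rating rounded to the nearest
# half star, so two restaurants with nearly identical raw averages can
# display very different ratings -- the discontinuity the RD design uses.

def displayed_stars(avg_rating):
    """Round a raw average rating to the nearest half star."""
    return round(avg_rating * 2) / 2
```

For example, raw averages of 3.24 and 3.26 sit on opposite sides of a threshold and display as 3.0 and 3.5 stars respectively, even though the underlying quality signal is nearly identical.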
Project Retrosight. Understanding the returns from cardiovascular and stroke research: Policy Report
Copyright © 2011 RAND Europe. All rights reserved. This project explores the impacts arising from cardiovascular and stroke research funded 15-20 years ago and attempts to draw out aspects of the research, researcher or environment that are associated with high or low impact.
The project is a case study-based review of 29 cardiovascular and stroke research grants, funded in Australia, Canada and the UK between 1989 and 1993. The case studies focused on the individual grants but considered the development of the investigators and ideas involved in the research projects from initiation to the present day. Grants were selected through a stratified random selection approach that aimed to include both high- and low-impact grants. The key messages are as follows: 1) The cases reveal that a large and diverse range of impacts arose from the 29 grants studied. 2) There are variations between the impacts derived from basic biomedical and clinical research. 3) There is no correlation between knowledge production and wider impacts. 4) The majority of economic impacts identified come from a minority of projects. 5) We identified factors that appear to be associated with high and low impact.
This report presents the key observations of the study and an overview of the methods involved. It has been written for funders of biomedical and health research and health services, health researchers, and policy makers in those fields. It will also be of interest to those involved in research and impact evaluation. This study was initiated with internal funding from RAND Europe and HERG, with continuing funding from the UK National Institute for Health Research, the Canadian Institutes of Health Research, the Heart and Stroke Foundation of Canada and the National Heart Foundation of Australia. The UK Stroke Association and the British Heart Foundation provided support in kind through access to their archives.
Reputation Agent: Prompting Fair Reviews in Gig Markets
Our study presents a new tool, Reputation Agent, to promote fairer reviews
from requesters (employers or customers) on gig markets. Unfair reviews,
created when requesters consider factors outside of a worker's control, are
known to plague gig workers and can result in lost job opportunities and even
termination from the marketplace. Our tool leverages machine learning to
implement an intelligent interface that: (1) uses deep learning to
automatically detect when an individual has included unfair factors in her
review (factors outside the worker's control per the policies of the market);
and (2) prompts the individual to reconsider her review if she has incorporated
unfair factors. To study the effectiveness of Reputation Agent, we conducted a
controlled experiment over different gig markets. Our experiment illustrates
that across markets, Reputation Agent, in contrast with traditional approaches,
motivates requesters to review gig workers' performance more fairly. We discuss
how tools that bring more transparency to employers about the policies of a gig
market can help build empathy, resulting in reasoned discussions around
potential injustices towards workers generated by these interfaces. Our vision
is that with tools that promote truth and transparency we can bring fairer
treatment to gig workers.
Comment: 12 pages, 5 figures, The Web Conference 2020, ACM WWW 2020
Automated Crowdturfing Attacks and Defenses in Online Review Systems
Malicious crowdsourcing forums are gaining traction as sources for spreading
misinformation online, but are limited by the costs of hiring and managing
human workers. In this paper, we identify a new class of attacks that leverage
deep learning language models (Recurrent Neural Networks or RNNs) to automate
the generation of fake online reviews for products and services. Not only are
these attacks cheap and therefore more scalable, but they can control the rate
of content output to eliminate the signature burstiness that makes
crowdsourced campaigns easy to detect.
Using Yelp reviews as an example platform, we show how a two-phase review
generation and customization attack can produce reviews that are
indistinguishable from real ones by state-of-the-art statistical detectors. We conduct a
survey-based user study to show these reviews not only evade human detection,
but also score high on "usefulness" metrics by users. Finally, we develop novel
automated defenses against these attacks, by leveraging the lossy
transformation introduced by the RNN training and generation cycle. We consider
countermeasures against our mechanisms, show that they produce unattractive
cost-benefit tradeoffs for attackers, and that they can be further curtailed by
simple constraints imposed by online service providers.
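The defense described above rests on measuring statistical artifacts of the lossy RNN generation pipeline. As a toy illustration only (the statistic, smoothing constant, and any decision threshold here are assumptions, not the paper's actual detector), one can compare character-level unigram distributions of a candidate review against a reference corpus via a smoothed KL divergence:

```python
# Toy sketch of a distribution-based detector: compare character-level
# unigram distributions with a smoothed KL divergence. Illustrative only;
# not the paper's actual defense.
import math
from collections import Counter

def char_dist(text):
    """Character unigram distribution of a text."""
    counts = Counter(text)
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """Smoothed KL(p || q) over the union of character supports."""
    chars = set(p) | set(q)
    return sum(p.get(ch, eps) * math.log(p.get(ch, eps) / q.get(ch, eps))
               for ch in chars)
```

A review whose character distribution diverges sharply from the reference corpus would be flagged for closer inspection; choosing the flagging threshold is left open here.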
Peer review for the evaluation of the academic research: the Italian experience
Peer review, that is, the evaluation process based on judgments formulated by independent experts, is generally used for different goals: the allocation of research funding, the review of research results submitted for publication in scientific journals, and the assessment of the quality of research conducted by Universities and university-related Institutes. The paper deals with the latter type of peer review. The aim is to understand how the characteristics of the Italian experience provide useful lessons for improving the effectiveness of peer review in evaluating academic research. More specifically, the paper investigates the peer review process developed within the Three-Year Research Assessment Exercise (VTR) in Italy. Our analysis covers four disciplinary sectors: chemistry, biology, humanities and economics. Thus, the choice includes two "hard science" sectors, which have a similar type of research output submitted for the three-year evaluation process, and two sectors with different types of output. The results provide evidence highlighting the important role played by peer review in judging the quality of academic research in different fields of science, and in comparing different institutions' performance. Moreover, some basic features of the evaluation process are discussed, in order to understand their usefulness for reinforcing the effectiveness of the peers' final outcome.
Keywords: Scientific research, Evaluation, Peer review, University, Academic institutions