3 research outputs found

    Provably Manipulation-Resistant Reputation Systems

    We consider a community of users who must make periodic decisions about whether to interact with one another. We propose a protocol that allows honest users to reliably interact with each other while limiting the damage done by each malicious or incompetent user. The worst-case cost per user is sublinear in the average number of interactions per user and is independent of the number of users. Our guarantee holds simultaneously for every group of honest users; for example, multiple groups of users with incompatible tastes or preferences can coexist. As a motivating example, we consider a game where players have periodic opportunities to do one another favors but minimal ability to determine when a favor was done. In this setting, our protocol achieves nearly optimal collective welfare while remaining resistant to exploitation. Our results also apply to a collaborative filtering setting where users must make periodic decisions about whether to interact with resources such as movies or restaurants. In this setting, we guarantee that any set of honest users achieves a payoff nearly as good as if they had identified the optimal set of items in advance and then chosen to interact only with resources from that set.
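
    The abstract leaves the protocol itself to the paper. As a rough illustration of the favor-game setting only, the toy simulation below uses a simple bounded-credit (tit-for-tat-style) rule; this is not the paper's protocol, and the parameters (CAP, population sizes, round count) are invented for the example. It shows the qualitative property the abstract describes: each malicious user can extract only a bounded number of favors from each honest user, while honest pairs keep cooperating.

        # Toy favor game: honest users extend at most CAP units of
        # unreciprocated credit to any counterpart. NOT the paper's
        # protocol -- just a sketch of the setting.
        import random

        random.seed(0)

        N_HONEST, N_MALICIOUS = 8, 2
        n = N_HONEST + N_MALICIOUS
        honest = [True] * N_HONEST + [False] * N_MALICIOUS
        CAP = 2  # hypothetical credit cap per counterpart

        # debt[a][b]: net favors a has done for b (given minus received)
        debt = [[0] * n for _ in range(n)]
        received = [0] * n  # favors received, a proxy for welfare

        for _ in range(10_000):
            a, b = random.sample(range(n), 2)  # a may do b a favor
            if not honest[a]:
                continue  # malicious users never do favors
            if debt[a][b] >= CAP:
                continue  # a has already extended b maximum credit
            debt[a][b] += 1
            debt[b][a] -= 1
            received[b] += 1

        print("favors received by honest users:   ",
              sum(r for r, h in zip(received, honest) if h))
        print("favors received by malicious users:",
              sum(r for r, h in zip(received, honest) if not h))
        # Each malicious user extracts at most CAP favors from each
        # honest user, while honest pairs cooperate indefinitely.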

    Collaborative prediction with expert advice

    Many practical learning systems aggregate data across many users, while learning theory traditionally considers a single learner who trusts all of their observations. A case in point is the foundational learning problem of prediction with expert advice. To date, there has been no theoretical study of the general collaborative version of prediction with expert advice, in which many users face a similar problem and would like to share their experiences in order to learn faster. A key issue in this collaborative framework is robustness: in general, algorithms that aggregate data are vulnerable to manipulation by even a small number of dishonest users. We exhibit the first robust collaborative algorithm for prediction with expert advice. When all users are honest and have similar tastes, our algorithm matches the performance of pooling data and using a traditional algorithm. But our algorithm also guarantees that adding users never significantly degrades performance, even if the additional users behave adversarially. We achieve strong guarantees even when the overwhelming majority of users behave adversarially. As a special case, our algorithm is extremely robust to variation amongst the users.
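
    For context, the single-user baseline the abstract refers to is the classical prediction-with-expert-advice problem, commonly solved with multiplicative weights (Hedge). The sketch below implements that standard baseline, not the paper's collaborative algorithm; the learning rate and the synthetic experts are illustrative choices.

        # Classical Hedge / multiplicative weights for a single learner.
        # This is the "traditional algorithm" baseline, not the paper's
        # robust collaborative method.
        import math
        import random

        random.seed(0)

        def hedge(expert_losses, eta=0.5):
            """expert_losses: list of rounds, each a list of per-expert
            losses in [0, 1]. Returns (algorithm loss, best expert loss)."""
            n = len(expert_losses[0])
            weights = [1.0] * n
            alg_loss = 0.0
            for losses in expert_losses:
                total = sum(weights)
                probs = [w / total for w in weights]
                # expected loss when sampling an expert from the weights
                alg_loss += sum(p * l for p, l in zip(probs, losses))
                # exponentially down-weight experts by their loss
                weights = [w * math.exp(-eta * l)
                           for w, l in zip(weights, losses)]
            best = min(sum(l[i] for l in expert_losses) for i in range(n))
            return alg_loss, best

        # Two synthetic experts: right 80% and 40% of the time.
        rounds = [[0.0 if random.random() < 0.8 else 1.0,
                   0.0 if random.random() < 0.4 else 1.0]
                  for _ in range(500)]
        alg, best = hedge(rounds)
        print(f"algorithm loss {alg:.1f} vs best expert {best:.1f}")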

    Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction

    We consider a crowdsourcing model in which $n$ workers are asked to rate the quality of $n$ items previously generated by other workers. An unknown set of $\alpha n$ workers generate reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. The manager of the experiment can also manually evaluate the quality of a small number of items, and wishes to curate together almost all of the high-quality items with at most an $\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that this is possible with an amount of work required of the manager, and each worker, that does not scale with $n$: the dataset can be curated with $\tilde{O}\big(\frac{1}{\beta\alpha^3\epsilon^4}\big)$ ratings per worker and $\tilde{O}\big(\frac{1}{\beta\epsilon^2}\big)$ ratings by the manager, where $\beta$ is the fraction of high-quality items. Our results extend to the more general setting of peer prediction, including peer grading in online classrooms.
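
    To make the bounds concrete, the snippet below plugs assumed values of $\alpha$, $\beta$, and $\epsilon$ into the stated budgets, ignoring the constants and logarithmic factors hidden by the $\tilde{O}$ notation; the chosen values are arbitrary. The key point is that neither budget depends on $n$.

        # Worked numeric instance of the rating budgets from the abstract,
        # up to constants and the log factors hidden in tilde-O.
        alpha = 0.8   # assumed fraction of reliable workers
        beta = 0.5    # assumed fraction of high-quality items
        eps = 0.1     # tolerated fraction of low-quality items

        per_worker = 1 / (beta * alpha**3 * eps**4)  # ratings per worker
        per_manager = 1 / (beta * eps**2)            # ratings by manager

        print(f"per-worker ratings ~ {per_worker:,.1f}")   # ~39,062.5
        print(f"manager ratings    ~ {per_manager:,.1f}")  # ~200.0
        # Both are constants in n: adding more workers and items does not
        # increase anyone's per-person workload.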