
    Collaborative Scoring with Dishonest Participants

    Consider a set of players interested in collectively evaluating a set of objects. We develop a collaborative scoring protocol in which each player evaluates a subset of the objects, after which we can accurately predict each player's individual opinion of the remaining objects. The accuracy of the predictions is near optimal, depending on the number of objects evaluated by each player and the correlation among the players' preferences. A key novelty is the ability to tolerate malicious players. Surprisingly, the malicious players cause no (asymptotic) loss of accuracy in the predictions. In fact, our algorithm improves in both performance and accuracy over prior state-of-the-art collaborative scoring protocols, which provided no robustness to malicious disruption.
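The robustness idea in this abstract can be illustrated with a minimal sketch: predict a player's missing scores from the most similar raters, but aggregate with a coordinate-wise median so a minority of malicious raters cannot skew the result. This is not the paper's actual protocol; the function name, the neighbor count, and the similarity measure are all assumptions for illustration.

```python
import numpy as np

def robust_predict(ratings, target, rated_mask):
    """Predict the target player's unrated objects from similar players,
    using a coordinate-wise median to resist malicious raters.
    (Illustrative sketch only -- not the paper's protocol.)

    ratings:    (n_players, n_objects) array, NaN where unrated
    target:     row index of the player whose opinions we predict
    rated_mask: boolean mask of objects the target has already rated
    """
    others = np.delete(np.arange(ratings.shape[0]), target)
    # Rank the other players by disagreement with the target on shared objects.
    sims = []
    for p in others:
        shared = rated_mask & ~np.isnan(ratings[p])
        if shared.sum() == 0:
            sims.append((np.inf, p))
            continue
        dist = np.mean(np.abs(ratings[p, shared] - ratings[target, shared]))
        sims.append((dist, p))
    sims.sort()
    nearest = [p for _, p in sims[: max(3, len(sims) // 2)]]
    # The median ignores a minority of outlier (malicious) votes per object.
    return np.nanmedian(ratings[nearest], axis=0)
```

With two honest raters who agree with the target and one rater who reports a constant inflated score, the median prediction tracks the honest consensus.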

    Rule Following Mitigates Collaborative Cheating and Facilitates the Spreading of Honesty Within Groups

    Compared with working alone, interacting in groups can increase dishonesty and give rise to collaborative cheating: the joint violation of honesty. At the same time, collaborative cheating emerges some but not all of the time, even when dishonesty is unsanctioned and economically rational. Here, we address this conundrum. We show that people differ in the extent to which they follow arbitrary and costly rules, and we observe that "rule-followers" behave more honestly than "rule-violators." Because rule-followers also resist the temptation to engage in collaborative cheating, dyads and groups with at least one strong rule-follower show fewer instances of coordinated violations of honesty. Whereas social interaction can lead to a "social slippery slope" of increased cheating, rule-abiding individuals mitigate the emergence and spreading of collaborative cheating, giving honesty a transmission advantage. Accordingly, interindividual differences in rule following provide a basis through which honest behavior can persist.

    How to Incentivize Data-Driven Collaboration Among Competing Parties

    The availability of vast amounts of data is changing how we can make medical discoveries, predict global market trends, save energy, and develop educational strategies. In some settings, such as Genome-Wide Association Studies or deep learning, the sheer size of the data seems critical. When data is held in a distributed fashion by many parties, they must share it to reap its full benefits. One obstacle to this revolution is parties' unwillingness to share data, for reasons such as loss of privacy or competitive edge. Cryptographic works address the privacy aspects but shed no light on individual parties' losses and gains when access to data carries tangible rewards. Even if it is clear that better overall conclusions can be drawn from collaboration, are individual collaborators better off by collaborating? Addressing this question is the topic of this paper. * We formalize a model of n-party collaboration for computing functions over private inputs, in which participants receive their outputs in sequence and the order depends on their private inputs. Each output "improves" on preceding outputs according to a score function. * We say a mechanism for collaboration achieves collaborative equilibrium if it ensures a higher reward for all participants when collaborating rather than working alone. We show that computing a collaborative equilibrium is NP-complete in general, yet we design efficient algorithms to compute it in a range of natural model settings. Our collaboration mechanisms are in the standard model and thus require a central trusted party; however, we show this assumption is unnecessary under standard cryptographic assumptions: the mechanisms can be implemented in a decentralized way with new extensions of secure multiparty computation that impose order and timing constraints on output delivery to different players, as well as privacy and correctness.
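The collaborative-equilibrium condition described above is easy to state as a predicate: every participant must earn strictly more under the collaboration mechanism than working alone. The function name and the concrete reward vectors below are hypothetical; the paper's hardness and algorithmic results concern finding a mechanism that makes this predicate true, not checking it.

```python
def is_collaborative_equilibrium(solo_rewards, collab_rewards):
    """A mechanism achieves collaborative equilibrium when every
    participant's reward under collaboration strictly exceeds the
    reward that participant could earn alone.
    (Sketch of the definition; reward values are hypothetical.)"""
    return all(collab > solo
               for solo, collab in zip(solo_rewards, collab_rewards))
```

Checking the predicate for a fixed pair of reward vectors is linear in the number of parties; the NP-completeness the abstract mentions arises from searching over mechanisms, e.g. over output orderings.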

    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in making decisions. They are commonly used to guide users' judgements on online auction sites, or to determine the quality of contributions on Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by real-life stereotypes. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating the stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experience and perspective. Historical behavioral information, when available, can be used to refine the analysis. In our experiments on the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
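The aggregation step the abstract describes, matching a stranger's profile against locally formed stereotypes and combining their expected outcomes, can be sketched as follows. The data layout, the subset-matching rule, the experience-count weighting, and the neutral default of 0.5 are all assumptions for illustration, not the exact StereoTrust model.

```python
def stereotrust_score(stranger_profile, stereotypes):
    """Aggregate trust in a stranger from locally formed stereotypes.
    Each stereotype pairs a feature set with an observed success rate
    and the number of past interactions that formed it; a stereotype
    "matches" when its features are a subset of the stranger's profile.
    (Illustrative sketch, not the exact StereoTrust model.)"""
    matches = [s for s in stereotypes if s["features"] <= stranger_profile]
    if not matches:
        return 0.5  # no matching stereotype: neutral trust (assumption)
    # Weight each stereotype by how much experience backs it.
    total = sum(s["count"] for s in matches)
    return sum(s["success_rate"] * s["count"] for s in matches) / total
```

For example, if a trustor holds a stereotype for "seller" (80% successful over 10 deals) and one for "new account" (40% over 10 deals), a stranger matching both gets the experience-weighted average, 0.6.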

    Alignment Problems With Current Forecasting Platforms

    We present alignment problems in current forecasting platforms, such as Good Judgment Open, CSET-Foretell, or Metaculus. We classify these problems as either reward-specification problems or principal-agent problems, and we propose solutions. For instance, the scoring rule used by Good Judgment Open is not proper, and Metaculus tournaments disincentivize sharing information and incentivize distorting one's true probabilities to maximize the chance of placing in the top few positions, which earn a monetary reward. We also point out some partial similarities between the problem of aligning forecasters and the problem of aligning artificial intelligence systems. Comment: 39 pages, 13 figures
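The notion of a proper scoring rule that this abstract invokes has a simple operational test: under a proper rule, a forecaster's expected score is optimized by reporting their true belief. The Brier score is a standard strictly proper rule and makes the property easy to check numerically.

```python
def brier_score(report, outcome):
    """Brier score for a reported probability of a binary event.
    Lower is better; the Brier score is strictly proper."""
    return (report - outcome) ** 2

def expected_brier(report, true_belief):
    """Expected Brier score when the event truly occurs with
    probability true_belief but the forecaster reports `report`."""
    return (true_belief * brier_score(report, 1)
            + (1 - true_belief) * brier_score(report, 0))
```

With a true belief of 0.7, reporting 0.7 yields a strictly lower expected Brier score than distorting the report to 0.5 or 0.9, which is exactly the incentive property the abstract says some platform scoring rules lack.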

    On The (Honest) Truth About Dishonesty: How We Lie to Everyone, Especially Ourselves by Dan Ariely

    In this book, Dan Ariely continues the topic he began in his earlier book, Predictably Irrational: that there is logic and consistency behind irrational human thinking and actions. Ariely goes into more detail, narrowing the general topic of irrationality down to cheating, one of the areas where we can observe people acting irrationally, and within that to cheating in organizational environments.

    Rating Fraud Detection---Towards Designing a Trustworthy Reputation System

    Reputation systems can help consumers avoid transaction risk by providing historical consumer feedback. However, traditional reputation systems are vulnerable to rating manipulation, which undermines their trustworthiness and erodes users' satisfaction. To address this issue, this study uses real-world rating data from two travel websites, Tripadvisor.com and Expedia.com, and one e-commerce website, Amazon.com, to empirically examine the characteristics of fraudulent raters. Based on those characteristics, it proposes a new method for fraudulent-rater detection. First, the method examines the received rating series of each entity and filters out entities under attack (termed target entities). Second, a clustering-based method is applied to discriminate fraudulent raters. Experimental studies show that the proposed method detects fraudulent raters accurately while keeping the majority of normal users in the system, across various attack settings.
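The first step of the pipeline above, screening each entity's rating series for signs of an attack, can be sketched with a simple change-point heuristic: flag entities whose recent ratings diverge sharply from their historical median. The window size and threshold are hypothetical tuning parameters, and this is only an illustration of the filtering idea, not the study's actual detector (whose second, clustering stage is omitted here).

```python
import statistics

def flag_target_entities(entity_ratings, window=5, spike_threshold=1.5):
    """Step 1 (sketch): flag entities whose last `window` ratings
    diverge from the historical median by at least `spike_threshold`
    stars -- a possible sign of a rating attack.
    (Illustrative heuristic; parameters are assumptions.)"""
    targets = []
    for entity, series in entity_ratings.items():
        history, recent = series[:-window], series[-window:]
        if not history or not recent:
            continue  # too little data to compare
        shift = abs(statistics.median(recent) - statistics.median(history))
        if shift >= spike_threshold:
            targets.append(entity)
    return targets
```

An entity whose ratings drop from a steady 4-5 stars to a burst of 1-star reviews is flagged as a target entity; the clustering stage would then separate the burst's raters from normal reviewers.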