
    Reputation in multi-agent systems and the incentives to provide feedback

    The emergence of the Internet has led to a vast increase in the number of interactions between parties that are complete strangers to each other. In general, such transactions are likely to be subject to fraud and cheating. If such systems use computerized rational agents to negotiate and execute transactions, mechanisms that lead to favorable outcomes for all parties, rather than giving rise to defective behavior, are necessary to make the system work: trust and reputation mechanisms. This paper examines different incentive mechanisms that help these trust and reputation mechanisms elicit honest reporting of users' own experiences.
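    As an illustration of the kind of incentive mechanism discussed here, the sketch below implements a generic output-agreement side payment: a feedback report is rewarded only when it matches a randomly chosen peer report about the same provider. This is an assumed, minimal example, not necessarily the mechanism analysed in the paper; the reward value and reporting format are hypothetical.

```python
import random

REWARD = 1.0  # hypothetical side payment for a matching report

def settle_rewards(reports):
    """reports: dict mapping rater id -> binary experience (1 = good, 0 = bad)."""
    payments = {}
    for rater, report in reports.items():
        peers = [r for other, r in reports.items() if other != rater]
        if not peers:
            payments[rater] = 0.0
            continue
        reference = random.choice(peers)  # reference report drawn at random
        payments[rater] = REWARD if report == reference else 0.0
    return payments

# With a mostly honest population, truthful reports agree with the reference
# more often and therefore earn higher expected payments.
print(settle_rewards({"a": 1, "b": 1, "c": 0}))
```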

    Trust Strategies for the Semantic Web

    Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust seems to focus on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common trust strategies and discuss their envisaged costs and benefits. The aim is to provide some guidelines to help system developers appreciate the risks and gains involved with each trust strategy.
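    Since the abstract does not name the five strategies, the sketch below only illustrates the general idea of pluggable trust strategies: an agent delegates the "should I interact?" decision to an interchangeable policy. The strategy names and thresholds are placeholders, not the ones identified in the paper.

```python
from typing import Callable, Dict

# A trust strategy maps (agent id, known ratings) to an interact / don't-interact decision.
TrustStrategy = Callable[[str, Dict[str, float]], bool]

def optimistic(agent: str, ratings: Dict[str, float]) -> bool:
    return True  # trust everyone by default (placeholder strategy)

def pessimistic(agent: str, ratings: Dict[str, float]) -> bool:
    return ratings.get(agent, 0.0) >= 0.9  # trust only very well-rated agents

def reputation_based(agent: str, ratings: Dict[str, float]) -> bool:
    return ratings.get(agent, 0.5) >= 0.6  # rely on an aggregated reputation score

def decide(strategy: TrustStrategy, agent: str, ratings: Dict[str, float]) -> bool:
    return strategy(agent, ratings)

ratings = {"seller42": 0.7}
print(decide(reputation_based, "seller42", ratings))  # True under this strategy
```

    Each placeholder trades risk against cost differently: the optimistic policy maximizes interaction opportunities but absorbs every bad encounter, while the pessimistic one forgoes most interactions to avoid them; comparing such costs and benefits is what the paper sets out to do.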

    Collusion in Peer-to-Peer Systems

    Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal. For example, when mission-critical operations or bank transactions are involved, their effectiveness strongly depends on the perception that users have of the system's dependability and trustworthiness. A major threat to the security of these systems is the phenomenon of collusion. Peers can be selfish colluders, when they try to fool the system to gain unfair advantages over other peers, or malicious, when their purpose is to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks in peer-to-peer systems, very few of them are meant to directly counter colluders and their attacks. Reputation, micro-payments, and concepts of game theory are currently used as the main means to obtain fairness in the usage of resources. Our goal is to provide an overview of the topic by examining the key issues involved. We measure the relevance of the problem in the current literature and the effectiveness of existing approaches against it, in order to suggest fruitful directions for the further development of the field.
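    As a concrete illustration of what selfish collusion can look like in a reputation-based system, the toy heuristic below flags pairs of peers who rate each other far more generously than the rest of the network rates them. It is an assumed example of the problem, not a detection technique taken from the survey; the rating format and threshold are hypothetical.

```python
from itertools import combinations

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

def suspicious_pairs(ratings, gap=0.4):
    """ratings: dict (rater, ratee) -> score in [0, 1]; flag mutually inflated pairs."""
    peers = {p for pair in ratings for p in pair}
    flagged = []
    for a, b in combinations(sorted(peers), 2):
        mutual = [ratings.get((a, b)), ratings.get((b, a))]
        if None in mutual:
            continue  # only consider pairs that rate each other
        outside_a = [s for (r, t), s in ratings.items() if t == a and r != b]
        outside_b = [s for (r, t), s in ratings.items() if t == b and r != a]
        if mean(mutual) - mean(outside_a + outside_b) >= gap:
            flagged.append((a, b))
    return flagged

ratings = {("p1", "p2"): 1.0, ("p2", "p1"): 1.0,
           ("p3", "p1"): 0.3, ("p3", "p2"): 0.2}
print(suspicious_pairs(ratings))  # [('p1', 'p2')]
```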

    A Formal Framework for Modeling Trust and Reputation in Collective Adaptive Systems

    Trust and reputation models for distributed, collaborative systems have been studied and applied in several domains in order to stimulate cooperation while preventing selfish and malicious behaviors. Nonetheless, such models have received less attention in the process of formally specifying and analyzing the functionalities of the systems mentioned above. The objective of this paper is to define a process algebraic framework for modeling systems that use (i) trust and reputation to govern the interactions among nodes, and (ii) communication models characterized by a high level of adaptiveness and flexibility. Hence, we propose a formalism for verifying, through model checking techniques, the robustness of these systems with respect to the typical attacks conducted against webs of trust.
    Comment: In Proceedings FORECAST 2016, arXiv:1607.0200
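    The paper's contribution is a process algebraic framework with model checking support; the toy code below only conveys the flavour of that approach by exhaustively exploring every bounded interaction trace of a hypothetical trust-update rule and checking an invariant on all reachable states. The update rule, initial trust value, and exploration depth are assumptions made for the example.

```python
from collections import deque

def update(trust, outcome):
    # hypothetical rule: reward good interactions mildly, punish bad ones harshly
    return min(1.0, trust + 0.1) if outcome == "good" else max(0.0, trust * 0.5)

def check_invariant(depth=8):
    """Explore all interaction traces up to `depth` and verify trust stays in [0, 1]."""
    frontier = deque([(0.5, 0)])  # (current trust value, trace length)
    while frontier:
        trust, steps = frontier.popleft()
        if not (0.0 <= trust <= 1.0):
            return False  # counterexample trace found
        if steps < depth:
            for outcome in ("good", "bad"):
                frontier.append((update(trust, outcome), steps + 1))
    return True  # invariant holds on every explored trace

print(check_invariant())  # True
```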

    Crowd-sourcing with uncertain quality - an auction approach

    This article addresses two important issues in crowd-sourcing: ex ante uncertainty about the quality and cost of different workers, and strategic behaviour. We present a novel multi-dimensional auction that incentivises the workers to make a partial enquiry into the task and to honestly report quality-cost estimates, based on which the crowd-sourcer can choose the worker that offers the best value for money. The mechanism extends the second-score auction design to settings where the quality is uncertain, and it provides incentives both to collect information and to deliver the desired qualities.
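    The abstract says the mechanism extends the second-score auction, so a minimal sketch of the standard second-score rule may help: each worker bids a (quality, cost) pair, the crowd-sourcer scores bids by value-of-quality minus cost, awards the task to the highest score, and pays the winner so that the winner's realized score equals the second-highest score. The valuation function and the bids below are illustrative assumptions, not figures from the article.

```python
def value_of_quality(q):
    return 10.0 * q  # hypothetical valuation the crowd-sourcer places on quality q

def second_score_auction(bids):
    """bids: dict worker -> (promised quality, asked cost); returns (winner, payment)."""
    scores = {w: value_of_quality(q) - c for w, (q, c) in bids.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # The winner must deliver a (quality, payment) pair whose score equals the
    # second-highest score; keeping the promised quality fixed determines the payment.
    q_win, _ = bids[winner]
    payment = value_of_quality(q_win) - scores[runner_up]
    return winner, payment

bids = {"alice": (0.9, 3.0), "bob": (0.7, 2.0), "carol": (0.5, 1.5)}
print(second_score_auction(bids))  # ('alice', 4.0): alice's margin over her asked cost equals her score lead
```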