    Reputation in multi agent systems and the incentives to provide feedback

    The emergence of the Internet has led to a vast increase in the number of interactions between parties that are complete strangers to each other. In general, such transactions are vulnerable to fraud and cheating. If such systems use computerized rational agents to negotiate and execute transactions, mechanisms that lead to favorable outcomes for all parties, rather than giving rise to defective behavior, are necessary to make the system work: trust and reputation mechanisms. This paper examines different incentive mechanisms that help such trust and reputation mechanisms elicit honest reports of users' own experiences.
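One family of incentive mechanisms for honest feedback rewards a rater when their report agrees with an independent rating of the same provider. The following is a minimal illustrative sketch of such a peer-agreement side payment; the function name, parameters, and payment rule are assumptions for illustration, not the specific mechanism analyzed in the paper.

```python
# Hypothetical sketch: a peer-agreement side payment that rewards raters
# whose feedback matches an independent reference report on the same
# provider. The payment rule and defaults are illustrative assumptions.

def peer_agreement_reward(report: bool, reference_report: bool,
                          reward: float = 1.0, penalty: float = 0.5) -> float:
    """Pay `reward` if the two independent reports agree, charge `penalty`
    otherwise; when experiences are correlated, agreement-based payments
    make honest reporting a best response."""
    return reward if report == reference_report else -penalty
```

The key design point is that the payment depends only on comparing reports, not on verifying the underlying experience, which is typically unobservable.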

    Survey on social reputation mechanisms: Someone told me I can trust you

    Nowadays, most business and social interactions have moved to the internet, highlighting the relevance of creating online trust. One way to obtain a measure of trust is through reputation mechanisms, which record one's past performance and interactions to generate a reputational value. We observe that numerous existing reputation mechanisms share similarities with actual social phenomena; we call such mechanisms 'social reputation mechanisms'. The aim of this paper is to discuss several social phenomena and map these to existing social reputation mechanisms in a variety of scopes. First, we focus on reputation mechanisms in the individual scope, in which everyone is responsible for their own reputation. Subjective reputational values may be communicated to different entities in the form of recommendations. Second, we discuss social reputation mechanisms in the acquaintances scope, where one's reputation can be tied to another's through vouching or invite-only networks. Finally, we present existing social reputation mechanisms in the neighbourhood scope. In such systems, one's reputation can be heavily affected by the behaviour of others in their neighbourhood or social group.
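In the acquaintances scope, vouching ties reputations together: if a vouched-for member misbehaves, part of the reputation loss can propagate to the voucher. The sketch below illustrates that idea under stated assumptions; the propagation factor, data layout, and names are hypothetical, not taken from the survey.

```python
# Illustrative vouching link: a fraction of any reputation loss incurred
# by a member also propagates to whoever vouched for them. The 0.5
# propagation factor and the dictionaries are assumptions for illustration.

reputation = {"alice": 10.0, "bob": 5.0}
voucher_of = {"bob": "alice"}  # alice vouched for bob

def penalize(member: str, loss: float, propagation: float = 0.5) -> None:
    """Subtract `loss` from the member, and a discounted share from their voucher."""
    reputation[member] -= loss
    voucher = voucher_of.get(member)
    if voucher is not None:
        reputation[voucher] -= propagation * loss

penalize("bob", 2.0)
# bob falls from 5.0 to 3.0; alice, his voucher, falls from 10.0 to 9.0
```

Because vouchers share in the penalty, they have an incentive to vouch only for members they genuinely trust.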

    Trust and reputation policy-based mechanisms for self-protection in autonomic communications

    Currently, there is an increasing tendency to migrate the management of communications and information systems onto the Web. This is making many traditional service support models obsolete. In addition, current security mechanisms are not sufficiently robust to protect each management system and/or subsystem from web-based intrusions, malware, and hacking attacks. This paper presents research challenges in autonomic management to provide self-protection mechanisms and tools by using trust and reputation concepts based on policy-based management to decentralize management decisions. This work also uses user-based reputation mechanisms to help enforce trust management in pervasive communications services. The scope of this research is founded in social models, where trust and reputation applied in communication systems help detect both potential users and hackers attempting to corrupt management operations and services. These so-called "cheating services" act as "attacks", altering the performance and security of communication systems by unnecessarily consuming computing or network resources.

    Expressing Trust with Temporal Frequency of User Interaction in Online Communities

    Reputation systems concern soft security dynamics in diverse areas. Trust dynamics in a reputation system should be stable and adaptable at the same time to serve their purpose. Many reputation mechanisms have been proposed and tested over time. However, the main drawback of reputation management is that users need to share private information, such as phone numbers, reviews, and ratings, to gain trust in a system. Recently, a novel model that tries to overcome this issue was presented: the Dynamic Interaction-based Reputation Model (DIBRM). This approach to trust considers only implicit information automatically deduced from the interactions of users within an online community. In this primary research study, the Reddit and MathOverflow online social communities have been selected for testing DIBRM. Results show how this novel approach to trust can mimic the behaviors of the selected reputation systems, namely Reddit and MathOverflow, using only temporal information.
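The core idea of an interaction-based model such as DIBRM is that reputation grows with the frequency of a user's interactions and decays during periods of inactivity. The sketch below shows one simple way to express that with exponential temporal decay; the update rule and decay rate are illustrative assumptions, not the model's published equations.

```python
import math

# Hedged sketch of an interaction-frequency reputation score with
# exponential temporal decay, in the spirit of using only implicit
# temporal information. The decay rate and fixed increment are
# assumptions for illustration.

def update_reputation(current: float, last_time: float, now: float,
                      increment: float = 1.0, decay_rate: float = 0.1) -> float:
    """Decay the old value over the elapsed time since the last
    interaction, then add a fixed increment for the new interaction."""
    decayed = current * math.exp(-decay_rate * (now - last_time))
    return decayed + increment
```

Frequent interactions compound before much decay occurs, so active users accumulate reputation faster than sporadic ones, without any explicit ratings or private data.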

    Trust and reputation management in decentralized systems

    In large, open and distributed systems, agents are often used to represent users and act on their behalf. Agents can provide good or bad services and act honestly or dishonestly. Trust and reputation mechanisms are used to distinguish good services from bad ones, and honest agents from dishonest ones. My research is focused on trust and reputation management in decentralized systems. Compared with centralized systems, decentralized systems make it more difficult and less efficient for agents to find and collect the information needed to build trust and reputation. In this thesis, I propose a Bayesian network-based trust model. It provides a flexible way to represent differentiated trust and to combine different aspects of trust to meet agents' different needs. As a complementary element, I propose a super-agent based approach that facilitates reputation management in decentralized networks. Allowing super-agents to form interest-based communities further enables flexible reputation management among groups of agents. A reward mechanism creates incentives for super-agents to contribute their resources and to be honest. As a single package, my work promotes effective, efficient and flexible trust and reputation management in decentralized systems.
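A common building block in Bayesian trust models is to treat an agent's probability of behaving well as a Beta-distributed random variable updated from observed outcomes. The sketch below shows that standard construction; it is a generic beta-reputation estimate, not the thesis's full Bayesian network model.

```python
# Standard beta-reputation building block often used in Bayesian trust
# models: trust is the expected value of a Beta(positive+1, negative+1)
# posterior over the agent's probability of behaving well, starting
# from a uniform Beta(1, 1) prior. This is a generic sketch, not the
# thesis's specific model.

def beta_trust(positive: int, negative: int) -> float:
    """Expected probability of good behavior given counts of good and
    bad past interactions, under a uniform prior."""
    return (positive + 1) / (positive + negative + 2)
```

With no observations the estimate is a neutral 0.5, and it shifts toward 1.0 or 0.0 as evidence accumulates; per-aspect counts can be kept separately to represent differentiated trust.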

    Decentralized reputation-based trust for assessing agent reliability under aggregate feedback

    Reputation mechanisms allow agents to establish trust in other agents' intentions and capabilities in the absence of direct interactions. In this paper, we are concerned with establishing trust on the basis of reputation information in open, decentralized systems of interdependent autonomous agents. We present a completely decentralized reputation mechanism to increase the accuracy of agents' assessments of other agents' capabilities and allow them to develop appropriate levels of trust in each other as providers of reliable information. Computer simulations show the reputation system's ability to track an agent's actual capabilities.
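When trust must be built from third-party reports of varying reliability, one simple aggregation is to weight each report by the trust placed in its reporter, so unreliable informants influence the estimate less. The sketch below illustrates that idea; the weighting rule and neutral fallback are assumptions for illustration, not the paper's exact mechanism.

```python
# Illustrative decentralized aggregation: combine third-party ratings of
# a provider, weighting each rating by the trust placed in its reporter.
# The trust-weighted mean and the 0.5 fallback are assumptions.

def weighted_reputation(reports: list[tuple[float, float]]) -> float:
    """Each tuple is (rating, reporter_trust); returns the trust-weighted
    mean rating, or a neutral 0.5 when no report carries any weight."""
    total_weight = sum(trust for _, trust in reports)
    if total_weight == 0:
        return 0.5  # no usable information: fall back to a neutral prior
    return sum(rating * trust for rating, trust in reports) / total_weight
```

Updating each reporter's trust according to how well their reports predict later direct experience then closes the loop, letting the estimate track an agent's actual capabilities.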

    Towards a Model of Open and Reliable Cognitive Multiagent Systems: Dealing with Trust and Emotions

    Open multiagent systems are those in which agents can enter or leave the system freely, so any entity with unknown intentions can occupy the environment. In this scenario, trust and reputation mechanisms should be used to choose partners when requesting services or delegating tasks. Trust and reputation models have been proposed in the multiagent systems area as a way to help agents select good partners and thereby improve interactions between them. Most of the trust and reputation models proposed in the literature address their functional aspects, but not how they affect the reasoning cycle of the agent. That is, from the agent's perspective, a trust model is usually just a "black box", and agents usually do not take their emotional state into account when making decisions, as humans often do. Like trust, agents' emotions have also been studied with the aim of making the actions and reactions of agents more like those of human beings, in order to imitate their reasoning and decision-making mechanisms. In this paper we analyse some models found in the literature and propose a BDI and multi-context based agent model that includes emotional reasoning to guide trust and reputation in open multiagent systems.