A computation trust model with trust network in multi-agent systems
Trust is a fundamental issue in multi-agent systems, especially when they are applied in e-commerce. Computational models of trust play an important role in determining whom to interact with, and how, in open and dynamic environments. To this end, a computational trust model is proposed that combines confidence information, based on direct prior interactions with the target agent, with reputation information drawn from a trust network. In this way, agents can autonomously deal with deception and identify trustworthy parties in multi-agent systems. The ontological property of trust is also considered in the model. A case study is provided to show the effectiveness of the proposed model.
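The combination the abstract describes can be illustrated with a minimal sketch. The function name, the saturation parameter, and the specific weighting scheme are assumptions for illustration, not the paper's actual formulation:

```python
# Hypothetical sketch: blend direct-interaction confidence with network
# reputation, weighting direct experience more as evidence accumulates.
def combined_trust(direct_score, direct_count, reputation_score, saturation=10):
    """Return a trust value in [0, 1] given scores in [0, 1].

    `saturation` (an illustrative assumption) controls how many direct
    interactions are needed before direct experience dominates reputation.
    """
    w = direct_count / (direct_count + saturation)  # grows toward 1 with evidence
    return w * direct_score + (1 - w) * reputation_score

# With few direct interactions, the network reputation dominates;
# with many, the agent's own experience dominates.
print(combined_trust(0.9, 2, 0.5))
print(combined_trust(0.9, 50, 0.5))
```

The saturation-based weight is one common way to interpolate between the two information sources; the original model may use a different aggregation.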
State of the art review of the existing soft computing based approaches to trust and reputation computation
In this paper we present a state-of-the-art review of PageRank™-based approaches for trust and reputation computation. We divide the approaches that use the PageRank™ method for trust and reputation computation into six classes, each of which is discussed in this paper.
Generating citizen trust in e-government using a trust verification agent: A research note
This is an eGISE network paper. It is motivated by a concern about the extent to which trust issues inhibit a citizen's take-up of online public sector services or engagement with public decision and policy making. A citizen's decision to use online systems is influenced by their willingness to trust the environment and agency involved. This project addresses one aspect of individual "trust" decisions by providing support for citizens trying to evaluate the implications of the security infrastructure provided by the agency. Based on studies of the way both groups (citizens and agencies) express their concerns and concepts in the security area, the project will develop a software tool, a trust verification agent (TVA), that can take an agency's security statements (or security audit) and infer how effectively these meet the security concerns of a particular citizen. This will enable citizens to state their concerns and obtain an evaluation of the agency's provision in appropriate "citizen friendly" language. Further, by employing rule-based expert systems techniques, the TVA will also be able to explain its evaluation. Engineering and Physical Sciences Research Council, UK (grant GR/T27020/01).
Repage: REPutation and ImAGE Among Limited Autonomous Partners
This paper introduces Repage, a computational system that adopts a cognitive theory of reputation. We propose a fundamental difference between image and reputation, which suggests a way out from the paradox of sociality, i.e. the trade-off between agents' autonomy and their need to adapt to the social environment. On one hand, agents are autonomous if they select partners based on their own social evaluations (images). On the other, they need to update those evaluations by taking others' into account. Hence, social evaluations must circulate and be represented as "reported evaluations" (reputation), before and in order for agents to decide whether to accept them or not. To represent this level of cognitive detail in artificial agents' design, there is a need for a specialised subsystem, which we are in the course of developing for the public domain. In the paper, after a short presentation of the cognitive theory of reputation and its motivations, we describe the implementation of Repage. Keywords: Reputation, Agent Systems, Cognitive Design, Fuzzy Evaluation.
Computing Confidence Values: Does Trust Dynamics Matter?
In a multi-agent system, the selection of other agents for joint work depends on the trust that is expected in those agents' performance. We propose a way of aggregating evidence in order to compute a function whose result is a confidence value. Computational Trust and Reputation (CTR) systems are platforms capable of collecting trust information about candidate partners and of computing confidence scores for each of these partners. These systems are starting to be viewed as vital elements in electronic institution environments, as they support fundamental decision-making processes, such as the selection of business partners and the automatic and adaptive creation of contractual terms and associated enforcement methodologies. In this article, we propose a model for the aggregation of trust evidence that computes confidence scores taking into account dynamic properties of trust. We compare our model with a traditional statistical model that uses weighted means to compute trust, and present experimental results showing that in certain scenarios considering trust dynamics allows a better estimation of confidence scores.
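The contrast the abstract draws, a static statistical aggregation versus one sensitive to trust dynamics, can be sketched minimally. The exponential-decay weighting below is an illustrative assumption standing in for "dynamic properties of trust", not the authors' actual model:

```python
def static_trust(evidences):
    """Baseline statistical model: plain average of past outcome scores in [0, 1]."""
    return sum(evidences) / len(evidences)

def dynamic_trust(evidences, decay=0.7):
    """Illustrative dynamic model: recent evidence weighs exponentially more.

    `decay` (an assumption for this sketch) sets how fast old evidence fades;
    the last element of `evidences` is the most recent interaction.
    """
    n = len(evidences)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # oldest gets smallest weight
    return sum(w * e for w, e in zip(weights, evidences)) / sum(weights)

# A partner with a good past but recent failures: the recency-aware score
# drops below the plain average, flagging the deteriorating behaviour.
history = [1.0, 1.0, 0.0, 0.0]
print(static_trust(history))
print(dynamic_trust(history))
```

The point of the comparison is that when behaviour changes over time, a plain mean reacts slowly, while a recency-weighted aggregation tracks the change, which is one plausible reading of why trust dynamics can improve confidence estimation.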
Opinion dynamics: models, extensions and external effects
Recently, social phenomena have received a lot of attention not only from
social scientists, but also from physicists, mathematicians and computer
scientists, in the emerging interdisciplinary field of complex system science.
Opinion dynamics is one of the processes studied, since opinions are the
drivers of human behaviour, and play a crucial role in many global challenges
that our complex world and societies are facing: global financial crises,
global pandemics, growth of cities, urbanisation and migration patterns, and
last but not least, climate change and environmental sustainability
and protection. Opinion formation is a complex process affected by the
interplay of different elements, including the individual predisposition, the
influence of positive and negative peer interaction (social networks playing a
crucial role in this respect), the information each individual is exposed to,
and many others. Several models inspired from those in use in physics have been
developed to encompass many of these elements, and to allow for the
identification of the mechanisms involved in the opinion formation process and
the understanding of their role, with the practical aim of simulating opinion
formation and spreading under various conditions. These modelling schemes range
from binary simple models such as the voter model, to multi-dimensional
continuous approaches. Here, we provide a review of recent methods, focusing on
models employing both peer interaction and external information, and
emphasising the role that less studied mechanisms, such as disagreement, have in
driving the opinion dynamics. [...]

Comment: 42 pages, 6 figures
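The voter model mentioned in the abstract as the simplest binary scheme can be sketched in a few lines. The complete-graph topology and population size here are illustrative choices, not taken from the review:

```python
import random

def voter_model_step(opinions, neighbors):
    """One voter-model update: a random agent copies a random neighbor's opinion."""
    i = random.randrange(len(opinions))
    j = random.choice(neighbors[i])
    opinions[i] = opinions[j]

# Complete graph of 20 agents with binary opinions; on a finite connected
# graph the voter model reaches consensus with probability 1.
random.seed(0)
n = 20
opinions = [random.randint(0, 1) for _ in range(n)]
neighbors = [[j for j in range(n) if j != i] for i in range(n)]
steps = 0
while len(set(opinions)) > 1:
    voter_model_step(opinions, neighbors)
    steps += 1
print("consensus opinion:", opinions[0], "after", steps, "steps")
```

Multi-dimensional continuous models (e.g. bounded-confidence schemes) replace the copy rule with a partial attraction between real-valued opinions, but the simulation loop has the same shape.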