On the Simulation of Global Reputation Systems
Reputation systems have evolved as a mechanism to build trust in virtual communities. In this paper we evaluate different metrics for computing reputation in multi-agent systems. We present a formal model for describing metrics in reputation systems and show how different well-known global reputation metrics can be expressed in it. Based on this model, a generic simulation framework for reputation metrics was implemented. We used our simulation framework to compare different global reputation systems and find their strengths and weaknesses. The strength of a metric is measured by its resistance against different threat models, i.e. different types of hostile agents. Based on our results, we propose a new metric for reputation systems.
Keywords: Reputation System, Trust, Formalization, Simulation
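As an illustration of the kind of comparison such a framework supports, the Python sketch below pits a toy average-based global metric against a single "always defecting" threat model and measures the reputation gap between honest and hostile agents. The agent model, the metric, and the strength measure are illustrative assumptions, not the paper's actual framework or its proposed metric.

```python
import random

class Agent:
    def __init__(self, agent_id, hostile=False):
        self.id = agent_id
        self.hostile = hostile  # hostile agents defect in every transaction

def simulate(n_agents=50, n_hostile=10, rounds=2000, seed=0):
    rng = random.Random(seed)
    agents = [Agent(i, hostile=(i < n_hostile)) for i in range(n_agents)]
    received = {a.id: [] for a in agents}

    for _ in range(rounds):
        _, target = rng.sample(agents, 2)
        # Honest agents cooperate most of the time; hostile agents defect.
        cooperated = (not target.hostile) and rng.random() < 0.9
        received[target.id].append(1.0 if cooperated else 0.0)

    def reputation(agent_id):
        # Toy global metric: plain mean of all ratings ever received.
        r = received[agent_id]
        return sum(r) / len(r) if r else 0.5  # neutral prior for unknowns

    # Metric strength: how far apart honest and hostile agents end up.
    honest = [reputation(a.id) for a in agents if not a.hostile]
    hostile = [reputation(a.id) for a in agents if a.hostile]
    return sum(honest) / len(honest) - sum(hostile) / len(hostile)

print(f"honest-vs-hostile reputation gap: {simulate():.3f}")
```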
Trust beyond reputation: A computational trust model based on stereotypes
Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements on online auction sites or to determine the quality of contributions on Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by stereotypes as used in real life. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. According to our experiments using the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
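A minimal sketch of the aggregation step might look as follows, assuming stereotypes are kept as per-feature success/failure counters built from the trustor's own transactions and combined by evidence-weighted averaging; the feature scheme and the weighting are simplifications for illustration, not StereoTrust's actual definition.

```python
from collections import defaultdict

class StereoTrust:
    """Toy stereotype store: one (positive, negative) counter per feature."""

    def __init__(self):
        self.stereotypes = defaultdict(lambda: [0, 0])

    def observe(self, partner_features, success):
        # Update every stereotype matched by the past partner's profile.
        for f in partner_features:
            self.stereotypes[f][0 if success else 1] += 1

    def trust(self, stranger_features, prior=0.5):
        # Aggregate matching stereotypes, weighted by their evidence mass.
        pos = total = 0
        for f in stranger_features:
            p, n = self.stereotypes.get(f, (0, 0))
            pos, total = pos + p, total + p + n
        return pos / total if total else prior  # no match: neutral prior

st = StereoTrust()
st.observe({"seller", "electronics"}, success=True)
st.observe({"seller", "books"}, success=False)
print(st.trust({"seller", "electronics"}))  # ~0.67 from matching groups
```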
Flow-based reputation with uncertainty: Evidence-Based Subjective Logic
The concept of reputation is widely used as a measure of trustworthiness based on ratings from members of a community. The adoption of reputation systems, however, relies on their ability to capture the actual trustworthiness of a target. Several reputation models for aggregating trust information have been proposed in the literature. The choice of model has an impact on the reliability of the aggregated trust information as well as on the procedure used to compute reputations. Two prominent models are flow-based reputation (e.g., EigenTrust, PageRank) and Subjective Logic-based reputation. Flow-based models provide an automated method to aggregate trust information, but they are not able to express the level of uncertainty in the information. In contrast, Subjective Logic extends probabilistic models with an explicit notion of uncertainty, but the calculation of reputation depends on the structure of the trust network and often requires information to be discarded. These are severe drawbacks.
In this work, we observe that the 'opinion discounting' operation in Subjective Logic has a number of basic problems. We resolve these problems by providing a new discounting operator that describes the flow of evidence from one party to another. The adoption of our discounting rule results in a consistent Subjective Logic algebra that is entirely based on the handling of evidence. We show that the new algebra enables the construction of an automated reputation assessment procedure for arbitrary trust networks, where the calculation no longer depends on the structure of the network and does not need to throw away any information. Thus, we obtain the best of both worlds: flow-based reputation and consistent handling of uncertainties.
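The flow-of-evidence idea behind such a discounting operator can be sketched as follows, assuming the standard Subjective Logic mapping between evidence counts and opinions with prior weight W = 2, and assuming a discount factor equal to the belief in the recommender; the paper's exact scaling function may differ, so treat this as an outline of the approach rather than its definition.

```python
from dataclasses import dataclass

W = 2.0  # weight of the non-informative prior in the evidence mapping

@dataclass
class Opinion:
    r: float  # accumulated positive evidence
    s: float  # accumulated negative evidence

    @property
    def belief(self):
        return self.r / (self.r + self.s + W)

    @property
    def uncertainty(self):
        return W / (self.r + self.s + W)

def discount(trust_in_recommender: Opinion, recommendation: Opinion) -> Opinion:
    # Evidence-based discounting: only a fraction of the recommender's
    # evidence flows through, scaled by our belief in the recommender.
    g = trust_in_recommender.belief
    return Opinion(g * recommendation.r, g * recommendation.s)

def fuse(a: Opinion, b: Opinion) -> Opinion:
    # Fusion of independent opinions is plain evidence addition here.
    return Opinion(a.r + b.r, a.s + b.s)

alice_on_bob = Opinion(8, 0)   # Alice's trust in Bob as a recommender
bob_on_carol = Opinion(5, 1)   # Bob's direct experience with Carol
via_bob = discount(alice_on_bob, bob_on_carol)
print(via_bob.belief, via_bob.uncertainty)  # weaker and more uncertain
```

Because discounting and fusion both act directly on evidence counts, the result of combining opinions along a network does not depend on the order in which paths are processed, which is the structural independence the abstract refers to.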
Flow-based reputation: more than just ranking
Recent years have seen a growing interest in collaborative systems, such as electronic marketplaces and P2P file-sharing systems, in which people interact with other people. Such systems, however, are subject to security and operational risks because of their open and distributed nature. Reputation systems provide a mechanism to reduce such risks by building trust relationships among entities and identifying malicious entities. A popular reputation model is the so-called flow-based model. Most existing reputation systems based on this model provide only a ranking, without absolute reputation values; this makes it difficult to determine whether entities are actually trustworthy or untrustworthy. In addition, those systems ignore a significant part of the available information; as a consequence, reputation values may not be accurate. In this paper, we present a flow-based reputation metric that gives absolute values instead of merely a ranking. Our metric makes use of all the available information. We study, both analytically and numerically, the properties of the proposed metric and the effect of attacks on reputation values.
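As a rough illustration of a flow-based metric that yields absolute values, the sketch below computes each entity's score as a fixed point of rating averages weighted by the raters' own reputations. The damping parameter alpha and the iteration scheme are illustrative choices, not the paper's construction.

```python
import numpy as np

def flow_reputation(ratings, alpha=0.8, iters=50):
    """ratings[i][j] in (0, 1] is entity i's rating of entity j; 0 = none."""
    R = np.asarray(ratings, dtype=float)
    n = R.shape[0]
    # Local scores: each entity's plain mean received rating.
    counts = np.maximum((R > 0).sum(axis=0), 1)
    local = R.sum(axis=0) / counts
    rep = local.copy()
    for _ in range(iters):
        new = np.empty(n)
        for j in range(n):
            raters = R[:, j] > 0
            w = rep[raters]  # a rater's influence is its own reputation
            if w.sum() > 0:
                weighted = (w @ R[raters, j]) / w.sum()
                new[j] = alpha * weighted + (1 - alpha) * local[j]
            else:
                new[j] = local[j]  # unrated entities keep their local score
        rep = new
    return rep  # absolute values in [0, 1], not merely a ranking

ratings = [[0.0, 0.9, 0.2],
           [0.8, 0.0, 0.1],
           [0.9, 0.7, 0.0]]
print(flow_reputation(ratings))
```

Unlike an eigenvector-style ranking, each score here is a convex combination of ratings in [0, 1], so the values themselves can be read as trustworthiness estimates rather than only compared against each other.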
TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources
In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
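TRAVOS's probabilistic core is commonly described via the beta distribution over interaction outcomes; the sketch below assumes that formulation, reduces the model's confidence test to a simple evidence threshold, and omits the handling of inaccurate reports, so it is an outline rather than the model itself.

```python
def beta_trust(successes, failures):
    # Expected value of Beta(successes + 1, failures + 1).
    return (successes + 1) / (successes + failures + 2)

def travos_trust(direct, reports, min_evidence=10):
    """direct: own (successes, failures); reports: list of (s, f) pairs."""
    m, n = direct
    if m + n >= min_evidence:
        return beta_trust(m, n)  # enough personal experience: ignore gossip
    # Otherwise pool third-party reputation reports with direct evidence.
    for rm, rn in reports:
        m, n = m + rm, n + rn
    return beta_trust(m, n)

print(travos_trust(direct=(2, 1), reports=[(8, 0), (5, 2)]))  # 0.8
```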
Safeguarding E-Commerce against Advisor Cheating Behaviors: Towards More Robust Trust Models for Handling Unfair Ratings
In electronic marketplaces, after each transaction, buyers rate the products provided by sellers. To decide which sellers are most trustworthy to transact with, buyers rely on trust models that leverage these ratings to evaluate sellers' reputations. Although the designers of various trust models have claimed high effectiveness in handling unfair ratings, it has recently been argued that these models are vulnerable to more intelligent attacks, and there is an urgent demand that the robustness of existing trust models be evaluated in a more comprehensive way. In this work, we classify the existing trust models into two broad categories and propose an extensible e-marketplace testbed to evaluate their robustness against different unfair-rating attacks comprehensively. Besides showing that the robustness of the existing trust models in handling unfair ratings falls far short of what was claimed, we further propose and validate a novel combination mechanism for the existing trust models, Discount-then-Filter, which notably enhances their robustness against the investigated attacks.
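A minimal sketch of such a Discount-then-Filter combination is given below: ratings are first shrunk toward neutrality by each advisor's trustworthiness, and advisors whose discounted ratings still deviate sharply from the majority view are then filtered out. The trust scores, the majority-deviation test, and the threshold are hypothetical placeholders for the component trust models being combined.

```python
def discount_then_filter(ratings, advisor_trust, threshold=0.3):
    """ratings: {advisor: rating in [0, 1]}; advisor_trust: {advisor: [0, 1]}."""
    # Step 1 (discount): shrink each rating toward neutral 0.5 by trust.
    discounted = {a: advisor_trust[a] * r + (1 - advisor_trust[a]) * 0.5
                  for a, r in ratings.items()}
    # Step 2 (filter): drop advisors far from the discounted majority view.
    majority = sum(discounted.values()) / len(discounted)
    kept = {a: r for a, r in discounted.items()
            if abs(r - majority) <= threshold}
    if not kept:
        return majority  # everyone filtered: fall back to the majority view
    return sum(kept.values()) / len(kept)  # seller's estimated reputation

ratings = {"a1": 0.9, "a2": 0.85, "a3": 0.1}  # a3 rates unfairly low
trust = {"a1": 0.9, "a2": 0.8, "a3": 0.4}     # a3 is weakly trusted
print(discount_then_filter(ratings, trust))   # 0.82: a3 is filtered out
```

Discounting first means a weakly trusted attacker's rating is already pulled toward neutral before the deviation test runs, which makes the filter harder to evade than applying either defense alone.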