Rational Trust Modeling
Trust models are widely used in various computer science disciplines. The
main purpose of a trust model is to continuously measure trustworthiness of a
set of entities based on their behaviors. In this article, the novel notion of
"rational trust modeling" is introduced by bridging trust management and game
theory. Note that trust models and reputation systems have been used in game
theory (e.g., in repeated games) for a long time; however, game theory has not
been utilized in the process of trust model construction, and this is where the
novelty of our approach lies. In our proposed setting, the designer of a trust
of our approach comes from. In our proposed setting, the designer of a trust
model assumes that the players who intend to utilize the model are
rational/selfish, i.e., they decide to become trustworthy or untrustworthy
based on the utility that they can gain. In other words, the players are
incentivized (or penalized) by the model itself to act properly. The problem of
trust management can then be approached through game-theoretical analyses and
solution concepts such as Nash equilibrium. Although rationality might be
built-in in some existing trust models, we intend to formalize the notion of
rational trust modeling from the designer's perspective. This approach will
result in two fascinating outcomes. First of all, the designer of a trust model
can incentivize trustworthiness in the first place by incorporating proper
parameters into the trust function, which can be later utilized among selfish
players in strategic trust-based interactions (e.g., e-commerce scenarios).
Furthermore, using a rational trust model, we can prevent many well-known
attacks on trust models. These two prominent properties also help us to predict
the behavior of the players in subsequent steps through game-theoretical analyses.
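The incentive effect described above can be illustrated with a toy example (not taken from the paper): a one-shot trust game in which a hypothetical reputation-penalty parameter of the trust function shifts the Nash equilibrium from mutual cheating to mutual trustworthiness.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game.

    payoffs[(a, b)] = (u1, u2) for row action a and column action b.
    """
    actions1 = {a for a, _ in payoffs}
    actions2 = {b for _, b in payoffs}
    equilibria = []
    for a, b in product(actions1, actions2):
        u1, u2 = payoffs[(a, b)]
        # (a, b) is an equilibrium iff each action is a best response
        if (u1 >= max(payoffs[(x, b)][0] for x in actions1)
                and u2 >= max(payoffs[(a, y)][1] for y in actions2)):
            equilibria.append((a, b))
    return equilibria

def trust_game(reputation_penalty):
    """Hypothetical e-commerce interaction: each player is trustworthy (T)
    or cheats (C). Cheating yields a short-term gain, but the trust model
    deducts `reputation_penalty` from the cheater's future utility."""
    base = {("T", "T"): (3, 3), ("T", "C"): (0, 5),
            ("C", "T"): (5, 0), ("C", "C"): (1, 1)}
    return {(a, b): (u1 - (reputation_penalty if a == "C" else 0),
                     u2 - (reputation_penalty if b == "C" else 0))
            for (a, b), (u1, u2) in base.items()}
```

With a penalty of 0 this is a prisoner's dilemma whose only equilibrium is mutual cheating; a penalty of 3 makes mutual trustworthiness the unique equilibrium, which is exactly the kind of designer-side incentive the abstract describes.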
Network-aware Evaluation Environment for Reputation Systems
Parties of reputation systems rate each other and use ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the
most suitable model and to determine the right initial configuration. This calls for an engineering approach for describing, implementing and evaluating reputation
systems while taking into account specific aspects of both the reputation systems and the networked environment where they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters. To demonstrate the effectiveness of NEVER, we analyse reputation models based on the beta distribution and on maximum likelihood estimation.
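As an illustration of the two model families mentioned above, here is a minimal sketch (not NEVER's own code) of the beta-distribution reputation score, i.e. the posterior mean of a Beta distribution over the probability of good behaviour, alongside the corresponding maximum likelihood estimate:

```python
def beta_reputation(positive, negative):
    """Beta reputation score: posterior mean of Beta(positive+1, negative+1),
    starting from a uniform prior over the probability of good behaviour."""
    return (positive + 1) / (positive + negative + 2)

def ml_estimate(positive, negative):
    """Maximum likelihood estimate of the probability of good behaviour;
    falls back to 0.5 when there is no evidence at all."""
    total = positive + negative
    return positive / total if total else 0.5
```

The beta score is pulled toward 0.5 when evidence is scarce, while the MLE uses the raw frequency; the gap between the two shrinks as ratings accumulate.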
A decidable policy language for history-based transaction monitoring
Online trading invariably involves dealings between strangers, so it is
important for one party to be able to judge objectively the trustworthiness of
the other. In such a setting, the decision to trust a user may sensibly be
based on that user's past behaviour. We introduce a specification language
based on linear temporal logic for expressing a policy for categorising the
behaviour patterns of a user depending on its transaction history. We also
present an algorithm for checking whether the transaction history obeys the
stated policy. To be useful in a real setting, such a language should allow one
to express realistic policies which may involve parameter quantification and
quantitative or statistical patterns. We introduce several extensions of linear
temporal logic to cater for such needs: a restricted form of universal and
existential quantification; arbitrary computable functions and relations in the
term language; and a "counting" quantifier for counting how many times a
formula holds in the past. We then show that model checking a transaction
history against a policy, which we call the history-based transaction
monitoring problem, is PSPACE-complete in the size of the policy formula and
the length of the history. The problem becomes decidable in polynomial time
when the policies are fixed. We also consider the problem of transaction
monitoring in the case where not all the parameters of actions are observable.
We formulate two such "partial observability" monitoring problems, and show
their decidability under certain restrictions.
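To make the counting quantifier concrete, the following is a hypothetical sketch, not the paper's formalism: a fixed policy over a transaction history that bounds how many times a predicate held in the past. Checking such a fixed policy takes linear time in the history, consistent with the polynomial-time case for fixed policies stated above.

```python
def count_past(history, predicate):
    """Counting quantifier: how many past events satisfy `predicate`."""
    return sum(1 for event in history if predicate(event))

def check_policy(history, max_failures=2):
    """Hypothetical history-based policy: the user is categorised as
    trustworthy as long as at most `max_failures` of their past
    transactions were marked 'failed'."""
    return count_past(history, lambda e: e["status"] == "failed") <= max_failures
```

A policy language would let such bounds be combined with temporal operators and parameter quantification; this sketch only shows the counting core.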
Stereotype reputation with limited observability
Assessing trust and reputation is essential in multi-agent systems where agents must decide who to interact with. Assessment typically relies on the direct experience of a trustor with a trustee agent, or on information from witnesses. Where direct or witness information is unavailable, such as when agent turnover is high, stereotypes learned from common traits and behaviour can provide this information. Such traits may be only partially or subjectively observed, with witnesses not observing traits of some trustees or interpreting their observations differently. Existing stereotype-based techniques are unable to account for such partial observability and subjectivity. In this paper we propose a method for extracting information from witness observations that enables stereotypes to be applied in partially and subjectively observable dynamic environments. Specifically, we present a mechanism for learning translations between observations made by trustor and witness agents with subjective interpretations of traits. We show through simulations that such translation is necessary for reliable reputation assessments in dynamic environments with partial and subjective observability.
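A minimal sketch of the translation idea, under the simplifying assumption (not from the paper) that a translation is learned by majority vote over trustees that both the witness and the trustor have observed:

```python
from collections import Counter, defaultdict

def learn_translation(co_observations):
    """Learn a translation from witness trait labels to the trustor's own
    labels, given (witness_label, trustor_label) pairs collected from
    trustees that both agents observed. The most frequent co-occurring
    trustor label is taken as the translation (simple majority vote)."""
    votes = defaultdict(Counter)
    for witness_label, trustor_label in co_observations:
        votes[witness_label][trustor_label] += 1
    return {w: counts.most_common(1)[0][0] for w, counts in votes.items()}
```

Once learned, such a table lets the trustor reinterpret witness reports in its own trait vocabulary before feeding them into a stereotype model.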
Detecting and reacting to changes in reputation flows
Proceedings volume: IFIP AICT 358/2011. Peer reviewed.
Trust and Reputation Modelling for Tourism Recommendations Supported by Crowdsourcing
Tourism crowdsourcing platforms have a profound influence on tourist
behaviour, particularly in terms of travel planning. Not only do they hold
the opinions shared by other tourists concerning tourism resources, but,
with the help of recommendation engines, they are also the pillar of
personalised resource recommendation. However, since prospective tourists
are unaware of the trustworthiness or reputation of crowd publishers, they
are in fact taking a leap of faith when they rely on the wisdom of the
crowd. In this paper, we argue that modelling publisher Trust &
Reputation improves the quality of the tourism recommendations supported
by crowdsourced information. Therefore, we present a tourism
recommendation system which integrates: (i) user profiling using
multi-criteria ratings; (ii) k-Nearest Neighbours (k-NN) prediction of the
user ratings; (iii) Trust & Reputation modelling; and (iv) incremental
model update, i.e., providing near real-time recommendations. In terms
of contributions, this paper provides two different Trust & Reputation
approaches: (i) general reputation employing the pairwise trust values
using all users; and (ii) neighbour-based reputation employing the pairwise
trust values of the common neighbours. The proposed method was evaluated
using crowdsourced datasets from the Expedia and TripAdvisor platforms.
Evidence Propagation and Consensus Formation in Noisy Environments
We study the effectiveness of consensus formation in multi-agent systems
where there is both belief updating based on direct evidence and also belief
combination between agents. In particular, we consider the scenario in which a
population of agents collaborate on the best-of-n problem where the aim is to
reach a consensus about which is the best (alternatively, true) state from
amongst a set of states, each with a different quality value (or level of
evidence). Agents' beliefs are represented within Dempster-Shafer theory by
mass functions and we investigate the macro-level properties of four well-known
belief combination operators for this multi-agent consensus formation problem:
Dempster's rule, Yager's rule, Dubois & Prade's operator and the averaging
operator. The convergence properties of the operators are considered and
simulation experiments are conducted for different evidence rates and noise
levels. Results show that a combination of updating on direct evidence and
belief combination between agents results in better consensus to the best state
than does evidence updating alone. We also find that in this framework the
operators are robust to noise. Broadly, Yager's rule is shown to be the
best-performing operator across the parameter values considered, i.e. in
convergence to the best state, robustness to noise, and scalability.
Comment: 13th International Conference on Scalable Uncertainty Management
Expressing Trust with Temporal Frequency of User Interaction in Online Communities
Reputation systems concern soft security dynamics in diverse areas. Trust
dynamics in a reputation system should be stable and adaptable at the same time
to serve the purpose. Many reputation mechanisms have been proposed and tested
over time. However, the main drawback of reputation management is that users
need to share private information, such as phone numbers, reviews, and
ratings, to gain trust in a system. Recently, a novel model that tries to overcome
this issue was presented: the Dynamic Interaction-based Reputation Model
(DIBRM). This approach to trust considers only implicit information
automatically deduced from the interactions of users within an online
community. In this primary research study, the Reddit and MathOverflow online
social communities have been selected for testing DIBRM. Results show how this
novel approach to trust can mimic the behaviors of the selected reputation
systems, namely Reddit and MathOverflow, using only temporal information.
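A minimal sketch of the underlying idea, deriving a reputation value purely from interaction timestamps with no private data; the exponential decay and the half-life parameter are illustrative assumptions, not DIBRM's actual formula:

```python
def interaction_reputation(timestamps, now, half_life=7.0):
    """Reputation computed from interaction times alone (no ratings,
    reviews or phone numbers). Each interaction contributes one unit of
    reputation that decays exponentially with age; `half_life` is the
    number of days after which a contribution is worth half as much."""
    return sum(0.5 ** ((now - t) / half_life) for t in timestamps)
```

Frequent recent activity thus yields a high score that fades during inactivity, capturing the "stable yet adaptable" dynamics the abstract calls for.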
Fake News Detection Based on Subjective Opinions
Fake news proliferates on social media, leading to harmful consequences. Several types of information can be utilized to detect fake news, such as news content features and news propagation features. In this study, we focus on users' news-spreading behaviors on social media platforms and aim to detect fake news more effectively through more accurate assessment of data reliability. We introduce Subjective Opinions into reliability evaluation and propose two new methods. Experiments on two popular real-world datasets, BuzzFeed and PolitiFact, validate that our proposed Subjective Opinions based method detects fake news more accurately than all existing methods, and that another proposed probability-based method achieves state-of-the-art performance.
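A standard way to form a Subjective Opinion from evidence counts is the subjective-logic binomial opinion; this generic sketch is not the paper's method, and the uniform prior and non-informative weight W = 2 are conventional defaults rather than values from the study:

```python
def binomial_opinion(positive, negative, prior=0.5, W=2.0):
    """Subjective-logic binomial opinion from evidence counts: belief,
    disbelief and uncertainty sum to one, and the projected probability
    distributes the uncertainty mass according to the prior."""
    total = positive + negative + W
    b = positive / total      # belief supported by positive evidence
    d = negative / total      # disbelief supported by negative evidence
    u = W / total             # residual uncertainty
    return {"belief": b, "disbelief": d, "uncertainty": u,
            "probability": b + prior * u}
```

Unlike a plain frequency, the opinion keeps an explicit uncertainty component, which is what makes it useful for weighting unreliable spreaders when aggregating evidence about a news item.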