Consensus analysis of multiagent networks via aggregated and pinning approaches
This is the post-print version of the article. Copyright @ 2011 IEEE.
In this paper, the consensus problem of multiagent nonlinear directed networks (MNDNs) is discussed for the case in which an MNDN does not have a spanning tree, so the consensus of all nodes cannot be reached through the network topology alone. Using Lie algebra theory, a linear node-and-node pinning method is proposed to achieve consensus of an MNDN for all nonlinear functions satisfying a given set of conditions. Based on some optimal algorithms, large-size networks are aggregated into small-size ones. Then, by applying principal minor theory to the small-size networks, a sufficient condition is given that reduces the number of controlled nodes. Finally, simulation results are given to illustrate the effectiveness of the developed criteria.
This work was jointly supported by CityU under a research grant (7002355) and GRF funding (CityU 101109).
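As background for the pinning idea in the abstract above, here is a minimal numerical sketch (not the paper's Lie-algebra-based method, and with illustrative gains and topology chosen here): a linear pinning controller drives a three-node directed network that lacks a spanning tree to a common reference value by pinning the one node that receives no information from the others.

```python
import numpy as np

# Adjacency of a small directed graph with NO spanning tree:
# node 1 receives no information from anyone.
A = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
k = np.array([0.0, 2.0, 0.0])         # pin only the isolated node 1
ref = 1.0                             # desired consensus value
x = np.array([0.0, 3.0, -2.0])        # arbitrary initial states
dt = 0.01
for _ in range(5000):                 # forward-Euler integration
    x = x + dt * (-L @ x - k * (x - ref))
print(np.round(x, 3))                 # all states near ref
```

Without the pinning term, node 1 would never move and nodes 0 and 2 would simply track it; pinning node 1 to the reference lets consensus propagate through the existing directed edges.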
Context-dependent Trust Decisions with Subjective Logic
A decision procedure implemented over a computational trust mechanism aims to
allow for decisions to be made regarding whether some entity or information
should be trusted. As recognised in the literature, trust is contextual, and we
describe how such a context often translates into a confidence level which
should be used to modify an underlying trust value. J{\o}sang's Subjective
Logic has long been used in the trust domain, and we show that its operators
are insufficient to address this problem. We therefore provide a
decision-making approach about trust which also considers the notion of
confidence (based on context) through the introduction of a new operator. In
particular, we introduce general requirements that must be respected when
combining trustworthiness and confidence degree, and demonstrate the soundness
of our new operator with respect to these properties.
Comment: 19 pages, 4 figures, technical report of the University of Aberdeen (preprint version).
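For readers unfamiliar with Subjective Logic, the sketch below implements the standard binomial opinion and Jøsang's uncertainty-favouring trust discounting operator, which the abstract argues is insufficient on its own for context-dependent confidence. This is background only; the paper's new operator is not reproduced here, and the numeric opinions are made-up examples.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial Subjective Logic opinion: belief b, disbelief d,
    uncertainty u (with b + d + u = 1) and base rate a."""
    b: float
    d: float
    u: float
    a: float

    def expectation(self) -> float:
        # Projected probability: E = b + a * u
        return self.b + self.a * self.u

def discount(trust: Opinion, op: Opinion) -> Opinion:
    """Uncertainty-favouring discounting: A's trust in B scales
    B's opinion about a proposition x."""
    return Opinion(
        b=trust.b * op.b,
        d=trust.b * op.d,
        u=trust.d + trust.u + trust.b * op.u,
        a=op.a,
    )

# A's trust in B, and B's opinion about statement x (illustrative values).
a_trusts_b = Opinion(0.8, 0.1, 0.1, 0.5)
b_about_x = Opinion(0.6, 0.2, 0.2, 0.5)
derived = discount(a_trusts_b, b_about_x)
print(round(derived.expectation(), 2))  # 0.66
```

Note that discounting preserves the constraint b + d + u = 1, and that low trust pushes mass into uncertainty rather than into disbelief; the abstract's point is that a separate confidence degree, derived from context, needs its own operator rather than being folded into this one.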
The Emergence of Norms via Contextual Agreements in Open Societies
This paper explores the emergence of norms in agent societies when agents
simultaneously play multiple, even incompatible, roles in their social
contexts and have limited interaction ranges. Specifically, this article
proposes two reinforcement learning methods for agents to compute agreements on
strategies for using common resources to perform joint tasks. The computation
of norms by agents playing multiple roles in their social contexts
has not been studied before. To make the problem even more realistic for open
societies, we do not assume that agents share knowledge of their common
resources, so they have to compute semantic agreements towards performing
their joint actions. The paper reports on an empirical study of whether and
how efficiently societies of agents converge to norms, exploring the proposed
social learning processes w.r.t. different society sizes and the ways agents
are connected. The results reported are very encouraging regarding the speed
of the learning process as well as the convergence rate, even in quite complex
settings.
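To illustrate the general idea of norm emergence via reinforcement learning (not the paper's two methods, which handle multiple roles and semantic agreements), the sketch below runs independent Q-learners in a two-action coordination game: randomly paired agents are rewarded only when their actions match, and the population converges on a shared convention.

```python
import random

random.seed(0)
N, ACTIONS = 20, 2                    # 20 agents, 2 candidate conventions
Q = [[0.0, 0.0] for _ in range(N)]    # per-agent action values
alpha, eps = 0.1, 0.1                 # learning rate, exploration rate

def act(i: int) -> int:
    """Epsilon-greedy action selection for agent i."""
    if random.random() < eps:
        return random.randrange(ACTIONS)
    return max(range(ACTIONS), key=lambda a: Q[i][a])

for _ in range(20000):
    i, j = random.sample(range(N), 2)  # random pairwise interaction
    ai, aj = act(i), act(j)
    r = 1.0 if ai == aj else 0.0       # reward only for coordination
    Q[i][ai] += alpha * (r - Q[i][ai])
    Q[j][aj] += alpha * (r - Q[j][aj])

# The emergent norm: each agent's greedy action after learning.
norm = [max(range(ACTIONS), key=lambda a: Q[i][a]) for i in range(N)]
print(norm)
```

Early random matches tip the population toward one action, and positive feedback then locks the whole society into that convention; the paper studies how such convergence behaves under the much harder conditions of multiple roles, limited interaction ranges, and unshared resource vocabularies.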