30,419 research outputs found

    A model of multi-agent consensus for vague and uncertain beliefs

    Get PDF
    Consensus formation is investigated for multi-agent systems in which agents’ beliefs are both vague and uncertain. Vagueness is represented by a third truth state meaning borderline. This is combined with a probabilistic model of uncertainty. A belief combination operator is then proposed, which exploits borderline truth values to enable agents with conflicting beliefs to reach a compromise. A number of simulation experiments are carried out, in which agents apply this operator in pairwise interactions, under the bounded confidence restriction that the two agents’ beliefs must be sufficiently consistent with each other before agreement can be reached. As well as studying the consensus operator in isolation, we also investigate scenarios in which agents are influenced either directly or indirectly by the state of the world. For the former, we conduct simulations that combine consensus formation with belief updating based on evidence. For the latter, we investigate the effect of assuming that the closer an agent’s beliefs are to the truth, the more visible they are in the consensus building process. In all cases, applying the consensus operator results in the population converging to a single shared belief that is both crisp and certain. Furthermore, simulations that combine consensus formation with evidential updating converge more quickly to a shared opinion, which is closer to the actual state of the world than in those where beliefs change only as a result of directly receiving new evidence. Finally, if agent interactions are guided by belief quality, measured as similarity to the true state of the world, then applying the consensus operator alone results in the population converging to a high-quality shared belief.
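    The abstract does not give the operator's definition, so the following is only a minimal sketch of a pairwise rule consistent with the description: agreement is preserved, a borderline value defers to the other agent, and direct conflict compromises on borderline. The truth-value encoding, the distance measure, and the bounded-confidence threshold are all assumptions for illustration.

```python
# Sketch of a three-valued pairwise consensus operator.
# Assumed encoding of truth states: 0 = false, 0.5 = borderline, 1 = true.

def combine(t1, t2):
    """Combine two truth values; borderline absorbs conflict."""
    if t1 == t2:
        return t1          # agreement is kept
    if t1 == 0.5:
        return t2          # borderline defers to a definite value
    if t2 == 0.5:
        return t1
    return 0.5             # 0 vs 1: compromise on borderline

def consistent(b1, b2, threshold=0.5):
    """Assumed bounded-confidence check: mean absolute distance
    between the agents' belief vectors must not exceed threshold."""
    distance = sum(abs(x - y) for x, y in zip(b1, b2)) / len(b1)
    return distance <= threshold

def consensus(b1, b2, threshold=0.5):
    """Pairwise interaction: merge element-wise if beliefs are
    sufficiently consistent, otherwise leave both unchanged."""
    if not consistent(b1, b2, threshold):
        return b1, b2
    merged = [combine(x, y) for x, y in zip(b1, b2)]
    return merged, merged

# Example: one conflicting and one borderline proposition.
a, b = consensus([1, 0.5, 0], [1, 1, 1])
```

    After the interaction both agents hold the merged belief; repeated pairwise applications of such a rule are what drive the population-level convergence described above.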

    Evidence Propagation and Consensus Formation in Noisy Environments

    Full text link
    We study the effectiveness of consensus formation in multi-agent systems where there is both belief updating based on direct evidence and also belief combination between agents. In particular, we consider the scenario in which a population of agents collaborate on the best-of-n problem, where the aim is to reach a consensus about which is the best (alternatively, true) state from amongst a set of states, each with a different quality value (or level of evidence). Agents' beliefs are represented within Dempster-Shafer theory by mass functions, and we investigate the macro-level properties of four well-known belief combination operators for this multi-agent consensus formation problem: Dempster's rule, Yager's rule, Dubois & Prade's operator and the averaging operator. The convergence properties of the operators are considered and simulation experiments are conducted for different evidence rates and noise levels. Results show that combining updating on direct evidence with belief combination between agents produces better consensus on the best state than evidence updating alone. We also find that in this framework the operators are robust to noise. Broadly, Yager's rule is shown to be the best-performing operator across the criteria considered, i.e. convergence to the best state, robustness to noise, and scalability.
    Comment: 13th International Conference on Scalable Uncertainty Management
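    Of the four operators, Dempster's rule is the classical one and can be sketched compactly. Mass functions are represented here as dicts mapping frozensets of states to masses; the two-state frame and the specific mass values are illustrative, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: conjunctive pooling of two mass
    functions, renormalised by the conflict mass K assigned to the
    empty intersection."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on contradictory focal sets
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Illustrative frame with two states; ALL carries each agent's
# residual ignorance.
S1, S2 = frozenset({'s1'}), frozenset({'s2'})
ALL = S1 | S2
m1 = {S1: 0.6, ALL: 0.4}
m2 = {S2: 0.3, ALL: 0.7}
m = dempster_combine(m1, m2)
```

    Yager's rule differs only in where the conflict mass K goes: instead of renormalising, it transfers K to the whole frame ALL, which is one reason it behaves more cautiously under noisy evidence.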

    The Benefits of Interaction Constraints in Distributed Autonomous Systems

    Full text link
    The design of distributed autonomous systems often omits consideration of the underlying network dynamics. Recent works in multi-agent systems and swarm robotics alike have highlighted the impact that the interactions between agents have on the collective behaviours exhibited by the system. In this paper, we seek to highlight the role that the underlying interaction network plays in determining the performance of the collective behaviour of a system, comparing its impact with that of the physical network. We contextualise this by defining a collective learning problem in which agents must reach a consensus about their environment in the presence of noisy information. We show that the physical connectivity of the agents plays a less important role than an interaction network of limited connectivity imposed on the system to constrain agent communication. Constraining agent interactions in this way drastically improves the performance of the system in a collective learning context. Additionally, we provide further evidence for the idea that 'less is more' when it comes to propagating information in distributed autonomous systems for the purpose of collective learning.
    Comment: To appear in the Proceedings of the Distributed Autonomous Robotic Systems 16th International Symposium (2022)

    The Impact of Network Connectivity on Collective Learning

    Full text link
    In decentralised autonomous systems it is the interactions between individual agents which govern the collective behaviours of the system. These local-level interactions are themselves often governed by an underlying network structure. These networks are particularly important for collective learning and decision-making, whereby agents must gather evidence from their environment and propagate this information to other agents in the system. Models for collective behaviours often rely upon the assumption of total connectivity between agents to provide effective information sharing within the system, but this assumption may be ill-advised. In this paper we investigate the impact that the underlying network has on performance in the context of collective learning. Through simulations we study small-world networks with varying levels of connectivity and randomness and conclude that totally-connected networks result in higher average error when compared to networks with less connectivity. Furthermore, we show that networks of high regularity outperform networks with increasing levels of random connectivity.
    Comment: 13 pages, 5 figures. To appear at the 15th International Symposium on Distributed Autonomous Robotic Systems 2021. Presented at the joint DARS-SWARM 2021 symposium held (virtually) in Kyoto, Japan
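    Small-world networks of the kind studied here are conventionally generated with the Watts-Strogatz procedure: a regular ring lattice whose edges are rewired with probability p, sweeping from regularity (p = 0) to randomness (p = 1). The sketch below is a plain-Python version of that standard construction; the parameter values are illustrative and not taken from the paper.

```python
import random

def watts_strogatz(n, k, p, seed=None):
    """Ring lattice of n nodes, each joined to its k nearest
    neighbours (k even), with each edge rewired with probability p.
    Returns an adjacency dict: node -> set of neighbours."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))   # regular ring lattice
    adjacency = {i: set() for i in range(n)}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    for u, v in sorted(edges):
        if rng.random() < p:
            # Rewire u-v to u-w for a random non-neighbour w.
            choices = [w for w in range(n)
                       if w != u and w not in adjacency[u]]
            if choices:
                w = rng.choice(choices)
                adjacency[u].discard(v)
                adjacency[v].discard(u)
                adjacency[u].add(w)
                adjacency[w].add(u)
    return adjacency

# p = 0 keeps the regular ring; larger p introduces random shortcuts.
g = watts_strogatz(n=20, k=4, p=0.1, seed=42)
```

    Sweeping p while holding n and k fixed varies randomness at constant edge count, which is what allows regularity and connectivity to be compared fairly in experiments like those described above.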

    A logic for reasoning about ambiguity

    Full text link
    Standard models of multi-agent modal logic do not capture the fact that information is often ambiguous, and may be interpreted in different ways by different agents. We propose a framework that can model this, and consider different semantics that capture different assumptions about the agents' beliefs regarding whether or not there is ambiguity. We examine the expressive power of logics of ambiguity compared to logics that cannot model ambiguity, with respect to the different semantics that we propose.
    Comment: Some of the material in this paper appeared in preliminary form in "Ambiguous language and differences of belief" (see arXiv:1203.0699)

    Probability, fuzziness and borderline cases

    Get PDF

    (WP 2020-01) The Sea Battle Tomorrow: The Identity of Reflexive Economic Agents

    Get PDF
    This paper develops a conception of reflexive economic agents as an alternative to the standard utility conception, and explains individual identity in terms of how agents adjust to change in a self-organizing way, an idea developed from Herbert Simon. The paper distinguishes closed equilibrium and open process conceptions of the economy, and argues that the former fails to explain time in a before-and-after sense in connection with Aristotle's sea battle problem. A causal model is developed to represent the process conception, and a structure-agency understanding of the adjustment behavior of reflexive economic agents is illustrated using Merton's self-fulfilling prophecy analysis. Simon's account of how adjustment behavior has stopping points is then shown to underlie how agents' identities are disrupted and then self-organized, and the identity analysis this involves is applied to the different identity models of Merton, Ross, Arthur, and Kirman. Finally, the self-organization idea is linked to the recent 'preference purification' debate in bounded rationality theory regarding the 'inner rational agent trapped in an outer psychological shell,' and it is argued that the behavior of self-organizing agents involves them taking positions toward their own individual identities.

    Nursing opinion leadership: a preliminary model derived from philosophic theories of rational belief

    Full text link
    Opinion leaders are informal leaders who have the ability to influence others' decisions about adopting new products, practices or ideas. In the healthcare setting, the importance of translating new research evidence into practice has led to interest in understanding how opinion leaders could be used to speed this process. Despite continued interest, gaps in understanding opinion leadership remain. Agent‐based models are computer models that have proven to be useful for representing dynamic and contextual phenomena such as opinion leadership. The purpose of this paper is to describe the work conducted in preparation for the development of an agent‐based model of nursing opinion leadership. The aim of this phase of the model development project was to clarify basic assumptions about opinions, the individual attributes of opinion leaders and characteristics of the context in which they are effective. The process used to clarify these assumptions was the construction of a preliminary nursing opinion leader model, derived from philosophical theories about belief formation.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/100132/1/nup12008.pd