6,258 research outputs found

    Consensus in the Presence of Multiple Opinion Leaders: Effect of Bounded Confidence

    Full text link
    The problem of analyzing the performance of networked agents exchanging evidence in a dynamic network has recently grown in importance. This problem has relevance in signal and data fusion network applications and in studying opinion and consensus dynamics in social networks. Due to its capability of handling a wider variety of uncertainties and ambiguities associated with evidence, we use the framework of Dempster-Shafer (DS) theory to capture the opinion of an agent. We then examine the consensus among agents in dynamic networks in which an agent can utilize either a cautious or receptive updating strategy. In particular, we examine the case of bounded confidence updating where an agent exchanges its opinion only with neighboring nodes possessing 'similar' evidence. In a fusion network, this captures the case in which nodes only update their state based on evidence consistent with the node's own evidence. In opinion dynamics, this captures the notions of Social Judgment Theory (SJT) in which agents update their opinions only with other agents possessing opinions closer to their own. Focusing on the two special DS theoretic cases where an agent state is modeled as a Dirichlet body of evidence and a probability mass function (p.m.f.), we utilize results from matrix theory, graph theory, and networks to prove the existence of consensus agent states in several time-varying network cases of interest. For example, we show the existence of a consensus in which a subset of network nodes achieves a consensus that is adopted by follower network nodes. Of particular interest is the case of multiple opinion leaders, where we show that the agents do not reach a consensus in general, but rather converge to 'opinion clusters'. Simulation results are provided to illustrate the main results. Comment: IEEE Transactions on Signal and Information Processing Over Networks, to appear.
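
    As a rough illustration of the bounded confidence updating described above, the sketch below simulates agents whose states are p.m.f.s and who average only with neighbours whose p.m.f. lies within a fixed distance of their own. It is a toy, not the paper's DS-theoretic algorithm: the L1 distance, the threshold eps, the all-to-all visibility and the receptive averaging rule are assumptions made purely for illustration; depending on the initial opinions and eps, the agents settle into one consensus or into several opinion clusters.

    # Toy bounded-confidence consensus over p.m.f. agent states (illustrative only).
    # Assumptions: L1 distance, threshold eps, receptive averaging, all-to-all visibility.
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_states, eps, n_steps = 20, 3, 0.5, 50

    # Random initial opinions: each row is a p.m.f. over n_states outcomes.
    X = rng.random((n_agents, n_states))
    X /= X.sum(axis=1, keepdims=True)

    for _ in range(n_steps):
        X_new = np.empty_like(X)
        for i in range(n_agents):
            # "Similar" neighbours: agents whose p.m.f. is within eps in L1 distance.
            close = np.abs(X - X[i]).sum(axis=1) <= eps
            # Receptive update: average over the confidence set (which contains agent i).
            X_new[i] = X[close].mean(axis=0)
        X = X_new

    # Agents whose final p.m.f.s coincide (up to rounding) form one opinion cluster.
    clusters = {tuple(np.round(x, 3)) for x in X}
    print(f"{len(clusters)} opinion cluster(s) after {n_steps} steps")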

    I know it is not real (and that matters): Media awareness vs. presence shape the VR experience

    Get PDF
    Inspired by the widely recognized idea that in VR/XR, not only presence but also encountered plausibility is relevant (Slater, 2009), we propose a general psychological parallel processing account to explain users' VR and XR experience. The model adopts a broad psychological view by building on interdisciplinary literature on the dualistic nature of perceiving and experiencing (mediated) representations. It proposes that perceptual sensations like presence are paralleled by users' belief that "this is not really happening", which we refer to as media awareness. We review the developmental underpinnings of basic media awareness, and argue that it is triggered in users' conscious exposure to VR/XR. During exposure the salience of media awareness can vary dynamically due to factors like encountered sensory and semantic (in)consistencies. Our account sketches media awareness and presence as two parallel processes that together define a situation as a media exposure situation. We also review potential joint effects on subsequent psychological and behavioral responses that characterize the user experience in VR/XR. We conclude the article with a programmatic outlook on testable assumptions and open questions for future research.

    Proceedings of The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010)

    Get PDF
    http://ceur-ws.org/Vol-627/allproceedings.pdf
    MALLOW-2010 is the third edition of a series initiated in 2007 in Durham and continued in 2009 in Turin. The objective, as initially stated, is to "provide a venue where: the cost of participation was minimum; participants were able to attend various workshops, so fostering collaboration and cross-fertilization; there was a friendly atmosphere and plenty of time for networking, by maximizing the time participants spent together".

    Proceedings of the IJCAI-09 Workshop on Nonmonotonic Reasoning, Action and Change

    Full text link
    Copyright in each article is held by the authors. Please contact the authors directly for permission to reprint or use this material in any form for any purpose. The biennial workshop on Nonmonotonic Reasoning, Action and Change (NRAC) has an active and loyal community. Since its inception in 1995, the workshop has been held seven times in conjunction with IJCAI, and has experienced growing success. We hope to build on this success again this eighth year with an interesting and fruitful day of discussion. The areas of reasoning about action, non-monotonic reasoning and belief revision are among the most active research areas in Knowledge Representation, with rich inter-connections and practical applications including robotics, agent systems, commonsense reasoning and the semantic web. This workshop provides a unique opportunity for researchers from all three fields to be brought together at a single forum with the prime objectives of communicating important recent advances in each field and the exchange of ideas. As these fundamental areas mature it is vital that researchers maintain a dialog through which they can cooperatively explore common links. The goal of this workshop is to work against the natural tendency of such rapidly advancing fields to drift apart into isolated islands of specialization. This year, we have accepted ten papers authored by a diverse international community. Each paper has been subject to careful peer review on the basis of innovation, significance and relevance to NRAC. The high quality selection of work could not have been achieved without the invaluable help of the international Program Committee. A highlight of the workshop will be our invited speaker Professor Hector Geffner from ICREA and UPF in Barcelona, Spain, discussing representation and inference in modern planning. Hector Geffner is a world leader in planning, reasoning, and knowledge representation; in addition to his many important publications, he is a Fellow of the AAAI, an associate editor of the Journal of Artificial Intelligence Research, and won an ACM Distinguished Dissertation Award in 1990.

    Uncertainty in Natural Language Generation: From Theory to Applications

    Full text link
    Recent advances in powerful Language Models have allowed Natural Language Generation (NLG) to emerge as an important technology that can not only perform traditional tasks like summarisation or translation, but also serve as a natural language interface to a variety of applications. As such, it is crucial that NLG systems are trustworthy and reliable, for example by indicating when they are likely to be wrong, and supporting multiple views, backgrounds and writing styles -- reflecting diverse human sub-populations. In this paper, we argue that a principled treatment of uncertainty can assist in creating systems and evaluation protocols better aligned with these goals. We first present the fundamental theory, frameworks and vocabulary required to represent uncertainty. We then characterise the main sources of uncertainty in NLG from a linguistic perspective, and propose a two-dimensional taxonomy that is more informative and faithful than the popular aleatoric/epistemic dichotomy. Finally, we move from theory to applications and highlight exciting research directions that exploit uncertainty to power decoding, controllable generation, self-assessment, selective answering, active learning and more.
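
    To make one of the listed applications concrete, the sketch below shows a minimal form of selective answering: the system abstains when the average per-token entropy of its output distributions exceeds a threshold. Everything in it (the entropy measure, the threshold value, and the toy token_probs input) is an assumption chosen for illustration; it is not a method proposed in the paper.

    # Toy selective answering based on mean per-token entropy (illustrative only).
    # The threshold, the entropy measure and the token_probs format are assumptions.
    import math

    def mean_token_entropy(token_probs):
        """Average entropy (in nats) of the per-token output distributions."""
        entropies = []
        for dist in token_probs:  # dist maps each candidate token to its probability
            entropies.append(-sum(p * math.log(p) for p in dist.values() if p > 0))
        return sum(entropies) / len(entropies)

    def answer_or_abstain(answer, token_probs, threshold=1.0):
        """Return the generated answer only if the model looks confident enough."""
        if mean_token_entropy(token_probs) > threshold:
            return "I am not confident enough to answer this."
        return answer

    # Hypothetical usage; in practice token_probs would come from the model's logits.
    token_probs = [{"Paris": 0.9, "Lyon": 0.1}, {".": 0.99, "!": 0.01}]
    print(answer_or_abstain("Paris.", token_probs))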

    On the adaptive advantage of always being right (even when one is not)

    Get PDF
    We propose another positive illusion – overconfidence in the generalisability of one's theory – that fits with McKay & Dennett's (M&D's) criteria for adaptive misbeliefs. This illusion is pervasive in adult reasoning but we focus on its prevalence in children's developing theories. It is a strongly held conviction arising from normal functioning of the doxastic system that confers adaptive advantage on the individual.

    Abductive Design of BDI Agent-based Digital Twins of Organizations

    Get PDF
    For a Digital Twin - a precise, virtual representation of a physical counterpart - of a human-like system to be faithful and complete, it must appeal to a notion of anthropomorphism (i.e., attributing human behaviour to non-human entities) to imitate (1) the externally visible behaviour and (2) the internal workings of that system. Although the Belief-Desire-Intention (BDI) paradigm was not developed for this purpose, it has been used successfully in human modeling applications. In this sense, we introduce in this thesis the notion of abductive design of BDI agent-based Digital Twins of organizations, which builds on two powerful reasoning disciplines: reverse engineering (to recreate the visible behaviour of the target system) and goal-driven eXplainable Artificial Intelligence (XAI) (for viewing the behaviour of the target system through the lens of BDI agents). Precisely speaking, the overall problem we are trying to address in this thesis is to “Find a BDI agent program that best explains (in the sense of formal abduction) the behaviour of a target system based on its past experiences”. To do so, we propose three goal-driven XAI techniques: (1) abductive design of BDI agents, (2) leveraging imperfect explanations and (3) mining belief-based explanations. The resulting approach suggests that using goal-driven XAI to generate Digital Twins of organizations in the form of BDI agents can be effective, even in a setting with limited information about the target system's behaviour.
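
    As a loose illustration of the abductive idea above (selecting the agent program that best explains observed behaviour), the sketch below scores candidate "plan libraries" against recorded traces and returns the best-scoring one. The trace format, the candidates and the scoring rule are illustrative assumptions and merely stand in for the thesis's formal abduction over BDI agent programs.

    # Toy abductive selection of an agent program (illustrative only).
    # The trace format, the candidate "plan libraries" and the score are assumptions.
    from typing import Callable, Dict, List, Tuple

    Trace = List[str]  # a trace is a sequence of observed actions

    def score(program: Callable[[str], str], traces: List[Tuple[str, Trace]]) -> int:
        """Count observed actions that the candidate program reproduces."""
        hits = 0
        for belief, actions in traces:
            for observed in actions:
                if program(belief) == observed:
                    hits += 1
                belief = observed  # crude belief update: the last action becomes context
        return hits

    def abduce(candidates: Dict[str, Callable[[str], str]],
               traces: List[Tuple[str, Trace]]) -> str:
        """Return the name of the candidate that best explains the observed traces."""
        return max(candidates, key=lambda name: score(candidates[name], traces))

    # Hypothetical candidates: trivial policies mapping a current belief to an action.
    candidates = {
        "always_wait": lambda belief: "wait",
        "react_to_alarm": lambda belief: "evacuate" if belief == "alarm" else "wait",
    }
    traces = [("alarm", ["evacuate", "wait"]), ("quiet", ["wait"])]
    print(abduce(candidates, traces))  # -> react_to_alarm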