
    Beliefs and Conflicts in a Real World Multiagent System

    In a real-world multiagent system, where the agents are faced with partial, incomplete, and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents hold goals or beliefs that cannot hold simultaneously, and conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques to support conflict resolution in the analysis of the validity of the candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a higher-hierarchy agent, the Specialist agent, which uses meta-knowledge (about how the conflicting beliefs were produced by the other agents) to detect which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the involved agents to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well-established concepts and combines them in a new way to solve important problems, and we find the approach generally applicable in several domains.
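    The conflict-resolution pattern described above can be sketched roughly as follows. This is a minimal illustration only: the class names (Belief, Agent, Specialist), the reliability score used as meta-knowledge, and the tie-breaking rule are assumptions made for the example, not the paper's actual belief-revision machinery.
```python
# Illustrative sketch of meta-knowledge-driven conflict resolution between agents.
# All names and the reliability-based rule are assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str      # e.g. "beam_42_valid"
    value: bool
    source: str           # which agent derived it
    reliability: float    # meta-knowledge: how trustworthy the derivation was

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # proposition -> Belief

    def assert_belief(self, belief):
        self.beliefs[belief.proposition] = belief

    def retract(self, proposition):
        self.beliefs.pop(proposition, None)

class Specialist:
    """Higher-hierarchy agent: uses meta-knowledge to decide which of two
    conflicting beliefs should be abandoned."""
    def resolve(self, prop, agent_a, agent_b):
        a, b = agent_a.beliefs[prop], agent_b.beliefs[prop]
        if a.value == b.value:
            return  # no conflict to resolve
        loser = agent_a if a.reliability < b.reliability else agent_b
        loser.retract(prop)  # instruct the less reliably supported agent to revise

# Usage: two agents disagree on whether a candidate beam is valid.
a1, a2 = Agent("monitor"), Agent("planner")
a1.assert_belief(Belief("beam_42_valid", True, "monitor", 0.9))
a2.assert_belief(Belief("beam_42_valid", False, "planner", 0.4))
Specialist().resolve("beam_42_valid", a1, a2)
assert "beam_42_valid" not in a2.beliefs
```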

    Audiovisual integration of emotional signals from others' social interactions

    Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecologically valid situations. Stimuli consisting of the biological motion and voices of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants gave more weight to the visual cue in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.
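    The reweighting reported here is consistent with standard reliability-weighted cue combination (inverse-variance weighting). The sketch below illustrates that model under assumed numbers; it is an interpretation of the described effect, not the authors' analysis.
```python
# Minimal sketch of reliability-weighted (inverse-variance) cue combination.
# The example means/variances are assumptions for illustration only.
def combine_cues(mu_v, var_v, mu_a, var_a):
    """Fuse a visual and an auditory estimate of the same quantity
    (e.g. evidence that the interaction is 'happy' vs 'angry')."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)  # weight grows with cue reliability
    w_a = 1 - w_v
    mu = w_v * mu_v + w_a * mu_a
    var = 1 / (1 / var_v + 1 / var_a)            # fused estimate is never less reliable
    return mu, var, w_v

# Degrading the auditory cue (larger variance) shifts weight to vision.
print(combine_cues(mu_v=0.8, var_v=0.2, mu_a=0.3, var_a=0.2))  # equal reliability
print(combine_cues(mu_v=0.8, var_v=0.2, mu_a=0.3, var_a=1.0))  # noisy audio -> higher visual weight
```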

    Trust and deception in multi-agent trading systems: a logical viewpoint

    Trust and deception have been of concern to researchers since the earliest research into multi-agent trading systems (MATS). In an open trading environment, trust can be established by external mechanisms, e.g., secret keys or digital signatures, or by internal mechanisms, e.g., learning and reasoning from experience. However, in a MATS where distrust exists among the agents and deception might be used between them, recognizing and removing fraud and deception becomes a significant issue in maintaining a trustworthy trading environment. This paper proposes an architecture for a MATS and explores how fraud and deception change the trust required in a multi-agent trading environment. It also illustrates several forms of logical reasoning that involve trust and deception in a MATS. The research is of significance for deception recognition and trust sustainability in e-business and e-commerce.
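    An "internal", experience-based trust mechanism of the kind contrasted with cryptographic mechanisms above might look roughly like the sketch below. The beta-style honesty estimate and the deception threshold are illustrative assumptions, not the paper's logical formalism.
```python
# Illustrative experience-based trust model; the update rule and threshold
# are assumptions for the example, not the paper's approach.
class TrustModel:
    def __init__(self):
        self.good = {}  # per-partner count of honest transactions
        self.bad = {}   # per-partner count of detected deceptions

    def record(self, partner, claim_was_true):
        book = self.good if claim_was_true else self.bad
        book[partner] = book.get(partner, 0) + 1

    def trust(self, partner):
        g = self.good.get(partner, 0)
        b = self.bad.get(partner, 0)
        return (g + 1) / (g + b + 2)  # beta-mean estimate of honesty

    def is_suspected_deceiver(self, partner, threshold=0.3):
        return self.trust(partner) < threshold

# Usage: repeated misreported deliveries push a seller below the trust threshold.
tm = TrustModel()
for outcome in [False, False, False, False, True]:
    tm.record("seller_42", outcome)
print(tm.trust("seller_42"), tm.is_suspected_deceiver("seller_42"))
```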

    Performance analysis with network-enhanced complexities: On fading measurements, event-triggered mechanisms, and cyber attacks

    Copyright © 2014 Derui Ding et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    Nowadays, real-world systems are usually subject to various complexities such as parameter uncertainties, time-delays, and nonlinear disturbances. For networked systems, especially large-scale systems such as multiagent systems and systems over sensor networks, these complexities are inevitably enhanced in degree or intensity because of the use of communication networks. It is therefore of interest to (1) examine how such network-enhanced complexities affect control or filtering performance and (2) develop suitable approaches to controller/filter design. In this paper, we survey recent advances in performance analysis and synthesis under three kinds of fashionable network-enhanced complexities, namely fading measurements, event-triggered mechanisms, and attack behaviors of adversaries. First, these three kinds of complexities are introduced in detail according to their engineering backgrounds, dynamical characteristics, and modelling techniques. Then, developments in performance analysis and synthesis for various networked systems are systematically reviewed. Furthermore, remaining challenges are illustrated through a thorough literature review, and possible future research directions are highlighted.
    This work was supported in part by the National Natural Science Foundation of China under Grants 61134009, 61329301, 61203139, 61374127, and 61374010, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
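    As one concrete instance of the surveyed mechanisms, an event-triggered transmission rule can be sketched as below: a node sends its state only when it deviates sufficiently from the last transmitted value. The dynamics, noise level, and threshold are illustrative assumptions, not any particular scheme from the survey.
```python
# Minimal sketch of an event-triggered transmission rule over a noisy plant.
# The system matrix, noise, and trigger threshold are assumptions for illustration.
import numpy as np

def event_triggered_run(steps=50, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                          # plant state
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])               # assumed stable dynamics
    last_sent = x.copy()
    transmissions = 0
    for _ in range(steps):
        x = A @ x + 0.1 * rng.standard_normal(2)  # state update with process noise
        error = np.linalg.norm(x - last_sent)
        if error > sigma:                    # event condition: ||x - x_hat|| > sigma
            last_sent = x.copy()             # transmit and reset the error
            transmissions += 1
    return transmissions

print(event_triggered_run())                 # far fewer transmissions than 50 time steps
```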

    The Triangles of Dishonesty: Modelling the Evolution of Lies, Bullshit, and Deception in Agent Societies

    Misinformation and disinformation in agent societies can spread through the adoption of dishonest communication. Recently, this phenomenon has been exacerbated by advances in AI technologies. One way to understand dishonest communication is to model it from an agent-oriented perspective. In this paper we model dishonesty games drawing on the existing literature on lies, bullshit, and deception, three prevalent but distinct forms of dishonesty. We use an evolutionary agent-based replicator model to simulate dishonesty games and show the differences between the three types of dishonest communication under two different sets of assumptions: agents are either self-interested (payoff maximizers) or competitive (relative payoff maximizers). We show that: (i) truth-telling is not stable in the face of lying, but interrogation helps drive truth-telling in the self-interested case and not in the competitive case; (ii) in the competitive case, agents stop bullshitting and start truth-telling, but this is not stable; and (iii) deception can only dominate in the competitive case, and truth-telling is a saddle point at which agents realise that deception can provide better payoffs.
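    A replicator model of the kind mentioned here can be sketched as follows for a two-strategy "lying game". The payoff matrix, initial shares, and step size are toy assumptions and do not reproduce the paper's dishonesty games.
```python
# Minimal discrete-time replicator-dynamics sketch (truth-teller vs liar).
# Payoff matrix and parameters are illustrative assumptions only.
import numpy as np

def replicator(payoff, x0, steps=200, dt=0.1):
    """Strategies earning more than the population average grow in share."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        fitness = payoff @ x               # expected payoff of each strategy
        avg = x @ fitness                  # population-average payoff
        x = x + dt * x * (fitness - avg)   # replicator update
        x = np.clip(x, 0, None)
        x /= x.sum()
    return x

# Rows/cols: [truth-teller, liar]; liars exploit truth-tellers in this toy matrix.
payoff = np.array([[3.0, 1.0],
                   [4.0, 2.0]])
print(replicator(payoff, x0=[0.9, 0.1]))   # truth-telling is driven out by lying
```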

    Strategic Learning for Active, Adaptive, and Autonomous Cyber Defense

    The increasing instances of advanced attacks call for a new defense paradigm that is active, autonomous, and adaptive, termed the '3A' defense paradigm. This chapter introduces three defense schemes that actively interact with attackers to increase the attack cost and gather threat information: defensive deception for detection and counter-deception, feedback-driven Moving Target Defense (MTD), and adaptive honeypot engagement. Owing to cyber deception, external noise, and the absence of knowledge about the other players' behaviors and goals, these schemes face three progressively stricter levels of information restriction: parameter uncertainty, payoff uncertainty, and environmental uncertainty. To estimate the unknowns and reduce uncertainty, we adopt three different strategic learning schemes that fit the associated information restrictions. All three learning schemes share the same feedback structure of sensing, estimation, and action, so that the most rewarding policies are reinforced and converge to the optimal ones in an autonomous and adaptive fashion. This work aims to shed light on proactive defense strategies, lay a solid foundation for strategic learning under incomplete information, and quantify the tradeoff between security and cost.
    Comment: arXiv admin note: text overlap with arXiv:1906.1218
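    A minimal sketch of the shared sense, estimate, act, reinforce loop is given below, using tabular Q-learning on a toy "engage vs. eject" honeypot decision. The action set, reward model, and parameters are assumptions for illustration, not the chapter's actual schemes.
```python
# Minimal Q-learning sketch of a feedback loop that reinforces rewarding policies.
# The honeypot decision, reward model, and parameters are illustrative assumptions.
import random

ACTIONS = ["engage", "eject"]
Q = {a: 0.0 for a in ACTIONS}          # single-state value estimates
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

def reward(action):
    # Engaging an attacker yields threat intelligence but is risky (noisy reward).
    if action == "engage":
        return random.gauss(1.0, 1.5)
    return random.gauss(0.2, 0.1)      # ejecting is safe but uninformative

random.seed(0)
for _ in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < epsilon else max(Q, key=Q.get)
    # feedback: observe the reward and reinforce the estimate
    Q[a] += alpha * (reward(a) - Q[a])

print(Q)   # the more rewarding policy ("engage" on average) is reinforced
```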