
    Strategic Abilities of Forgetful Agents in Stochastic Environments

    In this paper, we investigate the probabilistic variants of the strategy logics ATL and ATL* under imperfect information. Specifically, we present novel decidability and complexity results for the case where model transitions are stochastic and agents play uniform strategies. That is, the semantics of the logics is based on multi-agent, stochastic transition systems with imperfect information, which combine two sources of uncertainty: the partial observability agents have of the environment, and the likelihood of transitions occurring from a system state. Since the model checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. The resulting setting captures situations in which agents have qualitative uncertainty about their local state and quantitative uncertainty about the occurrence of future events. We illustrate the usefulness of this setting with meaningful examples.
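
    To make this kind of model concrete, here is a minimal sketch of a stochastic transition system with imperfect information and a memoryless uniform strategy, assuming a toy formalization rather than the paper's own definitions: the agent's choice depends only on its current observation, so indistinguishable states receive the same action, while successor states are drawn from a probability distribution. The class name StochasticMAS, the states, and the transition table are all invented for illustration.

        import random

        # Toy stochastic transition system with imperfect information (illustrative only).
        # trans maps (state, action) to a distribution over successors; obs collapses
        # states that the agent cannot tell apart.
        class StochasticMAS:
            def __init__(self, trans, obs):
                self.trans = trans      # dict: (state, action) -> {next_state: probability}
                self.obs = obs          # dict: state -> observation

            def step(self, state, action):
                successors, probs = zip(*self.trans[(state, action)].items())
                return random.choices(successors, weights=probs)[0]

        # A memoryless (positional) uniform strategy is just a map from observations to
        # actions: states with the same observation necessarily get the same action.
        def run(system, strategy, start, horizon):
            path, state = [start], start
            for _ in range(horizon):
                state = system.step(state, strategy[system.obs[state]])
                path.append(state)
            return path

        if __name__ == "__main__":
            mas = StochasticMAS(
                trans={("s0", "a"): {"s1": 0.7, "s2": 0.3},
                       ("s1", "a"): {"s1": 1.0},
                       ("s2", "a"): {"s0": 0.5, "s2": 0.5}},
                obs={"s0": "o0", "s1": "o1", "s2": "o0"},   # s0 and s2 are indistinguishable
            )
            sigma = {"o0": "a", "o1": "a"}                  # positional uniform strategy
            print(run(mas, sigma, "s0", 5))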

    Strategic dialogue management via deep reinforcement learning

    Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter with tabular representations or linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan, where players can offer resources in exchange for others and can also reply to offers made by other players. Our experimental results show that the DRL-based learnt policies significantly outperformed several baselines, including random, rule-based, and supervised behaviours. The DRL-based policy achieves a 53% win rate against three automated players ('bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27% against the same three bots. This result supports the claim that DRL is a promising framework for training dialogue systems and strategic agents with negotiation abilities.
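
    As a rough illustration of the approach described above, the sketch below shows how a DQN-style policy can map a high-dimensional encoding of the game state to Q-values over negotiation actions such as making or replying to trade offers. The network size, state encoding, and action names are invented and are not taken from the paper; numpy stands in for a deep learning library, and training (experience replay, target networks) is omitted.

        import numpy as np

        # Illustrative DQN-style policy: a small feed-forward network maps a high-dimensional
        # game-state vector to one Q-value per negotiation action. Action names, state size,
        # and layer widths are hypothetical.
        ACTIONS = ["offer_wood_for_brick", "offer_sheep_for_ore", "accept_offer", "reject_offer"]
        STATE_DIM = 160   # e.g. board layout, own resources, visible opponent information

        rng = np.random.default_rng(0)
        W1 = rng.normal(0.0, 0.1, (STATE_DIM, 64))
        W2 = rng.normal(0.0, 0.1, (64, len(ACTIONS)))

        def q_values(state):
            """Forward pass: encoded game state -> Q-value for each negotiation action."""
            hidden = np.maximum(state @ W1, 0.0)    # ReLU hidden layer
            return hidden @ W2

        def act(state, epsilon=0.1):
            """Epsilon-greedy selection, as commonly used while training DQN agents."""
            if rng.random() < epsilon:
                return int(rng.integers(len(ACTIONS)))
            return int(np.argmax(q_values(state)))

        if __name__ == "__main__":
            state = rng.normal(size=STATE_DIM)      # stand-in for an encoded game state
            print(ACTIONS[act(state)])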

    Computationally Feasible Strategies

    Real-life agents seldom have unlimited reasoning power. In this paper, we propose and study a new formal notion of computationally bounded strategic ability in multi-agent systems. The notion characterizes the ability of a set of agents to synthesize an executable strategy, in the form of a Turing machine within a given complexity class, that ensures the satisfaction of a temporal objective in a parameterized game arena. We show that the new concept induces a proper hierarchy of strategic abilities; in particular, polynomial-time abilities are strictly weaker than exponential-time ones. We also propose an "adaptive" variant of computational ability which allows for different strategies for each parameter value, and show that the two notions do not coincide. Finally, we define and study the model-checking problem for computational strategies. We show that the problem is undecidable even for severely restricted inputs, and present our first steps towards decidable fragments.
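
    The notion of an executable strategy within a given complexity class can be made concrete with a toy sketch (an assumption-laden rendering, not the paper's Turing-machine definition): the strategy is a program that must produce its move within a step budget that depends on the arena's size parameter n, for instance a polynomial bound, and a strategy that exceeds its budget does not witness a polynomial-time ability.

        # Toy rendering of a computationally bounded strategy: the strategy is an executable
        # program whose number of basic steps is metered against a budget in the arena's
        # size parameter n. All names and the budget function are illustrative.
        class BudgetExceeded(Exception):
            pass

        class MeteredStrategy:
            def __init__(self, program, budget):
                self.program = program      # callable: (position, tick) -> action
                self.budget = budget        # callable: n -> allowed number of basic steps

            def move(self, position, n):
                allowed, ticks = self.budget(n), [0]
                def tick():                 # the program calls tick() once per basic step
                    ticks[0] += 1
                    if ticks[0] > allowed:
                        raise BudgetExceeded(f"more than {allowed} steps for n={n}")
                return self.program(position, tick)

        # A linear scan of the position easily fits a polynomial budget such as n^2 + 10.
        def linear_scan(position, tick):
            best = 0
            for i, value in enumerate(position):
                tick()
                if value > position[best]:
                    best = i
            return best

        if __name__ == "__main__":
            n = 8
            poly_strategy = MeteredStrategy(linear_scan, budget=lambda n: n ** 2 + 10)
            print(poly_strategy.move(list(range(n)), n))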

    Strategic Abilities of Asynchronous Agents: Semantic Side Effects and How to Tame Them

    Recently, we have proposed a framework for verification of agents' abilities in asynchronous multi-agent systems, together with an algorithm for automated reduction of models. The semantics was built on the modeling tradition of distributed systems. As we show here, this can sometimes lead to counterintuitive interpretations of formulas when reasoning about the outcome of strategies. First, the semantics disregards finite paths, and thus yields unnatural evaluation of strategies with deadlocks. Second, the semantic representations do not allow us to capture the asymmetry between proactive agents and the recipients of their choices. We propose how to avoid these problems by a suitable extension of the representations and a change of the execution semantics for asynchronous MAS. We also prove that the model reduction scheme still works in the modified framework.
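
    The deadlock issue mentioned above can be seen in a toy interleaving execution (a generic sketch, not the paper's representations or its proposed fix): agents enable local and shared actions, a shared action fires only when every agent owning it enables it, and execution may reach a state where nothing is enabled at all. Under a semantics that evaluates only infinite paths, such finite, deadlocked executions are simply discarded.

        # Toy interleaving execution for an asynchronous MAS (illustrative only). Each agent
        # enables some actions in its local state; a shared action fires only if every agent
        # that has it in its alphabet enables it. If nothing is enabled, the run deadlocks
        # and the maximal path stays finite.
        def enabled_joint(local_states, agents):
            candidates = set()
            for name, agent in agents.items():
                candidates |= agent["enabled"](local_states[name])
            return {a for a in candidates
                    if all(a in ag["enabled"](local_states[n]) or a not in ag["alphabet"]
                           for n, ag in agents.items())}

        def execute(agents, local_states, max_steps=10):
            path = [dict(local_states)]
            for _ in range(max_steps):
                acts = enabled_joint(local_states, agents)
                if not acts:                        # deadlock: the maximal path is finite
                    return path, "deadlock"
                a = sorted(acts)[0]                 # deterministic scheduler for the demo
                for name, agent in agents.items():
                    if a in agent["alphabet"]:
                        local_states[name] = agent["step"](local_states[name], a)
                path.append(dict(local_states))
            return path, "running"

        if __name__ == "__main__":
            agents = {
                "A": {"alphabet": {"sync", "a"}, "step": lambda s, act: s + 1,
                      "enabled": lambda s: {"a"} if s == 0 else {"sync"}},
                "B": {"alphabet": {"sync", "b"}, "step": lambda s, act: s + 1,
                      "enabled": lambda s: {"b"} if s == 0 else set()},
            }
            # B stops enabling anything after one step, so the shared "sync" never fires.
            print(execute(agents, {"A": 0, "B": 0}))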

    Stability and Strategic Time-Dependent Behaviour in Multiagent Systems

    Temporal reasoning and strategic behaviour are important abilities of multiagent systems. We introduce a game-theoretic framework suitable for modelling selfish and rational agents which can store and reason about the evolution of an environment and act according to their interests. Our aim is to identify stable interactions: those in which no agent benefits from changing its behaviour to another. To this end, we deploy the game-theoretic concepts of Nash equilibrium and strong Nash equilibrium. We show that not all agent interactions can be stable. We also investigate the computational complexity of verifying stable agent interactions and of checking their existence. This paves the way for developing agents which can take appropriate decisions in competitive and strategic situations.
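
    As a concrete reading of what a stable interaction means here, the sketch below checks whether a pure strategy profile of a small normal-form game is a Nash equilibrium, i.e. no single agent can improve its own payoff by a unilateral deviation; a strong Nash equilibrium additionally requires that no coalition can deviate jointly so that all of its members gain. The game and its payoffs are invented, and the paper's setting additionally involves temporal reasoning about the environment's evolution.

        from itertools import product

        # Illustrative Nash-equilibrium check for a finite normal-form game. payoff(profile)
        # returns one payoff per agent; the coordination game below is made up.
        def is_nash(profile, strategy_sets, payoff):
            for i, strategies in enumerate(strategy_sets):
                current = payoff(profile)[i]
                for s in strategies:
                    deviation = profile[:i] + (s,) + profile[i + 1:]
                    if payoff(deviation)[i] > current:   # profitable unilateral deviation
                        return False
            return True

        if __name__ == "__main__":
            # Two agents coordinate: matching actions pay 1 to both, mismatching pay 0.
            strategy_sets = [("left", "right"), ("left", "right")]
            payoff = lambda p: (1, 1) if p[0] == p[1] else (0, 0)
            for profile in product(*strategy_sets):
                print(profile, "stable" if is_nash(profile, strategy_sets, payoff) else "unstable")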

    Exploring Motives and Strategies in the Production of Knowledge in the University Context by the Example of Academic Career Trajectories

    Current research has shown that the combination of implicit and explicit knowledge among various actors is particularly crucial to the production of knowledge, and that the characteristics of social relationships and the resulting networks affect how proficiency is acquired, transferred, absorbed, and applied. Although investigations have suggested that the actors involved in knowledge production are active and strategic agents who differ considerably in their abilities to incorporate and generate knowledge, they are mostly referred to as nodes or black boxes. In this regard, relationship research has demonstrated that actors differ in terms of their motivations and abilities to share information and knowledge. Such motives are often strategic.

    Commitment games with conditional information revelation

    The conditional commitment abilities of mutually transparent computer agents have been studied in previous work on commitment games and program equilibrium. This literature has shown how these abilities can help resolve Prisoner's Dilemmas and other failures of cooperation in complete information settings. But inefficiencies due to private information have been neglected thus far in this literature, despite the fact that these problems are pervasive and might also be addressed by greater mutual transparency. In this work, we introduce a framework for commitment games with a new kind of conditional commitment device, which agents can use to conditionally reveal private information. We prove a folk theorem for this setting that provides sufficient conditions for ex post efficiency, and thus represents a model of ideal cooperation between agents without a third-party mediator. Connecting our framework with the literature on strategic information revelation, we explore cases where conditional revelation can be used to achieve full cooperation while unconditional revelation cannot. Finally, extending previous work on program equilibrium, we develop an implementation of conditional information revelation. We show that this implementation forms program ϵ-Bayesian Nash equilibria corresponding to the Bayesian Nash equilibria of these commitment games.
    Comment: Accepted at the Games, Agents, and Incentives Workshop at AAMAS 202
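
    For intuition about conditional commitment and program equilibrium, here is a classic, textbook-style sketch rather than the construction from the paper: each player submits a program that can read the other player's source code and cooperates, revealing its private information, only if the opponent's program is syntactically identical. When both players submit the same program, mutual cooperation and revelation become self-enforcing. The function names and "private info" strings are invented.

        import inspect

        # Classic program-equilibrium-style example (illustrative, not the paper's device):
        # each submitted program receives the opponent's source code and decides whether to
        # cooperate and reveal its private information.
        def conditional_commit(opponent_source, my_source, private_info):
            # Cooperate and reveal only if the opponent runs the very same program.
            if opponent_source == my_source:
                return ("cooperate", private_info)
            return ("defect", None)

        def play(program_a, program_b, info_a, info_b):
            src_a, src_b = inspect.getsource(program_a), inspect.getsource(program_b)
            move_a = program_a(src_b, src_a, info_a)
            move_b = program_b(src_a, src_b, info_b)
            return move_a, move_b

        if __name__ == "__main__":
            # Both players submit the same conditional-commitment program, so each sees an
            # identical opponent and the outcome is mutual cooperation with full revelation.
            print(play(conditional_commit, conditional_commit, "type: high", "type: low"))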