25 research outputs found

    On the Logic of Lying

    We model lying as a communicative act changing the beliefs of the agents in a multi-agent system. With Augustine, we see lying as an utterance believed to be false by the speaker and uttered with the intent to deceive the addressee. The deceit is successful if the lie is believed by the addressee after the utterance. This is our perspective. Also, as is common in dynamic epistemic logics, we model the agents addressed by the lie, but we do not (necessarily) model the speaker as one of those agents. This further simplifies the picture: we need to model neither the intention of the speaker nor the distinction between the speaker's knowledge and belief: he is the observer of the system, and his beliefs are taken to be the truth by the listeners. We provide a sketch of what goes on logically when a lie is communicated. We present a complete logic of manipulative updating, to analyse the effects of lying in public discourse. Next, we turn to the study of lying in games. First, a game-theoretical analysis is used to explain how the possibility of lying makes games such as Liar's Dice interesting, and how lying is put to use in optimal strategies for playing the game. This is the opposite of the logical manipulative update: instead of the utterance always being believed, it is now never believed. We also give a matching logical analysis for the games perspective and implement it in the model checker DEMO. Our running example of lying in games is the game of Liar's Dice.
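
The manipulative update described in this abstract can be sketched in a few lines of Python. This is a minimal propositional stand-in for the full logic, with hypothetical names: after a public lie that p, every listener's accessibility arrows are redirected to p-worlds, so the lie is believed regardless of the actual world.

```python
# Sketch of a "manipulative update" on a simple Kripke-style model
# (hypothetical names; not the paper's implementation).

def manipulative_update(access, val, prop):
    """access: {agent: set of (w, v) arrows}; val: {world: set of true props}.
    After the public lie that `prop`, each listener's arrows point only
    to worlds where `prop` holds."""
    return {
        agent: {(w, v) for (w, v) in arrows if prop in val[v]}
        for agent, arrows in access.items()
    }

# Two worlds: in w1 the die shows a six (p), in w2 it does not.
worlds = {"w1", "w2"}
val = {"w1": {"p"}, "w2": set()}
# Listener a initially considers both worlds possible from either world.
access = {"a": {(w, v) for w in worlds for v in worlds}}

new_access = manipulative_update(access, val, "p")
# After the lie "p", agent a only accesses p-worlds: a believes p everywhere,
# even at the actual world w2 where p is false.
print(sorted(new_access["a"]))  # [('w1', 'w1'), ('w2', 'w1')]
```

Note that, unlike a truthful public announcement, no world is deleted: the lie changes what the listener considers possible, not what is actually the case.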

    On the Logic of Lying

    We look at lying as an act of communication, where (i) the proposition that is communicated is not true, (ii) the utterer of the lie knows (or believes) that what she communicates is not true, and (iii) the utterer of the lie intends the lie to be taken as truth. Rather than dwell on the moral issues, we provide a sketch of what goes on logically when a lie is communicated. We present a complete logic of manipulative updating, to analyse the effects of lying in public discourse. Next, we turn to the study of lying in games. First, a game-theoretical analysis is used to explain how the possibility of lying makes such games interesting, and how lying is put to use in optimal strategies for playing the game. Finally, we give a matching logical analysis. Our running example of lying in games is Liar's Dice.

    Editors' Review and Introduction: Lying in Logic, Language, and Cognition

    We describe some recent trends in research on lying from a multidisciplinary perspective, including logic, philosophy, linguistics, psychology, cognitive science, behavioral economics, and artificial intelligence. Furthermore, we outline the seven contributions to this special issue of topiCS.

    Influencing Choices by Changing Beliefs: A Logical Theory of Influence, Persuasion, and Deception

    We model persuasion, viewed as a deliberate action through which an agent (the persuader) changes the beliefs of another agent (the persuadee). This notion of persuasion paves the way to express the idea of persuasive influence, namely inducing a change in the choices of the persuadee by changing her beliefs. This in turn allows us to express different aspects of deception. To this end, we propose a logical framework that enables expressing actions and capabilities of agents, their mental states (desires, knowledge, and beliefs), a variety of agency operators, as well as the connection between mental states and choices. Those notions, once combined, enable us to capture the notions of influence, persuasion, and deception, as well as their relations.

    Agent-update Models

    In dynamic epistemic logic (Van Ditmarsch et al., 2008) it is customary to use an action frame (Baltag and Moss, 2004; Baltag et al., 1998) to describe different views of a single action. In this article, action frames are extended to add or remove agents; we call these agent-update frames. This can be done selectively, so that only some specified agents receive information about the update, which can be used to model several interesting examples, such as private update and deception, studied earlier by Baltag and Moss (2004), Sakama (2015), and Van Ditmarsch et al. (2012). The product update of a Kripke model by an action frame is an abbreviated way of describing the transformed Kripke model that results from performing the action. This is substantially extended to a sum-product update of a Kripke model by an agent-update frame in the new setting. These ideas are applied to the AI problem of modelling a story. We show that dynamic epistemic logics, with update modalities now based on agent-update frames, continue to have sound and complete proof systems. Decision procedures for model checking and satisfiability have the expected complexity. A sublanguage is shown to have polynomial-space algorithms.
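
The standard (BMS-style) product update that this abstract extends can be sketched as follows. Names are hypothetical and preconditions are simplified to predicates on propositional valuations: new worlds are pairs (w, e) where w satisfies the precondition of e, and an agent's arrows are inherited componentwise.

```python
from itertools import product

def product_update(worlds, access, val, events, pre, eaccess):
    """BMS product update (sketch): new worlds are pairs (w, e) with
    pre(e) true at w; (w, e) -> (w2, e2) for agent a iff both
    w -> w2 and e -> e2 hold for a."""
    new_worlds = {(w, e) for w, e in product(worlds, events) if pre[e](val[w])}
    new_access = {
        a: {(x, y) for x in new_worlds for y in new_worlds
            if (x[0], y[0]) in access[a] and (x[1], y[1]) in eaccess[a]}
        for a in access
    }
    return new_worlds, new_access

# Private announcement of p to agent a, with b unaware:
# e1 = "p is announced" (precondition p), e2 = "nothing happens".
worlds = {"w1", "w2"}
val = {"w1": {"p"}, "w2": set()}
full = {(w, v) for w in worlds for v in worlds}
access = {"a": set(full), "b": set(full)}
pre = {"e1": lambda v: "p" in v, "e2": lambda v: True}
eaccess = {"a": {("e1", "e1"), ("e2", "e2")},
           "b": {("e1", "e2"), ("e2", "e2")}}

nw, na = product_update(worlds, access, val, {"e1", "e2"}, pre, eaccess)
# (w2, e1) is dropped: its precondition p fails at w2.
print(len(nw))  # 3
```

In the resulting model, a's arrows from (w1, e1) stay among p-worlds (a has learned p), while all of b's arrows point to e2-worlds (b believes nothing happened), which is the deception pattern the abstract mentions.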

    Arrow update logic

    We present Arrow Update Logic, a theory of epistemic access elimination that can be used to reason about multi-agent belief change. While the belief-changing "arrow updates" of Arrow Update Logic can be transformed into equivalent belief-changing "action models" from the popular Dynamic Epistemic Logic approach, we prove that arrow updates are sometimes exponentially more succinct than action models. Further, since many examples of belief change are naturally thought of from Arrow Update Logic's perspective of eliminating access to epistemic possibilities, Arrow Update Logic is a valuable addition to the repertoire of logics of information change. In addition to proving basic results about Arrow Update Logic, we introduce a new notion of common knowledge that generalizes both ordinary common knowledge and the "relativized" common knowledge familiar from the Dynamic Epistemic Logic literature.
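
The access-elimination idea can be sketched in a few lines of Python (hypothetical names; a simplification of the logic's syntax): worlds and valuations are untouched, and an arrow survives only if it matches some (source condition, agent, target condition) clause. The truthful public announcement of p, for instance, corresponds to keeping only the arrows between p-worlds.

```python
def arrow_update(access, val, clauses):
    """Arrow update (sketch): arrow (w, v) for agent a survives iff some
    clause (src, ag, tgt) with ag == a has src true at w and tgt true at v.
    No worlds are added or removed; only access is eliminated."""
    return {
        a: {(w, v) for (w, v) in arrows
            if any(ag == a and src(val[w]) and tgt(val[v])
                   for (src, ag, tgt) in clauses)}
        for a, arrows in access.items()
    }

worlds = {"w1", "w2"}
val = {"w1": {"p"}, "w2": set()}
access = {"a": {(w, v) for w in worlds for v in worlds}}
# Public announcement of p as an arrow update: keep p-to-p arrows only.
announce_p = [(lambda v: "p" in v, "a", lambda v: "p" in v)]

print(sorted(arrow_update(access, val, announce_p)["a"]))  # [('w1', 'w1')]
```

Contrast this with an action model, which builds a product of the input model with an event structure; the arrow update only filters the existing arrows, which is the source of the succinctness gap the abstract proves.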

    Coincidence of Bargaining Solutions and Rationalizability in Epistemic Games

    Chapter 1: In 1950, John Nash proposed the Bargaining Problem, for which a solution is a function that assigns to each space of possible utility assignments a single point in the space, in some sense representing the 'fair' deal for the agents involved. Nash provided a solution of his own, and several others have been presented since then, including a notable solution by Ehud Kalai and Meir Smorodinsky. In chapter 1, a complete account is given of the conditions under which the two solutions coincide for two-player bargaining scenarios. Chapter 2: In the same year, Nash presented one of the fundamental solution concepts of game theory, the Nash Equilibrium. Subsequently this concept was generalized by Bernheim and Pearce to the solution concept of rationalizability. Each involves a consideration of the beliefs of the agents regarding the play of the other agents, though in many strategic situations payoffs depend not only on the actions taken, but also on some facts about the world. The main result of chapter 2 is to define rationalizability for a class of such games known as Epistemic Messaging Games.
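
The flavour of rationalizability can be conveyed by iterated elimination of strictly dominated pure strategies in a two-player matrix game. This is a simplification (full rationalizability eliminates never-best responses and allows mixed dominators), and the names are hypothetical:

```python
def iterated_elimination(payoffs_r, payoffs_c):
    """Sketch: iterated elimination of strictly dominated pure strategies.
    payoffs_r[i][j] is the row player's payoff when row plays i and column
    plays j; payoffs_c likewise for the column player."""
    rows = list(range(len(payoffs_r)))
    cols = list(range(len(payoffs_r[0])))

    def dominated(strats, opponents, pay):
        # s is strictly dominated if some t beats it against every
        # surviving opponent strategy
        for s in strats:
            for t in strats:
                if t != s and all(pay(t, o) > pay(s, o) for o in opponents):
                    return s
        return None

    changed = True
    while changed:
        changed = False
        s = dominated(rows, cols, lambda i, j: payoffs_r[i][j])
        if s is not None:
            rows.remove(s); changed = True
        s = dominated(cols, rows, lambda j, i: payoffs_c[i][j])
        if s is not None:
            cols.remove(s); changed = True
    return rows, cols

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
# Only the defect/defect profile survives elimination.
print(iterated_elimination([[3, 0], [5, 1]], [[3, 5], [0, 1]]))  # ([1], [1])
```

Each elimination round encodes one layer of belief: a rational player never plays a dominated strategy, a player who believes the opponent is rational also discards strategies dominated on the reduced game, and so on.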

    Arrow update synthesis

    In this contribution we present arbitrary arrow update model logic (AAUML). This is a dynamic epistemic logic, or update logic. In update logics, static/basic modalities are interpreted on a given relational model, whereas dynamic/update modalities induce transformations (updates) of relational models. In AAUML the update modalities formalize the execution of arrow update models, and there is also a modality for quantification over arrow update models. Arrow update models are an alternative to the well-known action models. We provide an axiomatization of AAUML. The axiomatization is a rewrite system that allows eliminating arrow update modalities from any given formula while preserving truth. Thus, AAUML is decidable and as expressive as the base multi-agent modal logic. Our main result is to establish arrow update synthesis: if there is an arrow update model after which φ holds, we can construct (synthesize) that model from φ. We also point out some notable differences in update expressivity between arrow update logics, action model logics, and refinement modal logic.