9 research outputs found

    On the Logic of Lying

    Get PDF
    We look at lying as an act of communication, where (i) the proposition that is communicated is not true, (ii) the utterer of the lie knows (or believes) that what she communicates is not true, and (iii) the utterer of the lie intends the lie to be taken as truth. Rather than dwell on the moral issues, we provide a sketch of what goes on logically when a lie is communicated. We present a complete logic of manipulative updating to analyse the effects of lying in public discourse. Next, we turn to the study of lying in games. First, a game-theoretical analysis is used to explain how the possibility of lying makes such games interesting, and how lying is put to use in optimal strategies for playing the game. Finally, we give a matching logical analysis. Our running example of lying in games is Liar's Dice.
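
    As a rough illustration of the kind of manipulative update the abstract refers to (a sketch of the standard construction for a lie that is always believed, not necessarily the paper's exact clauses): in a model M = (W, R_1, ..., R_n, V) with doxastic accessibility relations R_a, a public lie that φ keeps the worlds and valuation fixed and cuts every belief arrow that points outside the φ-worlds, so every listener believes φ afterwards; a known side effect is that a listener whose accessible worlds all refute φ ends up with inconsistent beliefs.

```latex
% Manipulative update with \varphi (a lie that is always believed):
% worlds and valuation are unchanged, belief arrows are restricted to
% \varphi-worlds, so every agent believes \varphi afterwards.
R_a^{\dagger\varphi} \;=\; R_a \cap \big(W \times \llbracket \varphi \rrbracket_M\big),
\qquad
M^{\dagger\varphi}, w \models B_a \varphi \ \text{ for all } w \in W .
```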

    On the Logic of Lying

    Get PDF
    We model lying as a communicative act changing the beliefs of the agents in a multi-agent system. With Augustine, we see lying as an utterance believed to be false by the speaker and uttered with the intent to deceive the addressee. The deceit is successful if the lie is believed by the addressee after the utterance. This is our perspective. Also, as is common in dynamic epistemic logics, we model the agents addressed by the lie, but we do not (necessarily) model the speaker as one of those agents. This further simplifies the picture: we do not need to model the intention of the speaker, nor do we need to distinguish between the speaker's knowledge and belief: he is the observer of the system, and his beliefs are taken to be the truth by the listeners. We provide a sketch of what goes on logically when a lie is communicated. We present a complete logic of manipulative updating to analyse the effects of lying in public discourse. Next, we turn to the study of lying in games. First, a game-theoretical analysis is used to explain how the possibility of lying makes games such as Liar's Dice interesting, and how lying is put to use in optimal strategies for playing the game. This is the opposite of the logical manipulative update: instead of the utterance always being believed, it is now never believed. We also give a matching logical analysis for the games perspective, and implement that in the model checker DEMO. Our running example of lying in games is the game of Liar's Dice.
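
    The DEMO implementation mentioned in the abstract is not reproduced here, but the following toy Haskell sketch (hypothetical names, not DEMO's actual API) shows the basic idea on a two-world model: a manipulative update deletes every belief arrow that points to a world where the lie is false, so the addressee comes to believe the lie even at the actual world, where it is false.

```haskell
-- Toy sketch, not DEMO's actual API: a single-addressee belief model
-- and a "manipulative update" under which a lie is always believed.
module Main where

type World = Int
type Prop  = String

-- A belief model: worlds, a valuation, and the addressee's belief arrows.
data Model = Model
  { worlds :: [World]
  , val    :: World -> [Prop]     -- propositions true at a world
  , rel    :: [(World, World)]    -- (w, v): at w the addressee considers v possible
  }

holds :: Model -> World -> Prop -> Bool
holds m w p = p `elem` val m w

-- The addressee believes p at w iff p holds in every world she considers possible.
believes :: Model -> World -> Prop -> Bool
believes m w p = and [ holds m v p | (u, v) <- rel m, u == w ]

-- Manipulative ("lying") update with p: keep all worlds, but delete every
-- belief arrow that points to a world where p is false.
lieThat :: Prop -> Model -> Model
lieThat p m = m { rel = [ (u, v) | (u, v) <- rel m, holds m v p ] }

-- Example: in world 0 (the actual world) the die shows a two, in world 1 it
-- shows a six; the addressee initially cannot tell the two worlds apart.
example :: Model
example = Model
  { worlds = [0, 1]
  , val    = \w -> if w == 1 then ["six"] else ["two"]
  , rel    = [ (u, v) | u <- [0, 1], v <- [0, 1] ]
  }

main :: IO ()
main = do
  let before = example
      after  = lieThat "six" before
  print (believes before 0 "six")   -- False: before the lie, "six" is not believed
  print (believes after  0 "six")   -- True: after the lie, "six" is believed at the
                                    -- actual world, where it is in fact false
```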

    The language of social software

    Get PDF
    Computer software is written in languages like C, Java or Haskell. In many cases social software is expressed in natural language. The paper explores connections between the areas of natural language analysis and the analysis of social protocols, and proposes an extended program for natural language semantics, where the goals of natural language communication are derived from the demands of specific social protocols.

    PDL as a Multi-Agent Strategy Logic

    Get PDF
    Propositional Dynamic Logic (PDL) was invented as a logic for reasoning about regular programming constructs. We propose a new perspective on PDL as a multi-agent strategic logic (MASL). This logic for strategic reasoning has group strategies as first-class citizens, and brings game logic closer to standard modal logic. We demonstrate that MASL can express key notions of game theory, social choice theory and voting theory in a natural way, we give a sound and complete proof system for MASL, and we show that MASL encodes coalition logic. Next, we extend the language to epistemic multi-agent strategic logic (EMASL), we give examples of what it can express, we propose to use it for posing new questions in epistemic social choice theory, and we give a calculus for reasoning about a natural class of epistemic game models. We end by listing avenues for future research and by tracing connections to a number of other logics for reasoning about strategies. Comment: 10 pages, poster presentation at TARK 2013 (arXiv:1310.6382), http://www.tark.or
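
    For readers without PDL at hand, the base language the abstract builds on is the standard one sketched below; under a strategic reading, an atomic program can be taken as a (group) action or strategy choice, so that [π]φ says that every outcome of following π satisfies φ. This is a generic sketch, not MASL's exact definitions.

```latex
% Standard PDL programs and formulas (the base language extended to MASL):
\pi ::= a \mid \pi ; \pi \mid \pi \cup \pi \mid \pi^{*} \mid {?}\varphi
\qquad
\varphi ::= p \mid \neg\varphi \mid \varphi \wedge \varphi \mid [\pi]\varphi
% Semantics over a labelled transition system with relations R_a:
M, s \models [\pi]\varphi
\iff
M, t \models \varphi \ \text{ for every } t \text{ with } (s, t) \in R_\pi .
```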

    A modal approach to model computational trust

    Get PDF
    The concept of trust is a socio-cognitive concept that plays an important role in representing interactions within concurrent systems. When the complexity and unpredictability of a computational system make standard security solutions (commonly called hard security solutions) inapplicable, computational trust is one of the most useful concepts for designing interaction protocols. In this work, our main objective is to present a prospective survey of the field of computational trust and to propose two trust models, based on logical formalisms, showing how they can be studied and used. While trying to stay general, we use the service-oriented architecture paradigm, which is used to model open systems such as web applications, as the context of study whenever examples are needed, abstracting such systems as multi-agent systems. Our work is divided into three parts. The first part presents a general view of computational trust studies in three main steps: trust theories, as first attempts to grasp the notions linked to the concept of trust; fields of application, which make explicit the uses traditionally associated with computational trust; and trust models, as instantiations of a trust theory within some formal framework. Our survey ends with a set of issues that we consider the field should address as a priority in order to advance. The next two parts present two models of trust. The first model is an instantiation of Castelfranchi & Falcone's socio-cognitive trust theory, implemented in a Dynamic Epistemic Logic that we propose. The main originality of this solution is that our trust definition extends the original model to complex actions (programs, composed services, etc.) and uses authored assignments as a special kind of atomic action. After proving the decidability of our formalism, we propose a methodology for inferring trust in complex actions from trust in atomic actions, and we illustrate how the model can be put to use in a case study based on service composition in service-oriented architectures. The second model extends our socio-cognitive definition to an abductive framework that allows us to associate trust with explanations: it adapts Bochman's production relations, originally proposed to study causality, to the epistemic case, so that trust appears as a special case of causal reasoning applied to a social context. Three variants of this model study, respectively, how trust is generated, how it in turn generates an agent's beliefs, and how it relates to its context of use. We end with a conclusion that outlines how we would like to extend this work.
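
    As a very rough illustration of the socio-cognitive reading used in the first model (an informal sketch of a Castelfranchi & Falcone-style decomposition, not the thesis's actual definition), trust of agent i in agent j to achieve goal φ by performing action α can be unpacked into a goal plus a conjunction of beliefs:

```latex
% Sketch of a Castelfranchi & Falcone-style trust decomposition
% (illustrative only; B_i is i's belief, [\alpha_j] a dynamic modality for
% j performing \alpha, Goal/Capable/Intends are assumed predicates):
\mathrm{Trust}(i, j, \alpha, \varphi) \;:\approx\;
\mathrm{Goal}_i\,\varphi
\;\wedge\; B_i\,\mathrm{Capable}_j(\alpha)
\;\wedge\; B_i\,\mathrm{Intends}_j(\alpha)
\;\wedge\; B_i\,[\alpha_j]\varphi .
```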

    Propositional Dynamic Logic as a Logic of Belief Revision

    No full text
    This paper shows how propositional dynamic logic (PDL) can be interpreted as a logic for multi-agent belief revision. For that we revise and extend the logic of communication and change (LCC) of [9]. Like LCC, our logic uses PDL as a base epistemic language. Unlike LCC, we start out from agent plausibilities, add their converses, and build knowledge and belief operators from these with the PDL constructs. We extend the update mechanism of LCC to an update mechanism that handles belief change as relation substitution, and we show that the update part of this logic is more expressive than either that of LCC or that of doxastic/epistemic PDL with a belief change modality. It is shown that the properties of knowledge and belief are preserved under any update, and that the logic is complete.
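
    To give a flavour of the construction (one familiar way to build the operators and one familiar relation substitution, not necessarily the paper's exact clauses): with a plausibility program ≤_a, its converse, tests and the universal program U available as PDL terms, knowledge can be read off the symmetric closure of plausibility, safe belief off plausibility itself, and a lexicographic (radical) upgrade with φ becomes a substitution of relation terms.

```latex
% Knowledge and (safe) belief from plausibility, PDL-style (one common reading,
% with  s \leq_a t  read as "t is at least as plausible as s for agent a"):
K_a\varphi \;:=\; [(\leq_a \cup \leq_a^{\smile})^{*}]\,\varphi ,
\qquad
\Box_a\varphi \;:=\; [\leq_a]\,\varphi .
% Belief change as relation substitution: a lexicographic upgrade with \varphi
% makes all \varphi-worlds more plausible than all \neg\varphi-worlds and keeps
% the old order within the two zones.
\leq_a \;\mapsto\;
({?}\varphi ; \leq_a ; {?}\varphi)
\;\cup\;
({?}\neg\varphi ; \leq_a ; {?}\neg\varphi)
\;\cup\;
({?}\neg\varphi ; U ; {?}\varphi) .
```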

    The Philosophical Foundations of PLEN: A Protocol-theoretic Logic of Epistemic Norms

    Full text link
    In this dissertation, I defend the protocol-theoretic account of epistemic norms. The protocol-theoretic account amounts to three theses: (i) There are norms of epistemic rationality that are procedural; epistemic rationality is at least partially defined by rules that restrict the possible ways in which epistemic actions and processes can be sequenced, combined, or chosen among under varying conditions. (ii) Epistemic rationality is ineliminably defined by procedural norms; procedural restrictions provide an irreducible unifying structure for even apparently non-procedural prescriptions and normative expressions, and they are practically indispensable in our cognitive lives. (iii) These procedural epistemic norms are best analyzed in terms of the protocol (or program) constructions of dynamic logic. I defend (i) and (ii) at length and in multi-faceted ways, and I argue that they entail a set of criteria of adequacy for models of epistemic dynamics and abstract accounts of epistemic norms. I then define PLEN, the protocol-theoretic logic of epistemic norms. PLEN is a dynamic logic that analyzes epistemic rationality norms with protocol constructions interpreted over multigraph-based models of epistemic dynamics. The kernel of the overall argument of the dissertation is showing that PLEN uniquely satisfies the criteria defended; none of the familiar rival frameworks for modeling epistemic dynamics or normative concepts are capable of satisfying these criteria to the same degree as PLEN. The overarching argument of the dissertation is thus a theory-preference argument for PLEN.
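
    As a small illustration of the protocol-theoretic idea (the atomic actions and propositions below are hypothetical illustrations, not PLEN's actual vocabulary): a procedural norm such as "gather evidence and check it for consistency before updating, and keep doing so" can be written as a dynamic-logic protocol term, and a compliance claim becomes a box formula over that term.

```latex
% A procedural epistemic norm as a protocol term (hypothetical atoms):
\pi_{\mathrm{norm}} \;:=\;
\big(\mathsf{gather} \;;\; {?}\mathit{consistent} \;;\; \mathsf{update}\big)^{*}
% A corresponding normative claim: every run that follows the protocol
% preserves the justification of the agent's beliefs.
[\pi_{\mathrm{norm}}]\,\mathit{justified} .
```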