9,494 research outputs found

    Deception


    Deceptive AI and Society

    Deceptive artificial intelligence (AI) is a heavily loaded term. Its semantic load has become exponentially heavier in a very short period of time. Perhaps most of this semantic load, at least in the recent public sphere, has been placed on it because of the deployment of large language models (LLMs), such as ChatGPT. Deceptive AI is very multifaceted. Different AI approaches give rise to different types of AI technologies or, in some cases, autonomous agents. Some of these technologies already exist in practice, others exist in theory, some are transitioning from theory to implementation, and, finally, some are still only fictions of our shared imagination [62].

    The Triangles of Dishonesty: Modelling the Evolution of Lies, Bullshit, and Deception in Agent Societies

    Misinformation and disinformation in agent societies can be spread due to the adoption of dishonest communication. Recently, this phenomenon has been exacerbated by advances in AI technologies. One way to understand dishonest communication is to model it from an agent-oriented perspective. In this paper we model dishonesty games considering the existing literature on lies, bullshit, and deception, three prevalent but distinct forms of dishonesty. We use an evolutionary agent-based replicator model to simulate dishonesty games and show the differences between the three types of dishonest communication under two different sets of assumptions: agents are either self-interested (payoff maximizers) or competitive (relative payoff maximizers). We show that: (i) truth-telling is not stable in the face of lying, but interrogation helps drive truth-telling in the self-interested case but not the competitive case; (ii) in the competitive case, agents stop bullshitting and start truth-telling, but this is not stable; (iii) deception can only dominate in the competitive case, and truth-telling is a saddle point in which agents realise deception can provide better payoffs.
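    The replicator model the abstract refers to can be sketched in a few lines of Python. This is a minimal two-strategy illustration (truth-teller vs. liar) under a hypothetical payoff matrix, not the payoffs or the three-way games used in the paper:

    ```python
    import numpy as np

    # Hypothetical payoff matrix for (truth-teller, liar); A[i, j] is the
    # payoff to strategy i when matched against strategy j.
    A = np.array([[3.0, 0.0],
                  [4.0, 1.0]])

    def replicator_step(x, A, dt=0.01):
        """One Euler step of the replicator dynamics dx_i/dt = x_i (f_i - f_bar)."""
        f = A @ x        # fitness of each strategy against the current population
        f_bar = x @ f    # average population fitness
        return x + dt * x * (f - f_bar)

    x = np.array([0.9, 0.1])   # population starts as mostly truth-tellers
    for _ in range(5000):
        x = replicator_step(x, A)
    print(x)  # under these illustrative payoffs, lying takes over the population
    ```

    This mirrors the paper's qualitative point (i): with payoffs like these, truth-telling is not stable against lying, since liars earn more against every opponent type.
    
    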

    Online Handbook of Argumentation for AI: Volume 1

    This volume contains revised versions of the papers selected for the first volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI. (Comment: editors: Federico Castagna, Francesca Mosca, Jack Mumford, Stefan Sarkadi and Andreas Xydi)

    Consciousness complexity

    Copyright © 2015 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0

    Sequential Decision Making with Untrustworthy Service Providers

    In this paper, we deal with the sequential decision making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of service providers populating the environment. Specifically, we propose a generic Bayesian trust model, and formulate the optimal Bayesian solution to the exploration-exploitation problem facing the agents when repeatedly interacting with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm to approximate that solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winners from both years in which the competition has been run.
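    The kind of Bayesian trust update such models build on can be sketched with a Beta-Bernoulli posterior over a provider's unknown reliability. The class and parameter names below are illustrative, not the paper's actual model, which additionally handles exploration via the value of perfect information:

    ```python
    # Beta-Bernoulli trust model: a provider's reliability p is unknown;
    # each interaction either succeeds (1) or fails (0). With a Beta(a, b)
    # prior, the posterior after s successes and f failures is Beta(a+s, b+f).

    class BetaTrust:
        def __init__(self, a=1.0, b=1.0):
            self.a, self.b = a, b   # Beta(1, 1) = uniform prior over reliability

        def update(self, success):
            if success:
                self.a += 1.0
            else:
                self.b += 1.0

        def mean(self):
            # expected reliability under the current posterior
            return self.a / (self.a + self.b)

    t = BetaTrust()
    for outcome in [1, 1, 0, 1]:   # three successes, one failure
        t.update(outcome)
    print(t.mean())  # Beta(4, 2) posterior -> 4/6 ≈ 0.667
    ```

    A sequential decision maker would then trade off exploiting the provider with the highest posterior mean against interacting with less-known providers whose posteriors are still wide.
    
    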

    A formal account of dishonesty

    This paper provides formal accounts of dishonest attitudes of agents. We introduce a propositional multi-modal logic that can represent an agent's belief and intention as well as communication between agents. Using the language, we formulate different categories of dishonesty. We first provide two different definitions of lies and provide their logical properties. We then consider an incentive behind the act of lying and introduce lying with objectives. We subsequently define bullshit, withholding information and half-truths, and analyze their formal properties. We compare different categories of dishonesty in a systematic manner, and examine their connection to deception. We also propose maxims for dishonest communication that agents should ideally try to satisfy.
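    To make the distinction concrete, definitions in the spirit of such logics can be written as follows, where $B_i$ and $I_i$ are belief and intention modalities and $U_{i,j}\varphi$ is a hypothetical modality for "agent $i$ utters $\varphi$ to agent $j$" (this is a simplified rendering common in the literature, not necessarily the paper's exact formulation):

    ```latex
    % Lying: i asserts phi while believing its negation,
    % intending j to come to believe phi.
    \mathrm{Lie}_{i \to j}(\varphi) \;\equiv\;
        U_{i,j}\varphi \wedge B_i \neg\varphi \wedge I_i B_j \varphi

    % Bullshit: i asserts phi with no belief either way about its truth.
    \mathrm{BS}_{i \to j}(\varphi) \;\equiv\;
        U_{i,j}\varphi \wedge \neg B_i \varphi \wedge \neg B_i \neg\varphi
    ```

    The key contrast is that the liar tracks the truth (and speaks against it), whereas the bullshitter is indifferent to it.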
