
    Assessing the Influence of Different Types of Probing on Adversarial Decision-Making in a Deception Game

    Deception, which includes leading cyber-attackers astray with false information, has been shown to be an effective method of thwarting cyber-attacks. Earlier studies of deception in cybersecurity have focused primarily on variables such as network size and the percentage of honeypots used in games, and there has been little investigation of how the cost of probing actions affects adversarial decision-making. Understanding human decision-making when faced with choices of varying cost is essential in many areas, including cybersecurity. In this paper, we use a deception game (DG) to examine the effect of different probing costs on adversarial decisions. To this end, we used an instance-based learning theory (IBLT) model with a delayed feedback mechanism to mimic human decision-making, and we compared conditions with an even split of deception and no deception. Probing decreased slightly as its cost increased, while the proportion of attacks remained roughly constant across probing costs, although a constant cost led to a slight decrease in attacks. Overall, the different probing costs had no impact on the proportion of attacks but a slightly noticeable impact on the proportion of probing.
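
    To make the modeling approach concrete, the following is a minimal sketch of an instance-based learning (IBL) agent choosing among probe, attack, and withdraw actions with a configurable probing cost. The action names, utilities, and parameter values are illustrative assumptions, not the authors' model; the structure (activation-weighted blending of past outcomes, with feedback stored at a later timestep) follows standard IBLT formulations.

        import math
        import random

        # Hedged IBL sketch (hypothetical names and parameters, not the paper's code).
        # Each stored instance records an action and the utility observed for it;
        # actions are chosen by blending past outcomes weighted by memory activation.

        DECAY = 0.5        # memory decay d
        NOISE = 0.25       # activation noise sigma
        PROBE_COST = 1.0   # cost charged for each probe (the variable varied in the study)

        class IBLAgent:
            def __init__(self):
                # instances[action] -> list of (timestep, observed utility), prepopulated optimistically
                self.instances = {"probe": [(0, 5.0)], "attack": [(0, 5.0)], "withdraw": [(0, 0.0)]}
                self.t = 1

            def _activation(self, timestamps):
                base = math.log(sum((self.t - tj) ** -DECAY for tj in timestamps))
                return base + random.gauss(0, NOISE)

            def _blended_value(self, action):
                # group past occurrences by outcome and weight each outcome by its activation
                outcomes = {}
                for tj, u in self.instances[action]:
                    outcomes.setdefault(u, []).append(tj)
                scored = {u: self._activation(ts) for u, ts in outcomes.items()}
                total = sum(math.exp(a) for a in scored.values())
                return sum(u * math.exp(a) / total for u, a in scored.items())

            def choose(self):
                return max(self.instances, key=self._blended_value)

            def feedback(self, action, utility):
                # delayed feedback: the outcome is stored at the current (later) timestep
                net = utility - (PROBE_COST if action == "probe" else 0.0)
                self.instances[action].append((self.t, net))
                self.t += 1

    Raising PROBE_COST lowers the blended value of probing relative to attacking, which is the mechanism one would expect to produce the slight decline in probing reported above.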

    Poker as a testbed for machine intelligence research

    For years, games researchers have used chess, checkers, and other board games as a testbed for machine intelligence research. The success of world-championship-caliber programs for these games has resulted in a number of interesting games being overlooked. Specifically, we show that poker can serve as a better testbed for machine intelligence research related to decision-making problems. Poker is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information, and deception, much like decision-making applications in the real world. The heuristic search and evaluation methods successfully employed in chess are not helpful here. This paper outlines the difficulty of playing strong poker and describes our first steps towards building a world-class poker-playing program.
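
    As a small illustration of decision-making under imperfect knowledge (not taken from the paper), the sketch below estimates the expected value of calling a bet in a toy one-card "high card wins" game by sampling over the opponent's hidden card; the rules and numbers are assumptions for illustration only.

        import random

        DECK = list(range(2, 15))  # card ranks 2..14 in a one-card "high card wins" game

        def ev_of_call(my_card, pot, bet, samples=10_000):
            # Monte Carlo estimate of the win probability against a hidden card,
            # then the expected value of risking `bet` to win `pot + bet`.
            wins = 0
            for _ in range(samples):
                opponent = random.choice([c for c in DECK if c != my_card])
                wins += my_card > opponent
            p_win = wins / samples
            return p_win * (pot + bet) - (1 - p_win) * bet

        print(ev_of_call(my_card=11, pot=10, bet=5))  # positive expected value suggests calling

    Even this toy version shows why chess-style exhaustive search does not transfer: the quantity being optimized is an expectation over hidden information rather than the value of a fully observable position.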

    DScent Final Report

    DScent was a joint project between five UK universities combining research theories in the disciplines of computational inference, forensic psychology, and expert decision-making in the area of counter-terrorism. This document discusses the work carried out by Leeds Metropolitan University, covering the research, design, and development of an investigator support system for detecting deception using artificial intelligence. For the purposes of data generation and of system and hypothesis testing, the project team devised two closed-world games, the Cutting Corners Board Game and the Location Based Game. DScentTrail presents the investigator with a ‘scent trail’ of a suspect’s behaviour over time, allowing the investigator to present multiple challenges to a suspect, from which they may prove the suspect guilty outright or receive cognitive or emotional clues of deception (Ekman 2002; Ekman & Frank 1993; Ekman & Yuille 1989; Hocking & Leathers 1980; Knapp & Comadena 1979). A scent trail is an ordered collection of relevant behavioural information about a suspect over time. The system links into a neural network that attempts to identify deceptive behavioural patterns of individuals. Preliminary work was carried out on a behaviour-based AI module that would work separately alongside the neural network, with both identifying deception before integrating their results to update DScentTrail. Unfortunately, the data necessary to design such a system was not provided, and this strand of the research therefore reached only its preliminary stages. To date, research has shown that there are no specific patterns of deceptive behaviour that are consistent across all people and all situations (Zuckerman 1981). DScentTrail is a decision support system, incorporating artificial intelligence (AI), intended to be used by investigators; it attempts to find ways around the problem stated by Zuckerman above.
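
    As a concrete reading of the "scent trail" described above, the sketch below models it as an ordered collection of behavioural observations for one suspect, with a slot for a deception score to be filled in by a separate classifier. The class and field names are assumptions for illustration, not the DScentTrail schema.

        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List

        @dataclass
        class Observation:
            timestamp: datetime
            description: str              # e.g. an action recorded in one of the closed-world games
            source: str                   # e.g. "Cutting Corners Board Game" or "Location Based Game"
            deception_score: float = 0.0  # filled in later by the deception-classification module

        @dataclass
        class ScentTrail:
            suspect_id: str
            observations: List[Observation] = field(default_factory=list)

            def add(self, obs: Observation) -> None:
                self.observations.append(obs)
                self.observations.sort(key=lambda o: o.timestamp)  # keep the trail ordered in time

            def flagged(self, threshold: float = 0.5) -> List[Observation]:
                # observations the classifier considers likely deceptive, which an
                # investigator might turn into challenges to put to the suspect
                return [o for o in self.observations if o.deception_score >= threshold]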

    Pinocchio's Pupil: Using Eyetracking and Pupil Dilation to Understand Truth Telling and Deception in Sender-Receiver Games

    We report experiments on sender-receiver games with an incentive for senders to exaggerate. Subjects "overcommunicate": messages are more informative of the true state than they should be in equilibrium. Eyetracking shows that senders look at payoffs in a way that is consistent with a level-k model. A combination of sender messages and lookup patterns predicts the true state about twice as often as equilibrium predicts. Using these measures to infer the state would enable receiver subjects to hypothetically earn 16-21 percent more than they actually do, an economic value of 60 percent of the maximum increment.
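
    A minimal sketch of the level-k reasoning referred to above is given below for a discrete sender-receiver game with an incentive to exaggerate; the state space, bias, and tie-breaking are illustrative assumptions rather than the paper's exact design.

        STATES = [1, 2, 3, 4, 5]
        BIAS = 1  # the sender prefers the receiver's action to exceed the true state by BIAS

        def clamp(x):
            return max(min(x, max(STATES)), min(STATES))

        def sender_message(state, k):
            # a level-0 sender is truthful; a level-k sender picks the message that
            # drives a level-(k-1) receiver closest to the sender's preferred action
            if k == 0:
                return clamp(state)
            target = clamp(state + BIAS)
            return min(STATES, key=lambda m: abs(receiver_action(m, k - 1) - target))

        def receiver_action(message, k):
            # a level-0 receiver takes the message at face value; a level-k receiver
            # inverts the message rule of a level-(k-1) sender and acts on that inference
            if k == 0:
                return clamp(message)
            inferred = next((s for s in STATES if sender_message(s, k - 1) == message), message)
            return clamp(inferred)

        print(sender_message(3, 1), receiver_action(4, 2))  # sender inflates by 1; receiver deflates by 1

    In such a hierarchy, truthful level-0 senders and belief-inverting receivers leave messages informative of the true state, which is the kind of overcommunication relative to equilibrium that the lookup data above help explain.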

    Game Theory Meets Network Security: A Tutorial at ACM CCS

    The increasingly pervasive connectivity of today's information systems brings new challenges to security. Traditional security has come a long way toward protecting well-defined goals such as confidentiality, integrity, availability, and authenticity. However, with the growing sophistication of attacks and the complexity of systems, protection using traditional methods can be cost-prohibitive. A new perspective and a new theoretical foundation are needed to understand security from a strategic, decision-making standpoint. Game theory provides a natural framework to capture the adversarial and defensive interactions between an attacker and a defender. It provides a quantitative assessment of security, prediction of security outcomes, and a mechanism design tool that can enable security-by-design and reverse the attacker's advantage. This tutorial provides an overview of diverse methodologies from game theory, including games of incomplete information, dynamic games, and mechanism design theory, to offer a modern theoretical underpinning of a science of cybersecurity. The tutorial also discusses open problems and research challenges that the CCS community can address, with the objective of building a multidisciplinary bridge between cybersecurity, economics, and game and decision theory.
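
    To make the game-theoretic framing concrete, the following is a small attacker-defender example with illustrative payoffs (assumptions, not taken from the tutorial): a 2x2 zero-sum game solved for the defender's equalizing mixed strategy via the attacker's indifference condition.

        # Rows: defender {monitor, idle}; columns: attacker {attack, wait}.
        # Entries are the defender's payoffs; the attacker's payoffs are their negatives.
        A = [[ 3.0, -1.0],   # monitoring catches attacks but costs effort when no attack comes
             [-5.0,  0.0]]   # idling is free when the attacker waits but disastrous under attack

        def defender_mixed_strategy(A):
            # probability p of playing "monitor" that leaves the attacker indifferent:
            # p*A[0][0] + (1-p)*A[1][0] == p*A[0][1] + (1-p)*A[1][1]
            # (valid here because neither player has a dominant pure strategy)
            numerator = A[1][1] - A[1][0]
            denominator = (A[0][0] - A[1][0]) - (A[0][1] - A[1][1])
            p = numerator / denominator
            value = p * A[0][0] + (1 - p) * A[1][0]
            return p, value

        p, v = defender_mixed_strategy(A)
        print(f"monitor with probability {p:.2f}; expected defender payoff {v:.2f}")

    Randomizing the defense is exactly the kind of quantitative prescription the tutorial attributes to game theory: a deterministic defender could be exploited, whereas the mixed strategy pins the attacker to the value of the game.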

    Deception in Optimal Control

    In this paper, we consider an adversarial scenario where one agent seeks to achieve an objective and its adversary seeks to learn the agent's intentions and prevent the agent from achieving its objective. The agent thus has an incentive to deceive the adversary about its intentions while working to achieve its objective. The primary contribution of this paper is to introduce a mathematically rigorous framework for the notion of deception within the context of optimal control. The central notion introduced in the paper is that of a belief-induced reward: a reward that depends not only on the agent's state and action, but also on the adversary's beliefs. Design of an optimal deceptive strategy then becomes a question of optimal control design on the product of the agent's state space and the adversary's belief space. The proposed framework allows deception to be defined in an arbitrary control system endowed with a reward function, as well as with additional specifications limiting the agent's control policy. In addition to defining deception, we discuss the design of optimally deceptive strategies under uncertainty in the agent's knowledge about the adversary's learning process. In the latter part of the paper, we focus on a setting where the agent's behavior is governed by a Markov decision process, and show that the design of optimally deceptive strategies under lack of knowledge about the adversary naturally reduces to previously discussed problems in control design on partially observable or uncertain Markov decision processes. Finally, we present two examples of deceptive strategies: a "cops and robbers" scenario and an example where an agent may use camouflage while moving. We show that optimally deceptive strategies in these examples follow the intuitive idea of how to deceive an adversary in such settings.
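
    The sketch below illustrates the belief-induced reward idea on a toy problem: the agent plans over the product of its physical states and a discretized adversary belief, trading progress toward its true goal against the belief its behaviour reveals. The dynamics, the assumed adversary update, and the reward weights are illustrative assumptions, not the paper's construction.

        import itertools

        STATES = [0, 1, 2, 3, 4]                 # positions on a line; the true goal sits at 4
        ACTIONS = {"left": -1, "right": +1, "stay": 0}
        BELIEF_LEVELS = range(11)                # adversary belief in the true goal, b = level / 10
        GAMMA = 0.9                              # discount factor
        LAMBDA = 2.0                             # weight on keeping the adversary's belief low

        def step_state(s, a):
            return min(max(s + ACTIONS[a], STATES[0]), STATES[-1])

        def step_belief(level, a):
            # assumed adversary update: moving toward the true goal raises its belief
            shift = {"right": +1, "left": -1, "stay": 0}[a]
            return min(max(level + shift, 0), 10)

        def reward(s, a, level):
            # belief-induced reward: progress toward the goal minus a penalty
            # proportional to the adversary's resulting belief
            progress = 1.0 if step_state(s, a) == STATES[-1] else 0.0
            return progress - LAMBDA * step_belief(level, a) / 10.0

        def value_iteration(iters=200):
            # dynamic programming over the product space (position, belief level)
            V = {(s, b): 0.0 for s, b in itertools.product(STATES, BELIEF_LEVELS)}
            for _ in range(iters):
                V = {(s, b): max(reward(s, a, b) + GAMMA * V[step_state(s, a), step_belief(b, a)]
                                 for a in ACTIONS)
                     for (s, b) in V}
            return V

    Larger values of LAMBDA push the optimal policy toward detours or delays that keep the adversary's belief low before committing to the goal, which is the qualitative behaviour of the "cops and robbers" and camouflage examples mentioned above.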