Game-Theoretic Analysis of Cyber Deception: Evidence-Based Strategies and Dynamic Risk Mitigation
Deception is a technique to mislead human or computer systems by manipulating
beliefs and information. For applications of cyber deception, non-cooperative
games are a natural choice of model to capture the adversarial interactions
between the players and to quantitatively characterize
the conflicting incentives and strategic responses. In this chapter, we provide
an overview of deception games in three different environments and extend the
baseline signaling game models to include evidence through side-channel
knowledge acquisition to capture the information asymmetry, dynamics, and
strategic behaviors of deception. We analyze deception in a binary
information space based on a signaling game framework with a detector that
gives off probabilistic evidence of the deception when the sender acts
deceptively. We then focus on a class of continuous one-dimensional
information spaces and take into account the cost of deception in the signaling
game. We
finally explore the multi-stage incomplete-information Bayesian game model for
defensive deception for advanced persistent threats (APTs). We use the perfect
Bayesian Nash equilibrium (PBNE) as the solution concept for the deception
games and analyze the strategic equilibrium behaviors for both the deceivers
and the deceivees.
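The detector described above can be illustrated with a small Bayes-rule calculation. This is a hedged sketch rather than the chapter's exact model: the prior, detection rate, and false-alarm rate below are hypothetical numbers chosen for illustration.

```python
# Illustrative belief update for a receiver in a binary signaling game with a
# deception detector. The detector raises an alarm with probability `detect`
# on a deceptive message and with probability `false_alarm` on a truthful one;
# all parameter values are hypothetical.

def posterior_truthful(prior_truthful, detect, false_alarm, alarm):
    """Posterior probability that the received message is truthful."""
    if alarm:
        p_evidence_truthful = false_alarm        # truthful message, spurious alarm
        p_evidence_deceptive = detect            # deceptive message, caught
    else:
        p_evidence_truthful = 1.0 - false_alarm
        p_evidence_deceptive = 1.0 - detect
    num = prior_truthful * p_evidence_truthful
    den = num + (1.0 - prior_truthful) * p_evidence_deceptive
    return num / den

# With an even prior, an alarm sharply lowers the belief that the message is
# truthful, while silence raises it.
print(posterior_truthful(0.5, detect=0.8, false_alarm=0.1, alarm=True))   # ~0.111
print(posterior_truthful(0.5, detect=0.8, false_alarm=0.1, alarm=False))  # ~0.818
```

The asymmetry between the two posteriors is what lets probabilistic evidence shift the equilibrium behavior of both sender and receiver.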
A Game-Theoretic Taxonomy and Survey of Defensive Deception for Cybersecurity and Privacy
Cyberattacks on both databases and critical infrastructure have threatened
public and private sectors. Ubiquitous tracking and wearable computing have
infringed upon privacy. Advocates and engineers have recently proposed using
defensive deception as a means to leverage the information asymmetry typically
enjoyed by attackers as a tool for defenders. The term deception, however, has
been employed broadly and with a variety of meanings. In this paper, we survey
24 articles from 2008-2018 that use game theory to model defensive deception
for cybersecurity and privacy. Then we propose a taxonomy that defines six
types of deception: perturbation, moving target defense, obfuscation, mixing,
honey-x, and attacker engagement. These types are delineated by their
information structures, agents, actions, and duration: precisely the concepts
captured by game theory. Our aims are to rigorously define types of defensive
deception, to capture a snapshot of the state of the literature, to provide a
menu of models which can be used for applied research, and to identify
promising areas for future work. Our taxonomy provides a systematic foundation
for understanding different types of defensive deception commonly encountered
in cybersecurity and privacy.
Comment: To appear in ACM Computing Surveys (CSUR).
Reward-Based Deception with Cognitive Bias
Deception plays a key role in adversarial or strategic interactions for the
purpose of self-defence and survival. This paper introduces a general framework
and solution to address deception. Most existing approaches to deception
consider obfuscating crucial information from rational adversaries with
abundant memory and computation resources. In this paper, we instead consider
deceiving adversaries with bounded rationality in terms of expected rewards. This
problem is commonly encountered in many applications especially involving human
adversaries. Leveraging the cognitive bias of humans in reward evaluation under
stochastic outcomes, we introduce a framework that optimally assigns a limited
quantity of resources to defend against human adversaries. Modeling such
cognitive biases follows the so-called prospect theory from behavioral
psychology literature. Then we formulate the resource allocation problem as a
signomial program to minimize the defender's cost in an environment modeled as
a Markov decision process. We use police patrol hour assignment as an
illustrative example and provide detailed simulation results based on
real-world data.
Comment: Submitted to CDC 201
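The cognitive biases the paper leverages can be sketched with the two standard prospect-theory ingredients: a probability-weighting function and an S-shaped value function. The parameter values below are the common estimates from Tversky and Kahneman's work; the paper's own model and parameters may differ.

```python
# Illustrative prospect-theory distortions (parameters are the commonly cited
# Tversky-Kahneman estimates, used here only for demonstration).

def weight(p, gamma=0.61):
    """Probability weighting: small probabilities are overweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: losses loom larger than equal gains."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

# A boundedly rational adversary overweights a small chance of being caught:
print(weight(0.01))            # noticeably larger than 0.01
# and feels a loss of 10 more strongly than an equal gain:
print(value(10), value(-10))
```

A defender who knows these distortions can place scarce deceptive resources where the adversary's *perceived* expected reward, not the objective one, is reduced most, which is the intuition behind the paper's signomial-program formulation.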
Cyber-Physical Systems Security: a Systematic Mapping Study
Cyber-physical systems are integrations of computation, networking, and
physical processes. Due to the tight cyber-physical coupling and to the
potentially disrupting consequences of failures, security here is one of the
primary concerns. Our systematic mapping study sheds some light on how security
is actually addressed when dealing with cyber-physical systems. The provided
systematic map of 118 selected studies is based on, for instance, application
fields, various system components, related algorithms and models, attack
characteristics, and defense strategies. It presents a powerful comparison
framework for existing and future research on this hot topic, important for
both industry and academia.
Deception by Design: Evidence-Based Signaling Games for Network Defense
Deception plays a critical role in the financial industry, online markets,
national defense, and countless other areas. Understanding and harnessing
deception - especially in cyberspace - is both crucial and difficult. Recent
work in this area has used game theory to study the roles of incentives and
rational behavior. Building upon this work, we employ a game-theoretic model
for the purpose of mechanism design. Specifically, we study a defensive use of
deception: implementation of honeypots for network defense. How does the design
problem change when an adversary develops the ability to detect honeypots? We
analyze two models: cheap-talk games and an augmented version of those games
that we call cheap-talk games with evidence, in which the receiver can detect
deception with some probability. Our first contribution is this new model for
deceptive interactions. We show that the model includes traditional signaling
games and complete information games as special cases. We also demonstrate
numerically that deception detection sometimes eliminates pure-strategy
equilibria. Finally, we present the surprising result that the utility of a
deceptive defender can sometimes increase when an adversary develops the
ability to detect deception. These results apply concretely to network defense.
They are also general enough for the large and critical body of strategic
interactions that involve deception.
Comment: To be presented at the Workshop on the Economics of Information Security (WEIS) 2015, Delft University of Technology, The Netherlands.
A Game-Theoretic Foundation of Deception: Knowledge Acquisition and Fundamental Limits
Deception is a technique to mislead human or computer systems by manipulating
beliefs and information. Successful deception is characterized by the
information-asymmetric, dynamic, and strategic behaviors of the deceiver and
the deceivee. This paper proposes a game-theoretic framework of a deception
game to model the strategic behaviors of the deceiver and deceivee and
construct strategies for both attacks and defenses over a continuous
one-dimensional information space. We use the signaling game model to capture
the information-asymmetric, dynamic, and strategic behaviors of deceptions by
modeling the deceiver as a privately-informed player called sender and the
deceivee as an uninformed player called receiver. We characterize the perfect
Bayesian Nash equilibrium (PBNE) solutions of the game and study
deceivability. We highlight the conditions under which the deceivee's
knowledge enhancement through evidence maintains the equilibrium, and we
analyze the impacts of direct deception costs and the players' conflict of
interest on deceivability.
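The classical cheap-talk baseline that continuous one-dimensional signaling models build on is the Crawford-Sobel game: with sender types uniform on [0, 1] and a conflict of interest (bias) b, every equilibrium partitions the type space, and adjacent interval lengths grow by 4b. The sketch below computes those partition boundaries under the standard uniform-quadratic assumptions; it illustrates the baseline, not this paper's extended model with evidence and deception costs.

```python
# Partition equilibria of the uniform-quadratic cheap-talk baseline: interval
# boundaries satisfy a[i+1] - a[i] = a[i] - a[i-1] + 4*b, with a[0] = 0 and
# a[n] = 1, which gives the closed form a[i] = i*a1 + 2*i*(i-1)*b.

def partition(n_intervals, b):
    """Boundaries of an n-interval equilibrium, or None if none exists."""
    a1 = (1.0 - 2.0 * n_intervals * (n_intervals - 1) * b) / n_intervals
    if a1 <= 0:
        return None  # conflict of interest too large for this many intervals
    return [i * a1 + 2.0 * i * (i - 1) * b for i in range(n_intervals + 1)]

print(partition(2, 0.05))  # [0.0, 0.4, 1.0]: larger bias -> coarser information
print(partition(3, 0.1))   # None: no 3-interval equilibrium at this bias
```

The dependence of the finest achievable partition on b is the baseline analogue of the paper's finding that the players' conflict of interest limits deceivability.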
Proactive Defense Against Physical Denial of Service Attacks using Poisson Signaling Games
While the Internet of things (IoT) promises to improve areas such as energy
efficiency, health care, and transportation, it is highly vulnerable to
cyberattacks. In particular, distributed denial-of-service (DDoS) attacks
overload the bandwidth of a server. But many IoT devices form part of
cyber-physical systems (CPS). Therefore, they can be used to launch "physical"
denial-of-service attacks (PDoS) in which IoT devices overflow the "physical
bandwidth" of a CPS. In this paper, we quantify the population-based risk to a
group of IoT devices targeted by malware for a PDoS attack. In order to model
the recruitment of bots, we develop a "Poisson signaling game," a signaling
game with an unknown number of receivers, which have varying abilities to
detect deception. Then we use a version of this game to analyze two mechanisms
(legal and economic) to deter botnet recruitment. Equilibrium results indicate
that 1) defenders can bound botnet activity, and 2) legislating a minimum level
of security has only a limited effect, while incentivizing active defense can
decrease botnet activity arbitrarily. This work provides a quantitative
foundation for proactive PDoS defense.
Comment: 2017 Conference on Decision and Game Theory for Security (GameSec2017).
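A back-of-envelope version of the recruitment model uses Poisson thinning: if the number of devices reached by recruitment messages is Poisson-distributed and each device independently detects the deception with a probability depending on its type, the recruited bots again form a Poisson random variable with a thinned mean. The numbers below are hypothetical and this is only a sketch of the population-risk idea, not the paper's full equilibrium analysis.

```python
# Expected botnet size under Poisson arrivals and type-dependent detection.
# Contacted devices ~ Poisson(lam); a device of type k evades recruitment
# (i.e., detects the deception) with probability detect_probs[k].

def expected_bots(lam, type_fracs, detect_probs):
    """Expected recruited bots: lam * sum_k frac_k * (1 - detect_k)."""
    assert abs(sum(type_fracs) - 1.0) < 1e-9  # type fractions must sum to 1
    return lam * sum(f * (1.0 - d) for f, d in zip(type_fracs, detect_probs))

# 100 reachable devices on average; 70% are weak detectors (catch 10% of
# deceptive messages) and 30% are strong detectors (catch 90%).
print(expected_bots(100.0, [0.7, 0.3], [0.1, 0.9]))  # ~66 of 100 recruited
```

Raising the fraction of strong detectors, e.g. through the incentivized active defense the paper studies, drives this expectation down, which is the mechanism behind bounding botnet activity.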
A Games-in-Games Approach to Mosaic Command and Control Design of Dynamic Network-of-Networks for Secure and Resilient Multi-Domain Operations
This paper presents a games-in-games approach to provide design guidelines
for mosaic command and control that enables secure and resilient
multi-domain operations. Under the mosaic design, pieces or agents in the
network are equipped with flexible interoperability and the capability of
self-adaptability, self-healing, and resiliency so that they can reconfigure
their responses to achieve the global mission in spite of failures of nodes and
links in the adversarial environment. The proposed games-in-games approach
provides a system-of-systems science for mosaic distributed design of
large-scale systems. Specifically, the framework integrates three layers of
design for each agent: the strategic, tactical, and mission layers. Each layer
in the established model corresponds to a game of a different scale, enabling
the integration of threat models and achieving self-mitigation
and resilience capabilities. The solution concept of the developed multi-layer
multi-scale mosaic design is characterized by Gestalt Nash equilibrium (GNE)
which considers the interactions between agents across different layers. The
developed approach is applicable to modern battlefield networks which are
composed of heterogeneous assets that access highly diverse and dynamic
information sources over multiple domains. By leveraging mosaic design
principles, we can achieve the desired operational goals of deployed networks
in a case study and ensure connectivity among entities for the exchange of
information to accomplish the mission.
Comment: 10 pages.
iSTRICT: An Interdependent Strategic Trust Mechanism for the Cloud-Enabled Internet of Controlled Things
The cloud-enabled Internet of controlled things (IoCT) envisions a network of
sensors, controllers, and actuators connected through a local cloud in order to
intelligently control physical devices. Because cloud services are vulnerable
to advanced persistent threats (APTs), each device in the IoCT must
strategically decide whether to trust cloud services that may be compromised.
In this paper, we present iSTRICT, an interdependent strategic trust mechanism
for the cloud-enabled IoCT. iSTRICT is composed of three interdependent layers.
In the cloud layer, iSTRICT uses FlipIt games to conceptualize APTs. In the
communication layer, it captures the interaction between devices and the cloud
using signaling games. In the physical layer, iSTRICT uses optimal control to
quantify the utilities in the higher level games. Best response dynamics link
the three layers in an overall "game-of-games," for which the outcome is
captured by a concept called Gestalt Nash equilibrium (GNE). We prove the
existence of a GNE under a set of natural assumptions and develop an adaptive
algorithm to iteratively compute the equilibrium. Finally, we apply iSTRICT to
trust management for autonomous vehicles that rely on measurements from remote
sources. We show that strategic trust in the communication layer achieves a
worst-case probability of compromise for any attack and defense costs in the
cyber layer.
Comment: To appear in IEEE Transactions on Information Forensics and Security.
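The FlipIt games used in the cloud layer have a well-known closed form for non-adaptive periodic strategies with a random phase: the slower-moving player controls the resource for a fraction of time equal to its own move rate divided by twice the faster player's rate. The sketch below uses that classic result with made-up rates and costs; iSTRICT's actual utilities come from the optimal-control layer.

```python
# Classic FlipIt control fractions for periodic strategies with random phase,
# plus a simple rate-cost-adjusted benefit. All numeric values are illustrative.

def control_fractions(rate_d, rate_a):
    """(defender_fraction, attacker_fraction) of time in control."""
    if rate_a <= rate_d:
        attacker = rate_a / (2.0 * rate_d) if rate_d > 0 else 0.0
        return 1.0 - attacker, attacker
    defender = rate_d / (2.0 * rate_a)
    return defender, 1.0 - defender

def benefit(fraction, rate, move_cost):
    """Net benefit rate: control fraction minus the cost of moving at this rate."""
    return fraction - move_cost * rate

fd, fa = control_fractions(rate_d=2.0, rate_a=1.0)
print(fd, fa)                   # 0.75 0.25
print(benefit(fd, 2.0, 0.25))   # 0.25
```

The trade-off visible here, moving faster buys more control but costs more per unit time, is what the best-response dynamics across iSTRICT's three layers balance.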
Dynamic Games for Secure and Resilient Control System Design
Modern control systems are characterized by their hierarchical structure,
composed of cyber, physical, and human layers. The intricate dependencies among multiple
layers and units of modern control systems require an integrated framework to
address cross-layer design issues related to security and resilience
challenges. To this end, game theory provides a bottom-up modeling paradigm to
capture the strategic interactions among multiple components of the complex
system and enables a holistic view to understand and design
cyber-physical-human control systems. In this review, we first provide a
multi-layer perspective toward increasingly complex and integrated control
systems and then introduce several variants of dynamic games for modeling
different layers of control systems. We present game-theoretic methods for
understanding the fundamental tradeoffs of robustness, security, and resilience
and developing a clean-slate cross-layer approach to enhance the system
performance in various adversarial environments. This review also includes
three quintessential research problems that represent three research directions
where dynamic game approaches can bridge between multiple research areas and
make significant contributions to the design of modern control systems. The
paper is concluded with a discussion on emerging areas of research that
crosscut dynamic games and control systems.
Comment: 12 pages, 8 figures.
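The smallest building block of the security games this review surveys is a two-player zero-sum matrix game between a defender and an attacker, solved by making the opponent indifferent between actions. The payoffs below are hypothetical and the closed form assumes an interior mixed equilibrium (no saddle point); it is a sketch of the solution concept, not any specific model from the review.

```python
# Mixed equilibrium of a 2x2 zero-sum game via the indifference conditions.
# m[i][j] is the defender's (row player's) payoff when the defender plays i
# and the attacker plays j. Assumes no pure-strategy saddle point.

def solve_2x2_zero_sum(m):
    (a, b), (c, d) = m
    denom = a - b - c + d
    p = (d - c) / denom              # prob. defender plays row 0
    q = (d - b) / denom              # prob. attacker plays column 0
    value = (a * d - b * c) / denom  # game value to the defender
    return p, q, value

# Hypothetical payoffs: defending asset 0 blocks a high-value attack on it
# but leaves asset 1 partly exposed, and vice versa.
p, q, v = solve_2x2_zero_sum([[3, 1], [0, 2]])
print(p, q, v)  # 0.5 0.25 1.5
```

Dynamic-game formulations chain stage games like this one over time and state, which is where the robustness, security, and resilience trade-offs discussed above arise.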