Game Theory Relaunched
The game is on. Do you know how to play? Game theory sets out to explore what can be said about making decisions that go beyond merely accepting the rules of a game. Since 1944, a well-elaborated mathematical apparatus has been developed for this purpose; but there is more. During the last three decades, game-theoretic reasoning has appeared in many other fields as well, from engineering to biology and psychology. New simulation tools and network analysis have made game theory omnipresent these days. This book collects recent research papers in game theory from diverse scientific communities across the world; they combine many different fields: economics, politics, history, engineering, mathematics, physics, and psychology. All of them share some method of game theory as a common denominator. Enjoy!
A Comprehensive Survey of Data Mining-based Fraud Detection Research
This survey paper categorises, compares, and summarises almost all published technical and review articles on automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of the data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews of fraud detection, this survey covers many more technical articles and is the only one, to the best of our knowledge, that proposes alternative data and solutions from related domains.
Operational Decision Making under Uncertainty: Inferential, Sequential, and Adversarial Approaches
Modern security threats are characterized by a stochastic, dynamic, partially observable, and ambiguous operational environment. This dissertation addresses such complex security threats using operations research techniques for decision making under uncertainty in operations planning, analysis, and assessment. First, this research develops a new method for robust queue inference with partially observable, stochastic arrival and departure times, motivated by cybersecurity and terrorism applications. In the dynamic setting, this work develops a new variant of Markov decision processes and an algorithm for robust information collection in dynamic, partially observable, and ambiguous environments, with an application to a cybersecurity detection problem. In the adversarial setting, this work presents a new application of counterfactual regret minimization and robust optimization to a multi-domain cyber and air defense problem in a partially observable environment.
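Counterfactual regret minimization, one of the techniques the dissertation applies in the adversarial setting, reduces to plain regret matching in a one-shot zero-sum game. The sketch below runs it on rock-paper-scissors; the payoff matrix, iteration count, and seed are illustrative choices, not taken from the dissertation:

```python
import numpy as np

# Rock-paper-scissors payoff for the row player (zero-sum: the column
# player receives the negation). Illustrative payoff matrix.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])

def strategy(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = np.maximum(regrets, 0.0)
    n = len(regrets)
    return pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)

def regret_matching(A, iters=20000, seed=0):
    """Both players minimise regret; their *average* strategies converge
    to a Nash equilibrium of the zero-sum matrix game A."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    reg1, reg2 = np.zeros(n), np.zeros(m)
    avg1, avg2 = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        s1, s2 = strategy(reg1), strategy(reg2)
        avg1 += s1
        avg2 += s2
        a1 = rng.choice(n, p=s1)  # sample actions from current strategies
        a2 = rng.choice(m, p=s2)
        # Regret = what each alternative action would have earned,
        # minus what the sampled action actually earned.
        reg1 += A[:, a2] - A[a1, a2]
        reg2 += -A[a1, :] + A[a1, a2]  # column player's payoff is -A
    return avg1 / iters, avg2 / iters
```

For rock-paper-scissors the average strategies approach the uniform equilibrium (1/3, 1/3, 1/3); full counterfactual regret minimization applies this same update at every information set of a sequential game.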
EVALUATING ARTIFICIAL INTELLIGENCE METHODS FOR USE IN KILL CHAIN FUNCTIONS
Current naval operations require sailors to make time-critical, high-stakes decisions based on uncertain situational knowledge in dynamic operational environments. Recent tragic events have resulted in unnecessary casualties; they illustrate the decision complexity involved in naval operations and specifically highlight challenges within the OODA loop (Observe, Orient, Decide, Act). Kill chain decisions involving the use of weapon systems are a particularly stressing category within the OODA loop, with unexpected threats that are difficult to identify with certainty, shortened decision reaction times, and lethal consequences. An effective kill chain requires the proper setup and employment of shipboard sensors; the identification and classification of unknown contacts; the analysis of contact intentions based on kinematics and intelligence; an awareness of the environment; and decision analysis and resource selection. This project explored the use of automation and artificial intelligence (AI) to improve naval kill chain decisions. The team studied naval kill chain functions and developed specific evaluation criteria for each function for determining the efficacy of specific AI methods. The team then identified and studied AI methods and applied the evaluation criteria to map specific AI methods to specific kill chain functions. Approved for public release. Distribution is unlimited.
A Temporal Framework for Hypergame Analysis of Cyber Physical Systems in Contested Environments
Game theory is used to model conflicts between players over resources. It offers players a way to reason, providing a rationale for selecting strategies that avoid the worst outcome. Standard game theory, however, lacks the ability to incorporate advantages one player may have over another. A meta-game, known as a hypergame, arises when one player does not know or fully understand all the strategies of a game. Hypergame theory builds upon the utility of game theory by allowing a player to outmaneuver an opponent, thereby obtaining a more preferred outcome with higher utility. Recent work in hypergame theory has focused on normal-form static games, which cannot encode several realistic strategies; one example is when a player's available actions in the future depend on his selections in the past. This work presents a temporal framework for hypergame models. The framework is the first application of temporal logic to hypergames and provides more flexible modeling for domain experts. With this new framework, the concepts of trust, distrust, mistrust, and deception are formalized. While past literature references deception in hypergame research, this work is the first to formalize its definition for hypergames. As a demonstration, the new temporal framework is applied to classical game-theoretic examples as well as to a complex supervisory control and data acquisition (SCADA) network temporal hypergame. The SCADA network example includes actions with a temporal dependency, where a choice in the first round affects what decisions can be made in later rounds of the game. The demonstration results show that the framework is a realistic and flexible modeling method for a variety of applications.
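The core hypergame idea, one player optimizing against an opponent's incomplete model of the game, can be sketched in a few lines. The payoff matrix, the hidden strategy, and the defender's minimax rule below are all illustrative assumptions, not the paper's model:

```python
import numpy as np

# True game: the row player (attacker) has a hidden third strategy the
# column player (defender) does not know about. Entries are the row
# player's payoffs; the game is zero-sum, so the defender receives the
# negation. All payoffs are illustrative.
TRUE_GAME = np.array([
    [2, -1],   # strategy A (known to both players)
    [-1, 2],   # strategy B (known to both players)
    [3, 0],    # strategy H (hidden from the defender)
])

PERCEIVED_GAME = TRUE_GAME[:2]  # the defender's model omits row H

# The defender picks the column minimising her worst case in the game
# she *perceives* (a minimax / security strategy).
defender_col = int(np.argmin(PERCEIVED_GAME.max(axis=0)))

# The attacker, aware of the defender's incomplete model, best-responds
# in the *true* game, using the hidden strategy when it pays.
attacker_row = int(np.argmax(TRUE_GAME[:, defender_col]))
```

Reasoning only over her perceived 2x2 game, the defender expects to concede at most 2, but the attacker's hidden strategy H earns 3 against her chosen column, exactly the kind of outmaneuvering that hypergame theory formalizes.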
Finding Needles in a Moving Haystack: Prioritizing Alerts with Adversarial Reinforcement Learning
Detection of malicious behavior is a fundamental problem in security. One of
the major challenges in using detection systems in practice is in dealing with
an overwhelming number of alerts that are triggered by normal behavior (the
so-called false positives), obscuring alerts resulting from actual malicious
activity. While numerous methods for reducing the scope of this issue have been
proposed, ultimately one must still decide how to prioritize which alerts to
investigate, and most existing prioritization methods are heuristic, for
example, based on suspiciousness or priority scores. We introduce a novel
approach for computing a policy for prioritizing alerts using adversarial
reinforcement learning. Our approach assumes that the attackers know the full
state of the detection system and dynamically choose an optimal attack as a
function of this state, as well as of the alert prioritization policy. The
first step of our approach is to capture the interaction between the defender
and attacker in a game theoretic model. To tackle the computational complexity
of solving this game to obtain a dynamic stochastic alert prioritization
policy, we propose an adversarial reinforcement learning framework. In this
framework, we use neural reinforcement learning to compute best response
policies for both the defender and the adversary to an arbitrary stochastic
policy of the other. We then use these in a double-oracle framework to obtain
an approximate equilibrium of the game, which in turn yields a robust
stochastic policy for the defender. Extensive experiments using case studies in
fraud and intrusion detection demonstrate that our approach is effective in
creating robust alert prioritization policies.
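The double-oracle loop described in the abstract can be sketched for a plain zero-sum matrix game. In this toy version the best-response "oracles" are simple argmax/argmin lookups and the restricted game is solved by fictitious play rather than by the paper's neural reinforcement learning; the rock-paper-scissors payoff matrix is an illustrative stand-in for the defender-attacker game:

```python
import numpy as np

def fictitious_play(A, iters=5000):
    """Approximate a mixed equilibrium of the zero-sum matrix game A
    (row player maximises, column player minimises) by fictitious play."""
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical mixture.
        row_counts[np.argmax(A @ (col_counts / col_counts.sum()))] += 1
        col_counts[np.argmin((row_counts / row_counts.sum()) @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

def double_oracle(A):
    """Double-oracle loop: grow restricted strategy sets with exact best
    responses until neither player can improve, then return the
    approximate equilibrium of the full game."""
    rows, cols = [0], [0]  # restricted strategy sets for each player
    while True:
        x, y = fictitious_play(A[np.ix_(rows, cols)])
        # Lift the restricted mixtures back into the full game.
        x_full = np.zeros(A.shape[0]); x_full[rows] = x
        y_full = np.zeros(A.shape[1]); y_full[cols] = y
        # Best-response oracles over the *full* strategy spaces.
        br_row = int(np.argmax(A @ y_full))
        br_col = int(np.argmin(x_full @ A))
        if br_row in rows and br_col in cols:
            return x_full, y_full  # no player can improve: approx. equilibrium
        if br_row not in rows:
            rows.append(br_row)
        if br_col not in cols:
            cols.append(br_col)
```

On rock-paper-scissors the loop grows both restricted sets to all three strategies and returns mixtures near the uniform equilibrium; the paper replaces the exact oracles with learned best-response policies so the same scheme scales to dynamic alert-prioritization games.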