503 research outputs found

    Allocation of Resources for Protecting Public Goods against Uncertain Threats Generated by Agents

    This paper analyses a framework for designing robust decisions against uncertain threats to public goods generated by multiple agents. The agents can be intentional attackers such as terrorists, agents accumulating value in flood- or earthquake-prone locations, or agents generating extreme events such as electricity outages or the recent BP oil spill. Instead of using a leader-follower game-theoretic framework, the paper proposes a decision-theoretic model based on two-stage stochastic optimization (STO) for advising optimal resource allocations (or regulations) in situations characterized by uncertain perceptions of agent behaviors. In particular, a stochastic minimax model based on shortfalls is advanced in the context of quantile optimization for dealing with potential extreme events. The proposed framework can handle both direct and indirect judgments about the decision maker's perception of uncertain agent behaviors: directly through probability density estimation, or indirectly through probabilistic inversion. The quantified distributions serve as input to the stochastic optimization models in order to address the inherent uncertainties, so that robust decisions can be obtained against all possible threats, especially those with extreme consequences. The paper also introduces and compares three computational algorithms for solving the resulting two-stage STO problems: a bilateral descent method, a linear programming approximation, and a stochastic quasi-gradient method. A high-dimensional numerical example illustrates their performance under the large number of scenarios typically required for dealing with low-probability extreme events. Case studies include defensive resource allocation among cities and security of electricity networks.
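
    The two-stage STO formulation and the stochastic quasi-gradient (SQG) algorithm mentioned above can be illustrated with a small sketch. The following Python snippet is an illustration only, not the paper's implementation: it spreads a fixed defense budget across locations to reduce the expected damage from sampled threat scenarios via projected SQG steps, and the damage model, distributions, and names (damage, project_to_budget, etc.) are assumptions introduced purely for illustration.

        # Illustrative sketch (not the paper's code): stochastic quasi-gradient (SQG)
        # steps for a two-stage resource-allocation problem under uncertain threats.
        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, budget = 5, 10.0
        loss = np.array([8.0, 5.0, 3.0, 2.0, 1.0])   # assumed asset values per site

        def sample_threat():
            # Second-stage uncertainty: random attack intensities (assumed distribution).
            return rng.dirichlet(np.ones(n_sites))

        def damage(x, threat):
            # Assumed damage model: protection x_i reduces the loss at site i exponentially.
            return float(np.sum(loss * threat * np.exp(-x)))

        def project_to_budget(x, b):
            # Euclidean projection onto the simplex {x >= 0, sum(x) = b}.
            u = np.sort(x)[::-1]
            css = np.cumsum(u) - b
            rho = np.nonzero(u - css / (np.arange(len(x)) + 1) > 0)[0][-1]
            theta = css[rho] / (rho + 1.0)
            return np.maximum(x - theta, 0.0)

        x = np.full(n_sites, budget / n_sites)        # start from a uniform allocation
        for k in range(1, 5001):
            threat = sample_threat()                  # one sampled scenario per iteration
            grad = -loss * threat * np.exp(-x)        # stochastic (quasi-)gradient of damage
            x = project_to_budget(x - (1.0 / np.sqrt(k)) * grad, budget)

        print("allocation:", np.round(x, 2))
        print("estimated expected damage:",
              np.mean([damage(x, sample_threat()) for _ in range(2000)]))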

    Risk, Security and Robust Solutions

    The aim of this paper is to develop a decision-theoretic approach to the security management of uncertain multi-agent systems. Security is defined as the ability to deal with intentional and unintentional threats generated by agents. The main concern of the paper is the protection of public goods from these threats, allowing explicit treatment of inherent uncertainties and the design of robust security-management solutions. The paper shows that robust solutions can be properly designed with new stochastic optimization tools applicable to multicriteria problems with uncertain probability distributions and multivariate extreme events.
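
    One concrete way to capture robustness against multivariate extreme events is to evaluate decisions by a quantile-based criterion such as Conditional Value-at-Risk (CVaR, the expected shortfall beyond a quantile). The sketch below is an illustrative assumption rather than the paper's method; it simply estimates CVaR from simulated losses and contrasts it with the mean.

        # Illustrative sketch: estimating Conditional Value-at-Risk (expected shortfall),
        # a typical quantile-based robustness criterion for extreme events.
        import numpy as np

        def cvar(losses, alpha=0.95):
            # Average loss in the worst (1 - alpha) tail of the distribution.
            var = np.quantile(losses, alpha)          # Value-at-Risk at level alpha
            return float(losses[losses >= var].mean())

        rng = np.random.default_rng(1)
        # Assumed heavy-tailed loss model standing in for multivariate extreme events.
        losses = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

        print("mean loss:", round(float(losses.mean()), 2))
        print("95% CVaR :", round(cvar(losses, 0.95), 2))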

    Operational Decision Making under Uncertainty: Inferential, Sequential, and Adversarial Approaches

    Modern security threats are characterized by a stochastic, dynamic, partially observable, and ambiguous operational environment. This dissertation addresses such complex security threats using operations research techniques for decision making under uncertainty in operations planning, analysis, and assessment. First, this research develops a new method for robust queue inference with partially observable, stochastic arrival and departure times, motivated by cybersecurity and terrorism applications. In the dynamic setting, this work develops a new variant of Markov decision processes and an algorithm for robust information collection in dynamic, partially observable, and ambiguous environments, with an application to a cybersecurity detection problem. In the adversarial setting, this work presents a new application of counterfactual regret minimization and robust optimization to a multi-domain cyber and air defense problem in a partially observable environment.
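
    In a one-shot zero-sum setting, counterfactual regret minimization reduces to regret matching, which conveys the core idea of the adversarial approach mentioned above. The sketch below is an illustrative toy example with an assumed defender/attacker payoff matrix, not the dissertation's multi-domain model.

        # Illustrative sketch: regret matching (the building block of counterfactual
        # regret minimization) on a small zero-sum defender-vs-attacker matrix game.
        import numpy as np

        # Assumed payoff matrix: defender's loss for (defender action, attacker action).
        LOSS = np.array([[0.0, 4.0, 2.0],
                         [3.0, 0.0, 3.0],
                         [2.0, 4.0, 0.0]])

        def regret_matching(regrets):
            positive = np.maximum(regrets, 0.0)
            total = positive.sum()
            if total > 0:
                return positive / total
            return np.full(len(regrets), 1.0 / len(regrets))

        def solve(iterations=20_000):
            n_d, n_a = LOSS.shape
            regret_d, regret_a = np.zeros(n_d), np.zeros(n_a)
            strat_sum_d, strat_sum_a = np.zeros(n_d), np.zeros(n_a)
            for _ in range(iterations):
                sigma_d, sigma_a = regret_matching(regret_d), regret_matching(regret_a)
                strat_sum_d += sigma_d
                strat_sum_a += sigma_a
                loss_d = LOSS @ sigma_a               # defender's loss per action (minimizes)
                gain_a = sigma_d @ LOSS               # attacker's gain per action (maximizes)
                regret_d += loss_d.dot(sigma_d) - loss_d   # regret for not playing each action
                regret_a += gain_a - gain_a.dot(sigma_a)
            # Average strategies converge to an equilibrium of the zero-sum game.
            return strat_sum_d / iterations, strat_sum_a / iterations

        d, a = solve()
        print("defender strategy:", np.round(d, 3))
        print("attacker strategy:", np.round(a, 3))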

    Contributions to the Security of Machine Learning (Contribuciones a la Seguridad del Aprendizaje Automático)

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Matemáticas, defended on 05-11-2020. Machine learning (ML) applications have experienced unprecedented growth over the last two decades. However, the ever-increasing adoption of ML methodologies has revealed important security issues. Among these, vulnerabilities to adversarial examples, data instances targeted at fooling ML algorithms, are especially important. Examples abound: it is relatively easy to fool a spam detector simply by misspelling spam words; obfuscation of malware code can make it seem legitimate; and simply adding stickers to a stop sign could make an autonomous vehicle classify it as a merge sign. The consequences could be catastrophic. ML is designed to work in stationary and benign environments; however, in certain scenarios, the presence of adversaries that actively manipulate input data to fool ML systems and attain benefits breaks such stationarity requirements. Training and operation conditions are no longer identical. This creates a whole new class of security vulnerabilities that ML systems may face and a new desirable property: adversarial robustness. If we are to trust operations based on ML outputs, it becomes essential that learning systems are robust to such adversarial manipulations...
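
    Adversarial examples of the kind described above are commonly generated by small gradient-based perturbations. The snippet below illustrates the general idea with a fast-gradient-sign-style perturbation of a toy logistic-regression classifier; it is a standard technique shown for illustration, not a method from the thesis, and the weights and perturbation budget are assumptions.

        # Illustrative sketch (standard FGSM-style attack, not from the thesis):
        # perturbing an input in the gradient-sign direction to flip a classifier.
        import numpy as np

        rng = np.random.default_rng(2)
        w = rng.normal(size=20)                       # assumed weights of a trained logistic model
        b = 0.1

        def predict_proba(x):
            return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1 | x)

        x = rng.normal(size=20)                       # a clean input
        y = 1 if predict_proba(x) > 0.5 else 0        # its current predicted label

        # Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
        grad_x = (predict_proba(x) - y) * w
        epsilon = 0.25                                # assumed perturbation budget
        x_adv = x + epsilon * np.sign(grad_x)         # step that increases the loss

        print("clean prediction      :", round(predict_proba(x), 3))
        print("adversarial prediction:", round(predict_proba(x_adv), 3))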

    Adversarial Machine Learning: Bayesian Perspectives

    Adversarial Machine Learning (AML) is emerging as a major field aimed at protecting machine learning (ML) systems against security threats: in certain scenarios there may be adversaries that actively manipulate input data to fool learning systems. This creates a new class of security vulnerabilities that ML systems may face, and a new desirable property, adversarial robustness, which is essential if operations based on ML outputs are to be trusted. Most work in AML is built upon game-theoretic modelling of the conflict between a learning system and an adversary ready to manipulate input data. This assumes that each agent knows their opponent's interests and uncertainty judgments, facilitating inferences based on Nash equilibria. However, such a common-knowledge assumption is not realistic in the security scenarios typical of AML. After reviewing such game-theoretic approaches, we discuss the benefits that Bayesian perspectives provide when defending ML-based systems. We demonstrate how the Bayesian approach allows us to explicitly model our uncertainty about the opponent's beliefs and interests, relaxing these unrealistic assumptions and providing more robust inferences. We illustrate this approach in supervised learning settings and identify relevant future research problems.
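
    The Bayesian alternative to common-knowledge Nash equilibria can be sketched in adversarial-risk-analysis style: the defender places a prior over the attacker's unknown utilities, simulates the attacker's best response under each draw, and then chooses the action with the best expected utility. The code below is a minimal illustrative sketch under assumed utilities and priors, not the paper's model.

        # Illustrative sketch of a Bayesian (adversarial-risk-analysis style) defense:
        # average over uncertainty about the attacker's utilities instead of assuming
        # they are common knowledge. All utilities and priors are assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        defender_actions = ["no filter", "light filter", "strict filter"]

        # Defender's utility for each (defender action, attacker action) pair (assumed).
        U_D = np.array([[ 1.0, -2.0, -5.0],
                        [ 0.5, -0.5, -2.0],
                        [-0.5, -0.5, -1.0]])

        def sample_attacker_utility():
            # Prior over the attacker's utilities: unknown gain scale and effort cost.
            scale = rng.gamma(shape=2.0, scale=1.0)
            cost = rng.uniform(0.0, 1.0)
            base = np.array([[0.0, 2.0, 4.0],
                             [0.0, 1.0, 2.0],
                             [0.0, 0.5, 1.0]])
            return scale * base - cost * np.array([0.0, 1.0, 2.0])

        n_samples = 5000
        expected_u = np.zeros(len(defender_actions))
        for _ in range(n_samples):
            U_A = sample_attacker_utility()
            # Attacker best-responds to each defender action under this utility draw.
            best_response = U_A.argmax(axis=1)
            expected_u += U_D[np.arange(len(defender_actions)), best_response]
        expected_u /= n_samples

        print("expected defender utilities:", np.round(expected_u, 3))
        print("recommended action:", defender_actions[int(expected_u.argmax())])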

    Risk analysis beyond vulnerability and resilience - characterizing the defensibility of critical systems

    A common problem in risk analysis is to characterize the overall security of a system of valuable assets (e.g., government buildings or communication hubs) and to suggest measures to mitigate any hazards or security threats. Currently, analysts typically rely on a combination of indices, such as resilience, robustness, redundancy, security, and vulnerability. However, these indices are not by themselves sufficient as a guide to action; for example, while it is possible to develop policies to decrease vulnerability, such policies may not always be cost-effective. Motivated by this gap, we propose a new index, defensibility. A system is considered defensible to the extent that a modest investment can significantly reduce the damage from an attack or disruption. To compare systems whose performance is not readily commensurable (e.g., the electrical grid vs. the water-distribution network, both of which are critical but provide distinct types of services), we define defensibility as a dimensionless index. After defining defensibility quantitatively, we illustrate how the defensibility of a system depends on factors such as the defender and attacker asset valuations, the nature of the threat (whether intelligent and adaptive, or random), and the levels of attack and defense strength, and we provide analytical results that support the observations arising from these illustrations. Overall, we argue that the defensibility of a system is an important dimension to consider when evaluating potential defensive investments, and that it can be applied in a variety of different contexts.
    Comment: 36 pages. Keywords: Risk Analysis, Defensibility, Vulnerability, Resilience, Counter-terrorism
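
    The abstract does not give the precise formula, but a dimensionless defensibility index can be illustrated along the lines described: compare the expected damage before and after a modest defensive investment, normalized by the undefended damage. The sketch below uses an assumed damage model and an assumed index definition, purely for illustration, not the paper's formula.

        # Illustrative sketch of a dimensionless "defensibility" index. The damage
        # model and the exact index definition are assumptions for illustration only.
        import numpy as np

        asset_value = np.array([100.0, 60.0, 30.0])   # assumed asset values
        attack_prob = np.array([0.5, 0.3, 0.2])       # assumed attack likelihood per asset

        def expected_damage(defense):
            # Assumed model: defense d_i scales the damage at asset i by 1 / (1 + d_i).
            return float(np.sum(attack_prob * asset_value / (1.0 + defense)))

        def defensibility(budget):
            undefended = expected_damage(np.zeros_like(asset_value))
            # Modest investment, here spread proportionally to expected damage per asset.
            weights = attack_prob * asset_value
            defended = expected_damage(budget * weights / weights.sum())
            return (undefended - defended) / undefended   # dimensionless, in [0, 1)

        for budget in (1.0, 5.0, 20.0):
            print(f"budget {budget:>5}: defensibility = {defensibility(budget):.2f}")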

    Strategic Learning for Active, Adaptive, and Autonomous Cyber Defense

    The increasing incidence of advanced attacks calls for a new defense paradigm that is active, autonomous, and adaptive, referred to as the '3A' defense paradigm. This chapter introduces three defense schemes that actively interact with attackers to increase the attack cost and gather threat information: defensive deception for detection and counter-deception, feedback-driven Moving Target Defense (MTD), and adaptive honeypot engagement. Owing to cyber deception, external noise, and the absence of knowledge about the other players' behaviors and goals, these schemes face three progressively stricter levels of information restriction: parameter uncertainty, payoff uncertainty, and environmental uncertainty. To estimate the unknowns and reduce uncertainty, we adopt three different strategic learning schemes matched to the associated information restrictions. All three learning schemes share the same feedback structure of sensing, estimation, and action, so that the most rewarding policies are reinforced and converge to the optimal ones in an autonomous and adaptive fashion. This work aims to shed light on proactive defense strategies, lay a solid foundation for strategic learning under incomplete information, and quantify the tradeoff between security and cost.
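
    The shared sensing-estimation-action feedback loop can be illustrated with a minimal reinforcement-learning sketch: a defender observes the attacker's state, estimates action values, and reinforces the defense actions that earn higher reward. This is an illustrative Q-learning sketch under an assumed toy environment, not one of the chapter's three schemes.

        # Illustrative sketch: a sensing-estimation-action feedback loop, realized
        # here as simple Q-learning in an assumed toy attacker/defender environment.
        import numpy as np

        rng = np.random.default_rng(4)
        states = ["idle", "probing", "attacking"]     # assumed attacker states
        actions = ["monitor", "deceive", "block"]     # assumed defender actions

        # Assumed rewards for (state, action): blocking an attack pays off,
        # blocking idle traffic is costly.
        REWARD = np.array([[ 0.0, -0.5, -2.0],
                           [-1.0,  1.0,  0.0],
                           [-3.0,  0.5,  2.0]])

        def step(state, action):
            # Sensing: the next attacker state, with dynamics influenced by the defense.
            if action == 2:                           # block tends to reset the attacker
                probs = [0.8, 0.15, 0.05]
            elif action == 1:                         # deception slows escalation
                probs = [0.4, 0.5, 0.1]
            else:                                     # monitoring lets the attacker escalate
                probs = [0.2, 0.4, 0.4]
            return rng.choice(3, p=probs)

        Q = np.zeros((3, 3))
        alpha, gamma, epsilon = 0.1, 0.95, 0.1
        state = 0
        for _ in range(50_000):
            action = rng.integers(3) if rng.random() < epsilon else int(Q[state].argmax())
            reward = REWARD[state, action]
            next_state = step(state, action)
            # Estimation + reinforcement: temporal-difference update of the action values.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

        for s, name in enumerate(states):
            print(f"when attacker is {name:9s} -> defend with {actions[int(Q[s].argmax())]}")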