
    Markov modeling of moving target defense games

    We introduce a Markov-model-based framework for Moving Target Defense (MTD) analysis. The framework allows modeling of a broad range of MTD strategies, provides general theorems relating the probability that an adversary defeats an MTD strategy to the time and cost the adversary expends, and shows how a multi-level composition of MTD strategies can be analyzed by a straightforward combination of the analyses of the individual strategies. Within the proposed framework we define the concept of security capacity, which measures the strength or effectiveness of an MTD strategy; the security capacity depends on MTD-specific parameters and on more general system parameters. We apply our framework to two concrete MTD strategies.
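    To make the flavor of such an analysis concrete, the sketch below builds an absorbing Markov chain in which an attacker advances through a few attack stages while the defender's MTD move periodically resets that progress, and computes the probability of compromise within a time budget. This is an illustrative toy, not the paper's model; the stage count k, advance probability p, and reset probability r are assumptions.

```python
# Illustrative sketch only (assumed parameters, not the paper's model):
# an attacker advances through k stages toward compromise, while the MTD
# strategy resets progress with probability r at each time step.
import numpy as np

def success_probability(k, p, r, t):
    """P(attacker reaches the absorbing 'compromised' state within t steps).

    k -- number of intermediate attack stages (assumed)
    p -- per-step probability the attacker advances one stage (assumed)
    r -- per-step probability the MTD move resets the attacker (assumed)
    t -- number of time steps
    """
    n = k + 2                                    # stages 0..k plus absorbing state
    P = np.zeros((n, n))
    for s in range(k + 1):
        P[s, 0] += r                             # MTD reshuffle: back to square one
        P[s, min(s + 1, k + 1)] += (1 - r) * p   # attacker advances one stage
        P[s, s] += (1 - r) * (1 - p)             # no progress this step
    P[k + 1, k + 1] = 1.0                        # compromise is absorbing
    dist = np.zeros(n)
    dist[0] = 1.0                                # attacker starts at stage 0
    for _ in range(t):
        dist = dist @ P                          # evolve the state distribution
    return dist[-1]

# Example: 3 stages, 30% advance chance, reshuffle every ~10 steps on average.
print(success_probability(k=3, p=0.3, r=0.1, t=100))
```

    Increasing r (more frequent MTD moves) drives the success probability down for a fixed time budget, which is the kind of time/cost trade-off the framework's theorems formalize.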

    Agent-Based Simulations of Blockchain protocols illustrated via Kadena's Chainweb

    While many distributed consensus protocols provide robust liveness and consistency guarantees in the presence of malicious actors, quantitative estimates of how economic incentives affect security are few and far between. In this paper, we describe a system for simulating how adversarial agents, both economically rational and Byzantine, interact with a blockchain protocol. This system provides statistical estimates for the economic difficulty of an attack and for how the presence of certain actors influences protocol-level statistics, such as the expected time to regain liveness. The simulation system is influenced by the design of algorithmic trading and reinforcement learning systems that use explicit modeling of an agent's reward mechanism to evaluate and optimize a fully autonomous agent. We implement and apply this simulation framework to Kadena's Chainweb, a parallelized Proof-of-Work system in which miner incentive compliance affects security and censorship resistance in complex ways. We provide the first formal description of Chainweb in the literature and use it to motivate our simulation design. Our simulation results include a phase transition in block height growth rate as a function of shard connectivity, and empirical evidence that censorship in Chainweb is too costly for rational miners to engage in. We conclude with an outlook on how simulation can guide and optimize protocol development in a variety of contexts, including Proof-of-Stake parameter optimization and peer-to-peer networking design.
    Comment: 10 pages, 7 figures, accepted to the IEEE S&B 2019 conference
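    As a rough illustration of the agent-based style the abstract describes (not the authors' simulator), the sketch below Monte-Carlo-estimates the per-round reward a rational miner forfeits by censoring a fee-paying transaction. The hash-power shares, block reward, and fee are assumed parameters.

```python
# Illustrative sketch only: rational miners choose whether to censor a
# target transaction, forgoing its fee; we estimate the economic cost of
# censorship by simulation. All parameters below are assumptions.
import random

BLOCK_REWARD = 1.0   # protocol subsidy per block (assumed units)
TARGET_FEE = 0.05    # fee attached to the transaction being censored (assumed)

def simulate(hash_shares, censoring, rounds=100_000, seed=0):
    """Return each miner's average per-round reward.

    hash_shares -- each miner's fraction of total hash power
    censoring   -- set of miner indices that refuse the target transaction
    """
    rng = random.Random(seed)
    rewards = [0.0] * len(hash_shares)
    for _ in range(rounds):
        # The round winner is drawn proportionally to hash power.
        winner = rng.choices(range(len(hash_shares)), weights=hash_shares)[0]
        rewards[winner] += BLOCK_REWARD
        if winner not in censoring:
            rewards[winner] += TARGET_FEE   # censoring miners leave the fee behind
    return [r / rounds for r in rewards]

honest = simulate([0.4, 0.3, 0.3], censoring=set())
censor = simulate([0.4, 0.3, 0.3], censoring={0})
print("miner 0 reward, honest vs censoring:", honest[0], censor[0])
```

    In this toy, censorship strictly reduces a rational miner's expected reward, which is the kind of incentive statistic the simulation framework is designed to estimate at protocol scale.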

    Privacy Risks of Securing Machine Learning Models against Adversarial Examples

    The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, one major limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether defense methods in one domain have any unexpected impact on the other. In this paper, we take a step toward resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record was part of a model's training set; the accuracy of such attacks reflects how much information a training algorithm leaks about individual members of the training set. Defenses against adversarial examples shift the model's decision boundaries so that predictions remain unchanged in a small neighborhood around each input. However, this objective is optimized on the training data, so individual training records exert a significant influence on robust models, which makes those models more vulnerable to inference attacks. To perform the membership inference attacks, we leverage existing inference methods that exploit model predictions. We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that, compared with natural (undefended) training, adversarial defense methods can indeed increase the target model's risk against membership inference attacks.
    Comment: ACM CCS 2019; code is available at https://github.com/inspire-group/privacy-vs-robustness
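    For context, the sketch below shows the simplest form of a prediction-based membership inference attack, a confidence-thresholding heuristic rather than the paper's specific methods: records on which the target model is highly confident about the true label are guessed to be training members. The threshold and the toy inputs are assumptions.

```python
# Illustrative sketch only: confidence-threshold membership inference.
# A record is guessed to be a training member when the model's softmax
# confidence on the true label exceeds an assumed threshold.
import numpy as np

def confidence_attack(probs, labels, threshold=0.9):
    """Guess membership from the target model's confidence on the true label.

    probs     -- (n, num_classes) softmax outputs of the target model
    labels    -- (n,) true class labels
    threshold -- confidence above which we guess 'member' (assumed)
    """
    conf_on_true = probs[np.arange(len(labels)), labels]
    return conf_on_true >= threshold   # True => guessed training member

# Toy usage with random stand-in predictions:
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=5)   # 5 fake softmax vectors
labels = rng.integers(0, 10, size=5)         # 5 fake true labels
print(confidence_attack(probs, labels))
```

    Robust training tends to fit a neighborhood around each training point rather than the point alone, which widens the confidence gap between members and non-members that attacks of this kind exploit.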

    Advanced Cloud Privacy Threat Modeling

    Privacy preservation for sensitive data has become a challenging issue in cloud computing. Threat modeling, as a part of requirements engineering in secure software development, provides a structured approach for identifying attacks and proposing countermeasures against the exploitation of vulnerabilities in a system. This paper describes an extension of the Cloud Privacy Threat Modeling (CPTM) methodology for privacy threat modeling when processing sensitive data in cloud computing environments. It applies Method Engineering to specify the characteristics of a cloud privacy threat modeling methodology, the steps of the proposed methodology, and the corresponding products. We believe that the extended methodology facilitates a privacy-preserving cloud software development approach from requirements engineering through design.