45 research outputs found

    Multi-Layer Cyber-Physical Security and Resilience for Smart Grid

    The smart grid is a large-scale complex system that integrates communication technologies with the physical-layer operation of energy systems. Security and resilience mechanisms by design are important to provide guaranteed operations for the system. This chapter provides a layered perspective of smart grid security and discusses game and decision theory as a tool to model the interactions among system components and the interaction between attackers and the system. We discuss game-theoretic applications and challenges in the design of cross-layer robust and resilient controllers, secure network routing protocols at the data communication and networking layers, and the challenges of information security at the management layer of the grid. The chapter also discusses future directions for using game-theoretic tools to address multi-layer security issues in the smart grid.
    Comment: 16 pages
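    As a rough illustration of the kind of attacker-defender modelling the chapter surveys, the sketch below (not from the chapter; the action sets and payoff numbers are invented) computes a defender's equilibrium mixed strategy for a small zero-sum matrix game by linear programming.

```python
# Minimal sketch: a defender-vs-attacker zero-sum matrix game solved by
# linear programming. All payoff numbers are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Rows: defender actions (harden control layer, harden network layer).
# Cols: attacker actions (attack control layer, attack network layer).
# Entries: defender's payoff (negative loss); the attacker gets the negative.
A = np.array([[-1.0, -4.0],
              [-5.0, -2.0]])

m, n = A.shape
# Variables: defender mixed strategy x (m entries) and game value v.
# Maximize v  <=>  minimize -v, subject to  v <= x^T A[:, j]  for every column j.
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-A.T, np.ones((n, 1))])             # v - x^T A[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]   # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("defender mixed strategy:", np.round(x, 3))
print("game value (defender's expected payoff):", round(v, 3))
```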

    FlipDyn with Control: Resource Takeover Games with Dynamics

    We present FlipDyn, a dynamic game in which two opponents (a defender and an adversary) choose strategies to optimally take over a resource that involves a dynamical system. At any time instant, each player can take over the resource and thereby control the dynamical system after incurring a state-dependent and a control-dependent cost. The resulting model becomes a hybrid dynamical system where the discrete state (FlipDyn state) determines which player is in control of the resource. Our objective is to compute the Nash equilibria of this dynamic zero-sum game. Our contributions are four-fold. First, for any non-negative costs, we present analytical expressions for the saddle-point value of the FlipDyn game, along with the corresponding Nash equilibrium (NE) takeover strategies. Second, for continuous-state, linear dynamical systems with quadratic costs, we establish sufficient conditions under which the game admits an NE in the space of linear state-feedback policies. Third, for scalar dynamical systems with quadratic costs, we derive the NE takeover strategies and saddle-point values independent of the continuous state of the dynamical system. Fourth and finally, for higher-dimensional linear dynamical systems with quadratic costs, we derive approximate NE takeover strategies and control policies which enable the computation of bounds on the value functions of the game in each takeover state. We illustrate our findings through a numerical study involving the control of a linear dynamical system in the presence of an adversary.
    Comment: 17 pages, 2 figures. Under review at IEEE TA
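    The following toy simulation is our own reading of the setup, not the authors' code: a scalar linear system is steered by whichever player currently holds the resource, with invented feedback gains, takeover costs, and a placeholder takeover rule.

```python
# Toy FlipDyn-style hybrid simulation (illustrative only; the dynamics,
# costs, gains and takeover rule below are invented placeholders).
import numpy as np

a, b = 1.05, 1.0               # scalar linear dynamics: x_next = a*x + b*u
q, r = 1.0, 0.1                # quadratic state and control stage costs
flip_cost = {0: 2.0, 1: 2.0}   # takeover cost for defender (0) and adversary (1)

def control(holder, x):
    # Placeholder feedback: the defender stabilizes, the adversary destabilizes.
    return -0.8 * x if holder == 0 else 0.5 * x

rng = np.random.default_rng(0)
x, holder, defender_cost = 1.0, 0, 0.0
for t in range(50):
    challenger = 1 - holder
    # Placeholder takeover rule: the out-of-control player flips when the
    # running state cost would exceed its takeover cost.
    if q * x**2 > flip_cost[challenger]:
        holder = challenger
        if challenger == 0:
            defender_cost += flip_cost[0]   # defender pays to take the resource back
    u = control(holder, x)
    defender_cost += q * x**2 + r * u**2    # stage cost accrued by the defender
    x = a * x + b * u + 0.05 * rng.standard_normal()

print(f"final state {x: .3f}, holder {holder}, defender running cost {defender_cost:.2f}")
```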

    Mean-field-game model for Botnet defense in Cyber-security

    We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision-making process of the customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under the additional natural assumption that the execution time of the customers' decisions (say, switching the defence system on or off) is much faster than the infection rates.
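    A toy numerical sketch in this spirit (all rates and costs below are invented, and the myopic best response is a stand-in for the paper's exact stationary solution): the infected fraction of the population depends on how many owners defend, and each owner's decision depends on the infected fraction, so we iterate to a fixed point.

```python
# Toy stationary mean-field botnet game (parameters invented for illustration).
import numpy as np

beta, recovery = 0.6, 0.2      # base infection and recovery rates
defence_factor = 0.2           # a defence scales the infection rate down
defence_cost, infection_cost = 0.15, 1.0

def stationary_infected(defended_frac):
    # SIS-style steady state using the population-averaged infection rate.
    eff_beta = beta * (defended_frac * defence_factor + (1 - defended_frac))
    return max(0.0, 1.0 - recovery / eff_beta) if eff_beta > 0 else 0.0

def best_response(infected_frac):
    # An owner defends if the saving in expected infection cost exceeds the
    # defence cost (a myopic comparison, standing in for the full game).
    risk_plain = infection_cost * beta * infected_frac / (beta * infected_frac + recovery)
    risk_defended = infection_cost * beta * defence_factor * infected_frac / (
        beta * defence_factor * infected_frac + recovery)
    return 1.0 if risk_plain - risk_defended > defence_cost else 0.0

defended = 0.5
for _ in range(200):
    infected = stationary_infected(defended)
    defended = 0.9 * defended + 0.1 * best_response(infected)   # damped update

print(f"stationary infected fraction ~ {infected:.3f}, defended fraction ~ {defended:.3f}")
```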

    Multi-rate Threshold FlipThem

    A standard method to protect data and secrets is to apply threshold cryptography in the form of secret sharing. This is motivated by the acceptance that adversaries will compromise systems at some point; hence, using threshold cryptography provides a defence in depth. The existence of such powerful adversaries has also motivated the introduction of game-theoretic techniques into the analysis of systems, e.g. via the FlipIt game of van Dijk et al. This work further analyses the case of FlipIt when used with multiple resources, dubbed FlipThem in prior papers. We examine two key extensions of the FlipThem game to more realistic scenarios: namely, separate costs and strategies on each resource, and a learning approach obtained using so-called fictitious play, in which players do not know their opponent's costs or assume rationality.
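    A generic fictitious-play sketch (not the paper's model): each player repeatedly best-responds to the empirical frequency of the other's past moves, needing no knowledge of the opponent's costs. The payoff matrices are invented stand-ins for per-resource flip costs and benefits.

```python
# Generic fictitious play on an invented 2x2 bimatrix game (illustration only).
import numpy as np

# Row player = defender, column player = attacker; entry (i, j) is that
# player's payoff when the defender plays i and the attacker plays j.
defender_payoff = np.array([[ 0.0, -2.0],
                            [-1.0,  1.0]])
attacker_payoff = np.array([[ 0.0,  2.0],
                            [ 1.0, -1.0]])

def_counts = np.ones(2)   # fictitious beliefs start uniform
atk_counts = np.ones(2)

for _ in range(5000):
    atk_freq = atk_counts / atk_counts.sum()
    def_freq = def_counts / def_counts.sum()
    # Each player best-responds to the opponent's empirical mixture; neither
    # needs the other's payoff matrix, only the observed history of play.
    def_move = int(np.argmax(defender_payoff @ atk_freq))
    atk_move = int(np.argmax(attacker_payoff.T @ def_freq))
    def_counts[def_move] += 1
    atk_counts[atk_move] += 1

print("defender empirical mixture:", np.round(def_counts / def_counts.sum(), 3))
print("attacker empirical mixture:", np.round(atk_counts / atk_counts.sum(), 3))
```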

    Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing

    Cloud computing has become an irreversible trend. With it comes the pressing need for verifiability, to assure the client of the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have high overhead; if deployed in the cloud, they would render cloud computing more expensive than its on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart-contract-based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by cross-checking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, plus a small transaction fee to use the smart contracts. We also conducted a feasibility study that involved implementing the contracts in Solidity and running them on the official Ethereum network.
    Comment: Published in ACM CCS 2017; this is the full version with all appendices
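    A toy best-response table capturing the intuition (the reward and penalty figures are invented and are not the paper's contract terms): once the contract pays a cloud more for betraying a collusion agreement than for honouring it, betrayal dominates and collusion unravels.

```python
# Toy payoff sketch for the two clouds (numbers invented for illustration).
# Each cloud can either honour a collusion agreement (report the same wrong
# result) or betray it (report the correct result and claim the contract's
# whistle-blower reward).
collusion_gain = 3.0      # gain per cloud if collusion goes undetected
betrayal_reward = 5.0     # contract reward for the cloud that betrays
caught_penalty = 4.0      # deposit forfeited by a cloud exposed as cheating

actions = ("collude", "betray")

def payoff(mine, theirs):
    if mine == "collude" and theirs == "collude":
        return collusion_gain
    if mine == "collude" and theirs == "betray":
        return -caught_penalty            # exposed by the other cloud
    if mine == "betray" and theirs == "collude":
        return betrayal_reward
    return 0.0                            # both betray: honest result, no bonus

for theirs in actions:
    best = max(actions, key=lambda mine: payoff(mine, theirs))
    print(f"if the other cloud plays {theirs:8s} -> best response is {best}")
```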

    A Logic for the Compliance Budget

    Security breaches often arise as a result of users' failure to comply with security policies. Such failures to comply may simply be innocent mistakes. However, there is evidence that, in some circumstances, users choose not to comply because they perceive that the security benefit of compliance is outweighed by the cost, that is, the impact of compliance on their ability to complete their operational tasks. That is, they perceive security compliance as hindering their work. The 'compliance budget' is a concept in information security that describes how the users of an organization's systems determine the extent to which they comply with the specified security policy. The purpose of this paper is to initiate a qualitative logical analysis of, and so provide reasoning tools for, this important concept in security economics, for which quantitative analysis is difficult to establish. We set up a simple temporal logic of preferences, with a semantics given in terms of histories and sets of preferences, and explain how to use it to model and reason about the compliance budget. The key ingredients are preference update, to account for behavioural change in response to policy change, and an ability to handle uncertainty, to account for the lack of quantitative measures.
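    A small illustrative script (our own, not the paper's logic) of the underlying idea: a user spends effort complying with policy-mandated steps over a history of tasks, and once the perceived cumulative cost exhausts the budget, their preference flips to working around the policy.

```python
# Toy compliance-budget illustration (task names and costs are invented).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compliance_cost: float   # perceived effort of following the policy

budget = 5.0
history = [Task("password rotation", 1.5),
           Task("VPN login", 1.0),
           Task("encrypted USB only", 2.0),
           Task("report phishing email", 1.5)]

spent = 0.0
for task in history:
    complies = spent + task.compliance_cost <= budget
    if complies:
        spent += task.compliance_cost
    print(f"{task.name:25s} -> {'comply' if complies else 'work around policy'}")
```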

    Artificial intelligence for social impact: Learning and planning in the data-to-deployment pipeline

    With the maturing of artificial intelligence (AI) and multiagent systems research, we have a tremendous opportunity to direct these advances toward addressing complex societal problems. In pursuit of this goal of AI for social impact, we as AI researchers must go beyond improvements in computational methodology; it is important to step out in the field to demonstrate social impact. To this end, we focus on the problems of public safety and security, wildlife conservation, and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. We present case studies from our deployments around the world as well as lessons learned that we hope are of use to researchers who are interested in AI for social impact. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.