
    Probabilistic Opacity for Markov Decision Processes

    Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where nondeterministic choice is combined with probabilistic transitions, and we study related decidability problems under partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question of whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.
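
    As a minimal illustration of the disclosure measure described above (not the construction from the paper): once a scheduler is fixed, an MDP induces a labelled Markov chain, and for finite runs the measure of the disclosing runs can be computed by grouping runs according to the observer's view. The chain, labels, secret predicate, and horizon in the sketch below are invented for illustration.

```python
from collections import defaultdict

# Toy labelled Markov chain (what an MDP induces once a scheduler is fixed).
# The chain, labels, secret predicate, and horizon below are invented.
P = {0: [(0.5, 1), (0.3, 2), (0.2, 3)],   # state -> list of (probability, successor)
     1: [(1.0, 1)],
     2: [(1.0, 2)],
     3: [(1.0, 3)]}
label = {0: 'a', 1: 'b', 2: 'b', 3: 'c'}  # the observer sees only the labels

def is_secret(run):
    """Invented secret predicate on finite runs: the run ends in state 1 or 3."""
    return run[-1] in (1, 3)

def runs(state, steps):
    """Enumerate finite runs of the given length together with their probabilities."""
    if steps == 0:
        yield (state,), 1.0
        return
    for p, succ in P[state]:
        for tail, q in runs(succ, steps - 1):
            yield (state,) + tail, p * q

# Group runs by what the observer sees; a run discloses the secret iff every
# run producing the same observation satisfies the secret predicate.
classes = defaultdict(list)
for run, prob in runs(0, steps=2):
    obs = ''.join(label[s] for s in run)
    classes[obs].append((run, prob))

disclosure = sum(prob
                 for cls in classes.values() if all(is_secret(r) for r, _ in cls)
                 for _, prob in cls)
print(f"measure of the runs disclosing the secret: {disclosure}")   # 0.2 for this toy chain
```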

    Asymptotic information leakage under one-try attacks

    We study the asymptotic behaviour of (a) information leakage and (b) the adversary's error probability in information hiding systems modelled as noisy channels. Specifically, we assume the attacker can make a single guess after observing n independent executions of the system, throughout which the secret information is kept fixed. We show that the asymptotic behaviour of quantities (a) and (b) can be determined in a simple way from the channel matrix. Moreover, simple and tight bounds on them as functions of n show that the convergence is exponential. We also discuss feasible methods to evaluate the rate of convergence. Our results cover both the Bayesian case, where a prior probability distribution on the secrets is assumed known to the attacker, and the maximum-likelihood case, where the attacker does not know such a distribution. In the Bayesian case, we identify the distributions that maximize the leakage. We consider both the min-entropy setting studied by Smith and the additive form recently proposed by Braun et al., and show that the two forms agree asymptotically. Next, we extend these results to a more sophisticated eavesdropping scenario, where the attacker can perform a (noisy) observation at each state of the computation and the systems are modelled as hidden Markov models.
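
    A minimal sketch of the quantities discussed above, assuming an invented 2x3 channel matrix and a uniform prior (none of these numbers come from the paper): it computes, for the n-fold repetition of the channel, the probability that a one-try MAP guess succeeds, the corresponding error probability, and the min-entropy leakage in Smith's sense, illustrating the convergence as n grows.

```python
from itertools import product
import math

# Invented channel matrix C[x][y] = P(observation y | secret x); each row sums to 1.
C = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3]]
prior = [0.5, 0.5]          # Bayesian case: the attacker knows this prior

def posterior_vulnerability(C, prior, n):
    """Probability that a single (MAP) guess is correct after observing
    n independent executions with the same fixed secret."""
    ys = range(len(C[0]))
    v = 0.0
    for obs in product(ys, repeat=n):                       # all length-n observation sequences
        v += max(prior[x] * math.prod(C[x][y] for y in obs)
                 for x in range(len(C)))
    return v

prior_vulnerability = max(prior)
for n in range(1, 6):
    v = posterior_vulnerability(C, prior, n)
    error = 1.0 - v                                         # adversary's error probability
    leakage = math.log2(v / prior_vulnerability)            # min-entropy leakage (Smith)
    print(f"n={n}:  P(guess correct)={v:.4f}  P(error)={error:.4f}  leakage={leakage:.4f} bits")
```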

    Secure Control of Cyber-Physical Systems

    Cyber-Physical Systems (CPS) are smart, co-engineered, interacting networks of physical and computational components. They encompass a large class of technologies and infrastructures in almost all aspects of life, including, for example, smart grids, autonomous vehicles, the Internet of Things (IoT), advanced medical devices, and water supply systems. The development of CPS aims to improve the capabilities of traditional engineering systems by introducing advanced computational capacity and communications among system entities. On the other hand, the adoption of such technologies introduces threats and exposes the system to cyber-attacks. Given the defining property of CPSs of physically interacting with their environment, malicious parties might be interested in exploiting the physical properties of the system in the form of a cyber-physical attack. In a large class of CPSs, the physical systems are controlled using a feedback control loop. In this thesis, we investigate, from several angles, how CPS control systems can be prone to cyber-physical attacks and how to defend them against such attacks using arguments drawn from control theory. In our first contribution, considering smart grid applications, we address the problem of designing a Denial of Service (DoS)-resilient controller for robustly recovering the system's transient stability. We propose a Model Predictive Control (MPC) controller based on set-theoretic (ST) arguments, capable of dealing with model uncertainties, actuator limitations, and DoS attacks. Unlike traditional MPC solutions, the proposed controller moves most of the required computations into an offline phase. The online phase requires the solution of a quadratic programming problem, which can be efficiently solved in real time. Then, stemming from the same ST-based MPC controller idea, we propose a novel physical watermarking technique for the active detection of replay attacks in CPSs. The proposed strategy exploits the ST-MPC paradigm to design control inputs that, whenever needed, can be safely and continuously applied to the system for an a priori known number of steps. Such a control scheme enables the design of a physically watermarked control signal. We prove that, in the attack-free case, the generators' transient stability is achieved for all admissible watermarking signals and that the closed-loop system enjoys uniformly ultimately bounded stability. In our second contribution, we address the attacker's ability to collect useful information about the control system in the reconnaissance phase of a cyber-physical attack. By using existing system identification tools, an attacker who has access to the control loop can identify the dynamics of the underlying control system. We develop a decoy-based moving target defense mechanism by leveraging an auxiliary set of virtual state-based decoy systems. Simulation results show that the proposed solution degrades the attacker's ability to identify the underlying state-space model of the considered system from the intercepted control inputs and sensor measurements, and it does not impose any penalty on the control performance of the underlying system. Finally, in our third contribution, we introduce a covert channel technique enabling a compromised networked controller to leak information to an eavesdropper who has access to the measurement channel. We show that this can be achieved without establishing any additional explicit communication channels, by properly altering the control logic and exploiting robust reachability arguments. A dual-mode receding horizon MPC strategy is used as an illustrative example to show how such an undetectable covert channel can be established.
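
    The abstract notes that the online phase of the proposed ST-MPC scheme reduces to a quadratic program. The sketch below shows, for a generic linear model with placeholder dynamics, weights, and horizon (none of them taken from the thesis, and with the set-theoretic offline machinery, constraints, and watermarking omitted), how a condensed finite-horizon MPC cost becomes a quadratic program in the stacked input sequence; the unconstrained case is solved in closed form and only the first input is applied at each step.

```python
import numpy as np

# Placeholder discrete-time model x_{k+1} = A x_k + B u_k (not the grid model from the thesis).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.diag([10.0, 1.0])   # state weight
R = np.array([[0.1]])      # input weight
N = 20                     # prediction horizon

nx, nu = B.shape

# Condensed prediction: stacking x_1..x_N gives X = Phi x0 + Gamma U, with U = (u_0, ..., u_{N-1}).
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)

# Cost (dropping the constant term): U^T H U + 2 x0^T F^T U, the QP solved online.
H = Gamma.T @ Qbar @ Gamma + Rbar
F = Gamma.T @ Qbar @ Phi

def mpc_step(x0):
    """One receding-horizon step: solve the (here unconstrained) QP and return
    only the first input.  With input/state constraints, H and F would be
    handed to a QP solver at every step instead of solved in closed form."""
    U = np.linalg.solve(H, -F @ x0)
    return U[:nu]

# Closed-loop simulation from an initial deviation.
x = np.array([1.0, 0.0])
for k in range(30):
    u = mpc_step(x)
    x = A @ x + B @ u
print("state after 30 steps:", x)
```

    Passing the same H and F, together with constraint matrices, to a QP solver at each step is the real-time computation the abstract refers to; everything that depends only on the model and weights is assembled once, offline.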

    Mitigating Access-Driven Timing Channels in Clouds using StopWatch


    Modeling Deception for Cyber Security

    In the era of software-intensive, smart, and connected systems, the growing power and sophistication of cyber attacks poses increasing challenges to software security. The reactive posture of traditional security mechanisms, such as anti-virus and intrusion detection systems, has not been sufficient to combat the wide range of advanced persistent threats that currently jeopardize systems operation. To mitigate these threats, more active defensive approaches are necessary. Such approaches rely on the concept of actively hindering and deceiving attackers. Deceptive techniques allow for additional defense by thwarting attackers' advances through the manipulation of their perceptions. Manipulation is achieved through the use of deceitful responses, feints, misdirection, and other falsehoods in a system. Of course, such deception mechanisms may result in side effects that must be handled. Current methods for planning deception chiefly attempt to bridge military deception to cyber deception, providing only high-level instructions that largely ignore deception as part of the software security development life cycle. Consequently, little practical guidance is provided on how to engineer deception-based techniques for defense. This PhD thesis contributes a systematic approach to specify and design cyber deception requirements, tactics, and strategies. This deception approach consists of (i) a multi-paradigm modeling approach for representing deception requirements, tactics, and strategies, (ii) a reference architecture to support the integration of deception strategies into system operation, and (iii) a method to guide engineers in deception modeling. A tool prototype, a case study, and an experimental evaluation show encouraging results for the application of the approach in practice. Finally, a conceptual coverage mapping was developed to assess the expressivity of the deception modeling language created.