Probabilistic Opacity for Markov Decision Processes
Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is the measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where non-deterministic choice is combined with probabilistic transitions, and we study related decidability problems under partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question of whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.
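The definition underlying these results can be illustrated on a toy example. Below is a minimal sketch (all state names, labels, transition probabilities, and the secret predicate are invented for illustration) that measures the set of disclosing runs of a small labelled Markov chain over a finite horizon: a run discloses the secret when every run producing the same observation satisfies the secret predicate, so the observer is certain the secret holds.

```python
# Toy labelled Markov chain; states, probabilities, labels and the
# secret predicate are all illustrative.
trans = {
    's0': {'s1': 0.5, 's2': 0.3, 's3': 0.2},
    's1': {'s1': 1.0},
    's2': {'s2': 1.0},
    's3': {'s3': 1.0},
}
obs = {'s0': 'a', 's1': 'b', 's2': 'b', 's3': 'c'}  # the observer sees labels only

def runs(state, n):
    """Yield (run, probability) pairs for runs of n steps from `state`."""
    if n == 0:
        yield (state,), 1.0
        return
    for nxt, p in trans[state].items():
        for tail, q in runs(nxt, n - 1):
            yield (state,) + tail, p * q

def view(run):
    return tuple(obs[s] for s in run)

def secret(run):  # secret predicate: the run visits s3
    return 's3' in run

all_runs = list(runs('s0', 2))
by_view = {}
for r, _ in all_runs:
    by_view.setdefault(view(r), []).append(r)

# A run discloses the secret iff it is secret and every run with the same
# observation is secret too (the observer is then certain).
disclosure = sum(p for r, p in all_runs
                 if secret(r) and all(secret(r2) for r2 in by_view[view(r)]))
print(f"probability of disclosure: {disclosure:.3f}")
```

Here runs through s1 and s2 produce the same observation and therefore never disclose the secret; only runs through the uniquely labelled s3 do, giving a disclosure probability of 0.2.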
Asymptotic information leakage under one-try attacks
We study the asymptotic behaviour of (a) information leakage and (b) the adversary's error probability in information hiding systems modelled as noisy channels. Specifically, we assume the attacker can make a single guess after observing n independent executions of the system, throughout which the secret information is kept fixed. We show that the asymptotic behaviour of quantities (a) and (b) can be determined in a simple way from the channel matrix. Moreover, simple and tight bounds on them as functions of n show that the convergence is exponential. We also discuss feasible methods to evaluate the rate of convergence. Our results cover both the Bayesian case, where a prior probability distribution on the secrets is assumed known to the attacker, and the maximum-likelihood case, where the attacker does not know this distribution. In the Bayesian case, we identify the distributions that maximize the leakage. We consider both the min-entropy setting studied by Smith and the additive form recently proposed by Braun et al., and show that the two forms agree asymptotically. Finally, we extend these results to a more sophisticated eavesdropping scenario, where the attacker can perform a (noisy) observation at each state of the computation and the systems are modelled as hidden Markov models.
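The one-try Bayesian setting can be sketched concretely: for a fixed channel matrix, enumerate all length-n observation sequences and charge the attacker the posterior mass of every secret other than the MAP guess. The channel matrix and prior below are made up; the printed errors shrink as n grows, in line with the exponential-convergence result.

```python
from itertools import product

# Illustrative channel matrix C[s][o] = P(o | s) for two secrets and two
# observables, with a uniform prior known to the attacker.
C = [[0.8, 0.2],
     [0.3, 0.7]]
prior = [0.5, 0.5]

def prod(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

def bayes_error(n):
    """Probability that a single MAP guess is wrong after observing n
    independent executions with the secret held fixed."""
    err = 0.0
    for seq in product(range(len(C[0])), repeat=n):
        joint = [prior[s] * prod(C[s][o] for o in seq) for s in range(len(C))]
        err += sum(joint) - max(joint)  # all mass except the MAP guess
    return err

for n in (1, 2, 4, 8):
    print(n, round(bayes_error(n), 4))
```

For this channel the error after a single observation is 0.25 and decreases with each additional independent observation.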
Secure Control of Cyber-Physical Systems
Cyber-Physical Systems (CPS) are smart, co-engineered, interacting networks of physical and computational components. They cover a large class of technologies and infrastructure in almost all aspects of life, including, for example, smart grids, autonomous vehicles, the Internet of Things (IoT), advanced medical devices, and water supply systems. The development of CPS aims to improve the capabilities of traditional engineering systems by introducing advanced computational capacity and communications among system entities. On the other hand, the adoption of such technologies introduces threats and exposes the system to cyber-attacks. Given the defining property of CPSs, namely physical interaction with their environment, malicious parties might be interested in exploiting the physical properties of the system in the form of a cyber-physical attack. In a large class of CPSs, the physical systems are controlled using a feedback control loop. In this thesis, we investigate, from many angles, how CPSs' control systems can be prone to cyber-physical attacks and how to defend them against such attacks using arguments drawn from control theory.
In our first contribution, considering smart grid applications, we address the problem of designing a Denial of Service (DoS)-resilient controller for robustly recovering the system's transient stability. We propose a Model Predictive Control (MPC) controller based on set-theoretic (ST) arguments, which is capable of dealing with model uncertainties, actuator limitations, and DoS attacks. Unlike traditional MPC solutions, the proposed controller can move most of the required computations into an offline phase; the online phase requires only the solution of a quadratic programming problem, which can be solved efficiently in real time. Then, stemming from the same ST-MPC controller idea, we propose a novel physical watermarking technique for the active detection of replay attacks in CPSs. The proposed strategy exploits the ST-MPC paradigm to design control inputs that, whenever needed, can be safely and continuously applied to the system for an a priori known number of steps. Such a control scheme enables the design of a physically watermarked control signal. We prove that, in the attack-free case, the generators' transient stability is achieved for all admissible watermarking signals and that the closed-loop system enjoys uniformly ultimately bounded stability.
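A crude, self-contained sketch of the physical watermarking idea (not the thesis's ST-MPC construction: the scalar plant, gains, and noise levels are invented, and for brevity the controller is assumed to know the state) shows why replay attacks become detectable: a fresh random watermark on the input is correlated with subsequent measurements, and that correlation collapses when the attacker replays prerecorded outputs.

```python
import random

random.seed(0)
a, T = 0.9, 500  # plant pole and window length (illustrative)

def noise(s=0.1):
    return random.gauss(0, s)

def simulate(replay):
    """One watermarking episode. In the replay case the attacker feeds the
    detector outputs recorded earlier, which carry old watermarks and do
    not respond to the fresh ones."""
    x = 0.0
    recorded = []
    # Reconnaissance phase: the attacker records nominal sensor outputs.
    for _ in range(T):
        u = -a * x + random.gauss(0, 1)  # deadbeat input plus watermark
        x = a * x + u + noise()
        recorded.append(x + noise())
    # Detection phase.
    wms, ys = [], []
    for t in range(T):
        wm = random.gauss(0, 1)
        u = -a * x + wm
        x = a * x + u + noise()
        ys.append(recorded[t] if replay else x + noise())
        wms.append(wm)
    # Each watermark shows up in the measurement taken right after it is
    # applied -- but only if the measurements are live.
    return sum(w * y for w, y in zip(wms, ys)) / T

print("live   correlation:", round(simulate(False), 2))
print("replay correlation:", round(simulate(True), 2))
```

Live measurements track the watermark (correlation near 1), while replayed measurements do not (correlation near 0), which is the signal a replay detector thresholds on.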
In our second contribution, we address the attacker's ability to collect useful information about the control system in the reconnaissance phase of a cyber-physical attack. By using existing system identification tools, an attacker who has access to the control loop can identify the dynamics of the underlying control system. We develop a decoy-based moving target defense mechanism by leveraging an auxiliary set of virtual state-based decoy systems. Simulation results show that the provided solution degrades the attacker's ability to identify the underlying state-space model of the considered system from the intercepted control inputs and sensor measurements. It also does not impose any penalty on the control performance of the underlying system.
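To see why decoy traffic frustrates reconnaissance, consider the attacker's core tool: least-squares identification of the loop dynamics from intercepted inputs and measurements. In the sketch below (a scalar system with invented parameters, not the thesis's mechanism), an attacker who eavesdrops on the signals of a virtual decoy system confidently identifies the decoy's dynamics rather than the plant's.

```python
import random

random.seed(1)

def lsq_ar1(xs, us):
    """Least-squares fit of x[t+1] = a*x[t] + b*u[t] (the attacker's sysid),
    solved via the 2x2 normal equations."""
    Sxx = Sxu = Suu = Sxy = Suy = 0.0
    for t in range(len(xs) - 1):
        Sxx += xs[t] * xs[t]; Sxu += xs[t] * us[t]; Suu += us[t] * us[t]
        Sxy += xs[t] * xs[t + 1]; Suy += us[t] * xs[t + 1]
    det = Sxx * Suu - Sxu * Sxu
    return ((Sxy * Suu - Suy * Sxu) / det, (Suy * Sxx - Sxy * Sxu) / det)

def rollout(a, b, n):
    """Simulate x[t+1] = a*x[t] + b*u[t] + w[t] under random excitation."""
    x, xs, us = 0.0, [], []
    for _ in range(n):
        u = random.gauss(0, 1)
        xs.append(x); us.append(u)
        x = a * x + b * u + random.gauss(0, 0.05)
    xs.append(x)
    return xs, us

a_true, a_decoy, b = 0.8, 0.3, 1.0
xs, us = rollout(a_true, b, 2000)    # real closed-loop signals
dxs, dus = rollout(a_decoy, b, 2000) # decoy system's signals on the wire

print("identified from real traffic :", lsq_ar1(xs, us))
print("identified from decoy traffic:", lsq_ar1(dxs, dus))
```

The attacker's estimate is accurate when fed genuine traffic, but converges to the decoy's pole when fed the decoy signals, so the intercepted data no longer reveals the true state-space model.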
Finally, in our third contribution, we introduce a covert channel technique enabling a compromised networked controller to leak information to an eavesdropper who has access to the measurement channel. We show that this can be achieved without establishing any additional explicit communication channels, by properly altering the control logic and exploiting robust reachability arguments. A dual-mode receding-horizon MPC strategy is used as an illustrative example of how such an undetectable covert channel can be established.
Modeling Deception for Cyber Security
In the era of software-intensive, smart, and connected systems, the growing power and sophistication of cyber attacks pose increasing challenges to software security. The reactive posture of traditional security mechanisms, such as anti-virus and intrusion detection systems, has not been sufficient to combat the wide range of advanced persistent threats that currently jeopardize system operation. Mitigating these threats requires more active defensive approaches, which rely on the concept of actively hindering and deceiving attackers. Deceptive techniques provide an additional line of defense by thwarting attackers' advances through the manipulation of their perceptions, achieved through deceitful responses, feints, misdirection, and other falsehoods in a system. Of course, such deception mechanisms may produce side effects that must be handled. Current methods for planning deception chiefly attempt to carry military deception over to cyber deception, providing only high-level instructions that largely ignore deception as part of the software security development life cycle. Consequently, little practical guidance exists on how to engineer deception-based defensive techniques. This PhD thesis contributes a systematic approach to specifying and designing cyber deception requirements, tactics, and strategies. The approach consists of (i) multi-paradigm modeling for representing deception requirements, tactics, and strategies, (ii) a reference architecture to support the integration of deception strategies into system operation, and (iii) a method to guide engineers in deception modeling. A tool prototype, a case study, and an experimental evaluation show encouraging results for the application of the approach in practice. Finally, a conceptual coverage mapping was developed to assess the expressivity of the deception modeling language created.
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, a practice broadly known as Internet censorship. As countries improve their Internet infrastructure, they become able to deploy more advanced censoring techniques; advances in the application of machine learning to network traffic analysis have likewise enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: existing techniques impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call compressive traffic analysis. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called DeepCorr, that outperforms the state of the art by significant margins in correlating network connections. DeepCorr leverages an advanced deep learning architecture to learn a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable adversarial perturbations to the patterns of live network traffic.
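As a point of reference for what DeepCorr learns, the classical statistical approach correlates timing features of flow pairs directly. A minimal sketch (with synthetic, invented traffic) links an entry flow to its jittered exit counterpart via Pearson correlation of inter-packet delays, while an unrelated flow shows no correlation:

```python
import random

random.seed(2)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Inter-packet delays of a flow entering the network (synthetic).
entry = [random.expovariate(10) for _ in range(300)]
# The same flow leaving the network: delays preserved up to small jitter.
exit_same = [d + random.gauss(0, 0.005) for d in entry]
# An unrelated flow with the same traffic statistics.
exit_other = [random.expovariate(10) for _ in range(300)]

print("correlated pair:", round(pearson(entry, exit_same), 2))
print("unrelated pair :", round(pearson(entry, exit_other), 2))
```

Hand-crafted statistics like this degrade quickly under real-network noise and repacketization, which is the gap a learned correlation function is designed to close.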
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools, which allows us to analyze the effect of different parameters and censoring behaviors on the performance of circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
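A minimal instance of such a game-theoretic model is a 2x2 zero-sum game between the circumvention tool (rows) and the censor (columns), solved in closed form for the tool's optimal mixed strategy. The strategy names and payoff numbers below are entirely made up for illustration:

```python
def solve_2x2_zero_sum(A):
    """Optimal mixed strategy and value for the row player of a 2x2
    zero-sum game (assumes no pure-strategy saddle point)."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom              # probability of playing row 0
    value = (a * d - b * c) / denom  # expected payoff at equilibrium
    return p, value

# Illustrative payoffs to the tool: rows = {aggressive, stealthy} protocol,
# columns = {strict, lenient} censor.
A = [[-1.0, 3.0],
     [1.0, 0.0]]
p, v = solve_2x2_zero_sum(A)
print(f"play aggressive with prob {p:.2f}; game value {v:.2f}")
```

At equilibrium, both sides randomize: here the tool plays the aggressive protocol 20% of the time, and varying the payoff entries shows how changes in censor behavior shift the tool's best strategy and achievable performance.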
Finally, to bring our ideas to practice, we design a new censorship circumvention tool called \name. \name aims to increase the collateral damage of censorship by employing a "mass" of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.