Probabilistic Opacity for Markov Decision Processes
Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on labelled Markov chains. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where nondeterministic choice is combined with probabilistic transitions, and we study related decidability problems under partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.
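As an illustration of the opacity value discussed in this abstract, the sketch below computes a finite-horizon disclosure on a toy labelled Markov chain. All states, labels and probabilities are invented for the example and do not come from the paper: a run is disclosed when every run producing the same observation visits a secret state.

```python
# Toy labelled Markov chain, invented for illustration: state 0 is initial,
# states 2 and 3 are secret, and the observer sees only labels.
P = {0: {1: 0.4, 2: 0.3, 3: 0.3}, 1: {1: 1.0}, 2: {2: 1.0}, 3: {3: 1.0}}
obs = {0: 'a', 1: 'b', 2: 'b', 3: 'c'}   # states 1 and 2 are indistinguishable
SECRET = {2, 3}

def paths(horizon, state=0, p=1.0, path=(0,)):
    """Enumerate all paths of the given length with their probabilities."""
    if horizon == 0:
        yield path, p
        return
    for s, q in P[state].items():
        yield from paths(horizon - 1, s, p * q, path + (s,))

def disclosure(horizon):
    """Probability mass of runs whose observation proves the secret was visited."""
    classes = {}
    for path, p in paths(horizon):
        key = tuple(obs[s] for s in path)
        mass, all_secret = classes.get(key, (0.0, True))
        classes[key] = (mass + p, all_secret and bool(SECRET & set(path)))
    return sum(m for m, all_secret in classes.values() if all_secret)
```

Here only the runs ending in state 3 are betrayed by their observation (label `c`), so the disclosure is the probability mass of those runs.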
Probabilistic Disclosure: Maximisation vs. Minimisation
We consider opacity questions where an observation function provides
to an external attacker a view of the states along executions and
secret executions are those visiting some state from a fixed
subset. Disclosure occurs when the observer can deduce from a finite
observation that the execution is secret, the epsilon-disclosure
variant corresponding to the execution being secret with probability
greater than 1 - epsilon. In a probabilistic and nondeterministic
setting, where an internal agent can choose between actions, there
are two points of view, depending on the status of this agent: the
successive choices can either help the attacker trying to disclose
the secret, if the system has been corrupted, or they can prevent
disclosure as much as possible if these choices are part of the
system design. In the former situation, corresponding to a worst
case, the disclosure value is the supremum over the strategies of
the probability of disclosing the secret (maximisation), whereas in
the latter case, the disclosure is the infimum (minimisation). We
address quantitative problems (comparing the optimal value with a
threshold) and qualitative ones (when the threshold is zero or one)
related to both forms of disclosure for a fixed or finite
horizon. For all problems, we characterise their decidability status
and their complexity. We discover a surprising asymmetry: on the one
hand optimal strategies may be chosen among deterministic ones in
maximisation problems, while it is not the case for minimisation. On
the other hand, for the questions addressed here, more minimisation
problems than maximisation ones are decidable.
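The maximisation/minimisation distinction described above can be sketched on a toy one-step MDP. Everything here (states, actions, labels, probabilities) is invented for the example: a corrupted internal agent picks the action that helps the attacker, while a well-designed one picks the action that hides the secret.

```python
# Hypothetical one-step MDP: in the initial state an internal agent picks
# action 'x' or 'y'; states 2 and 3 are secret; the observer sees only labels.
MDP = {
    'x': {1: 0.5, 2: 0.5},   # under 'x' the secret state 2 is observably distinct
    'y': {1: 0.5, 3: 0.5},   # under 'y' the secret state 3 looks like state 1
}
obs = {1: 'b', 2: 'c', 3: 'b'}
SECRET = {2, 3}

def disclosure(action):
    """One-step disclosure: mass of observations produced only by secret states."""
    classes = {}
    for s, p in MDP[action].items():
        mass, all_secret = classes.get(obs[s], (0.0, True))
        classes[obs[s]] = (mass + p, all_secret and (s in SECRET))
    return sum(m for m, all_secret in classes.values() if all_secret)

max_disclosure = max(disclosure(a) for a in MDP)  # corrupted agent helps attacker
min_disclosure = min(disclosure(a) for a in MDP)  # agent protects the secret
```

In this toy instance action 'x' leaks the secret with probability 0.5 while action 'y' leaks nothing, so the two optimisation directions give different values from the same system.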
Probabilistic Opacity in Refinement-Based Modeling
Given a probabilistic transition system (PTS) partially observed by
an attacker, and an ω-regular predicate over the traces of the system,
measuring the disclosure of the secret means computing the probability
that an attacker who observes a run of the system can ascertain that its
trace belongs to the predicate. In the context of refinement, we
consider specifications given as Interval-valued Discrete Time Markov Chains
(IDTMCs), which are underspecified Markov chains where the probabilities on edges
are only required to belong to intervals. Scheduling an IDTMC produces
a concrete implementation as a PTS, and we define the worst-case disclosure of
the secret in the specification as the maximal disclosure over all
PTSs thus produced. We compute this value for a subclass of IDTMCs and we prove
that refinement can only improve the opacity of implementations.
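The worst-case disclosure over the implementations of an IDTMC can be sketched on a minimal invented example (the interval, states and observations are assumptions for illustration, not taken from the paper): scheduling fixes one edge probability inside its interval, and the worst case is the maximum over the resulting PTSs.

```python
# Toy IDTMC fragment, invented for illustration: from the initial state, the
# probability of the edge into an observably distinct secret state must lie
# in [0.3, 0.7]; the non-secret edge receives the complement. Scheduling the
# IDTMC means fixing a value in the interval, which yields a concrete PTS.
LO, HI = 0.3, 0.7

def disclosure_of(p_secret):
    # In this chain the secret state is the only one carrying its observation,
    # so a run is disclosed exactly when it takes the secret edge.
    return p_secret

def worst_case_disclosure(samples=100):
    # Maximal disclosure over a sample of the produced implementations;
    # here it is attained at the upper bound of the interval.
    return max(disclosure_of(LO + (HI - LO) * i / samples)
               for i in range(samples + 1))
```

The sampling is only a stand-in for the paper's exact computation: in this toy case the disclosure is monotone in the edge probability, so the supremum sits at the interval's upper bound.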
Probabilistic Opacity for a Passive Adversary and its Application to Chaum's Voting Scheme
A predicate is opaque for a given system if an adversary can never
establish the truth or falsehood of the predicate for any
observed computation. This notion has essentially been introduced and
studied in the context of transition systems, whether they describe the
semantics of programs, security protocols or other systems. In this
paper, we are interested in studying opacity in the probabilistic
computational world.
Indeed, in other settings, such as the Dolev-Yao model for instance, even
if an adversary is almost certain of the truth of the predicate, it
remains opaque, as the adversary cannot conclude for sure.
In this paper, we introduce a computational version of opacity in the case of
passive adversaries, called cryptographic opacity.
Our main result is a composition theorem: if a system is secure in an
abstract formalism and the cryptographic primitives used to implement
it are secure, then this system is secure in a
computational formalism. Security of the abstract system is the usual
opacity and security of the cryptographic primitives is IND-CPA security.
To illustrate our result, we give two applications:
a short and elegant proof of the classical Abadi-Rogaway result and
the first computational proof of Chaum's visual electronic
voting scheme.
Verifying Opacity Properties in Security Systems
We delineate a methodology for the specification and verification of flow security properties expressible in the opacity framework. We propose a logic, opacTL, for straightforwardly expressing such properties in systems that can be modelled as partially observable labelled transition systems. We develop verification techniques for analysing property opacity with respect to observation notions. Adding a probabilistic operator to the specification language enables quantitative analysis and verification. This analysis is implemented as an extension to the PRISM model checker and illustrated via a number of examples. Finally, an alternative approach to quantifying the opacity property, based on entropy, is sketched.
Proceedings of the 3rd International Workshop on Formal Aspects in Security and Trust (FAST2005)
The present report contains the pre-proceedings of the third international Workshop on Formal Aspects in Security and Trust (FAST2005), held in Newcastle upon Tyne, 18-19 July 2005. FAST is an event affiliated with the Formal Methods 2005 Congress (FM05). The third international Workshop on Formal Aspects in Security and Trust (FAST2005) aims at continuing the successful effort of the previous two FAST workshop editions in fostering cooperation among researchers in the areas of security and trust. The new challenges offered by the so-called ambient intelligence space, as a future paradigm in the information society, demand a coherent and rigorous framework of concepts, tools and methodologies to provide users' trust and confidence in the underlying communication/interaction infrastructure. It is necessary to address issues relating both to guaranteeing the security of the infrastructure and to the perception of the infrastructure being secure. In addition, users' confidence in what is happening must be enhanced by developing trust models that are effective but also easily comprehensible and manageable by users.