Complexity and Unwinding for Intransitive Noninterference
The paper considers several definitions of information flow security for
intransitive policies from the point of view of the complexity of verifying
whether a finite-state system is secure. The results are as follows. Checking
(i) P-security (Goguen and Meseguer), (ii) IP-security (Haigh and Young), and
(iii) TA-security (van der Meyden) can each be done in PTIME, while checking
TO-security (van der Meyden) is undecidable, as is checking ITO-security (van
der Meyden).
The most important ingredients in the proofs of the PTIME upper bounds are new
characterizations of the respective security notions, which also lead to new
unwinding proof techniques that are shown to be sound and complete for these
notions of security, and enable the algorithms to return simple
counter-examples demonstrating insecurity. Our results for IP-security improve
on a previous doubly exponential bound due to Hadj-Alouane et al.
Formal Verification of Differential Privacy for Interactive Systems
Differential privacy is a promising approach to privacy preserving data
analysis with a well-developed theory for functions. Despite recent work on
implementing systems that aim to provide differential privacy, the problem of
formally verifying that these systems have differential privacy has not been
adequately addressed. This paper presents the first results towards automated
verification of source code for differentially private interactive systems. We
develop a formal probabilistic automaton model of differential privacy for
systems by adapting prior work on differential privacy for functions. The main
technical result of the paper is a sound proof technique based on a form of
probabilistic bisimulation relation for proving that a system modeled as a
probabilistic automaton satisfies differential privacy. The novelty lies in the
way we track quantitative privacy leakage bounds using a relation family
instead of a single relation. We illustrate the proof technique on a
representative automaton motivated by PINQ, an implemented system that is
intended to provide differential privacy. To make our proof technique easier to
apply to realistic systems, we prove a form of refinement theorem and apply it
to show that a refinement of the abstract PINQ automaton also satisfies our
differential privacy definition. Finally, we begin the process of automating
our proof technique by providing an algorithm for mechanically checking a
restricted class of relations from the proof technique.
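The differential-privacy guarantee the paper's automaton model adapts can be stated as a checkable inequality on output distributions. Below is a hedged sketch, not the paper's model: for a finite mechanism, it checks that for every pair of neighbouring inputs, no output is more than a factor e^epsilon likelier on one input than on the other, using randomized response on a single bit as the mechanism.

```python
import math

# Illustrative sketch of the basic epsilon-differential-privacy check for a
# finite randomized mechanism. The mechanism and all names here are chosen
# for the example; they are not the paper's probabilistic automaton model.

def randomized_response(bit, p):
    """Report the true bit with probability p, the flipped bit otherwise."""
    return {bit: p, 1 - bit: 1 - p}

def satisfies_dp(mech, neighbors, epsilon):
    """Check Pr[M(d)=o] <= e^eps * Pr[M(d')=o] for all neighboring d, d'."""
    bound = math.exp(epsilon)
    for d, d2 in neighbors:
        out, out2 = mech(d), mech(d2)
        for o in set(out) | set(out2):
            # small tolerance guards against floating-point rounding
            if out.get(o, 0.0) > bound * out2.get(o, 0.0) + 1e-9:
                return False
    return True

p = 0.75                       # probability of reporting truthfully
eps = math.log(p / (1 - p))    # randomized response is ln(p/(1-p))-DP
mech = lambda d: randomized_response(d, p)
print(satisfies_dp(mech, [(0, 1), (1, 0)], eps))       # True: bound holds
print(satisfies_dp(mech, [(0, 1), (1, 0)], eps / 2))   # False: too strict
```

The paper's contribution is proving such bounds for interactive systems via a family of bisimulation relations rather than this direct enumeration, which only works for finite output spaces.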
Opacity with Orwellian Observers and Intransitive Non-interference
Opacity is a general behavioural security scheme flexible enough to account
for several specific properties. Some secret set of behaviors of a system is
opaque if a passive attacker can never tell whether the observed behavior is a
secret one or not. Instead of considering static observability, where the set
of observable events is fixed offline, or dynamic observability, where the set
of observable events changes over time depending on the history of the trace,
we consider Orwellian partial observability, where unobservable events are not
revealed unless a downgrading event occurs later in the trace. We show how to
verify that a regular secret is opaque for a regular language L w.r.t. an
Orwellian projection, whereas this problem has been proved undecidable for a
general Orwellian observation function, even for a regular language L.
We finally illustrate the relevance of our results by proving the equivalence
between opacity of regular secrets w.r.t. Orwellian projections and the
intransitive non-interference property.
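Language-based opacity under a static projection, the simplest of the observability settings the abstract contrasts, can be sketched directly. This is an illustrative toy with made-up traces, not the paper's Orwellian construction: a secret set is opaque when every secret word projects to an observation that some non-secret word also produces.

```python
# Minimal sketch of opacity under a static (non-Orwellian) projection.
# The language, secret set, and alphabets below are invented for the example.

def project(word, observable):
    """Static projection: erase every unobservable event from the word."""
    return ''.join(a for a in word if a in observable)

def opaque(language, secret, observable):
    """The secret is opaque iff every secret word is observationally
    indistinguishable from some non-secret word, so a passive observer
    can never be sure a secret behavior occurred."""
    non_secret_obs = {project(w, observable) for w in language - secret}
    return all(project(w, observable) in non_secret_obs for w in secret)

L = {'ab', 'hb', 'b'}
S = {'hb'}                            # words containing the secret action 'h'
print(opaque(L, S, {'a', 'b'}))       # True: 'hb' projects to 'b', like 'b'
print(opaque(L, S, {'a', 'b', 'h'}))  # False: 'h' is observed directly
```

The Orwellian case is harder precisely because whether an event is revealed depends on downgrading events occurring later in the trace, so the projection is no longer a simple per-event erasure.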
A Verified Information-Flow Architecture
SAFE is a clean-slate design for a highly secure computer system, with
pervasive mechanisms for tracking and limiting information flows. At the lowest
level, the SAFE hardware supports fine-grained programmable tags, with
efficient and flexible propagation and combination of tags as instructions are
executed. The operating system virtualizes these generic facilities to present
an information-flow abstract machine that allows user programs to label
sensitive data with rich confidentiality policies. We present a formal,
machine-checked model of the key hardware and software mechanisms used to
dynamically control information flow in SAFE and an end-to-end proof of
noninterference for this model.
We use a refinement proof methodology to propagate the noninterference
property of the abstract machine down to the concrete machine level. We use an
intermediate layer in the refinement chain that factors out the details of the
information-flow control policy and devise a code generator for compiling such
information-flow policies into low-level monitor code. Finally, we verify the
correctness of this generator using a dedicated Hoare logic that abstracts from
low-level machine instructions into a reusable set of verified structured code
generators.
Verifying Noninterference in a Cyber-Physical System: the Advanced Electric Power Grid
The advanced electric power grid is a complex real-time system with both cyber and physical components. While each component may function correctly in isolation, their composition may yield incorrect behavior due to interference. One specific type of interference arises in the frequency domain: essentially, violations of the Nyquist rate. The challenge is to encode these signal-processing characteristics in a form that can be model checked. Verifying the correctness of the cyber-physical composition with model-checking techniques requires a model that can represent frequency interference. In this paper, RT-PROMELA was used to construct the model, which was checked in RT-SPIN. To reduce the state-explosion problem, the model was decomposed into multiple sub-models, each with a smaller state space that can be checked individually, and the resulting proofs were then checked for noninterference. Cooperation among multiple clock variables, necessitated by their lack of a notion of urgency and their asynchronous interactions, is also addressed.
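The frequency-domain condition the abstract refers to is the classical Nyquist criterion. As a minimal numeric sketch, with illustrative values not taken from the paper: a sampled cyber component avoids aliasing the physical signal only when its sampling rate exceeds twice the highest frequency present in that signal.

```python
# Illustrative Nyquist-rate check; the 60 Hz grid frequency is standard in
# North America, but the harmonic count and sample rates are made up here.

def nyquist_ok(sample_rate_hz, max_signal_hz):
    """Sampling must be strictly faster than twice the highest signal
    frequency, or aliasing (frequency-domain interference) occurs."""
    return sample_rate_hz > 2 * max_signal_hz

fundamental = 60                     # Hz, grid fundamental frequency
max_harmonic = 5 * fundamental       # suppose harmonics up to the 5th matter
print(nyquist_ok(720, max_harmonic))  # True: 720 > 600, no aliasing
print(nyquist_ok(500, max_harmonic))  # False: under-sampled, interference
```

Encoding this kind of rate constraint as timing behavior over clock variables is what makes the property expressible to a real-time model checker such as RT-SPIN.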
Verifying That a Compiler Preserves Concurrent Value-Dependent Information-Flow Security
It is common to prove by reasoning over source code that programs do not leak sensitive data. But doing so leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. This task is complicated when programs enforce value-dependent information-flow security properties (in which classification of locations can vary depending on values in other locations) and complicated further when programs exploit shared-variable concurrency.
Prior work has formally defined a notion of concurrency-aware refinement for preserving value-dependent security properties. However, that notion is considerably more complex than standard refinement definitions typically applied in the verification of semantics preservation by compilers. To date it remains unclear whether it can be applied to a realistic compiler, because there exist no general decomposition principles for separating it into smaller, more familiar, proof obligations.
In this work, we provide such a decomposition principle, which we show can almost halve the complexity of proving secure refinement. Further, we demonstrate its applicability to secure compilation by proving in Isabelle/HOL the preservation of value-dependent security by a proof-of-concept compiler from an imperative While language to a generic RISC-style assembly language, for programs with shared-memory concurrency mediated by locking primitives. Finally, we execute our compiler in Isabelle on a While language model of the Cross Domain Desktop Compositor, demonstrating, to our knowledge, the first use of a compiler verification result to carry an information-flow security property down to the assembly-level model of a non-trivial concurrent program.
Compositional closure for Bayes Risk in probabilistic noninterference
We give a sequential model for noninterference security including probability
(but not demonic choice), thus supporting reasoning about the likelihood that
high-security values might be revealed by observations of low-security
activity. Our novel methodological contribution is the definition of a
refinement order and its use to compare security measures between
specifications and (their supposed) implementations. This contrasts with the
more common practice of evaluating the security of individual programs in
isolation.
The appropriateness of our model and order is supported by showing that our
refinement order is the greatest compositional relation -- the compositional
closure -- with respect to our semantics and an "elementary" order based on
Bayes Risk, a security measure already in widespread use. We also relate
refinement to other measures such as Shannon Entropy.
By applying the approach to a non-trivial example, the anonymous-majority
Three-Judges protocol, we demonstrate that correctness arguments can be
simplified by the sort of layered development -- through levels of increasing
detail -- that is allowed and encouraged by compositional semantics.
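The "elementary" Bayes-Risk measure the abstract builds its order on admits a compact numeric sketch. Assuming the standard channel formulation (a prior over secrets and a stochastic matrix from secrets to observations, with matrix values invented here for illustration): the attacker's best-guess success probability after observing is the column-wise maximum of the joint distribution, summed over observations, and Bayes Risk is one minus that.

```python
# Illustrative computation of Bayes Risk for a prior and a channel matrix
# (rows: secrets, columns: observations). The matrices are toy examples.

def bayes_risk(prior, channel):
    """Bayes Risk = 1 - sum over observations y of max over secrets x of
    prior[x] * channel[x][y], i.e. the attacker's failure probability
    under an optimal guessing strategy after seeing the observation."""
    n_obs = len(channel[0])
    vulnerability = sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(n_obs)
    )
    return 1.0 - vulnerability

pi = [0.5, 0.5]
leaky = [[1.0, 0.0],      # secret 0 always yields observation 0
         [0.0, 1.0]]      # secret 1 always yields observation 1
noisy = [[0.5, 0.5],
         [0.5, 0.5]]      # observations carry no information
print(bayes_risk(pi, leaky))  # 0.0: the secret is always guessed
print(bayes_risk(pi, noisy))  # 0.5: the attacker does no better than the prior
```

The paper's refinement order is then the largest relation that is preserved by program composition and never decreases this risk, which is what "compositional closure" refers to.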