
    A Temporal Logic for Hyperproperties

    Hyperproperties, as introduced by Clarkson and Schneider, characterize the correctness of a computer program as a condition on its set of computation paths. Standard temporal logics can only refer to a single path at a time and therefore cannot express many hyperproperties of interest, including noninterference and other important properties in security and coding theory. In this paper, we investigate an extension of temporal logic with explicit path variables. We show that quantification over paths naturally subsumes other extensions of temporal logic with operators for information flow and knowledge. The model checking problem for temporal logic with path quantification is decidable. For alternation depth 1, the complexity is PSPACE in the length of the formula and NLOGSPACE in the size of the system, as for linear-time temporal logic.
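
    As a concrete illustration (our own sketch, not an example taken from the abstract), a noninterference-style requirement such as observational determinism can be written with explicit path variables roughly as follows, where I_low and O_low denote the low-observable inputs and outputs along a path:

```latex
% Sketch of a two-path property of alternation depth 1: if two paths
% agree on their low inputs, they must globally agree on their low
% outputs. Notation is illustrative; the paper's syntax may differ.
\forall \pi.\ \forall \pi'.\;
  \bigl( I_{\mathrm{low}}^{\pi} = I_{\mathrm{low}}^{\pi'} \bigr)
  \;\rightarrow\;
  \mathbf{G}\,\bigl( O_{\mathrm{low}}^{\pi} = O_{\mathrm{low}}^{\pi'} \bigr)
```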

    Complexity and Unwinding for Intransitive Noninterference

    The paper considers several definitions of information flow security for intransitive policies from the point of view of the complexity of verifying whether a finite-state system is secure. The results are as follows: checking (i) P-security (Goguen and Meseguer), (ii) IP-security (Haigh and Young), and (iii) TA-security (van der Meyden) is in PTIME in each case, while checking TO-security (van der Meyden) is undecidable, as is checking ITO-security (van der Meyden). The most important ingredients in the proofs of the PTIME upper bounds are new characterizations of the respective security notions, which also lead to new unwinding proof techniques that are shown to be sound and complete for these notions of security, and which enable the algorithms to return simple counterexamples demonstrating insecurity. Our results for IP-security improve on a previous doubly exponential bound of Hadj-Alouane et al.
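
    To make the purge-based flavour of these definitions concrete, here is a brute-force sketch of the classical Goguen-Meseguer check on a toy two-domain machine. The machine, the names, and the bounded enumeration are our own illustrative assumptions, not the paper's PTIME algorithm; the sketch does, however, return a simple counterexample when the system leaks, echoing the behaviour described above.

```python
from itertools import product

# Interference policy: L (low) may interfere with H (high), not vice versa.
POLICY = {("L", "L"), ("H", "H"), ("L", "H")}

def purge(actions, dom, domain_of):
    """Remove actions whose domain may not interfere with `dom`."""
    return [a for a in actions if (domain_of[a], dom) in POLICY]

def step(state, action):
    # toy deterministic transition function on states 0..3
    return (state + (1 if action == "h" else 2)) % 4

def obs(state, dom):
    # L observes only the parity of the state in this toy machine
    return state % 2 if dom == "L" else state

def p_secure(alphabet, domain_of, max_len=6):
    """Require obs_L(run(seq)) == obs_L(run(purge(seq))) for short seqs."""
    for n in range(max_len + 1):
        for seq in product(alphabet, repeat=n):
            s1, s2 = 0, 0
            for a in seq:
                s1 = step(s1, a)
            for a in purge(seq, "L", domain_of):
                s2 = step(s2, a)
            if obs(s1, "L") != obs(s2, "L"):
                return False, seq  # a simple counterexample
    return True, None

print(p_secure(["h", "l"], {"h": "H", "l": "L"}))  # (False, ('h',)): this toy machine leaks
```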

    Formally Verified Algorithmic Fairness using Information-Flow Tools (Extended Abstract)

    This work presents results on the use of information-flow tools for the formal verification of algorithmic fairness properties. The problem of enforcing secure information flow was originally studied in the context of information security: if secret information may “flow” through an algorithm in such a way that it can influence the program’s output, we consider that to be insecure information flow, as attackers could potentially observe (parts of) the secret. Due to its widespread use, numerous tools exist for analyzing secure information-flow properties. Recent work showed that there is a strong correspondence between secure information flow and algorithmic fairness: if protected group attributes are treated as secret program inputs, then secure information flow means that these “secret” attributes cannot influence the result of a program. We demonstrate that off-the-shelf information-flow tools can be used to formally analyze algorithmic fairness properties, including established notions such as (conditional) demographic parity as well as a new quantitative notion named fairness spread.
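
    For orientation, demographic parity itself is a simple property: the rate of positive outcomes should not depend on the protected attribute. The sketch below checks it empirically on a sample; the paper's approach is a formal information-flow analysis, not sampling, and "fairness spread" is the paper's own notion, which we do not attempt to reproduce here.

```python
def demographic_parity_gap(model, population, protected):
    """|P(model=1 | protected) - P(model=1 | not protected)| on a sample."""
    pos = [x for x in population if protected(x)]
    neg = [x for x in population if not protected(x)]
    rate = lambda xs: sum(model(x) for x in xs) / len(xs)
    return abs(rate(pos) - rate(neg))

# hypothetical example: a score threshold that ignores the group attribute,
# i.e. the "secret" input cannot influence the output
population = [{"group": g, "score": s} for g in (0, 1) for s in range(10)]
model = lambda x: 1 if x["score"] >= 5 else 0
print(demographic_parity_gap(model, population, lambda x: x["group"] == 1))  # 0.0
```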

    Information flow analysis for mobile code in dynamic security environments

    With the growing amount of data handled by Internet-enabled mobile devices, the task of preventing software from leaking confidential information is becoming increasingly important. At the same time, mobile applications are typically executed on different devices whose users have varying requirements for the privacy of their data. Users should be able to define their personal information security settings, and they should get a reliable assurance that the installed software respects these settings. Language-based information flow security focuses on the analysis of programs to determine information flows among accessed data resources of different security levels, and to verify and formally certify that these flows follow a given policy. In the mobile code scenario, however, both the dynamic nature of the security environment and the fact that mobile software is distributed as bytecode pose a challenge for existing static analysis approaches. This thesis presents a language-based mechanism to certify information flow security in the presence of dynamic environments. An object-oriented high-level language as well as a bytecode language are equipped with facilities to inspect user-defined information flow security settings at runtime. This way, the software developer can create privacy-aware programs that can adapt their behaviour to arbitrary security environments, a property that is formalized as "universal noninterference". This property is statically verified by an information flow type system that uses restrictive forms of dependent types to reason abstractly about the concrete security policy in effect at runtime. To verify compiled bytecode programs, a low-level version of the type system is presented that works on an intermediate code representation in which the original program structure is partially restored. Rigorous soundness proofs and a type-preserving compilation enable the generation of certified bytecode programs in the style of proof-carrying code. To show the practical feasibility of the approach, the system is implemented and demonstrated on a concrete application scenario, where personal data are sent from a mobile device to a server on the Internet.
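
    The following sketch (in Python rather than the thesis's object-oriented language; all names and the two-level lattice are our assumptions) conveys the kind of runtime policy inspection described above: the program queries the user-defined security settings and adapts what it releases, rather than being compiled against one fixed policy.

```python
LEVELS = {"public": 0, "confidential": 1}

def may_flow(source, sink, policy):
    """A flow is allowed if the sink's level dominates the source's level."""
    return LEVELS[policy[source]] >= LEVELS[policy[sink]] or \
           policy[source] == "public"

def send_report(location, policy, send):
    # Inspect the user-defined policy at runtime and only include the
    # location if flowing it to the network is permitted.
    if may_flow("gps", "network", policy):
        send({"location": location})
    else:
        send({"location": None})  # adapt behaviour, leak nothing

# a privacy-conscious user labels GPS data confidential, the network public:
# the report is sent without the location
send_report((48.1, 11.6), {"gps": "confidential", "network": "public"}, print)
```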

    Opacity with Orwellian Observers and Intransitive Non-interference

    Opacity is a general behavioural security scheme flexible enough to account for several specific properties. A secret set of behaviours of a system is opaque if a passive attacker can never tell whether an observed behaviour is secret or not. Instead of considering static observability, where the set of observable events is fixed offline, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we consider Orwellian partial observability, where unobservable events are not revealed unless a downgrading event occurs in the future of the trace. We show how to verify that a regular secret is opaque for a regular language L w.r.t. an Orwellian projection, whereas the problem has been proved undecidable for a regular language L w.r.t. a general Orwellian observation function. Finally, we illustrate the relevance of our results by proving the equivalence between opacity of regular secrets w.r.t. Orwellian projections and the intransitive non-interference property.
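
    The core definition is easy to state operationally: a secret is opaque iff every observation produced by a secret behaviour is also produced by some non-secret behaviour. The brute-force sketch below checks this for a *static* projection on bounded-length traces; it is our own toy and does not capture the harder Orwellian case that the paper addresses.

```python
from itertools import product

def project(trace, observable):
    """Static projection: erase unobservable events."""
    return tuple(e for e in trace if e in observable)

def opaque(alphabet, is_secret, observable, max_len=5):
    secret_obs, public_obs = set(), set()
    for n in range(max_len + 1):
        for t in product(alphabet, repeat=n):
            (secret_obs if is_secret(t) else public_obs).add(project(t, observable))
    # opaque iff every observation of a secret trace is also produced by
    # some non-secret trace, so the attacker can never be sure
    return secret_obs <= public_obs

# hypothetical secret: traces containing the unobservable event "s"
print(opaque({"a", "b", "s"}, lambda t: "s" in t, {"a", "b"}))  # True
```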

    Effective verification of confidentiality for multi-threaded programs

    This paper studies how confidentiality properties of multi-threaded programs can be verified efficiently by a combination of newly developed and existing model checking algorithms. In particular, we study the verification of scheduler-specific observational determinism (SSOD), a property that characterizes secure information flow for multi-threaded programs under a given scheduler. Scheduler-specificness allows us to reason about refinement attacks, an important and tricky class of attacks that are notorious in practice. SSOD imposes two conditions: (SSOD-1) all individual public variables have to evolve deterministically, expressed by requiring stuttering equivalence between the traces of each individual public variable, and (SSOD-2) the relative order of updates of public variables is coincidental, i.e., there always exists a matching trace. We verify the first condition by reducing it to the question whether all traces of each public variable are stuttering equivalent. To verify the second condition, we show how the condition can be translated, via a series of steps, into a standard strong bisimulation problem. Our verification techniques can be easily adapted to verify other formalizations of similar information flow properties. We also exploit counterexample generation techniques to synthesize attacks for insecure programs that fail either SSOD-1 or SSOD-2, i.e., showing how confidentiality of programs can be broken.
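
    The stuttering equivalence behind SSOD-1 has a simple finite-trace intuition: two traces of a single public variable are stuttering equivalent iff they agree after collapsing consecutive repetitions. The sketch below shows only this intuition; the paper performs the check with model checking algorithms over all traces, not on individual finite ones.

```python
def destutter(trace):
    """Collapse consecutive repetitions of the same value."""
    out = []
    for v in trace:
        if not out or out[-1] != v:
            out.append(v)
    return out

def stuttering_equivalent(trace_a, trace_b):
    return destutter(trace_a) == destutter(trace_b)

# the same sequence of public values, observed at different "speeds"
print(stuttering_equivalent([0, 0, 1, 1, 1, 2], [0, 1, 2, 2]))  # True
print(stuttering_equivalent([0, 1, 2], [0, 2, 1]))              # False
```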

    Formal Verification of Differential Privacy for Interactive Systems

    Differential privacy is a promising approach to privacy-preserving data analysis with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique, based on a form of probabilistic bisimulation relation, for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique. Comment: 65 pages with 1 figure
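
    To make the interactive setting concrete, here is a toy model, motivated by (but in no way taken from) PINQ, of a system that answers count queries with the Laplace mechanism and tracks cumulative privacy loss against a budget under sequential composition. All names are our own.

```python
import random

class InteractiveCounter:
    def __init__(self, data, epsilon_budget):
        self.data = data
        self.remaining = epsilon_budget

    def noisy_count(self, predicate, epsilon):
        # sequential composition: each answer consumes epsilon of the budget
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        true_count = sum(1 for x in self.data if predicate(x))
        # counting queries have sensitivity 1, so Laplace noise of scale
        # 1/epsilon suffices; Exp(eps) - Exp(eps) is Laplace(0, 1/eps)
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

db = InteractiveCounter([3, 1, 4, 1, 5, 9, 2, 6], epsilon_budget=1.0)
print(db.noisy_count(lambda x: x > 3, epsilon=0.5))  # noisy answer near 4
```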