
    Generalizing Permissive-Upgrade in Dynamic Information Flow Analysis

    Preventing implicit information flows by dynamic program analysis requires coarse approximations that result in false positives, because a dynamic monitor sees only the executed trace of the program. One widely deployed method is the no-sensitive-upgrade check, which terminates a program whenever a variable's taint is upgraded (made more sensitive) due to a control dependence on tainted data. Although sound, this method is restrictive, e.g., it terminates the program even if the upgraded variable is never used subsequently. To counter this, Austin and Flanagan introduced the permissive-upgrade check, which allows a variable upgrade due to control dependence, but marks the variable "partially-leaked". The program is stopped later if it tries to use the partially-leaked variable. Permissive-upgrade handles the dead-variable assignment problem and remains sound. However, Austin and Flanagan develop permissive-upgrade only for a two-point (low-high) security lattice and indicate a generalization to pointwise products of such lattices. In this paper, we develop a non-trivial and non-obvious generalization of permissive-upgrade to arbitrary lattices. The key difficulty lies in finding a suitable notion of partial leaks that is both sound and permissive, and in developing a suitable definition of memory equivalence that allows an inductive proof of soundness.
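    As a rough illustration of the contrast between the two checks on the two-point lattice (not the paper's formalism), the Python sketch below shows a no-sensitive-upgrade monitor stopping on the upgrade itself, while a permissive-upgrade monitor marks the variable partially-leaked and fails only on a later use. The labels, class names, and control flow are invented for the example.

```python
# Illustrative monitor over the two-point lattice L < H, plus a
# "partially-leaked" label P used only in permissive mode.
L, H, P = 'L', 'H', 'P'

def lub(a, b):
    # Join on L < H; P taints anything it meets.
    if P in (a, b):
        return P
    return H if H in (a, b) else L

class MonitorError(Exception):
    pass

class Monitor:
    def __init__(self, mode='permissive'):
        self.mode = mode          # 'nsu' or 'permissive'
        self.labels = {}          # variable -> label

    def assign(self, var, value_label, pc):
        old = self.labels.get(var, L)
        if pc == H and old == L:
            if self.mode == 'nsu':
                # NSU: stop as soon as a low variable would be upgraded
                # under a high program counter, even if never used later.
                raise MonitorError(f'NSU: sensitive upgrade of {var}')
            # Permissive-upgrade: allow the assignment but mark the
            # variable partially-leaked; failure is deferred to any use.
            self.labels[var] = P
        else:
            self.labels[var] = lub(value_label, pc)

    def use(self, var):
        lab = self.labels.get(var, L)
        if lab == P:
            raise MonitorError(f'use of partially-leaked variable {var}')
        return lab

# Under a branch on secret data (pc = H), assigning to a public variable
# kills the program under NSU, but permissive-upgrade only fails if the
# variable is actually used afterwards.
m = Monitor('permissive')
m.assign('y', L, pc=H)      # allowed, y becomes partially-leaked
try:
    m.use('y')              # deferred failure happens here
except MonitorError as e:
    print(e)
```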

    Information Flow Safety in Multiparty Sessions

    We consider a calculus for multiparty sessions enriched with security levels for messages. We propose a monitored semantics for this calculus, which blocks the execution of processes as soon as they attempt to leak information. We illustrate the use of our monitored semantics with various examples, and show that the induced safety property implies a noninterference property studied previously. (In Proceedings EXPRESS 2011, arXiv:1108.407)
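    A minimal sketch of the blocking behaviour such a monitored semantics enforces, assuming a simple two-level ordering on message levels; the function and level names are hypothetical and the session calculus itself is not modelled here.

```python
# Toy monitored send: the monitor blocks a process as soon as it would
# forward data at a level higher than the level declared for the exchange.
LEVELS = {'public': 0, 'secret': 1}

class Blocked(Exception):
    pass

def monitored_send(exchange_level, payload_level, payload, outbox):
    """Deliver the payload only if its level is allowed on this exchange."""
    if LEVELS[payload_level] > LEVELS[exchange_level]:
        # Attempted leak: secret data on a public exchange.
        raise Blocked('send blocked: would leak %s data on a %s exchange'
                      % (payload_level, exchange_level))
    outbox.append((exchange_level, payload))

outbox = []
monitored_send('public', 'public', 'hello', outbox)          # allowed
try:
    monitored_send('public', 'secret', 'pin=1234', outbox)   # blocked
except Blocked as e:
    print(e)
```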

    The Cardinal Abstraction for Quantitative Information Flow

    Qualitative information flow aims at detecting information leaks, whereas the emerging quantitative techniques target the estimation of information leaks. Quantifying information flow in the presence of low inputs is challenging, since the traditional techniques of approximating and counting the reachable states of a program no longer suffice. This paper proposes an automated quantitative information flow analysis for imperative deterministic programs with low inputs. The approach relies on a novel abstract domain, the cardinal abstraction, in order to compute a precise upper bound on the maximum leakage of batch-job programs. We prove the soundness of the cardinal abstract domain by relying on the framework of abstract interpretation. We also prove its precision with respect to a flow-sensitive type system for the two-point security lattice.
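    The sketch below illustrates the general idea behind a cardinality-style bound, not the paper's abstract domain or its transfer functions: with the low inputs fixed, one tracks an upper bound on how many distinct values each variable can take as the secrets vary, and the base-2 logarithm of the output's bound limits the leakage in bits. All names are invented for the example.

```python
import math

class Card:
    """Map each variable to an upper bound on its value-set cardinality."""
    def __init__(self):
        self.bound = {}

    def set_low(self, var):
        # A low input is fixed across the runs being compared,
        # so it contributes at most one value.
        self.bound[var] = 1

    def set_secret(self, var, domain_size):
        self.bound[var] = domain_size

    def assign_expr(self, target, operands, result_domain=None):
        # Sound but possibly coarse: the result can take at most the
        # product of the operands' cardinalities, capped by the size of
        # the expression's result domain when that is known.
        b = 1
        for v in operands:
            b *= self.bound.get(v, 1)
        if result_domain is not None:
            b = min(b, result_domain)
        self.bound[target] = b

    def leakage_bits(self, output_var):
        return math.log2(self.bound[output_var])

a = Card()
a.set_low('lo')
a.set_secret('hi', 2 ** 32)
a.assign_expr('tmp', ['hi'], result_domain=4)   # e.g. tmp := hi % 4
a.assign_expr('out', ['tmp', 'lo'])             # out combines tmp and the low input
print(a.leakage_bits('out'))                    # at most 2.0 bits leaked
```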

    Self-Adaptation and Secure Information Flow in Multiparty Communications

    We present a comprehensive model of structured communications in which self-adaptation and security concerns are jointly addressed. More specifically, we propose a model of multiparty, self-adaptive communications with access control and secure information flow guarantees. In our model, multiparty protocols (choreographies) are described as global types; security violations occur when process implementations of protocol participants attempt to read or write messages of inappropriate security levels within directed exchanges. Such violations trigger adaptation mechanisms that prevent the violations from occurring and/or from propagating their effect in the choreography. Our model is equipped with local and global adaptation mechanisms for reacting to security violations of different gravity; type soundness results ensure that the overall multiparty protocol is still correctly executed while the system adapts itself to preserve the participants' security.
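    As a loose illustration (not the paper's typed model), the sketch below shows a runtime that reacts to a read above a participant's clearance with either a local or a global adaptation, depending on an assumed severity rule; every name, and the severity rule itself, is hypothetical.

```python
class Violation(Exception):
    def __init__(self, severity):
        self.severity = severity

def deliver(msg_level, reader_clearance):
    # A read above clearance is a violation; how far above clearance it
    # is decides (for this toy example only) whether the reaction is
    # local or global.
    if msg_level > reader_clearance:
        raise Violation('local' if msg_level - reader_clearance == 1 else 'global')

def handle_exchange(msg_level, reader_clearance, local_fallback, global_restart):
    try:
        deliver(msg_level, reader_clearance)
        return 'delivered'
    except Violation as v:
        # Local adaptation replaces the offending participant's behaviour;
        # global adaptation restarts or reconfigures the choreography.
        return local_fallback() if v.severity == 'local' else global_restart()

print(handle_exchange(1, 1, lambda: 'local adaptation', lambda: 'global restart'))
print(handle_exchange(2, 1, lambda: 'local adaptation', lambda: 'global restart'))
print(handle_exchange(3, 1, lambda: 'local adaptation', lambda: 'global restart'))
```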

    A Methodology For Micro-Policies

    This thesis proposes a formal methodology for defining, specifying, and reasoning about micro-policies: security policies based on fine-grained tagging that include forms of access control, memory safety, compartmentalization, and information-flow control. Our methodology is based on a symbolic machine that extends a conventional RISC-like architecture with tags. Tags express security properties of parts of the program state ("this is an instruction", "this is secret", etc.), and are checked and propagated on every instruction according to flexible user-supplied rules. We apply this methodology to two widely studied policies, information-flow control and heap memory safety, implementing them with the symbolic machine and formally characterizing their security guarantees: for information-flow control, we prove a classic notion of termination-insensitive noninterference; for memory safety, a novel property that protects memory regions that a program cannot validly reach through the pointers it possesses, which, we believe, provides a useful criterion for evaluating and comparing different flavors of memory safety. We show how the symbolic machine can be realized with a more practical processor design, where a software monitor takes advantage of a hardware cache to speed up its execution while protecting itself from potentially malicious user-level code. Our development has been formalized and verified in the Coq proof assistant, attesting that our methodology can provide rigorous security guarantees.
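    A toy sketch of the tagging idea, not the thesis's symbolic machine: each register carries a tag, and a user-supplied rule decides, per instruction, whether the step is allowed and what tag the result gets. Here the rule encodes a two-level information-flow policy; all identifiers are illustrative.

```python
from typing import Callable, Tuple

LOW, HIGH = 0, 1

def ifc_rule(op: str, pc_tag: int, arg_tags: Tuple[int, ...]):
    """User-supplied rule: is this step allowed, and what is the result tag?"""
    if op == 'add':
        return True, max(pc_tag, *arg_tags)
    if op == 'store':
        dest_tag, val_tag = arg_tags
        # Forbid writing secret data (or data under a secret pc) into a
        # public location.
        if max(pc_tag, val_tag) > dest_tag:
            return False, None
        return True, dest_tag
    return False, None

def step(regs, tags, pc_tag, instr, rule: Callable = ifc_rule):
    op = instr[0]
    if op == 'add':
        _, rd, r1, r2 = instr
        ok, t = rule('add', pc_tag, (tags[r1], tags[r2]))
        if not ok:
            raise RuntimeError('policy violation')
        regs[rd], tags[rd] = regs[r1] + regs[r2], t
    elif op == 'store':
        _, rd, rs = instr
        ok, t = rule('store', pc_tag, (tags[rd], tags[rs]))
        if not ok:
            raise RuntimeError('policy violation')
        regs[rd], tags[rd] = regs[rs], t

regs = {'r0': 1, 'r1': 2, 'r2': 0, 'out': 0}
tags = {'r0': LOW, 'r1': HIGH, 'r2': LOW, 'out': LOW}
step(regs, tags, LOW, ('add', 'r2', 'r0', 'r1'))   # r2 becomes HIGH
try:
    step(regs, tags, LOW, ('store', 'out', 'r2'))  # rejected: HIGH into LOW
except RuntimeError as e:
    print(e)
```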

    Information flow and declassification analysis for legacy and untrusted programs

    Standard access control mechanisms are often insufficient to enforce compliance of programs with security policies. For this reason, information flow analysis has become a topic of increasing interest. In this type of analysis, the main property to be checked is called non-interference, which basically states that the publicly observable behaviour of a program is entirely independent of its secret input values. However, simple non-interference is too restrictive for specifying and enforcing information flow policies in most programs. Exceptions to non-interference are provided using declassification policies. Several approaches for enforcing declassification have been proposed in the literature. In most of these approaches, the declassification policies are embedded in the program itself or heavily tied to the variables in the program being analyzed, thereby providing at best little separation between the code and the policy. Consequently, the previous approaches essentially require that the code be trusted, since to trust that the correct policy is being enforced, we need to trust the source code. In this thesis, we propose a novel framework for information flow analysis, with support for declassification policies that are related to the source code being analyzed via its I/O channels. The framework supports many of the declassification policies identified in the literature. Based on flow-based static analysis, it represents a first step towards a new approach that can be applied to untrusted and legacy source code to automatically verify that the analyzed program complies with the specified declassification policies. We present a framework in which expressions over input channel values that could be output by the program are compared to a set of declassification requirements. We build an implementation of this framework, which works by constructing a conservative approximation of such expressions, and by determining whether all of them satisfy the declassification requirements stated in the policy. We introduce a representation of such expressions that resembles tree automata. We prove that if a program is considered safe according to our analysis then it satisfies a property we call Policy Controlled Release, which formalizes information-flow correctness according to our notion of declassification policy. We demonstrate, through examples, that our approach works for several interesting and useful declassification policies, including one involving declassification of the average of several confidential values. Finally, we extend the static analyzer to build a practical hybrid static-runtime enforcement mechanism, consisting of three steps: static analysis, preload checking, and runtime enforcement. We demonstrate how the hybrid mechanism is able to enforce real-world policies that cannot be handled by standard approaches from industry. We also show how this goal is achieved by keeping the static analysis step system-independent and the runtime enforcement overhead minimal.
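    To give a flavour of comparing output expressions against declassification requirements, the sketch below represents expressions over an input channel as small trees and accepts an output only if it matches an allowed pattern (here, the average of the channel's values). It deliberately simplifies away the tree-automata-like representation from the thesis, and all names are invented.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Expr:
    op: str                    # 'input', 'div', 'sum', 'const', ...
    args: Tuple = ()
    channel: str = ''

def mentions(expr, channel):
    # Does the expression read from the given input channel anywhere?
    if expr.op == 'input' and expr.channel == channel:
        return True
    return any(mentions(a, channel) for a in expr.args)

def is_average_of(expr, channel, n):
    # Matches sum(channel reads) / n, a deliberately narrow pattern.
    return (expr.op == 'div'
            and expr.args[1] == Expr('const', (), str(n))
            and expr.args[0].op == 'sum'
            and all(a.op == 'input' and a.channel == channel
                    for a in expr.args[0].args))

def complies(output_exprs, secret_channel, n):
    # Every output that depends on the secret channel must be the
    # declassified average; anything else is rejected.
    for e in output_exprs:
        if mentions(e, secret_channel) and not is_average_of(e, secret_channel, n):
            return False
    return True

salaries = tuple(Expr('input', (), 'salaries') for _ in range(3))
avg = Expr('div', (Expr('sum', salaries), Expr('const', (), '3')))
raw = Expr('input', (), 'salaries')
print(complies([avg], 'salaries', 3))        # True: only the average is released
print(complies([avg, raw], 'salaries', 3))   # False: a raw value would leak
```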