
    A static analysis for quantifying information flow in a simple imperative language

    We propose an approach to quantifying interference in a simple imperative language that includes a looping construct. In this paper we focus on a particular case of this definition of interference: leakage of information from private variables to public ones via a Trojan Horse attack. We quantify leakage in terms of Shannon's information theory, and we motivate our definition by proving a result relating this notion of leakage to the classical notion of programming-language interference. The major contribution of the paper is a quantitative static analysis for such a language based on this definition. The analysis uses some non-trivial information-theoretic results, such as Fano's inequality and L1 inequalities, to provide reasonable bounds for conditional statements. While-loops are handled by integrating a qualitative flow-sensitive dependency analysis into the quantitative analysis.
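
    As an illustration of the Shannon-theoretic view of leakage used above, the following is a minimal sketch (not the paper's static analysis): for a deterministic program with a uniformly distributed secret, the leakage, i.e. the mutual information between the secret and the public output, reduces to the Shannon entropy of the output distribution. The example program (public = secret % 4) and the 4-bit secret are assumptions for illustration only.

```python
from collections import Counter
from math import log2

def program(secret):
    # hypothetical example program: leaks only the two low bits of the secret
    return secret % 4

secrets = range(16)                      # uniform 4-bit secret
outputs = Counter(program(s) for s in secrets)
n = len(secrets)

# For a deterministic program with a uniform secret, the mutual information
# I(secret; output) equals H(output), the entropy of the output distribution.
leakage = -sum((c / n) * log2(c / n) for c in outputs.values())
print(f"estimated leakage: {leakage} bits")   # -> 2.0 bits
```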

    Proving uniformity and independence by self-composition and coupling

    Proof by coupling is a classical proof technique for establishing probabilistic properties of pairs of probabilistic processes, such as stochastic dominance and rapid mixing of Markov chains. More recently, couplings have been investigated as a useful abstraction for formal reasoning about relational properties of probabilistic programs, in particular for modeling reduction-based cryptographic proofs and for verifying differential privacy. In this paper, we demonstrate that probabilistic couplings can also be used for verifying non-relational probabilistic properties. Specifically, we show that the program logic pRHL, whose proofs are formal versions of proofs by coupling, can be used to formalize uniformity and probabilistic independence. We formally verify our main examples using the EasyCrypt proof assistant.
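
    To make the coupling idea concrete, here is a minimal brute-force sketch (outside pRHL/EasyCrypt, which the paper actually uses) of a coupling argument for uniformity. For the one-time-pad-style program y = k XOR x, with k sampled uniformly, the bijection k -> k XOR v1 XOR v2 on the random coins couples two runs so that the first run outputs v1 exactly when the second outputs v2; hence Pr[y = v1] = Pr[y = v2] for all v1, v2, i.e. the output is uniform. The program and parameters below are assumptions for illustration only.

```python
BITS = 4
SPACE = range(2 ** BITS)

def run(k, x):
    # program under analysis: one-time-pad-style masking of x with coin k
    return k ^ x

def coupled(k, v1, v2):
    # bijection on the random coins relating the two runs
    return k ^ v1 ^ v2

x = 0b1010  # arbitrary fixed public input
for v1 in SPACE:
    for v2 in SPACE:
        for k in SPACE:
            # whenever run 1 outputs v1, the coupled run 2 outputs v2
            assert (run(k, x) == v1) == (run(coupled(k, v1, v2), x) == v2)
print("coupling check passed: the output distribution is uniform")
```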

    Synthesizing Probabilistic Invariants via Doob's Decomposition

    When analyzing probabilistic computations, a powerful approach is to first find a martingale, an expression over the program variables whose expectation remains invariant, and then apply the optional stopping theorem to infer properties at termination time. One of the main challenges is to find such martingales systematically. We propose a novel procedure for synthesizing martingale expressions from an arbitrary initial expression. Unlike state-of-the-art approaches, we do not rely on constraint solving; instead, we use a symbolic construction based on Doob's decomposition. This procedure can produce very complex martingales, expressed in terms of conditional expectations. We show how to automatically generate and simplify these martingales, and how to apply the optional stopping theorem to infer properties at termination time. This last step typically involves some simplification and is usually done manually in current approaches. We implement our techniques in a prototype tool and demonstrate the process on several classical examples, some of which go beyond the capability of current semi-automatic approaches.
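
    As a concrete, hand-worked (not tool-generated) instance of the recipe described above, consider a symmetric +/-1 random walk stopped on hitting 0 or N. Starting from the initial expression X_t^2, Doob's decomposition subtracts the accumulated one-step drift E[X_{t+1}^2 - X_t^2 | X_t] = 1, yielding the martingale M_t = X_t^2 - t; the optional stopping theorem then gives E[T] = E[X_T^2] - X_0^2 = x0 * (N - x0). The sketch below checks this prediction empirically; the program and constants are hypothetical.

```python
import random

def walk(x0, N):
    # symmetric random walk started at x0, stopped when it hits 0 or N
    x, t = x0, 0
    while 0 < x < N:
        x += random.choice((-1, 1))
        t += 1
    return x, t

x0, N, trials = 3, 10, 100_000
mean_T = sum(walk(x0, N)[1] for _ in range(trials)) / trials

# Optional stopping on the martingale X_t**2 - t predicts E[T] = x0 * (N - x0).
print(f"empirical E[T] = {mean_T:.2f}, predicted E[T] = {x0 * (N - x0)}")
```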