3,038 research outputs found

    Quantifying Information Leakage in Finite Order Deterministic Programs

    Information flow analysis is a powerful technique for reasoning about the sensitive information exposed by a program during its execution. While past work has proposed information-theoretic metrics (e.g., Shannon entropy, min-entropy, guessing entropy) to quantify such information leakage, we argue that some of these metrics not only yield counter-intuitive measures of leakage but are also inherently prone to conflicts when comparing two programs P1 and P2 -- say, Shannon entropy predicts higher leakage for program P1, while guessing entropy predicts higher leakage for program P2. This paper presents the first attempt towards addressing such conflicts and derives solutions for conflict-free comparison of finite order deterministic programs.
    Comment: 14 pages, 1 figure. A shorter version of this paper is submitted to ICC 201
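    To make the kind of conflict described above concrete, the sketch below compares two toy deterministic programs over a uniform 6-bit secret (a generic illustration, not the paper's own construction): P1 reveals the secret exactly on a handful of inputs, while P2 always reveals its low three bits. Shannon leakage and guessing-entropy reduction rank P2 as leakier, whereas min-entropy leakage ranks P1 higher. The programs, the 64-value secret space and the uniform prior are assumptions made purely for the example.

```python
from collections import Counter
from math import log2

def leakage_measures(secrets, program):
    """For a deterministic program under a uniform prior, derive Shannon leakage,
    min-entropy leakage and guessing-entropy reduction from the partition that
    the observable output induces on the secret space."""
    n = len(secrets)
    blocks = Counter(program(s) for s in secrets)                    # output -> block size
    shannon = -sum((b / n) * log2(b / n) for b in blocks.values())   # = H(output)
    min_entropy = log2(len(blocks))                                  # log2 of #distinct outputs
    prior_guesses = (n + 1) / 2
    posterior_guesses = sum((b / n) * (b + 1) / 2 for b in blocks.values())
    return shannon, min_entropy, prior_guesses - posterior_guesses

secrets = list(range(64))                  # assumed: uniform 6-bit secret
p1 = lambda s: s if s < 8 else -1          # reveals s exactly for 8 of the 64 values
p2 = lambda s: s % 8                       # always reveals the low 3 bits

for name, prog in (("P1", p1), ("P2", p2)):
    print(name, ["%.3f" % m for m in leakage_measures(secrets, prog)])
```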

    A static analysis for quantifying information flow in a simple imperative language

    We propose an approach to quantify interference in a simple imperative language that includes a looping construct. In this paper we focus on a particular case of this definition of interference: leakage of information from private variables to public ones via a Trojan Horse attack. We quantify leakage in terms of Shannon's information theory, and we motivate our definition by proving a result relating it to the classical notion of programming-language interference. The major contribution of the paper is a quantitative static analysis, based on this definition, for such a language. The analysis uses non-trivial information-theoretic results such as Fano's inequality and L1 inequalities to provide reasonable bounds for conditional statements. While-loops are handled by integrating a qualitative flow-sensitive dependency analysis into the quantitative analysis.
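    The quantitative definition this abstract builds on, leakage as the mutual information between the private input and the public output, can be computed directly for small programs. The sketch below does so by brute force for a toy conditional with a Trojan-horse flavoured copy. The example program, the 4-bit private variable and the uniform input distribution are assumptions made for illustration; the paper's actual contribution, a static analysis that bounds this quantity without enumeration, is not reproduced here.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(H; L) computed from an explicit joint distribution, given as a list
    of (high, low) outcome pairs that are assumed equally likely."""
    n = len(pairs)
    joint = Counter(pairs)
    ph = Counter(h for h, _ in pairs)
    pl = Counter(l for _, l in pairs)
    return sum((c / n) * log2((c / n) / ((ph[h] / n) * (pl[l] / n)))
               for (h, l), c in joint.items())

def program(h):
    """A tiny imperative-style program: the private variable h flows into the
    public variable l through a conditional (a Trojan-horse style copy)."""
    if h % 4 == 0:
        l = h            # leaks h entirely on a quarter of the inputs
    else:
        l = 1
    return l

pairs = [(h, program(h)) for h in range(16)]   # uniform 4-bit private input
print("leakage (bits): %.3f" % mutual_information(pairs))
```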

    Quantifying Timing Leaks and Cost Optimisation

    We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage, and we use this estimate for cost optimisation.
    Comment: 16 pages, 2 figures, 4 tables. A shorter version is included in the proceedings of ICICS'08 - 10th International Conference on Information and Communications Security, 20-22 October 2008, Birmingham, UK
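    A minimal sketch of the underlying idea, namely measuring how much an observable running time reveals about a secret, is shown below. It assumes a toy program whose number of loop iterations depends on the secret, together with a uniform 6-bit secret. The paper's notion is richer (the attacker also observes the probabilities of low-variable values, and the estimate feeds a cost optimisation), so this is only the simplest special case.

```python
from collections import Counter
from math import log2

def steps(secret):
    """Execution 'steps' (loop iterations) of a toy program whose running time
    depends on the secret: it loops once per trailing zero bit."""
    count = 0
    while secret > 0 and secret % 2 == 0:
        secret //= 2
        count += 1
    return count

n = 64                                          # assumed: uniform 6-bit secret
timing = Counter(steps(s) for s in range(n))
# For a deterministic timing channel and a uniform secret, the leakage through
# the observable running time is the Shannon entropy of the step-count distribution.
leak = -sum((c / n) * log2(c / n) for c in timing.values())
print("timing leakage (bits): %.3f" % leak)
```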

    Quantitative information flow under generic leakage functions and adaptive adversaries

    We put forward a model of action-based randomization mechanisms to analyse quantitative information flow (QIF) under generic leakage functions and possibly adaptive adversaries. This model subsumes many of the QIF models proposed so far. Our main contributions include the following: (1) we identify mild general conditions on the leakage function under which it is possible to derive general and significant results on adaptive QIF; (2) we contrast the efficiency of adaptive and non-adaptive strategies, showing that the latter are as efficient as the former in terms of length, up to an expansion factor bounded by the number of available actions; (3) we show that the maximum information leakage over strategies, given a finite time horizon, can be expressed in terms of a Bellman equation. This can be used to compute an optimal finite strategy recursively, by resorting to standard methods like backward induction.
    Comment: Revised and extended version of a conference paper with the same title that appeared in Proc. of FORTE 2014, LNCS
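    The Bellman-equation view in point (3) can be illustrated with a small backward-induction sketch. The leakage function is fixed here to Shannon uncertainty over a uniform 3-bit secret, and the three actions are invented for the example, whereas the paper treats generic leakage functions. The value function below gives the minimal expected remaining uncertainty an adaptive adversary can reach within a finite horizon; the maximum leakage is the prior uncertainty minus that value.

```python
from functools import lru_cache
from math import log2

# Assumed toy mechanism: the adversary adaptively picks one action per round
# and observes its result on the hidden secret.
SECRETS = tuple(range(8))                       # uniform 3-bit secret
ACTIONS = {
    "parity":   lambda s: s % 2,
    "high_bit": lambda s: s >> 2,
    "is_zero":  lambda s: int(s == 0),
}

@lru_cache(maxsize=None)
def remaining(knowledge, horizon):
    """Bellman-style value: minimal expected Shannon uncertainty about the secret
    after `horizon` adaptively chosen actions, starting from the adversary's
    current knowledge set (backward induction with memoisation)."""
    if horizon == 0 or len(knowledge) == 1:
        return log2(len(knowledge))
    best = log2(len(knowledge))                 # baseline: take no useful action
    for act in ACTIONS.values():
        blocks = {}
        for s in knowledge:
            blocks.setdefault(act(s), []).append(s)
        expected = sum(len(b) / len(knowledge) *
                       remaining(tuple(sorted(b)), horizon - 1)
                       for b in blocks.values())
        best = min(best, expected)
    return best

horizon = 2
leak = log2(len(SECRETS)) - remaining(SECRETS, horizon)
print("max adaptive leakage in %d rounds: %.3f bits" % (horizon, leak))
```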

    The Complexity of Quantitative Information Flow in Recursive Programs

    Information-theoretic measures based upon mutual information can be employed to quantify the information that an execution of a program reveals about its secret inputs. The information leakage bounding problem asks whether the information leaked by a program is bounded by a given threshold. We consider this problem in two scenarios: a) the outputs of the program are revealed, and b) the timing (measured in the number of execution steps) of the program is revealed. For both scenarios, we establish complexity results in the context of deterministic boolean programs, both with and without recursion. In particular, we prove that for recursive programs the information leakage bounding problem is no harder than checking reachability.
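    The bounding problem itself is easy to state operationally. The sketch below phrases it for a deterministic program by brute-force enumeration of its outputs, taking the leaked information to be the Shannon entropy of the output distribution under a uniform secret. The parity example and the 4-bit input width are assumptions; the paper's results concern the complexity of deciding this question for (possibly recursive) boolean programs and for timing observations, not this naive enumeration.

```python
from collections import Counter
from itertools import product
from math import log2

def shannon_leakage(program, n_bits):
    """Leakage of a deterministic boolean program about its uniformly distributed
    n-bit secret input, measured as the Shannon entropy of the observable output
    distribution (brute-force enumeration)."""
    n = 2 ** n_bits
    outputs = Counter(program(bits) for bits in product((0, 1), repeat=n_bits))
    return -sum((c / n) * log2(c / n) for c in outputs.values())

def leakage_bounded(program, n_bits, threshold):
    """The leakage bounding problem: does the program leak at most `threshold` bits?"""
    return shannon_leakage(program, n_bits) <= threshold

# Revealing the parity of a 4-bit secret leaks exactly 1 bit.
parity = lambda bits: sum(bits) % 2
print(shannon_leakage(parity, 4), leakage_bounded(parity, 4, 1.0))
```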

    Squeeziness: An information theoretic measure for avoiding fault masking

    Copyright © 2012 Elsevier.
    Fault masking can reduce the effectiveness of a test suite. We propose an information-theoretic measure, Squeeziness, as the theoretical basis for avoiding fault masking. We begin by explaining fault masking and the relationship between collisions and fault masking. We then define Squeeziness and demonstrate by experiment that there is a strong correlation between Squeeziness and the likelihood of collisions. We conclude with comments on how Squeeziness could be the foundation for generating test suites that minimise the likelihood of fault masking.
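    A minimal sketch of one natural reading of the measure is given below, under the assumption that Squeeziness is the entropy a function destroys between its input and its output (the paper's formal definition may differ in detail): the more inputs collide onto the same output, the more bits are lost, and the easier it is for a faulty intermediate value to be masked before it reaches an observable output.

```python
from collections import Counter
from math import log2

def squeeziness(f, domain):
    """Entropy lost when a uniformly distributed domain is pushed through f,
    i.e. H(X) - H(f(X)); larger values mean more collisions and hence a higher
    risk that an erroneous value is masked on its way to the output."""
    n = len(domain)
    image = Counter(f(x) for x in domain)
    out_entropy = -sum((c / n) * log2(c / n) for c in image.values())
    return log2(n) - out_entropy

domain = list(range(256))                                     # assumed 8-bit input domain
print("identity :", squeeziness(lambda x: x, domain))         # 0.0 bits lost (no collisions)
print("mod 16   :", squeeziness(lambda x: x % 16, domain))    # 4.0 bits lost
print("constant :", squeeziness(lambda x: 0, domain))         # 8.0 bits lost
```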