3 research outputs found

    Quantitative Information Flow and Applications to Differential Privacy

    Secure information flow is the problem of ensuring that the information made publicly available by a computational system does not leak information that should be kept secret. Since it is practically impossible to avoid leakage entirely, in recent years there has been a growing interest in the quantitative aspects of information flow, in order to measure and compare the amount of leakage. Information theory is widely regarded as a natural framework providing firm foundations for quantitative information flow. In these notes we review the two main information-theoretic approaches that have been investigated: one based on Shannon entropy and one based on Rényi min-entropy. Furthermore, we discuss some applications in the area of privacy. In particular, we consider statistical databases and the recently proposed notion of differential privacy. Using the information-theoretic view, we discuss the bound that differential privacy induces on leakage, and the trade-off between utility and privacy.
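    As a brief reminder of the two notions the abstract combines (standard definitions, in our own notation; the leakage bound itself is the subject of the notes and is not restated here):

    % epsilon-differential privacy: for all adjacent databases D, D' and all output sets S,
    \[
      \Pr[\mathcal{K}(D) \in S] \;\le\; e^{\epsilon}\, \Pr[\mathcal{K}(D') \in S].
    \]
    % Renyi min-entropy leakage of the channel from secret X to observable Y:
    \[
      \mathcal{L}(X;Y) \;=\; H_\infty(X) - H_\infty(X \mid Y),
      \qquad
      H_\infty(X) = -\log_2 \max_x p(x),
      \qquad
      H_\infty(X \mid Y) = -\log_2 \sum_y \max_x p(x, y).
    \]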

    Privacy Games: Optimal User-Centric Data Obfuscation

    In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing users' privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as through posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. Their utility cost is also no larger than what either the differential or the distortion mechanism imposes. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
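    As a rough illustration of the kind of optimization involved (a sketch only, not the paper's game formulation; the domain size, distance-based loss, uniform prior, and adjacency relation are illustrative assumptions), a utility-optimal mechanism under differential-privacy-style constraints can be computed as a linear program:

    # Sketch: choose an obfuscation mechanism p(o|x) over a small discrete domain
    # minimizing expected quality loss, subject to p(o|x) <= e^eps * p(o|x')
    # for "adjacent" inputs x, x'.
    import numpy as np
    from scipy.optimize import linprog

    k = 4                                   # hypothetical domain size
    eps = np.log(2.0)                       # privacy parameter
    prior = np.full(k, 1.0 / k)             # assumed uniform prior over secrets
    loss = np.abs(np.subtract.outer(np.arange(k), np.arange(k)))  # |x - o| quality loss

    idx = lambda x, o: x * k + o            # flatten variables z[x, o] = p(o|x)

    # Objective: expected loss = sum_{x,o} prior[x] * loss[x, o] * p(o|x)
    c = np.array([prior[x] * loss[x, o] for x in range(k) for o in range(k)])

    # Each row of the mechanism must be a probability distribution.
    A_eq = np.zeros((k, k * k))
    for x in range(k):
        for o in range(k):
            A_eq[x, idx(x, o)] = 1.0
    b_eq = np.ones(k)

    # Differential-privacy-style constraints between adjacent inputs.
    rows = []
    for o in range(k):
        for x in range(k):
            for x2 in range(k):
                if abs(x - x2) == 1:
                    row = np.zeros(k * k)
                    row[idx(x, o)] = 1.0
                    row[idx(x2, o)] = -np.exp(eps)
                    rows.append(row)
    A_ub, b_ub = np.vstack(rows), np.zeros(len(rows))

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    print("expected loss:", round(res.fun, 4))
    print(np.round(res.x.reshape(k, k), 3))   # rows: secrets, columns: reported values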

    Effective verification of confidentiality for multi-threaded programs

    This paper studies how confidentiality properties of multi-threaded programs can be verified efficiently by a combination of newly developed and existing model checking algorithms. In particular, we study the verification of scheduler-specific observational determinism (SSOD), a property that characterizes secure information flow for multi-threaded programs under a given scheduler. Scheduler-specificness allows us to reason about refinement attacks, an important and tricky class of attacks that are notorious in practice. SSOD imposes two conditions: (SSOD-1) all individual public variables have to evolve deterministically, expressed by requiring stuttering equivalence between the traces of each individual public variable, and (SSOD-2) the relative order of updates of public variables is coincidental, i.e., there always exists a matching trace. We verify the first condition by reducing it to the question whether all traces of each public variable are stuttering equivalent. To verify the second condition, we show how the condition can be translated, via a series of steps, into a standard strong bisimulation problem. Our verification techniques can be easily adapted to verify other formalizations of similar information flow properties. We also exploit counterexample generation techniques to synthesize attacks for insecure programs that fail either SSOD-1 or SSOD-2, i.e., showing how the confidentiality of programs can be broken.
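    A minimal sketch of the intuition behind the first condition, restricted to finite traces (this is not the paper's model-checking reduction; the traces below are made up): for a single public variable, stuttering equivalence of two finite traces amounts to the traces agreeing once consecutive repetitions are collapsed.

    from itertools import groupby

    def destutter(trace):
        """Collapse consecutive repetitions: [0, 0, 1, 1, 2] -> [0, 1, 2]."""
        return [value for value, _ in groupby(trace)]

    def stutter_equivalent(trace_a, trace_b):
        """Compare finite traces of one public variable up to stuttering."""
        return destutter(trace_a) == destutter(trace_b)

    # Two schedules that differ only in how long the public variable stalls
    # at each value are accepted; a reordering of its updates is not.
    assert stutter_equivalent([0, 0, 1, 2, 2], [0, 1, 1, 1, 2])
    assert not stutter_equivalent([0, 1, 2], [0, 2, 1])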