
    Formal Verification of Differential Privacy for Interactive Systems

    Differential privacy is a promising approach to privacy-preserving data analysis with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique, based on a form of probabilistic bisimulation relation, for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique.
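
    To give a concrete sense of the kind of system being modeled, here is a minimal Python sketch of a PINQ-style interactive query interface that tracks a privacy budget and perturbs counting queries with Laplace noise. The class and method names are illustrative assumptions, not the paper's automaton model; the paper's contribution is proving that systems like this satisfy differential privacy, not this implementation.

        import random

        def laplace(scale):
            # The difference of two independent exponentials with rate
            # 1/scale is Laplace-distributed with the given scale.
            return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

        class BudgetedQuerySystem:
            """Toy interactive system in the spirit of PINQ: each noisy
            query consumes privacy budget, and queries are refused once
            the budget is exhausted (hypothetical names, sketch only)."""

            def __init__(self, data, total_epsilon):
                self.data = data
                self.remaining = total_epsilon

            def noisy_count(self, predicate, epsilon):
                if epsilon > self.remaining:
                    raise RuntimeError("privacy budget exhausted")
                self.remaining -= epsilon
                true_count = sum(1 for row in self.data if predicate(row))
                # A counting query has sensitivity 1, so Laplace noise of
                # scale 1/epsilon yields epsilon-differential privacy.
                return true_count + laplace(1.0 / epsilon)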

    Advanced Probabilistic Couplings for Differential Privacy

    Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, and 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
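
    Since the running example is the Sparse Vector technique, a short Python sketch of the textbook "above threshold" variant may help fix intuitions. The noise scales follow the standard analysis (Laplace noise of scale 2/epsilon on the threshold and 4/epsilon on each query); this is only a reference implementation of the classic algorithm, not the new variants verified in the paper.

        import random

        def laplace(scale):
            # Difference of two independent exponentials is Laplace-distributed.
            return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

        def above_threshold(queries, database, threshold, epsilon):
            """Classic Sparse Vector ('AboveThreshold'): answer a stream of
            sensitivity-1 queries, halting at the first noisy answer that
            clears the noisy threshold. Satisfies epsilon-DP overall."""
            noisy_t = threshold + laplace(2.0 / epsilon)
            answers = []
            for q in queries:
                if q(database) + laplace(4.0 / epsilon) >= noisy_t:
                    answers.append(True)   # report 'above' and stop
                    break
                answers.append(False)      # report 'below' and continue
            return answers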

    A program logic for union bounds

    We propose a probabilistic Hoare logic aHL based on the union bound, a tool from basic probability theory. While the union bound is simple, it is an extremely common tool for analyzing randomized algorithms. In formal verification terms, the union bound allows flexible and compositional reasoning over possible ways an algorithm may go wrong. It also enables a clean separation between reasoning about probabilities and reasoning about events, which are expressed as standard first-order formulas in our logic. Notably, assertions in our logic are non-probabilistic, even though we can conclude probabilistic facts from the judgments. Our logic can also prove accuracy properties for interactive programs, where the program must produce intermediate outputs as soon as pieces of the input arrive, rather than accessing the entire input at once. This setting also enables adaptivity, where later inputs may depend on earlier intermediate outputs. We show how to prove accuracy for several examples from the differential privacy literature, both interactive and non-interactive.

    Probabilistic computations arise naturally in many areas of computer science. For instance, they are widely used in cryptography, privacy, and security for achieving goals that lie beyond the reach of deterministic programs. However, the correctness of probabilistic programs can be quite subtle, often relying on complex reasoning about probabilistic events. Accordingly, probabilistic computations present an attractive target for formal verification. A long line of research, spanning more than four decades, has focused on expressive formalisms for reasoning about general probabilistic properties, both for purely probabilistic programs and for programs that combine probabilistic and non-deterministic choice (see, e.g., [29, 34, 35]). More recent research investigates specialized formalisms that work with more restricted assertions and proof techniques, aiming to simplify formal verification. As perhaps the purest examples of this approach, some program logics prove probabilistic properties by working purely with non-probabilistic assertions; we call such systems lightweight logics. Examples include probabilistic relational Hoare logic [3] for proving the reductionist security of cryptographic constructions, and the related approximate probabilistic relational Hoare logic [4] for reasoning about differential privacy. These logics rely on the powerful abstraction of probabilistic couplings to derive probabilistic facts from non-probabilistic assertions [7].
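
    The flavor of union-bound reasoning about accuracy is easy to illustrate outside the logic. The Python sketch below bounds the probability that any of k independent Laplace noise terms is large, which is exactly the "sum the failure probabilities of the bad events" step that aHL makes compositional; the concrete numbers are made up for illustration.

        import math

        def laplace_tail(scale, t):
            # P(|Laplace(scale)| > t) = exp(-t / scale) for t >= 0.
            return math.exp(-t / scale)

        def union_bound_failure(k, scale, t):
            """Union bound: Pr[any of k independent noise terms exceeds t]
            is at most the sum of the k individual tail probabilities."""
            return min(1.0, k * laplace_tail(scale, t))

        # Example: 50 noisy counts with noise scale 10 (epsilon = 0.1).
        beta = union_bound_failure(k=50, scale=10.0, t=100.0)
        print(f"all 50 answers are within ±100 with probability >= {1 - beta:.4f}")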

    Computer-aided verification in mechanism design

    In mechanism design, the gold-standard solution concepts are dominant-strategy incentive compatibility and Bayesian incentive compatibility. These solution concepts relieve the (possibly unsophisticated) bidders from the need to engage in complicated strategizing. While incentive properties are simple to state, their proofs are specific to the mechanism and can be quite complex. This raises two concerns. From a practical perspective, checking a complex proof can be a tedious process, often requiring experts knowledgeable in mechanism design. Furthermore, from a modeling perspective, if unsophisticated agents are unconvinced of incentive properties, they may strategize in unpredictable ways. To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties. Because formal proofs can be automatically checked, agents do not need to manually check the properties, or even understand the proof. To demonstrate, we present the verification of a sophisticated mechanism: the generic reduction from Bayesian incentive-compatible mechanism design to algorithm design given by Hartline, Kleinberg, and Malekian. This mechanism presents new challenges for formal verification, including essential use of randomness both from the execution of the mechanism and from the prior type distributions. As an immediate consequence, our work also formalizes Bayesian incentive compatibility for the entire family of mechanisms derived via this reduction. Finally, as an intermediate step in our formalization, we provide the first formal verification of incentive compatibility for the celebrated Vickrey-Clarke-Groves mechanism.
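
    The single-item Vickrey auction, the simplest member of the Vickrey-Clarke-Groves family mentioned above, makes the verified property concrete. The Python sketch below implements the auction and spot-checks, for one hypothetical bidder, that no tested deviation from truthful bidding improves utility; the formal proofs in the paper establish this for all valuations, not by testing.

        def vickrey_auction(bids):
            """Single-item second-price auction: the highest bidder wins
            and pays the second-highest bid (ties broken by index)."""
            winner = max(range(len(bids)), key=lambda i: bids[i])
            price = max(b for i, b in enumerate(bids) if i != winner)
            return winner, price

        def utility(value, bids, bidder):
            winner, price = vickrey_auction(bids)
            return value - price if winner == bidder else 0.0

        # Dominant-strategy check for bidder 0 with value 10 against fixed
        # opponent bids: no tested deviation beats bidding truthfully.
        others = [7.0, 4.0]
        truthful = utility(10.0, [10.0] + others, 0)
        assert all(utility(10.0, [dev] + others, 0) <= truthful
                   for dev in [0.0, 5.0, 8.0, 12.0, 20.0])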

    Relational Symbolic Execution

    Symbolic execution is a classical program analysis technique used to show that programs satisfy or violate given specifications. In this work we generalize symbolic execution to support program analysis for relational specifications in the form of relational properties: properties about two runs of two programs on related inputs, or about two executions of a single program on related inputs. Relational properties are useful for formalizing notions in security and privacy, and for reasoning about program optimizations. We design a relational symbolic execution engine, named RelSym, which supports both interactive refutation and proof of relational properties for programs written in a language with arrays and for-like loops.
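
    The core idea of relational reasoning over two runs can be sketched with an off-the-shelf SMT solver: execute two symbolic copies of a program and ask whether related inputs can produce unrelated outputs. The Python sketch below uses the z3-solver bindings to check a toy monotonicity property; RelSym's engine (arrays, loops, interactive refutation) is far richer, so treat this only as an illustration of the relational setup, not of the tool itself.

        # pip install z3-solver
        from z3 import Ints, Solver, unsat

        x1, x2 = Ints("x1 x2")

        # Two symbolic runs of the program  y := 2*x + 3.
        y1, y2 = 2 * x1 + 3, 2 * x2 + 3

        # Relational property (monotonicity): x1 <= x2 implies y1 <= y2.
        # We search for a counterexample; unsat means the property holds.
        s = Solver()
        s.add(x1 <= x2, y1 > y2)
        assert s.check() == unsat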