
    Differential Privacy for Sequential Algorithms

    We study the differential privacy of sequential statistical inference and learning algorithms that are characterized by a random termination time. Using two examples, the sequential probability ratio test and sequential empirical risk minimization, we show that the number of steps such algorithms execute before terminating can jeopardize the differential privacy of the input data in much the same way as their outputs, and that the usual Laplace mechanism cannot achieve standard differential privacy in these examples. To remedy this, we propose a notion of weak differential privacy and demonstrate its equivalence to the standard notion for large i.i.d. samples. We show that, using the Laplace mechanism, weak differential privacy can be achieved for both the sequential probability ratio test and sequential empirical risk minimization with proper performance guarantees. Finally, we provide preliminary experimental results on the Breast Cancer Wisconsin (Diagnostic) and Landsat Satellite data sets from the UCI repository.
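    Since this abstract turns on what the Laplace mechanism can and cannot achieve, a minimal sketch of that mechanism may help fix ideas. The snippet below is illustrative only and is not taken from the paper; the function name laplace_mechanism and the example counting query are assumptions made for the sake of the example. It shows the textbook epsilon-differentially-private release of a numeric query: add Laplace noise whose scale is the query's sensitivity divided by epsilon.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon (the textbook mechanism)."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query changes by at most 1 when one record is
# added or removed, so its sensitivity is 1.
private_count = laplace_mechanism(true_value=1024, sensitivity=1.0, epsilon=0.5)
```

    The paper's observation is that this output perturbation alone is not enough for sequential algorithms, because the random number of steps taken before termination can itself leak information about the input data.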

    Advanced Probabilistic Couplings for Differential Privacy

    Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, and 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
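    Because the running example centers on the Sparse Vector technique, a brief sketch of its classic AboveThreshold form may be useful for context. This is the standard textbook version (as analyzed by Dwork and Roth), not one of the paper's new variants, and the names below are assumptions for illustration. The key property is that only the index of the first above-threshold query is released, so the privacy cost does not grow with the number of queries examined.

```python
import numpy as np

def above_threshold(queries, database, threshold, epsilon, rng=None):
    """Classic AboveThreshold instance of the Sparse Vector technique.
    Scans a stream of sensitivity-1 queries and reports only the index
    of the first one whose noisy answer exceeds a noisy threshold; this
    is epsilon-DP no matter how many queries are inspected."""
    rng = rng or np.random.default_rng()
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q(database) + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i  # release only the first above-threshold index
    return None  # no query exceeded the threshold
```

    The asymmetric noise scales (2/epsilon on the threshold, 4/epsilon on each query answer) follow the standard calibration from the textbook analysis; subtle miscalibrations of exactly this kind are what motivate machine-checked verification of such algorithms.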