
    Proving Differential Privacy with Shadow Execution

    Recent work on formal verification of differential privacy shows a trend toward usability and expressiveness -- generating correctness proofs of sophisticated algorithms while minimizing the annotation burden on programmers. Sometimes, combining those two goals requires substantial changes to program logics: one recent paper is able to verify Report Noisy Max automatically, but it involves a complex verification system using customized program logics and verifiers. In this paper, we propose a new proof technique, called shadow execution, and embed it into a language called ShadowDP. ShadowDP uses shadow execution to generate proofs of differential privacy with very few programmer annotations and without relying on customized logics and verifiers. In addition to verifying Report Noisy Max, we show that it can verify a new variant of Sparse Vector that reports the gap between some noisy query answers and the noisy threshold. Moreover, ShadowDP reduces the complexity of verification: for all of the algorithms we have evaluated, type checking and verification together take at most 3 seconds, while prior work takes minutes on the same algorithms. Comment: 23 pages, 12 figures, PLDI'19
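
    As a point of reference for the algorithm named above, here is a minimal Python sketch of the standard Report Noisy Max mechanism (a textbook rendering, not ShadowDP's verified encoding of it): add independent Laplace noise of scale 2/epsilon to each sensitivity-1 query answer and release only the index of the largest noisy value.

```python
import numpy as np

def report_noisy_max(query_answers, epsilon, rng=None):
    """Return the index of the largest noisy query answer.

    Each answer is assumed to have sensitivity 1; adding independent
    Laplace(2/epsilon) noise and releasing only the argmax index
    satisfies epsilon-differential privacy.
    """
    if rng is None:
        rng = np.random.default_rng()
    noisy = [a + rng.laplace(scale=2.0 / epsilon) for a in query_answers]
    return int(np.argmax(noisy))

# Example: three counting queries over a private dataset.
print(report_noisy_max([120, 85, 117], epsilon=0.5))
```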

    Coupled Relational Symbolic Execution for Differential Privacy

    Differential privacy is a de facto standard in data privacy, with applications in the private and public sectors. Most of the techniques that achieve differential privacy are based on a judicious use of randomness. However, reasoning about randomized programs is difficult and error prone. For this reason, several techniques have recently been proposed to support designers in proving programs differentially private or in finding violations of differential privacy. In this work we propose a technique based on symbolic execution for reasoning about differential privacy. Symbolic execution is a classic technique used for testing, counterexample generation, and proving the absence of bugs. Here we use symbolic execution to support these tasks specifically for differential privacy. To achieve this goal, we leverage two ideas that have already proven useful in formal reasoning about differential privacy: relational reasoning and probabilistic coupling. Our technique integrates these two ideas and shows how such a combination can be used both to verify differential privacy and to find violations of it.
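
    For context, the relational property that both verification and violation-finding target here is the standard definition of epsilon-differential privacy; coupling-based proofs establish the inequality below by relating corresponding samples of the two program runs on adjacent inputs.

```latex
% epsilon-differential privacy: for a randomized mechanism M, all adjacent
% databases D and D' (differing in one record), and every output set S,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]
```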

    Computationally Bounded Robust Compilation and Universally Composable Security

    Universal Composability (UC) is the gold standard for cryptographic security, but mechanizing proofs of UC is notoriously difficult. A recently-discovered connection between UC and Robust Compilation (RC), a novel theory of secure compilation, provides a means to verify UC proofs using tools that mechanize equality results. Unfortunately, the existing methods apply only to perfect UC security, and real-world protocols relying on cryptography are only computationally secure. This paper addresses this gap by lifting the connection between UC and RC to the computational setting, extending techniques from the RC setting to apply to computational UC security. Moreover, it further generalizes the UC-RC connection beyond computational security to arbitrary equalities, providing a framework that subsumes the existing perfect case and can instantiate future theories with more complex notions of security. This connection allows the use of tools for proofs of computational indistinguishability to properly mechanize proofs of computational UC security. We demonstrate this power by using CryptoVerif to mechanize a proof that parts of the WireGuard protocol are computationally UC secure. Finally, all proofs of the framework itself are verified in Isabelle/HOL.

    Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative Inference Framework for Deep Learning Classification Tasks

    Deep learning classifiers are crucial in the age of artificial intelligence. Device-edge collaborative inference has been widely adopted as an efficient framework for deploying such classifiers in IoT and 5G/6G networks. However, it suffers from accuracy degradation under non-i.i.d. data distributions and from privacy disclosure. For accuracy degradation, directly applying transfer learning or split learning is costly, and privacy issues remain. For privacy disclosure, cryptography-based approaches incur a huge overhead. Other lightweight methods assume that the ground truth is non-sensitive and can be exposed, but in many applications the ground truth is precisely the user's privacy-sensitive information. In this paper, we propose Roulette, a task-oriented, semantic privacy-preserving collaborative inference framework for deep learning classifiers. Beyond the input data, we treat the ground truth of the data as private information. We develop a novel paradigm of split learning in which the back-end DNN is frozen and the front-end DNN is retrained to be both a feature extractor and an encryptor. Moreover, we provide a differential privacy guarantee and analyze the hardness of ground-truth inference attacks. To validate the proposed Roulette, we conduct extensive performance evaluations using realistic datasets, which demonstrate that Roulette effectively defends against various attacks while achieving good model accuracy. In a situation where the non-i.i.d. skew is very severe, Roulette improves inference accuracy by 21% averaged over benchmarks, while making the accuracy of discrimination attacks almost equivalent to random guessing.
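
    To make the split-learning paradigm above concrete, here is a minimal PyTorch-style sketch under assumed stand-in architectures (the front-end and back-end modules are hypothetical, not the paper's models): the back-end is frozen and only the front-end is optimized.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the device-side front-end and edge-side back-end.
front_end = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
back_end = nn.Linear(16, 10)  # pretrained classifier head, kept frozen

for p in back_end.parameters():   # freeze the back-end DNN
    p.requires_grad = False

optimizer = torch.optim.Adam(front_end.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """Retrain only the front-end to act as feature extractor and encryptor."""
    optimizer.zero_grad()
    loss = loss_fn(back_end(front_end(x)), y)
    loss.backward()               # gradients flow through the frozen back-end,
    optimizer.step()              # but only the front-end weights are updated
    return loss.item()

# Example with random data: an (N, 3, H, W) image batch and class labels.
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(x, y))
```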

    Automating Program Analysis For Differential Privacy

    This dissertation explores techniques for automating program analysis, with a focus on validating and securely executing differentially private programs. Differential privacy allows analysts to study general patterns among individuals, while providing strong protections against identity leakage. To automatically check differential privacy for programs, we develop Fuzzi: a three-level logic for differential privacy. Fuzzi’s lowest level is a general-purpose logic; its middle level is apRHL, a program logic for mechanical construction of differential privacy proofs; and its top level is a novel sensitivity logic for tracking sensitivity bounds, a fundamental building block of differential privacy. Some differentially private algorithms have sophisticated proofs that cannot be derived by a compositional typechecking process. To detect incorrect implementations of these algorithms, we develop DPCheck for testing differential privacy automatically. Adapting a well-known “pointwise” proof technique for differential privacy, DPCheck observes runtime program behaviors and derives formulas that constrain potential privacy proofs. Once we are convinced that a program is differentially private, we often still have to trust that the machine executing the program does not misbehave and leak sensitive results. For analytics at scale, computation is often delegated to networked computers that may become compromised. To securely run differentially private analytics at scale, we develop Orchard, a system that can answer many differentially private queries over data distributed among millions of user devices. Orchard leverages cryptographic primitives to employ untrusted computers while preventing them from observing sensitive results.
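
    As background for the sensitivity logic mentioned above, a minimal Python sketch of the standard Laplace mechanism (the generic textbook building block, not Fuzzi's or DPCheck's own machinery): the sensitivity of a query determines how much noise is needed for a given epsilon.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with Laplace noise calibrated to its sensitivity.

    If `value` is f(D) and |f(D) - f(D')| <= sensitivity for all adjacent
    databases D and D', then adding Laplace(sensitivity / epsilon) noise
    makes the release epsilon-differentially private.
    """
    if rng is None:
        rng = np.random.default_rng()
    return value + rng.laplace(scale=sensitivity / epsilon)

# Example: a counting query has sensitivity 1.
print(laplace_mechanism(42, sensitivity=1.0, epsilon=0.1))
```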

    Language-Based Differential Privacy with Accuracy Estimations and Sensitivity Analyses

    This thesis focuses on the development of programming frameworks to enforce, by construction, desirable properties of software systems. In particular, we are interested in enforcing differential privacy -- a mathematical notion of data privacy -- while statically reasoning about the accuracy of computations, along with deriving the sensitivity of arbitrary functions to further strengthen the expressiveness of these systems. To this end, we first introduce DPella, a programming framework for differentially-private queries that allows reasoning about the privacy and accuracy of data analyses. DPella provides a novel component that statically tracks the accuracy of different queries. This component leverages taint analysis to infer statistical independence of the different noises that were added to ensure the privacy of the overall computation. As a result, DPella allows analysts to implement privacy-preserving queries and adjust the privacy parameters to meet accuracy targets, or vice versa.

    In the context of differentially-private systems, the sensitivity of a function determines the amount of noise needed to achieve a desired level of privacy. However, establishing the sensitivity of arbitrary functions is non-trivial. Consequently, systems such as DPella provided a limited set of functions -- whose sensitivity is known -- to apply over sensitive data, thus hindering the expressiveness of the language. To overcome this limitation, we propose a new approach to derive proofs of sensitivity in programming languages with support for polymorphism. Our approach enriches base types with information about the metric relation between values and applies parametricity to derive a proof of a function's sensitivity. These ideas are formalized in a sound calculus and implemented as a Haskell library called Spar, enabling programmers to prove the sensitivity of their functions through type-checking alone.

    Overall, this thesis contributes to the development of expressive programming frameworks for data analysis with privacy and accuracy guarantees. The proposed approaches are feasible and effective, as demonstrated through the implementation of DPella and Spar.
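
    To illustrate the kind of accuracy estimate such a framework can surface, here is a small Python sketch based on the standard Laplace tail bound (an assumption-level illustration, not DPella's actual analysis): with probability at least 1 - beta, Laplace noise of scale sensitivity/epsilon stays within (sensitivity/epsilon) * ln(1/beta).

```python
import math

def laplace_error_bound(sensitivity, epsilon, beta):
    """Error bound holding with probability at least 1 - beta.

    Laplace noise of scale b = sensitivity / epsilon satisfies
    Pr[|noise| > t] = exp(-t / b), so the released answer is within
    b * ln(1 / beta) of the true answer with probability 1 - beta.
    """
    return (sensitivity / epsilon) * math.log(1.0 / beta)

# Example: a sensitivity-1 count at epsilon = 0.5 is within ~6.0 of the
# true answer with 95% probability.
print(laplace_error_bound(1.0, 0.5, 0.05))
```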

    Raziel: Private and Verifiable Smart Contracts on Blockchains

    Raziel combines secure multi-party computation and proof-carrying code to provide privacy, correctness, and verifiability guarantees for smart contracts on blockchains. This approach effectively addresses DAO and Gyges attacks; the paper describes an implementation and presents examples that demonstrate its practical viability (e.g., private and verifiable crowdfundings and investment funds). Additionally, we show how to use Zero-Knowledge Proofs of Proofs (i.e., Proof-Carrying Code certificates) to prove the validity of smart contracts to third parties before their execution without revealing anything else. Finally, we show how miners could get rewarded for generating pre-processing data for secure multi-party computation. Comment: Support: cothority/ByzCoin/OmniLedge