
    How to Securely Compute the Modulo-Two Sum of Binary Sources

    In secure multiparty computation, mutually distrusting users in a network want to collaborate to compute functions of data that is distributed among them. The users should learn no more about the others' data than what they can infer from their own data and the function values they compute. Previous works have mostly considered the worst case (i.e., without assuming any distribution on the data); Lee and Abbe (2014) is a notable exception. Here, we study the average case (i.e., we work with a distribution on the data), where correctness and privacy are only required asymptotically. For concreteness and simplicity, we consider a secure version of the function computation problem of Körner and Marton (1979), in which two users observe a doubly symmetric binary source with parameter p and a third user wants to compute the XOR. We show that the communication and randomness resources required depend on the level of correctness desired. When zero error and perfect privacy are required, the results of Data et al. (2014) show that the computation is possible if and only if a total rate of 1 bit is communicated between every pair of users and private randomness is consumed at rate 1. In contrast, we show here that if we only require the probability of error to vanish asymptotically in the block length, a lower rate, the binary entropy of p, suffices for all the links and for the private randomness; this also guarantees perfect privacy. We also show that no smaller rates are possible even if privacy is only required asymptotically.
    Comment: 6 pages, 1 figure, extended version of a submission to the IEEE Information Theory Workshop, 201
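
    As a small numerical illustration of the rate gap described above (not drawn from the paper itself), the binary entropy h(p) = -p log2 p - (1-p) log2(1-p) can be compared against the 1 bit per link needed in the zero-error setting; the Python sketch below simply evaluates h(p) for a few values of p.

        import math

        def h(p):
            """Binary entropy in bits; h(0) = h(1) = 0 by convention."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        for p in (0.05, 0.1, 0.25):
            print(f"p = {p}: rate h(p) = {h(p):.3f} bits per link, vs 1 bit in the zero-error setting")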

    A Shannon Approach to Secure Multi-party Computations

    In secure multi-party computations (SMC), parties wish to compute a function on their private data without revealing more information about their data than what the function reveals. In this paper, we investigate two Shannon-type questions about this problem. We first consider the traditional one-shot model for SMC, which does not assume a probabilistic prior on the data. In this model, private communication and randomness are the key enablers of secure computing, and we investigate a notion of randomness cost and capacity. We then move to a probabilistic model for the data and propose a Shannon model for discrete memoryless SMC. In this model, correlations among data are the key enablers of secure computing, and we investigate a notion of dependency that permits the secure computation of a function. While the models and questions are general, this paper focuses on summation functions and relies on polar code constructions.
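
    The paper's constructions rely on polar codes; as a much simpler, generic illustration of how private randomness enables secure summation (this is a standard masking trick, not the authors' scheme, and the modulus Q and party inputs are illustrative), the Python sketch below masks each party's input with pairwise one-time pads that cancel in the total.

        import secrets

        Q = 2**32  # work in Z_Q so the masks are uniform

        def secure_sum(inputs):
            """Toy secure summation with pairwise one-time masks (honest-but-curious model)."""
            n = len(inputs)
            # pairwise random masks r[(i, j)], known only to parties i and j (i < j)
            r = {(i, j): secrets.randbelow(Q) for i in range(n) for j in range(i + 1, n)}
            masked = []
            for i, x in enumerate(inputs):
                m = x
                for j in range(n):
                    if i < j:
                        m = (m + r[(i, j)]) % Q
                    elif j < i:
                        m = (m - r[(j, i)]) % Q
                masked.append(m)          # this masked value is all a party reveals
            return sum(masked) % Q        # the masks cancel, leaving the true sum

        print(secure_sum([3, 5, 11]))     # -> 19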

    Goppa Codes and Their Use in the McEliece Cryptosystems

    We explore the topic of Goppa codes and how they are used in the McEliece Cryptosystem. We first cover the basic terminology needed to understand the rest of the paper. We then explore the definition and limitations of Goppa codes, along with how such codes can be used in a general cryptosystem. Finally, we examine the McEliece Cryptosystem in detail and explain how the security of this method works.
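
    As a toy illustration only (not from the paper), the Python sketch below substitutes a tiny [7,4] Hamming code for the binary Goppa code, so it corrects a single error and offers no real security, but it shows the McEliece structure: a private scrambler S, generator G, and permutation P combine into a public key, encryption adds a correctable error, and decryption undoes each step. All matrices and names are illustrative choices.

        import numpy as np

        # Systematic generator matrix G (4x7) and parity-check matrix H (3x7)
        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        # Private scrambler S (invertible over GF(2)) and its inverse
        S     = np.array([[1,1,0,0],[0,1,1,0],[0,0,1,1],[0,0,0,1]])
        S_inv = np.array([[1,1,1,1],[0,1,1,1],[0,0,1,1],[0,0,0,1]])
        assert np.array_equal(S @ S_inv % 2, np.eye(4, dtype=int))

        # Private permutation P, represented as a column permutation of indices
        perm = np.array([3, 6, 0, 5, 1, 4, 2])

        # Public key: S * G with permuted columns
        G_pub = ((S @ G) % 2)[:, perm]

        def encrypt(m):
            """c = m * G_pub + e, with one random bit error (Hamming corrects one)."""
            e = np.zeros(7, dtype=int)
            e[np.random.randint(7)] = 1
            return (m @ G_pub + e) % 2

        def decrypt(c):
            v = np.empty(7, dtype=int)
            v[perm] = c                      # undo the column permutation
            s = (H @ v) % 2                  # syndrome of the single-error word
            if s.any():                      # locate and flip the erroneous bit
                j = next(k for k in range(7) if np.array_equal(H[:, k], s))
                v[j] ^= 1
            mS = v[:4]                       # systematic code: first 4 bits are m*S
            return (mS @ S_inv) % 2          # unscramble to recover m

        m = np.array([1, 0, 1, 1])
        c = encrypt(m)
        assert np.array_equal(decrypt(c), m)
        print("message:", m, "ciphertext:", c)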

    When is a Function Securely Computable?

    A subset of a set of terminals that observe correlated signals seek to compute a given function of the signals using public communication. It is required that the value of the function be kept secret from an eavesdropper with access to the communication. We show that the function is securely computable if and only if its entropy is less than the "aided secret key" capacity of an associated secrecy generation model, for which a single-letter characterization is provided.
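
    Schematically, with notation assumed here rather than taken from the paper (G = g(X_1, ..., X_m) for the function value and C_S^aided for the aided secret key capacity of the associated secrecy generation model), the characterization stated in the abstract reads:

        g \text{ is securely computable} \;\Longleftrightarrow\; H(G) < C_S^{\mathrm{aided}}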

    Privacy-preserving scoring of tree ensembles: a novel framework for AI in healthcare

    Machine Learning (ML) techniques now impact a wide variety of domains. Highly regulated industries such as healthcare and finance have stringent compliance and data governance policies around data sharing. Advances in secure multiparty computation (SMC) for privacy-preserving machine learning (PPML) can help transform these regulated industries by allowing ML computations over encrypted data containing personally identifiable information (PII). Yet very little SMC-based PPML has been put into practice so far. In this paper we present the very first framework for privacy-preserving classification of tree ensembles with application in healthcare. We first describe the underlying cryptographic protocols that enable a healthcare organization to send encrypted data securely to an ML scoring service and obtain encrypted class labels without the scoring service ever seeing that input in the clear. We then describe the deployment challenges we solved to integrate these protocols into a cloud-based, scalable risk-prediction platform with multiple ML models for healthcare AI. We include system internals and evaluations of our deployment, which supports physicians in driving better clinical outcomes in an accurate, scalable, and provably secure manner. To the best of our knowledge, this is the first such applied framework using SMC-based privacy-preserving machine learning for healthcare.
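
    The abstract does not spell out the protocols themselves; purely as an illustrative sketch of the encrypted-in/encrypted-out flow (not the paper's tree-ensemble protocol, which additionally requires secure comparisons), the Python snippet below additively secret-shares a hypothetical feature vector between two non-colluding scoring servers, which jointly evaluate a simple linear risk score without either server seeing the inputs in the clear. Feature values, weights, and the modulus are made up.

        import secrets

        Q = 2**61 - 1  # modulus for additive sharing (illustrative choice)

        def share(x):
            """Split integer x into two additive shares in Z_Q."""
            s0 = secrets.randbelow(Q)
            return s0, (x - s0) % Q

        # Hypothetical patient feature vector held by the healthcare client
        features = [72, 130, 1, 24]          # e.g. heart rate, systolic BP, flag, BMI
        weights = [2, 1, 50, 3]              # public model weights known to both servers

        # Client splits each feature; server 0 and server 1 each see only one share
        shares0, shares1 = zip(*(share(x) for x in features))

        # Each server computes its partial weighted sum locally
        partial0 = sum(w * s for w, s in zip(weights, shares0)) % Q
        partial1 = sum(w * s for w, s in zip(weights, shares1)) % Q

        # Client recombines the partial results to obtain the risk score
        score = (partial0 + partial1) % Q
        assert score == sum(w * x for w, x in zip(weights, features))
        print("risk score:", score)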