
    Reading in the Dark: Classifying Encrypted Digits with Functional Encryption

    As machine learning grows into a ubiquitous technology with many interesting applications, the privacy of data is becoming a major concern. This paper deals with machine learning on encrypted data. Our contribution is twofold: we first build a new Functional Encryption scheme for quadratic multi-variate polynomials, which outperforms previous schemes. It enables the efficient computation of quadratic polynomials on encrypted vectors, so that only the result is revealed in the clear. We then turn to quadratic networks, a class of machine learning models, and show that their design makes them particularly suited to our encryption scheme. This synergy yields a technique for efficiently recovering a plaintext classification of encrypted data. Finally, we prototype our construction and run it on the MNIST dataset to demonstrate its practical relevance. We obtain 97.54% accuracy, with decryption and encryption taking a few seconds.
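At its core, the pipeline described above evaluates one quadratic form per class on an encrypted input and reveals only the resulting scores. The sketch below shows that plaintext functionality only, as a rough illustration of what functional decryption yields; the per-class matrices are random placeholders rather than trained MNIST parameters, and no encryption is performed.

```python
# Plaintext-only sketch of the quadratic functionality the FE scheme evaluates:
# a per-class score f_c(x) = x^T Q_c x, of which only the outcome would be
# revealed in the clear. Q is random here, not a trained quadratic network.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 784, 10                # flattened 28x28 MNIST digit, 10 classes

x = rng.random(n_features)                     # stands in for one plaintext digit image
Q = rng.standard_normal((n_classes, n_features, n_features)) * 0.01

scores = np.einsum('i,cij,j->c', x, Q, x)      # one quadratic form per class
predicted_digit = int(np.argmax(scores))       # the only value meant to appear in the clear
print(predicted_digit)
```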

    OCEAN: A Built-In Replacement for Mining Pools

    We propose OCEAN, an alternative miner reward system for blockchains that seeks to discourage pooling by providing a pool's variance-mitigation functionality as a blockchain built-in. Our proposal relies on Succinct Non-interactive Arguments of Knowledge (SNARKs), an advanced modern cryptographic tool that enables anyone to prove complex statements with a proof whose size does not grow with the complexity of the statement. We expect that blockchains implementing our proposal would see less pooling centralization than is currently observable in deployed cryptocurrencies.

    Decentralized Multi-Client Functional Encryption for Inner Product

    We consider a situation where multiple parties, owning data that have to be frequently updated, agree to share weighted sums of these data with some aggregator, but do not wish to reveal their individual data and do not trust each other. We combine techniques from Private Stream Aggregation (PSA) and Functional Encryption (FE) to introduce a primitive we call Decentralized Multi-Client Functional Encryption (DMCFE), for which we give a practical instantiation for Inner Product functionalities. This primitive allows various senders to non-interactively generate ciphertexts that support inner-product evaluation, with functional decryption keys that can also be generated non-interactively, in a distributed way, among the senders. Interaction is required during the setup phase only. We prove adaptive security of our constructions, while allowing corruptions of the clients, in the random oracle model.
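The functionality itself can be illustrated without any of the actual cryptography: each client contributes one coordinate, and the holder of a functional key for a weight vector y learns only the weighted sum. The toy sketch below uses additive masks that sum to zero (in the spirit of the PSA techniques the construction builds on) purely to convey the idea; the names and the masking are illustrative placeholders, not the DMCFE scheme itself.

```python
# Toy illustration of the inner-product functionality (no real cryptography):
# clients mask their weighted inputs with shares of zero produced at setup,
# so combining all contributions reveals only <x, y>.
import secrets

PRIME = 2**61 - 1                               # arbitrary modulus for the toy example

def share_zero(n):
    """n additive shares of zero mod PRIME, e.g. agreed on during the setup phase."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((-sum(shares)) % PRIME)
    return shares

clients_x = [3, 14, 15, 92]                     # private inputs, one per client
weights_y = [1, 2, 3, 4]                        # weights the functional key is tied to

masks = share_zero(len(clients_x))
contributions = [(x * y + m) % PRIME for x, y, m in zip(clients_x, weights_y, masks)]

# The masks cancel, so the aggregator learns the inner product and nothing else.
inner_product = sum(contributions) % PRIME
assert inner_product == sum(x * y for x, y in zip(clients_x, weights_y))
print(inner_product)                            # 3*1 + 14*2 + 15*3 + 92*4 = 444
```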

    Multi-Client Functional Encryption with Repetition for Inner Product

    Recently, Chotard et al. proposed a variant of functional encryption for Inner Product, where several parties can independently encrypt inputs for a specific time period or label, such that functional decryption keys reveal exactly the aggregations for the specific functions they are associated with. This was introduced as Multi-Client Functional Encryption (MCFE). In addition, they formalized a Decentralized version (DMCFE), where all the clients must agree and contribute to generate the functional decryption keys: there is no need for a central authority anymore, and the key generation process is non-interactive between the clients. Finally, they designed concrete constructions, for both the centralized and decentralized settings, for the inner-product function family. Unfortunately, their security model had a few limitations for practical use: (1) the clients were assumed not to encrypt two messages under the same label, and nothing was known about the security when this restriction was not satisfied; (2) more dramatically, the adversary was assumed to ask for the ciphertexts coming from all the clients or none, for a given label, and in the case of partial ciphertexts nothing was known about the security either. In this paper, our contributions are three-fold: we describe two conversions that enhance any MCFE or DMCFE for Inner Product secure in their security model to (1) handle repetitions under the same label and (2) deal with partial ciphertexts. These conversions can be applied sequentially, in any order. The latter conversion exploits a new tool, which we call a Secret Sharing Layer (SSL). Finally, we propose a new efficient technique to generate the functional decryption keys in a decentralized way, in the case of Inner Product, relying solely on plain DDH, as opposed to the prior work of Chotard et al., which relies on pairings. As a consequence, from the weak MCFE for Inner Product proposed by Chotard et al., one can obtain an efficient Decentralized MCFE for Inner Product that handles repetitions and partial ciphertexts. Keywords: Functional Encryption, Inner Product, Multi-Client, Decentralized.
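One way to build intuition for why partial ciphertexts are a problem, and how a secret-sharing-style layer can neutralize them, is to mask each client's contribution with label-dependent pairwise masks that cancel only when every client's ciphertext for that label is present. The sketch below is a standard pairwise-masking toy chosen to convey this flavour; it is not the paper's SSL construction, and the PRF, keys, and values are illustrative.

```python
# Toy sketch: label-dependent pairwise masks that cancel only over the full set
# of clients, so an incomplete set of contributions stays masked.
# This conveys the flavour of a "need every share" layer, not the actual SSL.
import hashlib

def prf(key: bytes, label: str) -> int:
    return int.from_bytes(hashlib.sha256(key + label.encode()).digest(), 'big')

n = 4
pairwise_keys = {(i, j): bytes([i, j]) for i in range(n) for j in range(i + 1, n)}

def mask(i: int, label: str) -> int:
    total = 0
    for j in range(n):
        if j == i:
            continue
        k = pairwise_keys[(min(i, j), max(i, j))]
        total += prf(k, label) if i < j else -prf(k, label)
    return total

values = [10, 20, 30, 40]
label = "2024-06-01"                            # hypothetical label / time period
masked = [v + mask(i, label) for i, v in enumerate(values)]

assert sum(masked) == sum(values)               # all four contributions: masks cancel
assert sum(masked[:3]) != sum(values[:3])       # a partial set reveals nothing useful
print(sum(masked))                              # 100
```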

    Partially Encrypted Machine Learning using Functional Encryption

    Machine learning on encrypted data has received a lot of attention thanks to recent breakthroughs in homomorphic encryption and secure multi-party computation. It allows outsourcing computation to untrusted servers without sacrificing the privacy of sensitive data. We propose a practical framework to perform partially encrypted and privacy-preserving predictions which combines adversarial training and functional encryption. We first present a new functional encryption scheme to efficiently compute quadratic functions, so that the data owner controls what can be computed but is not involved in the calculation: it provides a decryption key which allows one to learn a specific function evaluation of some encrypted data. We then show how to use it in machine learning to partially encrypt neural networks with quadratic activation functions at evaluation time, and we provide a thorough analysis of the information leaks based on indistinguishability of data items of the same label. Last, since most encryption schemes cannot deal with the final thresholding operation used for classification, we propose a training method to prevent selected sensitive features from leaking, which adversarially optimizes the network against an adversary trying to identify these features. This is interesting for several existing works using partially encrypted machine learning, as it comes with little reduction in the model's accuracy and significantly improves data privacy.
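A minimal sketch, under editor-chosen assumptions, of the two training ingredients the framework combines: a front part with a quadratic activation (the kind of computation a quadratic FE scheme can evaluate over ciphertexts) and an adversarial term that penalizes recovering a chosen sensitive attribute from the intermediate representation. Layer sizes, the binary sensitive attribute, the trade-off weight lam, and the random stand-in data are all illustrative, not the paper's actual setup.

```python
# Sketch of adversarial training against sensitive-feature leakage from a
# quadratic-activation front end. All dimensions, data, and the weight `lam`
# are placeholders; this is not the authors' training procedure verbatim.
import torch
import torch.nn as nn

class QuadraticFront(nn.Module):
    """Part intended to run over encrypted inputs: linear map followed by squaring."""
    def __init__(self, in_dim=784, hidden=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)

    def forward(self, x):
        return self.proj(x) ** 2                 # quadratic activation

front = QuadraticFront()
task_head = nn.Linear(32, 10)                    # digit classifier on the revealed scores
adversary = nn.Linear(32, 2)                     # tries to recover a binary sensitive feature

opt_model = torch.optim.Adam(list(front.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce, lam = nn.CrossEntropyLoss(), 0.5

x = torch.rand(64, 784)                          # stand-in batch, not real MNIST data
y_task = torch.randint(0, 10, (64,))
y_sensitive = torch.randint(0, 2, (64,))

for _ in range(10):
    # 1) adversary learns to predict the sensitive feature from the representation
    h = front(x)
    adv_loss = ce(adversary(h.detach()), y_sensitive)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) model learns the task while making the adversary's job harder
    h = front(x)
    model_loss = ce(task_head(h), y_task) - lam * ce(adversary(h), y_sensitive)
    opt_model.zero_grad(); model_loss.backward(); opt_model.step()
```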
