5 research outputs found

    Private Functional Encryption – Hiding What Cannot Be Learned Through Function Evaluation

    Functional encryption (FE) is a generalization of many commonly employed cryptographic primitives, such as keyword search encryption (KS), identity-based encryption (IBE), inner-product encryption (IPE) and attribute-based encryption (ABE). In an FE scheme, the holder of a master secret key can issue tokens associated with functions of its choice. Possessing a token for f allows one to recover f(m), given an encryption of m. Just as it is important that ciphertexts preserve data privacy, in various scenarios it is also important that tokens do not expose their associated function; a notable example is the use of FE to search over encrypted data without revealing the search query. Function privacy is an emerging notion that aims to address this problem. The difficulty of formalizing it lies in the verification functionality itself: the holder of a token for function f may encrypt arbitrary messages using the public key and thereby obtain a large number of evaluations of f. Prior privacy models in the literature were fine-tuned for specific functionalities, did not model correlations between ciphertexts and decryption tokens, or fell under strong uninstantiability results. Our first contribution is a new indistinguishability-based privacy notion that overcomes these limitations and is flexible enough to capture all previously proposed indistinguishability-based definitions as particular cases.

    The second contribution of this thesis is five constructions of private functional encryption supporting different classes of functions and meeting varying degrees of security: (1) a white-box construction of an anonymous IBE scheme based on composite-order groups, shown to be secure in the absence of correlated messages; (2) a simple and functionality-agnostic black-box construction from obfuscation, also shown to be secure in the absence of correlated messages; (3) a more elaborate, still functionality-agnostic construction that achieves a form of function privacy tolerating limited correlations between messages and functions; (4) a KS scheme achieving privacy in the presence of correlated messages beyond all previously proposed indistinguishability-based security definitions; (5) a KS construction that achieves our strongest notion of privacy, but relies on a more expressive form of obfuscation than the previous construction.

    The standard approach in FE is to model complex functions as circuits, which yields inefficient evaluations over large inputs. As our third contribution, we propose a new primitive that we call “updatable functional encryption” (UFE), where instead of circuits we deal with RAM programs, which are closer to how programs are expressed in von Neumann architecture. We impose strict efficiency constraints and we envision tokens that are capable of updating the ciphertext, over which other tokens can subsequently be executed. We define a security notion for our primitive and propose a candidate construction from obfuscation, which serves as a starting point towards the realization of other schemes and contributes to the study of how to compute RAM programs over public-key encrypted data.
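    As a reading aid, here is a minimal Python sketch of the FE syntax described above (setup, token generation, encryption, evaluation), with keyword search as the example functionality. All names are illustrative and the "encryption" is a plain placeholder; the sketch conveys only the interface, not any of the thesis's constructions or their security.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Token:
        f: Callable[[Any], Any]      # a real scheme embeds f without exposing it (function privacy)

    class ToyFE:
        def setup(self):
            self.msk = object()      # master secret key (placeholder)
            self.pk = object()       # public key (placeholder)
            return self.pk

        def token_gen(self, f):
            # The holder of the master secret key issues a token for a function f of its choice.
            return Token(f)

        def encrypt(self, m):
            return ("ct", m)         # placeholder ciphertext; no actual hiding

        def decrypt(self, token, ct):
            # A token for f recovers f(m) from an encryption of m, and nothing more about m.
            _, m = ct
            return token.f(m)

    # Keyword search (KS) as a special case: the token tests for a hidden keyword.
    fe = ToyFE()
    fe.setup()
    tk = fe.token_gen(lambda doc: "obfuscation" in doc)
    ct = fe.encrypt("notes on functional encryption and obfuscation")
    print(fe.decrypt(tk, ct))        # True -- only the predicate's value is revealed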

    Updatable Functional Encryption

    No full text
    Functional encryption (FE) allows an authority to issue tokens associated with various functions; the holder of a token for function f learns only f(D) from a ciphertext that encrypts D. The standard approach is to model f as a circuit, which yields inefficient evaluations over large inputs. Here, we propose a new primitive that we call updatable functional encryption (UFE), where instead of circuits we deal with RAM programs, which are closer to how programs are expressed in von Neumann architecture. We impose strict efficiency constraints, in that the run-time of a token P' on ciphertext CT is proportional to the run-time of its clear-form counterpart (program P on memory D) up to a polylogarithmic factor in the size of D, and we envision tokens that are capable of updating the ciphertext, over which other tokens can subsequently be executed. We define a security notion for our primitive and propose a candidate construction from obfuscation, which serves as a starting point towards the realization of other schemes and contributes to the study of how to compute RAM programs over public-key encrypted data.
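    A minimal interface sketch, in Python, of the UFE workflow just described: a token runs a RAM program over the (placeholder-) encrypted memory, returns an output, and updates the ciphertext so that subsequent tokens execute over the new state. Names are illustrative; neither the efficiency constraint nor any security property is actually enforced here.

    from typing import Any, Callable, Tuple

    Memory = dict                                    # the RAM program's memory D
    Ciphertext = Tuple[str, Memory]

    def encrypt(D: Memory) -> Ciphertext:
        return ("ct", dict(D))                       # placeholder encryption of the memory

    def token_gen(P: Callable[[Memory], Any]) -> Callable[[Ciphertext], Tuple[Any, Ciphertext]]:
        # In the intended primitive, running the token on CT should cost
        # time(P on D) * polylog(|D|); here it simply runs P in the clear.
        def run(ct: Ciphertext) -> Tuple[Any, Ciphertext]:
            _, D = ct
            out = P(D)                               # P may read and write D (update the state)
            return out, ("ct", D)
        return run

    # Two tokens executed in sequence over the (updated) ciphertext.
    ct = encrypt({"counter": 0})
    t_inc = token_gen(lambda D: D.update(counter=D["counter"] + 1) or D["counter"])
    t_read = token_gen(lambda D: D["counter"])
    _, ct = t_inc(ct)
    out, ct = t_read(ct)
    print(out)                                       # 1 -- the second token sees the update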

    Private Functional Encryption: Indistinguishability-Based Definitions and Constructions from Obfuscation

    Private functional encryption guarantees not only that the information in ciphertexts is hidden, but also that the circuits embedded in decryption tokens are protected. A notable use case of this notion is query privacy in searchable encryption. Prior privacy models in the literature were fine-tuned for specific functionalities (namely, identity-based encryption and inner-product encryption), did not model correlations between ciphertexts and decryption tokens, or fell under strong uninstantiability results. We develop a new indistinguishability-based privacy notion that overcomes these limitations and give constructions supporting different circuit classes and meeting varying degrees of security. Obfuscation is a building block common to all of these constructions, although the obfuscator required by each is based on different assumptions. In particular, we develop a composable and distributionally secure hyperplane membership obfuscator and use it to build an inner-product encryption scheme that achieves an unprecedented level of privacy, positively answering a question left open by Boneh, Raghunathan and Segev (ASIACRYPT 2013) concerning the extension and realization of enhanced security for schemes supporting this functionality.
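    For intuition, the hyperplane membership functionality can be sketched with the classic "hide the vector in the exponent" idea: publish g^{v_i} for a hidden normal vector v over Z_p, and accept an input x iff the product of the (g^{v_i})^{x_i} equals the identity, i.e. iff <x, v> = 0 (mod p). The toy Python below uses tiny, insecure parameters and only illustrates the functionality; it is not the composable, distributionally secure obfuscator constructed in this work.

    p, q, g = 11, 23, 4          # g generates the order-p subgroup of Z_q^* (demo parameters only)

    def obfuscate(v):
        # The "obfuscation" of the hyperplane with normal vector v over Z_p.
        return [pow(g, vi % p, q) for vi in v]

    def eval_membership(obf, x):
        # Accepts iff <x, v> = 0 (mod p), computed as prod (g^{v_i})^{x_i} == 1.
        acc = 1
        for hi, xi in zip(obf, x):
            acc = (acc * pow(hi, xi % p, q)) % q
        return acc == 1

    obf = obfuscate([3, 5, 2])
    print(eval_membership(obf, [1, 2, 9]))   # 3*1 + 5*2 + 2*9 = 31 = 9 (mod 11) -> False
    print(eval_membership(obf, [2, 1, 0]))   # 3*2 + 5*1 + 2*0 = 11 = 0 (mod 11) -> True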

    Facilitating Privacy-preserving Recommendation-as-a-Service with Machine Learning

    No full text
    Machine-Learning-as-a-Service has become increasingly popular, with Recommendation-as-a-Service as one of its representative examples. In such services, providing privacy protection for users is an important topic. A review of the privacy-preserving solutions proposed in the past decade shows that privacy and machine learning are often seen as two competing goals: although improving cryptographic primitives (e.g., secure multi-party computation (SMC) or homomorphic encryption (HE)) and devising sophisticated secure protocols have led to remarkable achievements, combining them with state-of-the-art recommender systems often yields far-from-practical solutions. We tackle this problem from the direction of machine learning and aim to design crypto-friendly recommendation algorithms, so that efficient solutions can be obtained by directly using existing cryptographic tools. In particular, we propose an HE-friendly recommender system, referred to as CryptoRec, which (1) decouples user features from the latent feature space, avoiding training the recommendation model on encrypted data; and (2) relies only on addition and multiplication operations, making the model straightforwardly compatible with HE schemes. These properties turn the recommendation computation into a simple matrix multiplication. To further improve efficiency, we introduce a sparse-quantization-reuse method which reduces the recommendation-computation time by 9× (compared to using CryptoRec directly) without compromising accuracy. We demonstrate the efficiency and accuracy of CryptoRec on three real-world datasets. CryptoRec allows a server to estimate a user's preferences on thousands of items within a few seconds on a single PC, with the user's data homomorphically encrypted, while its prediction accuracy remains competitive with state-of-the-art recommender systems computing over clear data. Our solution enables Recommendation-as-a-Service on large datasets at a nearly real-time (seconds) latency.
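    The structural point, that prediction reduces to matrix multiplication once user features are derived from the user's own ratings and pre-learned item features, can be illustrated with a short NumPy sketch. Q and P below are hypothetical stand-ins for pre-trained item-feature matrices; biases and the model's exact details are omitted.

    import numpy as np

    n_items, d = 1000, 32
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((n_items, d))    # item features used to derive user features
    P = rng.standard_normal((n_items, d))    # item features used to score items

    r_u = rng.integers(0, 6, size=n_items).astype(float)   # one user's (possibly encrypted) ratings

    u = r_u @ Q                 # user feature vector: no training on the user's own data
    scores = u @ P.T            # predicted preference for every item: additions and multiplications only
    top10 = np.argsort(-scores)[:10]
    print(top10)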

    Novel Collaborative Filtering Recommender Friendly to Privacy Protection

    No full text
    Nowadays, recommender systems are indispensable tools in many information services, and a large number of algorithms have been designed and implemented. However, when fed with very large datasets, state-of-the-art recommendation algorithms often face an efficiency bottleneck: training a recommendation model takes a huge amount of computing resources. In order to satisfy the needs of privacy-savvy users who do not want to disclose their information to the service provider, the complexity of most existing solutions becomes prohibitive. As such, it is an interesting research question to design simple and efficient recommendation algorithms that achieve reasonable accuracy and facilitate privacy protection at the same time. In this paper, we propose an efficient recommendation algorithm, named CryptoRec, which has two nice properties: (1) it can estimate a new user's preferences by directly using a model pre-learned from an expert dataset, and the new user's data is not required to train the model; (2) it can compute recommendations with only addition and multiplication operations. For the evaluation, we first test the recommendation accuracy on three real-world datasets and show that CryptoRec is competitive with state-of-the-art recommenders. Then, we evaluate the performance of the privacy-preserving variants of CryptoRec and show that predictions can be computed in seconds on a PC. In contrast, existing solutions would need tens or hundreds of hours on more powerful computers.
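    To see why property (2) enables privacy-preserving predictions, the sketch below evaluates one item score over homomorphically encrypted ratings, under the simplifying assumption that the pre-learned model collapses into a single plaintext weight matrix W, so each score is linear in the user's rating vector. It uses textbook Paillier with tiny, insecure parameters for demonstration; the paper's actual privacy-preserving variants may use a different HE scheme and protocol.

    import random
    from math import gcd, lcm

    p, q = 101, 113                       # demo primes only, far too small for real use
    n, n2 = p * q, (p * q) ** 2
    g, lam = n + 1, lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def enc(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    def add(c1, c2):        # Enc(m1) (+) Enc(m2) = Enc(m1 + m2)
        return (c1 * c2) % n2

    def mul_plain(c, k):    # Enc(m) (*) k = Enc(k * m), k a plaintext model weight
        return pow(c, k % n, n2)

    # Client encrypts its ratings; the server scores one item with plaintext weights.
    ratings = [5, 0, 3, 4]                # user's ratings (client side)
    weights = [2, 7, 1, 3]                # one column of the pre-learned model W (server side)
    enc_ratings = [enc(r) for r in ratings]

    score_ct = enc(0)
    for c, w in zip(enc_ratings, weights):
        score_ct = add(score_ct, mul_plain(c, w))

    print(dec(score_ct))                  # 5*2 + 0*7 + 3*1 + 4*3 = 25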