
    Information-Theoretic Privacy in Verifiable Outsourced Computation

    Today, it is common practice to outsource time-consuming computations to the cloud. Using the cloud allows anyone to process large quantities of data without having to invest in the necessary hardware, significantly lowering cost requirements. In this thesis we consider the following workflow for outsourced computations: a data owner uploads data to a server; the server then computes some function on the data and sends the result to a third entity, which we call the verifier. In this scenario, two fundamental security challenges arise. First, a malicious server may not perform the computation correctly, leading to an incorrect result. Verifiability allows for the detection of such results; for this to be practical, the verification procedure needs to be efficient. The other major challenge is privacy. If sensitive data, for example medical data, is processed, it is important to prevent unauthorized access to such sensitive information. Particularly sensitive data has to be kept confidential even in the long term. The field of verifiable computing provides solutions to the first challenge: the verifier can check that the returned result was computed correctly. Simultaneously addressing privacy, however, leads to new challenges. In the setting of outsourced computation, privacy comes in different flavors. One is privacy with respect to the server, where the goal is to prevent the server from learning anything about the data it processes. The other is privacy with respect to the verifier. Without verifiable computation, the verifier obviously has less information about the original data than the data owner: it only knows the output of the computation, not its input. If this third-party verifier, however, is given additional cryptographic data to verify the result of the computation, it might use this additional information to learn something about the inputs. To prevent this, a different privacy property is required, which we call privacy with respect to the verifier. Finally, particularly sensitive data has to be kept confidential even in the long term, when computational privacy guarantees are no longer sufficient. Thus, information-theoretic measures are required; these offer protection even against computationally unbounded adversaries. Two well-known approaches to these challenges are homomorphic commitments and homomorphic authenticators. Homomorphic commitments can provide even information-theoretic privacy, thus addressing long-term security, but their verification is computationally expensive. Homomorphic authenticators, on the other hand, provide efficient verification, but no information-theoretic privacy. This thesis provides solutions to these research challenges: efficient verifiability, input-output privacy, and in particular information-theoretic privacy. We introduce a new classification of privacy properties in verifiable computing. We propose function-dependent commitments, a novel framework that combines the advantages of homomorphic commitments and authenticators with respect to verifiability and privacy. We present several novel homomorphic signature schemes that provide verifiability and already address privacy with respect to the verifier; in particular, we construct one scheme tailored to multivariate polynomials of degree two and another tailored to linear functions over multi-sourced data. The latter provides efficient verifiability even for computations over data authenticated under different cryptographic keys. Furthermore, we provide transformations for homomorphic signatures that add privacy: we first show how to add computational privacy and then even information-theoretic privacy, thereby turning homomorphic signatures into function-dependent commitments. By applying these transformations to our homomorphic signature schemes, we obtain verifiable computing schemes with information-theoretic privacy.
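
    To fix ideas, the following minimal Python sketch mirrors the outsourcing workflow described in the abstract (data owner, server, verifier). The interface names are ours, and the recompute-and-compare check is only a placeholder: a real verifiable computing scheme replaces it with a verification step that is far cheaper than the computation itself and, ideally, reveals nothing about the inputs.

```python
# Illustrative sketch of the data owner / server / verifier roles; the
# placeholder tag and proof fields stand in for the cryptographic material
# (authenticators, commitments) that the thesis constructs.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class AuthenticatedData:
    values: List[int]
    tag: Optional[object] = None      # placeholder for an authenticator/commitment

@dataclass
class Result:
    output: int
    proof: Optional[object] = None    # placeholder for the correctness proof

def owner_upload(values: List[int]) -> AuthenticatedData:
    """Data owner authenticates its inputs before outsourcing them."""
    return AuthenticatedData(values)

def server_compute(data: AuthenticatedData,
                   f: Callable[[List[int]], int]) -> Result:
    """The (possibly malicious) server evaluates f and returns result + proof."""
    return Result(output=f(data.values))

def verifier_check(result: Result, data: AuthenticatedData,
                   f: Callable[[List[int]], int]) -> bool:
    """Placeholder check by recomputation; a real scheme instead verifies the
    proof against the tag, without redoing (or even seeing) the computation."""
    return result.output == f(data.values)

if __name__ == "__main__":
    data = owner_upload([3, 1, 4, 1, 5])
    res = server_compute(data, sum)
    print("accepted:", verifier_check(res, data, sum))
```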

    The Simplest Multi-key Linearly Homomorphic Signature Scheme

    We consider the problem of outsourcing computation on data authenticated by different users. Our aim is to describe and implement the simplest possible solution to provide data integrity in cloud-based scenarios. Concretely, our multi-key linearly homomorphic signature scheme (mklhs) allows users to upload signed data to a server, and at any later point in time any third party can query the server to compute a linear combination of data authenticated by different users and check the correctness of the returned result. Our construction generalizes Boneh et al.'s linearly homomorphic signature scheme (PKC'09) to the multi-key setting and relies on basic tools of pairing-based cryptography. Compared to existing multi-key homomorphic signature schemes, our mklhs is a conceptually simple and elegant direct construction, which trades off privacy for efficiency. The simplicity of our approach leads to a very efficient construction that enjoys significantly shorter signatures and higher performance than previous proposals. Finally, we implement mklhs using two different pairing-friendly curves at the 128-bit security level, a Barreto-Lynn-Scott curve and a Barreto-Naehrig curve. Our benchmarks illustrate interesting performance trade-offs between the two curves, involving the cost of exponentiation and hashing in pairing groups. We provide a discussion of these trade-offs that can be useful to other implementers of pairing-based protocols.
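
    Schematically, and in our own notation rather than the paper's, a multi-key linearly homomorphic signature lets the server combine signatures $\sigma_{k,i}$ on messages $m_{k,i}$, produced under the keys of different users $k$, into one short signature on the value of a declared linear function, which the verifier checks against all public keys involved:

\[
y = \sum_{k}\sum_{i} c_{k,i}\, m_{k,i},
\qquad
\sigma \leftarrow \mathsf{Eval}\bigl(\{c_{k,i}\}, \{\sigma_{k,i}\}\bigr),
\qquad
\mathsf{Verify}\bigl(\{\mathsf{pk}_k\}, \{c_{k,i}\}, y, \sigma\bigr) \in \{0,1\}.
\]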

    Be More and be Merry: Enhancing Data and User Authentication in Collaborative Settings

    Cryptography is the science and art of keeping information secret from unintended parties. But how can we determine who is an intended party and who is not? Authentication is the branch of cryptography that aims at confirming the source of data or at proving the identity of a person. This Ph.D. thesis is a study of different ways to perform cryptographic authentication of data and users. The main contributions are contained in the six papers included in this thesis and cover the following research areas: (i) homomorphic authentication; (ii) server-aided verification of signatures; (iii) distance-bounding authentication; and (iv) biometric authentication. The investigation moves towards collaborative settings, that is, application scenarios where different and mutually distrustful entities work jointly towards a common goal. The results presented in this thesis allow for secure and efficient authentication when more entities are involved, hence the title “be more and be merry”. Concretely, the first two papers in the collection are on homomorphic authenticators and provide an in-depth study of how to enhance existing primitives with multi-key functionalities. In particular, the papers extend homomorphic signatures and homomorphic message authentication codes to support computations on data authenticated using different secret keys. The third paper explores signer anonymity in the area of server-aided verification and provides new secure constructions. The fourth paper is in the area of distance-bounding authentication and describes a generic method to make existing protocols authenticate not only direct neighbors but also entities located two hops away. The last two papers investigate the leakage of information that affects a special family of biometric authentication systems, and how to combine verifiable computation techniques with biometric authentication in order to mitigate known attacks.

    Homomorphic Proxy Re-Authenticators and Applications to Verifiable Multi-User Data Aggregation

    We introduce the notion of homomorphic proxy re-authenticators, a tool that adds security and verifiability guarantees to multi-user data aggregation scenarios. It allows distinct sources to authenticate their data under their own keys, and a proxy can transform these individual signatures or message authentication codes (MACs) into a MAC under a receiver's key without having access to that key. In addition, the proxy can evaluate arithmetic circuits (functions) on the inputs so that the resulting MAC corresponds to the evaluation of the respective function. As the messages authenticated by the sources may represent sensitive information, we also consider hiding them from the proxy and any other party in the system except the receiver. We provide a general model and two modular constructions of our novel primitive, supporting the class of linear functions. Along the way, we establish various novel building blocks. Most interestingly, we formally define the notion, and present a construction, of homomorphic proxy re-encryption, which may be of independent interest. The latter allows users to encrypt messages under their own public keys, and a proxy can re-encrypt them to a receiver's public key (without knowing any secret key) while also being able to evaluate functions on the ciphertexts. The resulting re-encrypted ciphertext then holds an evaluation of the function on the input messages.
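
    In our own shorthand (not the paper's notation), the dataflow can be summarized as follows: each source $i$ authenticates its message under its own key, the proxy re-authenticates the tags towards the receiver and homomorphically evaluates a linear function $f$ on them, and the receiver verifies the aggregate result against its own MAC key:

\[
\sigma_i \leftarrow \mathsf{Auth}(\mathsf{sk}_i, m_i),
\qquad
\mu \leftarrow \mathsf{Eval}\bigl(f, \{\mathsf{ReAuth}(\mathsf{rk}_{i\to R}, \sigma_i)\}_i\bigr),
\qquad
\mathsf{Verify}\bigl(\mathsf{k}_R, f, f(m_1,\dots,m_n), \mu\bigr) \in \{0,1\}.
\]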

    Designated Verifier/Prover and Preprocessing NIZKs from Diffie-Hellman Assumptions

    In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. Thus far, numerous constructions of NIZKs have been provided in the common reference string (CRS) model (CRS-NIZK) from various assumptions; however, it remains a long-standing open problem to construct them from tools such as pairing-free groups or lattices. Recently, Kim and Wu (CRYPTO'18) made great progress on this problem and constructed the first lattice-based NIZK in a relaxed model called NIZKs in the preprocessing model (PP-NIZKs). In this model, there is a trusted statement-independent preprocessing phase in which secret information is generated for the prover and the verifier. Depending on whether this secret information can be made public, PP-NIZK captures CRS-NIZK, designated-verifier NIZK (DV-NIZK), and designated-prover NIZK (DP-NIZK) as special cases. It was left as an open problem by Kim and Wu whether such NIZKs can be constructed from weak pairing-free group assumptions such as DDH. Furthermore, all constructions of NIZKs from Diffie-Hellman (DH) type assumptions (whether over pairing-free or pairing groups) require the proof size to have a multiplicative overhead $|C| \cdot \mathsf{poly}(\kappa)$, where $|C|$ is the size of the circuit that computes the $\mathbf{NP}$ relation. In this work, we make progress on constructing (DV, DP, PP)-NIZKs of varying flavors from DH-type assumptions. Our results are summarized as follows:
    1. DV-NIZKs for $\mathbf{NP}$ from the CDH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18).
    2. DP-NIZKs for $\mathbf{NP}$ with short proof size from a DH-type assumption over pairing groups. Here, the proof size has an additive overhead $|C| + \mathsf{poly}(\kappa)$ rather than a multiplicative overhead $|C| \cdot \mathsf{poly}(\kappa)$. This is the first construction of such NIZKs (including CRS-NIZKs) that does not rely on the LWE assumption, fully homomorphic encryption, indistinguishability obfuscation, or non-falsifiable assumptions.
    3. PP-NIZKs for $\mathbf{NP}$ with short proof size from the DDH assumption over pairing-free groups. This is the first PP-NIZK that achieves a short proof size from a weak and static DH-type assumption such as DDH. As with the DP-NIZK above, the proof size is $|C| + \mathsf{poly}(\kappa)$. This too serves as a solution to the open problem posed by Kim and Wu (CRYPTO'18).
    Along the way, we construct two new homomorphic authentication (HomAuth) schemes which may be of independent interest.
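
    For a rough sense of what the additive overhead buys (the numbers below are illustrative and not taken from the paper): for a circuit with $|C| \approx 10^{6}$ gates and $\mathsf{poly}(\kappa)$ on the order of $10^{3}$ at $\kappa = 128$,

\[
|C| \cdot \mathsf{poly}(\kappa) \approx 10^{6} \cdot 10^{3} = 10^{9}
\quad\text{versus}\quad
|C| + \mathsf{poly}(\kappa) \approx 10^{6} + 10^{3} \approx 10^{6},
\]

    so the short-proof constructions shrink the proof by roughly the $\mathsf{poly}(\kappa)$ factor.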

    Labeled Homomorphic Encryption: Scalable and Privacy-Preserving Processing of Outsourced Data

    We consider the problem of privacy-preserving processing of outsourced data, where a Cloud server stores data provided by one or multiple data providers and is then asked to compute several functions over it. We propose an efficient methodology that solves this problem with the guarantee that an honest-but-curious Cloud learns no information about the data and the receiver learns nothing more than the results. Our main contribution is the proposal and efficient instantiation of a new cryptographic primitive called Labeled Homomorphic Encryption (labHE). The fundamental insight underlying this new primitive is that homomorphic computation can be significantly accelerated whenever the program being computed over the encrypted data is known to the decrypter and is not secret; previous approaches to homomorphic encryption do not allow for such a trade-off. Our realization and implementation of labHE targets computations that can be described by degree-two multivariate polynomials, which capture an important range of statistical functions. As a specific application, we consider the problem of privacy-preserving Genetic Association Studies (GAS), which require computing risk estimates for given traits from statistically relevant features in the human genome. Our approach allows performing GAS efficiently, non-interactively, and without compromising either the privacy of the patients or the potential intellectual property that test laboratories may want to protect.
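
    As a plain, unencrypted illustration of the function class that labHE targets, the snippet below evaluates an arbitrary degree-two multivariate polynomial; the features and coefficients are made up, and labHE would carry out the same kind of evaluation over encrypted inputs.

```python
# Evaluate c0 + sum_i a_i * x_i + sum_{i<=j} b_ij * x_i * x_j, i.e. a
# degree-two multivariate polynomial: the class of computations that labHE
# evaluates over encrypted data. Data and coefficients are purely illustrative.

from typing import Dict, List, Tuple

def degree_two_poly(x: List[float], const: float, linear: List[float],
                    quadratic: Dict[Tuple[int, int], float]) -> float:
    value = const
    value += sum(a * xi for a, xi in zip(linear, x))
    value += sum(coeff * x[i] * x[j] for (i, j), coeff in quadratic.items())
    return value

# Toy "risk score" over three made-up genomic features (e.g. allele counts).
features = [1.0, 0.0, 2.0]
score = degree_two_poly(features,
                        const=0.1,
                        linear=[0.5, -0.2, 0.3],
                        quadratic={(0, 2): 0.05, (1, 1): 0.02})
print(score)   # prints 1.3
```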

    Multi-Key Homomorphic Signatures Unforgeable under Insider Corruption

    Homomorphic signatures (HS) allow the derivation of a signature on the message-function pair $(m, g)$, where $m = g(m_1, \ldots, m_K)$, given the signatures of each of the input messages $m_k$ signed under the same key. Multi-key HS (M-HS), introduced by Fiore et al. (ASIACRYPT'16), further enhances the utility by allowing the evaluation of signatures under different keys. While the unforgeability of existing M-HS notions unrealistically assumes that all signers are honest, we consider the setting where an arbitrary number of signers can be corrupted, which is typical in natural applications of M-HS (e.g., verifiable multi-party computation). Surprisingly, there is a huge gap between M-HS with and without unforgeability under corruption: while the latter can be constructed from standard lattice assumptions (ASIACRYPT'16), we show that the former must rely on non-falsifiable assumptions. Specifically, we propose a generic construction of M-HS with unforgeability under corruption from adaptive zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs) (and other standard assumptions), and then show that such an M-HS implies adaptive zero-knowledge succinct non-interactive arguments (ZK-SNARGs). Our results leave open the pressing question of what level of authenticity can be guaranteed in the multi-key setting under standard assumptions.
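
    In the multi-key setting, the derivation sketched above generalizes as follows (schematic, in our notation rather than the paper's): signatures produced by $K$ different signers under their respective keys can be combined into a signature on the output of $g$, and verification takes all the involved public keys into account:

\[
\sigma_k \leftarrow \mathsf{Sign}(\mathsf{sk}_k, m_k)\ \ (k = 1,\dots,K),
\qquad
\sigma \leftarrow \mathsf{Eval}\bigl(g, \sigma_1, \dots, \sigma_K\bigr),
\qquad
\mathsf{Verify}\bigl(\mathsf{pk}_1, \dots, \mathsf{pk}_K,\, g,\, g(m_1,\dots,m_K),\, \sigma\bigr) \in \{0,1\}.
\]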

    Asymmetric Trapdoor Pseudorandom Generators: Definitions, Constructions, and Applications to Homomorphic Signatures with Shorter Public Keys

    We introduce a new primitive called the asymmetric trapdoor pseudorandom generator (ATPRG), which can be viewed as a pseudorandom generator with two additional trapdoors (a public trapdoor and a secret trapdoor), or as a backdoor pseudorandom generator with one additional trapdoor (a secret trapdoor). Specifically, for users having no knowledge of either trapdoor, ATPRG only generates the public pseudorandom numbers $pr_1, \dots, pr_N$; in this respect it behaves like an ordinary pseudorandom generator. However, users holding the public trapdoor can use any single public pseudorandom number $pr_i$ to recover the whole $pr$ sequence, as with backdoor pseudorandom generators. Furthermore, users holding the secret trapdoor can use the $pr$ sequence to generate a sequence $sr_1, \dots, sr_N$ of secret pseudorandom numbers. ATPRG can help design more space-efficient protocols in which the data/inputs/messages must respect a predefined (unchangeable) order to be processed correctly in a computation or malleable cryptographic system. As an application of ATPRG, we construct the first homomorphic signature scheme (in the standard model) whose public key size is only $O(T)$, independent of the dataset size. As a comparison, the shortest existing public key has size $O(\sqrt{N} + \sqrt{T})$, proposed by Catalano et al. (CRYPTO'15), where $N$ is the dataset size and $T$ is the dimension of the messages. In other words, we provide the first homomorphic signature scheme with $O(1)$-sized public keys for one-dimensional messages.
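
    The following Python stubs give a hypothetical interface for the three access levels described in the abstract (no trapdoor, public trapdoor, secret trapdoor); the names and signatures are ours and do not reflect the paper's construction.

```python
# Hypothetical ATPRG interface, purely to fix ideas; all names and types are
# illustrative and no concrete construction is implied.

from typing import List, Tuple

def atprg_setup(n: int) -> Tuple[object, object, object]:
    """Return (public_parameters, public_trapdoor, secret_trapdoor)."""
    raise NotImplementedError

def atprg_generate(params: object, seed: bytes, n: int) -> List[bytes]:
    """Anyone (no trapdoor): produce the public sequence pr_1, ..., pr_N,
    as with an ordinary pseudorandom generator."""
    raise NotImplementedError

def atprg_recover(public_trapdoor: object, pr_i: bytes) -> List[bytes]:
    """Public-trapdoor holder: recover the whole pr sequence from any pr_i,
    as with a backdoor pseudorandom generator."""
    raise NotImplementedError

def atprg_secret_sequence(secret_trapdoor: object, pr: List[bytes]) -> List[bytes]:
    """Secret-trapdoor holder: derive the secret sequence sr_1, ..., sr_N."""
    raise NotImplementedError
```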

    Adaptively Secure Fully Homomorphic Signatures Based on Lattices

    In a homomorphic signature scheme, given the public key and a vector of signatures $\vec{\sigma} := (\sigma_1, \ldots, \sigma_l)$ over $l$ messages $\vec{\mu} := (\mu_1, \ldots, \mu_l)$, there exists an efficient algorithm to produce a signature $\sigma'$ for $\mu = f(\vec{\mu})$. Given the tuple $(\sigma', \mu, f)$, anyone can then publicly verify the validity of the signature $\sigma'$. Inspired by the recent (selectively secure) key-homomorphic functional encryption for circuits, recent works propose fully homomorphic signature schemes in the selective security model. However, in order to gain adaptive security, one must rely on generic complexity leveraging, which is not only very inefficient but also leads to reductions that are "unfalsifiable". In this paper, we construct the first adaptively secure homomorphic signature scheme that can evaluate any circuit over signed data. For poly-logarithmic depth circuits, our scheme achieves adaptive security under the standard Small Integer Solution (SIS) assumption. For polynomial depth circuits, the security of our scheme relies on sub-exponential SIS; unlike complexity leveraging, however, the security loss in our reduction depends only on the circuit depth, and neither on the message length nor on the dataset size.