1,557 research outputs found

    Revisiting Shared Data Protection Against Key Exposure

    This paper sheds new light on secure data storage in distributed systems. Specifically, it revisits computational secret sharing in a setting where the encryption key is exposed to an attacker. It makes several contributions. First, it defines a security model for encryption schemes with additional resilience against exposure of the encryption key. Precisely, it asks for (1) indistinguishability of plaintexts under full ciphertext knowledge, and (2) indistinguishability for an adversary who learns the encryption key plus all but one share of the ciphertext. Property (2) relaxes the "all-or-nothing" property to a more realistic setting, where the ciphertext is transformed into a number of shares such that the adversary cannot access one of them. Property (1) asks that, unless the user's key is disclosed, no one other than the user can retrieve information about the plaintext. Second, it introduces a new computationally secure encryption-then-sharing scheme that protects the data in the previously defined attacker model. It consists of data encryption followed by a linear transformation of the ciphertext, then its fragmentation into shares, along with secret sharing of the randomness used for encryption. The computational overhead beyond data encryption is reduced by half with respect to the state of the art. Third, it provides, for the first time, cryptographic proofs in this key-exposure context. It emphasizes that the security of the scheme relies only on a simple cryptanalysis-resilience assumption for blockciphers in public-key mode: indistinguishability from random of the sequence of differentials of a random value. Fourth, it provides an alternative scheme relying on the more theoretical random permutation model: it consists of encrypting with sponge functions in duplex mode and then, as before, secret-sharing the randomness.
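
    The encrypt-then-share flow described above can be summarised in a short sketch. The Python fragment below is only a schematic illustration, not the paper's construction: the hash-based keystream stands in for a real blockcipher, the plain n-of-n XOR splitting stands in for the linear transformation and fragmentation step, and all function names are invented for the example.

```python
# Hypothetical sketch of an encrypt-then-share flow (NOT the paper's exact scheme):
# 1. encrypt the plaintext with a fresh random value r,
# 2. split the ciphertext into n XOR shares,
# 3. secret-share r the same way, and hand out (ciphertext share, r share) pairs.
import os, hashlib

def keystream(key: bytes, r: bytes, length: int) -> bytes:
    # toy counter-mode keystream from SHA-256; stands in for a real blockcipher
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + r + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_then_share(key: bytes, plaintext: bytes, n: int):
    r = os.urandom(16)                                   # encryption randomness
    ciphertext = xor(plaintext, keystream(key, r, len(plaintext)))
    # n-of-n XOR sharing of the ciphertext
    ct_shares = [os.urandom(len(ciphertext)) for _ in range(n - 1)]
    last_ct = ciphertext
    for s in ct_shares:
        last_ct = xor(last_ct, s)
    ct_shares.append(last_ct)
    # share the randomness r alongside the ciphertext shares
    r_shares = [os.urandom(len(r)) for _ in range(n - 1)]
    last_r = r
    for s in r_shares:
        last_r = xor(last_r, s)
    r_shares.append(last_r)
    return list(zip(ct_shares, r_shares))

def reconstruct(key: bytes, shares):
    ct = bytes(len(shares[0][0]))
    r = bytes(len(shares[0][1]))
    for c_share, r_share in shares:
        ct, r = xor(ct, c_share), xor(r, r_share)
    return xor(ct, keystream(key, r, len(ct)))

if __name__ == "__main__":
    k = os.urandom(16)
    msg = b"shared data protected against key exposure"
    shares = encrypt_then_share(k, msg, 4)
    assert reconstruct(k, shares) == msg
```

    In this toy version, an adversary who holds the key but misses one share sees only uniformly random material, which is the intuition behind property (2) above.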

    Quantum Simulation Logic, Oracles, and the Quantum Advantage

    Query complexity is a common tool for comparing quantum and classical computation, and it has produced many examples of how quantum algorithms differ from classical ones. Here we investigate in detail the role that oracles play for the advantage of quantum algorithms. We do so by using a simulation framework, Quantum Simulation Logic (QSL), to construct oracles and algorithms that solve some problems with the same success probability and number of queries as the quantum algorithms. The framework can be simulated using only classical resources, at a constant overhead compared to the quantum resources used in quantum computation. Our results clarify the assumptions made and the conditions needed when using quantum oracles. Using the same assumptions on oracles within the simulation framework, we show that for some specific algorithms, such as the Deutsch-Jozsa and Simon's algorithms, there simply is no advantage in terms of query complexity. This does not detract from the fact that quantum query complexity provides examples of how a quantum computer can be expected to behave, which in turn has proved useful for finding new quantum algorithms outside of the oracle paradigm, where the most prominent example is Shor's algorithm for integer factorization. Comment: 48 pages, 46 figures.
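
    To give a flavour of how a purely classical model can match a quantum algorithm's query count, here is a toy sketch in the spirit of frameworks that represent each elementary system by two classical bits (a computational bit and a phase bit). It is written from general knowledge of such toy models, not from the paper's definition of QSL, and it only covers Deutsch's single-bit problem; the oracle construction and all names are assumptions made for the example.

```python
# Toy illustration (in the spirit of two-bit-per-system toy models, not the paper's
# full QSL framework): each "qubit" is a pair (x, p) of classical bits, the oracle
# for f: {0,1} -> {0,1} XORs f(x) into the target's computational bit and kicks the
# target's phase bit back into the input exactly when f is balanced.  One query
# then decides whether f is constant or balanced.
import random

def oracle(f, inp, tgt):
    a = f(0) ^ f(1)                    # 1 iff f is balanced
    x_i, p_i = inp
    x_t, p_t = tgt
    x_t ^= f(x_i)                      # standard reversible evaluation of f
    p_i ^= a & p_t                     # "phase kickback" on the classical phase bit
    return (x_i, p_i), (x_t, p_t)

def deutsch(f):
    inp = (random.randint(0, 1), 0)    # analogue of |+>: value unknown, phase bit 0
    tgt = (random.randint(0, 1), 1)    # analogue of |->: value unknown, phase bit 1
    inp, tgt = oracle(f, inp, tgt)     # a single oracle query
    return "balanced" if inp[1] else "constant"

if __name__ == "__main__":
    for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
        print(deutsch(f))
```

    A single call to oracle suffices here because the constant/balanced information is carried entirely by the classical phase bit, mirroring phase kickback in the quantum algorithm.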

    Password-based group key exchange in a constant number of rounds

    With the development of grids, distributed applications are spread across multiple computing resources and require efficient security mechanisms among the processes. Although authenticated group Diffie-Hellman key exchange protocols seem to be the natural mechanism for supporting these applications, current solutions are limited either by their reliance on public key infrastructures or by their scalability, requiring a number of rounds linear in the number of group members. To overcome these shortcomings, we propose in this paper the first provably secure password-based constant-round group key exchange protocol. It is based on the protocol of Burmester and Desmedt and is provably secure in the random-oracle and ideal-cipher models, under the Decisional Diffie-Hellman assumption. The new protocol is very efficient and fully scalable, since it only requires four rounds of communication and four multi-exponentiations per user. Moreover, the new protocol avoids intricate authentication infrastructures by relying on passwords for authentication.
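
    For context, the unauthenticated Burmester-Desmedt exchange that the protocol builds on can be sketched in a few lines. The toy parameters and the broadcast bookkeeping below are illustrative only; the password-based authentication layer, the random-oracle/ideal-cipher machinery, and the security proof are exactly what this sketch leaves out.

```python
# Minimal sketch of the (unauthenticated) Burmester-Desmedt group key exchange.
# Toy parameters for readability only; a real deployment uses a large prime-order group.
import random

p, g = 23, 5       # toy prime modulus and generator
n = 4              # number of group members (arranged in a ring)

# Round 1: each member i broadcasts z_i = g^{x_i}
x = [random.randrange(1, p - 1) for _ in range(n)]
z = [pow(g, xi, p) for xi in x]

# Round 2: each member i broadcasts X_i = (z_{i+1} / z_{i-1})^{x_i}
def inv(a):
    return pow(a, p - 2, p)            # modular inverse (p is prime)

X = [pow(z[(i + 1) % n] * inv(z[(i - 1) % n]) % p, x[i], p) for i in range(n)]

# Key derivation: member i computes K_i = z_{i-1}^{n x_i} * X_i^{n-1} * ... * X_{i+n-2}^{1}
def session_key(i):
    K = pow(z[(i - 1) % n], n * x[i], p)
    for j in range(1, n):
        K = K * pow(X[(i + j - 1) % n], n - j, p) % p
    return K

keys = [session_key(i) for i in range(n)]
assert len(set(keys)) == 1             # all members agree on the same session key
```

    Every member ends up with g^{x_1 x_2 + x_2 x_3 + ... + x_n x_1} after two broadcast rounds, which is why the round count stays constant as the group grows.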

    POPE: Partial Order Preserving Encoding

    Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE), which produces ciphertexts that preserve the relative order of the underlying plaintexts, thus allowing range and comparison queries to be performed directly on ciphertexts. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads, as are common in "big data" applications, while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(n^ε) non-persistent client storage for 0 < ε < 1, our POPE scheme provides extremely fast batch insertion consisting of a single round, and efficient search with O(1) amortized cost for up to O(n^{1-ε}) search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases. Comment: Appears in the ACM CCS 2016 Proceedings.
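
    The insert-buffer idea behind POPE can be caricatured with a hypothetical single-node sketch: inserts are appended to an unsorted encrypted buffer with no comparisons, and only a search forces the touched values to be ordered with the client's help. The XOR "encryption", the flat buffer in place of the POPE tree, and every class and method name below are stand-ins invented for this illustration.

```python
# Drastically simplified, hypothetical sketch of the POPE insert/search split:
# inserts land in an unsorted ciphertext buffer (single round, no comparisons);
# a range query lazily orders buffered values by asking the client to compare them.
from typing import List

class Client:
    def __init__(self, key: int):
        self.key = key
    def encrypt(self, v: int) -> int:          # XOR as a stand-in for real encryption
        return v ^ self.key
    def decrypt(self, c: int) -> int:
        return c ^ self.key
    def compare(self, c1: int, c2: int) -> bool:
        return self.decrypt(c1) <= self.decrypt(c2)

class Server:
    def __init__(self, client: Client):
        self.client = client
        self.buffer: List[int] = []            # unsorted ciphertexts (pairwise incomparable)
        self.ordered: List[int] = []           # ciphertexts whose relative order was revealed

    def insert(self, ct: int) -> None:
        self.buffer.append(ct)                 # O(1), single round, no comparisons leaked

    def range_query(self, lo_ct: int, hi_ct: int) -> List[int]:
        # lazily merge buffered values into the ordered list using client comparisons
        for ct in self.buffer:
            i = 0
            while i < len(self.ordered) and self.client.compare(self.ordered[i], ct):
                i += 1
            self.ordered.insert(i, ct)
        self.buffer.clear()
        return [c for c in self.ordered
                if self.client.compare(lo_ct, c) and self.client.compare(c, hi_ct)]

if __name__ == "__main__":
    cl = Client(key=0b101010)
    sv = Server(cl)
    for v in (17, 3, 42, 8, 25):
        sv.insert(cl.encrypt(v))
    hits = sv.range_query(cl.encrypt(5), cl.encrypt(30))
    print(sorted(cl.decrypt(c) for c in hits))   # -> [8, 17, 25]
```

    In the real scheme only the parts of the POPE tree relevant to a query get ordered, so data never touched by a search remains pairwise incomparable to the server; the sketch collapses this to a single buffer for brevity.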

    Making an Asymmetric PAKE Quantum-Annoying by Hiding Group Elements
