27 research outputs found

    On the Hardness of Almost All Subset Sum Problems by Ordinary Branch-and-Bound

    Given n positive integers a_1, a_2, …, a_n, and a positive integer right-hand side β, we consider the feasibility version of the subset sum problem, which is the problem of determining whether a subset of a_1, a_2, …, a_n adds up to β. We show that if the right-hand side β is chosen as ⌊r · ∑_{j=1}^n a_j⌋ for a constant 0 < r < 1, and if the a_j's are independent and identically distributed from a discrete uniform distribution taking values in {1, 2, …, ⌊10^{n/2}⌋}, then the probability that the generated subset sum instance requires the creation of an exponential number of branch-and-bound nodes, when one branches on the individual variables in any order, goes to 1 as n goes to infinity.
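The ordinary branch-and-bound procedure the abstract analyzes can be sketched as follows. This is a minimal illustration, not code from the paper; the pruning rules and variable names are my own, and the instance at the bottom is drawn from the distribution described above (with a small n so the demo stays fast):

```python
import random

def subset_sum_bb(a, beta):
    """Feasibility test for subset sum by ordinary branch-and-bound.

    Branches on the variables in index order; a node is pruned when the
    partial sum already exceeds beta, or when even adding every remaining
    item cannot reach beta.  Returns (feasible, nodes_explored).
    """
    n = len(a)
    suffix = [0] * (n + 1)            # suffix[j] = a_j + ... + a_{n-1}
    for j in range(n - 1, -1, -1):
        suffix[j] = suffix[j + 1] + a[j]

    nodes = 0
    def branch(j, s):
        nonlocal nodes
        nodes += 1
        if s == beta:
            return True
        if j == n or s > beta or s + suffix[j] < beta:
            return False              # prune: this subtree is infeasible
        # branch on x_j = 1 first, then x_j = 0
        return branch(j + 1, s + a[j]) or branch(j + 1, s)

    return branch(0, 0), nodes

# Instance drawn as in the paper: a_j uniform on {1, ..., floor(10^(n/2))},
# beta = floor(r * sum(a)) with r = 1/2.
random.seed(0)
n = 18
a = [random.randint(1, int(10 ** (n / 2))) for _ in range(n)]
beta = sum(a) // 2
feasible, nodes = subset_sum_bb(a, beta)
print(feasible, nodes)
```

The node count printed for such random instances grows rapidly with n, which is exactly the behavior the paper proves holds with probability tending to 1.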

    Lattice-based cryptography


    An Implementation of the Chor-Rivest Knapsack Type Public Key Cryptosystem

    The Chor-Rivest cryptosystem is a public key cryptosystem first proposed by MIT cryptographers Ben Zion Chor and Ronald Rivest [Chor84]. More recently Chor has implemented the cryptosystem as part of his doctoral thesis [Chor85]. Derived from the knapsack problem, this cryptosystem differs from earlier knapsack public key systems in that computations to create the knapsack are done over finite algebraic fields. An interesting result of Bose and Chowla supplies a method of constructing knapsacks of higher density than previously attainable [Bose62]. Not only does an increased information rate arise, but the new system so far is immune to the low-density attacks levied against its predecessors, notably those of Lagarias-Odlyzko and Radziszowski-Kreher [Laga85, Radz86]. An implementation of this cryptosystem is really an instance of the general scheme, distinguished by fixing a pair of parameters, p and h, at the outset. These parameters then remain constant throughout the life of the implementation (which supports a community of users). Chor has implemented one such instance of his cryptosystem, where p = 197 and h = 24. This thesis aspires to extend Chor's work by admitting p and h as variable inputs at run time. In so doing, a cryptanalyst is afforded the means to mimic the action of arbitrary implementations. A high degree of success has been achieved with respect to this goal. There are only a few restrictions on the choice of parameters that may be selected. Unfortunately this generality incurs a high cost in efficiency; up to thirty hours of (VAX-11/780) processor time are needed to generate a single key pair in the desired range (p = 243 and h = 18).
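The density the abstract refers to is the standard measure d = n / log2(max a_j) for a subset-sum knapsack; instances with density well below 1 fall to the lattice attacks cited above, while Chor-Rivest parameters push the density up to or past 1. A minimal illustration (not part of the thesis):

```python
import math
import random

def knapsack_density(a):
    """Density d = n / log2(max a_j) of a subset-sum knapsack instance.

    Low-density instances (d well below 1) are the ones vulnerable to the
    Lagarias-Odlyzko style lattice attacks; Bose-Chowla sequences let
    Chor-Rivest reach densities near or above 1.
    """
    return len(a) / math.log2(max(a))

# Knapsacks with n elements of roughly 2n bits each (typical of earlier
# knapsack systems) have density around 0.5:
random.seed(1)
n = 24
low_density = [random.randint(1, 2 ** (2 * n)) for _ in range(n)]
print(round(knapsack_density(low_density), 2))
```

The exact attack threshold is more refined than "d < 1" (later work sharpened it), but the rule of thumb above is enough to see why raising the density defeats the low-density attacks.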

    Reduction algorithms for the cryptanalysis of lattice based asymmetrical cryptosystems

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves: 79-91). Text in English; abstract in Turkish and English. xi, 119 leaves.
    The theory of lattices has attracted a great deal of attention in cryptology in recent years. Several cryptosystems are constructed based on the hardness of lattice problems such as the shortest vector problem and the closest vector problem. The aim of this thesis is to study the most commonly used lattice basis reduction algorithms, namely the Lenstra-Lenstra-Lovász (LLL) and Block Korkine-Zolotarev (BKZ) algorithms, which are utilized to approximately solve the mentioned lattice problems. Furthermore, the most popular variants of these algorithms in practice are evaluated experimentally by varying the common reduction parameter delta, in order to propose some practical assessments of the effect of this parameter on the process of basis reduction. These kinds of practical assessments are believed to have a non-negligible impact on the theory of lattice reduction, and thus on the cryptanalysis of lattice cryptosystems, due to the fact that the contemporary reduction process is mainly controlled by heuristics.
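A minimal floating-point sketch of LLL (not the thesis's implementation; production work uses libraries such as fplll) shows where the reduction parameter delta enters, namely in the Lovász exchange condition:

```python
def lll(basis, delta=0.99):
    """Textbook LLL reduction, floating-point sketch.

    `delta` is the Lovász parameter studied in the thesis: values closer
    to 1 give stronger reduction at higher cost; delta = 3/4 is the
    classical choice, delta = 0.99 is common in practice.
    """
    b = [list(map(float, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovász condition: keep the order if it holds, else swap
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(round(x)) for x in v] for v in b]

print(lll([[4, 1], [3, 1]]))
```

Recomputing the Gram-Schmidt data from scratch at each step keeps the sketch short but makes it far slower than real implementations, which update the coefficients incrementally.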

    High Performance Lattice-based CCA-secure Encryption

    Lattice-based encryption schemes still suffer from a low message throughput per ciphertext. This is mainly due to the fact that the underlying schemes do not tap the full potential of LWE. Many constructions still follow the one-time-pad approach, treating LWE instances as random vectors added to a message, most often an encoded bit vector. Recently, at Financial Crypto 2015, El Bansarkhani et al. proposed a novel encryption scheme based on the A-LWE (Augmented LWE) assumption, where data is embedded into the error term without changing its target distribution. This novelty makes it possible to encrypt much more data than with the traditional one-time-pad approach, and the two concepts can even be combined. In this paper we revisit this approach and propose, among other things, several algebraic techniques that significantly improve the message throughput per ciphertext. Furthermore, we give a thorough security analysis as well as an efficient implementation of the CCA1-secure encryption scheme instantiated with the most efficient trapdoor construction. In particular, we attest that it even outperforms the CPA-secure encryption scheme of Lindner and Peikert presented at CT-RSA 2011.
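The "one-time-pad approach" the abstract contrasts with A-LWE can be seen in a toy symmetric Regev-style scheme: a message bit is encoded as bit · ⌊q/2⌋ and hidden by adding it to a noisy inner product. This is a deliberately insecure teaching sketch (tiny parameters, secret-key variant), not the paper's construction:

```python
import random

# Toy parameters: far too small to be secure, chosen only for demonstration.
q, n = 3329, 16
random.seed(0)

def keygen():
    """Uniform secret vector s in Z_q^n."""
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, bit):
    """One-time-pad style LWE encryption: b = <a, s> + e + bit * floor(q/2)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-3, 3)                  # small error term
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    """Subtract <a, s>; what remains is e + bit * floor(q/2), so round."""
    a, b = ct
    v = (b - sum(x * y for x, y in zip(a, s))) % q
    return 1 if q // 4 <= v < 3 * q // 4 else 0

s = keygen()
assert all(decrypt(s, encrypt(s, bit)) == bit for bit in (0, 1))
```

Note the throughput limitation the abstract describes: each ciphertext of n + 1 ring elements carries a single bit here, whereas the A-LWE approach additionally packs data into the error term e.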

    Generalized Compact Knapsacks, Cyclic Lattices, and Efficient One-Way Functions


    Practical Post-Quantum Signatures for Privacy

    The transition to post-quantum cryptography has been an enormous challenge and effort for cryptographers over the last decade, with impressive results such as the future NIST standards. However, the latter have so far only considered central cryptographic mechanisms (signatures or KEMs) and not more advanced ones, e.g., those targeting privacy-preserving applications. Of particular interest is the family of solutions comprising blind signatures, group signatures and anonymous credentials, for which standards already exist and which are deployed in billions of devices. This family does not have, at this stage, an efficient post-quantum counterpart, although very recent works have improved this state of affairs by offering two different alternatives: either one gets a system with rather large elements but a security proof under standard assumptions, or one gets a more efficient system at the cost of ad-hoc interactive assumptions or weaker security models. Moreover, all these works have only considered size complexity, without implementing the quite complex building blocks their systems are composed of. In other words, the practicality of such systems is still very hard to assess, which is a problem if one envisions a post-quantum transition for the corresponding systems/standards. In this work, we propose a construction of a so-called signature with efficient protocols (SEP), which is the core of such privacy-preserving solutions. By revisiting the approach of Jeudy et al. (Crypto 2023) we manage to get the best of the two alternatives mentioned above, namely short sizes with no compromise on security. To demonstrate this, we plug our SEP into an anonymous credential system, achieving credentials of less than 80 KB. In parallel, we fully implemented our system, and in particular the complex zero-knowledge framework of Lyubashevsky et al. (Crypto'22), which has, to our knowledge, not been done so far. Our work thus not only improves the state of the art on privacy-preserving solutions, but also significantly improves the understanding of their efficiency and the implications for deployment in real-world systems.

    Frontiers in Lattice Cryptography and Program Obfuscation

    In this dissertation, we explore the frontiers of the theory of cryptography along two lines. In the first direction, we explore Lattice Cryptography, which is the primary sub-area of post-quantum cryptographic research. Our first contribution is the construction of a deniable attribute-based encryption scheme from lattices. A deniable encryption scheme is secure not only against eavesdropping attacks, as required by semantic security, but also against stronger coercion attacks performed after the fact. An attribute-based encryption scheme allows ``fine-grained'' access to ciphertexts, allowing a decryption access policy to be embedded in ciphertexts and keys. We achieve both properties simultaneously for the first time from lattices. Our second contribution is the construction of a digital signature scheme that enjoys both short signatures and a completely tight security reduction from lattices. As a matter of independent interest, we give an improved method of randomized inversion of the G gadget matrix, which reduces the noise growth rate in homomorphic evaluations performed in a large number of lattice-based cryptographic schemes, without incurring the high cost of sampling discrete Gaussians. In the second direction, we explore Cryptographic Program Obfuscation. A program obfuscator is a type of cryptographic software compiler that outputs executable code with the guarantee that ``whatever can be hidden about the internal workings of program code, is hidden.'' Indeed, program obfuscation can be viewed as a ``universal and cryptographically complete'' tool. Our third contribution is the first full-scale implementation of secure program obfuscation in software. Our toolchain takes code written in a C-like programming language, specialized for cryptography, and produces secure, obfuscated software. Our fourth contribution is a new cryptanalytic attack against a variety of ``early'' program obfuscation candidates. We provide a general, efficiently testable property of pairs of branching programs, called partial inequivalence, which we show is sufficient for launching an ``annihilation attack'' against several obfuscation candidates based on Garg-Gentry-Halevi multilinear maps.
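The gadget matrix G mentioned above is built from the gadget vector g = (1, 2, 4, …, 2^{k-1}), and its standard deterministic inverse is plain bit decomposition. The dissertation's contribution is a *randomized* (lower-noise) inversion; the sketch below shows only the deterministic baseline it improves on, with q = 2^8 as an illustrative modulus:

```python
# Deterministic gadget inversion (bit decomposition) for q = 2^k.
q = 256
k = q.bit_length() - 1           # k = 8, since q = 2^8

g = [1 << i for i in range(k)]   # gadget vector g = (1, 2, 4, ..., 2^{k-1})

def g_inverse(v):
    """Bit-decompose v in [0, q) into x in {0,1}^k with <g, x> = v.

    The randomized inversion from the dissertation instead samples x from a
    subgaussian distribution over larger entries, trading determinism for a
    lower noise growth rate in homomorphic evaluations.
    """
    return [(v >> i) & 1 for i in range(k)]

v = 173
x = g_inverse(v)
assert sum(gi * xi for gi, xi in zip(g, x)) == v
print(x)   # -> [1, 0, 1, 1, 0, 1, 0, 1]
```

The full matrix G is the block-diagonal stacking of g over n rows, and G-inversion applies this decomposition coordinate-wise.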

    Bi-Deniable Inner Product Encryption from LWE

    Deniable encryption (Canetti et al., CRYPTO '97) is an intriguing primitive that provides a security guarantee not only against eavesdropping attacks, as required by semantic security, but also against stronger coercion attacks performed after the fact. The concept of deniability has since proven useful and powerful in many other contexts, such as leakage resilience, adaptive security of protocols, and security against selective opening attacks. Despite its conceptual usefulness, our understanding of how to construct deniable primitives under standard assumptions is limited. In particular, from standard assumptions such as Learning with Errors (LWE), we have only multi-distributional or non-negligible-advantage deniable encryption schemes, whereas with the much stronger assumption of indistinguishability obfuscation we can obtain at least fully secure sender-deniable PKE and computation. How to achieve deniability for other, more advanced encryption schemes under standard assumptions remains an interesting open question. In this work, we construct a bi-deniable inner product encryption (IPE) scheme in the multi-distributional model without relying on obfuscation as a black box. Our techniques involve new ways of manipulating Gaussian noise, and they lead to a significantly tighter analysis of noise growth in Dual Regev type encryption schemes. We hope these ideas can give insight into achieving deniability and related properties for further, advanced cryptographic constructions under standard assumptions.