6,400 research outputs found

    On the Design of Secure and Fast Double Block Length Hash Functions

    In this work, the security of rate-1 double block length hash functions, which are based on a block cipher with an n-bit block length and a 2n-bit key length, is reconsidered. Counter-examples and new attacks are presented on this general class of rate-1 double block length hash functions, revealing previously undetected flaws in the necessary conditions given by Satoh et al. and Hirose. Preimage and second preimage attacks are presented on Hirose's two examples, which had been left as an open problem. Therefore, although all rate-1 hash functions in this general class fail to be optimally (second) preimage resistant, the necessary conditions are refined to ensure that this general class of rate-1 hash functions is optimally secure against collision attacks. In particular, two typical examples designed under the refined conditions are proven to be indifferentiable from a random oracle in the ideal cipher model. The security results are extended to a new class of rate-1 double block length hash functions in which one block cipher used in the compression function has a key length equal to the block length, while the other has a key length twice the block length.
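
    For orientation, the sketch below shows the general shape of a compression function in this rate-1 class: a 2n-bit chaining state, a 2n-bit message block, and two calls to a block cipher keyed with 2n bits, so rate = 2n / (2 * n) = 1. The particular mixing pattern and the hash-based stand-in for the ideal cipher are illustrative assumptions only; this is not one of the paper's refined or proven-secure examples.

        import hashlib

        N = 16  # toy block length in bytes (n = 128 bits); the cipher key is 2n bits

        def toy_blockcipher(key: bytes, block: bytes) -> bytes:
            # Stand-in for an ideal cipher E with n-bit block and 2n-bit key.
            # NOT a real block cipher; illustrative only.
            assert len(key) == 2 * N and len(block) == N
            return hashlib.sha256(key + block).digest()[:N]

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def compress(g: bytes, h: bytes, m1: bytes, m2: bytes):
            # One rate-1 double-block-length compression step: the state (g, h)
            # is 2n bits, and the 2n-bit message block (m1, m2) is absorbed
            # with two cipher calls. Structural illustration only.
            g_new = xor(toy_blockcipher(h + m1, g), g)
            h_new = xor(toy_blockcipher(g + m2, h), h)
            return g_new, h_new

        def dbl_hash(message: bytes) -> bytes:
            # Pad to a multiple of 2n bits and iterate the compression function.
            message += b"\x80" + b"\x00" * ((-len(message) - 1) % (2 * N))
            g, h = b"\x00" * N, b"\xff" * N
            for i in range(0, len(message), 2 * N):
                block = message[i:i + 2 * N]
                g, h = compress(g, h, block[:N], block[N:])
            return g + h  # 2n-bit digest

        print(dbl_hash(b"example input").hex())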

    Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding

    We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any n-round interactive protocol using N rounds over an adversarial channel that corrupts up to ρN transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate ρ, communication complexity N, and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: our randomized non-adaptive coding scheme has near-linear computational complexity and tolerates any error rate δ < 1/4 with a linear N = Θ(n) communication complexity. This improves over prior results, each of which performed well in only two of these measures. We also give results for other settings of interest, namely, the first computationally and communication efficient schemes that tolerate ρ < 2/7 adaptively, ρ < 1/3 if only one party is required to decode, and ρ < 1/2 if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near-linear computational and communication complexity. These results are obtained via two techniques: we give a general black-box reduction which reduces unique decoding, in various settings, to list decoding, and we show how to boost the computational and communication efficiency of any list decoder to become near linear. Comment: preliminary version
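
    The error model referred to above can be made concrete with a small sketch: the adversary observes the entire length-N transcript and may corrupt up to ρN positions of its choosing. The function and the toy adversary below are assumptions for illustration only and do not reflect any particular coding scheme from the paper.

        import random

        def adversarial_channel(symbols, rho, adversary):
            # Deliver a length-N transcript while letting an adversary corrupt
            # at most floor(rho * N) positions of its choosing. `adversary`
            # maps the full transcript to the positions it wants to flip; any
            # choice within the budget is allowed, which is the worst-case
            # (adversarial) error model used in interactive coding.
            n_total = len(symbols)
            budget = int(rho * n_total)
            corrupted = list(symbols)
            for pos in list(adversary(symbols))[:budget]:
                corrupted[pos] ^= 1  # flip a bit
            return corrupted

        # Example: a toy adversary that targets the earliest positions.
        sent = [random.randint(0, 1) for _ in range(100)]
        received = adversarial_channel(sent, 0.25, lambda s: range(len(s)))
        print(sum(a != b for a, b in zip(sent, received)), "corruptions")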

    In-packet Bloom filters: Design and networking applications

    The Bloom filter (BF) is a well-known space-efficient data structure that answers set membership queries with some probability of false positives. In an attempt to solve many of the limitations of current inter-networking architectures, some recent proposals rely on including small BFs in packet headers for routing, security, accountability or other purposes that move application states into the packets themselves. In this paper, we consider the design of such in-packet Bloom filters (iBF). Our main contributions are exploring the design space and the evaluation of a series of extensions (1) to increase the practicality and performance of iBFs, (2) to enable false-negative-free element deletion, and (3) to provide security enhancements. In addition to the theoretical estimates, extensive simulations of the multiple design parameters and implementation alternatives validate the usefulness of the extensions, providing for enhanced and novel iBF networking applications. Comment: 15 pages, 11 figures, preprint submitted to Elsevier COMNET Journal
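
    As a point of reference, a minimal Bloom filter of the in-packet kind can be sketched as follows: a fixed m-bit field (as would be carried in a packet header) with k hash-derived bit positions per element, giving possible false positives but no false negatives. The class name and parameters below are illustrative assumptions, not the designs or extensions evaluated in the paper.

        import hashlib

        class InPacketBloomFilter:
            # Minimal fixed-size Bloom filter: m bits, k hash functions.
            # False positives are possible; inserted elements are never missed.
            def __init__(self, m_bits=256, k=4):
                self.m = m_bits
                self.k = k
                self.bits = 0  # the m-bit field that would travel in the header

            def _positions(self, element: bytes):
                # Derive k bit positions from k salted hashes of the element.
                for i in range(self.k):
                    digest = hashlib.sha256(bytes([i]) + element).digest()
                    yield int.from_bytes(digest[:4], "big") % self.m

            def add(self, element: bytes):
                for pos in self._positions(element):
                    self.bits |= 1 << pos

            def __contains__(self, element: bytes):
                return all((self.bits >> pos) & 1 for pos in self._positions(element))

        ibf = InPacketBloomFilter()
        ibf.add(b"link:A->B")
        ibf.add(b"link:B->C")
        print(b"link:A->B" in ibf, b"link:X->Y" in ibf)  # True, (almost surely) False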

    Ramanujan graphs in cryptography

    In this paper we study the security of a proposal for Post-Quantum Cryptography from both a number theoretic and cryptographic perspective. Charles-Goren-Lauter in 2006 [CGL06] proposed two hash functions based on the hardness of finding paths in Ramanujan graphs. One is based on Lubotzky-Phillips-Sarnak (LPS) graphs and the other one is based on Supersingular Isogeny Graphs. A 2008 paper by Petit-Lauter-Quisquater breaks the hash function based on LPS graphs. On the Supersingular Isogeny Graphs proposal, recent work has continued to build cryptographic applications on the hardness of finding isogenies between supersingular elliptic curves. A 2011 paper by De Feo-Jao-Plût proposed a cryptographic system based on Supersingular Isogeny Diffie-Hellman as well as a set of five hard problems. In this paper we show that the security of the SIDH proposal relies on the hardness of the SIG path-finding problem introduced in [CGL06]. In addition, similarities between the number theoretic ingredients in the LPS and Pizer constructions suggest that the hardness of the path-finding problem in the two graphs may be linked. By viewing both graphs from a number theoretic perspective, we identify the similarities and differences between the Pizer and LPS graphs. Comment: 33 pages
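
    The CGL idea of hashing by path-finding can be illustrated with a toy sketch: each input symbol steers one step of a non-backtracking walk in a regular graph, and the digest is the endpoint of the walk. The tiny cube graph below is a stand-in assumption only; it is not a Ramanujan (LPS or supersingular isogeny) graph, and taking the walk's endpoint as the digest is a simplification of the actual constructions.

        # Toy 3-regular graph on 8 vertices (the cube graph); a real CGL hash
        # walks a Ramanujan graph such as an LPS or supersingular isogeny graph.
        CUBE = {
            0: [1, 2, 4], 1: [0, 3, 5], 2: [0, 3, 6], 3: [1, 2, 7],
            4: [0, 5, 6], 5: [1, 4, 7], 6: [2, 4, 7], 7: [3, 5, 6],
        }

        def walk_hash(bits, graph=CUBE, start=0):
            # Hash by a non-backtracking walk: each input bit picks one of the
            # edges that do not return to the previous vertex, and the digest
            # is the endpoint of the walk (CGL-style, greatly simplified).
            prev, cur = None, start
            for b in bits:
                choices = [v for v in graph[cur] if v != prev]  # no backtracking
                prev, cur = cur, choices[b % len(choices)]
            return cur

        message_bits = [1, 0, 1, 1, 0, 0, 1, 0]
        print(walk_hash(message_bits))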