
    Provably Secure Double-Block-Length Hash Functions in a Black-Box Model

    In CRYPTO'89, Merkle presented three double-block-length hash functions based on DES. They are optimally collision resistant in a black-box model; that is, the time complexity of any collision-finding algorithm against them is Ω(2^{l/2}) if DES is a random block cipher, where l is the output length. Their drawback is that their rates are low. In this article, new double-block-length hash functions with higher rates are presented that are also optimally collision resistant in the black-box model. They are composed of block ciphers whose key length is twice their block length.
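    As an illustration of this setting, the following is a minimal sketch of Hirose's well-known rate-1/2 double-block-length compression function, which likewise uses a block cipher whose key length is twice its block length; AES-256 (n = 128) stands in for the cipher, via the pycryptodome package. It shows the general shape of such constructions, not necessarily the exact scheme from this article.

```python
# Sketch of a Hirose-style double-block-length compression function,
# instantiated with AES-256: a (2n, n) block cipher with n = 128.
# Illustrative only -- not necessarily the scheme from the article.
from Crypto.Cipher import AES  # pip install pycryptodome

N = 16  # block length in bytes (n = 128 bits)
C = b"\x01" + b"\x00" * (N - 1)  # nonzero constant separating the two calls

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def compress(g: bytes, h: bytes, m: bytes) -> tuple[bytes, bytes]:
    """Map a 2n-bit chaining value (g, h) and an n-bit message block m
    to a new 2n-bit chaining value using two calls to AES-256."""
    e = AES.new(h + m, AES.MODE_ECB)  # 2n-bit key: h || m
    g_next = xor(e.encrypt(g), g)     # Davies-Meyer-style feed-forward
    h_next = xor(e.encrypt(xor(g, C)), xor(g, C))
    return g_next, h_next

g, h = b"\x00" * N, b"\xff" * N
print(compress(g, h, b"message-block-16"))  # one 16-byte message block
```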

    Practical Homomorphic Evaluation of Block-Cipher-Based Hash Functions with Applications

    Fully homomorphic encryption (FHE) is a powerful cryptographic technique that allows computation to be performed directly over encrypted data. Motivated by the overhead that homomorphic ciphertexts induce during encryption and transmission, several papers have investigated the transciphering technique, which consists in switching from symmetric encryption to FHE-encrypted data. Various stream and block ciphers have been evaluated for their FHE-friendliness, meaning practical implementation costs at sufficient security levels. In this work, we present a first evaluation of hash functions in the homomorphic domain, based on well-chosen block ciphers. More precisely, we investigate the cost of transforming PRINCE, SIMON, SPECK, and LowMC, a set of lightweight block ciphers, into secure hash primitives using well-established block-cipher-based hash function constructions, and we evaluate them under bootstrappable FHE schemes. We also motivate the need for practical homomorphic evaluation of hash functions by presenting several use cases in which the integrity of private data is also required. In particular, our hash constructions can be of significant use in a threshold-homomorphic protocol for the single secret leader election problem arising in blockchains with proof-of-stake consensus. Our experiments show that, using a TFHE implementation of a hash function, we achieve practical runtimes at appropriate security levels (e.g., for PRINCE it takes 1.28 minutes to obtain a 128-bit hash).
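    For concreteness, one of the well-established block-cipher-based hash constructions is the Davies-Meyer compression function inside a Merkle-Damgard iteration. The sketch below shows the shape of such a construction in the clear (before any homomorphic evaluation), with AES-128 standing in for the lightweight ciphers studied in the paper; pycryptodome is assumed.

```python
# Sketch of a block-cipher-based hash: Davies-Meyer compression inside a
# plain Merkle-Damgard iteration. AES-128 stands in for the lightweight
# ciphers (PRINCE, SIMON, SPECK, LowMC) evaluated in the paper.
from Crypto.Cipher import AES  # pip install pycryptodome

N = 16  # AES block size in bytes

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """h_next = E_m(h) XOR h, with the message block as the cipher key."""
    e = AES.new(m, AES.MODE_ECB).encrypt(h)
    return bytes(x ^ y for x, y in zip(e, h))

def bc_hash(msg: bytes, iv: bytes = b"\x00" * N) -> bytes:
    # Merkle-Damgard strengthening: append 0x80, zero padding, and the
    # 8-byte message bit length, so the total is a multiple of N.
    length = (8 * len(msg)).to_bytes(8, "big")
    msg = msg + b"\x80"
    msg += b"\x00" * (-(len(msg) + 8) % N)
    msg += length
    h = iv
    for i in range(0, len(msg), N):
        h = davies_meyer(h, msg[i:i + N])
    return h

print(bc_hash(b"hello world").hex())
```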

    Quantum attacks on Bitcoin, and how to protect against them

    The key cryptographic protocols used to secure the internet and today's financial transactions are all susceptible to attack by a sufficiently large quantum computer. One area at particular risk is cryptocurrencies, a market currently worth over 150 billion USD. We investigate the risk posed to Bitcoin, and other cryptocurrencies, by attacks using quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which would best meet the security and efficiency requirements of blockchain applications.
    Comment: 21 pages, 6 figures. For a rough update on the progress of quantum devices and prognostications on the time remaining until digital signatures are broken, see https://www.quantumcryptopocalypse.com/quantum-moores-law
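    To make the collision-based proof-of-work concrete, here is a toy sketch of a Momentum-style puzzle: find two nonces whose header-bound hashes collide on their first few bytes, via a birthday search. The parameters are illustrative, not Momentum's actual ones, and the hash and encoding choices are assumptions.

```python
# Toy sketch of a Momentum-style collision proof-of-work: find nonces
# a != b with H(header || a) == H(header || b) on the first TRUNC bytes.
# Parameters are illustrative; real Momentum uses far larger search spaces.
import hashlib

TRUNC = 4  # collide on 32 bits so the toy search finishes quickly

def h(header: bytes, nonce: int) -> bytes:
    return hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()[:TRUNC]

def find_collision(header: bytes) -> tuple[int, int]:
    seen: dict[bytes, int] = {}
    nonce = 0
    while True:  # birthday search: expected ~2^(8*TRUNC/2) = 2^16 steps
        tag = h(header, nonce)
        if tag in seen:
            return seen[tag], nonce
        seen[tag] = nonce
        nonce += 1

a, b = find_collision(b"block-header")
assert a != b and h(b"block-header", a) == h(b"block-header", b)
print(a, b)
```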

    Second Preimages for Iterated Hash Functions Based on a b-Block Bypass

    In this article, we present a second preimage attack on a double-block-length hash proposal presented at FSE 2006. If the hash function is instantiated with DESX as the underlying block cipher, we are able to construct second preimages deterministically. Nevertheless, this second preimage attack does not render the hash scheme insecure. For the hash scheme itself, we only show that it should not be instantiated with DESX; AES should be used instead. However, we use the DESX instantiation of this hash scheme to introduce a new property of iterated hash functions, a so-called b-block bypass. We show that if an iterated hash function possesses a b-block bypass, then second preimages can be constructed, as the sketch below illustrates. Additionally, the attacker gains extra degrees of freedom in constructing the second preimage.
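    The generic consequence can be stated in a few lines: given a b-block bypass for the iteration, a second preimage of any message with at least b blocks follows by swapping out b consecutive blocks. In the Python sketch below, `bypass` is a hypothetical oracle standing in for the DESX-specific construction in the article; it is an assumption, not the paper's code.

```python
# Generic second preimage from a b-block bypass (sketch). `bypass` is a
# hypothetical oracle: given a chaining value h and b message blocks, it
# returns b *different* blocks leading to the same output chaining value.
from typing import Callable, List

Block = bytes

def second_preimage(
    compress: Callable[[bytes, Block], bytes],   # iterated compression f(h, m)
    bypass: Callable[[bytes, List[Block]], List[Block]],
    iv: bytes,
    blocks: List[Block],                         # original message, len >= j + b
    b: int,
    j: int = 0,                                  # position where the bypass is applied
) -> List[Block]:
    # Walk the iteration to the chaining value entering position j...
    h = iv
    for blk in blocks[:j]:
        h = compress(h, blk)
    # ...then replace blocks[j:j+b]; the chaining value after position j+b
    # is unchanged, so the final hash of the new message is identical.
    replaced = bypass(h, blocks[j:j + b])
    assert replaced != blocks[j:j + b]
    return blocks[:j] + replaced + blocks[j + b:]
```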

    More Insights on Blockcipher-Based Hash Functions

    In this paper we give more insights on the security of blockcipher-based hash functions. We give a very simple criterion for building a large, secure class of Single-Block-Length (SBL) or double-call Double-Block-Length (DBL) compression functions based on (kn, n) blockciphers, where kn is the key length, n is the block length, and k is an integer. This criterion is simpler than previous ones in the literature, and from it we derive many results, leading to a general conclusion about this class of blockcipher-based hash functions. We solve the open problem left by Hirose: our results show that to build a secure double-call DBL compression function, k >= m + 1 is required, where m is the number of message blocks. Thus, for k = 2, only rate-1/2 secure double-call DBL blockcipher-based compression functions can be built; the derivation follows below. Finally, we point out flaws in Stam's theorem about supercharged functions, give a revision of the theorem, and add another condition for the security of supercharged compression functions.
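    Under the usual rate convention (message bits processed per blockcipher call, divided by the block length n), the claimed bound follows in one line; this reading of the rate is an assumption here, though it is the standard one:

```latex
\[
  \mathrm{rate} \;=\; \frac{m \cdot n}{2 \cdot n} \;=\; \frac{m}{2},
  \qquad
  k \ge m + 1 \ \text{and}\ k = 2
  \;\Longrightarrow\; m \le 1
  \;\Longrightarrow\; \mathrm{rate} \le \tfrac{1}{2}.
\]
```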

    Preimage resistance beyond the birthday bound: Double-length hashing revisited

    Security proofs are an essential part of modern cryptography. Often the challenge is not to come up with appropriate schemes but rather to technically prove that they satisfy the desired security properties. We provide, for the first time, techniques for proving asymptotically optimal preimage resistance bounds for blockcipher-based double-length, double-call hash functions. More precisely, for some key length k > n we consider compression functions H: {0,1}^{k+n} → {0,1}^{2n} using two calls to an ideal block cipher with an n-bit block size. Optimally, an adversary trying to find a preimage for H should require Ω(2^{2n}) queries to the underlying block cipher. As a matter of fact, there have been several attempts to prove the preimage resistance of such compression functions, but no proof went beyond the Ω(2^n) barrier, leaving a huge gap to the optimal bound. In this paper, we introduce two new techniques that lift this bound to Ω(2^{2n}). We demonstrate our new techniques on a simple and natural design of H: the concatenation of two instances of the well-known Davies-Meyer compression function, sketched below.
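    As a sketch of the kind of design analyzed, the following instantiates a concatenation of two Davies-Meyer instances with AES-256 as a (k, n) = (256, 128) stand-in for the ideal cipher. The key wiring across the two calls (message block concatenated with the opposite chaining half) is a plausible choice of mine, not necessarily the exact design from the paper; pycryptodome is assumed.

```python
# Concatenated double Davies-Meyer compression (sketch), with AES-256 as
# a block cipher with key length k = 256 > n = 128. H maps a (k + n)-bit
# input -- here split as a 2n-bit chaining value (h1, h2) plus a
# (k - n)-bit message block m -- to a 2n-bit output.
# The key wiring (m || other half) is an assumption, not the paper's design.
from Crypto.Cipher import AES  # pip install pycryptodome

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def dm(key: bytes, h: bytes) -> bytes:
    """Davies-Meyer: E_key(h) XOR h."""
    return xor(AES.new(key, AES.MODE_ECB).encrypt(h), h)

def compress(h1: bytes, h2: bytes, m: bytes) -> tuple[bytes, bytes]:
    return dm(m + h2, h1), dm(m + h1, h2)  # two DM instances, concatenated

h1, h2, m = b"\x00" * 16, b"\xff" * 16, b"0123456789abcdef"
print(compress(h1, h2, m))
```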

    Security analysis of NIST-LWC contest finalists

    Integrated Master's dissertation in Informatics Engineering. Traditional cryptographic standards are designed with desktop and server environments in mind, so with the relatively recent proliferation of small, resource-constrained devices in the Internet of Things, sensor networks, embedded systems, and more, there has been a call for lightweight cryptographic standards whose security, performance, and resource requirements are tailored to the highly constrained environments these devices operate in. In 2015 the National Institute of Standards and Technology began a standardization process in order to select one or more lightweight cryptographic algorithms. Out of the original 57 submissions, ten finalists remain, with ASCON and Romulus among the most scrutinized of them. In this dissertation I introduce some concepts required for an easy understanding of the body of the work, give an up-to-date review of the standardization process from a security and performance standpoint, describe ASCON and Romulus together with the new best known analyses of each, and compare the two, noting their advantages, drawbacks, and unique traits.