Random Oracles in a Quantum World
The interest in post-quantum cryptography - classical systems that remain
secure in the presence of a quantum adversary - has generated elegant proposals
for new cryptosystems. Some of these systems are set in the random oracle model
and are proven secure relative to adversaries that have classical access to the
random oracle. We argue that to prove post-quantum security one needs to prove
security in the quantum-accessible random oracle model where the adversary can
query the random oracle with quantum states.
We begin by separating the classical and quantum-accessible random oracle
models by presenting a scheme that is secure when the adversary is given
classical access to the random oracle, but is insecure when the adversary can
make quantum oracle queries. We then set out to develop generic conditions
under which a classical random oracle proof implies security in the
quantum-accessible random oracle model. We introduce the concept of a
history-free reduction which is a category of classical random oracle
reductions that basically determine oracle answers independently of the history
of previous queries, and we prove that such reductions imply security in the
quantum model. We then show that certain post-quantum proposals, including ones
based on lattices, can be proven secure using history-free reductions and are
therefore post-quantum secure. We conclude with a rich set of open problems in
this area.
Comment: 38 pages; v2: many substantial changes and extensions, merged with a related paper by Boneh and Zhandry
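The distinction the abstract draws can be made concrete with a small sketch (a toy illustration of the query models, not the paper's actual separation example; all names are mine): the oracle H is a lookup table, a classical query reveals one point, and the standard quantum query map U|x, y⟩ = |x, y ⊕ H(x)⟩ applied to a uniform superposition involves every point of H in a single query.

```python
# Toy illustration (not the paper's separation example): the oracle H as a
# lookup table, a classical query revealing one point, and the standard
# quantum query map U|x, y> = |x, y XOR H(x)> acting on a superposition.
import random

random.seed(1)
n, m = 3, 2                                         # n-bit inputs, m-bit outputs
H = [random.randrange(2**m) for _ in range(2**n)]   # random oracle truth table

def classical_query(x: int) -> int:
    """A classical query reveals H at a single point."""
    return H[x]

def quantum_query(state: dict) -> dict:
    """Apply U|x, y> = |x, y XOR H(x)>; states are {(x, y): amplitude} dicts."""
    out = {}
    for (x, y), amp in state.items():
        key = (x, y ^ H[x])
        out[key] = out.get(key, 0.0) + amp
    return out

# One quantum query on the uniform superposition sum_x |x>|0> correlates the
# output register with H(x) for *every* x simultaneously:
uniform = {(x, 0): (2**n) ** -0.5 for x in range(2**n)}
after = quantum_query(uniform)   # support is exactly {(x, H(x))}
```

Since XOR-ing H(x) twice is the identity, U is its own inverse, which is why this map is a legitimate (unitary) quantum operation.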
Security analysis of NIST-LWC contest finalists
Master's dissertation in Informatics Engineering.
Traditional cryptographic standards are designed with a desktop and server environment in mind, so, with the
relatively recent proliferation of small, resource constrained devices in the Internet of Things, sensor networks,
embedded systems, and more, there has been a call for lightweight cryptographic standards with security,
performance and resource requirements tailored for the highly-constrained environments these devices find
themselves in.
In 2015 the National Institute of Standards and Technology began a standardization process to select
one or more lightweight cryptographic algorithms. Of the original 57 submissions, ten finalists remain, with
ASCON and Romulus among the most scrutinized of them.
In this dissertation I introduce the concepts needed to follow the body of the work, give an up-to-date
review of the standardization process from a security and performance standpoint, describe ASCON and
Romulus together with the best known analyses of each, and compare the two, highlighting their advantages,
drawbacks, and unique traits.
Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes
We present here a new family of trapdoor one-way Preimage Sampleable
Functions (PSF) based on codes, the Wave-PSF family. The trapdoor function is
one-way under two computational assumptions: the hardness of generic decoding
for high weights and the indistinguishability of generalized (U, U+V)-codes.
Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we
ensure the proper distribution for the trapdoor inverse output. The domain
sampling property of our family is ensured by using and proving a variant of
the left-over hash lemma. We instantiate the new Wave-PSF family with ternary
generalized (U, U+V)-codes to design a "hash-and-sign" signature scheme which
achieves existential unforgeability under adaptive chosen message attacks
(EUF-CMA) in the random oracle model. For 128 bits of classical security,
signature sizes are on the order of 15 thousand bits, the public key size on
the order of 4 megabytes, and the rejection rate is limited to one rejection
every 10 to 12 signatures.
Comment: arXiv admin note: text overlap with arXiv:1706.0806
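For intuition, the "hash-and-sign" paradigm the abstract invokes can be sketched with a textbook full-domain-hash signature. This is an illustration only: a toy RSA trapdoor stands in for Wave's code-based preimage sampleable function, and the salt-resampling loop below merely maps digests into the function's domain, whereas Wave's rejection sampling serves the subtler purpose of making signatures statistically independent of the secret key. All parameters and helper names are mine and toy-sized.

```python
# Toy "hash-and-sign" signature in the full-domain-hash mold (illustration
# only: a textbook RSA trapdoor stands in for Wave's code-based trapdoor PSF,
# and all parameters here are toy-sized and NOT secure).
import hashlib, secrets

p, q = 1009, 1013                      # toy primes
N, e = p * q, 65537                    # public trapdoor-function description
d = pow(e, -1, (p - 1) * (q - 1))      # secret trapdoor (inversion exponent)

def digest(msg: bytes, salt: bytes) -> int:
    """Hash (salt, msg) to a 24-bit integer candidate digest."""
    return int.from_bytes(hashlib.sha256(salt + msg).digest()[:3], "big")

def sign(msg: bytes):
    # Resample the salt until the digest lands in the domain Z_N ("rejection").
    # NOTE: Wave's rejection sampling instead shapes the *output* distribution
    # of the trapdoor inverse so signatures leak nothing about the secret key.
    while True:
        salt = secrets.token_bytes(8)
        h = digest(msg, salt)
        if h < N:
            return salt, pow(h, d, N)  # trapdoor inversion: a preimage of h

def verify(msg: bytes, salt: bytes, sig: int) -> bool:
    h = digest(msg, salt)
    return h < N and pow(sig, e, N) == h
```

The shape is the same as in the GPV strategy the abstract cites: hash the message into the range, invert with the trapdoor, and publish the salt so the verifier can recompute the digest.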
Blockcipher Based Hashing Revisited
We revisit the rate-1 blockcipher-based hash
functions as first studied by Preneel, Govaerts
and Vandewalle (Crypto '93) and later extensively analysed by Black,
Rogaway and Shrimpton (Crypto '02). We analyze a further generalization where any pre- and postprocessing is considered. By introducing a new
tweak to earlier proof methods, we obtain a simpler proof
that is both more general and tighter than existing
results. As an added benefit, this also leads to a clearer understanding
of the current classification of rate-1 blockcipher-based schemes as introduced by Preneel et al. and refined by Black et al.
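For readers new to the PGV classification, the best-known of the twelve secure rate-1 schemes, Davies-Meyer, can be sketched as follows. This is a toy under my own naming: a throwaway Feistel network stands in for a real blockcipher, and the padding omits the usual length strengthening.

```python
# Sketch of the Davies-Meyer scheme, one of the twelve secure rate-1 PGV
# constructions: h_i = E(m_i, h_{i-1}) XOR h_{i-1}, iterated Merkle-Damgard
# style.  The 64-bit Feistel "cipher" is a throwaway toy, not a vetted
# primitive, and the padding omits the usual length strengthening.
import hashlib

MASK32 = 0xFFFFFFFF

def toy_blockcipher(key: bytes, block: int) -> int:
    """Toy 8-round 64-bit Feistel network keyed by `key` (invertible by design)."""
    L, R = block >> 32, block & MASK32
    for rnd in range(8):
        rk = int.from_bytes(hashlib.sha256(key + bytes([rnd])).digest()[:4], "big")
        L, R = R, L ^ ((R ^ rk) * 0x9E3779B1 & MASK32)
    return (L << 32) | R

def davies_meyer_hash(msg: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % 8)   # pad to 8-byte blocks
    h = iv
    for i in range(0, len(msg), 8):
        m_i = msg[i:i + 8]                 # the message block keys the cipher
        h = toy_blockcipher(m_i, h) ^ h    # Davies-Meyer: E_{m_i}(h) XOR h
    return h
```

"Rate-1" refers to the single blockcipher call per message block; the feed-forward XOR is what makes the compression function non-invertible even though the cipher itself is a permutation.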
Design and Analysis of Multi-Block-Length Hash Functions
Cryptographic hash functions are used in many cryptographic applications, and the design of provably secure hash functions (relative to various security notions) is an active area of research. Most currently existing hash functions use the Merkle-Damgård paradigm, where by appropriate iteration the hash function inherits its collision and preimage resistance from the underlying compression function. Compression functions can either be constructed from scratch or be built using well-known cryptographic primitives such as a blockcipher. One classic type of primitive-based compression function is single-block-length: it covers designs whose output size matches the output length n of the underlying primitive. The single-block-length setting is well understood. Yet even for the optimally secure constructions, the (time) complexity of collision- and preimage-finding attacks is at most 2^{n/2}, respectively 2^n; when n = 128 (e.g., the Advanced Encryption Standard) the resulting bounds have been deemed unacceptable for current practice. As a remedy, multi-block-length primitive-based compression functions, which output more than n bits, have been proposed. This output expansion is typically achieved by calling the primitive multiple times and then combining the resulting primitive outputs in some clever way.
In this thesis, we study the collision and preimage resistance of certain types of multi-call multi-block-length primitive-based compression (and the corresponding Merkle-Damgård iterated hash) functions. Our contribution is three-fold. First, we provide a novel framework for blockcipher-based compression functions that compress 3n bits to 2n bits and that use two calls to a 2n-bit-key blockcipher with block length n. We restrict ourselves to two parallel calls and analyze the sufficient conditions to obtain close-to-optimal collision resistance, either in the compression function or in the Merkle-Damgård iteration.
Second, we present a new compression function h: {0,1}^{3n} → {0,1}^{2n}; it uses two parallel calls to an ideal primitive (public random function) from 2n to n bits. This is similar to MDC-2 or the recently proposed MJH by Lee and Stam (CT-RSA '11). However, unlike these constructions, already in the compression function we achieve that an adversary limited (asymptotically in n) to O(2^{2n(1-δ)/3}) queries (for any δ > 0) has a vanishing advantage in finding collisions. This is the first construction of this type offering collision resistance beyond 2^{n/2} queries.
Our final contribution is the (re)analysis of the preimage and collision resistance of the Knudsen-Preneel compression functions in the setting of public random functions. Knudsen-Preneel compression functions utilize an [r, k, d] linear error-correcting code over 𝔽_{2^e} (for e > 1) to build a compression function from underlying blockciphers operating in Davies-Meyer mode. Knudsen and Preneel show, in the complexity-theoretic setting, that finding collisions takes time at least 2^{(d-1)n/2}. Preimage resistance, however, is conjectured to be the square of the collision resistance. Our results show that both the collision resistance proof and the preimage resistance conjecture of Knudsen and Preneel are incorrect: with the exception of two of the proposed parameters, the Knudsen-Preneel compression functions do not achieve the security level they were designed for.
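The generic shape of a 3n-to-2n compression function making two parallel primitive calls, as studied in the thesis, can be sketched as follows. The pre- and postprocessing below is a placeholder of my own; the specific (linear) processing analyzed in the thesis is what yields its collision bound, so treat this purely as a picture of the call structure.

```python
# Generic shape of a 3n-to-2n-bit compression function built from two parallel
# calls to public random functions f1, f2 : {0,1}^{2n} -> {0,1}^n.  The pre-
# and postprocessing used here is a placeholder; the specific (linear)
# processing analyzed in the thesis is what yields its collision bound.
import hashlib

n = 16  # block size in bytes; "n bits" in the abstract corresponds to 8*n here

def f(tag: bytes, x: bytes) -> bytes:
    """Model an independent public random function via domain separation."""
    return hashlib.sha256(tag + x).digest()[:n]

def compress(u: bytes, v: bytes, w: bytes) -> bytes:
    """Compress the 3n-bit input (u, v, w) to 2n bits with two parallel calls."""
    assert len(u) == len(v) == len(w) == n
    y1 = f(b"f1", u + v)                            # first call sees (u, v)
    y2 = f(b"f2", v + w)                            # second call sees (v, w)
    mixed = bytes(a ^ b for a, b in zip(y1, y2))    # toy postprocessing
    return y1 + mixed                               # 2n-bit chaining value
```

Because the two calls are parallel (neither depends on the other's output), both primitive invocations can be evaluated simultaneously, which is the efficiency feature the framework restricts itself to.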
Security proofs for the MD6 hash function mode of operation
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 79-82).
In recent years there have been a series of serious and alarming cryptanalytic attacks on several commonly used hash functions, such as MD4, MD5, SHA-0, and SHA-1 [13, 38]. These culminated with the celebrated work of Wang, Yin, and Yu from 2005, which demonstrated relatively efficient methods for finding collisions in the SHA-1 hash function [37]. Although there are several cryptographic hash functions - such as the SHA-2 family [28] - that have not yet succumbed to such attacks, the U.S. National Institute of Standards and Technology (NIST) put out a call in 2007 for candidate proposals for a new cryptographic hash function family, to be dubbed SHA-3 [29].
Hash functions are algorithms for converting an arbitrarily large input into a fixed-length message digest. They are typically composed of a compression function or block cipher that operates on fixed-length pieces of the input, and a mode of operation that governs how to apply the compression function or block cipher repeatedly to these pieces in order to allow for arbitrary-length inputs. Cryptographic hash functions are furthermore required to have several important and stringent security properties, including (but not limited to) first-preimage resistance, second-preimage resistance, collision resistance, and, for keyed hash functions, pseudorandomness. This work presents proofs of security for the mode of operation of the MD6 cryptographic hash function [32] - a candidate in the SHA-3 competition - which differs greatly from the modes of operation of many commonly used hash functions today (MD4, MD5, as well as the SHA family of hash functions).
In particular, we demonstrate provably that the mode of operation used in MD6 preserves some cryptographic properties of the compression function - that is, assuming some ideal conditions about the compression function used, the overall MD6 hash function is secure as well.
by Christopher Yale Crutchfield. S.M.
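MD6's mode of operation is tree-based, and its general shape can be sketched with a toy binary tree hash (illustrative only: MD6's actual mode adds unique node identifiers, counters, and a much wider compression input). Informally, any collision in the tree hash traces back, node by node, to a collision in the compression function - the kind of property preservation such a proof of security establishes.

```python
# Toy binary tree-hash mode in the spirit of MD6's mode of operation
# (illustrative only: MD6's actual mode adds unique node identifiers,
# counters, and a wider compression input).
import hashlib

LEAF, NODE = b"\x00", b"\x01"        # domain separation between tree levels

def comp(data: bytes) -> bytes:
    """Stand-in for a fixed-input-length compression function."""
    return hashlib.sha256(data).digest()

def tree_hash(msg: bytes, chunk: int = 32) -> bytes:
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % chunk)   # simple padding
    level = [comp(LEAF + msg[i:i + chunk]) for i in range(0, len(msg), chunk)]
    while len(level) > 1:            # combine pairwise up to a single root
        nxt = [comp(NODE + level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])    # odd node is promoted unchanged
        level = nxt
    return level[0]
```

A tree mode also exposes parallelism - every node on a level can be compressed independently - which was one of MD6's design goals.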
Interpolation Cryptanalysis of Unbalanced Feistel Networks with Low Degree Round Functions
Arithmetization-oriented symmetric primitives (AOSPs) address the optimization potential that arises when block ciphers and hash functions are evaluated as components of secure multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. AOSP designs differ from traditional primitives in their use of algebraically simple elements, and many proposals are defined over prime fields rather than over bits. Because these proposals are new, thorough understanding and extensive analysis are required to establish their security. Algebraic techniques such as interpolation attacks are the most successful attack vectors against AOSPs. In this work we generalize an existing analysis that uses a low-memory interpolation attack in order to study the design pattern of the new cipher GMiMC and its associated hash function GMiMCHash. We present a new method for recovering the key from the roots of a polynomial, demonstrate improvements to the attack complexity by combining several outputs, and apply some of the developed techniques in an algebraic correcting-last-block attack on the sponge construction. We answer in the affirmative the open question from earlier work of whether the kind of interpolation attack used there generalizes. We give concrete recommended lower bounds for parameters in the scenarios considered. Moreover, we conclude that GMiMC and GMiMCHash are secure against the interpolation attacks considered in this work. Further cryptanalytic effort is required to solidify the security guarantees of AOSPs.
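The interpolation-attack idea at the heart of this line of analysis can be sketched in its textbook (Jakobsen-Knudsen) form; the GMiMC-specific, low-memory variant developed in the work above is more involved, and the toy cipher and parameters here are mine. If the encryption of x under a fixed unknown key is a polynomial of low degree d in x over F_p, then d+1 known plaintext/ciphertext pairs determine that polynomial completely, letting the attacker predict fresh ciphertexts without recovering the key.

```python
# Core of an interpolation attack in its textbook (Jakobsen-Knudsen) form.
# If E_k(x) is a polynomial of low degree d in x over F_p, then d+1 known
# plaintext/ciphertext pairs pin it down via Lagrange interpolation.
P = 10007                               # small prime field F_p (toy-sized)

def toy_cipher(key: int, x: int) -> int:
    """Toy cipher: 3 rounds of x -> (x + k)^3 over F_p, so degree 3^3 = 27."""
    for _ in range(3):
        x = pow(x + key, 3, P)
    return x

def lagrange_eval(points, x: int) -> int:
    """Evaluate the interpolating polynomial through `points` at x, over F_p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = 1234                              # unknown to the attacker
degree = 27                             # cipher's degree in x
known = [(x, toy_cipher(key, x)) for x in range(degree + 1)]  # known pairs
# With no key recovery at all, the attacker predicts fresh ciphertexts:
prediction = lagrange_eval(known, 9999)
```

Real AOSPs defend against this by pushing the algebraic degree (and thus the number of required pairs) beyond the data available to an attacker, which is exactly the kind of parameter lower bound the work recommends.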