
    On Composability of Game-based Password Authenticated Key Exchange

    It is standard practice that the secret key derived from an execution of a Password Authenticated Key Exchange (PAKE) protocol is used to authenticate and encrypt some data payload using a Symmetric Key Protocol (SKP). Unfortunately, most PAKEs of practical interest are studied using so-called game-based models, which – unlike simulation models – do not guarantee secure composition per se. However, Brzuska et al. (CCS 2011) have shown that a middle ground is possible in the case of authenticated key exchange that relies on Public-Key Infrastructure (PKI): the game-based models do provide secure composition guarantees when the class of higher-level applications is restricted to SKPs. The question that we pose in this paper is whether or not a similar result can be exhibited for PAKE. Our work answers this question positively. More specifically, we show that PAKE protocols secure according to the game-based Real-or-Random (RoR) definition with the weak forward secrecy of Abdalla et al. (S&P 2015) allow for safe composition with arbitrary, higher-level SKPs. Since there is evidence that most PAKEs secure in the Find-then-Guess (FtG) model are in fact secure according to the RoR definition, we can conclude that nearly all provably secure PAKEs enjoy a certain degree of composition, one that at least covers the case of implementing a secure channel.
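
    To make the composition pattern concrete, here is a minimal Python sketch (standard library only) of the usage the paper analyses: a key derived from a PAKE run is handed to a symmetric-key protocol, here a toy HMAC-authenticated message. The function pake_session_key is a hypothetical stand-in, not an actual PAKE.

        import hmac, hashlib, secrets

        def pake_session_key() -> bytes:
            # Placeholder: in practice this would be the key output by a PAKE
            # run between two parties holding a shared low-entropy password.
            return secrets.token_bytes(32)

        def skp_send(key: bytes, payload: bytes) -> bytes:
            # A minimal symmetric-key protocol message: payload plus HMAC tag.
            tag = hmac.new(key, payload, hashlib.sha256).digest()
            return payload + tag

        def skp_recv(key: bytes, msg: bytes) -> bytes:
            payload, tag = msg[:-32], msg[-32:]
            expected = hmac.new(key, payload, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("authentication failure")
            return payload

        k = pake_session_key()              # key from the PAKE layer
        wire = skp_send(k, b"data payload") # higher-level SKP uses that key
        assert skp_recv(k, wire) == b"data payload"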

    SoK: A Stratified Approach to Blockchain Decentralization

    Decentralization has been touted as the principal security advantage which propelled blockchain systems to the forefront of developments in the financial technology space. Its exact semantics nevertheless remain highly contested and ambiguous, with proponents and critics disagreeing widely on the level of decentralization offered. To address this, we put forth a systematization of the current landscape with respect to decentralization and we derive a methodology that can help direct future research towards defining and measuring decentralization. Our approach dissects blockchain systems into multiple layers, or strata, each possibly encapsulating multiple categories, and enables a unified method for measuring decentralization in each one. Our layers are (1) hardware, (2) software, (3) network, (4) consensus, (5) economics ("tokenomics"), (6) API, (7) governance, and (8) geography. Armed with this stratification, we examine for each layer which pertinent properties of distributed ledgers (safety, liveness, privacy, stability) can be at risk due to centralization and in what way. Our work highlights the challenges in measuring and achieving decentralization, points to the degree of (de)centralization of various existing systems, where such assessment can be made from presently available public information, and suggests potential metrics and directions where future research is needed. We also introduce the "Minimum Decentralization Test" as a way to assess the decentralization state of a blockchain system and, as an exemplary case, we showcase how it can be applied to Bitcoin.
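
    Purely as an illustration of the stratification (the assessment values below are invented, not the paper's), the eight layers lend themselves to a simple data structure for per-layer analysis, sketched here in Python:

        from enum import Enum

        class Stratum(Enum):
            HARDWARE = 1
            SOFTWARE = 2
            NETWORK = 3
            CONSENSUS = 4
            TOKENOMICS = 5   # the economics layer
            API = 6
            GOVERNANCE = 7
            GEOGRAPHY = 8

        # Hypothetical per-layer assessment, e.g. how many independent
        # parties control the relevant resource at each layer.
        assessment = {s: None for s in Stratum}
        assessment[Stratum.CONSENSUS] = {"independent_block_producers": 4}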

    Envisioning the Future of Cyber Security in Post-Quantum Era: A Survey on PQ Standardization, Applications, Challenges and Opportunities

    The rise of quantum computers exposes vulnerabilities in current public-key cryptographic protocols, necessitating the development of secure post-quantum (PQ) schemes. Hence, we conduct a comprehensive study of various PQ approaches, covering their constructional design and structural vulnerabilities, and offering security assessments, implementation evaluations, and a particular focus on side-channel attacks. We analyze global standardization processes, evaluate their metrics in relation to real-world applications, and primarily focus on standardized PQ schemes, selected additional signature competition candidates, and PQ-secure cutting-edge schemes beyond standardization. Finally, we present visions and potential future directions for a seamless transition to the PQ era.

    SoK: Privacy-Preserving Signatures

    Modern security systems depend fundamentally on the ability of users to authenticate their communications to other parties in a network. Unfortunately, cryptographic authentication can substantially undermine the privacy of users. One possible solution to this problem is to use privacy-preserving cryptographic authentication. These protocols allow users to authenticate their communications without revealing their identity to the verifier. In the non-interactive setting, the most common protocols include blind, ring, and group signatures, each of which has been the subject of an enormous body of research in the security and cryptography literature. These primitives are now being deployed at scale in major applications, including Intel's SGX software attestation framework. The depth of the research literature and the prospect of large-scale deployment motivate us to systematize our understanding of the research in this area. This work provides an overview of these techniques, focusing on applications and efficiency.
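
    As a concrete taste of one of these primitives, here is Chaum's classic RSA blind signature in toy form (tiny, insecure parameters): the signer signs a blinded message without ever seeing it, giving the unlinkability that privacy-preserving authentication relies on. This is a sketch of the blind-signature notion mentioned above, not any specific scheme surveyed in the paper.

        import secrets

        # Toy RSA key (insecure sizes, illustration only).
        p, q = 1009, 1013
        N = p * q
        e = 65537
        d = pow(e, -1, (p - 1) * (q - 1))

        m = 123456                          # message (already hashed/encoded)
        r = secrets.randbelow(N - 2) + 2    # blinding factor; gcd(r, N) = 1 w.h.p.
        blinded = m * pow(r, e, N) % N      # user blinds the message
        s_blind = pow(blinded, d, N)        # signer signs without seeing m
        s = s_blind * pow(r, -1, N) % N     # user unblinds: s = m^d mod N
        assert pow(s, e, N) == m % N        # ordinary RSA verification passes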

    Steel: Composable Hardware-based Stateful and Randomised Functional Encryption

    Trusted execution environments (TEEs) enable secure execution of programs on untrusted hosts and cryptographically attest the correctness of outputs. As these are complex systems, it is hard to capture the exact security achieved by protocols employing TEEs. Crucially, TEEs are typically employed in multiple protocols at the same time, so composable security (with global subroutines) is a natural goal for such systems. We show that under an attested execution setup G_att we can realise cryptographic functionalities that are unrealizable in the standard model. We propose a new primitive of Functional Encryption for Stateful and Randomised functionalities (FESR) and an associated protocol, Steel, that realizes it. We show that Steel UC-realises FESR in the universal composition with global subroutines model (TCC 2020). Our work also validates the compositionality of earlier work (Iron, CCS 2017) capturing (non-stateful) hardware-based functional encryption. As the existing functionality for attested execution of Pass et al. (Eurocrypt 2017) is too strong for real-world use, we propose a weaker functionality that allows the adversary to conduct rollback and forking attacks. We show that the stateful variant of Steel, contrary to the stateless variant corresponding to Iron, is not secure in this setting, and propose several mitigation techniques.
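
    A minimal sketch, assuming nothing beyond the abstract, of the interface shape FESR suggests: decryption under a function key evaluates a stateful, randomised function on the plaintext. The mock below performs no actual encryption or attestation; it only pins down the API, with hypothetical names.

        import secrets
        from typing import Any, Callable

        class FESRMock:
            """API shape only: a real scheme would encrypt and attest."""
            def __init__(self):
                self._keys = {}
            def keygen(self, name: str, f: Callable) -> str:
                self._keys[name] = (f, {})       # function plus its own state
                return name                      # the "function key"
            def encrypt(self, msg: Any) -> Any:
                return msg                       # placeholder, no real encryption
            def decrypt(self, fkey: str, ct: Any) -> Any:
                f, state = self._keys[fkey]
                coins = secrets.token_bytes(16)  # per-call randomness
                return f(state, ct, coins)       # stateful, randomised evaluation

        def count_messages(state, msg, coins):
            # Example stateful function: count how many ciphertexts were decrypted.
            state["n"] = state.get("n", 0) + 1
            return state["n"]

        fe = FESRMock()
        k = fe.keygen("count", count_messages)
        ct = fe.encrypt("hello")
        assert fe.decrypt(k, ct) == 1
        assert fe.decrypt(k, ct) == 2   # state persists across calls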

    Fully Invisible Protean Signatures Schemes

    Protean Signatures (PS), recently introduced by Krenn et al. (CANS '18), allow a semi-trusted third party, named the sanitizer, to modify a signed message in a controlled way: the sanitizer can edit signer-chosen parts to arbitrary bitstrings and can also redact admissible parts, which are likewise chosen by the signer. Thus, PSs generalize both redactable signatures (RSS) and sanitizable signatures (SSS) into a single notion. However, the current definition of invisibility does not prevent an outsider from deciding which parts of a message are redactable; only the parts that can be edited are hidden. This negatively impacts the privacy guarantees provided by the state-of-the-art definition. We extend PSs to be fully invisible. This strengthened notion guarantees that an outsider can decide neither which parts of a message can be edited nor which parts can be redacted. To achieve our goal, we introduce the new notions of Invisible RSSs and Invisible Non-Accountable SSSs (SSS'), along with a consolidated framework for aggregate signatures. Using those building blocks, our resulting construction is significantly more efficient than the original scheme by Krenn et al., which we demonstrate in a prototypical implementation.
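
    To fix intuition, a hypothetical message layout for a PS scheme might look as follows: each block is marked fixed, editable, or redactable by the signer, and full invisibility demands that an outsider learns neither of the last two markings. All names are illustrative; this is not the paper's construction, and no signing is performed.

        from dataclasses import dataclass
        from enum import Enum, auto

        class Admissibility(Enum):
            FIXED = auto()       # may not be touched by the sanitizer
            EDITABLE = auto()    # may be replaced by any bitstring (SSS-style)
            REDACTABLE = auto()  # may be removed entirely (RSS-style)

        @dataclass
        class Block:
            data: bytes
            adm: Admissibility   # full invisibility: hidden from outsiders

        msg = [Block(b"Name: Alice", Admissibility.FIXED),
               Block(b"Salary: 100", Admissibility.EDITABLE),
               Block(b"SSN: 000-00-0000", Admissibility.REDACTABLE)]

        def sanitize(blocks, edits, redactions):
            # Toy sanitizer: applies only signer-authorised edits and redactions.
            out = []
            for i, b in enumerate(blocks):
                if i in redactions and b.adm is Admissibility.REDACTABLE:
                    continue
                if i in edits and b.adm is Admissibility.EDITABLE:
                    out.append(Block(edits[i], b.adm))
                else:
                    out.append(b)
            return out

        sanitized = sanitize(msg, edits={1: b"Salary: [updated]"}, redactions={2})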

    Reconsidering Generic Composition: the Tag-then-Encrypt case

    Authenticated Encryption (AE) achieves confidentiality and authenticity, the two most fundamental goals of cryptography, in a single scheme. A common strategy to obtain AE is to combine a Message Authentication Code (MAC) and an encryption scheme, either nonce-based or iv-based. Out of the 180 possible combinations, Namprempre et al. [25] proved that 12 were secure, 164 insecure, and 4 were left unresolved: A10, A11 and A12, which use an iv-based encryption scheme, and N4, which uses a nonce-based one. The question of the security of these composition modes is particularly intriguing, as N4, A11, and A12 are more efficient than the 12 composition modes known to be provably secure. We prove that: (i) N4 is not secure in general, (ii) A10, A11 and A12 have equivalent security, (iii) A10, A11, A12 and N4 are secure if the underlying encryption scheme is either misuse-resistant or "message malleable", a property satisfied by many classical encryption modes, and (iv) A10, A11 and A12 are insecure if the underlying encryption scheme is stateful or untidy. All the results are quantitative.
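
    To make the composition order concrete, here is a Python sketch of the tag-then-encrypt shape (MAC the message first, then encrypt message and tag together), using toy stand-in primitives: HMAC as the MAC and an HMAC-derived keystream as an iv-based stream cipher. The exact key and iv wiring of modes A10–A12 differs; only the order of operations is illustrated here.

        import hmac, hashlib, secrets

        def keystream(key: bytes, iv: bytes, n: int) -> bytes:
            # Toy iv-based stream cipher: CTR-like keystream from HMAC-SHA256.
            out, ctr = b"", 0
            while len(out) < n:
                out += hmac.new(key, iv + ctr.to_bytes(8, "big"), hashlib.sha256).digest()
                ctr += 1
            return out[:n]

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def tag_then_encrypt(k_mac, k_enc, iv, msg):
            tag = hmac.new(k_mac, msg, hashlib.sha256).digest()  # 1. tag the message
            pt = msg + tag                                       # 2. encrypt msg||tag
            return xor(pt, keystream(k_enc, iv, len(pt)))

        def decrypt_then_verify(k_mac, k_enc, iv, ct):
            pt = xor(ct, keystream(k_enc, iv, len(ct)))
            msg, tag = pt[:-32], pt[-32:]
            if not hmac.compare_digest(tag, hmac.new(k_mac, msg, hashlib.sha256).digest()):
                raise ValueError("invalid ciphertext")
            return msg

        km, ke, iv = secrets.token_bytes(32), secrets.token_bytes(32), secrets.token_bytes(16)
        ct = tag_then_encrypt(km, ke, iv, b"attack at dawn")
        assert decrypt_then_verify(km, ke, iv, ct) == b"attack at dawn"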

    Lattice-based zero-knowledge proofs of knowledge

    The main goal of this dissertation is to develop new lattice-based cryptographic schemes. Most of the cryptographic protocols we use on a daily basis are only secure under the assumption that two mathematical problems, namely the discrete logarithm on elliptic curves and the factorization of products of two primes, are computationally hard. That is believed to be true for classical computers, but quantum computers would be able to solve these problems much more efficiently, demolishing the foundations of plenty of cryptographic constructions. This reveals the importance of post-quantum alternatives: cryptographic schemes whose security relies on different problems, intractable for both classical and quantum computers. The most promising family of problems widely believed to be hard for quantum computers is that of lattice-based problems. We increase the supply of lattice-based tools by providing new Zero-Knowledge Proofs of Knowledge for the Ring Learning With Errors (RLWE) problem, perhaps the most popular lattice-based problem. Zero-knowledge proofs are protocols between a prover and a verifier where the prover convinces the verifier of the validity of certain statements without revealing any additional relevant information. Our proofs extend the literature of Stern-based proofs, following the techniques presented by Jacques Stern in 1994. His original idea involved a code-based problem, but it has been repeatedly improved and generalized for use with lattices. We illustrate our proposal by defining a variant of the commitment scheme of Benhamouda et al. (ESORICS 2015), a cryptographic primitive that allows one to fix a message at some point in time without revealing it until later, and by proving knowledge of a valid opening in zero-knowledge. Most importantly, we also show how to prove that the message committed in one commitment is a linear combination, with public coefficients, of the messages committed in two other commitments, again without revealing any further information about the messages. Finally, we also present a zero-knowledge proof analogous to the previous one but for multiplicative relations, something much more involved, which together allow us to prove any arithmetic circuit. We first give interactive versions of these proofs and then show how to construct non-interactive ones. We diligently prove that both the commitment scheme and the companion Zero-Knowledge Proofs of Knowledge are secure under the assumed hardness of the underlying lattice problems. Furthermore, we specifically develop these proofs so that the arising conditions can be directly used to compute parameters that satisfy them. This way we provide a general method to instantiate our commitment and proofs at any desired security level. Thanks to this practical approach we have been able to implement all the proposed schemes and benchmark the prototype implementation with actually secure parameters, which allows us to obtain meaningful results and compare its performance with the existing alternatives.
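
    As a structural analogy only (the thesis's commitment is lattice-based, not discrete-log-based): a Pedersen-style commitment makes the linear-relation statement concrete, since raising commitments to public coefficients yields a commitment to the corresponding linear combination of messages. The group parameters below are toy values.

        # Toy Pedersen-style commitment in Z_p^* (insecure parameters).
        p = 1019                  # small prime, illustration only
        g, h = 2, 3               # assumed independent generators

        def commit(m: int, r: int) -> int:
            return pow(g, m, p) * pow(h, r, p) % p

        m1, r1, m2, r2 = 5, 7, 11, 13
        a, b = 3, 4               # public coefficients of the linear relation
        c1, c2 = commit(m1, r1), commit(m2, r2)

        # c1^a * c2^b is itself a commitment to a*m1 + b*m2:
        lhs = pow(c1, a, p) * pow(c2, b, p) % p
        assert lhs == commit(a * m1 + b * m2, a * r1 + b * r2)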
Moreover, since multiplication of polynomials in the quotient ring ℤₚ[x]/⟨xⁿ + 1⟩, with p prime and n a power of two, is the most basic operation when working with ideal lattices, we comprehensively study the necessary and sufficient conditions for applying (a generalized version of) the Fast Fourier Transform (FFT) to obtain an efficient multiplication algorithm in quotient rings of the form ℤₘ[x]/⟨xⁿ − a⟩ (where we consider any positive integer m and generalize the quotient), a study we believe to be of independent interest. Such a theoretical analysis is fundamental to determine when a given generalization can still be used to design an efficient multiplication algorithm even if the FFT is not defined for the ring under consideration. That is the case for the rings used in the commitment and proofs described above, where only a partial FFT is available.
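
    The following runnable Python sketch illustrates the FFT/NTT multiplication this paragraph refers to, in the concrete case ℤₚ[x]/⟨xⁿ + 1⟩: a primitive 2n-th root of unity ψ turns negacyclic convolution into a pointwise product. The parameters p = 17, n = 8, ψ = 3 are toy choices, not the thesis's.

        # Negacyclic (x^n + 1) polynomial multiplication via the NTT.
        p, n, psi = 17, 8, 3             # psi is a primitive 16th root mod 17
        omega = psi * psi % p            # primitive n-th root of unity

        def ntt(a, w, p):
            # Recursive Cooley-Tukey transform; len(a) must be a power of two.
            if len(a) == 1:
                return a[:]
            even = ntt(a[0::2], w * w % p, p)
            odd = ntt(a[1::2], w * w % p, p)
            half, out, wk = len(a) // 2, [0] * len(a), 1
            for k in range(half):
                t = wk * odd[k] % p
                out[k], out[k + half] = (even[k] + t) % p, (even[k] - t) % p
                wk = wk * w % p
            return out

        def negacyclic_mul(f, g):
            # Weight by psi^i to fold the x^n = -1 reduction into a cyclic NTT.
            fw = [f[i] * pow(psi, i, p) % p for i in range(n)]
            gw = [g[i] * pow(psi, i, p) % p for i in range(n)]
            H = [x * y % p for x, y in zip(ntt(fw, omega, p), ntt(gw, omega, p))]
            hw = ntt(H, pow(omega, -1, p), p)            # inverse transform
            inv_n, inv_psi = pow(n, -1, p), pow(psi, -1, p)
            return [hw[i] * inv_n % p * pow(inv_psi, i, p) % p for i in range(n)]

        f = [1, 2, 3, 4, 5, 6, 7, 8]
        g = [8, 7, 6, 5, 4, 3, 2, 1]
        h = negacyclic_mul(f, g)

        # Schoolbook check: coefficients wrap with a sign flip since x^n = -1.
        check = [0] * n
        for i in range(n):
            for j in range(n):
                k, c = (i + j) % n, f[i] * g[j] * (-1 if i + j >= n else 1)
                check[k] = (check[k] + c) % p
        assert h == check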