    Weakened Random Oracle Models with Target Prefix

    Weakened random oracle models (WROMs) are variants of the random oracle model (ROM). A WROM provides the random oracle together with an additional oracle that breaks some property of a hash function. By analyzing the security of cryptographic schemes in WROMs, we can identify the properties of a hash function on which the security of each scheme depends. Liskov (SAC 2006) proposed WROMs, and Numayama et al. (PKC 2008) later formalized them as the CT-ROM, SPT-ROM, and FPT-ROM, whose additional oracles break collision resistance, second-preimage resistance, and preimage resistance, respectively. Tan and Wong (ACISP 2012) proposed the generalized FPT-ROM (GFPT-ROM), intended to capture the chosen-prefix collision attack suggested by Stevens et al. (EUROCRYPT 2007). In this paper, in order to analyze the security of cryptographic schemes more precisely, we formalize the GFPT-ROM and propose three additional WROMs that capture the chosen-prefix collision attack and its variants. In particular, we focus on signature schemes such as RSA-FDH, its variants, and DSA, in order to understand the essential roles of WROMs in their security proofs.
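    To make the idea of "random oracle plus a property-breaking oracle" concrete, the following is a minimal, illustrative Python sketch of an FPT-ROM-style pair of oracles: a lazily sampled random oracle next to an extra oracle that hands out first preimages. The names `ToyFPTROM`, `h`, and `first_preimage` are invented for this sketch; it is not the paper's formalization.

```python
import os
import secrets
from collections import defaultdict

class ToyFPTROM:
    """Toy lazily-sampled random oracle with an additional oracle that
    breaks preimage resistance (FPT-ROM-style). Illustrative only."""

    def __init__(self, out_bits=32):
        self.out_bits = out_bits
        self.table = {}                      # x -> y for inputs queried so far
        self.preimages = defaultdict(list)   # y -> [x, ...]

    def h(self, x: bytes) -> int:
        """The random oracle: a fresh uniform output for every new input."""
        if x not in self.table:
            y = secrets.randbelow(1 << self.out_bits)
            self.table[x] = y
            self.preimages[y].append(x)
        return self.table[x]

    def first_preimage(self, y: int) -> bytes:
        """Additional oracle: return some preimage of y, inventing a fresh
        one if no queried input maps to y yet (toy behaviour)."""
        if not self.preimages[y]:
            x = os.urandom(8)                # pretend-sampled preimage
            self.table[x] = y
            self.preimages[y].append(x)
        return self.preimages[y][0]

oracle = ToyFPTROM()
digest = oracle.h(b"message")
print(oracle.first_preimage(digest) == b"message")  # True: the queried preimage is returned
```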

    Testing Strategies for Model-Based Development

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing), which determines whether the model implements the high-level requirements, and model-based testing (conformance testing), which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly, and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
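    As a rough illustration of the conformance-testing half of this split (a back-to-back comparison of model and generated code, not the report's actual tooling), the hypothetical sketch below drives a toy model and a stand-in for its generated implementation with the same inputs and checks behavioral equivalence.

```python
import random

def model_step(state, inp):
    """Reference model: a toy saturating counter specification."""
    return min(state + inp, 10)

def generated_step(state, inp):
    """Stand-in for the code generated from the model (in practice this
    would be the compiled or generated implementation under test)."""
    return min(state + inp, 10)

def conformance_test(n_cases=1000, seed=0):
    """Back-to-back (conformance) testing: feed identical inputs to the
    model and the generated code and compare their behavior."""
    rng = random.Random(seed)
    m_state = g_state = 0
    for _ in range(n_cases):
        inp = rng.randint(0, 5)
        m_state = model_step(m_state, inp)
        g_state = generated_step(g_state, inp)
        assert m_state == g_state, f"divergence on input {inp}"
    return True

print(conformance_test())
```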

    Request for Review of Key Wrap Algorithms

    A key wrap algorithm is a secret key algorithm for the authenticated encryption of specialized data such as cryptographic keys. Four key wrap algorithms have been proposed for the draft ASC X9 standard, ANS X9.102. NIST is serving as the editor of ANS X9.102, and, on behalf of the X9F1 working group, NIST requests a cryptographic review of the four algorithms. This document specifies the algorithms and suggests security models for their analysis. Comments will be accepted until May 21, 2005.
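    For readers unfamiliar with the primitive, the snippet below shows key wrapping in practice using the RFC 3394 AES Key Wrap as implemented by Python's `cryptography` package. It illustrates the general "authenticated encryption of a key" operation only; it is not presented as one of the four draft ANS X9.102 submissions discussed in the request.

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)           # key-encryption key (the wrapping key)
target_key = os.urandom(16)    # the specialized data: a key to be protected

wrapped = aes_key_wrap(kek, target_key)    # authenticated encryption of the key
unwrapped = aes_key_unwrap(kek, wrapped)   # raises InvalidUnwrap if the blob was tampered with

assert unwrapped == target_key
print(wrapped.hex())
```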

    How to Construct Cryptosystems and Hash Functions in Weakened Random Oracle Models

    In this paper, we discuss how to construct secure cryptosystems and secure hash functions in weakened random oracle models. The weakened random oracle model (WROM), which was introduced by Numayama et al. at PKC 2008, is a random oracle with several weaknesses. Though the security of cryptosystems in the random oracle model (ROM) has been discussed sufficiently, the same is not true for the WROM: only a few cryptosystems have been proven secure in it. In this paper, we propose a new conversion that can convert any cryptosystem secure in the ROM to a new cryptosystem that is secure in the first-preimage-tractable random oracle model (FPT-ROM) without re-proof. The FPT-ROM is the ROM without preimage resistance and is therefore the weakest of the WROMs. Since there are many secure cryptosystems in the ROM, our conversion can yield many cryptosystems secure in the FPT-ROM. The fixed-input-length weakened random oracle model (FIL-WROM), introduced by Liskov at SAC 2006, reflects the known weaknesses of compression functions. We propose new hash functions that are indifferentiable from a random oracle when the underlying compression function is modeled by a two-way partially-specified preimage-tractable fixed-input-length random oracle model (WFIL-ROM). The WFIL-ROM is the FIL-ROM without two types of preimage resistance and is the weakest of the FIL-WROMs. The proposed hash functions are more efficient than the existing hash functions that are indifferentiable from a random oracle when the underlying compression function is modeled by the WFIL-ROM.
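    As background for the FIL-WROM setting, here is a hypothetical Python sketch of the standard Merkle-Damgard iteration of a fixed-input-length compression function, i.e. the kind of component those models idealize. Both the toy compression function and the construction are illustrative only and are not the hash functions proposed in the paper.

```python
def toy_compression(block: bytes, chaining: bytes) -> bytes:
    """Stand-in fixed-input-length compression function (NOT secure; in the
    FIL-WROM setting this component is modeled as a FIL random oracle with
    some preimage-type properties removed)."""
    mixed = bytes(a ^ b for a, b in zip(block, chaining))
    return bytes((b * 131 + 7) % 256 for b in mixed)

def merkle_damgard(msg: bytes, block_size: int = 16) -> bytes:
    """Plain Merkle-Damgard iteration of a FIL compression function, shown
    only to make the 'hash function from compression function' setting
    concrete."""
    # Length-strengthening padding: 0x80, zero bytes, 8-byte message length.
    padded = msg + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % block_size)
    padded += len(msg).to_bytes(8, "big")
    chaining = b"\x00" * block_size                      # fixed IV
    for i in range(0, len(padded), block_size):
        chaining = toy_compression(padded[i:i + block_size], chaining)
    return chaining

print(merkle_damgard(b"hello world").hex())
```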

    Hierarchical Integrated Signature and Encryption

    In this work, we introduce the notion of hierarchical integrated signature and encryption (HISE), wherein a single public key is used for both signature and encryption, and one can derive a secret key used only for decryption from the signing key, which enables secure delegation of decryption capability. HISE enjoys the benefit of key reuse, and admits individual key escrow. We present two generic constructions of HISE. One is from (constrained) identity-based encryption. The other is from uniform one-way function, public-key encryption, and general-purpose public-coin zero-knowledge proof of knowledge. To further attain global key escrow, we take a little detour to revisit global escrow PKE, an object both of independent interest and with many applications. We formalize the syntax and security model of global escrow PKE, and provide two generic constructions. The first embodies a generic approach to compile any PKE into one with global escrow property. The second establishes a connection between three-party non-interactive key exchange and global escrow PKE. Combining the results developed above, we obtain HISE schemes that support both individual and global key escrow. We instantiate our generic constructions of (global escrow) HISE and implement all the resulting concrete schemes for 128-bit security. Our schemes have performance that is comparable to the best Cartesian product combined public-key scheme, and exhibit advantages in terms of richer functionality and public key reuse. As a byproduct, we obtain a new global escrow PKE scheme that is 12-30x faster than the best prior work, which might be of independent interest.
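    The abstract describes the functionality rather than the constructions, so the following is only a hypothetical Python interface sketch of the HISE syntax as described: one public key, a signing key, and a decryption-only key derivable from the signing key. The method names are invented for illustration and do not reflect either of the paper's constructions.

```python
from abc import ABC, abstractmethod
from typing import Tuple

class HISE(ABC):
    """Interface sketch of hierarchical integrated signature and encryption."""

    @abstractmethod
    def keygen(self) -> Tuple[bytes, bytes]:
        """Return (public_key, signing_key); the single public key serves
        both signature verification and encryption."""

    @abstractmethod
    def derive_decryption_key(self, signing_key: bytes) -> bytes:
        """Derive a key that can only decrypt, enabling delegation of the
        decryption capability (individual key escrow) without signing."""

    @abstractmethod
    def sign(self, signing_key: bytes, message: bytes) -> bytes: ...

    @abstractmethod
    def verify(self, public_key: bytes, message: bytes, signature: bytes) -> bool: ...

    @abstractmethod
    def encrypt(self, public_key: bytes, message: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, decryption_key: bytes, ciphertext: bytes) -> bytes: ...
```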

    On Secure Ratcheting with Immediate Decryption

    Hash Gone Bad: Automated discovery of protocol attacks that exploit hash function weaknesses

    Most cryptographic protocols use cryptographic hash functions as a building block. The security analyses of these protocols typically assume that the hash functions are perfect (such as in the random oracle model). However, in practice, most widely deployed hash functions are far from perfect -- and as a result, the analysis may miss attacks that exploit the gap between the model and the actual hash function used. We develop the first methodology to systematically discover attacks on security protocols that exploit weaknesses in widely deployed hash functions. We achieve this by revisiting the gap between theoretical properties of hash functions and the weaknesses of real-world hash functions, from which we develop a lattice of threat models. For all of these threat models, we develop fine-grained symbolic models. Our methodology's fine-grained models cannot be directly encoded in existing state-of-the-art analysis tools by just using their equational reasoning. We therefore develop extensions for the two leading tools, Tamarin and ProVerif. In extensive case studies using our methodology, the extended tools rediscover all attacks previously reported on the analyzed protocols and discover several new variants.
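    The toy Python example below (written for this listing, not taken from the paper or its tool extensions) illustrates the kind of gap the methodology targets: a protocol step that is sound for an ideal hash breaks once the deployed hash is weak enough that chosen-prefix-style collisions are findable. The deliberately truncated `weak_hash` stands in for a broken real-world hash.

```python
import hashlib
from itertools import count

def weak_hash(data: bytes) -> bytes:
    """Stand-in for a weak deployed hash: SHA-256 truncated to 3 bytes,
    so birthday collisions appear after roughly 2**12 attempts."""
    return hashlib.sha256(data).digest()[:3]

def chosen_prefix_collision(prefix_a: bytes, prefix_b: bytes):
    """Brute-force suffixes sa, sb such that
    weak_hash(prefix_a + sa) == weak_hash(prefix_b + sb)."""
    seen_a, seen_b = {}, {}
    for i in count():
        s = i.to_bytes(8, "big")
        ha, hb = weak_hash(prefix_a + s), weak_hash(prefix_b + s)
        if ha in seen_b:
            return s, seen_b[ha]
        if hb in seen_a:
            return seen_a[hb], s
        seen_a[ha], seen_b[hb] = s, s

benign = b"transfer 1 EUR to alice;"
malicious = b"transfer 9999 EUR to mallory;"
sa, sb = chosen_prefix_collision(benign, malicious)

# Any protocol step that authenticates a message only via weak_hash of it
# (e.g. signing the digest) now vouches for both messages at once.
assert weak_hash(benign + sa) == weak_hash(malicious + sb)
print("colliding messages:", benign + sa, malicious + sb)
```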

    Induction of Word and Phrase Alignments for Automatic Document Summarization

    Current research in automatic single document summarization is dominated by two effective, yet naive approaches: summarization by sentence extraction, and headline generation via bag-of-words models. While successful in some tasks, neither of these models is able to adequately capture the large set of linguistic devices utilized by humans when they produce summaries. One possible explanation for the widespread use of these models is that good techniques have been developed to extract appropriate training data for them from existing document/abstract and document/headline corpora. We believe that future progress in automatic summarization will be driven both by the development of more sophisticated, linguistically informed models and by more effective leveraging of document/abstract corpora. In order to open the doors to simultaneously achieving both of these goals, we have developed techniques for automatically producing word-to-word and phrase-to-phrase alignments between documents and their human-written abstracts. These alignments make explicit the correspondences that exist in such document/abstract pairs, and create a potentially rich data source from which complex summarization algorithms may learn. This paper describes experiments we have carried out to analyze the ability of humans to perform such alignments, and based on these analyses, we describe experiments for creating them automatically. Our model for the alignment task is based on an extension of the standard hidden Markov model, and learns to create alignments in a completely unsupervised fashion. We describe our model in detail and present experimental results that show that our model is able to learn to reliably identify word- and phrase-level alignments in a corpus of pairs.
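    To sketch what HMM-style word alignment between a document and its abstract looks like, here is a deliberately simplified Viterbi aligner with hand-set scores: hidden states are document positions, each abstract word is emitted from one position, and transitions prefer small forward jumps. It is an invented illustration, not the authors' extended (unsupervised, phrase-capable) model.

```python
def viterbi_align(doc, abstract, jump_penalty=0.5, match_bonus=5.0):
    """Align each abstract word to a document position with a toy HMM."""
    n = len(doc)

    def emit(word, j):                       # score of emitting `word` at doc position j
        return match_bonus if word == doc[j] else 0.0

    def trans(i, j):                         # prefer moving one step to the right
        return -jump_penalty * abs(j - i - 1)

    score = [emit(abstract[0], j) for j in range(n)]
    back = []
    for t in range(1, len(abstract)):
        new_score, new_back = [], []
        for j in range(n):
            best_i = max(range(n), key=lambda i: score[i] + trans(i, j))
            new_score.append(score[best_i] + trans(best_i, j) + emit(abstract[t], j))
            new_back.append(best_i)
        score, back = new_score, back + [new_back]

    j = max(range(n), key=lambda j: score[j])
    path = [j]
    for pointers in reversed(back):          # backtrack to recover the full alignment
        j = pointers[j]
        path.append(j)
    return list(reversed(path))              # document position for each abstract word

doc = "the model learns alignments from unlabeled document abstract pairs".split()
abstract = "the model learns alignments".split()
print(list(zip(abstract, viterbi_align(doc, abstract))))
```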