
    Improved Trial Division Algorithm by Lagrange's Interpolation Function

    Nowadays, data communication over the internet increases the security risk for both the receiver and the transmitter. To reduce this risk, cryptographic techniques based on private and public keys are used for authentication. The processes of encryption and decryption improve data security. Asymmetric cryptography provides the well-known RSA public-key technique, whose strength rests on its prime factors. Various mathematical functions have been used to estimate these prime factors. In this paper, a derivation based on Lagrange's interpolation is used to estimate a prime factor, which reduces the complexity of the prime factorization
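
    The abstract names Lagrange's interpolation as the underlying tool but does not reproduce the derivation, so the following is only a minimal sketch of Lagrange interpolation itself, with purely illustrative sample points rather than anything taken from the paper.

        # Minimal sketch of Lagrange interpolation, the tool named in the abstract.
        # The sample points are hypothetical; the paper's prime-factor derivation
        # is not reproduced here.
        from fractions import Fraction

        def lagrange_interpolate(points, x):
            """Evaluate at x the unique polynomial passing through the given points."""
            total = Fraction(0)
            for i, (xi, yi) in enumerate(points):
                term = Fraction(yi)
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        term *= Fraction(x - xj, xi - xj)
                total += term
            return total

        points = [(1, 2), (2, 5), (3, 10)]        # illustrative data only
        print(lagrange_interpolate(points, 4))    # -> 17, since the polynomial is x^2 + 1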

    Strong key derivation from noisy sources

    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee. This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe fuzzy extractors providing computational security can overcome limitations of traditional approaches. The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security
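
    Because the two-stage generate/reproduce interface is central to everything that follows, here is a toy sketch of that interface instantiated with the classic code-offset idea and a 3-repetition code; the helper-data layout, hash choice, and parameters are assumptions made for illustration, not any of the dissertation's constructions.

        import hashlib, secrets

        def _encode(bits):                     # toy repetition code: repeat each bit 3x
            return [b for bit in bits for b in (bit, bit, bit)]

        def _decode(bits):                     # majority vote per 3-bit block
            return [1 if sum(bits[i:i+3]) >= 2 else 0 for i in range(0, len(bits), 3)]

        def generate(reading):
            """Enrollment: derive a key and public helper data from the first reading."""
            msg = [secrets.randbelow(2) for _ in range(len(reading) // 3)]
            codeword = _encode(msg)
            helper = [r ^ c for r, c in zip(reading, codeword)]   # code offset
            key = hashlib.sha256(bytes(codeword)).digest()        # key from the codeword
            return key, helper

        def reproduce(noisy_reading, helper):
            """Authentication: recover the same key from a close, noisy re-reading."""
            shifted = [r ^ h for r, h in zip(noisy_reading, helper)]   # codeword + noise
            corrected = _encode(_decode(shifted))                      # correct the noise
            return hashlib.sha256(bytes(corrected)).digest()

        w = [secrets.randbelow(2) for _ in range(30)]    # enrollment reading
        w_noisy = list(w); w_noisy[4] ^= 1               # later reading with one flipped bit
        key, helper = generate(w)
        assert reproduce(w_noisy, helper) == key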

    Code Offset in the Exponent

    Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications - key derivation from biometrics and physical unclonable functions - which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable
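
    To make the encoding step concrete, a toy sketch of "code offset in the exponent" follows: a random codeword of a linear code pads the sampled value, but each padded symbol is published as a group element g^(w_i + c_i) rather than in the clear. The tiny Mersenne-prime modulus, assumed generator, and single-parity-check code are illustrative choices, not the paper's parameters.

        import secrets

        P = 2**127 - 1          # toy prime modulus (NOT a cryptographically sized group)
        G = 3                   # assumed generator, for illustration only
        Q = P - 1               # exponents are taken modulo the group order

        def random_codeword(n):
            """Random codeword of a toy linear code: n-1 symbols plus one parity symbol."""
            msg = [secrets.randbelow(Q) for _ in range(n - 1)]
            return msg + [sum(msg) % Q]

        def encode_in_exponent(w):
            c = random_codeword(len(w))
            return [pow(G, (wi + ci) % Q, P) for wi, ci in zip(w, c)]   # g^(w_i + c_i)

        w = [secrets.randbelow(Q) for _ in range(4)]   # sampled reading, 4 symbols
        helper = encode_in_exponent(w)                 # published group elements only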

    On the weaknesses of PBKDF2

    Password-based key derivation functions are of particular interest in cryptography because they (a) take as input a password/passphrase (which is usually short and lacks sufficient entropy) and derive a cryptographic key; (b) slow down brute-force and dictionary attacks as much as possible. In PKCS#5 [17], RSA Laboratories described a password-based key derivation function called PBKDF2 that has been widely adopted in many security-related applications [6, 7, 11]. In order to slow down brute-force attacks, PBKDF2 introduces CPU-intensive operations based on an iterated pseudorandom function. Such a pseudorandom function is HMAC-SHA-1 by default. In this paper we show that, if HMAC-SHA-1 is computed in a standard mode without following the performance improvements described in the implementation note of RFC 2104 [13] and FIPS 198-1 [14], an attacker is able to avoid 50% of PBKDF2's CPU-intensive operations by replacing them with precomputed values. We note that a number of well-known and widely used crypto libraries are subject to this vulnerability. In addition to this vulnerability, we describe some other minor optimizations that an attacker can exploit to further reduce the key derivation time
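
    The 50% saving stems from the fact that PBKDF2 keys every HMAC call with the same password, so the two compression-function calls that absorb the padded key can be computed once and reused (the RFC 2104 implementation note). The sketch below contrasts a naive per-iteration HMAC with one that reuses the precomputed key state; the iteration count and inputs are illustrative only.

        import hashlib, hmac

        def pbkdf2_block_naive(password, salt, iterations):
            """One PBKDF2 block, redoing the HMAC key schedule on every iteration."""
            u = hmac.new(password, salt + b"\x00\x00\x00\x01", hashlib.sha1).digest()
            out = u
            for _ in range(iterations - 1):
                u = hmac.new(password, u, hashlib.sha1).digest()   # key pads rehashed each time
                out = bytes(a ^ b for a, b in zip(out, u))
            return out

        def pbkdf2_block_precomputed(password, salt, iterations):
            """Same block, but the inner/outer key pads are absorbed only once."""
            base = hmac.new(password, digestmod=hashlib.sha1)      # key schedule done once
            def prf(data):
                h = base.copy()                                    # reuse precomputed state
                h.update(data)
                return h.digest()
            u = prf(salt + b"\x00\x00\x00\x01")
            out = u
            for _ in range(iterations - 1):
                u = prf(u)
                out = bytes(a ^ b for a, b in zip(out, u))
            return out

        assert pbkdf2_block_naive(b"pw", b"salt", 1000) == pbkdf2_block_precomputed(b"pw", b"salt", 1000)

    Since each inner HMAC message here fits in a single SHA-1 block, dropping the two key-pad compressions removes two of the four compressions per iteration, which is where the 50% figure comes from.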

    Public-Key encryption resilient against linear Related-Key attacks revisited

    Wee (PKC'12) proposed a generic public-key encryption scheme in the setting of related-key attacks. Bellare, Paterson and Thomson (Asiacrypt'12) provided a framework enabling related-key attack (RKA) secure cryptographic primitives for a class of non-linear related-key derivation functions. However, in both of their constructions, the instantiations achieving full (not weak) RKA security are given only for the scenario in which the private key consists of a single element; in other words, every element of the private key undergoes the same modification. This is impractical in the real world. In this paper, we concentrate on the security of public-key encryption schemes under linear related-key attacks in the setting of multi-element private keys (that is, the private key is composed of more than one element), where an adversary is allowed to tamper with any part of the private key stored in a hardware device and subsequently observe the outcome of the public-key encryption system under the modified private key. We define the security model for RKA-secure public-key encryption schemes as chosen-ciphertext and related-key attack (CC-RKA) security, which means that a public-key encryption scheme remains secure even when an adversary is allowed to query the decryption oracle on linear shifts of any component of the private key. We then present a concrete public-key encryption scheme whose private key is formed of several elements and whose CC-RKA security holds under the decisional BDH assumption in the standard model
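
    A minimal sketch of the CC-RKA decryption oracle described above: the adversary names a linear shift for each component of a multi-element private key together with a ciphertext, and observes decryption under the shifted key. The scheme object and its methods are placeholders, not the paper's concrete construction.

        class CCRKADecryptionOracle:
            def __init__(self, scheme, secret_key, challenge_ct):
                self.scheme = scheme                  # assumed to expose .order and .decrypt
                self.sk = list(secret_key)            # multi-element private key
                self.challenge = challenge_ct

            def decrypt_related(self, shifts, ciphertext):
                # Linear related-key derivation: sk'_i = sk_i + delta_i (mod group order).
                tampered = [(s + d) % self.scheme.order for s, d in zip(self.sk, shifts)]
                # Usual CCA-style restriction: the challenge ciphertext may not be
                # decrypted under the untampered key.
                if ciphertext == self.challenge and all(d == 0 for d in shifts):
                    return None
                return self.scheme.decrypt(tampered, ciphertext)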

    Efficient non-malleable codes and key derivation for poly-size tampering circuits

    Non-malleable codes, defined by Dziembowski, Pietrzak, and Wichs (ICS '10), provide roughly the following guarantee: if a codeword c encoding some message x is tampered to c' = f(c) such that c' ≠ c, then the tampered message x' contained in c' reveals no information about x. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks. One cannot have an efficient non-malleable code that protects against all efficient tampering functions f. However, in this paper we show 'the next best thing': for any polynomial bound s given a priori, there is an efficient non-malleable code that protects against all tampering functions f computable by a circuit of size s. More generally, for any family of tampering functions F of size |F| ≤ 2^s, there is an efficient non-malleable code that protects against all f in F. The rate of our codes, defined as the ratio of message to codeword size, approaches 1. Our results are information-theoretic and our main proof technique relies on a careful probabilistic-method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the 'common reference string' model. We also introduce a new notion of non-malleable key derivation, which uses randomness x to derive a secret key y = h(x) in such a way that, even if x is tampered to a different value x' = f(x), the derived key y' = h(x') does not reveal any information about y. Our results for non-malleable key derivation are analogous to those for non-malleable codes. As a useful tool in our analysis, we rely on the notion of 'leakage-resilient storage' of Davì, Dziembowski, and Venturi (SCN '10), and, as a result of independent interest, we also significantly improve on the parameters of such schemes
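
    The guarantee is easiest to see as a tampering experiment; the sketch below states it with abstract encode/decode placeholders (not the paper's construction): non-malleability asks that the outcome be either the special value "same" or something that carries no information about x.

        def tamper_experiment(encode, decode, x, f):
            c = encode(x)              # codeword for the message x
            c_tampered = f(c)          # adversarial tampering, f from a size-bounded family
            if c_tampered == c:
                return "same"          # modeled as the special symbol same*
            return decode(c_tampered)  # must reveal nothing about x (or be invalid)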

    Backdoored Hash Functions: Immunizing HMAC and HKDF

    Security of cryptographic schemes is traditionally measured as the inability of resource-constrained adversaries to violate a desired security goal. The security argument usually relies on a sound design of the underlying components. Arguably, one of the most devastating failures of this approach can be observed when considering adversaries such as intelligence agencies that can influence the design, implementation, and standardization of cryptographic primitives. While the most prominent example of cryptographic backdoors is NIST’s Dual_EC_DRBG, believing that such attempts have ended there is naive. Security of many cryptographic tasks, such as digital signatures, pseudorandom generation, and password protection, crucially relies on the security of hash functions. In this work, we consider the question of how backdoors can endanger security of hash functions and, especially, if and how we can thwart such backdoors. We particularly focus on immunizing arbitrarily backdoored versions of HMAC (RFC 2104) and the hash-based key derivation function HKDF (RFC 5869), which are widely deployed in critical protocols such as TLS. We give evidence that the weak pseudorandomness property of the compression function in the hash function is in fact robust against backdooring. This positive result allows us to build a backdoor-resistant pseudorandom function, i.e., a variant of HMAC, and we show that HKDF can be immunized against backdoors at little cost. Unfortunately, we also argue that safeguarding unkeyed hash functions against backdoors is presumably hard
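
    For reference on the primitives being immunized, here is a compact HKDF (RFC 5869) built from HMAC-SHA-256, showing the extract-then-expand structure the paper works with; this is the standard construction, not the backdoor-resistant variant proposed in the paper.

        import hashlib, hmac

        def hkdf_extract(salt, ikm):
            return hmac.new(salt, ikm, hashlib.sha256).digest()            # PRK = HMAC(salt, IKM)

        def hkdf_expand(prk, info, length):
            okm, block = b"", b""
            for counter in range(1, -(-length // 32) + 1):                 # ceil(length / 32) blocks
                block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
                okm += block
            return okm[:length]

        prk = hkdf_extract(b"salt", b"input keying material")
        key = hkdf_expand(prk, b"context info", 42)
        assert len(key) == 42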