
    Formal verification of a software countermeasure against instruction skip attacks

    Fault attacks against embedded circuits have opened up many new attack paths against secure circuits. Every attack path relies on a specific fault model which defines the type of faults that the attacker can induce. On embedded processors, a fault model consisting of the skip of a single assembly instruction is very useful to an attacker and has been achieved with several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program against instruction skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost every instruction of the Thumb-2 instruction set and formally verify this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
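    A minimal sketch of the underlying idea, written in Python rather than Thumb-2 assembly and not the paper's formally verified transformation: an idempotent instruction tolerates a single skip if it is simply duplicated, since executing either copy yields the same architectural state. The mnemonic set and per-line classification below are hypothetical placeholders; a real classification must also inspect the operands (e.g., "add r0, r0, #1" is not idempotent).

        # Toy source-to-source hardening pass: duplicate instructions assumed idempotent.
        IDEMPOTENT = {"mov", "ldr", "str", "and", "orr"}  # assumption: operands keep these idempotent

        def harden(asm_lines):
            """Duplicate idempotent instructions so a single skip cannot remove their effect."""
            hardened = []
            for line in asm_lines:
                parts = line.split()
                hardened.append(line)
                if parts and parts[0].lower() in IDEMPOTENT:
                    hardened.append(line)  # the second copy re-establishes the result if the first is skipped
            return hardened

        print(harden(["mov r1, r0", "ldr r2, [r1]", "bl compute"]))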

    Quantum Cryptography Beyond Quantum Key Distribution

    Quantum cryptography is the art and science of exploiting quantum mechanical effects in order to perform cryptographic tasks. While the most well-known example of this discipline is quantum key distribution (QKD), there exist many other applications such as quantum money, randomness generation, secure two- and multi-party computation, and delegated quantum computation. Quantum cryptography also studies the limitations and challenges resulting from quantum adversaries, including the impossibility of quantum bit commitment, the difficulty of quantum rewinding, and the definition of quantum security models for classical primitives. In this review article, aimed primarily at cryptographers unfamiliar with the quantum world, we survey the area of theoretical quantum cryptography, with an emphasis on the constructions and limitations beyond the realm of QKD. Comment: 45 pages, over 245 references.

    Approximate Quantum Error-Correcting Codes and Secret Sharing Schemes

    It is a standard result in the theory of quantum error-correcting codes that no code of length n can fix more than n/4 arbitrary errors, regardless of the dimension of the coding and encoded Hilbert spaces. However, this bound only applies to codes which recover the message exactly. Naively, one might expect that correcting errors to very high fidelity would only allow small violations of this bound. This intuition is incorrect: in this paper we describe quantum error-correcting codes capable of correcting up to (n-1)/2 arbitrary errors with fidelity exponentially close to 1, at the price of increasing the size of the registers (i.e., the coding alphabet). This demonstrates a sharp distinction between exact and approximate quantum error correction. The codes have the property that any t components reveal no information about the message, and so they can also be viewed as error-tolerant secret sharing schemes. The construction has several interesting implications for cryptography and quantum information theory. First, it suggests that secret sharing is a better classical analogue to quantum error correction than is classical error correction. Second, it highlights an error in a purported proof that verifiable quantum secret sharing (VQSS) is impossible when the number of cheaters t is n/4. More generally, the construction illustrates a difference between exact and approximate requirements in quantum cryptography and (yet again) the delicacy of security proofs and impossibility results in the quantum model. Comment: 14 pages, no figures.
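    As a concrete reminder of the classical analogue invoked above, the sketch below implements plain Shamir-style threshold secret sharing over a prime field (not the paper's approximate quantum codes): a random degree-t polynomial yields n shares of which any t are statistically independent of the secret, while any t+1 reconstruct it exactly. The modulus and parameters are illustrative.

        import random

        P = 2**61 - 1  # a Mersenne prime, used as the field modulus (illustrative)

        def share(secret, t, n):
            """Split `secret` into n shares; any t of them reveal nothing about it."""
            coeffs = [secret % P] + [random.randrange(P) for _ in range(t)]  # degree-t polynomial
            return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at 0 from any t+1 shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = share(42, t=2, n=5)
        assert reconstruct(shares[:3]) == 42   # any t+1 = 3 shares suffice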

    Predictable arguments of knowledge

    We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems in which the prover's answer can be predicted given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK). Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can without loss of generality be assumed to have only one round (i.e., two messages) of communication. We additionally explore PAoK satisfying further properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
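    A toy instantiation of a secure sketch for the Hamming-distance setting, in the spirit of the code-offset construction (illustrative only: a real fuzzy extractor would pair the sketch with a randomness extractor to derive R, and would use a much stronger code than a 3-bit repetition code):

        import secrets

        R = 3  # repetition factor of the toy error-correcting code

        def encode(x):
            return [b for b in x for _ in range(R)]

        def decode(bits):
            return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

        def sketch(w):
            """SS(w): publish w XOR C(x) for a fresh random codeword C(x)."""
            x = [secrets.randbelow(2) for _ in range(len(w) // R)]
            return [wi ^ ci for wi, ci in zip(w, encode(x))]

        def recover(w_prime, s):
            """Rec(w', s): recover w exactly whenever w' is close enough to w."""
            x = decode([wi ^ si for wi, si in zip(w_prime, s)])  # decoding removes the w/w' difference
            return [si ^ ci for si, ci in zip(s, encode(x))]

        w = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # e.g. a bit string derived from a biometric reading
        s = sketch(w)
        w_noisy = w[:]; w_noisy[4] ^= 1            # a nearby reading of the same source
        assert recover(w_noisy, s) == w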

    Learning with Errors is easy with quantum samples

    Learning with Errors (LWE) is one of the fundamental problems in computational learning theory and has in recent years become the cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the Learning with Errors problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm suggests a possible way of attacking LWE-based encryption by using classical samples to approximate the quantum sample state, since our quantum learning algorithm would then solve LWE.
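    To fix notation for readers outside the area, a classical LWE sample is a pair (a, b) with b = <a, s> + e mod q, where s is the secret vector and e is drawn from a narrow error distribution. The sketch below generates such samples with toy, non-cryptographic parameters and a uniform error in place of the discrete Gaussian used in practice.

        import random

        n, q, num_samples = 8, 97, 5
        bound = 2                                        # toy error bound

        s = [random.randrange(q) for _ in range(n)]      # the secret vector

        def lwe_sample(secret):
            a = [random.randrange(q) for _ in range(n)]  # uniform public vector
            e = random.randint(-bound, bound)            # small error term
            b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
            return a, b

        samples = [lwe_sample(s) for _ in range(num_samples)]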

    Transparent code authentication at the processor level

    The authors present a lightweight authentication mechanism that verifies the authenticity of code and thereby addresses the virus and malicious-code problem at the hardware level, eliminating the need for trusted extensions in the operating system. The proposed technique tightly integrates the authentication mechanism into the processor core. The authentication latency is hidden behind the memory access latency, thereby allowing seamless on-the-fly authentication of instructions. In addition, the proposed authentication method supports seamless encryption of code (and static data). Consequently, while providing software users with assurance of the authenticity of programs executing on their hardware, the proposed technique also protects the software manufacturers' intellectual property through encryption. The performance analysis shows that, under mild assumptions, the presented technique introduces negligible overhead even for moderate cache sizes.
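    A rough software analogue of the mechanism, with a hypothetical key and block size (the paper realizes this in hardware alongside the cache, so the check overlaps the memory access): each cache-line-sized block of code carries a MAC computed under a per-device key at installation time, and the block is re-verified on fetch before its instructions are allowed to execute.

        import hashlib
        import hmac

        BLOCK = 32                              # assumed cache-line size in bytes
        DEVICE_KEY = b"per-device secret"       # hypothetical; would be held inside the processor

        def install(code: bytes):
            """Compute one MAC per code block at installation time."""
            blocks = [code[i:i + BLOCK] for i in range(0, len(code), BLOCK)]
            tags = [hmac.new(DEVICE_KEY, b, hashlib.sha256).digest() for b in blocks]
            return blocks, tags

        def fetch(blocks, tags, idx):
            """Re-verify a block on fetch; reject it if the MAC does not match."""
            tag = hmac.new(DEVICE_KEY, blocks[idx], hashlib.sha256).digest()
            if not hmac.compare_digest(tag, tags[idx]):
                raise RuntimeError("code authentication failed")  # analogous to raising a hardware fault
            return blocks[idx]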