
    sec-certs: Examining the security certification practice for better vulnerability mitigation

    Products certified under security certification frameworks such as Common Criteria undergo significant scrutiny during the costly certification process. Yet, critical vulnerabilities, including private key recovery (ROCA, Minerva, TPM-Fail...), get discovered in certified products with high assurance levels. Furthermore, assessing which certified products are impacted by such vulnerabilities is complicated due to the large amount of unstructured certification-related data and unclear relationships between the certificates. To address these problems, we conducted a large-scale automated analysis of Common Criteria and FIPS 140 certificates. We trained unsupervised models to learn which vulnerabilities from NIST's National Vulnerability Database impact existing certified products and how certified products reference each other. Our tooling automates the analysis of tens of thousands of certification-related documents, extracting machine-readable features where manual analysis is unattainable. Further, we identify the security requirements that are associated with products being affected by fewer and less severe vulnerabilities (on average), indicating which aspects of certification correlate with higher security. We demonstrate how our tool can be used for better vulnerability mitigation in four case studies of known, high-profile vulnerabilities. All tools and continuously updated results are available at https://seccerts.org.
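
    A minimal, illustrative sketch of the kind of matching step such tooling automates follows; this is not the sec-certs API, and the records and similarity heuristic are hand-made stand-ins. It pairs NVD vulnerability entries with certified products by fuzzy matching of vendor/product names.

        # Illustrative sketch only: NOT the sec-certs API. It mimics one step the
        # paper's tooling automates, namely pairing NVD vulnerability records with
        # certified products via fuzzy matching of vendor/product names. The sample
        # records are hand-made stand-ins for parsed NVD and certificate metadata.
        from difflib import SequenceMatcher

        cves = [
            {"id": "CVE-2017-15361", "vendor_product": "infineon rsa library"},        # ROCA
            {"id": "CVE-2019-16863", "vendor_product": "stmicroelectronics st33 tpm"}, # TPM-Fail
        ]
        certificates = [
            {"name": "Infineon security controller with RSA library", "scheme": "CC"},
            {"name": "STMicroelectronics ST33 TPM module", "scheme": "FIPS 140-2"},
        ]

        def score(cve, cert):
            """Crude similarity in [0, 1]; the real tooling uses richer features and models."""
            return SequenceMatcher(None, cve["vendor_product"], cert["name"].lower()).ratio()

        for cve in cves:
            for cert in certificates:
                print(f'{cve["id"]} vs {cert["name"]!r}: {score(cve, cert):.2f}')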

    Minerva: The curse of ECDSA nonces

    We present our discovery of a group of side-channel vulnerabilities in implementations of the ECDSA signature algorithm in a widely used Atmel AT90SC FIPS 140-2 certified smartcard chip and five cryptographic libraries (libgcrypt, wolfSSL, MatrixSSL, SunEC/OpenJDK/Oracle JDK, Crypto++). Vulnerable implementations leak the bit-length of the scalar used in scalar multiplication via timing. Using the leaked bit-length, we mount a lattice attack on a 256-bit curve, after observing enough signing operations. We propose two new methods to recover the full private key requiring just 500 signatures for simulated leakage data, 1200 for real cryptographic library data, and 2100 for smartcard data. The number of signatures needed for a successful attack depends on the chosen method and its parameters as well as on the noise profile, influenced by the type of leakage and the computation platform used. We use the set of vulnerabilities reported in this paper, together with the recently published TPM-FAIL vulnerability, as a basis for real-world benchmark datasets to systematically compare our newly proposed methods and all previously published applicable lattice-based key recovery methods. The resulting exhaustive comparison highlights the methods' sensitivity to their proper parametrization and demonstrates that our methods are more efficient in most cases. For the TPM-FAIL dataset, we decreased the number of required signatures from approximately 40 000 to a mere 900.
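
    To make the lattice step concrete, the sketch below (a simplified construction with toy parameters, not the paper's exact tooling) shows how signatures with leaked nonce bit-lengths are turned into a hidden number problem lattice; reducing the resulting basis with LLL/BKZ would then reveal the private key.

        # Each ECDSA signature satisfies s*k = h + r*d (mod n), so k = u + t*d (mod n)
        # with t = r/s and u = h/s. Knowing every nonce k_i < 2**l makes
        # (k_1, ..., k_m) part of an unusually short vector in the lattice below.
        import random

        def hnp_basis(sigs, n, leak_bits):
            """Integer (m+2)x(m+2) hidden-number-problem basis (whole lattice scaled by n)."""
            B, m = 1 << leak_bits, len(sigs)
            t = [r * pow(s, -1, n) % n for r, s, _ in sigs]
            u = [h * pow(s, -1, n) % n for _, s, h in sigs]
            rows = [[0] * (m + 2) for _ in range(m + 2)]
            for i in range(m):
                rows[i][i] = n * n                           # per-signature reduction rows
            rows[m] = [n * ti for ti in t] + [B, 0]          # coefficient row tied to d
            rows[m + 1] = [n * ui for ui in u] + [0, n * B]  # constant row (Kannan embedding)
            return rows

        # Toy signatures over a small prime "group order"; only the modular relation
        # matters here, so r is drawn at random instead of coming from a curve point.
        n, d, l = 2**31 - 1, 123456789, 20                   # toy order, secret key, nonce bound
        sigs = []
        for _ in range(8):
            k, h, r = random.randrange(1, 1 << l), random.randrange(n), random.randrange(1, n)
            sigs.append((r, pow(k, -1, n) * (h + r * d) % n, h))

        basis = hnp_basis(sigs, n, l)
        # Reducing `basis` with LLL/BKZ (e.g. fpylll or SageMath, not shown here) should
        # yield a short vector whose second-to-last coordinate is +-d * 2**l.
        print(f"built a {len(basis)}x{len(basis[0])} lattice basis")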

    A Formula for Disaster : A Unified Approach to Elliptic Curve Special-Point-Based Attacks

    The Refined Power Analysis, Zero-Value Point, and Exceptional Procedure attacks introduced side-channel techniques against specific cases of elliptic curve cryptography. The three attacks recover bits of a static ECDH key adaptively, collecting information on whether a certain multiple of the input point was computed. We unify and generalize these attacks in a common framework, and solve the corresponding problem for a broader class of inputs. We also introduce a version of the attack against windowed scalar multiplication methods, recovering the full scalar instead of just a part of it. Finally, we systematically analyze elliptic curve point addition formulas from the Explicit-Formulas Database, classify all non-trivial exceptional points, and find them in new formulas. These results indicate the usefulness of our tooling, which we released publicly, for unrolling formulas and finding special points, and potentially for independent future work.
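
    As a complement, here is a hedged toy simulation (not the paper's unified framework) of the adaptive structure these special-point attacks share: the attacker repeatedly submits a crafted input point and learns, via a side-channel oracle, whether a chosen multiple of it was computed. Points are modeled by their discrete logarithms, so no curve arithmetic is needed.

        # Toy adaptive special-point attack against left-to-right double-and-add.
        # Points are represented by their discrete logs relative to a fixed generator;
        # the oracle stands in for the side channel that reveals when the special
        # (e.g. zero-value/exceptional) point occurs during scalar multiplication.
        n = 524287                      # toy prime group order (2**19 - 1)
        d = 0x2BD67                     # static secret scalar (bit length assumed known)
        SPECIAL = 42                    # the special point, written as its discrete log

        def oracle(point):
            """Run double-and-add over d; leak whether the special point ever appears."""
            acc = 0                                        # point at infinity
            for bit in bin(d)[2:]:
                acc = 2 * acc % n                          # doubling
                if acc == SPECIAL:
                    return True
                if bit == "1":
                    acc = (acc + point) % n                # addition of the input point
                    if acc == SPECIAL:
                        return True
            return False

        # Recover d bit by bit: submit the point whose relevant multiple collapses onto
        # the special point exactly when the next key bit is 1.
        recovered = 1                                      # the leading bit is always 1
        for _ in range(d.bit_length() - 1):
            guess = 2 * recovered + 1                      # multiple computed iff next bit is 1
            probe = SPECIAL * pow(guess, -1, n) % n        # pick P so that [guess]P = SPECIAL
            recovered = 2 * recovered + (1 if oracle(probe) else 0)

        assert recovered == d
        print(hex(recovered))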

    Optical Cryptanalysis: Recovering Cryptographic Keys from Power LED Light Fluctuations

    Although power LEDs have been integrated into various devices that perform cryptographic operations for decades, the cryptanalysis risk they pose has not yet been investigated. In this paper, we present optical cryptanalysis, a new form of cryptanalytic side-channel attack, in which secret keys are extracted by using a photodiode to measure the light emitted by a device’s power LED and analyzing subtle fluctuations in the light intensity during cryptographic operations. We analyze the optical leakage of power LEDs of various consumer devices and the factors that affect the optical SNR. We then demonstrate end-to-end optical cryptanalytic attacks against a range of consumer devices (smartphone, smartcard, and Raspberry Pi, along with their USB peripherals) and recover secret keys (RSA, ECDSA, SIKE) from prior and recent versions of popular cryptographic libraries (GnuPG, Libgcrypt, PQCrypto-SIDH) from a maximum distance of 25 meters.
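
    The sketch below is purely illustrative (a synthetic trace and a simple timing distinguisher, not the paper's measurement pipeline): it shows how secret-dependent operation durations could be read off a sampled light-intensity trace once the LED brightness tracks device activity.

        # Synthetic light-intensity trace with secret-dependent operation durations,
        # plus a naive distinguisher that recovers key bits from the dim intervals.
        # A real attack would sample a photodiode pointed at the device's power LED.
        import random

        def synth_trace(key_bits, samples_per_op=50, noise=0.02):
            """Dimmer while the device computes; bit-1 operations take twice as long."""
            trace = []
            for b in key_bits:
                busy = samples_per_op * (2 if b else 1)                             # data-dependent duration
                trace += [0.7 + random.gauss(0, noise) for _ in range(busy)]        # busy (dim)
                trace += [1.0 + random.gauss(0, noise) for _ in range(samples_per_op)]  # idle (bright)
            return trace

        def recover_bits(trace, threshold=0.85):
            """Measure each dim interval's length and classify it as a 0 or 1 bit."""
            dips, run = [], 0
            for v in trace:
                if v < threshold:
                    run += 1
                elif run:
                    dips.append(run)
                    run = 0
            cut = (min(dips) + max(dips)) / 2      # crude split; real traces need calibration
            return [1 if dur > cut else 0 for dur in dips]

        key = [random.randint(0, 1) for _ in range(16)]
        print(recover_bits(synth_trace(key)) == key)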

    Fooling primality tests on smartcards

    We analyse whether the smartcards of the JavaCard platform correctly validate primality of domain parameters. The work is inspired by Albrecht et al. [1], where the authors analysed many open-source libraries and constructed pseudoprimes fooling the primality testing functions. However, in the case of smartcards, often there is no way to invoke the primality test directly, so we trigger it by replacing (EC)DSA and (EC)DH prime domain parameters by adversarial composites. Such a replacement results in vulnerability to Pohlig-Hellman [30] style attacks, leading to private key recovery. Of the nine smartcards we tested (produced by five major manufacturers; see https://crocs.fi.muni.cz/papers/primality_esorics20 for more information), all but one perform no primality test during parameter validation. As the JavaCard platform provides no public primality testing API, the problem cannot be fixed by an extra parameter check, making it difficult to mitigate in already deployed smartcards.
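
    To illustrate why accepting such composites is fatal, the toy sketch below (tiny illustrative numbers, not the paper's exact parameters) shows a Pohlig-Hellman style recovery of a private exponent once the group order is smooth and therefore splits into small pieces.

        # Toy Pohlig-Hellman key recovery: with adversarial domain parameters whose
        # group order is smooth, the discrete log splits into tiny subproblems and
        # the private exponent is reassembled with the Chinese Remainder Theorem.
        from math import prod

        p = 2311                                    # prime modulus with a very smooth group order
        order = p - 1                               # 2310 = 2 * 3 * 5 * 7 * 11
        order_factors = {2: 1, 3: 1, 5: 1, 7: 1, 11: 1}

        def find_generator():
            """Smallest g of full order, verified against every prime factor of the order."""
            return next(g for g in range(2, p)
                        if all(pow(g, order // q, p) != 1 for q in order_factors))

        def subgroup_dlog(g, h, q, e):
            """Discrete log of h to base g in the subgroup of order q**e (brute force)."""
            qe = q ** e
            gs, hs = pow(g, order // qe, p), pow(h, order // qe, p)
            return next(x for x in range(qe) if pow(gs, x, p) == hs)

        def crt(residues, moduli):
            """Combine x = r_i (mod m_i) into a single value modulo prod(m_i)."""
            M = prod(moduli)
            return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli)) % M

        g = find_generator()
        secret = 1337                               # the card's private exponent
        public = pow(g, secret, p)                  # what the attacker observes

        moduli = [q ** e for q, e in order_factors.items()]
        residues = [subgroup_dlog(g, public, q, e) for q, e in order_factors.items()]
        assert crt(residues, moduli) == secret
        print("recovered private exponent:", crt(residues, moduli))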

    Petri nets and regular processes

    We consider the following problems: (a) Given a labelled Petri net and a finite automaton, are they equivalent?; (b) Given a labelled Petri net, is it equivalent to some (unspecified) finite automaton? These questions are studied within the framework of trace and bisimulation equivalences, in both their strong and weak versions. (In the weak version a special τ action, likened to an ε-move in automata theory, is considered to be nonobservable.) We demonstrate that (a) is decidable for strong and weak trace equivalence and for strong bisimulation equivalence, but undecidable for weak bisimulation equivalence. On the other hand, we show that (b) is decidable for strong bisimulation equivalence, and undecidable for strong and weak trace equivalence, as well as for weak bisimulation equivalence.