
    A Touch of Evil: High-Assurance Cryptographic Hardware from Untrusted Components

    The semiconductor industry is fully globalized, and integrated circuits (ICs) are commonly defined, designed, and fabricated at different sites across the world. This reduces production costs but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few Byzantine components, this is not the case for cryptographic hardware, which stores and computes on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far, all attempts have either been quickly circumvented or come with unrealistically high manufacturing costs and complexity. This paper proposes Myst, a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees even in the presence of multiple malicious or faulty components. The key idea is to combine protective redundancy with modern threshold cryptographic techniques to build a system tolerant to hardware trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure crypto-coprocessors, verified to FIPS 140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public key decryption, and signing. Our experiments show a reasonable computational overhead (less than 1% for both decryption and signing) and an exponential increase in backdoor tolerance as more ICs are added.
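    To make the threshold idea concrete, here is a minimal sketch of the secret-sharing substrate, assuming nothing about Myst's actual protocols (which are distributed and never reconstruct the key on any single device): Shamir secret sharing over a prime field splits a key across n coprocessors so that any t of them can reconstruct it, while fewer than t learn nothing. The field modulus and parameters are illustrative only.

```python
# Minimal Shamir secret sharing: the substrate behind threshold-cryptographic
# designs like Myst. Illustrative only; Myst's protocols operate on shares
# without ever reconstructing the key on one device.
import secrets

P = 2**127 - 1  # illustrative prime field modulus

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(P)
shares = split(key, n=5, t=3)
assert reconstruct(shares[:3]) == key   # any 3 of 5 shares suffice
assert reconstruct(shares[2:]) == key   # ...any 3, not a fixed subset
```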

    Quantum attacks on Bitcoin, and how to protect against them

    The key cryptographic protocols used to secure the internet and financial transactions of today are all susceptible to attack by the development of a sufficiently large quantum computer. One particular area at risk is cryptocurrencies, a market currently worth over 150 billion USD. We investigate the risk posed to Bitcoin, and other cryptocurrencies, by attacks using quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which one would best meet the security and efficiency requirements of blockchain applications.
    Comment: 21 pages, 6 figures. For a rough update on the progress of quantum devices and prognostications on the time until digital signatures can be broken, see https://www.quantumcryptopocalypse.com/quantum-moores-law
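    The proof-of-work argument amounts to back-of-envelope arithmetic: Grover's algorithm needs only about $\sqrt{2^d}$ oracle calls against a $d$-bit target, but each call costs many logical cycles on a comparatively slow quantum clock. All numbers below are illustrative assumptions for the sketch, not figures from the paper.

```python
# Why Grover's quadratic speedup does not translate into a mining advantage
# for near-term devices. Every parameter here is an assumption for the sketch.
import math

difficulty_bits = 70        # hypothetical PoW difficulty (leading zero bits)
classical_rate = 1e20       # assumed aggregate ASIC hash rate, hashes/second
quantum_clock = 1e7         # assumed logical operations/second on a QC
cycles_per_oracle = 1e6     # assumed logical-gate cost of one hash oracle call

classical_time = 2**difficulty_bits / classical_rate
grover_calls = math.sqrt(2**difficulty_bits)          # ~2^(d/2) oracle calls
quantum_time = grover_calls * cycles_per_oracle / quantum_clock

print(f"classical: {classical_time:.2e} s, quantum: {quantum_time:.2e} s")
# With these assumptions the slow effective clock swamps the quadratic
# speedup, matching the paper's conclusion for the next decade of hardware.
```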

    Efficient and Scalable Network Security Protocols Based on LFSR Sequences

    The gap between abstract, mathematics-oriented research in cryptography and the engineering approach of designing practical network security protocols is widening. Network researchers experiment with well-known cryptographic protocols suitable for different network models, while researchers inclined toward theory often design cryptographic schemes without considering practical network constraints. The goal of this dissertation is to address problems in these two challenging areas and to build bridges between practical network security protocols and theoretical cryptography. This dissertation presents techniques for building performance-sensitive security protocols, using primitives from linear feedback shift register (LFSR) sequences (a minimal LFSR sketch follows below), for a variety of challenging networking applications. The significant contributions of this thesis are:
    1. A common problem faced by large-scale multicast applications, like real-time news feeds, is collecting authenticated feedback from the intended recipients. We design an efficient, scalable, and fault-tolerant technique for combining multiple signed acknowledgments into a single compact one, and observe that most signatures (based on the discrete logarithm problem) used in previous protocols do not yield a scalable solution to the problem.
    2. We propose a technique to authenticate on-demand source routing protocols in resource-constrained wireless mobile ad-hoc networks. We develop a single-round multisignature that requires no prior cooperation among nodes and supports authentication of cached routes.
    3. We propose an efficient and scalable aggregate signature, tailored for applications like building efficient certificate chains, authenticating distributed and adaptive content management systems, and securing path-vector routing protocols.
    4. We observe that blind signatures could form critical building blocks of privacy-preserving accountability systems, where an authority needs to vouch for the legitimacy of a message but the ownership of the message should be kept secret from the authority. We propose an efficient blind signature that can serve as a protocol building block for performance-sensitive accountability systems.
    All special-form digital signatures proposed in this dissertation (aggregate, multi-, and blind signatures) are the first to be constructed using LFSR sequences. Our detailed cost analysis shows that, for a desired level of security, the proposed signatures outperform existing protocols in computation cost, number of communication rounds, and storage overhead.
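    For readers unfamiliar with the primitive, here is a minimal Fibonacci LFSR over GF(2). The dissertation's signatures are built from LFSR sequences over larger fields; this sketch only shows the basic sequence generator.

```python
# Minimal Fibonacci LFSR over GF(2). The tap mask 0x2D encodes the primitive
# polynomial x^16 + x^14 + x^13 + x^11 + 1, giving the maximal period 2^16 - 1.
def lfsr(seed: int, taps: int, nbits: int):
    """Yield the output bit stream of an nbits-wide LFSR with tap mask `taps`."""
    state = seed
    while True:
        yield state & 1                             # output bit
        fb = bin(state & taps).count("1") & 1       # parity of tapped bits
        state = (state >> 1) | (fb << (nbits - 1))  # shift in the feedback bit

gen = lfsr(seed=0xACE1, taps=0x002D, nbits=16)
print([next(gen) for _ in range(16)])  # first 16 output bits
```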

    Improvements and New Constructions of Digital Signatures

    A digital signature scheme, often simply called a digital signature, is an important and by now indispensable building block in cryptography. It is the digital counterpart of the classical handwritten signature and provides further desirable properties beyond it. Such a scheme lets one generate a public key and a secret key. The secret key is used to create signatures for arbitrary messages; with the public key, anyone can check and thus verify these signatures. Furthermore, the scheme is required to be "secure". The literature offers many different notions and definitions of security, depending on the concrete setting or intended application. Put simply, an attacker without knowledge of the secret key should not be able to forge a valid signature for any message. A secure signature scheme can therefore be used to achieve the following goals:
    - Authenticity: every recipient can check whether the message comes from a specific sender.
    - Message integrity: every recipient can detect whether the message was modified in transit.
    - Non-repudiation: the sender cannot deny having created the signature.
    Digital signatures are thus very important for many practical applications. They are used wherever the authenticity and integrity of a message must be ensured, such as in electronic payments, software updates, or digital certificates on the internet. Digital signatures are also an indispensable tool in cryptographic theory; for example, they enable the construction of strongly secure encryption schemes. Our contribution: as mentioned above, there are different notions of security for digital signatures. This thesis examines a standard notion that captures a rather strong form of security. Constructing schemes that achieve this form of security is a multifaceted research topic, with different strategies in different models. In this thesis we therefore concentrate on the following points.
    - Starting from comparatively realistic assumptions, we construct a strongly secure signature scheme in the so-called standard model, the most realistic model for security proofs. Our scheme is the most efficient in its category to date: it produces very short signatures and uses short keys, both indispensable in practice.
    - We improve the quality of a security proof for a related building block, identity-based encryption. Among other things, this affects its efficiency with respect to the key lengths recommended for secure use in practice. Since any identity-based encryption scheme can be generically transformed into a digital signature scheme, this is also of interest in the context of digital signatures.
    - We consider variants of digital signatures with additional properties, so-called aggregate signature schemes. These allow several signatures to be efficiently combined into a single one while still verifying all of the associated, distinct messages. We give a new construction of such an aggregate signature scheme in which verification outputs a list of all correctly signed messages in an aggregate signature, instead of only "valid" or "invalid" as was previously common. When an aggregate signature is built from many individual signatures, this makes recomputing and possibly resending them unnecessary, reducing the overhead considerably (a toy illustration follows below).
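    The "output the list of correctly signed messages" idea can be illustrated with a toy group-testing construction: per-message tags are aggregated into overlapping buckets, a verifying bucket certifies all of its members, and a corrupted signature is pinned down as the message no verifying bucket covers. XORed HMAC tags stand in for real signatures here; this sketches the interface, not the thesis's actual public-key construction.

```python
# Toy group-testing illustration: an "aggregate" that still reveals which
# messages verified. HMAC tags XORed per bucket stand in for signatures.
import hmac, hashlib
from functools import reduce

KEY = b"demo-key"  # symmetric stand-in; a real scheme is public-key

def tag(msg: bytes) -> int:
    return int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest(), "big")

msgs = [b"m0", b"m1", b"m2", b"m3"]
tags = [tag(m) for m in msgs]
tags[2] ^= 1  # corrupt one "signature" in transit

# Leave-one-out buckets tolerate a single bad signature; cover-free families
# give better tradeoffs when more faults must be localized.
buckets = [[j for j in range(len(msgs)) if j != i] for i in range(len(msgs))]
agg = [reduce(lambda a, b: a ^ b, (tags[j] for j in bkt)) for bkt in buckets]

def bucket_ok(i: int) -> bool:
    return agg[i] == reduce(lambda a, b: a ^ b,
                            (tag(msgs[j]) for j in buckets[i]))

valid = [msgs[i] for i in range(len(msgs))
         if any(bucket_ok(b) for b in range(len(buckets)) if i in buckets[b])]
print(valid)  # [b'm0', b'm1', b'm3'] -- m2's bad signature is localized
```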

    Computing 256-bit Elliptic Curve Logarithm in 9 Hours with 126133 Cat Qubits

    Cat qubits provide appealing building blocks for quantum computing. They exhibit a tunable noise bias yielding an exponential suppression of bit-flips with the average photon number, and protection against the remaining phase errors can be ensured by a simple repetition code. We here quantify the cost of a repetition code and provide valuable guidance for the choice of a large-scale architecture using cat qubits by carrying out a performance analysis based on the computation of discrete logarithms on an elliptic curve with Shor's algorithm. Focusing on a 2D grid of cat qubits with nearest-neighbor connectivity, we propose to implement two-qubit gates via lattice surgery, and Toffoli gates with off-line fault-tolerant preparation of magic states through projective measurements and subsequent gate teleportations. All-to-all connectivity between logical qubits is ensured by routing qubits. Assuming a ratio between single-photon and two-photon losses of $10^{-5}$ and a cycle time of 500 nanoseconds, we show concretely that such an architecture can compute a 256-bit elliptic curve logarithm in 9 hours with 126133 cat qubits. We give the details of the realization of Shor's algorithm so that the proposed performance analysis can easily be reused to guide the choice of architecture for other platforms.
    Comment: 4+34 pages, 32 figures, 5 tables
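    As a quick sanity check on the quoted figures, the stated cycle time and runtime pin down the total number of error-correction cycles; the per-cycle logical layout is the paper's and is not reproduced here.

```python
# Arithmetic on the numbers quoted in the abstract.
cycle_time_s = 500e-9      # 500 nanoseconds per cycle, from the abstract
runtime_s = 9 * 3600       # 9 hours, from the abstract
cat_qubits = 126_133       # physical cat qubits, from the abstract

cycles = runtime_s / cycle_time_s
print(f"{cycles:.3e} cycles across {cat_qubits} cat qubits")  # 6.480e+10 cycles
```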

    Quantum resource estimates for computing elliptic curve discrete logarithms

    We give precise quantum resource estimates for Shor's algorithm to compute discrete logarithms on elliptic curves over prime fields. The estimates are derived from a simulation of a Toffoli gate network for controlled elliptic curve point addition, implemented within the framework of the quantum computing software tool suite LIQ$Ui|\rangle$. We determine circuit implementations for reversible modular arithmetic, including modular addition, multiplication, and inversion, as well as reversible elliptic curve point addition. We conclude that elliptic curve discrete logarithms on an elliptic curve defined over an $n$-bit prime field can be computed on a quantum computer with at most $9n + 2\lceil\log_2(n)\rceil + 10$ qubits using a quantum circuit of at most $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates. We are able to classically simulate the Toffoli networks corresponding to the controlled elliptic curve point addition, the core piece of Shor's algorithm, for the NIST standard curves P-192, P-224, P-256, P-384, and P-521. Our approach allows gate-level comparisons to recent resource estimates for Shor's factoring algorithm. The results also support estimates given earlier by Proos and Zalka and indicate that, for current parameters at comparable classical security levels, the number of qubits required to tackle elliptic curves is smaller than for attacking RSA, suggesting that ECC is indeed an easier target than RSA.
    Comment: 24 pages, 2 tables, 11 figures. v2: typos fixed and reference added. ASIACRYPT 2017
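    The two closed-form bounds can be evaluated directly for the NIST curves named in the abstract; for example, P-256 comes out at 2330 qubits and roughly $1.29 \times 10^{11}$ Toffoli gates.

```python
# Evaluating the abstract's resource bounds for the NIST prime-field curves:
# qubits(n) = 9n + 2*ceil(log2 n) + 10, toffoli(n) = 448 n^3 log2(n) + 4090 n^3.
import math

for n in (192, 224, 256, 384, 521):
    qubits = 9 * n + 2 * math.ceil(math.log2(n)) + 10
    toffoli = 448 * n**3 * math.log2(n) + 4090 * n**3
    print(f"P-{n}: {qubits:5d} qubits, {toffoli:.2e} Toffoli gates")
```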

    Fault attacks and countermeasures for elliptic curve cryptosystems

    In this thesis we develop new algorithmic countermeasures that protect elliptic curve computation against fault attacks by protecting the underlying arithmetic in the finite binary extension field. First, we propose schemes for fault-tolerant computation in finite fields for use in elliptic curve cryptosystems (ECCs): one based on the Chinese Remainder Theorem and one based on Lagrange interpolation. Our approach builds on error-correcting codes, namely redundant residue polynomial codes and the original construction of Reed-Solomon codes. Computation on field elements is decomposed into parallel, mutually independent, identical modular channels, so that faults in one channel do not propagate to the others. Based on these schemes, we develop new algorithms, namely a fault-tolerant residue representation modular multiplication algorithm and a fault-tolerant Lagrange representation modular multiplication algorithm, which are immune to error propagation under the fault models we propose: the Random Fault Model, the Arbitrary Fault Model, and the Single Bit Fault Model. These algorithms provide fault-tolerant computation in $GF(2^k)$ for use in ECCs. Because inputs, i.e., field elements, are represented in redundant residue or redundant Lagrange representation, our algorithms can cope with the corruption of one or both coordinates $x, y \in GF(2^k)$ of a point $P \in E/GF(2^k)$ during computation. We assume that during each run of an attacked algorithm, in one single attack, an adversary can apply any of the proposed fault models, so that several channels can be targeted, i.e., different fault models can be used on different channels. Our proposed algorithms may still admit masked errors and are not immune to attacks that can create such errors; countering masked errors is a hard problem, since any anti-fault-attack scheme will have some masked errors. Moreover, we derive the conditions an inflicted error must satisfy in order to yield an undetectable faulty point on a non-supersingular elliptic curve over $GF(2^k)$. Our algorithmic countermeasures can be applied to any public key cryptosystem that performs computation over the finite field $GF(2^k)$. The toy example below illustrates the redundant-residue detection principle.
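    A toy integer analogue shows the detection principle (the thesis works with redundant residue *polynomial* codes over $GF(2^k)$, but the consistency check is the same): the value travels in independent residue channels, and redundant moduli expose a fault in any single channel.

```python
# Redundant residue arithmetic over the integers as a stand-in for the
# thesis's redundant residue polynomial codes. Channels compute independently;
# a fault in one channel makes CRT reconstruction land outside the legal range.
from math import prod

moduli = [7, 11, 13, 17, 19]   # pairwise coprime; the last two are redundant
info_range = prod(moduli[:3])  # legal values live below 7 * 11 * 13 = 1001

def encode(x: int):
    return [x % m for m in moduli]

def crt(residues):
    """Chinese Remainder Theorem reconstruction over all channels."""
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
    return x % M

def check(residues):
    x = crt(residues)
    return x < info_range, x   # a consistent codeword decodes in range

a, b = encode(12), encode(34)
product = [(ra * rb) % m for ra, rb, m in zip(a, b, moduli)]  # channel-wise
ok, val = check(product)
assert ok and val == 12 * 34

product[1] ^= 1                # inject a single-channel fault
ok, _ = check(product)
assert not ok                  # the redundant channels expose the fault
```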