
    Safe-Errors on SPA Protected implementations with the Atomicity Technique

    ECDSA is one of the most important public-key signature schemes; however, it is vulnerable to lattice attacks once a few bits of the nonces are leaked. To protect Elliptic Curve Cryptography (ECC) against Simple Power Analysis (SPA), many countermeasures have been proposed. Doubling and addition of points on a given elliptic curve require several additions and multiplications in the base field, and the number of these operations differs between the two. The idea of the atomicity protection is to use a fixed pattern, i.e. a small sequence of instructions, and to rewrite the two basic operations of ECC using this pattern. Dummy operations are introduced so that the different elliptic curve operations can be written with the same atomic pattern. From an adversary's point of view, the attacker only sees a succession of patterns and can no longer distinguish which corresponds to an addition and which to a doubling. Chevallier-Mames, Ciet and Joye were the first to introduce such a countermeasure. In this paper, we study this countermeasure and show a new vulnerability: the ECDSA implementation now succumbs to C safe-error attacks. We then propose an effective solution to protect against C safe-error attacks when using side-channel atomicity: the dummy operations are used in such a way that a fault injected into one of them can be detected. Finally, our countermeasure is generic, meaning that it can be adapted to all formulae. We apply our methods to different formulae proposed for side-channel atomicity.
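
    As a minimal illustration (a toy sketch on machine integers, not the actual Chevallier-Mames, Ciet and Joye formulae; the names atomic_pattern and scratch are made up for this example), both curve operations can be driven through one fixed pattern of field operations, with slots that a given operation does not need writing to a scratch variable. A C safe-error attack faults one slot: if the final result is still correct, that slot was a dummy, revealing whether the pattern belonged to a doubling or an addition.

        #include <stdint.h>

        static uint64_t scratch;            /* sink for dummy operations */

        /* One atomic pattern: a multiplication slot then an addition slot.
         * Point doubling and point addition are both expressed as runs of
         * this pattern, so a power trace shows only identical blocks. */
        static void atomic_pattern(uint64_t *mul_dst, uint64_t ma, uint64_t mb,
                                   uint64_t *add_dst, uint64_t aa)
        {
            *mul_dst = ma * mb;             /* useful in every pattern */
            *add_dst += aa;                 /* dummy when add_dst == &scratch:
                                               a fault here leaves the final
                                               result correct -- the C
                                               safe-error leak */
        }

    The fix proposed in the abstract amounts to folding the dummy slots into a value that is checked at the end of the computation, so a fault injected into a dummy operation no longer goes unnoticed.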

    Efficient and secure ECDSA algorithm and its applications: a survey

    Public-key cryptography algorithms, especially elliptic curve cryptography (ECC) and the elliptic curve digital signature algorithm (ECDSA), have been attracting attention from many researchers in different institutions because these algorithms provide security and high performance when used in many areas such as electronic healthcare, electronic banking, electronic commerce, electronic vehicular systems, and electronic governance. These algorithms heighten security against various attacks and at the same time improve performance to obtain efficiencies (time, memory, reduced computational complexity, and energy saving) in environments with constrained resources and in large systems. This paper presents a detailed and comprehensive survey of updates to the ECDSA algorithm in terms of performance, security, and applications.
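
    For reference, the signing operation that such surveys benchmark is compact enough to state directly (standard notation: private key d, public key Q = dG, base point G of order n, fresh per-signature nonce k, message hash e = H(m)):

        r = (kG)_x \bmod n, \qquad
        s = k^{-1}\,(e + r d) \bmod n, \qquad
        \text{signature} = (r, s)

    Verification computes u1 = e·s^{-1} mod n and u2 = r·s^{-1} mod n and accepts iff the x-coordinate of u1·G + u2·Q equals r mod n. The performance work surveyed targets the scalar multiplications kG and u1·G + u2·Q; the security work largely targets the nonce k, whose partial leakage enables the lattice attacks mentioned in the first abstract above.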

    Integrity Checking For Process Hardening

    Computer intrusions can occur in various ways. Many of them occur by exploiting program flaws and system configuration errors. Existing solutions that detect specific kinds of flaws differ substantially from each other, so using them together may be incompatible and may require substantial changes to current systems and computing practice. Intrusion detection systems may not be the answer either, because they are inherently inaccurate and susceptible to false positives and negatives. This dissertation presents a taxonomy of security flaws that classifies program vulnerabilities into a finite number of error categories, and presents a security mechanism that can produce accurate solutions for many of these error categories in a modular fashion. To be accurate, a solution should closely match the characteristics of the target error category. To ensure this, we focus only on error categories whose characteristics can be defined in terms of a violation of process integrity. The thesis of this work is that the proposed approach produces accurate solutions for many error categories. To prove the accuracy of the produced solutions, we define the process integrity checking approach and analyze its properties. To show that this approach can cover many error categories, we develop a classification of program security flaws and derive error characteristics (in terms of process integrity) for many of these categories.
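
    To make "a violation of process integrity" concrete, here is an illustrative sketch (not the dissertation's mechanism; it assumes GCC or Clang for __builtin_return_address) of one such property -- "a function returns to the address it was called from" -- checked directly in code:

        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative integrity check: record the return address on
         * entry and verify it on exit.  A stack-smashing flaw that
         * overwrites the return address violates this property. */
        static void *saved_ret;

        void handle_input(const char *input)
        {
            saved_ret = __builtin_return_address(0);  /* GCC/Clang builtin */
            char buf[16];
            snprintf(buf, sizeof buf, "%s", input);   /* bounded copy */
            if (__builtin_return_address(0) != saved_ret) {
                fprintf(stderr, "integrity violation: return address changed\n");
                abort();
            }
        }

    A detector built this way matches its error category exactly, which is the accuracy argument the dissertation makes for integrity-based solutions.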

    Dynamic Runtime Methods to Enhance Private Key Blinding

    In this paper we propose new methods to blind exponents used in RSA and in elliptic-curve-based algorithms. Because of classical differential power analysis (DPA and CPA), many countermeasures to protect exponents have been proposed, starting in 1999 with Kocher [20] and Coron [13]. However, these blinding methods have drawbacks in execution time and memory cost, and they also have weaknesses: they can be targeted by attacks such as the carry leakage on the randomized exponent proposed by P.A. Fouque et al. in [23], or be ineffective against other analyses such as Simple Power Analysis. In this article, we explain how the most widely used method can be exploited when an attacker has access to test samples. We then propose new dynamic blinding methods that prevent any learning phase and improve resistance against the latest published side-channel analyses.
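
    The classical static blinding these methods start from fits in a few lines: for RSA, exponentiate with d' = d + r*phi(N) for a fresh random r, which yields the same signature because m^phi(N) = 1 mod N. A toy C sketch with small integers (a real implementation uses a big-number library and a cryptographic RNG; blinded_sign and modpow are names made up here):

        #include <stdint.h>
        #include <stdlib.h>

        /* Square-and-multiply modular exponentiation (toy, small moduli
         * only: the products would overflow 64 bits at real key sizes). */
        static uint64_t modpow(uint64_t b, uint64_t e, uint64_t n)
        {
            uint64_t r = 1;
            for (b %= n; e; e >>= 1) {
                if (e & 1) r = (r * b) % n;
                b = (b * b) % n;
            }
            return r;
        }

        /* Classical exponent blinding: each call exponentiates with a
         * different exponent d + r*phi, so a DPA attacker cannot average
         * traces over a fixed exponent.  The attacks cited above target
         * how r is drawn and recombined, motivating dynamic methods. */
        uint64_t blinded_sign(uint64_t msg, uint64_t d, uint64_t phi, uint64_t n)
        {
            uint64_t r = (uint64_t)rand() % 16 + 1;   /* toy randomness */
            return modpow(msg, d + r * phi, n);       /* == modpow(msg, d, n) */
        }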

    Architecting Persistent Memory Systems

    The imminent release of 3D XPoint memory by Intel and Micron looks set to end the long wait for affordable persistent memory. Persistent memories combine the persistence of disk with DRAM-like performance, blurring the traditional divide between byte-addressable, volatile main memory and block-addressable, persistent storage (e.g., SSDs). One of the most disruptive potential use cases for persistent memories is to host in-memory recoverable data structures. These recoverable data structures may be directly modified by programmers using user-level processor load and store instructions, rather than relying on performance-sapping software intermediaries like the operating system and file system. Ensuring the recoverability of these data structures requires programmers to have the ability to control the order of updates to persistent memory. Current systems provide few, if any, efficient mechanisms to enforce the order in which store instructions update the physical main memory. Recently proposed memory persistency models allow programmers to specify constraints on the order in which stores can be written back to main memory. While ordering constraints are necessary for recoverability, they are expensive to enforce due to the high write latencies exhibited by popular persistent memory technologies. Moreover, reasoning about recovery correctness using memory persistency models, in addition to ensuring the necessary concurrency control in multi-threaded programs, drastically increases the programming burden. This thesis aims at increasing the adoption of persistent memories through a) improving the performance of recoverable data structures and b) simplifying persistent memory programming. Software transaction abstractions developed using recently proposed memory persistency models are expected to be widely used by regular programmers to exploit the advantages of persistent memory. This thesis shows that a straightforward implementation of transactions imposes many unnecessary constraints on stores to persistent memory. It also shows how to reduce these constraints through a variety of techniques, notably deferring transaction commit until after locks are released, resulting in substantial performance improvements. Next, this thesis shows the high cost of enforcing ordering constraints using recent x86 ISA extensions for persistent memory programming, an ordering model referred to as synchronous ordering. Synchronous ordering tightly couples enforcing order with writing back stores to main memory, but this tight coupling is often unnecessary to ensure recoverability. Instead, this thesis proposes delegated persist ordering, wherein ordering requirements are communicated explicitly to the persistent memory controller via novel enhancements to the cache hierarchy. Delegated persist ordering decouples store ordering from processor execution and cache management, significantly reducing processor stalls and, hence, the cost of enforcing constraints. Finally, existing memory persistency models have all been specified to be used in conjunction with ISA-level memory models. That is, programmers must reason about recovery correctness at the abstraction of assembly instructions, an approach which is error-prone and places an unreasonable burden on the programmer.
    This thesis argues for a language-level persistency model that provides mechanisms to specify the semantics of accesses to persistent memory as an integral part of the programming language, and proposes a concrete model, acquire-release persistency, that extends C++11's memory model to provide persistency semantics.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136953/1/akolli_1.pd
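
    To make the cost of synchronous ordering concrete, here is a hedged sketch of the kind of undo-log update the thesis analyzes, assuming an x86 machine with the CLWB instruction (the log layout and function names are illustrative). Every ordering point pays for a cache-line write-back plus a fence that stalls execution -- exactly the coupling that delegated persist ordering removes:

        #include <immintrin.h>   /* _mm_clwb, _mm_sfence; compile with -mclwb */
        #include <stdint.h>

        typedef struct { uint64_t addr, old, valid; } log_entry_t;

        static void persist(const void *p)
        {
            _mm_clwb((void *)p);   /* write the cache line back */
            _mm_sfence();          /* order it before later stores */
        }

        /* One logged store needs four ordering points. */
        void logged_store(log_entry_t *log, uint64_t *field, uint64_t new_val)
        {
            log->addr = (uint64_t)field;   /* 1. record undo information  */
            log->old  = *field;
            persist(log);
            log->valid = 1;                /* 2. then mark it valid       */
            persist(&log->valid);
            *field = new_val;              /* 3. only now update the data */
            persist(field);
            log->valid = 0;                /* 4. finally retire the entry */
            persist(&log->valid);
        }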

    HardBound: Architectural Support for Spatial Safety of the C Programming Language

    The C programming language is at least as well known for its absence of spatial memory safety guarantees (i.e., lack of bounds checking) as it is for its high performance. C's unchecked pointer arithmetic and array indexing allow simple programming mistakes to lead to erroneous executions, silent data corruption, and security vulnerabilities. Many prior proposals have tackled enforcing spatial safety in C programs by checking pointer and array accesses. However, existing software-only proposals have significant drawbacks that may prevent wide adoption, including: unacceptably high runtime overheads, lack of completeness, incompatible pointer representations, or need for non-trivial changes to existing C source code and compiler infrastructure. Inspired by the promise of these software-only approaches, this paper proposes a hardware bounded pointer architectural primitive that supports cooperative hardware/software enforcement of spatial memory safety for C programs. This bounded pointer is a new hardware primitive datatype for pointers that leaves the standard C pointer representation intact, but augments it with bounds information maintained separately and invisibly by the hardware. The bounds are initialized by the software, and they are then propagated and enforced transparently by the hardware, which automatically checks a pointer's bounds before it is dereferenced. One mode of use requires instrumenting only malloc, which enables enforcement of per-allocation spatial safety for heap-allocated objects for existing binaries. When combined with simple intra-procedural compiler instrumentation, hardware bounded pointers enable a low-overhead approach for enforcing complete spatial memory safety in unmodified C programs.
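
    A software analogue makes the primitive easy to picture (a minimal sketch: the hardware proposal keeps base and bound invisible to software and checks them in the pipeline, whereas bptr_t, bmalloc and bload here are explicit, illustrative stand-ins):

        #include <stdio.h>
        #include <stdlib.h>

        /* Explicit model of a bounded pointer: the pointer keeps its
         * standard representation; base/bound ride alongside it. */
        typedef struct { char *ptr; char *base; char *bound; } bptr_t;

        /* The "instrument only malloc" mode: bounds are initialized
         * once per allocation. */
        bptr_t bmalloc(size_t n)
        {
            char *p = malloc(n);
            return (bptr_t){ p, p, p + n };
        }

        /* Checked dereference, done transparently by the hardware. */
        char bload(bptr_t b, size_t i)
        {
            if (b.ptr + i < b.base || b.ptr + i >= b.bound) {
                fprintf(stderr, "spatial safety violation\n");
                abort();
            }
            return b.ptr[i];
        }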

    Sécurité physique de la cryptographie sur courbes elliptiques (Physical Security of Elliptic Curve Cryptography)

    Elliptic Curve Cryptography (ECC) has gained much importance in smart cards because of its higher speed and lower memory needs compared with other asymmetric cryptosystems such as RSA. ECC is believed to be unbreakable in the black-box model, where the cryptanalyst has access to inputs and outputs only. However, this is not enough if the cryptosystem is embedded in a device that is physically accessible to potential attackers. In addition to inputs and outputs, the attacker can study the physical behaviour of the device. This new kind of cryptanalysis is called physical cryptanalysis. This thesis focuses on the physical cryptanalysis of ECC. The first part gives the background on ECC. From the lowest to the highest level, ECC involves a hierarchy of tools: finite field arithmetic, elliptic curve arithmetic, elliptic curve scalar multiplication, and cryptographic protocols. The second part presents a state of the art of the different physical attacks and countermeasures on ECC. For each attack, we give the context in which it is applicable; for each countermeasure, we estimate its time and memory cost. We propose new attacks and new countermeasures. We then give a clear synthesis of the attacks depending on the context, which is useful when selecting countermeasures. Finally, we give a clear synthesis of the efficiency of each countermeasure against the attacks.
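
    The scalar-multiplication layer is where many of the surveyed attacks bite. A toy double-and-add loop (integers stand in for curve points, so "double" is *2 and "add" is +, and scalar_mul(g, k) simply computes k*g; the control structure, not the arithmetic, is the point) shows the classic SPA leak that countermeasures such as side-channel atomicity or the Montgomery ladder suppress:

        #include <stdint.h>

        static uint64_t point_double(uint64_t p)          { return 2 * p; }
        static uint64_t point_add(uint64_t p, uint64_t q) { return p + q; }

        uint64_t scalar_mul(uint64_t g, uint64_t k)
        {
            uint64_t acc = 0;                    /* "point at infinity"   */
            for (int i = 63; i >= 0; i--) {
                acc = point_double(acc);         /* every iteration       */
                if ((k >> i) & 1)                /* key-dependent branch: */
                    acc = point_add(acc, g);     /* an SPA trace reads k  */
            }
            return acc;
        }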

    Characterizing Implementations that Preserve Properties of Concurrent Randomized Algorithms

    We show that correctness criteria of concurrent algorithms are mathematically equivalent to the existence of so-called simulations between implementations of the algorithms in a well-known framework (that of input/output automata) and simple canonical automata. This equivalence allows us to frame our proofs of correctness in a language much more amenable to machine-checking than conventional proofs. We give the first demonstration that, when strongly linearizable implementations of randomized concurrent algorithms are used, the distributions of a well-defined class of random variables are preserved under object substitution by non-concurrent implementations of the same algorithms. We also consider weaker conditions than strong linearizability under which implementations remain correct in the presence of randomization.
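
    For context, strong linearizability (in the sense of Golab, Higham and Woelfel) strengthens linearizability by requiring the linearization function L to be prefix-preserving, so a strong adversary cannot retroactively reorder pending operations after observing coin flips. Writing \preceq for the prefix order on histories:

        h \preceq h' \;\Longrightarrow\; L(h) \preceq L(h') \qquad \text{for all histories } h, h'

    Plain linearizability only requires each complete history to have some valid linearization; this prefix condition is what makes the distribution-preservation result above possible.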