
    Side-channel Attacks on Blinded Scalar Multiplications Revisited

    In a series of recent articles (from 2011 to 2017), Schindler et al. show that exponent/scalar blinding is not as effective a countermeasure as expected against side-channel attacks targeting RSA modular exponentiation and ECC scalar multiplication. Precisely, these works demonstrate that if an attacker is able to retrieve many randomizations of the same secret, this secret can be fully recovered even when a significant proportion of the blinded secret bits are erroneous. With a focus on ECC, this paper improves the best results of Schindler et al. in the specific case of structured-order elliptic curves. Our results show that larger blinding material and higher error rates can be successfully handled by an attacker in practice. This study also opens new directions in this line of work by proposing a three-step attack process that isolates the attack's critical path (in terms of complexity and success rate) and hence eases the development of future solutions.
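    To make the targeted countermeasure concrete, the following is a minimal sketch of scalar blinding; the function and parameter names are illustrative, not taken from the paper.

```python
# Minimal sketch of scalar blinding, assuming a group of order n and a
# secret scalar k (names are illustrative, not the paper's notation).
import secrets

def blind_scalar(k: int, n: int, blind_bits: int = 32) -> int:
    """Return k' = k + r*n for a fresh random r of blind_bits bits.
    Since n*P is the identity, k'*P = k*P, but the bit pattern the
    scalar multiplication processes changes on every execution --
    exactly the randomization the attacks discussed above must defeat."""
    r = secrets.randbits(blind_bits)
    return k + r * n
```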

    SNR-Centric Power Trace Extractors for Side-Channel Attacks

    Existing power trace extractors assume that the number of power traces available to the attacker is sufficient to guarantee successful attacks, so the goal of power trace extraction is to lower complexity rather than to increase success rates. Although backed by strict theoretical proofs, these extractors are too simple, and the leakage characteristics of points of interest (POIs) have not been thoroughly analyzed. They only maximize the variance of the data-dependent power consumption component and ignore the noise component, which leaves very limited room for SNR improvement and seriously degrades extractor performance. In this paper, we provide a rigorous theoretical analysis of the SNR of power traces and propose a novel SNR-centric extractor, named Shortest Distance First (SDF), which extracts the power traces with the smallest estimated noise by taking advantage of known plaintexts. In addition, to maximize the variance of the exploitable component while minimizing the noise, we draw on the SNR estimation model and propose another novel extractor named Maximizing Estimated SNR First (MESF). Finally, we propose an advanced extractor called Mean-optimized MESF (MMESF) that exploits the mean power consumption for each plaintext byte value to estimate the data-dependent power consumption of the corresponding samples more accurately. Experiments on both simulated power traces and measurements from an ATmega328p micro-controller demonstrate the superiority of our new extractors.
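    For reference, the classical per-sample SNR that such extractors target can be estimated as Var_x(E[L|x]) / E_x(Var[L|x]) from traces partitioned by plaintext byte value. A minimal numpy sketch, with assumed array shapes, follows; the paper's own estimators are more refined.

```python
import numpy as np

def snr_per_sample(traces: np.ndarray, pt_byte: np.ndarray) -> np.ndarray:
    """Classical SNR estimate Var_x(E[L|x]) / E_x(Var[L|x]) per sample.
    traces:  (n_traces, n_samples) power measurements.
    pt_byte: (n_traces,) value of the targeted plaintext byte."""
    groups = [traces[pt_byte == v] for v in range(256)]
    groups = [g for g in groups if len(g) > 1]
    means = np.array([g.mean(axis=0) for g in groups])  # signal per class
    noise = np.array([g.var(axis=0) for g in groups])   # noise per class
    return means.var(axis=0) / noise.mean(axis=0)
```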

    Compressive Sensing based Leakage Sampling and Reconstruction: A First Study

    An important prerequisite for a Side-Channel Attack (SCA) is leakage sampling, where side-channel measurements (e.g. power traces) of the cryptographic device are collected for further analysis. However, as the operating frequency of cryptographic devices continues to increase with advancing technology, leakage sampling imposes ever higher requirements on the sampling equipment. This paper undertakes the first study to show that effective leakage sampling can be achieved without relying on sophisticated equipment through Compressive Sensing (CS). In particular, CS can obtain low-dimensional samples from high-dimensional power traces by simply projecting the useful information onto an observation matrix. The leakage information can then be reconstructed on a workstation for further analysis. With this approach, the sampling rate needed to obtain the side-channel measurements is no longer limited by the operating frequency of the cryptographic device and the Nyquist sampling theorem; instead, it depends on the sparsity of the leakage signal. Our study reveals that there is a large amount of information redundancy in power traces obtained from the leaky device. As such, CS can employ a much lower sampling rate and yet obtain equivalent leakage sampling performance, which significantly lowers the requirements on sampling equipment. The feasibility of our approach is verified both theoretically and through experiments.
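    The CS acquisition model is y = Φx: an M-row observation matrix Φ compresses an N-sample sparse signal x into M << N measurements, from which x is recovered by sparse reconstruction. The toy sketch below uses a random Gaussian Φ and a greedy Orthogonal Matching Pursuit recovery; the paper's actual matrices and reconstruction algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 1024, 128, 10                 # trace length, measurements, sparsity
x = np.zeros(N)                         # K-sparse leakage signal (toy model)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # observation matrix
y = Phi @ x                             # M compressed samples instead of N

# Greedy reconstruction via Orthogonal Matching Pursuit.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef
x_hat = np.zeros(N)
x_hat[support] = coef
print("reconstruction error:", np.linalg.norm(x - x_hat))
```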

    Smart Card Research and Advanced Applications: 15th International Conference, CARDIS 2016, Cannes, France, November 7–9, 2016, Revised Selected Papers

    This book constitutes the thoroughly refereed post-conference proceedings of the 15th International Conference on Smart Card Research and Advanced Applications, CARDIS 2016, held in Cannes, France, in November 2016. The 15 revised full papers presented in this book were carefully reviewed and selected from 29 submissions. The focus of the conference was on all aspects of the design, development, deployment, validation, and application of smart cards or smart personal devices.

    Linear Repairing Codes and Side-Channel Attacks

    To strengthen the resistance of countermeasures based on secret sharing, several works have suggested using the scheme introduced by Shamir in 1978, which shares sensitive data through the evaluation of a random degree-d polynomial at n ≥ d+1 public points. Applying the same principles used against classical Boolean sharing, all these works have assumed that the most efficient attack strategy was to exploit the minimum number of shares required to rebuild the sensitive value, which is d+1 if the reconstruction is made with Lagrange interpolation. In this paper, we first highlight an important difference between Boolean and Shamir's sharings which implies that, for some signal-to-noise ratios, it is more advantageous for the adversary to observe strictly more than d+1 shares. We argue that this difference is related to the existence of so-called exact linear repairing codes, which come with reconstruction formulae that need (much) less information (counted in bits) than Lagrange interpolation. In particular, this result implies that, contrary to what was believed, the choice of the public points in Shamir's sharing has an impact on the countermeasure's strength. As another contribution, we exhibit a positive impact of the existence of linear exact repairing schemes: we propose to use them to improve the state-of-the-art multiplication algorithms dedicated to Shamir's sharing. We argue that the improvement can be effective when the multiplication operation in the base field is at least two times smaller than in its sub-fields.
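    For readers unfamiliar with the scheme, here is a minimal sketch of Shamir's sharing with Lagrange reconstruction over a small prime field; the prime and parameter names are illustrative only (real masked implementations work in binary fields).

```python
import secrets

P = 2**13 - 1  # small Mersenne prime, for illustration only

def share(secret: int, d: int, points: list[int]) -> list[tuple[int, int]]:
    """Evaluate a random degree-d polynomial with constant term `secret`
    at the given public points; any d+1 shares suffice to reconstruct."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(d)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in points]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at 0 from d+1 shares."""
    s = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        s = (s + yj * num * pow(den, P - 2, P)) % P
    return s

assert reconstruct(share(42, 2, [1, 2, 3])) == 42
```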

    Study of Deep Learning Techniques for Side-Channel Analysis and Introduction to ASCAD Database

    To provide assurance of a system's resistance against side-channel analysis, several national and private schemes today promote an evaluation strategy, common in classical cryptography, which focuses on the most powerful adversary, who may train to learn the dependency between the device's behaviour and the sensitive data values. Several works have shown that this kind of analysis, known as Template Attacks in the side-channel domain, can be rephrased as a classical Machine Learning classification problem with a learning phase. Following the current trend in the latter area, recent works have demonstrated that deep learning algorithms are very efficient for conducting security evaluations of embedded systems and have many advantages compared to other methods. Unfortunately, their hyper-parametrization has often been kept secret by the authors, who only discussed the main design principles and the attack efficiencies. This is clearly an important limitation of previous works since (1) such parametrization is known to be a challenging question in Machine Learning and (2) it does not allow for the reproducibility of the presented results. This paper aims to address these limitations in several ways. First, completing recent works, we propose a comprehensive study of deep learning algorithms when applied in the context of side-channel analysis, and we clarify the links with classical template attacks. Secondly, we address the question of the choice of hyper-parameters for the class of multi-layer perceptron networks and convolutional neural networks. Several benchmarks and rationales are given in the context of the analysis of a masked implementation of the AES algorithm. To enable perfect reproducibility of our tests, this work also introduces an open platform including all the sources of the target implementation together with the campaign of electro-magnetic measurements exploited in our benchmarks. This open database, named ASCAD, has been specified to serve as a common basis for further works on this subject. Our work confirms the conclusions made by Cagli et al. at CHES 2017 about the high potential of convolutional neural networks. Interestingly, it shows that the approach followed to design the VGG-16 architecture used for image recognition also seems sound when it comes to fixing an architecture for side-channel analysis.
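    The reduction of profiled side-channel analysis to classification can be sketched as follows: a model is trained to output Pr[intermediate | trace], and key candidates are then ranked by accumulated log-likelihood over the attack traces. The sketch below uses a small scikit-learn MLP with illustrative layer sizes; it is not the ASCAD reference architecture, and it assumes all 256 label values occur in the profiling set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def profile(train_traces: np.ndarray, train_labels: np.ndarray):
    """Profiling phase: learn Pr[Sbox(p ^ k) = y | trace] with a small
    MLP (layer sizes are illustrative, not the paper's architecture)."""
    clf = MLPClassifier(hidden_layer_sizes=(200, 200), max_iter=200)
    clf.fit(train_traces, train_labels)
    return clf

def rank_key_candidates(clf, attack_traces, plaintexts, sbox):
    """Attack phase: sum per-trace log-likelihoods for each key byte,
    assuming every label value was seen during profiling."""
    log_p = np.log(clf.predict_proba(attack_traces) + 1e-40)
    col = {c: i for i, c in enumerate(clf.classes_)}
    scores = np.array([
        sum(log_p[i, col[sbox[p ^ k]]] for i, p in enumerate(plaintexts))
        for k in range(256)
    ])
    return np.argsort(scores)[::-1]  # most likely key byte first
```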

    Efficient and Secure Implementations of Lightweight Symmetric Cryptographic Primitives

    This thesis is devoted to efficient and secure implementations of lightweight symmetric cryptographic primitives for resource-constrained devices such as wireless sensors and actuators that are typically deployed in remote locations. In this setting, cryptographic algorithms must consume few computational resources and withstand a large variety of attacks, including side-channel attacks. The first part of this thesis is concerned with efficient software implementations of lightweight symmetric algorithms on 8-, 16-, and 32-bit microcontrollers. A first contribution of this part is the development of FELICS, an open-source benchmarking framework that facilitates the extraction of comparative performance figures from implementations of lightweight ciphers. Using FELICS, we conducted a fair evaluation of the implementation properties of 19 lightweight block ciphers in the context of two different usage scenarios, which are representative of common security services in the Internet of Things (IoT). This study gives new insights into the link between the structure of a cryptographic algorithm and the performance it can achieve on embedded microcontrollers. Then, we present the SPARX family of lightweight ciphers and describe the impact of software efficiency in the process of shaping three instances of the family. Finally, we evaluate the cost of the main building blocks of symmetric algorithms to determine which are the most efficient ones. The contributions of this part are particularly valuable for designers of lightweight ciphers, software and security engineers, as well as standardization organizations. In the second part of this work, we focus on side-channel attacks that exploit the power consumption or the electromagnetic emanations of embedded devices executing unprotected implementations of lightweight algorithms. First, we evaluate different selection functions in the context of Correlation Power Analysis (CPA) to infer which operations are easy to attack; a minimal CPA sketch follows this abstract. Second, we show that most implementations of the AES present in popular open-source cryptographic libraries are vulnerable to side-channel attacks such as CPA, even in a network protocol scenario where the attacker has limited control of the input. Moreover, we describe an optimal algorithm for recovery of the master key using CPA attacks. Third, we perform the first electromagnetic vulnerability analysis of Thread, a networking stack designed to facilitate secure communication between IoT devices. The third part of this thesis lies in the area of side-channel countermeasures against power and electromagnetic analysis attacks. We study efficient and secure expressions that compute simple bitwise functions on Boolean shares. To this end, we describe an algorithm for the efficient search of expressions that have an optimal cost in number of elementary operations. Then, we introduce optimal expressions for first-order Boolean masking of bitwise AND and OR operations. Finally, we analyze the performance of three lightweight block ciphers protected using the optimal expressions.
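    As referenced above, here is a minimal CPA sketch using the common Hamming-weight selection function on the AES S-box output, one of the selection functions such evaluations compare; array names and shapes are assumptions for illustration.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def cpa_key_byte(traces: np.ndarray, plaintexts: np.ndarray,
                 sbox: np.ndarray) -> int:
    """Return the key byte maximizing |corr(HW(Sbox(p ^ k)), trace)|
    over all trace samples. traces: (n, n_samples); plaintexts: (n,)."""
    t = traces - traces.mean(axis=0)
    t_norm = np.linalg.norm(t, axis=0) + 1e-12
    best_k, best = 0, -1.0
    for k in range(256):
        h = HW[sbox[plaintexts ^ k]].astype(float)
        h -= h.mean()
        corr = np.abs(h @ t) / (np.linalg.norm(h) * t_norm + 1e-12)
        if corr.max() > best:
            best_k, best = k, float(corr.max())
    return best_k
```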

    Randomness Generation for Secure Hardware Masking - Unrolled Trivium to the Rescue

    Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed over the past decades. The common denominator among all of them is the implicit demand for robust and high entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency and performance of secure implementations. This is especially relevant in the case of hardware masking schemes, which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations. In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform all competitors when implemented in an unrolled fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle while maintaining high performance, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes. According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a high rate per cycle even more efficiently than Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking scheme, requires the use of cryptographically stronger primitives like stream ciphers. As a result of our studies, we provide an evidence-based estimate for the cost of securely generating n fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as 20n to 30n ASIC gate equivalents (GE) or 3n to 4n FPGA look-up tables (LUTs), where n is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable and potentially moving low randomness usage in hardware masking research from a primary to a secondary design goal.
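    The linearity issue flagged above is easy to see in code: every bit an LFSR emits is an XOR of seed bits, so several outputs consumed inside one masked circuit can combine into a share-cancelling linear relation. Below is a sketch of a 16-bit Galois LFSR clocked n times per call ("unrolled by n"), using a standard maximal-length tap mask; it illustrates the structure only and is not the paper's hardware design.

```python
def lfsr_step_n(state: int, n: int, taps: int = 0xB400):
    """Clock a 16-bit Galois LFSR n times and return (n output bits,
    new state). Each output bit is a linear (XOR) function of the seed
    bits -- the property that makes raw LFSR output unsafe as masking
    randomness, and that a stream cipher like Trivium avoids."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        state >>= 1
        if out[-1]:
            state ^= taps
    return out, state
```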

    Information Bounds and Convergence Rates for Side-Channel Security Evaluators

    Current side-channel evaluation methodologies exhibit a gap between inefficient tools offering strong theoretical guarantees and efficient tools only offering heuristic (sometimes case-specific) guarantees. Profiled attacks based on the empirical leakage distribution correspond to the first category. Bronchain et al. showed at Crypto 2019 that they allow bounding the worst-case security level of an implementation, but the bounds become loose as the leakage dimensionality increases. Template attacks and machine learning models are examples of the second category. In view of the increasing popularity of such parametric tools in the literature, a natural question is whether the information they can extract can be bounded. In this paper, we first show that a metric conjectured to be useful for this purpose, the hypothetical information, does not offer such a general bound: it only does when the assumptions exploited by a parametric model match the true leakage distribution. We therefore introduce a new metric, the training information, that provides the guarantees conjectured for the hypothetical information for practically-relevant models. We next initiate a study of the convergence rates of profiled side-channel distinguishers which clarifies, to the best of our knowledge for the first time, the parameters that influence the complexity of profiling. On the one hand, this has practical consequences for evaluators, as it can guide them in choosing the appropriate modeling tool depending on the implementation (e.g., protected or not) and context (e.g., whether they are granted access to the countermeasures' randomness). It also allows anticipating the number of measurements needed to guarantee a sufficient model quality. On the other hand, our results connect and exhibit differences between side-channel analysis and statistical learning theory.
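    To fix ideas, information metrics of this family are typically estimated as plug-in averages of the model's log-likelihood on the true intermediate values. The sketch below shows that generic estimator only; the paper's precise definitions of hypothetical and training information differ in which distribution and data set the expectation is taken over, so this is an assumption-laden illustration, not the paper's formula.

```python
import numpy as np

def information_estimate(log2_probs_true: np.ndarray, n_bits: int = 8) -> float:
    """Plug-in estimate H(K) + mean_i log2 p_model(k_i | l_i) for a
    uniform n_bits-bit target K. log2_probs_true[i] is the model's
    log2 probability of the true intermediate value on trace i.
    Evaluated on profiling data this resembles a training-information
    style metric; on fresh traces, a perceived-information estimate."""
    return float(n_bits + np.mean(log2_probs_true))
```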