2,288 research outputs found

    Power Side Channels in Security ICs: Hardware Countermeasures

    Full text link
    Power side-channel attacks are a highly effective cryptanalysis technique that can infer the secret keys of security ICs by monitoring their power consumption. Since the emergence of practical attacks in the late 1990s, they have been a major threat to many cryptographically equipped devices, including smart cards, encrypted FPGA designs, and mobile phones. Designers and manufacturers of cryptographic devices have responded by developing various countermeasures for protection. Attack methods have also evolved to counteract resistant implementations. This paper reviews foundational power analysis attack techniques and examines a variety of hardware design mitigations. The aim is to highlight vulnerabilities that remain exposed in hardware-based countermeasures, informing future, more secure implementations.
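
    As a concrete illustration of the attack principle summarized above, the sketch below runs a classical single-bit difference-of-means DPA on simulated traces. Everything in it is an assumption made for demonstration: the leakage is modelled as the Hamming weight of an S-box output plus Gaussian noise, and a seeded random permutation stands in for a real cipher S-box.

```python
# Minimal difference-of-means DPA on simulated power traces (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)            # stand-in for a real cipher S-box
TRUE_KEY = 0x3C                        # the secret key byte we try to recover

def simulate_traces(plaintexts, key, noise=1.0):
    """Toy power model: Hamming weight of SBOX[p ^ key] plus Gaussian noise."""
    hw = np.array([bin(SBOX[p ^ key]).count("1") for p in plaintexts])
    return hw + rng.normal(0.0, noise, size=hw.shape)

n = 5000
plaintexts = rng.integers(0, 256, size=n)
traces = simulate_traces(plaintexts, TRUE_KEY)   # one sample per trace for brevity

# For each key guess, split the traces on a predicted S-box output bit and
# compare the group means; the correct guess yields the largest difference.
scores = []
for guess in range(256):
    predicted_bit = SBOX[plaintexts ^ guess] & 1
    scores.append(abs(traces[predicted_bit == 1].mean()
                      - traces[predicted_bit == 0].mean()))

print("recovered key byte:", int(np.argmax(scores)), "expected:", TRUE_KEY)
```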

    Secure hardware design against side-channel attacks

    Get PDF
    Embedded systems such as smart cards or IoT devices should be protected from side-channel analysis (SCA) attacks. For secure hardware implementation, SCA security metrics that quantify the robustness of the implementation against SCA attacks should be considered at abstraction levels ranging from the logic level to the layout level. In our design flow, the first security test is executed at the logic level. If the implementation does not satisfy the threshold of the SCA security metric based on Kullback-Leibler divergence, the module can be re-synthesized with secure logic styles such as WDDL or t-private logic circuits. At the final security test, we use machine learning techniques such as LDA, QDA, SVM, and naive Bayes to check the distinguishability of the side-channel leakage depending on inputs or outputs. These techniques are applied to an ASIC to characterize secret data leakage. In this thesis, t-private logic circuits are implemented with the FreePDK45 (45 nm) library. The SCA security metric, as well as the delay and power consumption, is characterized. All this characterization data is stored in the standard Liberty format (.lib) so that general CAD tools can use this file. The t-private logic package, which includes general digital logic cells, can be exploited for secure VLSI design. Also, various classifiers such as LDA, QDA, SVM, or naive Bayes are used to emulate a real SCA environment. Based on this SCA simulator, the threshold of the SCA security metric can be estimated and the security can be verified more accurately. The secure logic cell package and SCA simulator support the proposed secure hardware implementation methodology.
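
    The following sketch loosely mirrors the described flow: a Kullback-Leibler divergence estimate between leakage distributions serves as the security metric, and scikit-learn classifiers (LDA, QDA, SVM, naive Bayes) test whether the leakage is distinguishable. The Gaussian leakage model, sample counts, and any implied threshold are assumptions for illustration, not values from the thesis.

```python
# Illustrative KL-divergence leakage metric plus classifier distinguishability check.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence between two 1-D Gaussians: N(mu0, var0) || N(mu1, var1)."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1)

# Simulated leakage samples conditioned on a secret-dependent bit (toy model).
n = 2000
leak0 = rng.normal(0.00, 1.0, n)       # bit = 0
leak1 = rng.normal(0.15, 1.0, n)       # bit = 1 (small data-dependent shift)

kl = kl_gauss(leak0.mean(), leak0.var(), leak1.mean(), leak1.var())
print(f"estimated KL divergence: {kl:.4f}")   # near 0 would indicate little leakage

X = np.concatenate([leak0, leak1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])
for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis(),
            SVC(kernel="rbf"), GaussianNB()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__:32s} accuracy: {acc:.3f}")  # >0.5 => distinguishable
```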

    SoK: Design Tools for Side-Channel-Aware Implementations

    Get PDF
    Side-channel attacks that leak sensitive information through a computing device's interaction with its physical environment have proven to be a severe threat to devices' security, particularly when adversaries have unfettered physical access to the device. Traditional approaches for leakage detection measure the physical properties of the device; hence, they cannot be used during the design process and fail to provide root-cause analysis. An alternative approach that is gaining traction is to automate leakage detection by modeling the device. The demand to understand the scope, benefits, and limitations of the proposed tools intensifies with the increase in the number of proposals. In this SoK, we classify approaches to automated leakage detection based on the model's source of truth. We classify the existing tools on two main parameters: whether the model includes measurements from a concrete device, and the abstraction level of the device specification used for constructing the model. We survey the proposed tools to determine the current knowledge level across the domain and to identify open problems. In particular, we highlight the absence of evaluation methodologies and metrics that would compare the effectiveness of proposals from across the domain. We believe that our results help practitioners who want to use automated leakage detection and researchers interested in advancing the knowledge and improving automated leakage detection.

    Reliability-energy-performance optimisation in combinational circuits in presence of soft errors

    Get PDF
    PhD Thesis. The reliability metric has a direct relationship to the amount of value produced by a circuit, similar to the performance metric. With advances in CMOS technology, digital circuits become increasingly susceptible to soft errors. Therefore, it is imperative to be able to assess and improve the level of reliability of these circuits. A framework for evaluating and improving the reliability of combinational circuits is proposed, and an interplay between the metrics of reliability, energy and performance is explored. Reliability evaluation is divided into two levels of characterisation: a stochastic fault model (SFM) of the component library and a design-specific critical vector model (CVM). The SFM captures the properties of components with regard to the interference which causes errors. The CVM is derived from a limited number of simulation runs on the specific design at design time and produces the reliability metric. The idea is to move the high-complexity problem of the stochastic characterisation of components to the generic part of the design process, and to do it just once for a large number of specific designs. The method is demonstrated on a range of circuits with various structures. A three-way trade-off between reliability, energy, and performance has been discovered; this trade-off facilitates optimisations of circuits and their operating conditions. A technique for improving the reliability of a circuit is proposed, based on adding a slow stage at the primary output. Slow stages have the ability to absorb narrow glitches from prior stages, thus reducing the error probability. Such stages, or filters, suppress most of the glitches generated in prior stages and prevent them from arriving at the primary output of the circuit. Two filter solutions have been developed and analysed. The results show a dramatic improvement in reliability at the expense of minor performance and energy penalties. To alleviate the problem of the time-consuming analogue simulations involved in the proposed method, a simplification technique is proposed. This technique exploits the equivalence between the properties of the gates within a path and the equivalence between paths. On the basis of these equivalences, it is possible to reduce the number of simulation runs. The effectiveness of the proposed technique is evaluated by applying it to different circuits with a representative variety of path topologies. The results show a significant decrease in the time taken to estimate reliability at the expense of a minor decrease in the accuracy of estimation. The simplification technique enables the use of the proposed method in applications with complex circuits. Ministry of Education and Scientific Research in Libya.
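
    The toy Monte Carlo sketch below is only meant to convey the two-level idea of combining per-gate error probabilities (standing in for a stochastic fault model) with a small set of input vectors (standing in for a critical vector model) to estimate an output error probability. The circuit, probabilities, and vectors are invented for the example and are not taken from the thesis.

```python
# Monte Carlo fault injection over a toy combinational circuit (illustrative only).
import random

def and_gate(a, b): return a & b
def xor_gate(a, b): return a ^ b

# Toy two-gate circuit: out = (a AND b) XOR c.
# Each entry: (name, function, input names, per-cycle soft-error probability).
GATES = [("g1", and_gate, ("a", "b"), 0.02),
         ("g2", xor_gate, ("g1", "c"), 0.01)]

def evaluate(vector, inject=False):
    """Evaluate the circuit; optionally flip gate outputs with their error probability."""
    values = dict(vector)
    for name, func, ins, p_err in GATES:
        out = func(*(values[i] for i in ins))
        if inject and random.random() < p_err:
            out ^= 1                     # a soft error flips this gate's output
        values[name] = out
    return values[GATES[-1][0]]

# A small set of input vectors standing in for the critical vector model.
critical_vectors = [{"a": 1, "b": 1, "c": 0}, {"a": 1, "b": 0, "c": 1}]
trials = 50_000
errors = sum(evaluate(v, inject=True) != evaluate(v)
             for v in critical_vectors for _ in range(trials))
print("estimated output error probability:",
      errors / (trials * len(critical_vectors)))
```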

    LeakyOhm: Secret Bits Extraction using Impedance Analysis

    Full text link
    The threats of physical side-channel attacks and their countermeasures have been widely researched. Most physical side-channel attacks rely on the unavoidable influence of computation or storage on current consumption or voltage drop on a chip. Such data-dependent influence can be exploited by, for instance, power or electromagnetic analysis. In this work, we introduce a novel non-invasive physical side-channel attack, which exploits the data-dependent changes in the impedance of the chip. Our attack relies on the fact that the temporarily stored contents in registers alter the physical characteristics of the circuit, which results in changes in the die's impedance. To sense such impedance variations, we deploy a well-known RF/microwave method called scattering parameter analysis, in which we inject high-frequency sine wave signals into the system's power distribution network (PDN) and measure the echo of the signals. We demonstrate that, according to the content bits and physical location of a register, the reflected signal is modulated differently at various frequency points, enabling the simultaneous and independent probing of individual registers. Such side-channel leakage challenges the t-probing security model assumption used in masking, which is a prominent side-channel countermeasure. To validate our claims, we mount non-profiled and profiled impedance analysis attacks on hardware implementations of unprotected and high-order masked AES. We show that in the case of the profiled attack, only a single trace is required to recover the secret key. Finally, we discuss how a specific class of hiding countermeasures might be effective against impedance leakage.
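
    A back-of-the-envelope sketch of why such impedance variations are measurable: the reflection coefficient seen by a probe on the PDN, Gamma = (Z_load - Z0) / (Z_load + Z0), shifts when the die impedance changes with register contents. The impedance values below are invented purely to illustrate the order of magnitude of the effect; they are not measurements from the paper.

```python
# Reflection-coefficient shift for a small, content-dependent impedance change.
import numpy as np

Z0 = 50.0                                   # reference impedance of the probe (ohms)

def s11(z_load):
    """Complex reflection coefficient for a given load impedance."""
    return (z_load - Z0) / (z_load + Z0)

# Hypothetical die impedance at one probing frequency for two register states.
z_bit0 = 1.20 + 0.35j                       # register bit = 0
z_bit1 = 1.23 + 0.36j                       # register bit = 1 (tiny shift)

g0, g1 = s11(z_bit0), s11(z_bit1)
print(f"|S11| bit=0: {abs(g0):.6f}  phase: {np.angle(g0, deg=True):.4f} deg")
print(f"|S11| bit=1: {abs(g1):.6f}  phase: {np.angle(g1, deg=True):.4f} deg")
print(f"difference in |S11|: {abs(abs(g1) - abs(g0)):.2e}")
```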

    Exploiting FPGA block memories for protected cryptographic implementations

    Get PDF
    Modern Field Programmable Gate Arrays (FPGAs) are packed with features that facilitate designers. The availability of features like large block memories (BRAM), Digital Signal Processing (DSP) cores, and embedded CPUs makes the design strategy for FPGAs quite different from that for ASICs. FPGAs are also widely used in security-critical applications where protection against known attacks is of prime importance. We focus on physical attacks, which target physical implementations. To design countermeasures against such attacks, the strategy for FPGA designers should also be different from that for ASICs. The available features should be exploited to design compact and strong countermeasures. In this paper, we propose methods to exploit the BRAMs in FPGAs for designing compact countermeasures. BRAM can be used to optimize intrinsic countermeasures like masking and dual-rail logic, which otherwise have significant overhead (at least 2x). The optimizations are applied to a real AES-128 co-processor and tested for area overhead and resistance on Xilinx Virtex-5 chips. The presented masking countermeasure has an overhead of only 16% when applied to AES. Moreover, the Dual-rail Precharge Logic (DPL) countermeasure has been optimized to pack the whole sequential part into the BRAM, hence enhancing security. Proper robustness evaluations are conducted to analyze the optimizations for area and security.
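
    As a sketch of the kind of masked table lookup that can be packed into a BRAM, the following Python model builds a first-order remasked S-box table T[x ^ m_in] = S[x] ^ m_out so that the device only ever operates on masked values. The random-permutation S-box and the mask handling are illustrative assumptions, not the paper's actual AES-128 design.

```python
# First-order masked table lookup, modelling a remasked S-box stored in a BRAM.
import numpy as np

rng = np.random.default_rng(2)
SBOX = rng.permutation(256)                    # stand-in for a real cipher S-box

def remask_table(m_in, m_out):
    """Build the masked table; on an FPGA this write would target a BRAM."""
    table = np.empty(256, dtype=np.int64)
    for x in range(256):
        table[x ^ m_in] = SBOX[x] ^ m_out
    return table

# One masked S-box evaluation.
secret = 0xA7
m_in, m_out = int(rng.integers(256)), int(rng.integers(256))
masked_in = secret ^ m_in                      # the device only ever sees this value
table = remask_table(m_in, m_out)
masked_out = table[masked_in]                  # a BRAM read in hardware

assert masked_out ^ m_out == SBOX[secret]      # unmasking recovers the true output
print("masked lookup consistent with SBOX[secret]")
```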

    Timing speculation and adaptive reliable overclocking techniques for aggressive computer systems

    Get PDF
    Computers have changed our lives beyond our own imagination in the past several decades. The continued and progressive advancements in VLSI technology and numerous micro-architectural innovations have played a key role in the design of spectacular low-cost, high-performance computing systems that have become omnipresent in today's technology-driven world. Performance and dependability have become key concerns as these ubiquitous computing machines continue to drive our everyday life. Every application has unique demands, as applications run in diverse operating environments. Dependable, aggressive and adaptive systems improve efficiency in terms of speed, reliability and energy consumption. Traditional computing systems run at a fixed clock frequency, which is determined by taking into account the worst-case timing paths, operating conditions, and process variations. Timing-speculation-based reliable overclocking advocates going beyond worst-case limits to achieve the best performance while not avoiding, but detecting and correcting, a modest number of timing errors. The success of this design methodology relies on the fact that timing-critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case design methodology. Better-than-worst-case design methodology is advocated by several recent research pursuits, which exploit dependability techniques to enhance computer system performance. In this dissertation, we address different aspects of timing-speculation-based adaptive reliable overclocking schemes, and evaluate their role in the design of low-cost, high-performance, energy-efficient and dependable systems. We visualize various control knobs in the design that can be favorably controlled to ensure different design targets. As part of this research, we extend the SPRIT3E (Superscalar PeRformance Improvement Through Tolerating Timing Errors) framework, and characterize the extent of application-dependent performance acceleration achievable in superscalar processors by scrutinizing the various parameters that impact operation beyond worst-case limits. We study the limitations imposed by short-path constraints on our technique, and present ways to exploit them to maximize performance gains. We analyze the sensitivity of our technique's adaptiveness by exploring the necessary hardware requirements for dynamic overclocking schemes. Experimental analysis based on SPEC2000 benchmarks running on a SimpleScalar Alpha processor simulator, augmented with error rate data obtained from hardware simulations of a superscalar processor, is presented. Even though reliable overclocking guarantees functional correctness, it leads to higher power consumption. As a consequence, reliable overclocking without considering on-chip temperatures will bring down the lifetime reliability of the chip. In this thesis, we analyze how reliable overclocking impacts the on-chip temperature of a microprocessor and evaluate the effects of overheating, due to such reliable dynamic frequency tuning mechanisms, on the lifetime reliability of these systems. We then evaluate the effect of performing thermal throttling, a technique that clamps the on-chip temperature below a predefined value, on system performance and reliability. Our study shows that a reliably overclocked system with dynamic thermal management achieves a 25% performance improvement, while lasting for 14 years when operated within 353 K.
Over the past five decades, technology scaling, as predicted by Moore's law, has been the bedrock of semiconductor technology evolution. The continued downscaling of CMOS technology to deep sub-micron gate lengths has been the primary reason for its dominance in today's omnipresent silicon microchips. Even as the transition to the next technology node is indispensable, the initial cost and time associated with doing so present a non-level playing field for the competitors in the semiconductor business. As part of this thesis, we evaluate the capability of speculative reliable overclocking mechanisms to maximize performance at a given technology level. We evaluate its competitiveness when compared to technology scaling, in terms of performance, power consumption, energy and energy-delay product. We present a comprehensive comparison for integer and floating-point SPEC2000 benchmarks running on a simulated Alpha processor at three different technology nodes in normal and enhanced modes. Our results suggest that adopting reliable overclocking strategies will help skip a technology node altogether, or be competitive in the market while porting to the next technology node. Reliability has become a serious concern as systems embrace nanometer technologies. In this dissertation, we propose a novel fault-tolerant aggressive system that combines soft error protection and timing error tolerance. We replicate both the pipeline registers and the pipeline stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed Conjoined Pipeline system supports overclocking, provides concurrent error detection and recovery capability for soft errors, intermittent faults and timing errors, and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back-annotated post-layout gate-level timing simulations, using 45 nm technology, of a conjoined two-stage arithmetic pipeline and a conjoined five-stage DLX pipeline processor, with forwarding logic, show that our approach, even under a severe fault injection campaign, achieves near 100% fault coverage and an average performance improvement of about 20% when dynamically overclocked.
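
    The toy model below captures the basic trade-off behind timing-speculative reliable overclocking: shortening the clock period below the worst case increases the timing-error rate, each detected error costs a fixed recovery penalty, and effective throughput peaks somewhere in between. The path-delay distribution is invented; only the three-cycle recovery figure is borrowed from the abstract.

```python
# Toy throughput model for timing-speculative reliable overclocking (illustrative).
import numpy as np

rng = np.random.default_rng(3)
# Per-cycle critical-path delay samples in ns (an assumed distribution).
path_delays = rng.normal(0.7, 0.1, 100_000).clip(0.2, 1.0)
WORST_CASE = 1.0          # ns, the conventional worst-case clock period
PENALTY = 3               # recovery cycles per detected timing error (figure from the abstract)

def throughput(period_ns):
    """Instructions per ns, charging PENALTY extra cycles for each timing error."""
    err_rate = np.mean(path_delays > period_ns)     # fraction of cycles that miss timing
    cycles_per_instr = 1 + err_rate * PENALTY
    return 1.0 / (period_ns * cycles_per_instr)

periods = np.linspace(0.6, 1.0, 81)
best = max(periods, key=throughput)
gain = throughput(best) / throughput(WORST_CASE) - 1
print(f"best period: {best:.3f} ns, speedup over worst-case clock: {gain:.1%}")
```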

    Circuit-Variant Moving Target Defense for Side-Channel Attacks on Reconfigurable Hardware

    Get PDF
    With the emergence of side-channel analysis (SCA) attacks, bits of a secret key may be derived by correlating key values with physical properties of cryptographic process execution. Power and electromagnetic (EM) analysis attacks are based on the principle that current flow within a cryptographic device is key-dependent and, therefore, the resulting power consumption and EM emanations during encryption and/or decryption can be correlated to secret key values. These side-channel attacks require several measurements of the target process in order to amplify the signal of interest, filter out noise, and derive the secret key through statistical analysis methods. Differential power and EM analysis attacks rely on correlating actual side-channel measurements to hypothetical models. This research proposes increasing resistance to differential power and EM analysis attacks through structural and spatial randomization of an implementation. By introducing randomly located circuit variants of encryption components, the proposed moving target defense aims to disrupt the side-channel collection and correlation needed to successfully mount an attack.
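
    To make the correlation step that this defense aims to disrupt concrete, the sketch below performs a minimal correlation power analysis (CPA): key guesses are ranked by the Pearson correlation between simulated traces and a hypothetical Hamming-weight model. The traces, noise level, and stand-in S-box are all assumptions for demonstration.

```python
# Minimal correlation power analysis (CPA) on simulated traces (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
SBOX = rng.permutation(256)                    # stand-in for a real cipher S-box
TRUE_KEY = 0x5B
n = 3000

plaintexts = rng.integers(0, 256, size=n)
hw = np.array([bin(v).count("1") for v in SBOX[plaintexts ^ TRUE_KEY]])
traces = hw + rng.normal(0, 2.0, n)            # one leaky sample per trace

def cpa_rank(traces, plaintexts):
    """Return key guesses sorted by |Pearson correlation| with the HW model."""
    corrs = []
    for guess in range(256):
        model = np.array([bin(v).count("1") for v in SBOX[plaintexts ^ guess]])
        corrs.append(abs(np.corrcoef(model, traces)[0, 1]))
    return np.argsort(corrs)[::-1]

ranking = cpa_rank(traces, plaintexts)
print("best key guess:", int(ranking[0]), "expected:", TRUE_KEY)
```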