18 research outputs found

    Electromagnetic Analysis of AES Cryptographic Circuits Using Clockwise Collisions

    Since Kocher et al. proposed differential power analysis, side-channel analysis (SCA) has attracted attention and been actively studied. SCA is a technique that analyzes the physical information leaked by a cryptographic circuit to identify secret information inside the circuit. The physical information exploited by SCA takes many forms, including processing time, power consumption and electromagnetic emanations. Among these, electromagnetic analysis (EMA), which uses leaked electromagnetic emanations, can acquire waveforms with different physical characteristics depending on the position of the measurement probe and the layout of the target circuit. Thanks to this property, called locality, EMA is known to capture leakage that depends strongly on the operation under analysis, and it is therefore expected to allow more efficient attacks than power analysis, which relies on the power consumption of the whole circuit. In this work, we focus on the locality of EMA and propose clockwise collision EMA (CC-EMA), a new key recovery algorithm against AES. The targeted AES implementation has a loop architecture that executes one round per clock cycle. In such an implementation, a clockwise collision occurs when the Hamming distance between the inputs of an S-box circuit in two consecutive rounds is zero. The proposed method exploits the locality of EMA to identify the leaked electromagnetic emanations at clockwise collisions in the targeted S-box circuit and thereby recovers the key. By combining a thresholding method with majority voting, the CC-EMA key recovery algorithm reduces the computational cost of AES key recovery to 1/256 of that of the conventional correlation electromagnetic analysis (CEMA). Furthermore, we compare CC-EMA with CEMA by simulation and evaluate the number of electromagnetic traces required for an attack (the analysis cost). To evaluate the attack efficiency for leakage acquired under various conditions, we construct a side-channel information model and quantify the analysis cost through simulation. The simulations show under which measurement conditions CC-EMA can recover the key of an AES circuit efficiently. We also show that in AES circuits whose S-box circuits are implemented in parallel, the frequency of clockwise collisions depends on the key value, and we therefore quantify the analysis cost of CC-EMA as a function of the key value as well as of the measurement environment. These simulation results show that, depending on the measurement environment and the key value, the attack efficiency of CC-EMA exceeds that of CEMA. (The University of Electro-Communications)
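
    The thresholding and majority-voting core of the CC-EMA key recovery step can be illustrated in a few lines. The following Python sketch simulates the classification of traces into collision and non-collision classes under assumed conditions (illustrative amplitudes, noise level, a threshold of 0.6 and eleven repeated measurements per encryption); it is not the authors' implementation.

        import numpy as np

        # Label each encryption as a clockwise collision (True) or not, by
        # thresholding the EM amplitude at the targeted S-box's clock edge and
        # taking a majority vote over repeated measurements of the same input.
        def classify_collisions(traces, threshold):
            votes = traces < threshold                        # per-measurement decision
            return votes.sum(axis=1) > traces.shape[1] // 2   # majority vote

        # Toy data: collisions (Hamming distance 0, probability ~1/256) are
        # assumed to leak a visibly smaller amplitude than non-collisions.
        rng = np.random.default_rng(0)
        collide = rng.random(1000) < 1 / 256
        traces = np.where(collide, 0.2, 1.0)[:, None] + 0.3 * rng.standard_normal((1000, 11))
        detected = classify_collisions(traces, threshold=0.6)
        print(detected.sum(), collide.sum())  # detected vs. true collision counts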

    Exploiting FPGA block memories for protected cryptographic implementations

    Modern Field Programmable Gate Arrays (FPGAs) are packed with features that facilitate designers. The availability of features like large block memory (BRAM), Digital Signal Processing (DSP) cores and embedded CPUs makes the design strategy for FPGAs quite different from that for ASICs. FPGAs are also widely used in security-critical applications where protection against known attacks is of prime importance. We focus on physical attacks, which target physical implementations. To design countermeasures against such attacks, the strategy for FPGA designers should also differ from that for ASICs: the available features should be exploited to design compact and strong countermeasures. In this paper, we propose methods to exploit the BRAMs in FPGAs for designing compact countermeasures. BRAM can be used to optimize intrinsic countermeasures like masking and dual-rail logic, which otherwise have significant overhead (at least 2x). The optimizations are applied to a real AES-128 co-processor and tested for area overhead and resistance on Xilinx Virtex-5 chips. The presented masking countermeasure has an overhead of only 16% when applied to AES. Moreover, the Dual-rail Precharge Logic (DPL) countermeasure has been optimized to pack the whole sequential part into the BRAM, thereby enhancing its security. Proper robustness evaluations are conducted to analyze the optimizations for area and security.
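
    For intuition, a BRAM-hosted masking countermeasure can be pictured as table recomputation: a masked S-box table with S'(x XOR m_in) = S(x) XOR m_out is stored in block memory, so the unmasked value never appears during a lookup. Below is a minimal Python sketch of that idea; the placeholder S-box and mask handling are illustrative assumptions, and the paper's actual design is, of course, hardware.

        import secrets

        # Placeholder permutation standing in for the AES S-box.
        SBOX = list(range(256))

        def remask_sbox(sbox, m_in, m_out):
            # Recompute the table for fresh masks; on an FPGA the refreshed
            # table would be written into a BRAM rather than a Python list.
            return [sbox[x ^ m_in] ^ m_out for x in range(256)]

        m_in, m_out = secrets.randbelow(256), secrets.randbelow(256)
        masked = remask_sbox(SBOX, m_in, m_out)
        x = 0x3A
        assert masked[x ^ m_in] == SBOX[x] ^ m_out  # lookup stays masked throughout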

    Near Collision Side Channel Attacks

    Side channel collision attacks are a powerful method to exploit side channel leakage. Apart from a few exceptions, collision attacks usually combine leakage from distinct points in time, making them inherently bivariate. This work introduces the notion of near collisions to exploit the fact that values depending on the same sub-key can have similar, though not identical, leakage. We show how such knowledge can be exploited to mount a key recovery attack. The presented approach has several desirable features when compared to other state-of-the-art collision attacks: near collision attacks are truly univariate; they place low requirements on the leakage functions, since they work well for leakages that are linear in the bits of the targeted intermediate state; and they are applicable in the presence of masking countermeasures if there exist distinguishable leakages, as in the case of leakage squeezing. The results are backed up by a broad range of simulations for unprotected and masked implementations, as well as an analysis of the measurement set provided by DPA Contest v4.
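
    The univariate near-collision idea can be made concrete with a small simulation: under a leakage that is linear in the bits of the intermediate value, two encryptions whose intermediates are close in Hamming distance yield similar single-point leakage, so a key guess can be scored by how well it predicts the pairwise leakage differences. The sketch below is an illustration under assumed Hamming-weight leakage, with a random permutation standing in for the AES S-box to keep the example self-contained; it is not the paper's exact distinguisher.

        import numpy as np

        rng = np.random.default_rng(42)
        SBOX = rng.permutation(256)  # random permutation standing in for the AES S-box

        def hw(x):
            # Hamming weight of each byte in x.
            return np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1).sum(1)

        def score_guess(pts, leak, guess):
            # Score a key guess: do the predicted pairwise Hamming distances
            # between intermediates explain the observed leakage differences?
            v = SBOX[pts ^ guess]
            i, j = np.triu_indices(len(pts), 1)
            pred = hw(v[i] ^ v[j])             # predicted Hamming distances
            obs = np.abs(leak[i] - leak[j])    # observed single-point leakage differences
            return np.corrcoef(pred, obs)[0, 1]

        key = 0x2B
        pts = rng.integers(0, 256, 300)
        leak = hw(SBOX[pts ^ key]) + 0.5 * rng.standard_normal(300)  # HW leakage + noise
        best = max(range(256), key=lambda g: score_guess(pts, leak, g))
        print(hex(best))  # the correct key byte should rank first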

    Orthogonal Direct Sum Masking: A Smartcard Friendly Computation Paradigm in a Code, with Builtin Protection against Side-Channel and Fault Attacks

    Secure elements, such as smartcards or trusted platform modules (TPMs), must be protected against implementation-level attacks. Those include side-channel and fault injection attacks. We introduce ODSM, Orthogonal Direct Sum Masking, a new computation paradigm that achieves protection against those two kinds of attacks. A large vector space is structured as two supplementary orthogonal subspaces. One subspace (called a code $\mathcal{C}$) is used for the functional computation, while the second subspace carries random numbers. As the random numbers are entangled with the sensitive data, ODSM ensures protection against (monovariate) side-channel attacks. The random numbers can be checked either occasionally or globally, thereby ensuring a fine or coarse detection capability. The security level can be formally detailed: it is proved that monovariate side-channel attacks of order up to $d_\mathcal{C}-1$, where $d_\mathcal{C}$ is the minimal distance of $\mathcal{C}$, are impossible, and that any fault of Hamming weight strictly less than $d_\mathcal{C}$ is detected. A complete instantiation of ODSM is given for AES. In this case, all monovariate side-channel attacks of order strictly less than 5 are impossible, and all fault injections perturbing strictly less than 5 bits are detected.
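
    In coordinates, the ODSM encoding can be sketched as follows (the notation below is assumed for illustration, not quoted verbatim from the paper). Let $G$ and $H$ be generator matrices of the code $\mathcal{C}$ and of its supplementary orthogonal subspace $\mathcal{D}$, so that $HG^{\mathsf{T}} = 0$. A sensitive word $x$ and a random mask $y$ are jointly encoded as

    \[
      z = xG \oplus yH, \qquad \mathbb{F}_2^n = \mathcal{C} \oplus \mathcal{D},
    \]

    and orthogonality lets each share be recovered by projection, e.g. $x = zG^{\mathsf{T}}(GG^{\mathsf{T}})^{-1}$. Fault detection follows from the direct-sum structure: a nonzero fault $e$ of Hamming weight strictly less than $d_\mathcal{C}$ cannot lie in $\mathcal{C}$, so its component in $\mathcal{D}$ is nonzero and the recomputed mask no longer matches $y$.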

    Towards Easy Key Enumeration

    Key enumeration solutions are post-processing schemes for the output sequences of side channel distinguishers, whose application is hindered by very large key candidate spaces and computational power requirements. An attacker may spend several days or months enumerating a huge key space (e.g. $2^{40}$). In this paper, we aim at pre-processing and reducing the key candidate space by deleting impossible key candidates before enumeration. A new distinguisher named Group Collision Attack (GCA) is given. Moreover, we introduce key verification into key recovery, and a new divide-and-conquer strategy named Key Grouping Enumeration (KGE) is proposed. KGE divides the huge key space into several groups and uses GCA to delete impossible key combinations and output possible ones in each group. KGE then recombines the remaining key candidates of each group using verification. The number of remaining key candidates becomes much smaller through these two impossible-key-candidate deletion steps, at a small computational cost. Thus, the attacker can use KGE as a pre-processing tool for key enumeration and enumerate the key more easily and quickly in a much smaller candidate space.
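
    A minimal Python sketch of the KGE flow, with a stand-in plausibility test in place of the GCA distinguisher (the group sizes, shortlist rule and verification oracle below are illustrative assumptions, not the paper's implementation):

        from itertools import product

        def prune_group(candidates, plausible):
            # Delete impossible candidates of one group; `plausible` stands in
            # for the GCA-based check described in the abstract.
            return [c for c in candidates if plausible(c)]

        def enumerate_key(groups, verify):
            # Recombine the surviving per-group candidates and verify each
            # full key, e.g. against a known plaintext/ciphertext pair.
            for parts in product(*groups):
                key = b"".join(parts)
                if verify(key):
                    return key
            return None

        # Toy usage: two one-byte groups, shortlists of 3 and 2 survivors, so
        # only 6 of the 65536 combinations ever reach verification.
        true_key = b"\x13\x37"
        g0 = prune_group([bytes([b]) for b in range(256)], lambda c: c[0] in (0x10, 0x13, 0x99))
        g1 = prune_group([bytes([b]) for b in range(256)], lambda c: c[0] in (0x37, 0x40))
        print(enumerate_key([g0, g1], lambda k: k == true_key))  # b'\x13\x37'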

    Information Entropy Based Leakage Certification

    Side-channel attacks and evaluations typically utilize leakage models to extract sensitive information from measurements of cryptographic implementations. Establishing a true leakage model has remained an active area of research since Kocher proposed Differential Power Analysis (DPA) in 1999. Leakage certification plays an important role in this respect by addressing the question: how good is my leakage model? However, existing leakage certification methods still have to tolerate assumption errors and estimation errors of unknown leakage models. There are many probability density distributions satisfying given moment constraints; as such, finding the most unbiased and most reasonable model remains an unresolved problem. In this paper, we address a more fundamental question: what is the true leakage model of a chip? In particular, we propose the Maximum Entropy Distribution (MED) to estimate the leakage model, as the MED is the most unbiased, objective and theoretically most reasonable probability density distribution conditioned upon the available information. The MED can use information on arbitrary higher-order moments to approximate the true leakage model arbitrarily closely, filling a theoretical gap in model profiling and evaluation. Experimental results demonstrate the superiority of our proposed method for approximating the leakage model using MED estimation.
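
    For context, the maximum entropy principle the paper builds on has a standard closed form (a textbook fact, not a formula quoted from the paper): among all densities matching the first $k$ moments $m_1, \dots, m_k$ of the measured leakage, the entropy-maximizing one belongs to an exponential family,

    \[
      p^{*}(x) = \exp\Big(\lambda_0 + \sum_{i=1}^{k} \lambda_i x^{i}\Big), \qquad \int x^{i}\, p^{*}(x)\, dx = m_i \quad (i = 1, \dots, k),
    \]

    where the multipliers $\lambda_i$ are fitted to the moment constraints. Fixing only the first two moments recovers a Gaussian; adding higher-order moments captures exactly the non-Gaussian structure that higher-order evaluation needs.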

    Leakage-Resilient Symmetric Cryptography Under Empirically Verifiable Assumptions

    Leakage-resilient cryptography aims at formally proving the security of cryptographic implementations against large classes of side-channel adversaries. One important challenge for such an approach to be relevant is to adequately connect the formal models used in the proofs with the practice of side-channel attacks. This raises the fundamental problem of finding reasonable restrictions on the leakage functions that can be empirically verified by evaluation laboratories. In this paper, we first argue that the previous "bounded leakage" requirements used in leakage-resilient cryptography are hard for hardware engineers to fulfill. We then introduce a new, more realistic and empirically verifiable assumption of simulatable leakage, under which security proofs in the standard model can be obtained. We finally illustrate our claims by analyzing the physical security of an efficient pseudorandom generator (for which security could previously only be proven under a random-oracle-based assumption). These positive results come at the cost of (algorithm-level) specialization, as our new assumption is specifically defined for block ciphers. Nevertheless, since block ciphers are the main building block of many leakage-resilient cryptographic primitives, our results also open the way towards more realistic constructions and proofs for other pseudorandom objects.

    Leakage Assessment Methodology - a clear roadmap for side-channel evaluations

    Evoked by the increasing need to integrate side-channel countermeasures into security-enabled commercial devices, evaluation labs are seeking a standard approach that enables a fast, reliable and robust evaluation of the side-channel vulnerability of given products. To this end, standardization bodies such as NIST intend to establish a leakage assessment methodology fulfilling these demands. One such proposal is Welch's t-test, which is being put forward by Cryptography Research Inc. and is able to relax the dependency between the evaluations and the device's underlying architecture. In this work, we study in depth the theoretical background of the test's different flavors, and present a roadmap which can be followed by evaluation labs to conduct the tests efficiently and correctly. More precisely, we describe a stable, robust and efficient way to perform the tests at higher orders. Further, we extend the test to multivariate settings, and provide details on how to carry out such a multivariate higher-order test efficiently and rapidly. Including a suggested methodology for collecting the traces for these tests, we point out practical case studies where different types of t-tests can exhibit the leakage of supposedly secure designs.
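
    The first-order, univariate flavor of the test is easy to state: traces are split into a "fixed" and a "random" input class, Welch's t-statistic is computed per sample point, and |t| exceeding a threshold (commonly 4.5) at any point flags leakage. Below is a minimal Python sketch on simulated traces; the array shapes, leak position and threshold are illustrative assumptions.

        import numpy as np

        def welch_t(fixed, random):
            # Per-sample Welch t-statistic between the two trace classes.
            m1, m0 = fixed.mean(0), random.mean(0)
            v1, v0 = fixed.var(0, ddof=1), random.var(0, ddof=1)
            return (m1 - m0) / np.sqrt(v1 / len(fixed) + v0 / len(random))

        rng = np.random.default_rng(7)
        fixed = rng.standard_normal((5000, 100))
        fixed[:, 42] += 0.2                       # inject leakage at sample 42
        random = rng.standard_normal((5000, 100))
        t = welch_t(fixed, random)
        print(np.flatnonzero(np.abs(t) > 4.5))    # should report sample 42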