An Analysis of Deep Learning Based Profiled Side-channel Attacks: Custom Deep Learning Layer, CNN Hyperparameters for Countermeasures, and Portability Settings

Abstract

A side-channel attack (SCA) recovers secret data from a device by exploiting unintended physical leakages such as power consumption. In a profiled SCA, we assume the adversary has access to a copy of the target device. Using this copy, the adversary learns a profile of the device's leakage. With the profile, the adversary exploits measurements from the target device and recovers the secret key. As SCAs have been shown to be a realistic attack vector, countermeasures have been developed to harden devices against them.

In recent years, deep learning has been applied in a wide variety of domains: convolutional neural networks, for example, have proven effective for object recognition in images, and recurrent neural networks for text generation. Deep learning has also proven successful in the side-channel analysis domain. Until recently, however, no deep learning layer existed that was specifically designed for SCAs; the first such layer, the spread layer, was proposed only recently. In this work, we analyze the spread layer and demonstrate its flaws. We repair these flaws and show that, even then, the spread layer does not enhance the performance of SCAs. Moreover, we show there is no need to develop a deep learning layer specifically for SCAs on unprotected implementations.

For implementations protected by countermeasures, the literature shows that convolutional neural networks are the most successful. However, for both the masking and the random delay countermeasure, little is known about the influence of the kernel size and the depth of the network. In this work, we show that, against the random delay countermeasure, increasing either the kernel size or the depth of the network increases attack efficiency. Against the masking countermeasure, larger kernel sizes combined with shallow networks perform best.

Finally, we consider a portability setting in which the probe position is changed between the profiling and the attack measurement campaigns. We show that this change in probe position renders a typical deep learning SCA ineffective. We introduce a normalization method that restores the attack's effectiveness and show that, with it, the attack performs as expected.
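To illustrate the kind of construction a custom SCA layer represents, the following is a minimal Keras sketch of a spread-style layer that maps each scalar input to triangular activations over evenly spaced centers. The class name SpreadLayer, the binning scheme, and all parameters are illustrative assumptions, not the exact definition from the original proposal or from this work.

    import tensorflow as tf

    class SpreadLayer(tf.keras.layers.Layer):
        # Illustrative spread-style layer (assumed design, not the original
        # definition): each scalar input is spread over `num_bins` triangular
        # activations whose centers are evenly spaced over [lo, hi].
        def __init__(self, num_bins=8, lo=-1.0, hi=1.0, **kwargs):
            super().__init__(**kwargs)
            self.num_bins = num_bins
            self.lo = lo
            self.hi = hi

        def call(self, inputs):
            centers = tf.linspace(self.lo, self.hi, self.num_bins)
            width = (self.hi - self.lo) / (self.num_bins - 1)
            x = tf.expand_dims(inputs, -1)  # (batch, features, 1)
            # Triangular activation: 1 at a bin center, 0 one bin width away.
            act = tf.maximum(0.0, 1.0 - tf.abs(x - centers) / width)
            # Fold the spread dimension back into the feature axis.
            return tf.reshape(act, (-1, inputs.shape[-1] * self.num_bins))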
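The hyperparameter study summarized above can be framed as an ordinary one-dimensional CNN whose kernel size and depth are exposed as parameters. The sketch below is one assumed Keras baseline; the filter counts, pooling, and dense head are placeholders, not the architectures evaluated in this work.

    import tensorflow as tf

    def build_sca_cnn(trace_len, num_classes=256, kernel_size=11, depth=3):
        # Hypothetical profiled-SCA CNN; kernel_size and depth are the two
        # hyperparameters varied in the experiments described above.
        inputs = tf.keras.Input(shape=(trace_len, 1))
        x = inputs
        filters = 8
        for _ in range(depth):  # depth = number of convolutional blocks
            x = tf.keras.layers.Conv1D(filters, kernel_size,
                                       padding="same", activation="relu")(x)
            x = tf.keras.layers.AveragePooling1D(pool_size=2)(x)
            filters *= 2
        x = tf.keras.layers.Flatten()(x)
        x = tf.keras.layers.Dense(128, activation="relu")(x)
        outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
        return tf.keras.Model(inputs, outputs)

    # Illustrative settings only: a deeper network with larger kernels for
    # random delays, a shallow one with large kernels for masking.
    model_delay = build_sca_cnn(trace_len=5000, kernel_size=32, depth=5)
    model_mask = build_sca_cnn(trace_len=5000, kernel_size=32, depth=1)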
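The portability problem can be sketched as follows: a change in probe position mainly shifts and rescales the traces, so standardizing the profiling and attack sets independently, each with its own statistics, removes the mismatch. This is a minimal NumPy sketch of that idea; the exact normalization used in this work may differ.

    import numpy as np

    def standardize_per_set(profiling_traces, attack_traces):
        # Assumed normalization: z-score each measurement campaign with its
        # own per-sample mean and standard deviation, so that offset and
        # scale differences caused by a moved probe cancel out.
        def zscore(traces):
            return (traces - traces.mean(axis=0)) / traces.std(axis=0)
        return zscore(profiling_traces), zscore(attack_traces)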
