
    Improved Study of Side-Channel Attacks Using Recurrent Neural Networks

    Differential power analysis attacks are a special kind of side-channel attack in which power traces serve as the side-channel information used to launch the attack. These attacks pose a significant security threat to modern cryptographic devices such as smart cards and Point of Sale (POS) machines, because after careful analysis of the power traces, an attacker can break a secured encryption algorithm and steal sensitive information. In our work, we study differential power analysis attacks using two popular neural networks: the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN). Our work seeks to answer three research questions (RQs). RQ1: Is it possible to predict the unknown cryptographic algorithm using neural network models trained on different datasets? RQ2: Is it possible to map the key value for a specific plaintext-ciphertext pair with or without side-band information? RQ3: Using similar hyper-parameters, can we evaluate the performance of the two neural network models (CNN vs. RNN)? In answering these questions, we have worked with two different datasets: one is a physical dataset (the DPA contest v1 dataset), and the other is a simulated dataset (toggle-count quantities) from Verilog HDL. We have evaluated the efficiency of CNN and RNN models in predicting the unknown cryptographic algorithm of the device under attack. We have mapped the 56-bit key for a specific plaintext-ciphertext pair with and without using side-band information. Finally, we have evaluated our neural network models using metrics such as accuracy, loss, baselines, epochs, speed of operation, and memory consumption, and we have compared the performance of RNN and CNN on the different datasets. We have conducted three experiments and report the results of each. The first two experiments show the advantages of choosing CNN over RNN when working with side-channel datasets. In the third experiment, we compare two RNN models on the same datasets but with different dataset dimensions.
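    The core difference-of-means step behind differential power analysis can be sketched on synthetic traces. Everything below (the random stand-in S-box, the leak amplitude, the trace counts) is an illustrative assumption, not the experimental setup of the thesis above:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRACES, N_SAMPLES = 2000, 100
SECRET_KEY = 0x3C
LEAK_SAMPLE = 42

# A fixed random 8-bit S-box stands in for a real cipher's nonlinear layer.
sbox = np.random.default_rng(7).permutation(256)

plaintexts = rng.integers(0, 256, size=N_TRACES)
leak_bit = sbox[plaintexts ^ SECRET_KEY] & 1      # the bit that actually leaks

# Synthetic power traces: Gaussian noise plus a small leak at one time sample.
traces = rng.normal(0.0, 1.0, size=(N_TRACES, N_SAMPLES))
traces[:, LEAK_SAMPLE] += leak_bit.astype(float)

def dpa_score(key_guess):
    """Max absolute difference of means over all time samples for one key guess."""
    bit = sbox[plaintexts ^ key_guess] & 1
    diff = traces[bit == 1].mean(axis=0) - traces[bit == 0].mean(axis=0)
    return np.abs(diff).max()

scores = [dpa_score(k) for k in range(256)]
best_guess = int(np.argmax(scores))
print(hex(best_guess))   # the correct key guess maximizes the difference of means
```

    For the correct guess, the partition by the predicted bit matches the real leak, so the mean traces of the two partitions differ sharply at the leaking sample; wrong guesses produce nearly random partitions whose means cancel out.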

    Deep-Learning-Based Radio-Frequency Side-Channel Attack on Quantum Key Distribution

    Quantum key distribution (QKD) protocols are proven secure on the basis of fundamental physical laws; however, the proofs consider only a well-defined setting and encoding of the transmitted quantum signals. Side channels, in which the encoded quantum state is correlated with properties of other degrees of freedom of the quantum channel, allow an eavesdropper to obtain information unnoticed, as demonstrated in a number of hacking attacks on the quantum channel. However, classical radiation emitted by the devices may also be correlated with the signals, leaking information on the potential key, especially when combined with novel data analysis methods. Here we demonstrate a side-channel attack that uses a deep convolutional neural network to analyze recorded classical, radio-frequency electromagnetic emissions. Even at a distance of a few centimeters from the electronics of a QKD sender built from frequently used electronic components, we are able to recover virtually all information about the secret key. As we also show, countermeasures can significantly reduce both the emissions and the amount of secret-key information leaked to the attacker. Our analysis methods are independent of the actual device and thus provide a starting point for assessing the presence of classical side channels in QKD devices.

    Enhancing the Performance of Practical Profiling Side-Channel Attacks Using Conditional Generative Adversarial Networks

    Recently, many profiling side-channel attacks based on machine learning and deep learning have been proposed. Most of them focus on reducing the number of traces required for a successful attack by optimizing the modeling algorithms. Previous work has relied on a relatively large number of traces for training a model. In the practical profiling phase, however, it is difficult or impossible to collect sufficient traces due to resource constraints, and profiling attacks then perform poorly even when proper modeling algorithms are used. In this paper, the main problem we consider is how to conduct more efficient profiling attacks when sufficient profiling traces cannot be obtained. To deal with this problem, we first introduce the Conditional Generative Adversarial Network (CGAN) in the context of side-channel attacks. We show that a CGAN can generate new traces to enlarge the profiling set, which improves the performance of profiling attacks. For both unprotected and protected cryptographic algorithms, we find that the CGAN can effectively learn the leakage of traces collected from their implementations. We also apply it to different modeling algorithms. In our experiments, a model constructed with the augmented profiling set reduces the number of required attack traces by more than half, which means the generated traces provide information as useful as that of real traces.
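    The mechanics of conditionally generating traces to enlarge a profiling set can be sketched without a full CGAN. Here a per-class Gaussian generator stands in for the adversarial model purely to show the pipeline (fit a conditional generator on a small profiling set, then sample labeled synthetic traces); all shapes and the leakage model are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N_CLASSES, N_SAMPLES, N_PROF = 9, 50, 20   # e.g. Hamming-weight labels 0..8

# Small real profiling set: class-dependent mean plus noise.
class_means = rng.normal(0, 1, size=(N_CLASSES, N_SAMPLES))
y_prof = rng.integers(0, N_CLASSES, size=N_PROF)
X_prof = class_means[y_prof] + rng.normal(0, 0.5, (N_PROF, N_SAMPLES))

# "Train" the conditional generator: per-class mean estimated from the
# profiling traces, falling back to the global mean for unseen classes.
mu = np.array([X_prof[y_prof == c].mean(axis=0) if (y_prof == c).any()
               else X_prof.mean(axis=0) for c in range(N_CLASSES)])
sigma = X_prof.std()

def generate(labels):
    """Sample synthetic traces conditioned on the requested class labels."""
    return mu[labels] + rng.normal(0, sigma, (len(labels), N_SAMPLES))

# Enlarge the profiling set with labeled generated traces.
y_gen = np.repeat(np.arange(N_CLASSES), 20)
X_gen = generate(y_gen)
X_big = np.vstack([X_prof, X_gen])
y_big = np.concatenate([y_prof, y_gen])
print(X_prof.shape, "->", X_big.shape)
```

    A CGAN replaces the Gaussian generator with a learned adversarial one, which can capture leakage structure that a simple parametric model misses.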

    How Diversity Affects Deep-Learning Side-Channel Attacks

    Deep-learning side-channel attacks are an emerging threat to the security of implementations of cryptographic algorithms. The attacker first trains a model on a large set of side-channel traces captured from a chip with a known key. The trained model is then used to recover the unknown key from a few traces captured from a victim chip. The first successful attacks have been demonstrated recently; however, they typically train and test on power traces captured from the same device. In this paper, we show that it is important to train and test on traces captured from different boards and with diverse implementations of the cryptographic algorithm under attack. Otherwise, it is easy to overestimate the classification accuracy. For example, if we train and test an MLP model on power traces captured from the same board, we can recover all key-byte values with 96% accuracy from a single trace. However, the single-trace attack accuracy drops to 2.45% if we test on traces captured from a board different from the one used for training, even if both boards carry identical chips.
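    The cross-device accuracy collapse can be reproduced in miniature: profile a simple nearest-centroid "model" on traces from one board, then test on a second board whose per-class leakage signatures are distorted by device variation. The distortion magnitude and class counts below are synthetic assumptions, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
N_CLASSES, N_SAMPLES = 16, 64

# Per-class leakage signatures of board A; board B leaks the same classes
# but with a device-dependent distortion of the signatures.
means_a = rng.normal(0, 1, (N_CLASSES, N_SAMPLES))
means_b = means_a + rng.normal(0, 2.0, (N_CLASSES, N_SAMPLES))

def traces(means, n_per_class, noise=0.8):
    X = np.vstack([means[c] + rng.normal(0, noise, (n_per_class, N_SAMPLES))
                   for c in range(N_CLASSES)])
    y = np.repeat(np.arange(N_CLASSES), n_per_class)
    return X, y

# Profile ("train") on board A only.
X_train, y_train = traces(means_a, 50)
cents = np.vstack([X_train[y_train == c].mean(0) for c in range(N_CLASSES)])

def accuracy(X, y):
    pred = np.argmin(((X[:, None] - cents[None]) ** 2).sum(-1), axis=1)
    return float((pred == y).mean())

acc_same = accuracy(*traces(means_a, 50))    # test on the training board
acc_cross = accuracy(*traces(means_b, 50))   # test on a different board
print(acc_same, acc_cross)                   # same-board accuracy is much higher
```

    The same-board figure looks excellent while the cross-board figure falls toward chance, which is exactly the evaluation pitfall the abstract warns about.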

    Learning when to stop: a mutual information approach to fight overfitting in profiled side-channel analysis

    Today, deep neural networks are a common choice for conducting profiled side-channel analysis. Such techniques commonly require no pre-processing, and yet they can break targets protected with countermeasures. Unfortunately, it is not trivial to find neural-network hyper-parameters that result in such top-performing attacks. The hyper-parameter guiding the training process is the number of epochs over which training happens. If training is too short, the network does not reach its full capacity, while if training is too long, the network overfits and is unable to generalize to unseen examples. Finding the right moment to stop training is particularly difficult for side-channel analysis, as there are no clear connections between the machine learning and side-channel metrics that govern the training and attack phases, respectively. In this paper, we tackle the problem of determining the correct epoch at which to stop training in deep-learning-based side-channel analysis. We explore how information propagates through the hidden layers of a neural network, which allows us to monitor how training is evolving. We demonstrate that the amount of information, or more precisely the mutual information transferred to the output layer, can be measured and used as a reference metric to determine the epoch at which the network offers optimal generalization. To validate the proposed methodology, we provide extensive experimental results that confirm the effectiveness of our metric for avoiding overfitting in profiled side-channel analysis.
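    The stopping rule itself can be sketched independently of any real network: after each epoch, estimate the mutual information between held-out labels and the model's predictions, and stop at the epoch where it peaks. Training is simulated below by an accuracy curve that rises, peaks, and then degrades as overfitting sets in; the plug-in MI estimator and all numbers are illustrative assumptions, not the paper's estimator or results:

```python
import numpy as np

rng = np.random.default_rng(3)
N_CLASSES, N_VAL, N_EPOCHS = 9, 3000, 30
y_val = rng.integers(0, N_CLASSES, N_VAL)

def plugin_mi(y_true, y_pred, k):
    """Plug-in mutual information (nats) from the joint label/prediction histogram."""
    joint = np.zeros((k, k))
    np.add.at(joint, (y_true, y_pred), 1.0)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def simulated_predictions(epoch):
    # Generalization accuracy rises until roughly epoch 15, then decays.
    acc = 0.9 * np.exp(-((epoch - 15) / 10.0) ** 2) + 1.0 / N_CLASSES
    correct = rng.random(N_VAL) < acc
    return np.where(correct, y_val, rng.integers(0, N_CLASSES, N_VAL))

mi_curve = [plugin_mi(y_val, simulated_predictions(e), N_CLASSES)
            for e in range(N_EPOCHS)]
stop_epoch = int(np.argmax(mi_curve))
print(stop_epoch)   # close to the generalization peak
```

    Because the MI curve tracks generalization rather than training loss, its maximum marks a stopping point before the overfitting regime begins.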

    Far Field EM Side-Channel Attack on AES Using Deep Learning

    We present the first deep-learning-based side-channel attack on AES-128 that uses far-field electromagnetic emissions as a side channel. Our neural networks are trained on traces captured from five different Bluetooth devices at five different distances to the target and tested on four other Bluetooth devices. We can recover the key from fewer than 10K traces captured in an office environment at 15 m distance to the target, even if the measurement for each encryption is taken only once. Previous template attacks required multiple repetitions of the same encryption. For the case of 1K repetitions, we need fewer than 400 traces on average at 15 m distance to the target. This improves on the template attack presented at CHES 2020, which requires 5K traces and key enumeration up to 2^23.
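    The score-accumulation step common to such multi-trace attacks can be sketched directly: the model emits, per trace, a probability for each key-byte candidate; summing log-probabilities across traces and ranking candidates shows why even weak per-trace information eventually pins down the key. The synthetic per-trace posteriors below are a stand-in assumption, not the paper's network outputs:

```python
import numpy as np

rng = np.random.default_rng(4)
N_KEYS, TRUE_KEY = 256, 0xA7

def per_trace_probs(n_traces, advantage=0.02):
    """Synthetic per-trace posteriors: nearly uniform, slightly favoring TRUE_KEY."""
    p = rng.random((n_traces, N_KEYS)) + 1.0
    p[:, TRUE_KEY] *= 1.0 + advantage
    return p / p.sum(axis=1, keepdims=True)

def key_rank(probs):
    scores = np.log(probs).sum(axis=0)             # accumulate over traces
    order = np.argsort(scores)[::-1]               # best candidate first
    return int(np.where(order == TRUE_KEY)[0][0])  # rank 0 = key recovered

rank_100 = key_rank(per_trace_probs(100))
rank_10k = key_rank(per_trace_probs(10000))
print(rank_100, rank_10k)   # rank falls to 0 once enough traces are combined
```

    The residual rank after a fixed trace budget is what key enumeration must cover, which is why a stronger per-trace model (here, a larger `advantage`) translates into fewer traces and less enumeration.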