10 research outputs found

    LDA-Based Clustering as a Side-Channel Distinguisher

    Side-channel attacks put the security of implementations of cryptographic algorithms under threat. Secret information can be recovered by analyzing the physical measurements acquired during the computations and using key-recovery distinguishing functions to guess the best candidate. Several generic and model-based distinguishers have been proposed in the literature. In this work we describe two contributions that lead to better performance of side-channel attacks in challenging scenarios. First, we describe how to transform the physical leakage traces into a new space where the noise reduction is near-optimal. Second, we propose a new generic distinguisher that is based upon minimal assumptions. It approaches the key-distinguishing task as a classification problem and ranks the key candidates according to the separation among the leakage traces. We also provide experiments and compare their results to those of Correlation Power Analysis (CPA). Our results show that the proposed method can indeed reach better success rates, even in the presence of a significant amount of noise.
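    The paper's distinguisher details are not given in this abstract, but the core idea (rank key candidates by how well they separate the leakage traces into clusters) can be sketched in a few lines. The Python fragment below is only a hedged illustration under assumed names: traces is an (n_traces, n_samples) array, plaintexts holds the corresponding plaintext bytes, sbox is the AES S-box, and a Hamming-weight leakage model is assumed; it is not the authors' exact method, nor does it include their near-optimal noise-reducing projection.

        import numpy as np

        def cluster_separation_distinguisher(traces, plaintexts, sbox):
            """Rank key-byte guesses by how well their hypothetical intermediate
            values split the traces into well-separated clusters (between-class
            vs. within-class scatter, a Fisher-style separation score)."""
            scores = np.zeros(256)
            grand_mean = traces.mean(axis=0)
            for guess in range(256):
                # Hypothetical leakage class per trace: Hamming weight of the
                # first-round S-box output under this key-byte guess (assumed model).
                hyp = np.array([bin(sbox[p ^ guess]).count("1") for p in plaintexts])
                between, within = 0.0, 0.0
                for c in np.unique(hyp):
                    cluster = traces[hyp == c]
                    mu = cluster.mean(axis=0)
                    between += len(cluster) * np.sum((mu - grand_mean) ** 2)
                    within += np.sum((cluster - mu) ** 2)
                scores[guess] = between / (within + 1e-12)
            return np.argsort(scores)[::-1]  # best-separated key-byte guess first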

    Horizontal Clustering Side-Channel Attacks on Embedded ECC Implementations (Extended Version)

    Side-channel attacks are a threat to cryptographic algorithms running on embedded devices. Public-key cryptosystems, including elliptic curve cryptography (ECC), are particularly vulnerable because their private keys are usually long-term. Well-known countermeasures such as regularity, projective coordinates and scalar randomization, among others, are used to harden implementations against common side-channel attacks such as DPA. Horizontal clustering attacks can theoretically overcome these countermeasures by attacking individual side-channel traces. In practice, horizontal attacks have been applied to overcome protected ECC implementations on FPGAs. However, it has not yet been shown whether such attacks can be applied to protected implementations running on embedded devices, especially in a non-profiled setting. In this paper we mount non-profiled horizontal clustering attacks on two protected implementations of the Montgomery ladder on Curve25519 available in the µNaCl library, targeting electromagnetic (EM) emanations. The first implementation performs the conditional swap (cswap) operation through arithmetic of field elements (cswap-arith), while the second does so by swapping pointers (cswap-pointer). Both run on a 32-bit ARM Cortex-M4F core. Our best attack has success rates of 97.64% and 99.60% for cswap-arith and cswap-pointer, respectively. This means that at most 6 and 2 bits, respectively, are incorrectly recovered, so a subsequent brute-force search can fix them in reasonable time. Furthermore, the horizontal clustering framework used for these attacks can be applied against other protected implementations.
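    The clustering step of such a horizontal attack can be pictured with the following minimal sketch (not the paper's framework): a single trace is cut into one segment per Montgomery-ladder iteration, a few points of interest are kept, and the segments are clustered into two groups that ideally follow the processed scalar bit. The names seg_len and poi stand in for the alignment and point-of-interest selection a real attack would need, and cluster labels are only recovered up to a global swap.

        import numpy as np
        from sklearn.cluster import KMeans

        def horizontal_cluster_bits(trace, n_bits, seg_len, poi):
            """Cut one trace into per-iteration segments, keep the points of
            interest, and cluster the segments into two groups as candidate
            scalar-bit labels."""
            segments = trace[: n_bits * seg_len].reshape(n_bits, seg_len)[:, poi]
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(segments)
            # The 0/1 cluster labels are defined only up to a swap, so both
            # hypotheses must be checked, e.g. against the known public key.
            return labels, 1 - labels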

    Profiling side-channel attacks on cryptographic algorithms

    Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, aim to extract secret-key information from the leakage of the device on which the cipher is implemented, be it a smart card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices and shown to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume that a potential attacker has access to, or can control, a device identical to the one under attack, which allows them to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret-key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters affects the efficiency of secret-key recovery in practice, as well as the underlying assumption of profiling attacks, namely that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks examined from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are considered in order to explore the relative strengths of each algorithm. Finally, these machine-learning alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
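    For context, a textbook template attack can be summarized in two steps: profiling builds one multivariate-Gaussian template per intermediate-value class, and the attack scores each key guess by the log-likelihood of the target traces under the templates its hypothesis selects. The sketch below illustrates only this generic procedure, not the thesis's specific experiments; point-of-interest selection and the leakage model are assumed to have been done already.

        import numpy as np
        from scipy.stats import multivariate_normal

        def build_templates(profiling_traces, labels):
            """Profiling phase: mean and covariance of the selected trace points
            for every intermediate-value class."""
            templates = {}
            for c in np.unique(labels):
                cls = profiling_traces[labels == c]
                templates[c] = (cls.mean(axis=0), np.cov(cls, rowvar=False))
            return templates

        def rank_key_guesses(attack_traces, hyp_per_guess, templates):
            """Attack phase: hyp_per_guess[g][i] is the class predicted for trace i
            under key guess g; the correct guess should maximize the summed
            log-likelihood of the attack traces under its templates."""
            scores = []
            for hyp in hyp_per_guess:
                ll = 0.0
                for trace, c in zip(attack_traces, hyp):
                    mean, cov = templates[c]
                    ll += multivariate_normal.logpdf(trace, mean=mean, cov=cov,
                                                     allow_singular=True)
                scores.append(ll)
            return np.argsort(scores)[::-1]  # most likely key guess first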

    Towards Human Dependency Elimination: AI Approach to SCA Robustness Assessment

    Evaluating the side-channel resistance of a device in practice is a problematic and arduous process. Current certification schemes require attacking the device under test with an ever-growing number of techniques to validate its security. In addition, the success or failure of these techniques strongly depends on the individual implementing them, due to the fallible, intrinsically human nature of several steps in this process. To alleviate this problem, we propose a battery of automated attacks as a side-channel analysis robustness assessment of an embedded device. To validate our approach, we conduct realistic experiments on two different devices, creating a new dataset (AES_RA) as part of our contribution. Furthermore, we propose a novel way of performing these attacks using Principal Component Analysis, which also serves as an alternative way of selecting optimal principal components automatically. In addition, we perform a detailed analysis of automated attacks against masked AES implementations, comparing our method with state-of-the-art approaches and proposing two novel initialization techniques to overcome its limitations in this scenario. We support our claims with experiments on AES_RA and a public dataset (ASCAD), showing how our approach, although fully automated, can straightforwardly provide state-of-the-art results.
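    One plausible reading of the PCA-based attack and the automatic component selection is sketched below; it is an illustrative guess at the mechanics, not the method evaluated on AES_RA or ASCAD. Traces are projected onto principal components, each component is correlated with the hypothetical leakage of every key guess, and the largest absolute correlation both ranks the guesses and indicates which component is worth keeping. hypotheses is an assumed (n_guesses, n_traces) array of modeled leakage values.

        import numpy as np
        from sklearn.decomposition import PCA

        def cpa_on_components(traces, hypotheses, n_components=10):
            """Correlate every key guess's hypothetical leakage with every principal
            component; return the key ranking and the best component index."""
            comps = PCA(n_components=n_components).fit_transform(traces)
            best = np.zeros(len(hypotheses))
            best_comp = np.zeros(len(hypotheses), dtype=int)
            for g, hyp in enumerate(hypotheses):
                corr = np.array([abs(np.corrcoef(hyp, comps[:, j])[0, 1])
                                 for j in range(n_components)])
                best[g], best_comp[g] = corr.max(), corr.argmax()
            ranking = np.argsort(best)[::-1]
            return ranking, best_comp[ranking[0]]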

    Advanced Signal Processing Solutions for Brain-Computer Interfaces: From Theory to Practice

    As the field of Brain-Computer Interfaces (BCI) is rapidly evolving within both academia and industry, improving the signal processing module of such systems becomes of significant practical and theoretical importance. Additionally, the use of an Electroencephalography (EEG) headset, which is considered the best non-invasive modality for collecting brain signals, offers the developers of a BCI system a relatively more user-friendly experience, affordability, and flexibility of design. Motivated by these facts, the thesis investigates several avenues through which an EEG-based BCI can more accurately interpret the user's intention. The first part of the thesis is devoted to the development of theoretical approaches by which the dimensionality of the collected EEG signals can be reduced with minimum information loss. In this part, two novel frameworks are proposed based on graph signal processing theory, referred to as the GD-BCI and the GDR-BCI, where the geometrical structure of the EEG electrodes is employed to define and configure the underlying graphs. The second part of the thesis is devoted to seeking practical, easy-to-implement solutions to improve the classification accuracy of BCI systems. Finally, in the last part of the thesis, inspired by the recent surge of interest in hybrid BCIs, a novel framework is proposed for cuff-less blood pressure estimation to be coupled with an EEG-based BCI. Referred to as the WAKE-BPAT, the proposed framework simultaneously processes Electrocardiography (ECG) and Photoplethysmogram (PPG) signals via an adaptive Kalman filtering approach.
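    The graph-signal-processing flavour of this dimensionality reduction can be pictured with a toy sketch: build a distance-based graph over the EEG electrode positions, take the graph-Laplacian eigenvectors as a graph Fourier basis, and keep only the lowest graph frequencies of each time sample. This is merely an illustration of the general idea, not the GD-BCI or GDR-BCI construction; the Gaussian kernel width sigma and the number of retained components are assumptions.

        import numpy as np

        def graph_fourier_reduce(eeg, electrode_xyz, keep=8, sigma=0.05):
            """eeg: (n_channels, n_samples); electrode_xyz: (n_channels, 3).
            Returns the `keep` lowest graph-frequency coefficients per sample."""
            d = np.linalg.norm(electrode_xyz[:, None] - electrode_xyz[None, :], axis=-1)
            w = np.exp(-(d ** 2) / (2 * sigma ** 2))   # Gaussian-kernel edge weights
            np.fill_diagonal(w, 0.0)
            laplacian = np.diag(w.sum(axis=1)) - w
            _, basis = np.linalg.eigh(laplacian)       # eigenvectors, ascending order
            return basis[:, :keep].T @ eeg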

    A Hybrid Approach to Formal Verification of Higher-Order Masked Arithmetic Programs

    Side-channel attacks, which are capable of breaking secrecy via side-channel information, pose a growing threat to implementations of cryptographic algorithms. Masking is an effective countermeasure against side-channel attacks: it removes the statistical dependence between secrets and power consumption via randomization. However, designing efficient and effective masked implementations turns out to be an error-prone task. Current techniques for verifying whether masked programs are secure are limited in their applicability and accuracy, especially when applied to higher-order masked programs. To bridge this gap, in this article we first propose a sound type system, equipped with an efficient type inference algorithm, for verifying masked arithmetic programs against higher-order attacks. We then give novel model-counting-based and pattern-matching-based methods that are able to precisely determine whether the potential leaky observable sets detected by the type system are genuine or merely spurious. We evaluate our approach on various implementations of arithmetic cryptographic programs. The experiments confirm that our approach outperforms the state-of-the-art baselines in terms of applicability, accuracy and efficiency.
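    The model-counting idea of separating genuine from spurious leaks can be illustrated, for first-order security over tiny bit-widths, by exhaustive counting: an observable set is independent of the secret exactly when its value distribution, counted over all assignments of the random masks, is identical for every secret value. The naive sketch below is only an illustration of that principle, not the article's algorithm; observable is a placeholder for one (tuple of) internal value(s) of a masked program.

        from itertools import product
        from collections import Counter

        def leaks_secret(observable, n_rand, bits=4):
            """True iff the distribution of observable(secret, rands) over all
            mask assignments differs for some pair of secrets."""
            space = range(1 << bits)
            dists = []
            for secret in space:
                dists.append(Counter(observable(secret, rands)
                                     for rands in product(space, repeat=n_rand)))
            return any(d != dists[0] for d in dists)

        # A first-order masked share x ^ r alone is uniform, hence non-leaky,
        # but observing it jointly with the mask r reveals x.
        print(leaks_secret(lambda x, r: x ^ r[0], n_rand=1))          # False
        print(leaks_secret(lambda x, r: (x ^ r[0], r[0]), n_rand=1))  # True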

    Alzheimer’s Dementia Recognition Through Spontaneous Speech


    Information-theoretic environment modeling for mobile robot localization

    To enhance robotic computational efficiency without degrading accuracy, it is imperative to supply exactly the right amount of information, in its simplest form, to the task at hand. This thesis follows this reasoning in environment model building and robot localization. It puts forth an approach to building maps and localizing a mobile robot efficiently in unknown, unstructured and moderately dynamic environments. For this, the environment is modeled on an information-theoretic basis, more specifically in terms of its transmission properties. The presented environment model, which does not adhere to classical geometric modeling, thereby succeeds in disambiguating the environment effectively. The proposed solution lays out a two-level hierarchical structure for localization. The structure makes use of extracted features, which are stored at two different resolutions in a single hybrid feature map, enabling dual coarse-topological and fine-geometric localization modalities. The first level of the hierarchy describes the environment topologically, where a defined set of places is described by a probabilistic feature representation. A conditional-entropy-based criterion is proposed to quantify the transinformation between the feature and place domains. This criterion provides the double benefit of pruning the high-dimensional feature space while selecting the most discriminative features, which overcomes environment aliasing problems. The features with the highest transinformation are filtered and compressed to form a coarse-resolution feature map (codebook). Localization at this level is conducted through place matching. At the second level of the hierarchy, the map is viewed at high resolution, as consisting of non-compressed, entropy-processed features. These features are additionally tagged with their position information. Given the topological place identified by the first level, fine localization at the second level is executed using feature triangulation. To enhance the triangulation accuracy, redundant features are used and two metric evaluation criteria are employed: one for detecting dynamic features and mismatches, and another for feature selection. The proposed approach and methods have been tested in realistic indoor environments using a vision sensor and Scale-Invariant Feature Transform (SIFT) local feature extraction. The experiments demonstrate that an information-theoretic modeling approach is highly efficient in attaining both accuracy and computational efficiency in localization. They also show that the approach is capable of modeling environments with a high degree of unstructuredness, perceptual aliasing, and dynamic variation (illumination conditions, scene dynamics). The merit of this type of modeling is that environment features are evaluated quantitatively, while qualitative conclusions are simultaneously drawn about feature selection and performance in a robot localization task. In this way, the accuracy of localization can be adapted to the available resources. The experimental results also show that the hybrid topological-metric map provides sufficient information to localize a mobile robot at two scales, independently of the robot motion model. The codebook enables fast and accurate topological localization at significant compression ratios. The hierarchical localization framework demonstrates robustness and optimized space and time complexity, which in turn provides scalability to large environments and suitability for real-time deployment.
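    The transinformation-based pruning can be approximated with a short sketch: score each feature by its contribution to the mutual information between the feature and place domains, then keep the top-scoring features as the coarse codebook. This is a simplified stand-in for the conditional-entropy criterion described above; feature_counts is an assumed feature-by-place co-occurrence table.

        import numpy as np

        def transinformation_scores(feature_counts):
            """feature_counts[f, p] = occurrences of feature f in place p.
            Returns each feature's contribution to I(feature; place) in bits."""
            joint = feature_counts / feature_counts.sum()
            pf = joint.sum(axis=1, keepdims=True)   # P(feature)
            pp = joint.sum(axis=0, keepdims=True)   # P(place)
            with np.errstate(divide="ignore", invalid="ignore"):
                terms = joint * np.log2(joint / (pf * pp))
            return np.nansum(terms, axis=1)

        def build_codebook(feature_counts, k=100):
            """Keep the k most place-discriminative features."""
            return np.argsort(transinformation_scores(feature_counts))[::-1][:k]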