5 research outputs found

    Side Channel Leakage Analysis - Detection, Exploitation and Quantification

    Nearly twenty years ago, the discovery of side channel attacks warned the world that security is more than a mathematical problem: serious consideration must be given to the implementation and its physical medium. Today the growth of ubiquitous computing calls for security solutions that keep pace. Although physical security has attracted increasing public attention, side channel security remains far from completely solved. One important problem is how much expertise a side channel adversary requires; the essential question is whether detailed knowledge of the implementation and the leakage model is indispensable for a successful side channel attack. If such knowledge is not a prerequisite, attacks can be mounted even by inexperienced adversaries, and the threat from physical observables may be underestimated. Another urgent problem is how to secure a cryptographic system in the presence of unavoidable leakage. Although many countermeasures have been developed, their effectiveness awaits empirical verification, and side channel security needs to be evaluated systematically. The research in this dissertation focuses on two topics, leakage-model-independent side channel analysis and security evaluation, described from three perspectives: leakage detection, exploitation and quantification. To free side channel analysis from the complicated procedure of leakage modeling, an observation-to-observation comparison approach is proposed. Several attacks presented in this work follow this approach and exhibit efficient leakage detection and exploitation across various leakage models and implementations; crucially, they neither rely on nor require precise leakage modeling. For security evaluation, a weak maximum likelihood approach is proposed. It quantifies the loss of full-key security due to the presence of side channel leakage. A constructive algorithm is developed following this approach; it can be used by a security lab to measure leakage resilience, or by a side channel adversary to determine whether limited side channel information suffices for full key recovery at affordable expense.
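
    The dissertation's observation-to-observation comparison is not detailed in this abstract; as a minimal, generic sketch of leakage-model-free detection, the fixed-versus-random Welch's t-test below flags trace samples whose distribution depends on the processed data. The trace shapes, the threshold and the leaking sample index are hypothetical.

```python
# Generic model-free leakage detection sketch (fixed-vs-random Welch's t-test);
# an illustration only, not the dissertation's observation-to-observation method.
import numpy as np
from scipy import stats

def detect_leakage(traces_fixed, traces_random, threshold=4.5):
    """traces_*: arrays of shape (n_traces, n_samples).
    Returns indices of samples whose |t| statistic exceeds the threshold."""
    t, _ = stats.ttest_ind(traces_fixed, traces_random, axis=0, equal_var=False)
    return np.flatnonzero(np.abs(t) > threshold)

# Hypothetical simulated traces: sample index 100 leaks for the fixed input set.
rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, (1000, 500))
fixed[:, 100] += 0.5
random_set = rng.normal(0.0, 1.0, (1000, 500))
print(detect_leakage(fixed, random_set))  # expected to report index 100
```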

    Towards more Effective Censorship Resistance Systems

    Internet censorship resistance systems (CRSs) have so far been designed in an ad hoc manner: the fundamentals are unclear and the foundations are shaky, and censors are increasingly able to take advantage of this situation. Future censorship resistance systems ought to be built on strong theoretical underpinnings and empirical evidence. Our approach is to systematize the CRS field and its players. Informed by this systematization, we develop frameworks of broad scope, from which we gain general insight as well as answers to specific questions. We develop theoretical and simulation-based analysis tools 1) for learning how to manipulate censor behavior using game-theoretic tactics, 2) for learning about CRS-client activity levels on CRS networks, and 3) for evaluating security parameters in CRS designs. We find gaps in the CRS designer's arsenal: certain censor attacks go unmitigated and the dynamics of the censorship arms race are not modeled. Our game-theoretic analysis highlights how managing the base rate of CRS traffic can produce stable equilibria in which the censor allows some amount of CRS communication to occur. We design and deploy a privacy-preserving data gathering tool, and use it to collect statistics that help answer questions about the prevalence of CRS-related traffic in actual CRS communication networks. Finally, our security evaluation of a popular CRS exposes suboptimal settings, which have since been optimized according to our recommendations. All of these contributions support the thesis that more formal and empirically driven CRS designs can achieve better outcomes than the current state of the art.
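
    To make the base-rate argument concrete, the toy calculation below (hypothetical parameters, not the thesis's game model) shows how a censor's expected payoff for blocking flagged flows turns negative once CRS traffic is rare enough relative to the cost of false positives, so allowing some CRS communication becomes the censor's best response.

```python
# Toy base-rate calculation: a censor blocks detector-flagged flows only if the
# expected gain from blocking CRS flows outweighs the collateral cost of false
# positives. All names and values are hypothetical illustrations.

def censor_blocks(base_rate, tpr=0.95, fpr=0.01,
                  gain_block_crs=1.0, cost_block_benign=5.0):
    """Return True if blocking every flagged flow has positive expected payoff."""
    p_flag = base_rate * tpr + (1 - base_rate) * fpr
    p_crs_given_flag = base_rate * tpr / p_flag  # Bayes' rule
    payoff = (p_crs_given_flag * gain_block_crs
              - (1 - p_crs_given_flag) * cost_block_benign)
    return payoff > 0

# Sweeping the base rate shows a threshold below which blocking is not worth it.
for r in (0.0001, 0.001, 0.01, 0.1):
    print(f"base rate {r:>7}: censor blocks = {censor_blocks(r)}")
```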

    Systematic Construction and Comprehensive Evaluation of Kolmogorov-Smirnov Test Based Side-Channel Distinguishers

    Generic side-channel distinguishers aim to reveal the correct key embedded in a cryptographic module even when few assumptions can be made about its physical leakage. In this context, Kolmogorov-Smirnov Analysis (KSA) and Partial Kolmogorov-Smirnov analysis (PKS) were proposed. Although both KSA and PKS are based on the Kolmogorov-Smirnov (KS) test, they differ considerably in their construction strategies. Motivated by this, we construct nine new variants by systematically combining their strategies. We then explore the effectiveness and efficiency of all twelve KS test based distinguishers under various simulated scenarios in a univariate setting, within a unified comparison framework, and also investigate how these distinguishers behave in practice. To this end, we perform a series of attacks against both simulated and real traces. Evaluation metrics such as Success Rate (SR) and Guessing Entropy (GE) are used to measure the efficiency of the key recovery attacks. Our experimental results not only show how to choose the most suitable KS test based distinguisher in a particular scenario, but also clarify the practical meaning of all of these distinguishers.
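
    As a rough single-sample illustration of how a KS test based distinguisher can be built (in the spirit of KSA, not a reproduction of any of the twelve variants evaluated here), the sketch below partitions leakage observations by a predicted intermediate value for each key guess and scores the guess by the weighted KS distance between each conditional distribution and the global one. The function names and the Hamming-weight prediction are illustrative assumptions.

```python
# Sketch of a generic KS-test-based distinguisher; not the paper's exact constructions.
import numpy as np
from scipy.stats import ks_2samp

def hamming_weight(x):
    return bin(x).count("1")

def ks_distinguisher(leakage, plaintexts, sbox):
    """leakage: 1-D array of single-sample leakage values.
    plaintexts: matching array of plaintext bytes.
    sbox: 256-entry substitution table (e.g. the AES S-box), supplied by the caller.
    Returns the key-byte guess whose partitions deviate most from the global distribution."""
    leakage = np.asarray(leakage, dtype=float)
    scores = np.zeros(256)
    for k in range(256):
        preds = np.array([hamming_weight(sbox[p ^ k]) for p in plaintexts])
        for v in np.unique(preds):
            part = leakage[preds == v]
            if len(part) > 1:
                # weighted KS distance between conditional and global leakage
                scores[k] += len(part) / len(leakage) * ks_2samp(part, leakage).statistic
    return int(np.argmax(scores))
```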

    Systematic Construction and Comprehensive Evaluation of Kolmogorov-Smirnov Test Based Side-Channel Distinguishers

    Generic side-channel distinguishers aim to reveal the correct key embedded in a cryptographic module even when few assumptions can be made about its physical leakage. In this context, Kolmogorov-Smirnov Analysis (KSA) and Partial Kolmogorov-Smirnov analysis (PKS) were proposed. Although both KSA and PKS are based on the Kolmogorov-Smirnov (KS) test, they differ considerably in their construction strategies. Motivated by this, we construct nine new variants by systematically combining their strategies. We then explore the effectiveness and efficiency of all twelve KS test based distinguishers under various simulated scenarios in a univariate setting, within a unified comparison framework, and also investigate how these distinguishers behave in practice. To this end, we perform a series of attacks against both simulated and real traces. Success Rate (SR) is used to measure the efficiency of the key recovery attacks. Our experimental results not only show how to choose the most suitable KS test based distinguisher in a particular scenario, but also clarify the practical meaning of all of these distinguishers.
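
    The Success Rate and Guessing Entropy metrics named in these evaluations have standard definitions; the short sketch below (not code from the paper) computes both from a matrix of distinguisher scores collected over independent attack experiments.

```python
# Standard side-channel evaluation metrics computed from per-experiment
# distinguisher scores; the matrix layout is an assumption for illustration.
import numpy as np

def success_rate(scores, correct_key):
    """scores: array of shape (n_experiments, n_key_guesses).
    Fraction of experiments in which the correct key is ranked first."""
    return float(np.mean(np.argmax(scores, axis=1) == correct_key))

def guessing_entropy(scores, correct_key):
    """Average rank (1 = best) of the correct key over all experiments."""
    ranks = np.argsort(-scores, axis=1).argsort(axis=1)[:, correct_key] + 1
    return float(ranks.mean())
```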

    Comparing the efficacy of different web page interface attributes in facilitating information retrieval for people with mild Learning Disabilities

    This research aimed to determine which web page attributes facilitate optimal website design for use by learning-disabled people – a topic hitherto rarely addressed. Qualitative research developed methods appropriate for this cohort, determined attributes that impact usability and explored ways of eliciting preferences. The attributes related to menu position, text size and images, which were then examined quantitatively by comparing web pages with different layouts. Task times were analysed to determine which attributes have the greatest impact on performance. The main predictor of task time was menu position, followed by text size; images did not affect performance. The study also found that learning-disabled people have only ‘serial access’ to information when searching individual pages: content is taken in sequentially until the required item is reached. Words on the left of horizontal menus were found more quickly than those in the middle or on the right. Information access took longer from vertical menus, possibly because of the juxtaposition of distracting body text. Images were ignored until reached ‘serially’, and thus did not help signpost content. Small text was read more quickly than large text, as the latter took up more lines and required more eye movements to negotiate. A three-category rating scale and simple interviews elicited web design preferences. The ‘neutral’ category proved troublesome, so a refined four-category scale without this mid-point was adopted, which yielded a greater variety of results. In eliciting preferences verbally, ‘acquiescence bias’ was minimised by avoiding polar interrogatives, partly achieved by comparing different designs. The preferred designs used large text and images – the reverse of those giving the fastest retrieval times, a discrepancy due to preferences being judged on aesthetic grounds. Design recommendations are offered which reconcile the preference and performance findings, including using a horizontal menu, juxtaposing images and text, and reducing text from sentences to phrases – facilitating the preferred large text without increasing task times.
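
    As a hedged illustration of how attribute effects on task time could be compared (a standard ordinary least squares fit on hypothetical trial data, not the study's actual analysis), the sketch below regresses task time on menu position, text size and image presence; the file name and column names are assumptions.

```python
# Hypothetical task-time data: one row per trial with columns
#   task_time (seconds), menu ('horizontal'/'vertical'),
#   text ('small'/'large'), images ('present'/'absent')
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("task_times.csv")  # hypothetical file name

# Categorical OLS: coefficient magnitudes indicate each attribute's impact on task time.
model = smf.ols("task_time ~ C(menu) + C(text) + C(images)", data=df).fit()
print(model.summary())
```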