
    Side Channel Attacks on IoT Applications


    Medicine in Ancient Assur

    In Medicine in Ancient Assur, Troels Pank Arbøll offers a microhistorical study of a single exorcist named Kiṣir-Aššur, who practiced medical and magical healing in the ancient city of Assur (in modern northern Iraq) in the 7th century BCE. The book provides the first detailed analysis of a healer's education and practice in ancient Mesopotamia, based on at least 73 texts assigned to specific stages of his career. By drawing on a microhistorical framework, the study aims to significantly improve our understanding of the functional aspects of texts in their specialist environment. Furthermore, the work situates Kiṣir-Aššur as one of the earliest healers in world history for whom such details about his career survive from his own time. Readership: suited for everyone interested in ancient Near Eastern magico-medical texts and practices, ancient libraries, and the training of specialists, as well as anyone concerned with the history of ancient medicine.

    Pushing the Limit of Non-Profiling DPA using Multivariate Leakage Model

    Profiling power attacks such as the Template Attack and the Stochastic Attack optimize their performance by jointly evaluating the leakages of multiple sample points. However, such multivariate approaches are rare among non-profiling Differential Power Analysis (DPA) attacks, since combining the leakage of a higher-SNR sample point with that of a lower-SNR sample point may decrease the overall performance. One of the few successful multivariate approaches is the application of Principal Component Analysis (PCA) to non-profiling DPA. However, PCA also performs sub-optimally in the presence of high noise. In this paper, a multivariate leakage model for an FPGA platform is introduced to improve the performance of non-profiling DPA attacks. The proposed model greatly increases the success rate of DPA attacks in the presence of high noise. Experimental results on both simulated and real power traces are provided as evidence.

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy, since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries about disclosure of intellectual property or trade secrets. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR, related (i) to the right to erasure (“right to be forgotten”) and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may contain the seeds we can use to make algorithms more responsible, explicable, and human-centered.

    Empire Fighting Chance: boxing-based mentoring: feasibility and pilot trial report

    What does this project involve? Empire Fighting Chance (EFC) aims to use non-contact boxing programmes, accompanied by personal development support, to reduce anti-social and criminal behaviour among at-risk young people. Their programmes combine physical activity sessions with one-to-one or group mentoring support, where coaches encourage children to work on personal development points designed to improve behaviour. Why did YEF fund this project? As the YEF’s toolkit explains, sports programmes are associated with a high average impact on reducing serious youth violence and crime. However, there are considerable gaps in the evidence, particularly relating to robust evaluations conducted in an English or Welsh context. YEF therefore funded a feasibility and pilot evaluation of EFC’s programmes. The feasibility study examined several of EFC’s interventions. It aimed to ascertain whether these programmes achieved their intended outputs for their intended target groups, explore the barriers and facilitators to delivery, detail how much of the interventions young people received, and assess quality, responsiveness, and reach. To explore these questions, programme monitoring data on 831 participants and an online satisfaction survey completed by 204 young people were analysed. Interviews were also conducted with 10 project staff and 6 participants and their parents. The programmes targeted 10–14-year-olds at risk of involvement in crime and anti-social behaviour, and the feasibility study ran from November 2019 to June 2021. The pilot study then evaluated a new, school-based boxing mentoring programme, which combined elements of the EFC programmes examined by the feasibility study. This new programme aimed to deliver a 12-week mentoring intervention in schools, in which weekly physical activities (including skipping, circuit training, punch pads and boxing techniques) were delivered by an EFC coach. While leading these sessions, the coach would discuss ‘Personal Development Points’ with children (such as the importance of regulating mood, eating well, and taking responsibility for one’s actions). The programme targeted pupils in Years 8 and 9 who had demonstrated behavioural difficulties, poor attendance, and an interest in sport. The pilot evaluation aimed to assess how feasible an efficacy randomised controlled trial of the programme would be, inform the design of a future evaluation, and assess whether there is any preliminary evidence of promise. To explore these questions, the evaluator analysed quantitative project delivery data, administered questionnaires featuring validated measures (such as the Strengths and Difficulties Questionnaire (SDQ) and the Problem Behaviour Frequency Scale (PBFS)), and interviewed 17 pupils, five project staff and six teachers. Of the 91 children in the pilot study, 64% identified as White, 13% as Black, 11% as Mixed Ethnicity and 9% as Asian. The pilot commenced in September 2021 and concluded in June 2022. Both the feasibility and pilot studies took place during the coronavirus pandemic, requiring both the delivery and evaluation teams to adapt to challenging circumstances.

    From Improved Leakage Detection to the Detection of Points of Interests in Leakage Traces

    Leakage detection usually refers to the task of identifying data-dependent information in side-channel measurements, independent of whether this information can be exploited. Detecting Points-of-Interest (POIs) in leakage traces is a complementary task that is a necessary first step in most side-channel attacks, where the adversary wants to turn this information into (e.g.) a key recovery. In this paper, we discuss the differences between these tasks by investigating a popular solution to leakage detection based on a t-test, and an alternative method exploiting Pearson's correlation coefficient. We first show that the simpler t-test has better sampling complexity, and that its gain over the correlation-based test can be predicted by looking at the Signal-to-Noise Ratio (SNR) of the leakage partitions used in these tests. This implies that the sampling complexity of both tests relates more to their implicit leakage assumptions than to the actual statistics exploited. We also put forward that this gain comes at the cost of some intuition loss regarding the localization of the exploitable leakage samples in the traces, and their informativeness. Next, and more importantly, we highlight that our reasoning based on the SNR allows defining an improved t-test with significantly faster detection speed (requiring approximately 5 times fewer measurements in our experiments), which is therefore highly relevant for evaluation laboratories. We finally conclude that whereas t-tests are the method of choice for leakage detection only, correlation-based tests exploiting larger partitions are preferable for detecting POIs. We confirm this intuition by improving automated tools for the detection of POIs in the leakage measurements of a masked implementation, in a black-box manner and without key knowledge, thanks to a correlation-based leakage detection test.
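    The two detection statistics the abstract compares can be sketched in their textbook forms. This is a generic illustration (a fixed-vs-random Welch's t-test and a Pearson-correlation test against a known leakage partition), not the paper's improved variants; the function names and threshold convention are assumptions of the example.

```python
import numpy as np

def welch_t(traces_a, traces_b):
    """Per-sample Welch's t-statistic for fixed-vs-random leakage
    detection; |t| > 4.5 is the commonly used TVLA-style threshold."""
    ma, mb = traces_a.mean(axis=0), traces_b.mean(axis=0)
    va, vb = traces_a.var(axis=0, ddof=1), traces_b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(traces_a) + vb / len(traces_b))

def pearson_detect(traces, values):
    """Per-sample Pearson correlation between traces and a known
    intermediate-value partition; peaks localize exploitable POIs."""
    t = traces - traces.mean(axis=0)
    v = values - values.mean()
    return (t.T @ v) / np.sqrt((t**2).sum(axis=0) * (v**2).sum())
```

    On a simulated trace set with one data-dependent sample, both statistics peak at the leaky point; the paper's contribution is precisely the analysis of when each converges faster and which one better localizes exploitable samples.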

    Side Channel Leakage Analysis - Detection, Exploitation and Quantification

    Nearly twenty years ago, the discovery of side channel attacks warned the world that security is more than just a mathematical problem: serious consideration must be given to the implementation and its physical media. Today, the ever-growing ubiquity of computing calls for security solutions that keep pace. Although physical security has attracted increasing public attention, side channel security remains a problem that is far from completely solved. An important question is how much expertise a side channel adversary requires. The essential interest is to explore whether detailed knowledge of the implementation and the leakage model is indispensable for a successful side channel attack. If such knowledge is not a prerequisite, attacks can be mounted even by inexperienced adversaries, and the threat from physical observables may be underestimated. Another urgent problem is how to secure a cryptographic system in the presence of unavoidable leakage. Although many countermeasures have been developed, their effectiveness awaits empirical verification, and side channel security needs to be evaluated systematically. The research in this dissertation focuses on two topics, leakage-model-independent side channel analysis and security evaluation, which are described from three perspectives: leakage detection, exploitation and quantification. To free side channel analysis from the complicated procedure of leakage modeling, an observation-to-observation comparison approach is proposed. Several attacks presented in this work follow this approach; they exhibit efficient leakage detection and exploitation under various leakage models and implementations, and, more importantly, no longer rely on or even require precise leakage modeling. For the security evaluation, a weak maximum likelihood approach is proposed. It provides a quantification of the loss of full-key security due to the presence of side channel leakage. A constructive algorithm is developed following this approach. The algorithm can be used by a security lab to measure leakage resilience, or by a side channel adversary to determine whether limited side channel information suffices for full key recovery at affordable expense.

    Towards practical tools for side channel aware software engineering: 'grey box' modelling for instruction leakages

    Power (along with EM, cache and timing) leaks are of considerable concern for developers who have to deal with cryptographic components as part of their overall software implementation, in particular in the context of embedded devices. Whilst some compiler tools exist to detect timing leaks, similar progress towards pinpointing power and EM leaks has been hampered by limits on the amount of information available about the physical components from which such leaks originate. We suggest a novel modelling technique capable of producing high-quality instruction-level power (and/or EM) models without requiring a detailed hardware description of a processor or information about the process technology used (access to both of which is typically restricted). We show that our methodology is effective at capturing differential data-dependent effects as neighbouring instructions in a sequence vary. We also explore register effects, and verify our models across several measurement boards to comment on board effects and portability. We confirm the technique's versatility by demonstrating it on two processors (the ARM Cortex-M0 and M4), and use the M0 models to develop ELMO, the first leakage simulator for the ARM Cortex-M0.
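    The basic 'grey box' idea of fitting an instruction-level power model from measurements alone can be illustrated with a linear regression of observed power on operand bits. This is a toy sketch under invented names, not ELMO itself (which also models differential effects between neighbouring instructions and register effects):

```python
import numpy as np

def fit_instruction_model(operands, power, n_bits=8):
    """Fit a per-instruction linear leakage model: measured power is
    regressed on the individual operand bits plus an intercept."""
    X = np.column_stack([np.ones(len(operands))] +
                        [(operands >> b) & 1 for b in range(n_bits)])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coef  # [intercept, weight of bit 0, ..., weight of bit n-1]

def predict_power(coef, operand, n_bits=8):
    """Predict the power sample for one operand value under the model."""
    bits = np.array([1.0] + [(operand >> b) & 1 for b in range(n_bits)])
    return float(bits @ coef)
```

    Given enough (operand, power) pairs per instruction, the fitted coefficients recover the per-bit leakage weights without any netlist or process information, which is the sense in which the approach is 'grey box'.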