
    Enhancing Security in Internet of Healthcare Application using Secure Convolutional Neural Network

    The ubiquity of Internet of Things (IoT) devices has transformed the healthcare industry by opening unprecedented opportunities for remote patient monitoring and individualized care. In this regard, we propose a method that uses Secure Convolutional Neural Networks (SCNNs) to improve security in Internet-of-Healthcare (IoH) applications. The integration of IoT technologies has advanced IoT-enabled healthcare, giving it substantial data-processing power and large data-storage capacity. Together with the ongoing advancement of the Industrial Internet of Things (IIoT), this synergy has led to an intelligent healthcare system designed to remotely monitor a patient's medical well-being via a wearable device. This paper focuses on safeguarding user privacy and easing data analysis. Sensitive data is carefully separated from user-generated data before collection. Convolutional neural network (CNN) technology is used to analyse health-related data thoroughly in the cloud while scrupulously protecting consumer privacy. The paper also provides a secure access control module that operates on user attributes within the IoT-Healthcare system to strengthen security. This module strengthens the system's overall security and privacy by ensuring that only authorised personnel may access and interact with sensitive health data. Thanks to this integrated architecture, the IoT-enabled healthcare system can offer seamless remote monitoring while ensuring the confidentiality and integrity of user information.
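
    As a rough illustration of the attribute-based access control idea mentioned above, the sketch below grants access only when a user's attributes satisfy a record's policy; the attribute names and policy structure are hypothetical and not taken from the paper.

```python
# Minimal sketch of an attribute-based access check, assuming a simple policy
# model; attribute names ("role", "ward") and the Policy class are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Each required attribute maps to the set of values that are allowed.
    required: dict = field(default_factory=dict)

    def permits(self, user_attributes: dict) -> bool:
        # Grant access only if every required attribute is present and allowed.
        return all(
            user_attributes.get(attr) in allowed
            for attr, allowed in self.required.items()
        )

# Example policy: only clinicians assigned to the cardiology ward may read the record.
record_policy = Policy(required={"role": {"clinician"}, "ward": {"cardiology"}})

print(record_policy.permits({"role": "clinician", "ward": "cardiology"}))  # True
print(record_policy.permits({"role": "admin", "ward": "cardiology"}))      # False
```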

    What does explainable AI explain?

    Machine Learning (ML) models are increasingly used in industry, as well as in scientific research and social contexts. Unfortunately, ML models provide only partial solutions to real-world problems, focusing on predictive performance in static environments. Problem aspects beyond prediction, such as robustness in employment, knowledge generation in science, or providing recourse recommendations to end-users, cannot be directly tackled with ML models. Explainable Artificial Intelligence (XAI) aims to solve, or at least highlight, problem aspects beyond predictive performance through explanations. However, the field is still in its infancy, as fundamental questions such as “What are explanations?”, “What constitutes a good explanation?”, or “How do explanation and understanding relate?” remain open. In this dissertation, I combine philosophical conceptual analysis and mathematical formalization to clarify a prerequisite of these difficult questions, namely what XAI explains: I point out that XAI explanations are either associative or causal and either aim to explain the ML model or the modeled phenomenon. The thesis is a collection of five individual research papers that all aim to clarify how different problems in XAI are related to these different “whats”. In Paper I, my co-authors and I illustrate how to construct XAI methods for inferring associational phenomenon relationships. Paper II directly relates to the first; we formally show how to quantify the uncertainty of such scientific inferences for two XAI methods – partial dependence plots (PDP) and permutation feature importance (PFI). Paper III discusses the relationship between counterfactual explanations and adversarial examples; I argue that adversarial examples can be described as counterfactual explanations that alter the prediction but not the underlying target variable. In Paper IV, my co-authors and I argue that algorithmic recourse recommendations should help data-subjects improve their qualifications rather than game the predictor. In Paper V, we address general problems with model-agnostic XAI methods and identify possible solutions.
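
    To make one of the methods discussed in Paper II concrete, the sketch below computes permutation feature importance (PFI): a feature's importance is the drop in test performance when that feature's column is randomly shuffled. The dataset, model, and metric are placeholders, not those used in the dissertation.

```python
# Minimal sketch of permutation feature importance (PFI); dataset, model, and
# metric are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
baseline = r2_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature-target association
    score = r2_score(y_test, model.predict(X_perm))
    importances.append(baseline - score)           # performance drop = importance

print(np.round(importances, 3))
```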

    A Review of the Role of Causality in Developing Trustworthy AI Systems

    State-of-the-art AI models largely lack an understanding of the cause-effect relationships that govern human understanding of the real world. Consequently, these models do not generalize to unseen data, often produce unfair results, and are difficult to interpret. This has led to efforts to improve the trustworthiness aspects of AI models. Recently, causal modeling and inference methods have emerged as powerful tools. This review aims to provide the reader with an overview of causal methods that have been developed to improve the trustworthiness of AI models. We hope that our contribution will motivate future research on causality-based solutions for trustworthy AI. Comment: 55 pages, 8 figures. Under review.

    Modern biostatistical contributions to therapy studies of vertigo syndromes with diary-based attack data


    Not a Free Lunch but a Cheap Lunch: Experimental Results for Training Many Neural Nets Efficiently

    Neural networks have become a much-studied approach in the recent literature on profiled side-channel attacks: many articles examine their use and performance in profiled single-target DPA-style attacks. In this setting a single neural net is tweaked and tuned based on a training data set. The effort for this is considerable, as there are many hyper-parameters that need to be adjusted. A straightforward, but impractical, extension of such an approach to multi-target DPA-style attacks requires deriving and tuning a network architecture for each individual target. Our contribution is to provide the first practical and efficient strategy for training many neural nets in the context of a multi-target attack. We show how to configure a network with a set of hyper-parameters for a specific intermediate (SubBytes) that generalises well to capture the leakage of other intermediates as well. This is interesting because although we cannot beat the no free lunch theorem (i.e. we find that different profiling methods excel on different intermediates), we can still get “good value for money” (i.e. good classification results across many intermediates with reasonable profiling effort).
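
    A minimal sketch of this “tune once, reuse everywhere” strategy might look as follows: a single hyper-parameter configuration (tuned, say, on SubBytes traces) is reused to train a separate profiling classifier for each intermediate. The traces are synthetic and the architecture is illustrative, not the one selected in the paper.

```python
# Minimal sketch: one shared hyper-parameter set, one profiling model per
# intermediate. Traces are synthetic; target names and the MLP are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 100

def synthetic_traces(values):
    """Fake power traces whose samples leak the Hamming weight of the intermediate."""
    hw = np.array([bin(v).count("1") for v in values], dtype=float)
    return rng.normal(0.0, 1.0, (len(values), n_samples)) + 0.3 * hw[:, None]

# One hyper-parameter configuration, tuned once (e.g. on SubBytes), reused for every target.
shared_config = dict(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)

models = {}
for target in ["sbox_out", "addroundkey_out", "round2_state"]:    # hypothetical intermediates
    values = rng.integers(0, 256, n_traces)
    hw_labels = np.array([bin(v).count("1") for v in values])     # profile Hamming-weight classes
    traces = synthetic_traces(values)
    clf = MLPClassifier(**shared_config).fit(traces, hw_labels)   # same hyper-parameters each time
    models[target] = clf
    print(target, "training accuracy:", round(clf.score(traces, hw_labels), 3))
```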

    On the Worst-Case Side-Channel Security of ECC Point Randomization in Embedded Devices

    Point randomization is an important countermeasure to protect Elliptic Curve Cryptography (ECC) implementations against side-channel attacks. In this paper, we revisit its worst-case security against advanced side-channel adversaries who take advantage of analytical techniques in order to exploit all the leakage samples of an implementation. Our main contributions in this respect are the following: first, we show that due to the nature of the attacks against the point randomization (which can be viewed as Simple Power Analyses), the gain of using analytical techniques over simpler divide-and-conquer attacks is limited. Second, we take advantage of this observation to evaluate the theoretical noise levels necessary for the point randomization to provide strong security guarantees and compare different elliptic curve coordinate systems. Then, we turn this simulated analysis into actual experiments and show that reasonable security levels can be achieved by implementations even on low-cost (e.g. 8-bit) embedded devices. Finally, we are able to bound the security on 32-bit devices against worst-case adversaries.
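
    The countermeasure under study can be sketched as follows: an affine point (x, y) is re-encoded in projective coordinates (X : Y : Z) = (λx, λy, λ) with a fresh random λ before each scalar multiplication, so repeated operations on the same point produce different internal values. The toy field and coordinates below are purely illustrative; the paper compares several coordinate systems and does not prescribe this exact code.

```python
# Minimal sketch of point randomization via projective coordinate blinding.
import secrets

# Toy prime field; a real implementation would use the field of a standard curve (e.g. P-256).
p = 2**255 - 19

def randomize_point(x, y):
    """Re-encode an affine point (x, y) as randomized standard projective
    coordinates (X : Y : Z) = (lam*x, lam*y, lam) for a fresh nonzero lam.
    The represented point is unchanged because (X/Z, Y/Z) == (x, y)."""
    lam = 0
    while lam == 0:
        lam = secrets.randbelow(p)
    return (lam * x % p, lam * y % p, lam)

def to_affine(X, Y, Z):
    Zinv = pow(Z, -1, p)                      # modular inverse (Python 3.8+)
    return (X * Zinv % p, Y * Zinv % p)

x, y = 123456789, 987654321                   # placeholder coordinates, not a real curve point
r1 = randomize_point(x, y)
r2 = randomize_point(x, y)
print(r1 != r2)                                # distinct internal representations (w.h.p.)
print(to_affine(*r1) == (x, y) == to_affine(*r2))  # both decode to the same affine point
```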

    Exploitation of Unintentional Information Leakage from Integrated Circuits

    Unintentional electromagnetic emissions are used to recognize or verify the identity of a unique integrated circuit (IC) based on fabrication process-induced variations, in a manner analogous to biometric human identification. The effectiveness of the technique is demonstrated through an extensive empirical study, with results indicating correct device identification success rates of greater than 99.5%, and average verification equal error rates (EERs) of less than 0.05% for 40 near-identical devices. The proposed approach is suitable for security applications involving commodity commercial ICs, with substantial cost and scalability advantages over existing approaches. A systematic leakage mapping methodology is also proposed to comprehensively assess the information leakage of arbitrary block cipher implementations, and to quantitatively bound an arbitrary implementation's resistance to the general class of differential side-channel analysis techniques. The framework is demonstrated using the well-known Hamming Weight and Hamming Distance leakage models, and the approach's effectiveness is shown through the empirical assessment of two typical unprotected implementations of the Advanced Encryption Standard. The assessment results are empirically validated against correlation-based differential power and electromagnetic analysis attacks.
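
    As a concrete example of the attack class the assessment is validated against, the sketch below runs a correlation-based analysis with a Hamming-weight leakage model on synthetic single-sample traces; for simplicity the targeted intermediate is plaintext XOR key rather than the AES S-box output a real attack would normally use.

```python
# Minimal sketch of correlation power analysis (CPA) with a Hamming-weight model.
# Traces are synthetic and the intermediate is simplified to plaintext XOR key.
import numpy as np

rng = np.random.default_rng(1)
HW = np.array([bin(v).count("1") for v in range(256)])

true_key = 0x5A
n_traces = 500
plaintexts = rng.integers(0, 256, n_traces)

# One leaking sample per trace: Hamming weight of the intermediate plus noise.
leak = HW[plaintexts ^ true_key] + rng.normal(0, 0.5, n_traces)

# For every key guess, correlate the predicted Hamming weights with the measurements;
# with a Hamming-weight power model the correct guess shows the largest positive correlation.
scores = [np.corrcoef(HW[plaintexts ^ guess], leak)[0, 1] for guess in range(256)]

best = int(np.argmax(scores))
print(f"recovered key byte: {best:#04x} (true key: {true_key:#04x})")
```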

    Adversarial machine learning in IoT from an insider point of view

    With the rapid progress and significant successes in various applications, machine learning has been considered a crucial component of the Internet of Things ecosystem. However, machine learning models have recently been shown to be vulnerable to carefully crafted perturbations, so-called adversarial attacks. A capable insider adversary can subvert a machine learning model at either the training or testing phase, causing it to behave differently. The vulnerability of machine learning to adversarial attacks has become one of the most significant risks. Therefore, there is a need to secure machine learning models, enabling their safe adoption even in the presence of malicious insiders. This paper reviews and organizes the body of knowledge on adversarial attacks and defenses presented in the IoT literature from an insider adversary point of view. We propose a taxonomy of adversarial methods against machine learning models that an insider can exploit. Under the taxonomy, we discuss how these methods can be applied in real-life IoT applications. Finally, we explore defensive methods against adversarial attacks. We believe this provides a comprehensive overview of the scattered research works, raises awareness of the existing insider threat landscape, and encourages others to safeguard machine learning models against insider threats in the IoT ecosystem.
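
    As a small illustration of an evasion-style adversarial perturbation (one family such a taxonomy would cover), the sketch below applies an FGSM-like step to a logistic-regression model, where the input gradient can be written out by hand; the data and model are synthetic stand-ins, not drawn from the surveyed IoT work.

```python
# Minimal sketch of a test-time (evasion) adversarial perturbation in the spirit
# of FGSM, applied to logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 10))
w_true = rng.normal(0, 1, 10)
y = (X @ w_true > 0).astype(int)

clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def fgsm_step(x, label, eps):
    """Perturb x in the direction that increases the cross-entropy loss.
    For logistic regression, d(loss)/dx = (sigmoid(w.x + b) - label) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - label) * w
    return x + eps * np.sign(grad)

x0, y0 = X[0], y[0]
x_adv = fgsm_step(x0, y0, eps=0.5)   # a large enough step often flips the prediction
print("clean prediction:      ", clf.predict(x0.reshape(1, -1))[0], "(true label:", y0, ")")
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```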