
    Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

    As the will to deploy neural network models on embedded systems grows, and considering the related memory footprint and energy consumption constraints, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics. In parallel, adversarial machine learning has recently attracted significant attention, unveiling critical flaws of machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image classification task. We show that quantization does not offer any robust protection, that it results in a severe form of gradient masking, and we advance some hypotheses to explain it. However, we experimentally observe poor transferability capacities, which we explain by the quantization value shift phenomenon and gradient misalignment, and we explore how these results can be exploited with an ensemble-based defense.
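    The transferability evaluation can be illustrated with a minimal sketch, assuming a toy linear classifier, a 4-bit uniform weight quantizer and FGSM-style perturbations (none of which come from the paper): adversarial inputs are crafted on the full-precision model and then replayed against its quantized copy.

```python
# Minimal sketch (not the paper's code): how FGSM adversarial examples crafted
# on a float model transfer to a weight-quantized copy. The tiny logistic
# "network", the data and the 4-bit uniform quantizer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data and a stand-in for a trained linear model.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)
w = w_true + 0.1 * rng.normal(size=10)

def quantize(weights, bits=4):
    """Uniform symmetric weight quantization (illustrative)."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

def predict(weights, x):
    return (x @ weights > 0).astype(float)

# FGSM on the float model: perturb inputs along the sign of the loss gradient.
# For a logistic loss with a linear model, d(loss)/dx = (sigmoid(x.w) - y) * w.
p = 1.0 / (1.0 + np.exp(-(X @ w)))
grad_x = (p - y)[:, None] * w[None, :]
eps = 0.5
X_adv = X + eps * np.sign(grad_x)

w_q = quantize(w, bits=4)
acc = lambda weights, inputs: np.mean(predict(weights, inputs) == y)
print("float model, clean / adversarial:", acc(w, X), acc(w, X_adv))
print("quantized model, adversarial    :", acc(w_q, X_adv))  # transferability check
```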

    Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models

    Model extraction has emerged as a critical security threat, with attack vectors exploiting both algorithmic and implementation-based approaches. The main goal of an attacker is to steal as much information as possible about a protected victim model so that it can be mimicked with a substitute model, even with limited access to similar training data. Recently, physical attacks such as fault injection have shown worrying efficiency against the integrity and confidentiality of embedded models. We focus on embedded deep neural network models on 32-bit microcontrollers, a widespread family of hardware platforms in the IoT, and on the use of a standard fault injection strategy, the Safe Error Attack (SEA), to perform a model extraction attack with an adversary having limited access to training data. Since the attack strongly depends on the input queries, we propose a black-box approach to craft a successful attack set. For a classical convolutional neural network, we successfully recover at least 90% of the most significant bits with about 1500 crafted inputs. This information enables the efficient training of a substitute model, with only 8% of the training dataset, that reaches high fidelity and a near-identical accuracy level to the victim model. Comment: Accepted at SECAI Workshop, ESORICS 202
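    The safe-error principle behind such an extraction can be sketched as follows (an illustrative simulation, not the paper's attack code): a fault forcing a target weight bit to 0 leaves the inference output unchanged only if that bit was already 0, so each query/fault pair leaks one bit. The single dot-product "model", the crafted input and the int8 encoding below are assumptions.

```python
# Minimal sketch of the safe-error principle on 8-bit weights (illustrative).
import numpy as np

def infer(weights_int8, x):
    """Stand-in for the victim inference (a single dot product here)."""
    return int(np.dot(weights_int8.astype(np.int32), x))

def force_bit_to_zero(weights_int8, weight_idx, bit_idx):
    faulty = weights_int8.copy()
    faulty[weight_idx] = np.int8(faulty[weight_idx] & ~np.int8(1 << bit_idx))
    return faulty

rng = np.random.default_rng(1)
secret_w = rng.integers(0, 127, size=8, dtype=np.int8)   # victim weights (positive here)
x = rng.integers(1, 10, size=8, dtype=np.int32)          # crafted query

ref = infer(secret_w, x)
recovered = np.zeros((8, 7), dtype=np.int8)
for i in range(8):            # weight index
    for b in range(7):        # bit index (sign bit ignored for positive weights)
        faulty_out = infer(force_bit_to_zero(secret_w, i, b), x)
        # Safe error: output unchanged -> the bit was already 0.
        recovered[i, b] = 0 if faulty_out == ref else 1

assert all(int(recovered[i, b]) == ((int(secret_w[i]) >> b) & 1)
           for i in range(8) for b in range(7))
print("recovered bits match the secret weights")
```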

    A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks

    Deep neural network models are massively deployed on a wide variety of hardware platforms. This results in the appearance of new attack vectors that significantly extend the standard attack surface extensively studied by the adversarial machine learning community. One of the first attacks aiming to drastically drop the performance of a model by targeting its parameters (weights) stored in memory is the Bit-Flip Attack (BFA). In this work, we point out several evaluation challenges related to the BFA. First, the lack of an adversary's budget in the standard threat model is problematic, especially when dealing with physical attacks. Moreover, since the BFA presents critical variability, we discuss the influence of some training parameters and the importance of the model architecture. This work is the first to present the impact of the BFA against fully-connected architectures, which behave differently from convolutional neural networks. These results highlight the importance of defining robust and sound evaluation methodologies to properly evaluate the dangers of parameter-based attacks as well as to measure the real level of robustness offered by a defense. Comment: Extended version of the IEEE IOLTS'2022 short paper
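    A minimal sketch of one BFA-style search step, assuming a toy linear model, 8-bit quantized weights and a simplified gradient-based candidate selection (the original attack uses a progressive intra- and inter-layer bit search):

```python
# Illustrative single step in the spirit of the Bit-Flip Attack (not the
# original implementation): rank weights by loss gradient, flip one high-order
# bit of the most sensitive weight, and measure the accuracy drop.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
w_float = rng.normal(size=16)
y = (X @ w_float > 0).astype(float)

scale = np.max(np.abs(w_float)) / 127.0
w_int8 = np.clip(np.round(w_float / scale), -128, 127).astype(np.int8)

def accuracy(w_q):
    return np.mean(((X @ (w_q.astype(np.float64) * scale)) > 0) == y)

# Gradient of the logistic loss w.r.t. each (dequantized) weight.
p = 1.0 / (1.0 + np.exp(-(X @ (w_int8 * scale))))
grad_w = X.T @ (p - y) / len(y)

# Simplified candidate selection: flip bit 6 (largest magnitude bit) of the
# weight with the largest gradient magnitude.
target = int(np.argmax(np.abs(grad_w)))
attacked = w_int8.copy()
attacked[target] = np.int8(attacked[target] ^ np.int8(1 << 6))

print("accuracy before flip :", accuracy(w_int8))
print("accuracy after 1 flip:", accuracy(attacked))
```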

    Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip

    With the large-scale integration and use of neural network models, especially in critical embedded systems, their security assessment to guarantee their reliability is becoming an urgent need. More particularly, models deployed on embedded platforms, such as 32-bit microcontrollers, are physically accessible to adversaries and therefore vulnerable to hardware disturbances. We present the first set of experiments using two fault injection means, electromagnetic and laser injection, applied to neural network models embedded on a Cortex-M4 32-bit microcontroller platform. Contrary to most state-of-the-art works dedicated to the alteration of internal parameters or input values, our goal is to simulate and experimentally demonstrate the impact of a specific fault model: the instruction skip. For that purpose, we assess several modification attacks on the control flow of a neural network inference. We reveal integrity threats by targeting several steps in the inference program of typical convolutional neural network models, which may be exploited by an attacker to alter the predictions of the target models with different adversarial goals. Comment: Accepted at DSD 2023 for the AHSA Special Session
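    The fault model can be illustrated with a small simulation (not the paper's experimental setup), where the skipped instruction is assumed to be the activation call of a hidden layer:

```python
# Illustrative simulation of a single instruction skip in an inference routine.
# Weights, biases and the input are toy values; the "skipped instruction" is
# assumed to be the ReLU call of the hidden layer.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=8)
W2, b2 = rng.normal(size=(4, 8)), rng.normal(size=4)
x = rng.normal(size=16)

def inference(x, skip_activation=False):
    h = W1 @ x + b1
    if not skip_activation:          # the targeted instruction: activation call
        h = np.maximum(h, 0.0)
    logits = W2 @ h + b2
    return int(np.argmax(logits))

print("nominal prediction       :", inference(x))
print("with one instruction skip:", inference(x, skip_activation=True))
```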

    Evaluation of Parameter-based Attacks against Embedded Neural Networks with Laser Injection

    Upcoming certification actions related to the security of machine learning (ML) based systems raise major evaluation challenges that are amplified by the large-scale deployment of models on many hardware platforms. Until recently, most research works focused on API-based attacks that consider an ML model as a pure algorithmic abstraction. However, new implementation-based threats have been revealed, emphasizing the urgency of proposing both practical and simulation-based methods to properly evaluate the robustness of models. A major concern is parameter-based attacks (such as the Bit-Flip Attack, BFA) that highlight the lack of robustness of typical deep neural network models when confronted with accurate and optimal alterations of their internal parameters stored in memory. In a security testing setting, this work practically reports, for the first time, a successful variant of the BFA on a 32-bit Cortex-M microcontroller using laser fault injection, a standard fault injection means for security evaluation that enables spatially and temporally accurate faults. To avoid unrealistic brute-force strategies, we show how simulations help select the most sensitive set of bits among the parameters, taking into account the laser fault model. Comment: Accepted at the 42nd International Conference on Computer Safety, Reliability and Security, SafeComp 202
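    The simulation-based selection of sensitive bits can be sketched as follows, assuming a toy quantized linear model and a bit-set fault model for the laser (all toy values; a real evaluation would work on the memory layout of the deployed firmware):

```python
# Illustrative ranking of parameter bits by simulated sensitivity under a
# bit-set fault model (laser faults assumed to force a memory bit to 1).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
w_float = rng.normal(size=12)
y = (X @ w_float > 0).astype(float)

scale = np.max(np.abs(w_float)) / 127.0
w_int8 = np.clip(np.round(w_float / scale), -128, 127).astype(np.int8)

def accuracy(w_q):
    return np.mean(((X @ (w_q.astype(np.float64) * scale)) > 0) == y)

baseline = accuracy(w_int8)
impacts = []
for i in range(len(w_int8)):
    for bit in range(8):
        faulty = w_int8.copy()
        faulty.view(np.uint8)[i] |= np.uint8(1 << bit)   # simulated bit-set fault
        impacts.append((baseline - accuracy(faulty), i, bit))

# Most sensitive candidate bits to target with the laser.
for drop, i, bit in sorted(impacts, reverse=True)[:3]:
    print(f"weight {i}, bit {bit}: accuracy drop {drop:.3f}")
```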

    Design of a duplicated fault-detecting AES chip and yet using clock set-up time violations to extract 13 out of 16 bytes of the secret key

    The secret keys manipulated by cryptographic circuits can be extracted using fault injections combined with differential cryptanalysis techniques [1]. Such faults can be induced by different means such as lasers, voltage glitches, electromagnetic perturbations, or clock skews. Several counter-measures have been proposed, such as random delay insertion, circuit duplication, or error-correcting codes. In this paper, we focus on an AES chip in which the circuit duplication principle has been implemented to detect fault injection. We show that faults based on clock set-up time violations can nevertheless be used to defeat the implemented counter-measure.
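    The weakness of the duplication counter-measure against such global timing faults can be sketched conceptually (toy round function, not the chip's design): the set-up time violation is assumed to induce the same error in both instances, so the comparison passes and a faulty result is released.

```python
# Conceptual simulation: why datapath duplication with a comparator can miss
# faults caused by a global clock set-up time violation.
def round_function(state: int, key: int, setup_violation: bool = False) -> int:
    out = (state ^ key) & 0xFF
    if setup_violation:
        out ^= 0x10   # modeled timing fault: one flip-flop misses its set-up time
    return out

def protected_round(state: int, key: int, setup_violation: bool = False) -> int:
    r1 = round_function(state, key, setup_violation)
    r2 = round_function(state, key, setup_violation)   # duplicated instance
    if r1 != r2:
        raise RuntimeError("fault detected")            # the counter-measure
    return r1

correct = protected_round(0x3A, 0x5C)
faulty = protected_round(0x3A, 0x5C, setup_violation=True)   # passes undetected
print(f"correct: {correct:#04x}, faulty (undetected): {faulty:#04x}")
# The (correct, faulty) output pair is what differential fault analysis exploits.
```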

    ElectroMagnetic Analysis and Fault Injection onto Secure Circuits

    Implementation attacks are a major threat to hardware cryptographic implementations. These attacks exploit the correlation between the computed data and observable quantities such as computation time, power consumption, and electromagnetic (EM) emissions. Recently, the EM channel has been proven to be an effective passive and active attack technique against secure implementations. In this paper, we review recent results on this subject, with a particular focus on EM as a fault injection tool.
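    The correlation exploited by passive EM/power analysis can be illustrated with a small simulation (not from the paper; the stand-in S-box, Hamming-weight leakage model and noise level are assumptions):

```python
# Illustrative correlation attack on simulated leakage traces.
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)                       # stand-in for a cipher S-box
hw = np.array([bin(v).count("1") for v in range(256)])

secret_key_byte = 0xAB
plaintexts = rng.integers(0, 256, size=2000)
# One leakage sample per encryption: Hamming weight of the S-box output plus noise.
traces = hw[SBOX[plaintexts ^ secret_key_byte]] + rng.normal(0, 1.0, size=2000)

correlations = []
for guess in range(256):
    hypothesis = hw[SBOX[plaintexts ^ guess]]
    correlations.append(abs(np.corrcoef(hypothesis, traces)[0, 1]))

print("best key guess:", hex(int(np.argmax(correlations))))  # the correct byte should stand out
```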

    Liver Transplantation because of Acute Liver Failure due to Heme Arginate Overdose in a Patient with Acute Intermittent Porphyria

    In acute attacks of acute intermittent porphyria, the mainstays of treatment are glucose and heme arginate administration. We present the case of a 58-year-old patient with acute liver failure requiring urgent liver transplantation after an erroneous 6-fold overdose of heme arginate during an acute attack. As recommended in the product information, albumin and charcoal were administered and hemodiafiltration was started, but this could not prevent acute liver failure, requiring super-urgent liver transplantation after 6 days. The explanted liver showed no preexisting cirrhosis but signs of subacute liver injury and beginning regeneration. The patient recovered within a short time. A literature review revealed four poorly documented cases of potential hepatic and/or renal toxicity of hematin or heme arginate. This is the first published case report of acute liver failure requiring super-urgent liver transplantation after accidental heme arginate overdose. The literature and recommendations in case of heme arginate overdose are summarized. Knowledge of a potentially fatal course is important for the management of future cases. If acute liver failure after heme arginate overdose is progressive, super-urgent liver transplantation has to be evaluated.