
    DeMiST: Detection and Mitigation of Stealthy Analog Hardware Trojans

    The global semiconductor supply chain involves design and fabrication at various locations, which leads to multiple security vulnerabilities, e.g., Hardware Trojan (HT) insertion. Although most HTs target digital circuits, they can also be inserted into analog circuits, and several techniques have been developed for analog HT insertion. The capacitance-based Analog Hardware Trojan (AHT) is one of the stealthiest HTs: it can bypass most existing HT detection techniques because it relies on negligible charge accumulation in a capacitor to generate its stealthy trigger. To address these charge-sharing and accumulation issues, we propose in this paper a novel way to detect such capacitance-based AHTs. Second, we critically analyze existing AHTs to highlight their respective limitations. By addressing these limitations, we propose a stealthier capacitor-based AHT (fortified AHT) that can bypass our novel detection technique. Finally, by critically analyzing the proposed fortified AHT and existing AHTs, we develop a robust two-phase framework (DeMiST) in which a synchronous system can mitigate the effects of capacitance-based stealthy AHTs by turning off their triggering capability. In the first phase, we demonstrate how the synchronous system can avoid the AHT at run time by controlling the supply voltage of the intermediate combinational circuits. In the second phase, we propose a supply-voltage duty-cycle-based validation technique to detect capacitance-based AHTs. Furthermore, DeMiST amplifies the switching activity required for charge accumulation to such a degree that it becomes easily detectable using existing switching-activity-based HT detection techniques.
    Comment: Accepted at ACM Hardware and Architectural Support for Security and Privacy (HASP) 202
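
    To make the charge-accumulation trigger and the effect of supply gating concrete, the following is a minimal behavioral sketch in Python. It assumes a simplified trigger model with hypothetical charge, leakage, and gating parameters; it is not the DeMiST implementation, only an illustration of why discharging the trigger capacitor at run time defeats this class of AHT.

```python
# Illustrative behavioral model, not the DeMiST implementation: a capacitance-based
# AHT trigger accumulates charge on rare switching events and fires once a threshold
# is crossed. Periodically gating the supply voltage of the intermediate combinational
# logic (as in the first phase described above) discharges the capacitor before the
# trigger can fire. All parameter values here are hypothetical.

CHARGE_PER_EVENT = 0.05    # normalized charge added per rare trigger event (assumed)
LEAK_RATE = 0.001          # fraction of stored charge lost to leakage per cycle (assumed)
TRIGGER_THRESHOLD = 1.0    # normalized charge at which the Trojan payload fires


def first_trigger_cycle(cycles, event_period, gate_period=None):
    """Return the first cycle at which the AHT fires, or None if it never does."""
    charge = 0.0
    for cycle in range(1, cycles + 1):
        if gate_period and cycle % gate_period == 0:
            charge = 0.0                       # supply gating discharges the trigger capacitor
            continue
        if cycle % event_period == 0:
            charge += CHARGE_PER_EVENT         # stealthy charge accumulation
        charge *= 1.0 - LEAK_RATE              # natural leakage
        if charge >= TRIGGER_THRESHOLD:
            return cycle
    return None


print("no supply gating  :", first_trigger_cycle(10_000, event_period=3))
print("gated every 30 cyc:", first_trigger_cycle(10_000, event_period=3, gate_period=30))
```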

    Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

    Spiking Neural Networks (SNNs) are claimed to offer many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs w.r.t. adversarial noise. We then propose a novel black-box attack methodology, i.e., one requiring no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. adversarial examples. Our work opens new avenues of research into the robustness of SNNs, considering their similarities to the human brain's functionality.
    Comment: Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN)
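
    As a rough illustration of what a greedy black-box attack loop of this kind can look like, the sketch below assumes a generic `predict(image)` function that returns class probabilities and uses illustrative budget and step parameters; it is not the authors' exact algorithm.

```python
# Minimal sketch of a greedy black-box perturbation heuristic in the spirit of the
# attack described above (not the authors' exact algorithm). It only queries a
# user-supplied `predict(image) -> class probabilities` function (black-box access)
# and greedily commits the single pixel change that most reduces the confidence of
# the true class, keeping every pixel within an `eps` budget of the original so the
# perturbation stays imperceptible. All parameter values are illustrative.
import numpy as np


def greedy_blackbox_attack(predict, image, true_label, eps=0.1, step=0.05,
                           iters=50, probes_per_iter=64, rng=None):
    rng = rng or np.random.default_rng()
    adv = image.astype(np.float64)
    flat_adv, flat_orig = adv.reshape(-1), image.reshape(-1)
    for _ in range(iters):
        probs = predict(adv)
        if probs.argmax() != true_label:                 # misclassification achieved
            break
        best = (None, None, probs[true_label])           # (pixel index, new value, score)
        for idx in rng.choice(flat_adv.size, size=probes_per_iter, replace=False):
            for delta in (step, -step):
                new_val = np.clip(flat_adv[idx] + delta,
                                  max(0.0, flat_orig[idx] - eps),
                                  min(1.0, flat_orig[idx] + eps))
                old_val = flat_adv[idx]
                flat_adv[idx] = new_val                  # probe this single-pixel change
                score = predict(adv)[true_label]
                flat_adv[idx] = old_val                  # undo the probe
                if score < best[2]:
                    best = (idx, new_val, score)
        if best[0] is None:                              # no probe helped; give up
            break
        flat_adv[best[0]] = best[1]                      # commit the best greedy change
    return adv
```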

    QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks

    Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities based on a "fixed" number of quantization levels, while in TQ the quantization levels are "iteratively learned during the training phase", thereby providing a stronger defense mechanism. We apply the proposed techniques to undefended CNNs against different state-of-the-art adversarial attacks from the open-source \textit{Cleverhans} library. The experimental results demonstrate a 50%-96% and a 10%-50% increase in the classification accuracy of perturbed images generated from the MNIST and CIFAR-10 datasets, respectively, on a commonly used CNN (Conv2D(64, 8x8) - Conv2D(128, 6x6) - Conv2D(128, 5x5) - Dense(10) - Softmax()) available in the \textit{Cleverhans} library.
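
    The Constant Quantization idea can be illustrated in a few lines of NumPy; the sketch below assumes inputs normalized to [0, 1] and an arbitrary number of evenly spaced levels, so it is only a simplified stand-in for the paper's defense (the trainable variant, TQ, would instead learn the level positions during training).

```python
# Minimal NumPy sketch of the Constant Quantization (CQ) idea described above:
# input pixel intensities (assumed normalized to [0, 1]) are snapped to a fixed
# number of evenly spaced levels before being fed to the CNN, which removes much
# of a small adversarial perturbation. The number of levels is an illustrative choice.
import numpy as np


def constant_quantize(x, levels=4):
    """Quantize pixel intensities in [0, 1] to `levels` evenly spaced values."""
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * (levels - 1)) / (levels - 1)


# Example: a small adversarial perturbation is largely removed by quantization,
# since both the clean and the perturbed pixels snap to the same levels.
clean = np.array([0.10, 0.48, 0.52, 0.90])
perturbed = clean + np.array([0.04, -0.03, 0.03, -0.04])
print(constant_quantize(clean))
print(constant_quantize(perturbed))
```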

    Security for Machine Learning-based Systems: Attacks and Challenges during Training and Inference

    The exponential increase in dependencies between the cyber and physical worlds leads to an enormous amount of data that must be efficiently processed and stored. Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to process this data efficiently and accurately. Although ML-based solutions address the efficient computing requirements of big data, they introduce (new) security vulnerabilities into these systems which cannot be addressed by traditional monitoring-based security measures. Therefore, this paper first presents a brief overview of various security threats in machine learning, their respective threat models, and the associated research challenges in developing robust security measures. To illustrate the security vulnerabilities of ML during training, inference, and hardware implementation, we demonstrate some key security threats on ML using LeNet and VGGNet for MNIST and the German Traffic Sign Recognition Benchmark (GTSRB), respectively. Moreover, based on the security analysis of ML training, we also propose an attack that has very little impact on inference accuracy. Towards the end, we highlight the associated research challenges in developing security measures and provide a brief overview of the techniques used to mitigate such security threats.
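
    One common stealthy training-time attack pattern of the kind surveyed here is trigger-based data poisoning; the sketch below assumes (N, H, W) image arrays and hypothetical parameters, and is not necessarily the exact attack proposed in the paper.

```python
# Illustrative sketch of a stealthy training-time (poisoning) attack of the kind
# surveyed above; not necessarily the exact attack proposed in the paper. A small
# fraction of training images receives a fixed trigger patch and the attacker's
# target label: clean inference accuracy is barely affected, but triggered inputs
# are steered towards the target class. Shapes and values are assumed.
import numpy as np


def poison_dataset(images, labels, target_label, fraction=0.01, patch_value=1.0):
    """Stamp a 3x3 trigger patch on a random subset of (N, H, W) images and relabel it."""
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(len(images) * fraction))
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = patch_value        # trigger in the bottom-right corner
    labels[idx] = target_label
    return images, labels
```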