
    Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient

    Recently, learning algorithms inspired by backpropagation through time have been widely introduced into SNNs to improve performance, which opens the possibility of attacking the models accurately given spatio-temporal gradient maps. We propose two approaches to address the challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike converter that turns continuous gradients into ternary gradients compatible with spike inputs. We then design a gradient trigger that, when all gradients are zero, constructs ternary gradients that randomly flip the spike inputs at a controllable turnover rate. Putting these methods together, we build an adversarial attack methodology for SNNs trained by supervised algorithms. Moreover, we analyze the influence of the training loss function and of the firing threshold of the penultimate layer, which reveals a "trap" region under the cross-entropy loss that can be escaped by threshold tuning. Extensive experiments validate the effectiveness of our solution. Besides the quantitative analysis of these influence factors, we provide evidence that SNNs are more robust against adversarial attack than ANNs. This work helps reveal what happens during an SNN attack and may stimulate more research on the security of SNN models and neuromorphic devices.
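    The two components of the attack can be sketched in a few lines. This is an illustrative NumPy sketch of the idea only, not the paper's implementation; the `threshold` and `turnover_rate` parameters and both function names are assumptions for illustration.

```python
import numpy as np

def gradient_to_spike(grad, threshold=1e-3):
    """Quantize continuous gradients to ternary values {-1, 0, +1} so the
    resulting perturbation stays compatible with binary spike inputs."""
    ternary = np.zeros_like(grad, dtype=np.int8)
    ternary[grad > threshold] = 1
    ternary[grad < -threshold] = -1
    return ternary

def gradient_trigger(spikes, turnover_rate=0.01, rng=None):
    """When all gradients vanish, construct a random ternary gradient that
    flips spike inputs at a controllable turnover rate."""
    rng = rng or np.random.default_rng(0)
    flip_mask = rng.random(spikes.shape) < turnover_rate
    # A flip sends 0 -> 1 (gradient +1) and 1 -> 0 (gradient -1).
    return np.where(flip_mask, 1 - 2 * spikes, 0).astype(np.int8)

spikes = np.array([0, 1, 1, 0, 1])
grad = np.array([0.5, -0.2, 0.0005, -0.9, 0.0])
delta = gradient_to_spike(grad)
adv_spikes = np.clip(spikes + delta, 0, 1)  # perturbed input stays binary
```

Clipping after adding the ternary perturbation keeps the adversarial input a valid spike train, which is the compatibility constraint the converter is designed around.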

    Вопросы реализации нейросетевых алгоритмов на мемристорных кроссбарах

    The natural parallelization of matrix-vector operations inherent in memristor crossbars creates opportunities for their effective use in neural network computing. Analog calculations are orders of magnitude faster than calculations on a central processor or on graphics accelerators, and the energy cost of the mathematical operations is significantly lower. An essential feature of analog computing, however, is its low accuracy. It is therefore relevant to study how the quality of a neural network depends on the precision with which its weights are set. The paper considers two convolutional neural networks trained on the MNIST (handwritten digits) and CIFAR_10 (airplanes, boats, cars, etc.) data sets. The first consists of two convolutional layers, one subsampling layer, and two fully connected layers; the second consists of four convolutional layers, two subsampling layers, and two fully connected layers. Calculations in the convolutional and fully connected layers are performed through matrix-vector operations implemented on memristor crossbars. Subsampling layers require finding the maximum of several values, an operation that can also be implemented at the analog level. Training runs separately from data analysis; as a rule, gradient optimization methods are used at the training stage, and it is advisable to perform these on a CPU. To obtain acceptable recognition quality when setting the weights, 3-4 precision bits are required for the network trained on MNIST, and 6-10 precision bits for the network trained on CIFAR_10.
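    The limited precision of crossbar weights can be modeled by uniform quantization to a given number of bits. The sketch below illustrates the general idea under that assumption; it is not the paper's exact procedure, and the quantization scheme (uniform over the weight range) is an assumption.

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniformly quantize weights to 2**bits levels over [min, max],
    mimicking the limited precision of memristor conductance states."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(42)
w = rng.normal(size=1000)               # stand-in for trained weights
err4 = np.abs(quantize_weights(w, 4) - w).max()   # 4 bits: MNIST regime
err8 = np.abs(quantize_weights(w, 8) - w).max()   # 8 bits: CIFAR regime
# More precision bits -> smaller worst-case weight error.
```

The worst-case error of uniform quantization is half the step size, which is why harder data sets (CIFAR_10) need more bits than MNIST before recognition quality stops degrading.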

    Mitigating State-Drift in Memristor Crossbar Arrays for Vector Matrix Multiplication

    In this chapter, we review recent progress on resistance-drift mitigation techniques for resistive switching memory devices (specifically memristors) and the impact of drift on accuracy in deep neural network applications. In the first section, we examine the importance of soft errors and their detrimental impact on the performance of memristor-based vector-matrix multiplication (VMM) platforms, especially the memristance state-drift induced by long-term recurring inference operations with sub-threshold stress voltage, and briefly review currently developed state-drift mitigation methods. In the next section, we discuss an adaptive inference technique with low hardware overhead that mitigates memristance drift in memristive VMM platforms by using optimization techniques to adjust the inference voltage characteristic associated with different network layers. We also present simulation results and the performance improvements achieved by the proposed inference technique, accounting for non-idealities, for various deep network applications on memristor crossbar arrays. This chapter suggests that a simple, low-overhead inference technique can revive the functionality, enhance the performance, and significantly increase the lifetime of memristor-based VMM arrays, an important step toward making this technology a mainstream player in future in-memory computing platforms.
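    The core idea of adjusting the inference voltage to compensate drift can be illustrated with a toy model. The exponential decay law and the `drift_rate` constant below are illustrative assumptions, not the chapter's device model; the point is only that rescaling the read voltage can restore the pre-drift output current.

```python
import numpy as np

def drifted_conductance(g0, reads, drift_rate=1e-5):
    """Toy model: conductance decays with recurring sub-threshold
    inference reads (assumed exponential law, illustrative rate)."""
    return g0 * np.exp(-drift_rate * reads)

def adaptive_voltage(v0, g0, g_now):
    """Scale the inference voltage so the output current I = V * G
    matches its pre-drift value, the essence of adaptive inference."""
    return v0 * g0 / g_now

g0 = 1e-4     # initial conductance, siemens
v0 = 0.1      # sub-threshold read voltage, volts
g = drifted_conductance(g0, reads=50_000)
i_naive = v0 * g                       # current after drift, uncompensated
i_adapt = adaptive_voltage(v0, g0, g) * g   # compensated current
```

In the chapter's setting the adjustment is found per layer by optimization rather than from a known drift law, but the effect is the same: the uncompensated current falls while the adapted one stays at its nominal value.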

    ALL-MASK: A Reconfigurable Logic Locking Method for Multicore Architecture with Sequential-Instruction-Oriented Key

    Intellectual property (IP) piracy has become a non-negligible problem as the integrated circuit (IC) supply chain grows increasingly globalized and disaggregated, enabling attacks by potentially untrusted parties. Logic locking is a widely adopted method of locking a circuit module with a key to prevent attackers from cracking it. The key is the critical aspect of logic locking, but existing works have overlooked three challenges it raises: the safety of key storage, easy key attempts through the interface, and key-related overheads, which in turn lead to low error rates and a small state space. In this work, the key is generated dynamically by exploiting the huge state space of a CPU core, and unlocking is performed implicitly through the interconnect inside the chip. We also propose a novel low-cost reconfigurable logic gate based on the ferroelectric FET (FeFET) to mitigate reverse-engineering and removal attacks. Compared to common logic locking methods, our approach makes traversing all possible key combinations 19,945 times more time-consuming even with only a 9-bit key, and increasing the key length grows this complexity exponentially while preserving the logic obfuscation effect.
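    The cost claim factors into two parts: the per-attempt slowdown (the paper's reported 19,945x for a 9-bit key) and the exponential size of the key space. A minimal sketch of that arithmetic, with the per-attempt time an assumed illustrative figure:

```python
def attack_time(key_bits, seconds_per_attempt):
    """Worst-case brute-force time: traverse all 2**key_bits keys."""
    return (2 ** key_bits) * seconds_per_attempt

# Assumed illustrative per-attempt cost for a conventional scheme; the
# implicit on-chip unlocking multiplies it by the reported 19,945 factor.
t_common = attack_time(9, 1e-6)
t_ours = attack_time(9, 1e-6 * 19_945)
# Each added key bit then doubles either figure again.
```

Because the key is a sequence of instructions rather than a stored bit vector, lengthening it raises the attacker's worst-case search exponentially without a matching growth in hardware overhead.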

    Achievable Rate and Capacity Analysis for Ambient Backscatter Communications

    © 2019 IEEE. In this paper, we analyze the achievable rate of ambient backscatter communications over three different channels: the binary-input binary-output (BIBO) channel, the binary-input signal-output (BISO) channel, and the binary-input energy-output (BIEO) channel. Instead of assuming a Gaussian input distribution, the study matches practical ambient backscatter scenarios, where the input of the tag can only be binary. We derive the closed-form capacity expression as well as the capacity-achieving input distribution for the BIBO channel. To show the influence of the signal-to-noise ratio (SNR) on the capacity, a closed-form tight ceiling is also derived for relatively large SNR. For the BISO and BIEO channels, we obtain the closed-form mutual information, while the semi-closed-form capacity can be obtained via a one-dimensional search. Simulations corroborate the theoretical studies. Interestingly, they show that: (i) the detection threshold that maximizes the capacity of the BIBO channel is the same as the one from maximum-likelihood signal detection; (ii) the maximum mutual information of all channels is achieved almost exactly by a uniform input distribution; and (iii) the mutual information of the BIEO channel is larger than that of the BIBO channel but smaller than that of the BISO channel.
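    The one-dimensional capacity search over the binary input distribution can be sketched numerically. This is an illustrative helper, not the paper's closed-form derivation; the crossover probabilities `e0`, `e1` are assumed parameters of a generic asymmetric BIBO channel.

```python
import numpy as np

def mutual_information_bibo(p1, e0, e1):
    """Mutual information (bits/use) of a binary-input binary-output
    channel with input prior P(x=1)=p1 and flip probabilities
    e0 = P(y=1|x=0), e1 = P(y=0|x=1)."""
    def h(p):  # binary entropy, clipped so h(0)=h(1)=0 numerically
        p = np.clip(p, 1e-15, 1 - 1e-15)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    py1 = (1 - p1) * e0 + p1 * (1 - e1)          # output distribution P(y=1)
    return h(py1) - (1 - p1) * h(e0) - p1 * h(e1)

# Capacity via a one-dimensional search over the input prior p1.
ps = np.linspace(0.01, 0.99, 999)
rates = [mutual_information_bibo(p, 0.1, 0.1) for p in ps]
capacity = max(rates)
best_p = ps[int(np.argmax(rates))]
```

For a symmetric channel (e0 = e1) the search recovers the textbook result: the uniform prior p1 = 0.5 is optimal, consistent with observation (ii) above that a uniform input distribution is nearly capacity-achieving.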