
    The Synthesis Of Graphene Films Via Graphene Oxide Reduction Using Green Tea

    In recent years, graphene has emerged as the most promising nanomaterial for various potential applications, especially in the biomedical field, owing to its unique two-dimensional (2D) nanostructure and intriguing physicochemical properties. A simple method to produce graphene was developed by reducing graphene oxide (GO) using green tea polyphenol (GTP) in a batch reactor. This method is environmentally benign, cost-effective and scalable for high-volume production. The product of the reduction process is referred to as reduced GO (RGO). The effects of the GTP/GO weight ratio and the reaction temperature on the reduction of GO were examined in detail. Ultraviolet-visible (UV-Vis) spectroscopy, Fourier transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA) and measurements of zeta potential and electrophoretic mobility reveal that successful reduction of GO and a stable RGO dispersion in aqueous media could be attained by reducing GO with GTP at 90 °C using a GTP/GO weight ratio of 1. In addition, UV-Vis spectroscopy and X-ray photoelectron spectroscopy (XPS) analysis show that the RGO prepared using GTP exhibits a final absorption peak position (271 nm) and an sp2 carbon intensity nearly identical to those of RGO produced using hydrazine (N2H4) solution. This observation indicates that GTP reduces GO as effectively as N2H4, the standard reducing agent.

    Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

    Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs w.r.t. adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., one requiring no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. adversarial examples. Our work opens new avenues of research towards the robustness of SNNs, considering their similarities to the human brain's functionality. Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN).
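    The abstract only summarizes the black-box methodology at a high level; a minimal sketch of a greedy, query-based perturbation loop of this general kind is shown below. It is not the authors' exact algorithm; the classifier query model_predict, the step size eps, the L-infinity budget max_l_inf and the per-iteration candidate count are illustrative assumptions.

```python
import numpy as np

def greedy_blackbox_attack(image, true_label, model_predict,
                           eps=0.02, max_l_inf=0.1, max_iters=200):
    """Greedy black-box attack sketch: repeatedly apply the single small
    pixel change that most reduces the true-class confidence, querying the
    model only through model_predict(image) -> class-probability vector.
    Illustrative approximation, not the paper's exact method."""
    adv = image.copy()
    for _ in range(max_iters):
        probs = model_predict(adv)
        if np.argmax(probs) != true_label:
            return adv  # misclassification achieved
        base_conf = probs[true_label]

        # Try a +/- step on a random subset of pixels and keep the single
        # change that lowers the true-class confidence the most.
        best_drop, best_change = 0.0, None
        flat = adv.reshape(-1)
        for idx in np.random.choice(flat.size, size=32, replace=False):
            for step in (eps, -eps):
                candidate = flat.copy()
                candidate[idx] = np.clip(candidate[idx] + step, 0.0, 1.0)
                # Keep the perturbation imperceptible (L_inf budget).
                if np.max(np.abs(candidate - image.reshape(-1))) > max_l_inf:
                    continue
                new_conf = model_predict(candidate.reshape(image.shape))[true_label]
                drop = base_conf - new_conf
                if drop > best_drop:
                    best_drop, best_change = drop, (idx, candidate[idx])
        if best_change is None:
            break  # no improving change found within the budget
        flat[best_change[0]] = best_change[1]
        adv = flat.reshape(image.shape)
    return adv
```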

    Security for Machine Learning-based Systems: Attacks and Challenges during Training and Inference

    The exponential increase in dependencies between the cyber and physical worlds leads to an enormous amount of data which must be efficiently processed and stored. Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to process this data efficiently and accurately. Although ML-based solutions address the efficient-computing requirements of big data, they introduce new security vulnerabilities into the systems, which cannot be addressed by traditional monitoring-based security measures. Therefore, this paper first presents a brief overview of various security threats in machine learning, their respective threat models and the associated research challenges in developing robust security measures. To illustrate the security vulnerabilities of ML during training, inference and hardware implementation, we demonstrate some key security threats on ML using LeNet and VGGNet for the MNIST and German Traffic Sign Recognition Benchmark (GTSRB) datasets, respectively. Moreover, based on the security analysis of ML training, we also propose an attack that has very little impact on the inference accuracy. Towards the end, we highlight the associated research challenges in developing security measures and provide a brief overview of the techniques used to mitigate such security threats.
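    As an illustration of the class of inference-time threats surveyed above, the following is a minimal sketch of a generic gradient-sign (FGSM-style) perturbation applied to a differentiable classifier such as a LeNet trained on MNIST. It is a stand-in example, not the specific attack proposed in the paper; the function name and the eps parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=0.05):
    """Generic FGSM-style inference-time attack sketch (illustrative only).
    `model` is any differentiable classifier, `image` a batched input tensor
    in [0, 1], and `label` a tensor of class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss,
    # then clip back into the valid input range.
    adv = (image + eps * image.grad.sign()).clamp(0.0, 1.0).detach()
    return adv
```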

    QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks

    Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities based on a "fixed" number of quantization levels, while in TQ the quantization levels are "iteratively learned during the training phase", thereby providing a stronger defense mechanism. We apply the proposed techniques on undefended CNNs against different state-of-the-art adversarial attacks from the open-source Cleverhans library. The experimental results demonstrate a 50%-96% and a 10%-50% increase in the classification accuracy of perturbed images generated from the MNIST and CIFAR-10 datasets, respectively, on a commonly used CNN (Conv2D(64, 8x8) - Conv2D(128, 6x6) - Conv2D(128, 5x5) - Dense(10) - Softmax()) available in the Cleverhans library.
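    The Constant Quantization idea described above, snapping input pixel intensities to a fixed number of evenly spaced levels before they reach the CNN, can be sketched as a small preprocessing module. This is a minimal PyTorch sketch under the assumptions stated in the comments, not the paper's own code; the class name and num_levels parameter are illustrative.

```python
import torch
import torch.nn as nn

class ConstantQuantization(nn.Module):
    """Sketch of a CQ-style input layer: pixel intensities in [0, 1] are
    rounded to a fixed number of evenly spaced quantization levels before
    being passed to the CNN. The trainable variant (TQ) would instead learn
    the level positions during training."""
    def __init__(self, num_levels=4):
        super().__init__()
        self.num_levels = num_levels

    def forward(self, x):
        # Map [0, 1] -> {0, 1/(L-1), ..., 1} by rounding to the nearest level.
        return torch.round(x * (self.num_levels - 1)) / (self.num_levels - 1)

# Usage sketch: prepend the quantizer to an existing (undefended) CNN.
# defended_model = nn.Sequential(ConstantQuantization(num_levels=4), cnn)
```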