
    Robustness of Image-Based Malware Classification Models Trained with Generative Adversarial Networks

    As malware continues to evolve, deep learning models are increasingly used for malware detection and classification, including image-based classification. However, adversarial attacks can be used to perturb images so as to evade detection by these models. This study investigates the effectiveness of training deep learning models with Generative Adversarial Network (GAN)-generated data to improve their robustness against such attacks. Two image conversion methods, byte plots and space-filling curves, were used to represent the malware samples, and a ResNet-50 architecture was used to train models on the image datasets. The models were then tested against a projected gradient descent (PGD) attack. Without GAN-generated data, the models' prediction accuracy drastically decreased under attack, from 93-95% to 4.5%. However, adding adversarial images to the training data almost doubled the accuracy of the attacked models. This study highlights the potential benefits of incorporating GAN-generated data in the training of deep learning models to improve their robustness against adversarial attacks.
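The PGD attack named in the abstract can be sketched in a few lines. This is a minimal NumPy stand-in, not the paper's setup: the fixed logistic-regression "classifier" (`w`, `b`), the 64-pixel input, and all hyperparameters (`eps`, `alpha`, `steps`) are hypothetical placeholders for the ResNet-50 and malware images used in the study. PGD repeatedly takes a signed gradient-ascent step on the loss and projects the result back into an epsilon-ball around the clean input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, w, b, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = (p - y) * w  # d(loss)/dx for a logistic-regression model
    return loss, grad

def pgd_attack(x0, w, b, y, eps=0.1, alpha=0.02, steps=20):
    """Maximize the loss by signed gradient ascent, projecting each
    iterate into the L-infinity ball of radius eps around x0 and
    keeping pixels in the valid [0, 1] range."""
    x = x0.copy()
    for _ in range(steps):
        _, grad = loss_and_grad(x, w, b, y)
        x = x + alpha * np.sign(grad)       # ascent step on the loss
        x = np.clip(x, x0 - eps, x0 + eps)  # project into the eps-ball
        x = np.clip(x, 0.0, 1.0)            # valid pixel intensities
    return x

rng = np.random.default_rng(0)
w = rng.normal(size=64)                     # hypothetical fixed weights
b = 0.0
x_clean = np.clip(rng.normal(0.5, 0.1, size=64), 0, 1)
y_true = 1.0 if sigmoid(w @ x_clean + b) > 0.5 else 0.0

x_adv = pgd_attack(x_clean, w, b, y_true)
loss_clean, _ = loss_and_grad(x_clean, w, b, y_true)
loss_adv, _ = loss_and_grad(x_adv, w, b, y_true)
print("loss increased:", loss_adv > loss_clean)
print("within eps-ball:", np.max(np.abs(x_adv - x_clean)) <= 0.1 + 1e-9)
```

Adversarial training, as evaluated in the study, amounts to folding such perturbed inputs (or GAN-generated equivalents) back into the training set so the model learns to classify them correctly.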

    Exploring the Space of Adversarial Images

    Adversarial examples have raised questions regarding the robustness and security of deep neural networks. In this work we formalize the problem of adversarial images given a pretrained classifier, showing that even in the linear case the resulting optimization problem is nonconvex. We generate adversarial images using shallow and deep classifiers on the MNIST and ImageNet datasets. We probe the pixel space of adversarial images using noise of varying intensity and distribution. We bring novel visualizations that showcase the phenomenon and its high variability. We show that adversarial images appear in large regions in the pixel space, but that, for the same task, a shallow classifier seems more robust to adversarial images than a deep convolutional network.
    Comment: Copyright 2016 IEEE. This manuscript was accepted at the IEEE International Joint Conference on Neural Networks (IJCNN) 2016. We will link the published version as soon as the DOI is available.
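The pixel-space probe described above can be illustrated with a small sketch: add noise of increasing intensity around a fixed point and measure how often the predicted class flips. Everything here is a hypothetical stand-in, assuming a fixed linear classifier and a 64-dimensional input rather than the MNIST/ImageNet networks studied in the paper; the noise scales and sample count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)  # hypothetical fixed linear classifier

def predict(x):
    """Binary decision of the linear model: sign of w @ x."""
    return int(w @ x > 0.0)

x0 = rng.normal(size=64)  # the point being probed
base = predict(x0)

# For each noise intensity, sample perturbed copies of x0 and record
# the fraction whose predicted class differs from the base prediction.
flip_rates = []
for sigma in [0.01, 0.1, 1.0]:
    noisy = x0 + rng.normal(scale=sigma, size=(500, 64))
    flips = float(np.mean([predict(x) != base for x in noisy]))
    flip_rates.append(flips)
    print(f"sigma={sigma}: flip rate {flips:.2f}")
```

A low flip rate across large noise levels indicates the point sits deep inside a region of constant classification, which is the kind of pixel-space geometry the paper's visualizations explore.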