14 research outputs found

    Towards Robust Classification in Adversarial Learning using Bayesian Games

    A well-trained neural network can be highly accurate when classifying data into different categories. However, a malicious adversary can fool a neural network through tiny changes to the data, called perturbations, that would not even be detectable to a human. This makes neural networks vulnerable to manipulation by an attacker. Generative Adversarial Networks (GANs) have been developed as one possible solution to this problem [1]. A GAN consists of two neural networks, a generator and a discriminator. The discriminator tries to learn how to classify data into categories. The generator stands in for the attacker and tries to discover the best way to make the discriminator misclassify by perturbing the input. Our work improves on this method by applying Bayesian games to model multiple generators and discriminators rather than one of each. By training against multiple types of input perturbation, the discriminators improve their classification of adversarial samples.
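    The sketch below is a rough illustration of this setup, not the authors' implementation: several perturbation generators stand in for attacker types and are randomly paired with several discriminators at each step, loosely mirroring the Bayesian-game uncertainty over opponents. The network sizes, perturbation budgets, pairing scheme, and use of PyTorch are all assumptions made for illustration.

```python
# A minimal sketch (assumptions noted above): several perturbation generators
# attack several classifiers ("discriminators"), and each step randomly pairs
# one generator with one discriminator before training both adversarially.
import random

import torch
import torch.nn as nn


def make_classifier(in_dim=784, n_classes=10):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))


def make_generator(in_dim=784):
    # Maps a clean input to a bounded perturbation; a stand-in for the attacker.
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, in_dim), nn.Tanh())


discriminators = [make_classifier() for _ in range(3)]
generators = [make_generator() for _ in range(3)]
budgets = [0.05, 0.1, 0.2]  # per-generator perturbation sizes (illustrative)
d_opts = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in discriminators]
g_opts = [torch.optim.Adam(g.parameters(), lr=1e-3) for g in generators]
loss_fn = nn.CrossEntropyLoss()


def training_step(x, y):
    # Random pairing crudely mimics uncertainty over which opponent type is faced.
    gi = random.randrange(len(generators))
    di = random.randrange(len(discriminators))
    gen, eps, disc = generators[gi], budgets[gi], discriminators[di]

    # Generator step: craft perturbations that increase the classification loss.
    g_loss = -loss_fn(disc(x + eps * gen(x)), y)
    g_opts[gi].zero_grad()
    g_loss.backward()
    g_opts[gi].step()

    # Discriminator step: classify both clean and (detached) perturbed inputs.
    x_adv = (x + eps * gen(x)).detach()
    d_loss = loss_fn(disc(x), y) + loss_fn(disc(x_adv), y)
    d_opts[di].zero_grad()
    d_loss.backward()
    d_opts[di].step()
    return d_loss.item()


# Random data stands in for a real dataset.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```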

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering early work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. Comment: Accepted for publication in Pattern Recognition, 201
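    As a concrete example of the test-time perturbations discussed above, the sketch below implements the well-known fast gradient sign method; the toy model, input dimensionality, and perturbation budget eps are illustrative assumptions and are not taken from the survey.

```python
# A classic test-time attack (fast gradient sign method); model and sizes are toy.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()


def fgsm_perturb(x, y, eps=0.1):
    """Return x plus a small perturbation that increases the model's loss on (x, y)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    # Step in the direction of the sign of the input gradient, bounded by eps.
    return (x + eps * x.grad.sign()).detach()


x = torch.randn(8, 784)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())  # per-feature change is at most eps
```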

    A Survey of Adversarial Machine Learning in Cyber Warfare

    The changing nature of warfare has seen a paradigm shift from conventional to asymmetric, contactless warfare such as information and cyber warfare. Excessive dependence on information and communication technologies, cloud infrastructures, big data analytics, data mining, and automation in decision making poses grave threats to business and economy in adversarial environments. Adversarial machine learning is a fast-growing area of research that studies the design of machine learning algorithms that are robust in adversarial environments. This paper presents a comprehensive survey of this emerging area and the various techniques of adversary modelling. We explore the threat models for machine learning systems and describe the various techniques to attack and defend them. We present privacy issues in these models, describe a cyber-warfare test-bed to test the effectiveness of the various attack-defence strategies, and conclude with some open problems in this area of research.

    Adversarial classification: An adversarial risk analysis approach

    Classification problems in security settings are usually contemplated as confrontations in which one or more adversaries try to fool a classifier to obtain a benefit. Most approaches to such adversarial classification problems have focused on game-theoretic ideas with strong underlying common-knowledge assumptions, which are not realistic in security domains. We provide an alternative framework for such problems based on adversarial risk analysis, which we illustrate with several examples. Computational and implementation issues are discussed. Comment: Published in the International Journal of Approximate Reasoning
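    The following toy sketch conveys the adversarial-risk-analysis flavour of this approach, without claiming to reproduce the paper's framework: since the attacker's utilities are not common knowledge, the defender places distributions over them, simulates the attacker's tampering decision by Monte Carlo, and folds the resulting attack probability into a Bayesian posterior over the class. The single binary feature, the distributions, and all numerical values are assumptions for illustration.

```python
# Toy adversarial-risk-analysis-style classifier: the attack probability comes
# from simulating an attacker whose utilities are uncertain to the defender.
# All distributions and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Defender's generative model: y = 1 is "malicious", x is one binary feature.
p_y = {0: 0.7, 1: 0.3}
p_x_given_y = {0: [0.9, 0.1],   # benign instances rarely show the feature
               1: [0.2, 0.8]}   # malicious instances usually show it


def simulate_attack_prob(n_samples=10_000):
    """Monte Carlo estimate of the chance a malicious instance hides its feature.
    The attacker's gain from being misclassified and its modification cost are
    unknown to the defender, so both are drawn from assumed distributions."""
    gains = rng.uniform(0.5, 2.0, n_samples)
    costs = rng.uniform(0.0, 1.0, n_samples)
    return float(np.mean(gains > costs))  # attacker flips the feature when gain > cost


def posterior_malicious(x_obs):
    """Defender's p(y = 1 | observed x), accounting for possible tampering."""
    q = simulate_attack_prob()

    def p_obs_given(x, y):
        if y == 1 and x == 1:                  # only malicious, feature-bearing
            return q if x_obs == 0 else 1 - q  # instances may be tampered with
        return 1.0 if x_obs == x else 0.0

    lik = {y: sum(p_obs_given(x, y) * p_x_given_y[y][x] for x in (0, 1)) for y in (0, 1)}
    joint = {y: p_y[y] * lik[y] for y in (0, 1)}
    return joint[1] / (joint[0] + joint[1])


# Seeing "no feature" is much weaker evidence of benign once tampering is modelled.
print(posterior_malicious(x_obs=0))
print(posterior_malicious(x_obs=1))
```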