
    The art of defense: letting networks fool the attacker

    Some deep neural networks are invariant to certain input transformations; for example, PointNet is invariant to permutations of its input point cloud. In this paper, we demonstrate that this property can be a powerful defense against gradient-based attacks. Specifically, we apply random input transformations to which the network being defended is invariant. Extensive experiments demonstrate that the proposed scheme defeats various gradient-based attackers in the targeted attack setting, reducing the attack success rate to nearly zero. Our code is available at: https://github.com/cuge1995/IT-Defense
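    A minimal sketch of the idea (an assumption on our part, not the authors' released code): wrap a permutation-invariant PyTorch point-cloud classifier so that every query is answered on a freshly permuted copy of the points. Clean predictions are unchanged, but the gradients an attacker obtains refer to an ordering that changes on every query, scattering iterative gradient-based perturbations across points.

        # Illustrative sketch only; `model` is assumed to be a permutation-invariant
        # point-cloud classifier such as a PointNet-style network.
        import torch
        import torch.nn as nn

        class ITDefenseWrapper(nn.Module):
            def __init__(self, model):
                super().__init__()
                self.model = model

            def forward(self, points):
                # points: (batch, num_points, 3); reorder the points at random
                # before every forward pass.
                perm = torch.randperm(points.shape[1], device=points.device)
                return self.model(points[:, perm, :])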

    Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples

    Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of security by causing gradient-based attacks to fail, and they have been broken under more rigorous evaluations. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in a systematic manner. In this work, we overcome these limitations by (i) defining a set of quantitative indicators which unveil common failures in the optimization of gradient-based attacks, and (ii) proposing specific mitigation strategies within a systematic evaluation protocol. Our extensive experimental analysis shows that the proposed indicators of failure can be used to visualize, debug, and improve current adversarial robustness evaluations, providing a first concrete step towards automating and systematizing them. Our open-source code is available at: https://github.com/pralab/IndicatorsOfAttackFailure
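    The paper defines its own set of indicators; purely as an illustration of the kind of check involved, the hypothetical function below flags an attack run whose loss trace barely improves over the iterations, a common symptom of vanishing or obfuscated gradients.

        # Hypothetical example, not the authors' implementation.
        import numpy as np

        def stalled_optimization_indicator(loss_trace, min_rel_decrease=0.01):
            """Return True when the attack objective barely improves.

            loss_trace: one attack-loss value per iteration of the attack.
            """
            loss = np.asarray(loss_trace, dtype=float)
            if loss.size < 2 or loss[0] == 0:
                return False
            rel_decrease = (loss[0] - loss.min()) / abs(loss[0])
            return rel_decrease < min_rel_decrease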

    Image Based Attack and Protection on Secure-Aware Deep Learning

    In the era of deep learning, users enjoy remarkable image-related services from various providers. However, the ubiquitous use of image-related deep learning also raises many security issues. Because people now rely on image-related deep learning in work and business, attackers have more entry points through which to wreck such systems. Although many works have been published on defending against various attacks, numerous studies have shown that no defense can be perfect. In this thesis, the one-pixel attack, an extremely stealthy attack on deep learning models, is analyzed first, and two novel detection methods are proposed for detecting it. Considering that image tampering mostly happens when images are shared through unreliable channels, this dissertation then extends the detection of a single attack method to a platform offering a higher level of protection. We propose a novel smart-contract-based image-sharing system that keeps full track of shared images and notifies users of any potential alteration. Extensive experimental results show that the system can effectively detect changes on the image server even when the attacker erases all traces from the image-sharing server. Finally, we focus on attacks targeting blockchain-enhanced deep learning. Although blockchain-enhanced federated learning can defend against many attack methods that purely crack the deep learning part, it remains vulnerable to combined attacks. A novel attack method is proposed that combines attacks on PoS blockchains with attacks on federated learning; it can bypass the protection of the blockchain and poison the federated learning. Real experiments are performed to evaluate the proposed methods.
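    The thesis's contract code is not reproduced here; the following sketch, with hypothetical function names, only illustrates the underlying hash-and-verify idea: a digest of each shared image is recorded on an immutable ledger at share time, and any copy later served by the image server is re-hashed and compared against that record.

        # Illustrative sketch of the integrity check, not the thesis's system.
        import hashlib

        def image_fingerprint(image_bytes):
            # Digest recorded on the ledger (e.g. via a smart contract) at share time.
            return hashlib.sha256(image_bytes).hexdigest()

        def verify_image(image_bytes, ledger_fingerprint):
            # A mismatch reveals tampering even if the attacker wiped every
            # trace from the image-sharing server itself.
            return image_fingerprint(image_bytes) == ledger_fingerprint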

    Public Evidence from Secret Ballots

    Elections seem simple: aren't they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. They also face practical constraints: time is of the essence, and voting systems must be affordable, maintainable, and usable by voters, election officials, and poll workers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts that are obviously in tension: convincing evidence that outcomes are correct, and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. We examine how current systems walk this tightrope. Comment: To appear in E-Vote-Id '1

    Methods For Defending Neural Networks Against Adversarial Attacks

    Convolutional Neural Networks (CNNs) have been at the forefront of the revolution in computer vision. Since the advent of AlexNet in 2012, neural networks with CNN architectures have surpassed human-level performance on many cognitive tasks. As neural networks are integrated into safety-critical applications such as autonomous vehicles, it is critical that they are robust and resilient to errors. Unfortunately, it has recently been observed that deep neural network models are susceptible to adversarial perturbations that are imperceptible to human vision. In this thesis, we propose a defense for neural networks against white-box adversarial attacks. The proposed defense is based on activation pattern analysis in the frequency domain. The technique is evaluated and compared with state-of-the-art techniques on the CIFAR-10 dataset.
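    The abstract does not spell out the detector; as a rough illustration of frequency-domain activation analysis under our own assumptions, the hypothetical function below measures how much of a feature map's spectral energy lies outside a low-frequency band, a statistic that could be thresholded on clean data to flag suspect inputs.

        # Illustrative sketch only, not the thesis's detector.
        import numpy as np

        def high_frequency_energy(activation_map, cutoff=0.25):
            # activation_map: 2-D array of activations from one CNN channel.
            # cutoff: fraction of each axis treated as "low frequency".
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(activation_map))) ** 2
            h, w = spectrum.shape
            ch, cw = max(1, int(h * cutoff)), max(1, int(w * cutoff))
            low = spectrum[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
            total = spectrum.sum()
            return 0.0 if total == 0 else float(1.0 - low / total)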