
    TOWARDS DEEP LEARNING ROBUSTNESS FOR COMPUTER VISION IN THE REAL WORLD

    Deep learning has been highly successful in computer vision in recent years: deep learning models achieve state-of-the-art results on many popular visual benchmarks and offer additional benefits over previous models. However, many recent studies show that these models are not robust to imperceptible or perceptible changes to their inputs. This robustness gap makes deploying deep learning models in real-world applications challenging due to safety and reliability concerns. This thesis focuses on the robustness of deep learning models in the real world. In realistic settings, attackers usually do not know the details of the deployed models. Moreover, even in the absence of attackers, deep learning models are still challenged by complex inputs such as corrupted images, stylized images, and out-of-distribution data. In the first part of this thesis, we study adversarial robustness in the real world: (1) we successfully attack several deep learning models for different tasks and then defend against those attacks; (2) we develop universal perturbations that successfully attack unseen deep learning models without knowledge of their architectures, parameters, or tasks. In the second part of this thesis, we discuss more general types of robustness in the real world. Beyond adversarial perturbations, we address complex cases that occur more commonly in the real world, such as input corruptions, natural adversarial examples, stylized images, and out-of-distribution data. We identify two strategies that effectively improve robustness: (1) addressing the shortcut-learning issue of deep neural networks so that models can exploit all helpful information in the input image; (2) using complementary information from different modalities.
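    The universal perturbations mentioned in part one are described only at a high level. As a rough, hypothetical illustration, the sketch below optimizes a single input-agnostic perturbation against a white-box surrogate image classifier using projected gradient ascent in PyTorch; attacking unseen models, as in the thesis, would additionally rely on the transferability of such a perturbation. The model, data loader, input size, and budget eps are assumptions for illustration, not details taken from the thesis.

```python
# Hypothetical sketch: craft one universal perturbation `delta` that raises the
# classification loss on every input of a surrogate model (not the thesis's method).
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=8/255, step=1/255, epochs=5):
    """Optimize a single input-agnostic perturbation with projected gradient ascent."""
    device = next(model.parameters()).device
    # Assumes 224x224 RGB inputs in [0, 1]; adjust to the surrogate's input shape.
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    model.eval()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            loss = F.cross_entropy(model((images + delta).clamp(0, 1)), labels)
            loss.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()  # ascend the loss
                delta.clamp_(-eps, eps)            # keep the perturbation small
            delta.grad.zero_()
    return delta.detach()
```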

    Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems

    We show that end-to-end learning of communication systems through deep neural network (DNN) autoencoders can be extremely vulnerable to physical adversarial attacks. Specifically, we show how an attacker can craft effective physical black-box adversarial attacks. Due to the openness (broadcast nature) of the wireless channel, an adversary transmitter can increase the block-error rate of a communication system by orders of magnitude by transmitting a well-designed perturbation signal over the channel. We show that these adversarial attacks are more destructive than jamming attacks. We also show that classical coding schemes are more robust than autoencoders against both adversarial and jamming attacks. The code is available at [1].
    Comment: to appear in IEEE Communications Letters.
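    To make the attack model concrete, the sketch below is a hypothetical, simplified illustration of how an additive over-the-air perturbation can raise the block-error rate (BLER) of an autoencoder link. The encoder, decoder, one-hot messages, precomputed perturbation p, and the unit-power signal and perturbation assumptions are placeholders for illustration; none of them are taken from the paper.

```python
# Hypothetical sketch: measure BLER of an autoencoder link with and without an
# additive adversarial perturbation injected before the AWGN noise.
import torch

def block_error_rate(encoder, decoder, messages, p, snr_db=7.0, psr_db=-6.0):
    """Compare BLER on clean vs. attacked channel outputs.

    messages: one-hot message vectors; p: perturbation with roughly unit power,
    broadcastable to the transmitted symbol tensor.
    """
    with torch.no_grad():
        x = encoder(messages)                    # transmitted symbols (assumed unit power)
        sigma = (10 ** (-snr_db / 10)) ** 0.5    # AWGN std for the given SNR
        noise = sigma * torch.randn_like(x)
        scale = (10 ** (psr_db / 10)) ** 0.5     # perturbation-to-signal ratio in dB
        results = {}
        for name, channel_out in [("clean", x + noise),
                                  ("attacked", x + scale * p + noise)]:
            decoded = decoder(channel_out).argmax(dim=-1)
            results[name] = (decoded != messages.argmax(dim=-1)).float().mean().item()
    return results
```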