
    Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations

    Camera-based autonomous systems that emulate human perception are increasingly being integrated into safety-critical platforms. Consequently, an established body of literature has emerged that explores adversarial attacks targeting the underlying machine learning models. Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems. However, the real world poses challenges related to the "survivability" of adversarial manipulations given environmental noise in perception pipelines and the dynamicity of autonomous systems. In this paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples. EvilEye exploits the camera's optics to induce misclassifications under a variety of illumination conditions. To generate dynamic perturbations, we formalize the projection of a digital attack into the physical domain by modeling the transformation function of the captured image through the optical pipeline. Our extensive experiments show that EvilEye's generated adversarial perturbations are much more robust across varying environmental light conditions relative to existing physical perturbation frameworks, achieving a high attack success rate (ASR) while bypassing state-of-the-art physical adversarial detection frameworks. We demonstrate that the dynamic nature of EvilEye enables attackers to adapt adversarial examples across a variety of objects with a significantly higher ASR compared to state-of-the-art physical world attack frameworks. Finally, we discuss mitigation strategies against the EvilEye attack.
    Comment: 15 pages, 11 figures
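    The core idea of folding the optical pipeline into the attack objective can be illustrated with a short PyTorch sketch. The optical_model below is a deliberately simplified, hypothetical stand-in for the display-to-camera transform (random illumination gain plus mild blur); it is not EvilEye's actual transformation model, and the perturbation bound and optimizer settings are assumptions made for illustration.

    # Hypothetical sketch of an expectation-over-transformation style attack
    # through a simplified optical model; NOT the EvilEye implementation.
    import torch
    import torch.nn.functional as F

    def optical_model(img, delta):
        """Toy stand-in for the display-to-camera transform (assumption):
        add the displayed perturbation, then random illumination gain and blur.
        img, delta: tensors of shape (1, 3, H, W) with values in [0, 1]."""
        x = torch.clamp(img + delta, 0.0, 1.0)
        gain = 0.6 + 0.8 * torch.rand(1)          # random illumination gain
        kernel = torch.ones(3, 1, 3, 3) / 9.0     # mild per-channel defocus blur
        x = F.conv2d(x * gain, kernel, padding=1, groups=3)
        return torch.clamp(x, 0.0, 1.0)

    def attack(model, img, target, steps=200, lr=0.01):
        """Optimize a perturbation that pushes `model` toward `target`
        (a LongTensor of shape (1,)) in expectation over the simulated optics."""
        delta = torch.zeros_like(img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            loss = F.cross_entropy(model(optical_model(img, delta)), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            delta.data.clamp_(-0.2, 0.2)          # keep the perturbation displayable
        return delta.detach()

    Because a fresh gain and blur are sampled at every step, the optimized perturbation must work on average over the simulated lighting conditions rather than for a single fixed capture.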

    Analysis of Multiple Adversarial Attacks on Convolutional Neural Networks

    The thesis studies different kinds of adversarial attacks on convolutional neural networks, using an electric-load dataset to fool the underlying deep neural network. With the rapid improvement of deep learning methods, their security and vulnerabilities have become an important research subject. An adversary who gains access to the model and datasets may add perturbations to the data, which can cause significant damage to the system. This research applies adversarial attacks to show how strongly they affect the system and to measure their success.
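    As a concrete example of the kind of perturbation described above, the following sketch applies single-step FGSM, a standard white-box attack, to a toy 1-D CNN over windows of electric-load readings. The network, window length, and data shapes are assumptions made purely for illustration and are not taken from the thesis.

    # Minimal FGSM sketch against a toy 1-D CNN for electric-load inputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LoadCNN(nn.Module):
        """Tiny 1-D CNN classifier over a window of load readings (assumed shapes)."""
        def __init__(self, window=96, classes=2):
            super().__init__()
            self.conv = nn.Conv1d(1, 16, kernel_size=5, padding=2)
            self.fc = nn.Linear(16 * window, classes)
        def forward(self, x):                     # x: (batch, 1, window)
            h = F.relu(self.conv(x))
            return self.fc(h.flatten(1))

    def fgsm(model, x, y, eps=0.05):
        """One-step FGSM: perturb x in the direction that increases the loss."""
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    model = LoadCNN()
    x = torch.randn(8, 1, 96)                     # a batch of normalized load windows
    y = torch.randint(0, 2, (8,))
    x_adv = fgsm(model, x, y)
    print((model(x_adv).argmax(1) != y).float().mean())  # fraction misclassified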

    The Impact of Artificial Intelligence on Military Defence and Security

    The twenty-first century is now being shaped by a multipolar system characterized by techno-nationalism and a post-Bretton Woods order. In the face of a rapidly evolving digital era, international cooperation will be critical to ensuring peace and security. Information sharing, expert conferences and multilateral dialogue can help the world's nation-states and their militaries develop a better understanding of one another's capabilities and intentions. As a global middle power, Canada could be a major partner in driving this effort. This paper explores the development of military-specific capabilities in the context of artificial intelligence (AI) and machine learning. Building on Canadian defence policy, the paper outlines the military applications of AI and the resources needed to manage next-generation military operations, including multilateral engagement and technology governance.

    Adversarial attacks on spiking convolutional neural networks for event-based vision

    Event-based dynamic vision sensors provide very sparse output in the form of spikes, which makes them suitable for low-power applications. Convolutional spiking neural networks model such event-based data and develop their full energy-saving potential when deployed on asynchronous neuromorphic hardware. Because event-based vision is a nascent field, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has so far received little attention. We show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data, and demonstrate smaller perturbation magnitudes at higher success rates than the current state-of-the-art algorithms. For the first time, we also verify the effectiveness of these perturbations directly on neuromorphic hardware. Finally, we discuss the properties of the resulting perturbations, the effect of adversarial training as a defense strategy, and future directions.
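    One way to respect the discrete, sparse nature of event data in a white-box attack is sketched below: compute gradients on a dense copy of the accumulated event frames, then flip only the few event bins whose flip most increases the loss. This is a hypothetical illustration of the constraint, not the paper's algorithm, and the tensor shapes are assumptions.

    # Hypothetical sketch: adapt a gradient attack to binary, sparse event frames
    # by flipping only the k most useful event bins. Not the paper's method.
    import torch
    import torch.nn.functional as F

    def sparse_flip_attack(model, events, label, k=50):
        """events: binary float tensor of accumulated event frames, shape (1, C, H, W);
        label: LongTensor of shape (1,)."""
        x = events.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # A flip helps if it moves the input along the loss gradient:
        # turning a 0 on helps where grad > 0, turning a 1 off helps where grad < 0.
        gain = x.grad * (1.0 - 2.0 * events)      # benefit of flipping each bin
        idx = torch.topk(gain.flatten(), k).indices   # k most useful flips
        adv = events.clone().flatten()
        adv[idx] = 1.0 - adv[idx]                 # flip those event bins only
        return adv.view_as(events)

    Limiting the change to k flipped bins keeps the adversarial input binary and nearly as sparse as the original event stream, which is the property that makes the attack plausible for event-based sensors.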