
    Towards Adversarial Robustness of Deep Vision Algorithms

    Deep learning methods have achieved great success in solving computer vision tasks, and they have been widely utilized in artificially intelligent systems for image processing, analysis, and understanding. However, deep neural networks have been shown to be vulnerable to adversarial perturbations in input data, and the security issues of deep neural networks have thus come to the fore. It is imperative to study the adversarial robustness of deep vision algorithms comprehensively. This talk focuses on the adversarial robustness of image classification models and image denoisers. We discuss the robustness of deep vision algorithms from three perspectives: 1) robustness evaluation (we propose ObsAtk to evaluate the robustness of denoisers), 2) robustness improvement (HAT, TisODE, and CIFS are developed to robustify vision models), and 3) the connection between adversarial robustness and generalization capability to new domains (we find that adversarially robust denoisers can deal with unseen types of real-world noise). Comment: PhD thesis
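
    To make the notion of an adversarial perturbation concrete, here is a minimal sketch of a single-step gradient-sign attack (the classic FGSM formulation, not ObsAtk or any other method from the thesis); the model, loss, and perturbation budget below are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def fgsm_perturb(model, x, y, eps=8 / 255):
            """One-step gradient-sign perturbation (illustrative, not ObsAtk).

            `model` is any differentiable classifier mapping images in [0, 1]
            to logits; `eps` is the allowed L-infinity perturbation budget.
            """
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            # Step in the direction that maximally increases the loss,
            # then clip back to the valid image range.
            x_adv = x_adv.detach() + eps * x_adv.grad.sign()
            return x_adv.clamp(0.0, 1.0)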

    Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness

    Adversarial attacks can mislead neural network classifiers, and defending against them is important for AI safety. Adversarial purification is a family of approaches that defends against adversarial attacks with suitable pre-processing. Diffusion models have been shown to be effective for adversarial purification. Despite their success, many aspects of diffusion purification remain unexplored. In this paper, we investigate and improve upon three limiting design choices of diffusion purification: the choice of diffusion model, the numerical simulation technique, and the control of randomness. Based on our findings, we propose Purify++, a new diffusion purification algorithm that achieves state-of-the-art purification performance against several adversarial attacks. Our work presents a systematic exploration of the limits of diffusion purification methods.
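
    As a rough, hedged illustration of the noise-then-denoise idea behind diffusion purification (not the Purify++ algorithm itself, whose improved diffusion model, numerical solvers, and randomness control are the paper's contributions), the sketch below assumes a pretrained score network score_fn(x, t) approximating the score of the noised data distribution and a linear VP-SDE noise schedule.

        import torch

        def purify(x_adv, score_fn, t_star=0.1, n_steps=20,
                   beta_min=0.1, beta_max=20.0):
            """Schematic diffusion purification (assumed components, not Purify++).

            1) Diffuse the possibly-adversarial input forward to time t_star.
            2) Integrate the reverse-time SDE back to t = 0 with plain
               Euler-Maruyama steps, washing out the adversarial perturbation.
            """
            beta = lambda t: beta_min + (beta_max - beta_min) * t
            # Forward diffusion to t_star: closed-form for the VP-SDE.
            t = torch.tensor(t_star)
            int_beta = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2
            mean_coef = torch.exp(-0.5 * int_beta)
            x = mean_coef * x_adv + (1 - mean_coef ** 2).sqrt() * torch.randn_like(x_adv)
            # Reverse-time integration back to t = 0.
            dt = t_star / n_steps
            for i in range(n_steps):
                t = torch.tensor(t_star - i * dt)
                drift = -0.5 * beta(t) * x - beta(t) * score_fn(x, t)
                x = x - drift * dt + (beta(t) * dt).sqrt() * torch.randn_like(x)
            return x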

    Contractivity of neural ODEs: an eigenvalue optimization problem

    We propose a novel methodology to solve a key eigenvalue optimization problem which arises in the contractivity analysis of neural ODEs. When looking at contractivity properties of a one-layer weight-tied neural ODE $\dot{u}(t)=\sigma(Au(t)+b)$ (with $u, b \in \mathbb{R}^n$, $A$ a given $n \times n$ matrix, and $\sigma : \mathbb{R} \to \mathbb{R}^+$ an activation function applied entry-wise, so that for a vector $z \in \mathbb{R}^n$, $\sigma(z) \in \mathbb{R}^n$), we are led to study the logarithmic norm of a set of products of the form $DA$, where $D$ is a diagonal matrix such that $\mathrm{diag}(D) \in \sigma'(\mathbb{R}^n)$. Specifically, given a real number $c$ (usually $c=0$), the problem consists in finding the largest positive interval $\chi \subseteq [0,\infty)$ such that the logarithmic norm $\mu(DA) \le c$ for all diagonal matrices $D$ with $D_{ii} \in \chi$. We propose a two-level nested methodology: an inner level where, for a given $\chi$, we compute an optimizer $D^\star(\chi)$ by a gradient system approach, and an outer level where we tune $\chi$ so that the value $c$ is reached by $\mu(D^\star(\chi)A)$. We extend the proposed two-level approach to the general multilayer, and possibly time-dependent, case $\dot{u}(t) = \sigma(A_k(t) \ldots \sigma(A_1(t)u(t) + b_1(t)) \ldots + b_k(t))$, and we present several numerical examples to illustrate its behaviour, including its stabilizing performance on a one-layer neural ODE applied to the classification of the MNIST handwritten digits dataset. Comment: 23 pages, 5 figures, 3 tables
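
    For the Euclidean norm, the logarithmic norm of a matrix M equals the largest eigenvalue of its symmetric part (M + M^T)/2, so the quantity at the heart of the inner problem can be evaluated directly. The sampling check below (an assumed illustration, not the gradient-system / two-level method of the paper) only exercises the feasibility condition mu(DA) <= c over a candidate interval chi.

        import numpy as np

        def log_norm_2(M):
            """Logarithmic 2-norm: mu_2(M) = lambda_max((M + M^T) / 2)."""
            return np.linalg.eigvalsh((M + M.T) / 2).max()

        def interval_is_contractive(A, chi, c=0.0, n_trials=500, seed=0):
            """Crude sampling check (illustration only, not the paper's method):
            draw diagonal matrices D with entries in chi = (lo, hi) and verify
            mu_2(D @ A) <= c for every sample."""
            lo, hi = chi
            rng = np.random.default_rng(seed)
            for _ in range(n_trials):
                d = rng.uniform(lo, hi, size=A.shape[0])
                if log_norm_2(np.diag(d) @ A) > c:
                    return False
            return True

    For instance, with A = -I the check succeeds for any chi contained in (0, infinity) and c = 0, since mu_2(-D) = -min_i D_ii < 0 for every positive diagonal D.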

    An axiomatized PDE model of deep neural networks

    Inspired by the relation between deep neural networks (DNNs) and partial differential equations (PDEs), we study the general form of PDE models of deep neural networks. To achieve this goal, we formulate a DNN as an evolution operator acting on a simple base model. Based on several reasonable assumptions, we prove that this evolution operator is in fact governed by a convection-diffusion equation. The convection-diffusion model gives a mathematical explanation for several effective network architectures. Moreover, we show that the convection-diffusion model improves robustness and reduces the Rademacher complexity. Based on the convection-diffusion equation, we design a new training method for ResNets. Experiments validate the performance of the proposed method.
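
    The evolution-operator viewpoint builds on the standard identification of a residual block with one explicit Euler step of an ODE. The minimal sketch below shows only that identification (the axiomatic derivation of the convection-diffusion model and the associated ResNet training method are specific to the paper); the layer widths and step size are illustrative assumptions.

        import torch
        import torch.nn as nn

        class EulerResBlock(nn.Module):
            """A residual block read as one forward-Euler step of du/dt = f(u):
            x_{l+1} = x_l + h * f(x_l). Stacking such blocks discretizes an
            evolution equation, the starting point for PDE models of DNNs."""

            def __init__(self, dim, h=0.1):
                super().__init__()
                self.h = h  # step size of the Euler discretization (illustrative)
                self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                       nn.Linear(dim, dim))

            def forward(self, x):
                return x + self.h * self.f(x)  # explicit Euler update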

    Enhancing Adversarial Robustness via Score-Based Optimization

    Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations. Developing algorithms that can mitigate the effects of these attacks is crucial for ensuring the safe use of artificial intelligence. Recent studies have suggested that score-based diffusion models are effective in adversarial defenses. However, existing diffusion-based defenses rely on the sequential simulation of the reversed stochastic differential equations of diffusion models, which is computationally inefficient and yields suboptimal results. In this paper, we introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time toward the original clean data, in a direction guided by score-based priors. We conduct comprehensive experiments on multiple datasets, including CIFAR10, CIFAR100, and ImageNet. The results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness and inference speed.
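
    A hedged sketch of the general mechanism, optimizing a sample at test time by ascending a learned log-density, is given below; score_fn is an assumed pretrained score network, and the concrete ScoreOpt objective, step rule, and stopping criterion from the paper are not reproduced here.

        import torch

        def score_guided_restore(x_adv, score_fn, n_steps=50, step_size=1e-3):
            """Move an adversarial sample toward high-density (clean-looking)
            regions by plain gradient ascent on the learned log-density.

            score_fn(x) is assumed to approximate grad_x log p(x) for clean
            data; this is a simplified stand-in for score-prior-guided
            test-time optimization, not the ScoreOpt update itself."""
            x = x_adv.clone().detach()
            for _ in range(n_steps):
                with torch.no_grad():
                    x = x + step_size * score_fn(x)  # ascend log p(x)
            return x.clamp(0.0, 1.0)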