Towards Adversarial Robustness of Deep Vision Algorithms
Deep learning methods have achieved great success in solving computer vision
tasks, and they have been widely utilized in artificially intelligent systems
for image processing, analysis, and understanding. However, deep neural
networks have been shown to be vulnerable to adversarial perturbations in input
data. The security issues of deep neural networks have thus come to the fore.
It is imperative to study the adversarial robustness of deep vision algorithms
comprehensively. This talk focuses on the adversarial robustness of image
classification models and image denoisers. We will discuss the robustness of
deep vision algorithms from three perspectives: 1) robustness evaluation (we
propose the ObsAtk to evaluate the robustness of denoisers), 2) robustness
improvement (HAT, TisODE, and CIFS are developed to robustify vision models),
and 3) the connection between adversarial robustness and generalization
capability to new domains (we find that adversarially robust denoisers can deal
with unseen types of real-world noise). Comment: PhD thesis
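The robustness-evaluation perspective above can be illustrated with a generic projected-gradient attack on a differentiable loss. This is a minimal sketch, not the ObsAtk method from the talk; the function name `pgd_attack`, the toy gradient, and all parameter values are illustrative assumptions:

```python
import numpy as np

def pgd_attack(grad_fn, x, eps=0.03, step=0.01, iters=10):
    """Projected gradient ascent on a loss within an L-infinity ball of
    radius eps around the clean input x. grad_fn(x) returns the gradient
    of the attacked loss with respect to the input."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))  # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)        # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                # keep valid pixel range
    return x_adv

# Toy loss 0.5 * (z - 0.5)^2 with gradient z - 0.5: ascending it pushes
# pixels away from 0.5 until the eps constraint is hit.
x = np.full(4, 0.4)
adv = pgd_attack(lambda z: z - 0.5, x)
```

The projection after every step is what keeps the perturbation imperceptible in the L-infinity sense; evaluating a model on such perturbed inputs is the basic template behind robustness-evaluation attacks.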
Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness
Adversarial attacks can mislead neural network classifiers. The defense
against adversarial attacks is important for AI safety. Adversarial
purification is a family of approaches that defend adversarial attacks with
suitable pre-processing. Diffusion models have been shown to be effective for
adversarial purification. Despite their success, many aspects of diffusion
purification still remain unexplored. In this paper, we investigate and improve
upon three limiting designs of diffusion purification: the use of an improved
diffusion model, advanced numerical simulation techniques, and optimal control
of randomness. Based on our findings, we propose Purify++, a new diffusion
purification algorithm that is now the state-of-the-art purification method
against several adversarial attacks. Our work presents a systematic exploration
of the limits of diffusion purification methods.
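The diffuse-then-denoise idea behind adversarial purification can be sketched as follows. This is not the Purify++ algorithm itself; the stand-in `score_fn`, the VP-SDE with unit noise schedule, and the plain Euler-Maruyama reverse solver are simplifying assumptions (Purify++ specifically improves the diffusion model, the numerical simulation, and the control of randomness):

```python
import numpy as np

def purify(x_adv, score_fn, t_star=0.3, n_steps=30, rng=None):
    """Diffuse the (possibly adversarial) input forward to time t_star,
    then integrate the reverse VP-SDE back to time 0 using a score model.
    score_fn(x, t) stands in for a trained score network."""
    rng = rng or np.random.default_rng(0)
    # Forward noising in closed form (VP-SDE with beta ≡ 1: mean decays as e^{-t/2}).
    x = (np.exp(-t_star / 2) * x_adv
         + np.sqrt(1 - np.exp(-t_star)) * rng.standard_normal(x_adv.shape))
    dt = t_star / n_steps
    for i in range(n_steps):
        # Reverse-time drift f(x) - g^2 * score with f(x) = -x/2, g = 1.
        drift = -0.5 * x - score_fn(x, t_star - i * dt)
        x = x - drift * dt + np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Toy score for a standard normal prior: grad log p(x) = -x.
x_adv = np.full((4,), 2.0)
x_pure = purify(x_adv, lambda x, t: -x)
```

The forward noising washes out the adversarial perturbation; the reverse integration then pulls the sample back toward the data distribution encoded in the score model.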
Contractivity of neural ODEs: an eigenvalue optimization problem
We propose a novel methodology to solve a key eigenvalue optimization problem
which arises in the contractivity analysis of neural ODEs. When looking at
contractivity properties of a one layer weight-tied neural ODE
$\dot{x}(t) = \sigma(Ax(t) + b)$ (with $x(t), b \in \mathbb{R}^n$, $A$ is a
given $n \times n$ matrix, $\sigma$ denotes an activation function and, for a
vector $z$, $\sigma(z)$ has to be interpreted entry-wise), we are led to study
the logarithmic norm of a set of products of type $DA$, where $D$ is a diagonal
matrix such that $\operatorname{diag}(D) \in \sigma'(\mathbb{R}^n)$. Specifically,
given a real number $c$ (usually $c = 0$), the problem consists in finding the
largest positive interval $\chi \subseteq [0, \infty)$ such that the
logarithmic norm $\mu(DA) \le c$ for all diagonal matrices $D$ with
$D_{ii} \in \chi$. We propose a two-level nested methodology: an inner level
where, for a given $\chi$, we compute an optimizer $D^\star(\chi)$ by a gradient
system approach, and an outer level where we tune $\chi$ so that the value $c$
is reached by $\mu(D^\star(\chi) A)$. We extend the proposed two-level approach to
the general multilayer, and possibly time-dependent, case and we
propose several numerical examples to illustrate its behaviour, including its
stabilizing performance on a one-layer neural ODE applied to the classification
of the MNIST handwritten digits dataset. Comment: 23 pages, 5 figures, 3 tables
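The logarithmic 2-norm that drives the optimization above is directly computable as the largest eigenvalue of the symmetric part of the matrix. A minimal NumPy sketch of the quantity being bounded (the matrices `A` and `D` here are illustrative assumptions, not taken from the paper's experiments):

```python
import numpy as np

def log_norm_2(M):
    """Logarithmic 2-norm: largest eigenvalue of the symmetric part (M + M^T) / 2.
    Negative values certify contractivity of the linearized dynamics x' = M x."""
    return float(np.max(np.linalg.eigvalsh((M + M.T) / 2.0)))

# Example: for this A, the product D A keeps a negative logarithmic norm
# for a diagonal D with entries in (0, 1], i.e. the flow is contractive.
A = np.array([[-2.0, 1.0],
              [0.0, -2.0]])
D = np.diag([0.5, 1.0])
mu = log_norm_2(D @ A)
```

The optimization problem in the abstract asks for the largest interval of admissible diagonal entries of $D$ such that this scalar stays below a prescribed value $c$.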
An axiomatized PDE model of deep neural networks
Inspired by the relation between deep neural network (DNN) and partial
differential equations (PDEs), we study the general form of the PDE models of
deep neural networks. To achieve this goal, we formulate DNN as an evolution
operator from a simple base model. Based on several reasonable assumptions, we
prove that the evolution operator is actually determined by
convection-diffusion equation. This convection-diffusion equation model gives
mathematical explanation for several effective networks. Moreover, we show that
the convection-diffusion model improves the robustness and reduces the
Rademacher complexity. Based on the convection-diffusion equation, we design a
new training method for ResNets. Experiments validate the performance of the
proposed method.
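The DNN-as-evolution-operator viewpoint underlying the abstract can be made concrete with the standard observation that a residual network is a forward-Euler discretization of an ODE. A minimal sketch of that correspondence (the block functions and step size are illustrative assumptions, not the paper's trained networks):

```python
import numpy as np

def resnet_forward(x, blocks, h=1.0):
    """A residual network viewed as forward-Euler time stepping of an
    evolution equation x' = f(x): each residual block contributes one
    step x <- x + h * f_k(x). The PDE viewpoint treats the input-to-output
    map of such a network as the solution operator of an evolution
    equation (a convection-diffusion equation under the paper's assumptions)."""
    for f in blocks:
        x = x + h * f(x)  # residual update = one Euler step
    return x

# Toy: f(x) = -x is pure decay x' = -x, so iterates contract toward 0.
blocks = [lambda z: -z] * 3
out = resnet_forward(np.array([1.0]), blocks, h=0.1)
```

With three blocks and step size 0.1, each step multiplies the state by 0.9, mirroring how the continuous decay dynamics shrink the solution over time.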
Enhancing Adversarial Robustness via Score-Based Optimization
Adversarial attacks have the potential to mislead deep neural network
classifiers by introducing slight perturbations. Developing algorithms that can
mitigate the effects of these attacks is crucial for ensuring the safe use of
artificial intelligence. Recent studies have suggested that score-based
diffusion models are effective in adversarial defenses. However, existing
diffusion-based defenses rely on the sequential simulation of the reversed
stochastic differential equations of diffusion models, which are
computationally inefficient and yield suboptimal results. In this paper, we
introduce a novel adversarial defense scheme named ScoreOpt, which optimizes
adversarial samples at test-time, towards original clean data in the direction
guided by score-based priors. We conduct comprehensive experiments on multiple
datasets, including CIFAR10, CIFAR100 and ImageNet. Our experimental results
demonstrate that our approach outperforms existing adversarial defenses in
terms of both robustness performance and inference speed.
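The core idea of optimizing an adversarial sample toward clean data under a score-based prior can be sketched as plain gradient ascent on the log density. This is a simplified illustration, not the ScoreOpt objective itself; the stand-in `score_fn`, learning rate, and iteration count are assumptions:

```python
import numpy as np

def score_opt(x_adv, score_fn, lr=0.1, iters=50):
    """Test-time purification by following a score-based prior: repeatedly
    move the sample in the direction of the score (the gradient of the log
    data density), so it drifts toward high-density, clean-looking points.
    score_fn stands in for a pretrained score model."""
    x = x_adv.copy()
    for _ in range(iters):
        x = x + lr * score_fn(x)  # gradient ascent on log p(x)
    return x

# Toy density N(mu, I) with score(x) = mu - x: iterates converge to mu,
# so a perturbed sample is pulled back onto the density's mode.
mu = np.array([1.0, -1.0])
purified = score_opt(mu + 0.5, lambda z: mu - z)
```

Unlike sequentially simulating the full reverse diffusion SDE, this kind of direct optimization needs no time discretization of the generative process, which is the efficiency angle the abstract emphasizes.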