9,799 research outputs found
Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification
Deep neural networks (DNNs) have been shown to lack robustness: their
classifications are vulnerable to small perturbations of the inputs.
This has raised concerns about applying DNNs in safety-critical domains.
Several verification approaches have been developed to automatically prove or
disprove safety properties of DNNs. However, these approaches suffer from
either the scalability problem, i.e., only small DNNs can be handled, or the
precision problem, i.e., the obtained bounds are loose. This paper improves on
a recent proposal for analyzing DNNs with the classic abstract interpretation
technique by introducing a novel symbolic propagation technique. More
specifically, the values of neurons are represented symbolically and propagated
forward from the input layer to the output layer on top of abstract domains. We show that
our approach can achieve significantly higher precision and thus can prove more
properties than using only abstract domains. Moreover, we show that the bounds
derived from our approach on the hidden neurons, when applied to a
state-of-the-art SMT based verification tool, can improve its performance. We
implement our approach in a software tool and validate it on a few DNNs
trained on benchmark datasets such as MNIST.
Comment: SAS 2019: 26th Static Analysis Symposium, Porto, Portugal, October 8-11, 2019
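To make the symbolic propagation idea concrete, here is a minimal sketch (my own simplification, not the authors' implementation or tool): each neuron is kept as a symbolic affine form over the network inputs wherever possible, affine layers are applied to these forms exactly, and a neuron is collapsed to a concrete interval only when its ReLU cannot be resolved from the input bounds.

```python
# Minimal symbolic propagation over an interval abstract domain
# (illustrative sketch only; names and simplifications are mine).
import numpy as np

def concretize(coef, c_lo, c_hi, x_lo, x_hi):
    """Tightest interval of coef.x + [c_lo, c_hi] over the input box [x_lo, x_hi]."""
    lo = c_lo + np.sum(np.where(coef >= 0, coef * x_lo, coef * x_hi))
    hi = c_hi + np.sum(np.where(coef >= 0, coef * x_hi, coef * x_lo))
    return lo, hi

def symbolic_bounds(weights, biases, x_lo, x_hi):
    """Output bounds of a fully connected ReLU network via symbolic propagation."""
    n_in = len(x_lo)
    coefs = np.eye(n_in)            # symbolic affine part: one row per neuron
    c_lo = np.zeros(n_in)           # interval constant part, lower ends
    c_hi = np.zeros(n_in)           # interval constant part, upper ends
    for layer, (W, b) in enumerate(zip(weights, biases)):
        coefs = W @ coefs           # affine layers transform the forms exactly
        Wp, Wn = np.where(W >= 0, W, 0.0), np.where(W < 0, W, 0.0)
        c_lo, c_hi = b + Wp @ c_lo + Wn @ c_hi, b + Wp @ c_hi + Wn @ c_lo
        if layer == len(weights) - 1:
            break                   # no ReLU after the output layer
        for i in range(W.shape[0]):
            lo, hi = concretize(coefs[i], c_lo[i], c_hi[i], x_lo, x_hi)
            if lo >= 0:
                continue            # provably active: keep the symbolic form
            # Provably inactive (hi <= 0) or unstable: drop the symbols and
            # fall back to the sound concrete interval [0, max(hi, 0)].
            coefs[i], c_lo[i], c_hi[i] = 0.0, 0.0, max(hi, 0.0)
    return [concretize(coefs[i], c_lo[i], c_hi[i], x_lo, x_hi)
            for i in range(len(c_lo))]

# Tiny example: 2 inputs in [-1, 1]^2, one hidden layer of 2 ReLUs, 1 output.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
print(symbolic_bounds([W1, W2], [b1, b2],
                      np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

Even this crude version keeps exact symbolic forms through provably active ReLUs, preserving correlations between neurons that a plain interval analysis discards; the paper's technique does this much more carefully on top of richer abstract domains.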
Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming
Certifying the safety or robustness of neural networks against input
uncertainties and adversarial attacks is an emerging challenge in the area of
safe machine learning and control. To provide such a guarantee, one must be
able to bound the output of neural networks when their input changes within a
bounded set. In this paper, we propose a semidefinite programming (SDP)
framework to address this problem for feed-forward neural networks with general
activation functions and input uncertainty sets. Our main idea is to abstract
various properties of activation functions (e.g., monotonicity, bounded slope,
bounded values, and repetition across layers) with the formalism of quadratic
constraints. We then analyze the safety properties of the abstracted network
via the S-procedure and semidefinite programming. Our framework spans the
trade-off between conservatism and computational efficiency and applies to
problems beyond safety verification. We evaluate the performance of our
approach via numerical problem instances of various sizes.
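As a rough illustration of the quadratic-constraint formalism (notation is mine, not necessarily the paper's), the ReLU activation y = max(0, x) is characterized exactly by three quadratic (in)equalities:

```latex
% Quadratic constraints satisfied by the ReLU  y = \max(0, x):
% nonnegativity, domination of the identity, and complementarity.
y \ge 0, \qquad y \ge x, \qquad y \, (y - x) = 0
```

Stacking such constraints for all neurons (together with slope, boundedness, and cross-layer repetition constraints where applicable) and combining them with the S-procedure yields a linear matrix inequality whose feasibility certifies the safety property.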
Verification of Deep Convolutional Neural Networks Using ImageStars
Convolutional Neural Networks (CNN) have redefined the state-of-the-art in
many real-world applications, such as facial recognition, image classification,
human pose estimation, and semantic segmentation. Despite their success, CNNs
are vulnerable to adversarial attacks, where slight changes to their inputs may
lead to sharp changes in their output, even in well-trained networks. Set-based
analysis methods can detect or prove the absence of bounded adversarial
attacks, which can then be used to evaluate the effectiveness of neural network
training methodology. Unfortunately, existing verification approaches have
limited scalability in terms of the size of networks that can be analyzed.
In this paper, we describe a set-based framework that successfully deals with
real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet.
Our approach is based on a new set representation called the ImageStar, which
enables efficient exact and over-approximative analysis of CNNs. ImageStars
perform efficient set-based analysis by combining operations on concrete images
with linear programming (LP). Our approach is implemented in a tool called NNV,
and can verify the robustness of VGG networks with respect to a small set of
input states, derived from adversarial attacks, such as the DeepFool attack.
The experimental results show that our approach is less conservative and faster
than existing zonotope methods, such as those used in DeepZ, and the polytope
method used in DeepPoly.
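A drastically simplified star-set sketch (my own minimal version, not NNV's ImageStar data structure or API) shows the core mechanics: a set of images is an anchor plus a linear combination of generator images whose coefficients satisfy a linear predicate; affine layers act directly on the anchor and generators, and exact per-neuron ranges come from small linear programs.

```python
# Minimal star-set in the spirit of ImageStar (illustrative only).
# The represented set is  { c + V @ a : C @ a <= d }:  an anchor image c,
# generator images in the columns of V, and a linear predicate on a.
import numpy as np
from scipy.optimize import linprog

class Star:
    def __init__(self, center, basis, C, d):
        self.c, self.V, self.C, self.d = center, basis, C, d

    def affine(self, W, b):
        """Exact image of the set under the layer x -> W x + b."""
        return Star(W @ self.c + b, W @ self.V, self.C, self.d)

    def bounds(self, i):
        """Exact range of neuron i over the set, via two small LPs."""
        lo = linprog(self.V[i], A_ub=self.C, b_ub=self.d,
                     bounds=(None, None)).fun + self.c[i]
        hi = -linprog(-self.V[i], A_ub=self.C, b_ub=self.d,
                      bounds=(None, None)).fun + self.c[i]
        return lo, hi

# Example: a 2-pixel "image" with one perturbation direction a, |a| <= 1.
x0 = np.array([0.5, 0.2])
V = np.array([[1.0], [0.0]])                   # perturb the first pixel only
C, d = np.array([[1.0], [-1.0]]), np.array([1.0, 1.0])
out = Star(x0, V, C, d).affine(np.array([[1.0, 2.0]]), np.array([0.1]))
print(out.bounds(0))                           # range of the single output
```

Handling ReLU layers exactly then requires splitting the star on the sign of each undecided neuron, while the over-approximative mode avoids the splits at the cost of precision, which is the trade-off the abstract refers to.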
Verification for Machine Learning, Autonomy, and Neural Networks Survey
This survey presents an overview of verification techniques for autonomous
systems, with a focus on safety-critical autonomous cyber-physical systems
(CPS) and subcomponents thereof. Autonomy in CPS is enabled by recent advances
in artificial intelligence (AI) and machine learning (ML) through approaches
such as deep neural networks (DNNs), embedded in so-called learning-enabled
components (LECs) that accomplish tasks from classification to control.
Recently, the formal methods and formal verification community has developed
methods to characterize behaviors in these LECs with eventual goals of formally
verifying specifications for LECs, and this article presents a survey of many
of these recent approaches.
A Dual Approach to Scalable Verification of Deep Networks
This paper addresses the problem of formally verifying desirable properties
of neural networks, i.e., obtaining provable guarantees that neural networks
satisfy specifications relating their inputs and outputs (robustness to bounded
norm adversarial perturbations, for example). Most previous work on this topic
was limited in its applicability by the size of the network, network
architecture and the complexity of properties to be verified. In contrast, our
framework applies to a general class of activation functions and specifications
on neural network inputs and outputs. We formulate verification as an
optimization problem (seeking to find the largest violation of the
specification) and solve a Lagrangian relaxation of the optimization problem to
obtain an upper bound on the worst case violation of the specification being
verified. Our approach is anytime, i.e., it can be stopped at any time and a
valid bound on the maximum violation can be obtained. We develop specialized
verification algorithms with provable tightness guarantees under special
assumptions and demonstrate the practical significance of our general
verification approach on a variety of verification tasks.
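The anytime property follows from weak duality. In generic notation (mine, not necessarily the paper's), writing the largest violation of the specification as a constrained maximization, any nonnegative choice of multipliers already yields a valid upper bound:

```latex
% Largest specification violation over the input set \mathcal{X}, and its
% Lagrangian relaxation: weak duality gives  p^{\star} \le d(\lambda)  for
% every \lambda \ge 0, so stopping the outer minimization over \lambda at
% any time still returns a sound bound on the worst-case violation.
p^{\star} \;=\; \max_{x \in \mathcal{X}} \; v(x) \;\; \text{s.t.} \;\; g(x) \le 0
\;\;\le\;\;
d(\lambda) \;=\; \max_{x \in \mathcal{X}} \Big( v(x) - \lambda^{\top} g(x) \Big),
\qquad \lambda \ge 0 .
```

If the resulting upper bound is non-positive, the specification is verified; otherwise the outcome is inconclusive rather than a counterexample.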
Towards a Robust Deep Neural Network in Texts: A Survey
Deep neural networks (DNNs) have achieved remarkable success in various tasks
(e.g., image classification, speech recognition, and natural language
processing). However, research has shown that DNN models are vulnerable to
adversarial examples, which cause incorrect predictions when imperceptible
perturbations are added to normal inputs. Adversarial examples in the image
domain have been studied extensively, but research on texts is still limited,
and a comprehensive survey of this field is lacking. In this paper, we aim at
presenting a comprehensive understanding of adversarial attacks and
corresponding mitigation strategies in texts. Specifically, we first give a
taxonomy of adversarial attacks and defenses in texts from the perspective of
different natural language processing (NLP) tasks, and then introduce how to
build a robust DNN model via testing and verification. Finally, we discuss the
existing challenges of adversarial attacks and defenses in texts and present
the future research directions in this emerging field.
nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems
Can engineering neural networks be approached in a disciplined way similar to
how engineers build software for civil aircraft? We present
nn-dependability-kit, an open-source toolbox to support safety engineering of
neural networks for autonomous driving systems. The rationale behind
nn-dependability-kit is to consider a structured approach (via Goal Structuring
Notation) to argue the quality of neural networks. In particular, the tool
realizes recent scientific results including (a) novel dependability metrics
for indicating sufficient elimination of uncertainties in the product life
cycle, (b) a formal reasoning engine for ensuring that the generalization does
not lead to undesired behaviors, and (c) runtime monitoring for reasoning about
whether a decision of a neural network in operation is supported by prior
similarities in the training data. A proprietary version of
nn-dependability-kit has been used to improve the quality of a level-3
autonomous driving component developed by Audi for highway maneuvers.
Comment: Tool available at https://github.com/dependable-ai/nn-dependability-kit
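The runtime-monitoring idea in (c) can be illustrated with a deliberately simplified sketch (my own, not nn-dependability-kit's actual API): record the binarized on/off activation patterns a chosen layer produces on the training data, and at inference time flag inputs whose pattern is not within a small Hamming distance of any recorded pattern as unsupported by prior data.

```python
# Simplified activation-pattern runtime monitor (illustrative sketch only;
# the tool's real implementation and interfaces differ).
import numpy as np

class ActivationMonitor:
    def __init__(self, tolerance=0):
        self.patterns = set()          # on/off patterns seen during training
        self.tolerance = tolerance     # allowed Hamming distance at runtime

    def record(self, activations):     # activations: (N, d) from training data
        for a in activations:
            self.patterns.add(tuple(int(v > 0) for v in a))

    def supported(self, activation):   # activation: (d,) from a runtime input
        p = np.array([int(v > 0) for v in activation])
        return any(np.sum(p != np.array(q)) <= self.tolerance
                   for q in self.patterns)

# Usage: record a hidden layer's outputs on training data, then query.
monitor = ActivationMonitor(tolerance=1)
monitor.record(np.array([[0.3, -0.2, 1.1], [0.0, 0.4, -0.5]]))
print(monitor.supported(np.array([0.7, -0.1, 2.0])))   # True: pattern seen
```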
Provable defenses against adversarial examples via the convex outer adversarial polytope
We propose a method to learn deep ReLU-based classifiers that are provably
robust against norm-bounded adversarial perturbations on the training data. For
previously unseen examples, the approach is guaranteed to detect all
adversarial examples, though it may flag some non-adversarial examples as well.
The basic idea is to consider a convex outer approximation of the set of
activations reachable through a norm-bounded perturbation, and we develop a
robust optimization procedure that minimizes the worst case loss over this
outer region (via a linear program). Crucially, we show that the dual problem
to this linear program can be represented itself as a deep network similar to
the backpropagation network, leading to very efficient optimization approaches
that produce guaranteed bounds on the robust loss. The end result is that by
executing a few more forward and backward passes through a slightly modified
version of the original network (though possibly with much larger batch sizes),
we can learn a classifier that is provably robust to any norm-bounded
adversarial attack. We illustrate the approach on a number of tasks to train
classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a
convolutional classifier that provably has less than 5.8% test error for any
adversarial attack with bounded $\ell_\infty$ norm less than $\epsilon = 0.1$),
and code for all experiments in the paper is available at
https://github.com/locuslab/convex_adversarial.
Comment: ICML final version
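A standard way to write such a convex outer approximation for a single ReLU (given pre-activation bounds l ≤ z ≤ u with l < 0 < u, obtained from earlier layers) is the triangle relaxation:

```latex
% Triangle relaxation of  a = \max(0, z)  for  l \le z \le u,  l < 0 < u.
% Intersecting these constraints across all layers yields the convex outer
% adversarial polytope; bounding the worst-case loss over it is a linear
% program, whose dual can be evaluated by a backward pass through the network.
a \ge 0, \qquad a \ge z, \qquad a \le \frac{u \, (z - l)}{u - l}
```

Stable neurons (with l ≥ 0 or u ≤ 0) need no relaxation, so the looseness of the outer polytope is driven entirely by the unstable ReLUs.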
Robustness Certification of Generative Models
Generative neural networks can be used to specify continuous transformations
between images via latent-space interpolation. However, certifying that all
images captured by the resulting path in the image manifold satisfy a given
property can be very challenging. This is because this set is highly
non-convex, thwarting existing scalable robustness analysis methods, which are
often based on convex relaxations. We present ApproxLine, a scalable
certification method that successfully verifies non-trivial specifications
involving generative models and classifiers. ApproxLine can provide both sound
deterministic and probabilistic guarantees, by capturing either infinite
non-convex sets of neural network activation vectors or distributions over such
sets. We show that ApproxLine is practically useful and can verify interesting
interpolations in the network's latent space.
Comment: Prior version submitted to ICLR 2020
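In generic notation (mine), the kind of specification being certified quantifies over an entire latent segment:

```latex
% For latent points z_0, z_1, generator g, and classifier f, the property
% states that every image on the decoded interpolation path satisfies the
% output condition P:
\forall\, t \in [0, 1] : \quad
f\big( g\big( (1 - t)\, z_0 + t\, z_1 \big) \big) \in P .
```

The image of the latent segment under g is a non-convex curve in input space, which is why purely convex relaxations struggle and why ApproxLine instead captures such activation sets (or distributions over them) directly.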
Evaluating Robustness of Neural Networks with Mixed Integer Programming
Neural networks have demonstrated considerable success on a wide variety of
real-world problems. However, networks trained only to optimize for training
accuracy can often be fooled by adversarial examples - slightly perturbed
inputs that are misclassified with high confidence. Verification of networks
enables us to gauge their vulnerability to such adversarial examples. We
formulate verification of piecewise-linear neural networks as a mixed integer
program. On a representative task of finding minimum adversarial distortions,
our verifier is two to three orders of magnitude quicker than the
state-of-the-art. We achieve this computational speedup via tight formulations
for non-linearities, as well as a novel presolve algorithm that makes full use
of all information available. The computational speedup allows us to verify
properties on convolutional networks with an order of magnitude more ReLUs than
networks previously verified by any complete verifier. In particular, we
determine for the first time the exact adversarial accuracy of an MNIST
classifier to perturbations with bounded $\ell_\infty$ norm $\epsilon = 0.1$: for
this classifier, we find an adversarial example for 4.38% of samples, and a
certificate of robustness (to perturbations with bounded $\ell_\infty$ norm) for the
remainder. Across all robust training procedures and network architectures
considered, we are able to certify more samples than the state-of-the-art and
find more adversarial examples than a strong first-order attack.
Comment: Accepted as a conference paper at ICLR 2019
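A toy version of the mixed-integer encoding (my own sketch using PuLP and the standard big-M formulation, not the authors' verifier or its presolve) checks whether a target logit can exceed the predicted logit anywhere in an l_inf ball; a non-positive optimum certifies robustness for that class pair.

```python
# Toy MIP robustness check for a tiny ReLU network via big-M constraints
# (illustrative sketch only, not the paper's tool or formulation details).
import numpy as np
import pulp

W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, -1.0], [-1.0, 1.0]]); b2 = np.array([0.0, 0.0])
x0, eps, true_cls, target_cls = np.array([0.5, 0.5]), 0.05, 1, 0

# Box bounds on inputs and pre-activations; the latter serve as big-M values.
x_lo, x_hi = x0 - eps, x0 + eps
z_lo = b1 + np.where(W1 > 0, W1, 0) @ x_lo + np.where(W1 < 0, W1, 0) @ x_hi
z_hi = b1 + np.where(W1 > 0, W1, 0) @ x_hi + np.where(W1 < 0, W1, 0) @ x_lo

prob = pulp.LpProblem("relu_robustness", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{j}", float(x_lo[j]), float(x_hi[j])) for j in range(2)]
a = []                                              # hidden activations
for i in range(2):
    z = pulp.lpSum(float(W1[i, j]) * x[j] for j in range(2)) + float(b1[i])
    ai = pulp.LpVariable(f"a{i}", 0)                # a_i >= 0
    di = pulp.LpVariable(f"d{i}", cat="Binary")     # ReLU phase indicator
    prob += ai >= z                                 # a_i >= z_i
    prob += ai <= z - float(z_lo[i]) * (1 - di)     # inactive branch bound
    prob += ai <= float(z_hi[i]) * di               # active branch bound
    a.append(ai)
logits = [pulp.lpSum(float(W2[k, i]) * a[i] for i in range(2)) + float(b2[k])
          for k in range(2)]
prob += logits[target_cls] - logits[true_cls]       # objective: worst margin
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("worst-case margin:", pulp.value(prob.objective))
print("robust for this class pair:", pulp.value(prob.objective) <= 0)
```

The pre-activation box bounds double as the big-M constants; the tighter those bounds, the stronger the LP relaxation of the integer program and the faster the solve, which is the role played by the tight formulations and presolve the abstract mentions.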